* [FFmpeg-devel] [PATCH 0/6] Implement SEI parsing for QSV decoders
@ 2022-05-26  8:08 ffmpegagent
  2022-05-26  8:08 ` [FFmpeg-devel] [PATCH 1/6] avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() softworkz
                    ` (7 more replies)
  0 siblings, 8 replies; 65+ messages in thread
From: ffmpegagent @ 2022-05-26  8:08 UTC (permalink / raw)
  To: ffmpeg-devel; +Cc: softworkz

Missing SEI information has always been a major drawback when using the
QSV decoders. I used to think that there was no way to get at this data
without explicit support on the MSDK side (or doing something awkward
like parsing in parallel). It turned out that there is a little-known
API method that provides access to all SEI messages (h264/hevc) or user
data (mpeg2video).

This makes it possible to retrieve things like closed captions, frame
packing, display orientation and HDR data (mastering display, content
light level, etc.) without having to rely on that data being provided
by MSDK as extended buffers.

The commit "Implement SEI parsing for QSV decoders" includes some
hard-coded workarounds for MSDK bugs which I reported here:
https://github.com/Intel-Media-SDK/MediaSDK/issues/2597#issuecomment-1072795311

Reporting them has not helped so far: those bugs still exist, so I am
sharing my workarounds, which were determined empirically by testing a
range of files. If someone is interested, I can provide private access
to a repository where we have been testing this. Alternatively, I could
leave those workarounds out and simply skip those SEI types.

In a previous version of this patchset, there was a concern that
payload data might need to be re-ordered. I have since researched this
carefully, and the conclusion is that this is not required. My detailed
analysis can be found here:
https://gist.github.com/softworkz/36c49586a8610813a32270ee3947a932

softworkz (6):
  avutil/frame: Add av_frame_copy_side_data() and
    av_frame_remove_all_side_data()
  avcodec/vpp_qsv: Copy side data from input to output frame
  avcodec/mpeg12dec: make mpeg_decode_user_data() accessible
  avcodec/hevcdec: make set_side_data() accessible
  avcodec/h264dec: make h264_export_frame_props() accessible
  avcodec/qsvdec: Implement SEI parsing for QSV decoders

 doc/APIchanges               |   4 +
 libavcodec/h264_slice.c      |  98 +++++++--------
 libavcodec/h264dec.h         |   2 +
 libavcodec/hevcdec.c         | 117 +++++++++---------
 libavcodec/hevcdec.h         |   2 +
 libavcodec/mpeg12.h          |  28 +++++
 libavcodec/mpeg12dec.c       |  40 +-----
 libavcodec/qsvdec.c          | 233 +++++++++++++++++++++++++++++++++++
 libavfilter/qsvvpp.c         |   6 +
 libavfilter/vf_overlay_qsv.c |  19 ++-
 libavutil/frame.c            |  67 ++++++----
 libavutil/frame.h            |  32 +++++
 libavutil/version.h          |   2 +-
 13 files changed, 477 insertions(+), 173 deletions(-)


base-commit: b033913d1c5998a29dfd13e9906dd707ff6eff12
Published-As: https://github.com/ffstaging/FFmpeg/releases/tag/pr-ffstaging-31%2Fsoftworkz%2Fsubmit_qsv_sei-v1
Fetch-It-Via: git fetch https://github.com/ffstaging/FFmpeg pr-ffstaging-31/softworkz/submit_qsv_sei-v1
Pull-Request: https://github.com/ffstaging/FFmpeg/pull/31
-- 
ffmpeg-codebot

_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".

^ permalink raw reply	[flat|nested] 65+ messages in thread
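The little-known API method mentioned in the cover letter is presumably
MFXVideoDECODE_GetPayload() from the MSDK decode interface (the cover letter
does not name it, so treat this as an assumption). A minimal sketch of
draining the buffered SEI / user-data payloads with it, assuming the
mfxPayload layout from the MSDK headers and omitting error handling and
buffer growing (MFX_ERR_NOT_ENOUGH_BUFFER):

#include <mfxvideo.h>

/* Sketch: drain all SEI / user-data payloads that the MSDK decoder has
 * buffered for the most recently decoded frame. */
static void drain_payloads(mfxSession session)
{
    mfxU8      buf[1024];
    mfxPayload payload = { 0 };
    mfxU64     ts      = 0;

    payload.Data    = buf;
    payload.BufSize = sizeof(buf);

    for (;;) {
        mfxStatus sts = MFXVideoDECODE_GetPayload(session, &ts, &payload);
        if (sts != MFX_ERR_NONE || payload.NumBit == 0)
            break;   /* nothing (more) buffered for this frame */

        /* payload.Type identifies the SEI message type (h264/hevc) or the
         * user-data start code (mpeg2video); payload.Data holds
         * (payload.NumBit + 7) / 8 bytes of raw payload that can then be
         * handed to the existing SEI / user-data parsers. */
    }
}

Each payload carries its raw bytes plus a Type field, which is what allows
reusing the existing h264/hevc SEI and mpeg12 user-data parsers that the
later patches in this set expose.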
* [FFmpeg-devel] [PATCH 1/6] avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() 2022-05-26 8:08 [FFmpeg-devel] [PATCH 0/6] Implement SEI parsing for QSV decoders ffmpegagent @ 2022-05-26 8:08 ` softworkz 2022-05-27 14:35 ` Soft Works 2022-05-26 8:08 ` [FFmpeg-devel] [PATCH 2/6] avcodec/vpp_qsv: Copy side data from input to output frame softworkz ` (6 subsequent siblings) 7 siblings, 1 reply; 65+ messages in thread From: softworkz @ 2022-05-26 8:08 UTC (permalink / raw) To: ffmpeg-devel; +Cc: softworkz From: softworkz <softworkz@hotmail.com> Signed-off-by: softworkz <softworkz@hotmail.com> Signed-off-by: Anton Khirnov <anton@khirnov.net> --- doc/APIchanges | 4 +++ libavutil/frame.c | 67 +++++++++++++++++++++++++++------------------ libavutil/frame.h | 32 ++++++++++++++++++++++ libavutil/version.h | 2 +- 4 files changed, 78 insertions(+), 27 deletions(-) diff --git a/doc/APIchanges b/doc/APIchanges index 337f1466d8..e5dd6f1e83 100644 --- a/doc/APIchanges +++ b/doc/APIchanges @@ -14,6 +14,10 @@ libavutil: 2021-04-27 API changes, most recent first: +2022-05-26 - xxxxxxxxx - lavu 57.26.100 - frame.h + Add av_frame_remove_all_side_data(), av_frame_copy_side_data(), + AV_FRAME_TRANSFER_SD_COPY, and AV_FRAME_TRANSFER_SD_FILTER. + 2022-05-23 - xxxxxxxxx - lavu 57.25.100 - avutil.h Deprecate av_fopen_utf8() without replacement. diff --git a/libavutil/frame.c b/libavutil/frame.c index fbb869fffa..bfe575612d 100644 --- a/libavutil/frame.c +++ b/libavutil/frame.c @@ -271,9 +271,45 @@ FF_ENABLE_DEPRECATION_WARNINGS return AVERROR(EINVAL); } +void av_frame_remove_all_side_data(AVFrame *frame) +{ + wipe_side_data(frame); +} + +int av_frame_copy_side_data(AVFrame* dst, const AVFrame* src, int flags) +{ + for (unsigned i = 0; i < src->nb_side_data; i++) { + const AVFrameSideData *sd_src = src->side_data[i]; + AVFrameSideData *sd_dst; + if ((flags & AV_FRAME_TRANSFER_SD_FILTER) && + sd_src->type == AV_FRAME_DATA_PANSCAN && + (src->width != dst->width || src->height != dst->height)) + continue; + if (flags & AV_FRAME_TRANSFER_SD_COPY) { + sd_dst = av_frame_new_side_data(dst, sd_src->type, + sd_src->size); + if (!sd_dst) { + wipe_side_data(dst); + return AVERROR(ENOMEM); + } + memcpy(sd_dst->data, sd_src->data, sd_src->size); + } else { + AVBufferRef *ref = av_buffer_ref(sd_src->buf); + sd_dst = av_frame_new_side_data_from_buf(dst, sd_src->type, ref); + if (!sd_dst) { + av_buffer_unref(&ref); + wipe_side_data(dst); + return AVERROR(ENOMEM); + } + } + av_dict_copy(&sd_dst->metadata, sd_src->metadata, 0); + } + return 0; +} + static int frame_copy_props(AVFrame *dst, const AVFrame *src, int force_copy) { - int ret, i; + int ret; dst->key_frame = src->key_frame; dst->pict_type = src->pict_type; @@ -309,31 +345,10 @@ static int frame_copy_props(AVFrame *dst, const AVFrame *src, int force_copy) av_dict_copy(&dst->metadata, src->metadata, 0); - for (i = 0; i < src->nb_side_data; i++) { - const AVFrameSideData *sd_src = src->side_data[i]; - AVFrameSideData *sd_dst; - if ( sd_src->type == AV_FRAME_DATA_PANSCAN - && (src->width != dst->width || src->height != dst->height)) - continue; - if (force_copy) { - sd_dst = av_frame_new_side_data(dst, sd_src->type, - sd_src->size); - if (!sd_dst) { - wipe_side_data(dst); - return AVERROR(ENOMEM); - } - memcpy(sd_dst->data, sd_src->data, sd_src->size); - } else { - AVBufferRef *ref = av_buffer_ref(sd_src->buf); - sd_dst = av_frame_new_side_data_from_buf(dst, sd_src->type, ref); - if (!sd_dst) { - av_buffer_unref(&ref); - wipe_side_data(dst); - return 
AVERROR(ENOMEM); - } - } - av_dict_copy(&sd_dst->metadata, sd_src->metadata, 0); - } + if ((ret = av_frame_copy_side_data(dst, src, + (force_copy ? AV_FRAME_TRANSFER_SD_COPY : 0) | + AV_FRAME_TRANSFER_SD_FILTER) < 0)) + return ret; ret = av_buffer_replace(&dst->opaque_ref, src->opaque_ref); ret |= av_buffer_replace(&dst->private_ref, src->private_ref); diff --git a/libavutil/frame.h b/libavutil/frame.h index 33fac2054c..a868fa70d7 100644 --- a/libavutil/frame.h +++ b/libavutil/frame.h @@ -850,6 +850,30 @@ int av_frame_copy(AVFrame *dst, const AVFrame *src); */ int av_frame_copy_props(AVFrame *dst, const AVFrame *src); + +/** + * Copy side data, rather than creating new references. + */ +#define AV_FRAME_TRANSFER_SD_COPY (1 << 0) +/** + * Filter out side data that does not match dst properties. + */ +#define AV_FRAME_TRANSFER_SD_FILTER (1 << 1) + +/** + * Copy all side-data from src to dst. + * + * @param dst a frame to which the side data should be copied. + * @param src a frame from which to copy the side data. + * @param flags a combination of AV_FRAME_TRANSFER_SD_* + * + * @return >= 0 on success, a negative AVERROR on error. + * + * @note This function will create new references to side data buffers in src, + * unless the AV_FRAME_TRANSFER_SD_COPY flag is passed. + */ +int av_frame_copy_side_data(AVFrame* dst, const AVFrame* src, int flags); + /** * Get the buffer reference a given data plane is stored in. * @@ -901,6 +925,14 @@ AVFrameSideData *av_frame_get_side_data(const AVFrame *frame, */ void av_frame_remove_side_data(AVFrame *frame, enum AVFrameSideDataType type); +/** + * Remove and free all side data instances. + * + * @param frame from which to remove all side data. + */ +void av_frame_remove_all_side_data(AVFrame *frame); + + /** * Flags for frame cropping. diff --git a/libavutil/version.h b/libavutil/version.h index 1b4b41d81f..2c7f4f6b37 100644 --- a/libavutil/version.h +++ b/libavutil/version.h @@ -79,7 +79,7 @@ */ #define LIBAVUTIL_VERSION_MAJOR 57 -#define LIBAVUTIL_VERSION_MINOR 25 +#define LIBAVUTIL_VERSION_MINOR 26 #define LIBAVUTIL_VERSION_MICRO 100 #define LIBAVUTIL_VERSION_INT AV_VERSION_INT(LIBAVUTIL_VERSION_MAJOR, \ -- ffmpeg-codebot _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
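For reference, a minimal usage sketch of the two functions added by this
patch; the helper name and the deep-copy switch are illustrative only:

#include "libavutil/frame.h"

/* Propagate side data from src to dst using the new API.  With
 * AV_FRAME_TRANSFER_SD_COPY the side-data buffers are duplicated;
 * without it, dst only receives new references to src's buffers. */
static int propagate_side_data(AVFrame *dst, const AVFrame *src, int deep_copy)
{
    av_frame_remove_all_side_data(dst);

    return av_frame_copy_side_data(dst, src,
                                   deep_copy ? AV_FRAME_TRANSFER_SD_COPY : 0);
}

This remove-then-copy pattern is essentially what patch 2 applies in
qsvvpp.c and vf_overlay_qsv.c.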
* Re: [FFmpeg-devel] [PATCH 1/6] avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() 2022-05-26 8:08 ` [FFmpeg-devel] [PATCH 1/6] avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() softworkz @ 2022-05-27 14:35 ` Soft Works 0 siblings, 0 replies; 65+ messages in thread From: Soft Works @ 2022-05-27 14:35 UTC (permalink / raw) To: softworkz, ffmpeg-devel > -----Original Message----- > From: softworkz <ffmpegagent@gmail.com> > Sent: Thursday, May 26, 2022 10:09 AM > To: ffmpeg-devel@ffmpeg.org > Cc: softworkz <softworkz@hotmail.com>; softworkz > <softworkz@hotmail.com> > Subject: [PATCH 1/6] avutil/frame: Add av_frame_copy_side_data() and > av_frame_remove_all_side_data() > > From: softworkz <softworkz@hotmail.com> > > Signed-off-by: softworkz <softworkz@hotmail.com> > Signed-off-by: Anton Khirnov <anton@khirnov.net> > --- > doc/APIchanges | 4 +++ > libavutil/frame.c | 67 +++++++++++++++++++++++++++---------------- > -- > libavutil/frame.h | 32 ++++++++++++++++++++++ > libavutil/version.h | 2 +- > 4 files changed, 78 insertions(+), 27 deletions(-) > > diff --git a/doc/APIchanges b/doc/APIchanges > index 337f1466d8..e5dd6f1e83 100644 > --- a/doc/APIchanges > +++ b/doc/APIchanges > @@ -14,6 +14,10 @@ libavutil: 2021-04-27 > > API changes, most recent first: > > +2022-05-26 - xxxxxxxxx - lavu 57.26.100 - frame.h > + Add av_frame_remove_all_side_data(), av_frame_copy_side_data(), > + AV_FRAME_TRANSFER_SD_COPY, and AV_FRAME_TRANSFER_SD_FILTER. > + > 2022-05-23 - xxxxxxxxx - lavu 57.25.100 - avutil.h > Deprecate av_fopen_utf8() without replacement. > > diff --git a/libavutil/frame.c b/libavutil/frame.c > index fbb869fffa..bfe575612d 100644 > --- a/libavutil/frame.c > +++ b/libavutil/frame.c > @@ -271,9 +271,45 @@ FF_ENABLE_DEPRECATION_WARNINGS > return AVERROR(EINVAL); > } > > +void av_frame_remove_all_side_data(AVFrame *frame) > +{ > + wipe_side_data(frame); > +} > + > +int av_frame_copy_side_data(AVFrame* dst, const AVFrame* src, int > flags) > +{ > + for (unsigned i = 0; i < src->nb_side_data; i++) { > + const AVFrameSideData *sd_src = src->side_data[i]; > + AVFrameSideData *sd_dst; > + if ((flags & AV_FRAME_TRANSFER_SD_FILTER) && > + sd_src->type == AV_FRAME_DATA_PANSCAN && > + (src->width != dst->width || src->height != dst- > >height)) > + continue; > + if (flags & AV_FRAME_TRANSFER_SD_COPY) { > + sd_dst = av_frame_new_side_data(dst, sd_src->type, > + sd_src->size); > + if (!sd_dst) { > + wipe_side_data(dst); > + return AVERROR(ENOMEM); > + } > + memcpy(sd_dst->data, sd_src->data, sd_src->size); > + } else { > + AVBufferRef *ref = av_buffer_ref(sd_src->buf); > + sd_dst = av_frame_new_side_data_from_buf(dst, sd_src- > >type, ref); > + if (!sd_dst) { > + av_buffer_unref(&ref); > + wipe_side_data(dst); > + return AVERROR(ENOMEM); > + } > + } > + av_dict_copy(&sd_dst->metadata, sd_src->metadata, 0); > + } > + return 0; > +} > + > static int frame_copy_props(AVFrame *dst, const AVFrame *src, int > force_copy) > { > - int ret, i; > + int ret; > > dst->key_frame = src->key_frame; > dst->pict_type = src->pict_type; > @@ -309,31 +345,10 @@ static int frame_copy_props(AVFrame *dst, const > AVFrame *src, int force_copy) > > av_dict_copy(&dst->metadata, src->metadata, 0); > > - for (i = 0; i < src->nb_side_data; i++) { > - const AVFrameSideData *sd_src = src->side_data[i]; > - AVFrameSideData *sd_dst; > - if ( sd_src->type == AV_FRAME_DATA_PANSCAN > - && (src->width != dst->width || src->height != dst- > >height)) > - continue; > - if (force_copy) 
{ > - sd_dst = av_frame_new_side_data(dst, sd_src->type, > - sd_src->size); > - if (!sd_dst) { > - wipe_side_data(dst); > - return AVERROR(ENOMEM); > - } > - memcpy(sd_dst->data, sd_src->data, sd_src->size); > - } else { > - AVBufferRef *ref = av_buffer_ref(sd_src->buf); > - sd_dst = av_frame_new_side_data_from_buf(dst, sd_src- > >type, ref); > - if (!sd_dst) { > - av_buffer_unref(&ref); > - wipe_side_data(dst); > - return AVERROR(ENOMEM); > - } > - } > - av_dict_copy(&sd_dst->metadata, sd_src->metadata, 0); > - } > + if ((ret = av_frame_copy_side_data(dst, src, > + (force_copy ? AV_FRAME_TRANSFER_SD_COPY : 0) | > + AV_FRAME_TRANSFER_SD_FILTER) < 0)) > + return ret; > > ret = av_buffer_replace(&dst->opaque_ref, src->opaque_ref); > ret |= av_buffer_replace(&dst->private_ref, src->private_ref); > diff --git a/libavutil/frame.h b/libavutil/frame.h > index 33fac2054c..a868fa70d7 100644 > --- a/libavutil/frame.h > +++ b/libavutil/frame.h > @@ -850,6 +850,30 @@ int av_frame_copy(AVFrame *dst, const AVFrame > *src); > */ > int av_frame_copy_props(AVFrame *dst, const AVFrame *src); > > + > +/** > + * Copy side data, rather than creating new references. > + */ > +#define AV_FRAME_TRANSFER_SD_COPY (1 << 0) > +/** > + * Filter out side data that does not match dst properties. > + */ > +#define AV_FRAME_TRANSFER_SD_FILTER (1 << 1) > + > +/** > + * Copy all side-data from src to dst. > + * > + * @param dst a frame to which the side data should be copied. > + * @param src a frame from which to copy the side data. > + * @param flags a combination of AV_FRAME_TRANSFER_SD_* > + * > + * @return >= 0 on success, a negative AVERROR on error. > + * > + * @note This function will create new references to side data > buffers in src, > + * unless the AV_FRAME_TRANSFER_SD_COPY flag is passed. > + */ > +int av_frame_copy_side_data(AVFrame* dst, const AVFrame* src, int > flags); > + > /** > * Get the buffer reference a given data plane is stored in. > * > @@ -901,6 +925,14 @@ AVFrameSideData *av_frame_get_side_data(const > AVFrame *frame, > */ > void av_frame_remove_side_data(AVFrame *frame, enum > AVFrameSideDataType type); > > +/** > + * Remove and free all side data instances. > + * > + * @param frame from which to remove all side data. > + */ > +void av_frame_remove_all_side_data(AVFrame *frame); > + > + > > /** > * Flags for frame cropping. > diff --git a/libavutil/version.h b/libavutil/version.h > index 1b4b41d81f..2c7f4f6b37 100644 > --- a/libavutil/version.h > +++ b/libavutil/version.h > @@ -79,7 +79,7 @@ > */ > > #define LIBAVUTIL_VERSION_MAJOR 57 > -#define LIBAVUTIL_VERSION_MINOR 25 > +#define LIBAVUTIL_VERSION_MINOR 26 > #define LIBAVUTIL_VERSION_MICRO 100 > > #define LIBAVUTIL_VERSION_INT > AV_VERSION_INT(LIBAVUTIL_VERSION_MAJOR, \ > -- > ffmpeg-codebot @Anton - I have integrated your proposed changes in this patch but Kept the av_frame_copy_side_data() name for the function. Would you be OK with this? I understand that "copy" is ambiguous. You said that without parameter it doesn't really do a copy. But while it doesn't create independent copies of the data buffers, it still "copies" the references to the side data, that's why I would consider the term "copy" still to be applicable. Please let me know what you think. Thanks, sw _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". 
^ permalink raw reply [flat|nested] 65+ messages in thread
* [FFmpeg-devel] [PATCH 2/6] avcodec/vpp_qsv: Copy side data from input to output frame 2022-05-26 8:08 [FFmpeg-devel] [PATCH 0/6] Implement SEI parsing for QSV decoders ffmpegagent 2022-05-26 8:08 ` [FFmpeg-devel] [PATCH 1/6] avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() softworkz @ 2022-05-26 8:08 ` softworkz 2022-05-31 9:19 ` Xiang, Haihao 2022-05-26 8:08 ` [FFmpeg-devel] [PATCH 3/6] avcodec/mpeg12dec: make mpeg_decode_user_data() accessible softworkz ` (5 subsequent siblings) 7 siblings, 1 reply; 65+ messages in thread From: softworkz @ 2022-05-26 8:08 UTC (permalink / raw) To: ffmpeg-devel; +Cc: softworkz From: softworkz <softworkz@hotmail.com> Signed-off-by: softworkz <softworkz@hotmail.com> --- libavfilter/qsvvpp.c | 6 ++++++ libavfilter/vf_overlay_qsv.c | 19 +++++++++++++++---- 2 files changed, 21 insertions(+), 4 deletions(-) diff --git a/libavfilter/qsvvpp.c b/libavfilter/qsvvpp.c index 954f882637..f4bf628073 100644 --- a/libavfilter/qsvvpp.c +++ b/libavfilter/qsvvpp.c @@ -843,6 +843,12 @@ int ff_qsvvpp_filter_frame(QSVVPPContext *s, AVFilterLink *inlink, AVFrame *picr return AVERROR(EAGAIN); break; } + + av_frame_remove_all_side_data(out_frame->frame); + ret = av_frame_copy_side_data(out_frame->frame, in_frame->frame, 0); + if (ret < 0) + return ret; + out_frame->frame->pts = av_rescale_q(out_frame->surface.Data.TimeStamp, default_tb, outlink->time_base); diff --git a/libavfilter/vf_overlay_qsv.c b/libavfilter/vf_overlay_qsv.c index 7e76b39aa9..e15214dbf2 100644 --- a/libavfilter/vf_overlay_qsv.c +++ b/libavfilter/vf_overlay_qsv.c @@ -231,13 +231,24 @@ static int process_frame(FFFrameSync *fs) { AVFilterContext *ctx = fs->parent; QSVOverlayContext *s = fs->opaque; + AVFrame *frame0 = NULL; AVFrame *frame = NULL; - int ret = 0, i; + int ret = 0; - for (i = 0; i < ctx->nb_inputs; i++) { + for (unsigned i = 0; i < ctx->nb_inputs; i++) { ret = ff_framesync_get_frame(fs, i, &frame, 0); - if (ret == 0) - ret = ff_qsvvpp_filter_frame(s->qsv, ctx->inputs[i], frame); + + if (ret == 0) { + if (i == 0) + frame0 = frame; + else { + av_frame_remove_all_side_data(frame); + ret = av_frame_copy_side_data(frame, frame0, 0); + } + + ret = ret < 0 ? ret : ff_qsvvpp_filter_frame(s->qsv, ctx->inputs[i], frame); + } + if (ret < 0 && ret != AVERROR(EAGAIN)) break; } -- ffmpeg-codebot _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
* Re: [FFmpeg-devel] [PATCH 2/6] avcodec/vpp_qsv: Copy side data from input to output frame 2022-05-26 8:08 ` [FFmpeg-devel] [PATCH 2/6] avcodec/vpp_qsv: Copy side data from input to output frame softworkz @ 2022-05-31 9:19 ` Xiang, Haihao 0 siblings, 0 replies; 65+ messages in thread From: Xiang, Haihao @ 2022-05-31 9:19 UTC (permalink / raw) To: ffmpeg-devel; +Cc: softworkz On Thu, 2022-05-26 at 08:08 +0000, softworkz wrote: > From: softworkz <softworkz@hotmail.com> > > Signed-off-by: softworkz <softworkz@hotmail.com> > --- > libavfilter/qsvvpp.c | 6 ++++++ > libavfilter/vf_overlay_qsv.c | 19 +++++++++++++++---- > 2 files changed, 21 insertions(+), 4 deletions(-) > > diff --git a/libavfilter/qsvvpp.c b/libavfilter/qsvvpp.c > index 954f882637..f4bf628073 100644 > --- a/libavfilter/qsvvpp.c > +++ b/libavfilter/qsvvpp.c > @@ -843,6 +843,12 @@ int ff_qsvvpp_filter_frame(QSVVPPContext *s, AVFilterLink > *inlink, AVFrame *picr > return AVERROR(EAGAIN); > break; > } > + > + av_frame_remove_all_side_data(out_frame->frame); > + ret = av_frame_copy_side_data(out_frame->frame, in_frame->frame, 0); > + if (ret < 0) > + return ret; > + > out_frame->frame->pts = av_rescale_q(out_frame- > >surface.Data.TimeStamp, > default_tb, outlink->time_base); > > diff --git a/libavfilter/vf_overlay_qsv.c b/libavfilter/vf_overlay_qsv.c > index 7e76b39aa9..e15214dbf2 100644 > --- a/libavfilter/vf_overlay_qsv.c > +++ b/libavfilter/vf_overlay_qsv.c > @@ -231,13 +231,24 @@ static int process_frame(FFFrameSync *fs) > { > AVFilterContext *ctx = fs->parent; > QSVOverlayContext *s = fs->opaque; > + AVFrame *frame0 = NULL; > AVFrame *frame = NULL; > - int ret = 0, i; > + int ret = 0; > > - for (i = 0; i < ctx->nb_inputs; i++) { > + for (unsigned i = 0; i < ctx->nb_inputs; i++) { > ret = ff_framesync_get_frame(fs, i, &frame, 0); > - if (ret == 0) > - ret = ff_qsvvpp_filter_frame(s->qsv, ctx->inputs[i], frame); > + > + if (ret == 0) { > + if (i == 0) > + frame0 = frame; > + else { > + av_frame_remove_all_side_data(frame); > + ret = av_frame_copy_side_data(frame, frame0, 0); > + } > + > + ret = ret < 0 ? ret : ff_qsvvpp_filter_frame(s->qsv, ctx- > >inputs[i], frame); > + } > + > if (ret < 0 && ret != AVERROR(EAGAIN)) > break; > } LGTM -Haihao _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
* [FFmpeg-devel] [PATCH 3/6] avcodec/mpeg12dec: make mpeg_decode_user_data() accessible 2022-05-26 8:08 [FFmpeg-devel] [PATCH 0/6] Implement SEI parsing for QSV decoders ffmpegagent 2022-05-26 8:08 ` [FFmpeg-devel] [PATCH 1/6] avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() softworkz 2022-05-26 8:08 ` [FFmpeg-devel] [PATCH 2/6] avcodec/vpp_qsv: Copy side data from input to output frame softworkz @ 2022-05-26 8:08 ` softworkz 2022-05-31 9:24 ` Xiang, Haihao 2022-05-26 8:08 ` [FFmpeg-devel] [PATCH 4/6] avcodec/hevcdec: make set_side_data() accessible softworkz ` (4 subsequent siblings) 7 siblings, 1 reply; 65+ messages in thread From: softworkz @ 2022-05-26 8:08 UTC (permalink / raw) To: ffmpeg-devel; +Cc: softworkz From: softworkz <softworkz@hotmail.com> Signed-off-by: softworkz <softworkz@hotmail.com> --- libavcodec/mpeg12.h | 28 ++++++++++++++++++++++++++++ libavcodec/mpeg12dec.c | 40 +++++----------------------------------- 2 files changed, 33 insertions(+), 35 deletions(-) diff --git a/libavcodec/mpeg12.h b/libavcodec/mpeg12.h index e0406b32d9..84a829cdd3 100644 --- a/libavcodec/mpeg12.h +++ b/libavcodec/mpeg12.h @@ -23,6 +23,7 @@ #define AVCODEC_MPEG12_H #include "mpegvideo.h" +#include "libavutil/stereo3d.h" /* Start codes. */ #define SEQ_END_CODE 0x000001b7 @@ -34,6 +35,31 @@ #define EXT_START_CODE 0x000001b5 #define USER_START_CODE 0x000001b2 +typedef struct Mpeg1Context { + MpegEncContext mpeg_enc_ctx; + int mpeg_enc_ctx_allocated; /* true if decoding context allocated */ + int repeat_field; /* true if we must repeat the field */ + AVPanScan pan_scan; /* some temporary storage for the panscan */ + AVStereo3D stereo3d; + int has_stereo3d; + AVBufferRef *a53_buf_ref; + uint8_t afd; + int has_afd; + int slice_count; + unsigned aspect_ratio_info; + AVRational save_aspect; + int save_width, save_height, save_progressive_seq; + int rc_buffer_size; + AVRational frame_rate_ext; /* MPEG-2 specific framerate modificator */ + unsigned frame_rate_index; + int sync; /* Did we reach a sync point like a GOP/SEQ/KEYFrame? */ + int closed_gop; + int tmpgexs; + int first_slice; + int extradata_decoded; + int64_t timecode_frame_start; /*< GOP timecode frame start number, in non drop frame format */ +} Mpeg1Context; + void ff_mpeg12_common_init(MpegEncContext *s); void ff_mpeg1_clean_buffers(MpegEncContext *s); @@ -45,4 +71,6 @@ void ff_mpeg12_find_best_frame_rate(AVRational frame_rate, int *code, int *ext_n, int *ext_d, int nonstandard); +void ff_mpeg_decode_user_data(AVCodecContext *avctx, Mpeg1Context *s1, const uint8_t *p, int buf_size); + #endif /* AVCODEC_MPEG12_H */ diff --git a/libavcodec/mpeg12dec.c b/libavcodec/mpeg12dec.c index e9bde48f7a..11d2b58185 100644 --- a/libavcodec/mpeg12dec.c +++ b/libavcodec/mpeg12dec.c @@ -58,31 +58,6 @@ #define A53_MAX_CC_COUNT 2000 -typedef struct Mpeg1Context { - MpegEncContext mpeg_enc_ctx; - int mpeg_enc_ctx_allocated; /* true if decoding context allocated */ - int repeat_field; /* true if we must repeat the field */ - AVPanScan pan_scan; /* some temporary storage for the panscan */ - AVStereo3D stereo3d; - int has_stereo3d; - AVBufferRef *a53_buf_ref; - uint8_t afd; - int has_afd; - int slice_count; - unsigned aspect_ratio_info; - AVRational save_aspect; - int save_width, save_height, save_progressive_seq; - int rc_buffer_size; - AVRational frame_rate_ext; /* MPEG-2 specific framerate modificator */ - unsigned frame_rate_index; - int sync; /* Did we reach a sync point like a GOP/SEQ/KEYFrame? 
*/ - int closed_gop; - int tmpgexs; - int first_slice; - int extradata_decoded; - int64_t timecode_frame_start; /*< GOP timecode frame start number, in non drop frame format */ -} Mpeg1Context; - #define MB_TYPE_ZERO_MV 0x20000000 static const uint32_t ptype2mb_type[7] = { @@ -2198,11 +2173,9 @@ static int vcr2_init_sequence(AVCodecContext *avctx) return 0; } -static int mpeg_decode_a53_cc(AVCodecContext *avctx, +static int mpeg_decode_a53_cc(AVCodecContext *avctx, Mpeg1Context *s1, const uint8_t *p, int buf_size) { - Mpeg1Context *s1 = avctx->priv_data; - if (buf_size >= 6 && p[0] == 'G' && p[1] == 'A' && p[2] == '9' && p[3] == '4' && p[4] == 3 && (p[5] & 0x40)) { @@ -2333,12 +2306,9 @@ static int mpeg_decode_a53_cc(AVCodecContext *avctx, return 0; } -static void mpeg_decode_user_data(AVCodecContext *avctx, - const uint8_t *p, int buf_size) +void ff_mpeg_decode_user_data(AVCodecContext *avctx, Mpeg1Context *s1, const uint8_t *p, int buf_size) { - Mpeg1Context *s = avctx->priv_data; const uint8_t *buf_end = p + buf_size; - Mpeg1Context *s1 = avctx->priv_data; #if 0 int i; @@ -2352,7 +2322,7 @@ static void mpeg_decode_user_data(AVCodecContext *avctx, int i; for(i=0; i<20; i++) if (!memcmp(p+i, "\0TMPGEXS\0", 9)){ - s->tmpgexs= 1; + s1->tmpgexs= 1; } } /* we parse the DTG active format information */ @@ -2398,7 +2368,7 @@ static void mpeg_decode_user_data(AVCodecContext *avctx, break; } } - } else if (mpeg_decode_a53_cc(avctx, p, buf_size)) { + } else if (mpeg_decode_a53_cc(avctx, s1, p, buf_size)) { return; } } @@ -2590,7 +2560,7 @@ static int decode_chunks(AVCodecContext *avctx, AVFrame *picture, } break; case USER_START_CODE: - mpeg_decode_user_data(avctx, buf_ptr, input_size); + ff_mpeg_decode_user_data(avctx, s, buf_ptr, input_size); break; case GOP_START_CODE: if (last_code == 0) { -- ffmpeg-codebot _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
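With ff_mpeg_decode_user_data() exported, a caller outside mpeg12dec.c (such
as the QSV decoder in patch 6) can feed raw user-data payloads to it and then
transfer the parsed results to an output frame. A rough sketch, under the
assumption that the caller keeps a scratch Mpeg1Context purely as a parsing
target; the helper name is hypothetical and not part of the patch:

#include "libavcodec/mpeg12.h"
#include "libavutil/frame.h"

/* Parse one MPEG-2 user_data payload (e.g. A/53 closed captions) and
 * attach the result to the output frame. */
static int export_user_data(AVCodecContext *avctx, Mpeg1Context *s1,
                            const uint8_t *payload, int payload_size,
                            AVFrame *out)
{
    ff_mpeg_decode_user_data(avctx, s1, payload, payload_size);

    if (s1->a53_buf_ref) {
        AVFrameSideData *sd = av_frame_new_side_data_from_buf(
            out, AV_FRAME_DATA_A53_CC, s1->a53_buf_ref);
        if (!sd)
            return AVERROR(ENOMEM);
        s1->a53_buf_ref = NULL;   /* ownership now belongs to the frame */
    }
    return 0;
}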
* Re: [FFmpeg-devel] [PATCH 3/6] avcodec/mpeg12dec: make mpeg_decode_user_data() accessible 2022-05-26 8:08 ` [FFmpeg-devel] [PATCH 3/6] avcodec/mpeg12dec: make mpeg_decode_user_data() accessible softworkz @ 2022-05-31 9:24 ` Xiang, Haihao 0 siblings, 0 replies; 65+ messages in thread From: Xiang, Haihao @ 2022-05-31 9:24 UTC (permalink / raw) To: ffmpeg-devel; +Cc: softworkz On Thu, 2022-05-26 at 08:08 +0000, softworkz wrote: > From: softworkz <softworkz@hotmail.com> > > Signed-off-by: softworkz <softworkz@hotmail.com> > --- > libavcodec/mpeg12.h | 28 ++++++++++++++++++++++++++++ > libavcodec/mpeg12dec.c | 40 +++++----------------------------------- > 2 files changed, 33 insertions(+), 35 deletions(-) > > diff --git a/libavcodec/mpeg12.h b/libavcodec/mpeg12.h > index e0406b32d9..84a829cdd3 100644 > --- a/libavcodec/mpeg12.h > +++ b/libavcodec/mpeg12.h > @@ -23,6 +23,7 @@ > #define AVCODEC_MPEG12_H > > #include "mpegvideo.h" > +#include "libavutil/stereo3d.h" > > /* Start codes. */ > #define SEQ_END_CODE 0x000001b7 > @@ -34,6 +35,31 @@ > #define EXT_START_CODE 0x000001b5 > #define USER_START_CODE 0x000001b2 > > +typedef struct Mpeg1Context { > + MpegEncContext mpeg_enc_ctx; > + int mpeg_enc_ctx_allocated; /* true if decoding context allocated */ > + int repeat_field; /* true if we must repeat the field */ > + AVPanScan pan_scan; /* some temporary storage for the panscan */ > + AVStereo3D stereo3d; > + int has_stereo3d; > + AVBufferRef *a53_buf_ref; > + uint8_t afd; > + int has_afd; > + int slice_count; > + unsigned aspect_ratio_info; > + AVRational save_aspect; > + int save_width, save_height, save_progressive_seq; > + int rc_buffer_size; > + AVRational frame_rate_ext; /* MPEG-2 specific framerate modificator */ > + unsigned frame_rate_index; > + int sync; /* Did we reach a sync point like a > GOP/SEQ/KEYFrame? */ > + int closed_gop; > + int tmpgexs; > + int first_slice; > + int extradata_decoded; > + int64_t timecode_frame_start; /*< GOP timecode frame start number, in > non drop frame format */ > +} Mpeg1Context; > + > void ff_mpeg12_common_init(MpegEncContext *s); > > void ff_mpeg1_clean_buffers(MpegEncContext *s); > @@ -45,4 +71,6 @@ void ff_mpeg12_find_best_frame_rate(AVRational frame_rate, > int *code, int *ext_n, int *ext_d, > int nonstandard); > > +void ff_mpeg_decode_user_data(AVCodecContext *avctx, Mpeg1Context *s1, const > uint8_t *p, int buf_size); > + > #endif /* AVCODEC_MPEG12_H */ > diff --git a/libavcodec/mpeg12dec.c b/libavcodec/mpeg12dec.c > index e9bde48f7a..11d2b58185 100644 > --- a/libavcodec/mpeg12dec.c > +++ b/libavcodec/mpeg12dec.c > @@ -58,31 +58,6 @@ > > #define A53_MAX_CC_COUNT 2000 > > -typedef struct Mpeg1Context { > - MpegEncContext mpeg_enc_ctx; > - int mpeg_enc_ctx_allocated; /* true if decoding context allocated */ > - int repeat_field; /* true if we must repeat the field */ > - AVPanScan pan_scan; /* some temporary storage for the panscan */ > - AVStereo3D stereo3d; > - int has_stereo3d; > - AVBufferRef *a53_buf_ref; > - uint8_t afd; > - int has_afd; > - int slice_count; > - unsigned aspect_ratio_info; > - AVRational save_aspect; > - int save_width, save_height, save_progressive_seq; > - int rc_buffer_size; > - AVRational frame_rate_ext; /* MPEG-2 specific framerate modificator */ > - unsigned frame_rate_index; > - int sync; /* Did we reach a sync point like a > GOP/SEQ/KEYFrame? 
*/ > - int closed_gop; > - int tmpgexs; > - int first_slice; > - int extradata_decoded; > - int64_t timecode_frame_start; /*< GOP timecode frame start number, in > non drop frame format */ > -} Mpeg1Context; > - > #define MB_TYPE_ZERO_MV 0x20000000 > > static const uint32_t ptype2mb_type[7] = { > @@ -2198,11 +2173,9 @@ static int vcr2_init_sequence(AVCodecContext *avctx) > return 0; > } > > -static int mpeg_decode_a53_cc(AVCodecContext *avctx, > +static int mpeg_decode_a53_cc(AVCodecContext *avctx, Mpeg1Context *s1, > const uint8_t *p, int buf_size) > { > - Mpeg1Context *s1 = avctx->priv_data; > - > if (buf_size >= 6 && > p[0] == 'G' && p[1] == 'A' && p[2] == '9' && p[3] == '4' && > p[4] == 3 && (p[5] & 0x40)) { > @@ -2333,12 +2306,9 @@ static int mpeg_decode_a53_cc(AVCodecContext *avctx, > return 0; > } > > -static void mpeg_decode_user_data(AVCodecContext *avctx, > - const uint8_t *p, int buf_size) > +void ff_mpeg_decode_user_data(AVCodecContext *avctx, Mpeg1Context *s1, const > uint8_t *p, int buf_size) > { > - Mpeg1Context *s = avctx->priv_data; > const uint8_t *buf_end = p + buf_size; > - Mpeg1Context *s1 = avctx->priv_data; > > #if 0 > int i; > @@ -2352,7 +2322,7 @@ static void mpeg_decode_user_data(AVCodecContext *avctx, > int i; > for(i=0; i<20; i++) > if (!memcmp(p+i, "\0TMPGEXS\0", 9)){ > - s->tmpgexs= 1; > + s1->tmpgexs= 1; > } > } > /* we parse the DTG active format information */ > @@ -2398,7 +2368,7 @@ static void mpeg_decode_user_data(AVCodecContext *avctx, > break; > } > } > - } else if (mpeg_decode_a53_cc(avctx, p, buf_size)) { > + } else if (mpeg_decode_a53_cc(avctx, s1, p, buf_size)) { > return; > } > } > @@ -2590,7 +2560,7 @@ static int decode_chunks(AVCodecContext *avctx, AVFrame > *picture, > } > break; > case USER_START_CODE: > - mpeg_decode_user_data(avctx, buf_ptr, input_size); > + ff_mpeg_decode_user_data(avctx, s, buf_ptr, input_size); > break; > case GOP_START_CODE: > if (last_code == 0) { LGTM -Haihao _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
* [FFmpeg-devel] [PATCH 4/6] avcodec/hevcdec: make set_side_data() accessible 2022-05-26 8:08 [FFmpeg-devel] [PATCH 0/6] Implement SEI parsing for QSV decoders ffmpegagent ` (2 preceding siblings ...) 2022-05-26 8:08 ` [FFmpeg-devel] [PATCH 3/6] avcodec/mpeg12dec: make mpeg_decode_user_data() accessible softworkz @ 2022-05-26 8:08 ` softworkz 2022-05-31 9:38 ` Xiang, Haihao 2022-05-31 9:40 ` Xiang, Haihao 2022-05-26 8:08 ` [FFmpeg-devel] [PATCH 5/6] avcodec/h264dec: make h264_export_frame_props() accessible softworkz ` (3 subsequent siblings) 7 siblings, 2 replies; 65+ messages in thread From: softworkz @ 2022-05-26 8:08 UTC (permalink / raw) To: ffmpeg-devel; +Cc: softworkz From: softworkz <softworkz@hotmail.com> Signed-off-by: softworkz <softworkz@hotmail.com> --- libavcodec/hevcdec.c | 117 +++++++++++++++++++++---------------------- libavcodec/hevcdec.h | 2 + 2 files changed, 60 insertions(+), 59 deletions(-) diff --git a/libavcodec/hevcdec.c b/libavcodec/hevcdec.c index f782ea6394..ff22081b46 100644 --- a/libavcodec/hevcdec.c +++ b/libavcodec/hevcdec.c @@ -2726,23 +2726,22 @@ error: return res; } -static int set_side_data(HEVCContext *s) +int ff_set_side_data(AVCodecContext *logctx, HEVCSEI *sei, HEVCContext *s, AVFrame *out) { - AVFrame *out = s->ref->frame; - int ret; + int ret = 0; - if (s->sei.frame_packing.present && - s->sei.frame_packing.arrangement_type >= 3 && - s->sei.frame_packing.arrangement_type <= 5 && - s->sei.frame_packing.content_interpretation_type > 0 && - s->sei.frame_packing.content_interpretation_type < 3) { + if (sei->frame_packing.present && + sei->frame_packing.arrangement_type >= 3 && + sei->frame_packing.arrangement_type <= 5 && + sei->frame_packing.content_interpretation_type > 0 && + sei->frame_packing.content_interpretation_type < 3) { AVStereo3D *stereo = av_stereo3d_create_side_data(out); if (!stereo) return AVERROR(ENOMEM); - switch (s->sei.frame_packing.arrangement_type) { + switch (sei->frame_packing.arrangement_type) { case 3: - if (s->sei.frame_packing.quincunx_subsampling) + if (sei->frame_packing.quincunx_subsampling) stereo->type = AV_STEREO3D_SIDEBYSIDE_QUINCUNX; else stereo->type = AV_STEREO3D_SIDEBYSIDE; @@ -2755,21 +2754,21 @@ static int set_side_data(HEVCContext *s) break; } - if (s->sei.frame_packing.content_interpretation_type == 2) + if (sei->frame_packing.content_interpretation_type == 2) stereo->flags = AV_STEREO3D_FLAG_INVERT; - if (s->sei.frame_packing.arrangement_type == 5) { - if (s->sei.frame_packing.current_frame_is_frame0_flag) + if (sei->frame_packing.arrangement_type == 5) { + if (sei->frame_packing.current_frame_is_frame0_flag) stereo->view = AV_STEREO3D_VIEW_LEFT; else stereo->view = AV_STEREO3D_VIEW_RIGHT; } } - if (s->sei.display_orientation.present && - (s->sei.display_orientation.anticlockwise_rotation || - s->sei.display_orientation.hflip || s->sei.display_orientation.vflip)) { - double angle = s->sei.display_orientation.anticlockwise_rotation * 360 / (double) (1 << 16); + if (sei->display_orientation.present && + (sei->display_orientation.anticlockwise_rotation || + sei->display_orientation.hflip || sei->display_orientation.vflip)) { + double angle = sei->display_orientation.anticlockwise_rotation * 360 / (double) (1 << 16); AVFrameSideData *rotation = av_frame_new_side_data(out, AV_FRAME_DATA_DISPLAYMATRIX, sizeof(int32_t) * 9); @@ -2788,17 +2787,17 @@ static int set_side_data(HEVCContext *s) * (1 - 2 * !!s->sei.display_orientation.vflip); av_display_rotation_set((int32_t *)rotation->data, angle); 
av_display_matrix_flip((int32_t *)rotation->data, - s->sei.display_orientation.hflip, - s->sei.display_orientation.vflip); + sei->display_orientation.hflip, + sei->display_orientation.vflip); } // Decrement the mastering display flag when IRAP frame has no_rasl_output_flag=1 // so the side data persists for the entire coded video sequence. - if (s->sei.mastering_display.present > 0 && + if (s && sei->mastering_display.present > 0 && IS_IRAP(s) && s->no_rasl_output_flag) { - s->sei.mastering_display.present--; + sei->mastering_display.present--; } - if (s->sei.mastering_display.present) { + if (sei->mastering_display.present) { // HEVC uses a g,b,r ordering, which we convert to a more natural r,g,b const int mapping[3] = {2, 0, 1}; const int chroma_den = 50000; @@ -2811,25 +2810,25 @@ static int set_side_data(HEVCContext *s) for (i = 0; i < 3; i++) { const int j = mapping[i]; - metadata->display_primaries[i][0].num = s->sei.mastering_display.display_primaries[j][0]; + metadata->display_primaries[i][0].num = sei->mastering_display.display_primaries[j][0]; metadata->display_primaries[i][0].den = chroma_den; - metadata->display_primaries[i][1].num = s->sei.mastering_display.display_primaries[j][1]; + metadata->display_primaries[i][1].num = sei->mastering_display.display_primaries[j][1]; metadata->display_primaries[i][1].den = chroma_den; } - metadata->white_point[0].num = s->sei.mastering_display.white_point[0]; + metadata->white_point[0].num = sei->mastering_display.white_point[0]; metadata->white_point[0].den = chroma_den; - metadata->white_point[1].num = s->sei.mastering_display.white_point[1]; + metadata->white_point[1].num = sei->mastering_display.white_point[1]; metadata->white_point[1].den = chroma_den; - metadata->max_luminance.num = s->sei.mastering_display.max_luminance; + metadata->max_luminance.num = sei->mastering_display.max_luminance; metadata->max_luminance.den = luma_den; - metadata->min_luminance.num = s->sei.mastering_display.min_luminance; + metadata->min_luminance.num = sei->mastering_display.min_luminance; metadata->min_luminance.den = luma_den; metadata->has_luminance = 1; metadata->has_primaries = 1; - av_log(s->avctx, AV_LOG_DEBUG, "Mastering Display Metadata:\n"); - av_log(s->avctx, AV_LOG_DEBUG, + av_log(logctx, AV_LOG_DEBUG, "Mastering Display Metadata:\n"); + av_log(logctx, AV_LOG_DEBUG, "r(%5.4f,%5.4f) g(%5.4f,%5.4f) b(%5.4f %5.4f) wp(%5.4f, %5.4f)\n", av_q2d(metadata->display_primaries[0][0]), av_q2d(metadata->display_primaries[0][1]), @@ -2838,31 +2837,31 @@ static int set_side_data(HEVCContext *s) av_q2d(metadata->display_primaries[2][0]), av_q2d(metadata->display_primaries[2][1]), av_q2d(metadata->white_point[0]), av_q2d(metadata->white_point[1])); - av_log(s->avctx, AV_LOG_DEBUG, + av_log(logctx, AV_LOG_DEBUG, "min_luminance=%f, max_luminance=%f\n", av_q2d(metadata->min_luminance), av_q2d(metadata->max_luminance)); } // Decrement the mastering display flag when IRAP frame has no_rasl_output_flag=1 // so the side data persists for the entire coded video sequence. 
- if (s->sei.content_light.present > 0 && + if (s && sei->content_light.present > 0 && IS_IRAP(s) && s->no_rasl_output_flag) { - s->sei.content_light.present--; + sei->content_light.present--; } - if (s->sei.content_light.present) { + if (sei->content_light.present) { AVContentLightMetadata *metadata = av_content_light_metadata_create_side_data(out); if (!metadata) return AVERROR(ENOMEM); - metadata->MaxCLL = s->sei.content_light.max_content_light_level; - metadata->MaxFALL = s->sei.content_light.max_pic_average_light_level; + metadata->MaxCLL = sei->content_light.max_content_light_level; + metadata->MaxFALL = sei->content_light.max_pic_average_light_level; - av_log(s->avctx, AV_LOG_DEBUG, "Content Light Level Metadata:\n"); - av_log(s->avctx, AV_LOG_DEBUG, "MaxCLL=%d, MaxFALL=%d\n", + av_log(logctx, AV_LOG_DEBUG, "Content Light Level Metadata:\n"); + av_log(logctx, AV_LOG_DEBUG, "MaxCLL=%d, MaxFALL=%d\n", metadata->MaxCLL, metadata->MaxFALL); } - if (s->sei.a53_caption.buf_ref) { - HEVCSEIA53Caption *a53 = &s->sei.a53_caption; + if (sei->a53_caption.buf_ref) { + HEVCSEIA53Caption *a53 = &sei->a53_caption; AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_A53_CC, a53->buf_ref); if (!sd) @@ -2870,8 +2869,8 @@ static int set_side_data(HEVCContext *s) a53->buf_ref = NULL; } - for (int i = 0; i < s->sei.unregistered.nb_buf_ref; i++) { - HEVCSEIUnregistered *unreg = &s->sei.unregistered; + for (int i = 0; i < sei->unregistered.nb_buf_ref; i++) { + HEVCSEIUnregistered *unreg = &sei->unregistered; if (unreg->buf_ref[i]) { AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, @@ -2882,9 +2881,9 @@ static int set_side_data(HEVCContext *s) unreg->buf_ref[i] = NULL; } } - s->sei.unregistered.nb_buf_ref = 0; + sei->unregistered.nb_buf_ref = 0; - if (s->sei.timecode.present) { + if (s && sei->timecode.present) { uint32_t *tc_sd; char tcbuf[AV_TIMECODE_STR_SIZE]; AVFrameSideData *tcside = av_frame_new_side_data(out, AV_FRAME_DATA_S12M_TIMECODE, @@ -2893,25 +2892,25 @@ static int set_side_data(HEVCContext *s) return AVERROR(ENOMEM); tc_sd = (uint32_t*)tcside->data; - tc_sd[0] = s->sei.timecode.num_clock_ts; + tc_sd[0] = sei->timecode.num_clock_ts; for (int i = 0; i < tc_sd[0]; i++) { - int drop = s->sei.timecode.cnt_dropped_flag[i]; - int hh = s->sei.timecode.hours_value[i]; - int mm = s->sei.timecode.minutes_value[i]; - int ss = s->sei.timecode.seconds_value[i]; - int ff = s->sei.timecode.n_frames[i]; + int drop = sei->timecode.cnt_dropped_flag[i]; + int hh = sei->timecode.hours_value[i]; + int mm = sei->timecode.minutes_value[i]; + int ss = sei->timecode.seconds_value[i]; + int ff = sei->timecode.n_frames[i]; tc_sd[i + 1] = av_timecode_get_smpte(s->avctx->framerate, drop, hh, mm, ss, ff); av_timecode_make_smpte_tc_string2(tcbuf, s->avctx->framerate, tc_sd[i + 1], 0, 0); av_dict_set(&out->metadata, "timecode", tcbuf, 0); } - s->sei.timecode.num_clock_ts = 0; + sei->timecode.num_clock_ts = 0; } - if (s->sei.film_grain_characteristics.present) { - HEVCSEIFilmGrainCharacteristics *fgc = &s->sei.film_grain_characteristics; + if (s && sei->film_grain_characteristics.present) { + HEVCSEIFilmGrainCharacteristics *fgc = &sei->film_grain_characteristics; AVFilmGrainParams *fgp = av_film_grain_params_create_side_data(out); if (!fgp) return AVERROR(ENOMEM); @@ -2965,8 +2964,8 @@ static int set_side_data(HEVCContext *s) fgc->present = fgc->persistence_flag; } - if (s->sei.dynamic_hdr_plus.info) { - AVBufferRef *info_ref = av_buffer_ref(s->sei.dynamic_hdr_plus.info); + if 
(sei->dynamic_hdr_plus.info) { + AVBufferRef *info_ref = av_buffer_ref(sei->dynamic_hdr_plus.info); if (!info_ref) return AVERROR(ENOMEM); @@ -2976,7 +2975,7 @@ static int set_side_data(HEVCContext *s) } } - if (s->rpu_buf) { + if (s && s->rpu_buf) { AVFrameSideData *rpu = av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_DOVI_RPU_BUFFER, s->rpu_buf); if (!rpu) return AVERROR(ENOMEM); @@ -2984,10 +2983,10 @@ static int set_side_data(HEVCContext *s) s->rpu_buf = NULL; } - if ((ret = ff_dovi_attach_side_data(&s->dovi_ctx, out)) < 0) + if (s && (ret = ff_dovi_attach_side_data(&s->dovi_ctx, out)) < 0) return ret; - if (s->sei.dynamic_hdr_vivid.info) { + if (s && s->sei.dynamic_hdr_vivid.info) { AVBufferRef *info_ref = av_buffer_ref(s->sei.dynamic_hdr_vivid.info); if (!info_ref) return AVERROR(ENOMEM); @@ -3046,7 +3045,7 @@ static int hevc_frame_start(HEVCContext *s) goto fail; } - ret = set_side_data(s); + ret = ff_set_side_data(s->avctx, &s->sei, s, s->ref->frame); if (ret < 0) goto fail; diff --git a/libavcodec/hevcdec.h b/libavcodec/hevcdec.h index de861b88b3..d4001466f6 100644 --- a/libavcodec/hevcdec.h +++ b/libavcodec/hevcdec.h @@ -690,6 +690,8 @@ void ff_hevc_hls_residual_coding(HEVCContext *s, int x0, int y0, void ff_hevc_hls_mvd_coding(HEVCContext *s, int x0, int y0, int log2_cb_size); +int ff_set_side_data(AVCodecContext *logctx, HEVCSEI *sei, HEVCContext *s, AVFrame *out); + extern const uint8_t ff_hevc_qpel_extra_before[4]; extern const uint8_t ff_hevc_qpel_extra_after[4]; extern const uint8_t ff_hevc_qpel_extra[4]; -- ffmpeg-codebot _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
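With this refactoring, the SEI-to-side-data mapping can be driven from a
populated HEVCSEI alone; the HEVCContext argument is only needed for the
branches that depend on decoder state (IRAP persistence handling, timecode,
film grain, Dolby Vision RPU), which the patch guards with if (s && ...).
A sketch of how an external caller such as qsvdec might use the exported
function, assuming it has already filled a local HEVCSEI from the raw SEI
payloads:

#include "libavcodec/hevcdec.h"

/* Attach the SEI messages collected in 'sei' to 'frame' as side data.
 * Passing NULL for the HEVCContext skips the decoder-state-dependent
 * branches mentioned above. */
static int export_hevc_sei(AVCodecContext *avctx, HEVCSEI *sei, AVFrame *frame)
{
    return ff_set_side_data(avctx, sei, NULL, frame);
}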
* Re: [FFmpeg-devel] [PATCH 4/6] avcodec/hevcdec: make set_side_data() accessible 2022-05-26 8:08 ` [FFmpeg-devel] [PATCH 4/6] avcodec/hevcdec: make set_side_data() accessible softworkz @ 2022-05-31 9:38 ` Xiang, Haihao 2022-05-31 16:03 ` Soft Works 2022-05-31 9:40 ` Xiang, Haihao 1 sibling, 1 reply; 65+ messages in thread From: Xiang, Haihao @ 2022-05-31 9:38 UTC (permalink / raw) To: ffmpeg-devel; +Cc: softworkz On Thu, 2022-05-26 at 08:08 +0000, softworkz wrote: > From: softworkz <softworkz@hotmail.com> > > Signed-off-by: softworkz <softworkz@hotmail.com> > --- > libavcodec/hevcdec.c | 117 +++++++++++++++++++++---------------------- > libavcodec/hevcdec.h | 2 + > 2 files changed, 60 insertions(+), 59 deletions(-) > > diff --git a/libavcodec/hevcdec.c b/libavcodec/hevcdec.c > index f782ea6394..ff22081b46 100644 > --- a/libavcodec/hevcdec.c > +++ b/libavcodec/hevcdec.c > @@ -2726,23 +2726,22 @@ error: > return res; > } > > -static int set_side_data(HEVCContext *s) > +int ff_set_side_data(AVCodecContext *logctx, HEVCSEI *sei, HEVCContext *s, > AVFrame *out) > { > - AVFrame *out = s->ref->frame; > - int ret; > + int ret = 0; > > - if (s->sei.frame_packing.present && > - s->sei.frame_packing.arrangement_type >= 3 && > - s->sei.frame_packing.arrangement_type <= 5 && > - s->sei.frame_packing.content_interpretation_type > 0 && > - s->sei.frame_packing.content_interpretation_type < 3) { > + if (sei->frame_packing.present && > + sei->frame_packing.arrangement_type >= 3 && > + sei->frame_packing.arrangement_type <= 5 && > + sei->frame_packing.content_interpretation_type > 0 && > + sei->frame_packing.content_interpretation_type < 3) { > AVStereo3D *stereo = av_stereo3d_create_side_data(out); > if (!stereo) > return AVERROR(ENOMEM); > > - switch (s->sei.frame_packing.arrangement_type) { > + switch (sei->frame_packing.arrangement_type) { > case 3: > - if (s->sei.frame_packing.quincunx_subsampling) > + if (sei->frame_packing.quincunx_subsampling) > stereo->type = AV_STEREO3D_SIDEBYSIDE_QUINCUNX; > else > stereo->type = AV_STEREO3D_SIDEBYSIDE; > @@ -2755,21 +2754,21 @@ static int set_side_data(HEVCContext *s) > break; > } > > - if (s->sei.frame_packing.content_interpretation_type == 2) > + if (sei->frame_packing.content_interpretation_type == 2) > stereo->flags = AV_STEREO3D_FLAG_INVERT; > > - if (s->sei.frame_packing.arrangement_type == 5) { > - if (s->sei.frame_packing.current_frame_is_frame0_flag) > + if (sei->frame_packing.arrangement_type == 5) { > + if (sei->frame_packing.current_frame_is_frame0_flag) > stereo->view = AV_STEREO3D_VIEW_LEFT; > else > stereo->view = AV_STEREO3D_VIEW_RIGHT; > } > } > > - if (s->sei.display_orientation.present && > - (s->sei.display_orientation.anticlockwise_rotation || > - s->sei.display_orientation.hflip || s- > >sei.display_orientation.vflip)) { > - double angle = s->sei.display_orientation.anticlockwise_rotation * > 360 / (double) (1 << 16); > + if (sei->display_orientation.present && > + (sei->display_orientation.anticlockwise_rotation || > + sei->display_orientation.hflip || sei->display_orientation.vflip)) { > + double angle = sei->display_orientation.anticlockwise_rotation * 360 > / (double) (1 << 16); > AVFrameSideData *rotation = av_frame_new_side_data(out, > AV_FRAME_DATA_DISP > LAYMATRIX, > sizeof(int32_t) * > 9); > @@ -2788,17 +2787,17 @@ static int set_side_data(HEVCContext *s) > * (1 - 2 * !!s->sei.display_orientation.vflip); > av_display_rotation_set((int32_t *)rotation->data, angle); > av_display_matrix_flip((int32_t *)rotation->data, > - 
s->sei.display_orientation.hflip, > - s->sei.display_orientation.vflip); > + sei->display_orientation.hflip, > + sei->display_orientation.vflip); > } > > // Decrement the mastering display flag when IRAP frame has > no_rasl_output_flag=1 > // so the side data persists for the entire coded video sequence. > - if (s->sei.mastering_display.present > 0 && > + if (s && sei->mastering_display.present > 0 && So sei must be non-NULL but s may be NULL in the new function, right ? The caller should ensure s is non-NULL for the original function. It would be better to have some comment if s may be NULL now. Thanks Haihao > IS_IRAP(s) && s->no_rasl_output_flag) { > - s->sei.mastering_display.present--; > + sei->mastering_display.present--; > } > - if (s->sei.mastering_display.present) { > + if (sei->mastering_display.present) { > // HEVC uses a g,b,r ordering, which we convert to a more natural > r,g,b > const int mapping[3] = {2, 0, 1}; > const int chroma_den = 50000; > @@ -2811,25 +2810,25 @@ static int set_side_data(HEVCContext *s) > > for (i = 0; i < 3; i++) { > const int j = mapping[i]; > - metadata->display_primaries[i][0].num = s- > >sei.mastering_display.display_primaries[j][0]; > + metadata->display_primaries[i][0].num = sei- > >mastering_display.display_primaries[j][0]; > metadata->display_primaries[i][0].den = chroma_den; > - metadata->display_primaries[i][1].num = s- > >sei.mastering_display.display_primaries[j][1]; > + metadata->display_primaries[i][1].num = sei- > >mastering_display.display_primaries[j][1]; > metadata->display_primaries[i][1].den = chroma_den; > } > - metadata->white_point[0].num = s- > >sei.mastering_display.white_point[0]; > + metadata->white_point[0].num = sei->mastering_display.white_point[0]; > metadata->white_point[0].den = chroma_den; > - metadata->white_point[1].num = s- > >sei.mastering_display.white_point[1]; > + metadata->white_point[1].num = sei->mastering_display.white_point[1]; > metadata->white_point[1].den = chroma_den; > > - metadata->max_luminance.num = s->sei.mastering_display.max_luminance; > + metadata->max_luminance.num = sei->mastering_display.max_luminance; > metadata->max_luminance.den = luma_den; > - metadata->min_luminance.num = s->sei.mastering_display.min_luminance; > + metadata->min_luminance.num = sei->mastering_display.min_luminance; > metadata->min_luminance.den = luma_den; > metadata->has_luminance = 1; > metadata->has_primaries = 1; > > - av_log(s->avctx, AV_LOG_DEBUG, "Mastering Display Metadata:\n"); > - av_log(s->avctx, AV_LOG_DEBUG, > + av_log(logctx, AV_LOG_DEBUG, "Mastering Display Metadata:\n"); > + av_log(logctx, AV_LOG_DEBUG, > "r(%5.4f,%5.4f) g(%5.4f,%5.4f) b(%5.4f %5.4f) wp(%5.4f, > %5.4f)\n", > av_q2d(metadata->display_primaries[0][0]), > av_q2d(metadata->display_primaries[0][1]), > @@ -2838,31 +2837,31 @@ static int set_side_data(HEVCContext *s) > av_q2d(metadata->display_primaries[2][0]), > av_q2d(metadata->display_primaries[2][1]), > av_q2d(metadata->white_point[0]), av_q2d(metadata- > >white_point[1])); > - av_log(s->avctx, AV_LOG_DEBUG, > + av_log(logctx, AV_LOG_DEBUG, > "min_luminance=%f, max_luminance=%f\n", > av_q2d(metadata->min_luminance), av_q2d(metadata- > >max_luminance)); > } > // Decrement the mastering display flag when IRAP frame has > no_rasl_output_flag=1 > // so the side data persists for the entire coded video sequence. 
> - if (s->sei.content_light.present > 0 && > + if (s && sei->content_light.present > 0 && > IS_IRAP(s) && s->no_rasl_output_flag) { > - s->sei.content_light.present--; > + sei->content_light.present--; > } > - if (s->sei.content_light.present) { > + if (sei->content_light.present) { > AVContentLightMetadata *metadata = > av_content_light_metadata_create_side_data(out); > if (!metadata) > return AVERROR(ENOMEM); > - metadata->MaxCLL = s->sei.content_light.max_content_light_level; > - metadata->MaxFALL = s->sei.content_light.max_pic_average_light_level; > + metadata->MaxCLL = sei->content_light.max_content_light_level; > + metadata->MaxFALL = sei->content_light.max_pic_average_light_level; > > - av_log(s->avctx, AV_LOG_DEBUG, "Content Light Level Metadata:\n"); > - av_log(s->avctx, AV_LOG_DEBUG, "MaxCLL=%d, MaxFALL=%d\n", > + av_log(logctx, AV_LOG_DEBUG, "Content Light Level Metadata:\n"); > + av_log(logctx, AV_LOG_DEBUG, "MaxCLL=%d, MaxFALL=%d\n", > metadata->MaxCLL, metadata->MaxFALL); > } > > - if (s->sei.a53_caption.buf_ref) { > - HEVCSEIA53Caption *a53 = &s->sei.a53_caption; > + if (sei->a53_caption.buf_ref) { > + HEVCSEIA53Caption *a53 = &sei->a53_caption; > > AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, > AV_FRAME_DATA_A53_CC, a53->buf_ref); > if (!sd) > @@ -2870,8 +2869,8 @@ static int set_side_data(HEVCContext *s) > a53->buf_ref = NULL; > } > > - for (int i = 0; i < s->sei.unregistered.nb_buf_ref; i++) { > - HEVCSEIUnregistered *unreg = &s->sei.unregistered; > + for (int i = 0; i < sei->unregistered.nb_buf_ref; i++) { > + HEVCSEIUnregistered *unreg = &sei->unregistered; > > if (unreg->buf_ref[i]) { > AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, > @@ -2882,9 +2881,9 @@ static int set_side_data(HEVCContext *s) > unreg->buf_ref[i] = NULL; > } > } > - s->sei.unregistered.nb_buf_ref = 0; > + sei->unregistered.nb_buf_ref = 0; > > - if (s->sei.timecode.present) { > + if (s && sei->timecode.present) { > uint32_t *tc_sd; > char tcbuf[AV_TIMECODE_STR_SIZE]; > AVFrameSideData *tcside = av_frame_new_side_data(out, > AV_FRAME_DATA_S12M_TIMECODE, > @@ -2893,25 +2892,25 @@ static int set_side_data(HEVCContext *s) > return AVERROR(ENOMEM); > > tc_sd = (uint32_t*)tcside->data; > - tc_sd[0] = s->sei.timecode.num_clock_ts; > + tc_sd[0] = sei->timecode.num_clock_ts; > > for (int i = 0; i < tc_sd[0]; i++) { > - int drop = s->sei.timecode.cnt_dropped_flag[i]; > - int hh = s->sei.timecode.hours_value[i]; > - int mm = s->sei.timecode.minutes_value[i]; > - int ss = s->sei.timecode.seconds_value[i]; > - int ff = s->sei.timecode.n_frames[i]; > + int drop = sei->timecode.cnt_dropped_flag[i]; > + int hh = sei->timecode.hours_value[i]; > + int mm = sei->timecode.minutes_value[i]; > + int ss = sei->timecode.seconds_value[i]; > + int ff = sei->timecode.n_frames[i]; > > tc_sd[i + 1] = av_timecode_get_smpte(s->avctx->framerate, drop, > hh, mm, ss, ff); > av_timecode_make_smpte_tc_string2(tcbuf, s->avctx->framerate, > tc_sd[i + 1], 0, 0); > av_dict_set(&out->metadata, "timecode", tcbuf, 0); > } > > - s->sei.timecode.num_clock_ts = 0; > + sei->timecode.num_clock_ts = 0; > } > > - if (s->sei.film_grain_characteristics.present) { > - HEVCSEIFilmGrainCharacteristics *fgc = &s- > >sei.film_grain_characteristics; > + if (s && sei->film_grain_characteristics.present) { > + HEVCSEIFilmGrainCharacteristics *fgc = &sei- > >film_grain_characteristics; > AVFilmGrainParams *fgp = av_film_grain_params_create_side_data(out); > if (!fgp) > return AVERROR(ENOMEM); > @@ -2965,8 +2964,8 @@ static int 
set_side_data(HEVCContext *s) > fgc->present = fgc->persistence_flag; > } > > - if (s->sei.dynamic_hdr_plus.info) { > - AVBufferRef *info_ref = av_buffer_ref(s->sei.dynamic_hdr_plus.info); > + if (sei->dynamic_hdr_plus.info) { > + AVBufferRef *info_ref = av_buffer_ref(sei->dynamic_hdr_plus.info); > if (!info_ref) > return AVERROR(ENOMEM); > > @@ -2976,7 +2975,7 @@ static int set_side_data(HEVCContext *s) > } > } > > - if (s->rpu_buf) { > + if (s && s->rpu_buf) { > AVFrameSideData *rpu = av_frame_new_side_data_from_buf(out, > AV_FRAME_DATA_DOVI_RPU_BUFFER, s->rpu_buf); > if (!rpu) > return AVERROR(ENOMEM); > @@ -2984,10 +2983,10 @@ static int set_side_data(HEVCContext *s) > s->rpu_buf = NULL; > } > > - if ((ret = ff_dovi_attach_side_data(&s->dovi_ctx, out)) < 0) > + if (s && (ret = ff_dovi_attach_side_data(&s->dovi_ctx, out)) < 0) > return ret; > > - if (s->sei.dynamic_hdr_vivid.info) { > + if (s && s->sei.dynamic_hdr_vivid.info) { > AVBufferRef *info_ref = av_buffer_ref(s->sei.dynamic_hdr_vivid.info); > if (!info_ref) > return AVERROR(ENOMEM); > @@ -3046,7 +3045,7 @@ static int hevc_frame_start(HEVCContext *s) > goto fail; > } > > - ret = set_side_data(s); > + ret = ff_set_side_data(s->avctx, &s->sei, s, s->ref->frame); > if (ret < 0) > goto fail; > > diff --git a/libavcodec/hevcdec.h b/libavcodec/hevcdec.h > index de861b88b3..d4001466f6 100644 > --- a/libavcodec/hevcdec.h > +++ b/libavcodec/hevcdec.h > @@ -690,6 +690,8 @@ void ff_hevc_hls_residual_coding(HEVCContext *s, int x0, > int y0, > > void ff_hevc_hls_mvd_coding(HEVCContext *s, int x0, int y0, int > log2_cb_size); > > +int ff_set_side_data(AVCodecContext *logctx, HEVCSEI *sei, HEVCContext *s, > AVFrame *out); > + > extern const uint8_t ff_hevc_qpel_extra_before[4]; > extern const uint8_t ff_hevc_qpel_extra_after[4]; > extern const uint8_t ff_hevc_qpel_extra[4]; _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
* Re: [FFmpeg-devel] [PATCH 4/6] avcodec/hevcdec: make set_side_data() accessible 2022-05-31 9:38 ` Xiang, Haihao @ 2022-05-31 16:03 ` Soft Works 0 siblings, 0 replies; 65+ messages in thread From: Soft Works @ 2022-05-31 16:03 UTC (permalink / raw) To: Xiang, Haihao, ffmpeg-devel > -----Original Message----- > From: Xiang, Haihao <haihao.xiang@intel.com> > Sent: Tuesday, May 31, 2022 11:38 AM > To: ffmpeg-devel@ffmpeg.org > Cc: softworkz@hotmail.com > Subject: Re: [FFmpeg-devel] [PATCH 4/6] avcodec/hevcdec: make > set_side_data() accessible > > On Thu, 2022-05-26 at 08:08 +0000, softworkz wrote: > > From: softworkz <softworkz@hotmail.com> > > > > Signed-off-by: softworkz <softworkz@hotmail.com> > > --- > > libavcodec/hevcdec.c | 117 +++++++++++++++++++++------------------ > ---- > > libavcodec/hevcdec.h | 2 + > > 2 files changed, 60 insertions(+), 59 deletions(-) > > > > diff --git a/libavcodec/hevcdec.c b/libavcodec/hevcdec.c > > index f782ea6394..ff22081b46 100644 > > --- a/libavcodec/hevcdec.c > > +++ b/libavcodec/hevcdec.c > > @@ -2726,23 +2726,22 @@ error: > > return res; > > } > > > > -static int set_side_data(HEVCContext *s) > > +int ff_set_side_data(AVCodecContext *logctx, HEVCSEI *sei, > HEVCContext *s, > > AVFrame *out) > > { > > - AVFrame *out = s->ref->frame; > > - int ret; > > + int ret = 0; > > > > - if (s->sei.frame_packing.present && > > - s->sei.frame_packing.arrangement_type >= 3 && > > - s->sei.frame_packing.arrangement_type <= 5 && > > - s->sei.frame_packing.content_interpretation_type > 0 && > > - s->sei.frame_packing.content_interpretation_type < 3) { > > + if (sei->frame_packing.present && > > + sei->frame_packing.arrangement_type >= 3 && > > + sei->frame_packing.arrangement_type <= 5 && > > + sei->frame_packing.content_interpretation_type > 0 && > > + sei->frame_packing.content_interpretation_type < 3) { > > AVStereo3D *stereo = av_stereo3d_create_side_data(out); > > if (!stereo) > > return AVERROR(ENOMEM); > > > > - switch (s->sei.frame_packing.arrangement_type) { > > + switch (sei->frame_packing.arrangement_type) { > > case 3: > > - if (s->sei.frame_packing.quincunx_subsampling) > > + if (sei->frame_packing.quincunx_subsampling) > > stereo->type = AV_STEREO3D_SIDEBYSIDE_QUINCUNX; > > else > > stereo->type = AV_STEREO3D_SIDEBYSIDE; > > @@ -2755,21 +2754,21 @@ static int set_side_data(HEVCContext *s) > > break; > > } > > > > - if (s->sei.frame_packing.content_interpretation_type == 2) > > + if (sei->frame_packing.content_interpretation_type == 2) > > stereo->flags = AV_STEREO3D_FLAG_INVERT; > > > > - if (s->sei.frame_packing.arrangement_type == 5) { > > - if (s->sei.frame_packing.current_frame_is_frame0_flag) > > + if (sei->frame_packing.arrangement_type == 5) { > > + if (sei->frame_packing.current_frame_is_frame0_flag) > > stereo->view = AV_STEREO3D_VIEW_LEFT; > > else > > stereo->view = AV_STEREO3D_VIEW_RIGHT; > > } > > } > > > > - if (s->sei.display_orientation.present && > > - (s->sei.display_orientation.anticlockwise_rotation || > > - s->sei.display_orientation.hflip || s- > > >sei.display_orientation.vflip)) { > > - double angle = s- > >sei.display_orientation.anticlockwise_rotation * > > 360 / (double) (1 << 16); > > + if (sei->display_orientation.present && > > + (sei->display_orientation.anticlockwise_rotation || > > + sei->display_orientation.hflip || sei- > >display_orientation.vflip)) { > > + double angle = sei- > >display_orientation.anticlockwise_rotation * 360 > > / (double) (1 << 16); > > AVFrameSideData *rotation = av_frame_new_side_data(out, > > > 
AV_FRAME_DATA_DISP > > LAYMATRIX, > > > sizeof(int32_t) * > > 9); > > @@ -2788,17 +2787,17 @@ static int set_side_data(HEVCContext *s) > > * (1 - 2 * !!s- > >sei.display_orientation.vflip); > > av_display_rotation_set((int32_t *)rotation->data, angle); > > av_display_matrix_flip((int32_t *)rotation->data, > > - s->sei.display_orientation.hflip, > > - s->sei.display_orientation.vflip); > > + sei->display_orientation.hflip, > > + sei->display_orientation.vflip); > > } > > > > // Decrement the mastering display flag when IRAP frame has > > no_rasl_output_flag=1 > > // so the side data persists for the entire coded video > sequence. > > - if (s->sei.mastering_display.present > 0 && > > + if (s && sei->mastering_display.present > 0 && > > So sei must be non-NULL but s may be NULL in the new function, right > ? Right. > The > caller should ensure s is non-NULL for the original function. It > would be better > to have some comment if s may be NULL now. Ok, I will add that comment. In a future update, I would like to further extend this to allow parsing of other SEI which currently still requires a non-null HEVCContext (e.g. DOVI parsing), but I figured that this would go too far for doing this as part of this patchset and I'm not sure about the best way to do that, means either: - Mocking an HEVCContext or - Changing the individual functions to not require an HEVCContext So, for now, I'll add the comment. > > -static int set_side_data(HEVCContext *s) > > +int ff_set_side_data(AVCodecContext *logctx, HEVCSEI *sei, > HEVCContext *s, > > How about to use ff_hevc as prefix ? This function is used for hevc > only Yes makes sense, will change. Thanks for reviewing, softworkz _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
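For the record, the two follow-ups agreed on here land in v2 (see the range-diff in the v2 cover letter at the end of this thread): the function gains an ff_hevc_ prefix and a documentation comment spelling out which arguments may be NULL. Paraphrased, the revised declaration looks roughly like this:

    /**
     * Set the decoded SEI side data on an AVFrame.
     * @param logctx  context for logging
     * @param sei     HEVCSEI decoding context, must not be NULL
     * @param s       HEVCContext, may be NULL (e.g. for SEI-only callers
     *                such as the QSV decoders)
     * @return < 0 on error, 0 otherwise
     */
    int ff_hevc_set_side_data(AVCodecContext *logctx, HEVCSEI *sei,
                              HEVCContext *s, AVFrame *out);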
* Re: [FFmpeg-devel] [PATCH 4/6] avcodec/hevcdec: make set_side_data() accessible 2022-05-26 8:08 ` [FFmpeg-devel] [PATCH 4/6] avcodec/hevcdec: make set_side_data() accessible softworkz 2022-05-31 9:38 ` Xiang, Haihao @ 2022-05-31 9:40 ` Xiang, Haihao 1 sibling, 0 replies; 65+ messages in thread From: Xiang, Haihao @ 2022-05-31 9:40 UTC (permalink / raw) To: ffmpeg-devel; +Cc: softworkz On Thu, 2022-05-26 at 08:08 +0000, softworkz wrote: > From: softworkz <softworkz@hotmail.com> > > Signed-off-by: softworkz <softworkz@hotmail.com> > --- > libavcodec/hevcdec.c | 117 +++++++++++++++++++++---------------------- > libavcodec/hevcdec.h | 2 + > 2 files changed, 60 insertions(+), 59 deletions(-) > > diff --git a/libavcodec/hevcdec.c b/libavcodec/hevcdec.c > index f782ea6394..ff22081b46 100644 > --- a/libavcodec/hevcdec.c > +++ b/libavcodec/hevcdec.c > @@ -2726,23 +2726,22 @@ error: > return res; > } > > -static int set_side_data(HEVCContext *s) > +int ff_set_side_data(AVCodecContext *logctx, HEVCSEI *sei, HEVCContext *s, How about to use ff_hevc as prefix ? This function is used for hevc only Thanks Haihao > AVFrame *out) > { > - AVFrame *out = s->ref->frame; > - int ret; > + int ret = 0; > > - if (s->sei.frame_packing.present && > - s->sei.frame_packing.arrangement_type >= 3 && > - s->sei.frame_packing.arrangement_type <= 5 && > - s->sei.frame_packing.content_interpretation_type > 0 && > - s->sei.frame_packing.content_interpretation_type < 3) { > + if (sei->frame_packing.present && > + sei->frame_packing.arrangement_type >= 3 && > + sei->frame_packing.arrangement_type <= 5 && > + sei->frame_packing.content_interpretation_type > 0 && > + sei->frame_packing.content_interpretation_type < 3) { > AVStereo3D *stereo = av_stereo3d_create_side_data(out); > if (!stereo) > return AVERROR(ENOMEM); > > - switch (s->sei.frame_packing.arrangement_type) { > + switch (sei->frame_packing.arrangement_type) { > case 3: > - if (s->sei.frame_packing.quincunx_subsampling) > + if (sei->frame_packing.quincunx_subsampling) > stereo->type = AV_STEREO3D_SIDEBYSIDE_QUINCUNX; > else > stereo->type = AV_STEREO3D_SIDEBYSIDE; > @@ -2755,21 +2754,21 @@ static int set_side_data(HEVCContext *s) > break; > } > > - if (s->sei.frame_packing.content_interpretation_type == 2) > + if (sei->frame_packing.content_interpretation_type == 2) > stereo->flags = AV_STEREO3D_FLAG_INVERT; > > - if (s->sei.frame_packing.arrangement_type == 5) { > - if (s->sei.frame_packing.current_frame_is_frame0_flag) > + if (sei->frame_packing.arrangement_type == 5) { > + if (sei->frame_packing.current_frame_is_frame0_flag) > stereo->view = AV_STEREO3D_VIEW_LEFT; > else > stereo->view = AV_STEREO3D_VIEW_RIGHT; > } > } > > - if (s->sei.display_orientation.present && > - (s->sei.display_orientation.anticlockwise_rotation || > - s->sei.display_orientation.hflip || s- > >sei.display_orientation.vflip)) { > - double angle = s->sei.display_orientation.anticlockwise_rotation * > 360 / (double) (1 << 16); > + if (sei->display_orientation.present && > + (sei->display_orientation.anticlockwise_rotation || > + sei->display_orientation.hflip || sei->display_orientation.vflip)) { > + double angle = sei->display_orientation.anticlockwise_rotation * 360 > / (double) (1 << 16); > AVFrameSideData *rotation = av_frame_new_side_data(out, > AV_FRAME_DATA_DISP > LAYMATRIX, > sizeof(int32_t) * > 9); > @@ -2788,17 +2787,17 @@ static int set_side_data(HEVCContext *s) > * (1 - 2 * !!s->sei.display_orientation.vflip); > av_display_rotation_set((int32_t *)rotation->data, angle); > 
av_display_matrix_flip((int32_t *)rotation->data, > - s->sei.display_orientation.hflip, > - s->sei.display_orientation.vflip); > + sei->display_orientation.hflip, > + sei->display_orientation.vflip); > } > > // Decrement the mastering display flag when IRAP frame has > no_rasl_output_flag=1 > // so the side data persists for the entire coded video sequence. > - if (s->sei.mastering_display.present > 0 && > + if (s && sei->mastering_display.present > 0 && > IS_IRAP(s) && s->no_rasl_output_flag) { > - s->sei.mastering_display.present--; > + sei->mastering_display.present--; > } > - if (s->sei.mastering_display.present) { > + if (sei->mastering_display.present) { > // HEVC uses a g,b,r ordering, which we convert to a more natural > r,g,b > const int mapping[3] = {2, 0, 1}; > const int chroma_den = 50000; > @@ -2811,25 +2810,25 @@ static int set_side_data(HEVCContext *s) > > for (i = 0; i < 3; i++) { > const int j = mapping[i]; > - metadata->display_primaries[i][0].num = s- > >sei.mastering_display.display_primaries[j][0]; > + metadata->display_primaries[i][0].num = sei- > >mastering_display.display_primaries[j][0]; > metadata->display_primaries[i][0].den = chroma_den; > - metadata->display_primaries[i][1].num = s- > >sei.mastering_display.display_primaries[j][1]; > + metadata->display_primaries[i][1].num = sei- > >mastering_display.display_primaries[j][1]; > metadata->display_primaries[i][1].den = chroma_den; > } > - metadata->white_point[0].num = s- > >sei.mastering_display.white_point[0]; > + metadata->white_point[0].num = sei->mastering_display.white_point[0]; > metadata->white_point[0].den = chroma_den; > - metadata->white_point[1].num = s- > >sei.mastering_display.white_point[1]; > + metadata->white_point[1].num = sei->mastering_display.white_point[1]; > metadata->white_point[1].den = chroma_den; > > - metadata->max_luminance.num = s->sei.mastering_display.max_luminance; > + metadata->max_luminance.num = sei->mastering_display.max_luminance; > metadata->max_luminance.den = luma_den; > - metadata->min_luminance.num = s->sei.mastering_display.min_luminance; > + metadata->min_luminance.num = sei->mastering_display.min_luminance; > metadata->min_luminance.den = luma_den; > metadata->has_luminance = 1; > metadata->has_primaries = 1; > > - av_log(s->avctx, AV_LOG_DEBUG, "Mastering Display Metadata:\n"); > - av_log(s->avctx, AV_LOG_DEBUG, > + av_log(logctx, AV_LOG_DEBUG, "Mastering Display Metadata:\n"); > + av_log(logctx, AV_LOG_DEBUG, > "r(%5.4f,%5.4f) g(%5.4f,%5.4f) b(%5.4f %5.4f) wp(%5.4f, > %5.4f)\n", > av_q2d(metadata->display_primaries[0][0]), > av_q2d(metadata->display_primaries[0][1]), > @@ -2838,31 +2837,31 @@ static int set_side_data(HEVCContext *s) > av_q2d(metadata->display_primaries[2][0]), > av_q2d(metadata->display_primaries[2][1]), > av_q2d(metadata->white_point[0]), av_q2d(metadata- > >white_point[1])); > - av_log(s->avctx, AV_LOG_DEBUG, > + av_log(logctx, AV_LOG_DEBUG, > "min_luminance=%f, max_luminance=%f\n", > av_q2d(metadata->min_luminance), av_q2d(metadata- > >max_luminance)); > } > // Decrement the mastering display flag when IRAP frame has > no_rasl_output_flag=1 > // so the side data persists for the entire coded video sequence. 
> - if (s->sei.content_light.present > 0 && > + if (s && sei->content_light.present > 0 && > IS_IRAP(s) && s->no_rasl_output_flag) { > - s->sei.content_light.present--; > + sei->content_light.present--; > } > - if (s->sei.content_light.present) { > + if (sei->content_light.present) { > AVContentLightMetadata *metadata = > av_content_light_metadata_create_side_data(out); > if (!metadata) > return AVERROR(ENOMEM); > - metadata->MaxCLL = s->sei.content_light.max_content_light_level; > - metadata->MaxFALL = s->sei.content_light.max_pic_average_light_level; > + metadata->MaxCLL = sei->content_light.max_content_light_level; > + metadata->MaxFALL = sei->content_light.max_pic_average_light_level; > > - av_log(s->avctx, AV_LOG_DEBUG, "Content Light Level Metadata:\n"); > - av_log(s->avctx, AV_LOG_DEBUG, "MaxCLL=%d, MaxFALL=%d\n", > + av_log(logctx, AV_LOG_DEBUG, "Content Light Level Metadata:\n"); > + av_log(logctx, AV_LOG_DEBUG, "MaxCLL=%d, MaxFALL=%d\n", > metadata->MaxCLL, metadata->MaxFALL); > } > > - if (s->sei.a53_caption.buf_ref) { > - HEVCSEIA53Caption *a53 = &s->sei.a53_caption; > + if (sei->a53_caption.buf_ref) { > + HEVCSEIA53Caption *a53 = &sei->a53_caption; > > AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, > AV_FRAME_DATA_A53_CC, a53->buf_ref); > if (!sd) > @@ -2870,8 +2869,8 @@ static int set_side_data(HEVCContext *s) > a53->buf_ref = NULL; > } > > - for (int i = 0; i < s->sei.unregistered.nb_buf_ref; i++) { > - HEVCSEIUnregistered *unreg = &s->sei.unregistered; > + for (int i = 0; i < sei->unregistered.nb_buf_ref; i++) { > + HEVCSEIUnregistered *unreg = &sei->unregistered; > > if (unreg->buf_ref[i]) { > AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, > @@ -2882,9 +2881,9 @@ static int set_side_data(HEVCContext *s) > unreg->buf_ref[i] = NULL; > } > } > - s->sei.unregistered.nb_buf_ref = 0; > + sei->unregistered.nb_buf_ref = 0; > > - if (s->sei.timecode.present) { > + if (s && sei->timecode.present) { > uint32_t *tc_sd; > char tcbuf[AV_TIMECODE_STR_SIZE]; > AVFrameSideData *tcside = av_frame_new_side_data(out, > AV_FRAME_DATA_S12M_TIMECODE, > @@ -2893,25 +2892,25 @@ static int set_side_data(HEVCContext *s) > return AVERROR(ENOMEM); > > tc_sd = (uint32_t*)tcside->data; > - tc_sd[0] = s->sei.timecode.num_clock_ts; > + tc_sd[0] = sei->timecode.num_clock_ts; > > for (int i = 0; i < tc_sd[0]; i++) { > - int drop = s->sei.timecode.cnt_dropped_flag[i]; > - int hh = s->sei.timecode.hours_value[i]; > - int mm = s->sei.timecode.minutes_value[i]; > - int ss = s->sei.timecode.seconds_value[i]; > - int ff = s->sei.timecode.n_frames[i]; > + int drop = sei->timecode.cnt_dropped_flag[i]; > + int hh = sei->timecode.hours_value[i]; > + int mm = sei->timecode.minutes_value[i]; > + int ss = sei->timecode.seconds_value[i]; > + int ff = sei->timecode.n_frames[i]; > > tc_sd[i + 1] = av_timecode_get_smpte(s->avctx->framerate, drop, > hh, mm, ss, ff); > av_timecode_make_smpte_tc_string2(tcbuf, s->avctx->framerate, > tc_sd[i + 1], 0, 0); > av_dict_set(&out->metadata, "timecode", tcbuf, 0); > } > > - s->sei.timecode.num_clock_ts = 0; > + sei->timecode.num_clock_ts = 0; > } > > - if (s->sei.film_grain_characteristics.present) { > - HEVCSEIFilmGrainCharacteristics *fgc = &s- > >sei.film_grain_characteristics; > + if (s && sei->film_grain_characteristics.present) { > + HEVCSEIFilmGrainCharacteristics *fgc = &sei- > >film_grain_characteristics; > AVFilmGrainParams *fgp = av_film_grain_params_create_side_data(out); > if (!fgp) > return AVERROR(ENOMEM); > @@ -2965,8 +2964,8 @@ static int 
set_side_data(HEVCContext *s) > fgc->present = fgc->persistence_flag; > } > > - if (s->sei.dynamic_hdr_plus.info) { > - AVBufferRef *info_ref = av_buffer_ref(s->sei.dynamic_hdr_plus.info); > + if (sei->dynamic_hdr_plus.info) { > + AVBufferRef *info_ref = av_buffer_ref(sei->dynamic_hdr_plus.info); > if (!info_ref) > return AVERROR(ENOMEM); > > @@ -2976,7 +2975,7 @@ static int set_side_data(HEVCContext *s) > } > } > > - if (s->rpu_buf) { > + if (s && s->rpu_buf) { > AVFrameSideData *rpu = av_frame_new_side_data_from_buf(out, > AV_FRAME_DATA_DOVI_RPU_BUFFER, s->rpu_buf); > if (!rpu) > return AVERROR(ENOMEM); > @@ -2984,10 +2983,10 @@ static int set_side_data(HEVCContext *s) > s->rpu_buf = NULL; > } > > - if ((ret = ff_dovi_attach_side_data(&s->dovi_ctx, out)) < 0) > + if (s && (ret = ff_dovi_attach_side_data(&s->dovi_ctx, out)) < 0) > return ret; > > - if (s->sei.dynamic_hdr_vivid.info) { > + if (s && s->sei.dynamic_hdr_vivid.info) { > AVBufferRef *info_ref = av_buffer_ref(s->sei.dynamic_hdr_vivid.info); > if (!info_ref) > return AVERROR(ENOMEM); > @@ -3046,7 +3045,7 @@ static int hevc_frame_start(HEVCContext *s) > goto fail; > } > > - ret = set_side_data(s); > + ret = ff_set_side_data(s->avctx, &s->sei, s, s->ref->frame); > if (ret < 0) > goto fail; > > diff --git a/libavcodec/hevcdec.h b/libavcodec/hevcdec.h > index de861b88b3..d4001466f6 100644 > --- a/libavcodec/hevcdec.h > +++ b/libavcodec/hevcdec.h > @@ -690,6 +690,8 @@ void ff_hevc_hls_residual_coding(HEVCContext *s, int x0, > int y0, > > void ff_hevc_hls_mvd_coding(HEVCContext *s, int x0, int y0, int > log2_cb_size); > > +int ff_set_side_data(AVCodecContext *logctx, HEVCSEI *sei, HEVCContext *s, > AVFrame *out); > + > extern const uint8_t ff_hevc_qpel_extra_before[4]; > extern const uint8_t ff_hevc_qpel_extra_after[4]; > extern const uint8_t ff_hevc_qpel_extra[4]; _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
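One detail worth isolating from the quoted hunk is the S12M timecode export, since it only uses public libavutil helpers and can be exercised on its own. A self-contained sketch with made-up values (the real code takes num_clock_ts, hh/mm/ss/ff and the drop flag from the parsed SEI and the frame rate from the codec context):

    #include <libavutil/frame.h>
    #include <libavutil/timecode.h>

    /* pack a single SMPTE 12M timecode (01:02:03:04) as frame side data */
    static int attach_s12m_timecode(AVFrame *out, AVRational framerate)
    {
        uint32_t *tc_sd;
        char tcbuf[AV_TIMECODE_STR_SIZE];
        AVFrameSideData *tcside =
            av_frame_new_side_data(out, AV_FRAME_DATA_S12M_TIMECODE,
                                   sizeof(uint32_t) * 4);
        if (!tcside)
            return AVERROR(ENOMEM);

        tc_sd    = (uint32_t *)tcside->data;
        tc_sd[0] = 1;                                  /* one clock timestamp */
        tc_sd[1] = av_timecode_get_smpte(framerate, 0, /* no drop-frame       */
                                         1, 2, 3, 4);  /* hh, mm, ss, ff      */

        /* mirror the patch: also expose it as frame metadata */
        av_timecode_make_smpte_tc_string2(tcbuf, framerate, tc_sd[1], 0, 0);
        return av_dict_set(&out->metadata, "timecode", tcbuf, 0);
    }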
* [FFmpeg-devel] [PATCH 5/6] avcodec/h264dec: make h264_export_frame_props() accessible 2022-05-26 8:08 [FFmpeg-devel] [PATCH 0/6] Implement SEI parsing for QSV decoders ffmpegagent ` (3 preceding siblings ...) 2022-05-26 8:08 ` [FFmpeg-devel] [PATCH 4/6] avcodec/hevcdec: make set_side_data() accessible softworkz @ 2022-05-26 8:08 ` softworkz 2022-05-26 8:08 ` [FFmpeg-devel] [PATCH 6/6] avcodec/qsvdec: Implement SEI parsing for QSV decoders softworkz ` (2 subsequent siblings) 7 siblings, 0 replies; 65+ messages in thread From: softworkz @ 2022-05-26 8:08 UTC (permalink / raw) To: ffmpeg-devel; +Cc: softworkz From: softworkz <softworkz@hotmail.com> Signed-off-by: softworkz <softworkz@hotmail.com> --- libavcodec/h264_slice.c | 98 +++++++++++++++++++++-------------------- libavcodec/h264dec.h | 2 + 2 files changed, 52 insertions(+), 48 deletions(-) diff --git a/libavcodec/h264_slice.c b/libavcodec/h264_slice.c index d56722a5c2..f2a4c1c657 100644 --- a/libavcodec/h264_slice.c +++ b/libavcodec/h264_slice.c @@ -1157,11 +1157,10 @@ static int h264_init_ps(H264Context *h, const H264SliceContext *sl, int first_sl return 0; } -static int h264_export_frame_props(H264Context *h) +int ff_h264_export_frame_props(AVCodecContext *logctx, H264SEIContext *sei, H264Context *h, AVFrame *out) { - const SPS *sps = h->ps.sps; - H264Picture *cur = h->cur_pic_ptr; - AVFrame *out = cur->f; + const SPS *sps = h ? h->ps.sps : NULL; + H264Picture *cur = h ? h->cur_pic_ptr : NULL; out->interlaced_frame = 0; out->repeat_pict = 0; @@ -1169,19 +1168,19 @@ static int h264_export_frame_props(H264Context *h) /* Signal interlacing information externally. */ /* Prioritize picture timing SEI information over used * decoding process if it exists. */ - if (h->sei.picture_timing.present) { - int ret = ff_h264_sei_process_picture_timing(&h->sei.picture_timing, sps, - h->avctx); + if (sps && sei->picture_timing.present) { + int ret = ff_h264_sei_process_picture_timing(&sei->picture_timing, sps, + logctx); if (ret < 0) { - av_log(h->avctx, AV_LOG_ERROR, "Error processing a picture timing SEI\n"); - if (h->avctx->err_recognition & AV_EF_EXPLODE) + av_log(logctx, AV_LOG_ERROR, "Error processing a picture timing SEI\n"); + if (logctx->err_recognition & AV_EF_EXPLODE) return ret; - h->sei.picture_timing.present = 0; + sei->picture_timing.present = 0; } } - if (sps->pic_struct_present_flag && h->sei.picture_timing.present) { - H264SEIPictureTiming *pt = &h->sei.picture_timing; + if (h && sps && sps->pic_struct_present_flag && sei->picture_timing.present) { + H264SEIPictureTiming *pt = &sei->picture_timing; switch (pt->pic_struct) { case H264_SEI_PIC_STRUCT_FRAME: break; @@ -1215,21 +1214,23 @@ static int h264_export_frame_props(H264Context *h) if ((pt->ct_type & 3) && pt->pic_struct <= H264_SEI_PIC_STRUCT_BOTTOM_TOP) out->interlaced_frame = (pt->ct_type & (1 << 1)) != 0; - } else { + } else if (h) { /* Derive interlacing flag from used decoding process. */ out->interlaced_frame = FIELD_OR_MBAFF_PICTURE(h); } - h->prev_interlaced_frame = out->interlaced_frame; - if (cur->field_poc[0] != cur->field_poc[1]) { + if (h) + h->prev_interlaced_frame = out->interlaced_frame; + + if (sps && cur->field_poc[0] != cur->field_poc[1]) { /* Derive top_field_first from field pocs. */ out->top_field_first = cur->field_poc[0] < cur->field_poc[1]; - } else { - if (sps->pic_struct_present_flag && h->sei.picture_timing.present) { + } else if (sps) { + if (sps->pic_struct_present_flag && sei->picture_timing.present) { /* Use picture timing SEI information. 
Even if it is a * information of a past frame, better than nothing. */ - if (h->sei.picture_timing.pic_struct == H264_SEI_PIC_STRUCT_TOP_BOTTOM || - h->sei.picture_timing.pic_struct == H264_SEI_PIC_STRUCT_TOP_BOTTOM_TOP) + if (sei->picture_timing.pic_struct == H264_SEI_PIC_STRUCT_TOP_BOTTOM || + sei->picture_timing.pic_struct == H264_SEI_PIC_STRUCT_TOP_BOTTOM_TOP) out->top_field_first = 1; else out->top_field_first = 0; @@ -1243,11 +1244,11 @@ static int h264_export_frame_props(H264Context *h) } } - if (h->sei.frame_packing.present && - h->sei.frame_packing.arrangement_type <= 6 && - h->sei.frame_packing.content_interpretation_type > 0 && - h->sei.frame_packing.content_interpretation_type < 3) { - H264SEIFramePacking *fp = &h->sei.frame_packing; + if (sei->frame_packing.present && + sei->frame_packing.arrangement_type <= 6 && + sei->frame_packing.content_interpretation_type > 0 && + sei->frame_packing.content_interpretation_type < 3) { + H264SEIFramePacking *fp = &sei->frame_packing; AVStereo3D *stereo = av_stereo3d_create_side_data(out); if (stereo) { switch (fp->arrangement_type) { @@ -1289,11 +1290,11 @@ static int h264_export_frame_props(H264Context *h) } } - if (h->sei.display_orientation.present && - (h->sei.display_orientation.anticlockwise_rotation || - h->sei.display_orientation.hflip || - h->sei.display_orientation.vflip)) { - H264SEIDisplayOrientation *o = &h->sei.display_orientation; + if (sei->display_orientation.present && + (sei->display_orientation.anticlockwise_rotation || + sei->display_orientation.hflip || + sei->display_orientation.vflip)) { + H264SEIDisplayOrientation *o = &sei->display_orientation; double angle = o->anticlockwise_rotation * 360 / (double) (1 << 16); AVFrameSideData *rotation = av_frame_new_side_data(out, AV_FRAME_DATA_DISPLAYMATRIX, @@ -1314,29 +1315,30 @@ static int h264_export_frame_props(H264Context *h) } } - if (h->sei.afd.present) { + if (sei->afd.present) { AVFrameSideData *sd = av_frame_new_side_data(out, AV_FRAME_DATA_AFD, sizeof(uint8_t)); if (sd) { - *sd->data = h->sei.afd.active_format_description; - h->sei.afd.present = 0; + *sd->data = sei->afd.active_format_description; + sei->afd.present = 0; } } - if (h->sei.a53_caption.buf_ref) { - H264SEIA53Caption *a53 = &h->sei.a53_caption; + if (sei->a53_caption.buf_ref) { + H264SEIA53Caption *a53 = &sei->a53_caption; AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_A53_CC, a53->buf_ref); if (!sd) av_buffer_unref(&a53->buf_ref); a53->buf_ref = NULL; - h->avctx->properties |= FF_CODEC_PROPERTY_CLOSED_CAPTIONS; + if (h) + h->avctx->properties |= FF_CODEC_PROPERTY_CLOSED_CAPTIONS; } - for (int i = 0; i < h->sei.unregistered.nb_buf_ref; i++) { - H264SEIUnregistered *unreg = &h->sei.unregistered; + for (int i = 0; i < sei->unregistered.nb_buf_ref; i++) { + H264SEIUnregistered *unreg = &sei->unregistered; if (unreg->buf_ref[i]) { AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, @@ -1347,10 +1349,10 @@ static int h264_export_frame_props(H264Context *h) unreg->buf_ref[i] = NULL; } } - h->sei.unregistered.nb_buf_ref = 0; + sei->unregistered.nb_buf_ref = 0; - if (h->sei.film_grain_characteristics.present) { - H264SEIFilmGrainCharacteristics *fgc = &h->sei.film_grain_characteristics; + if (h && sps && sei->film_grain_characteristics.present) { + H264SEIFilmGrainCharacteristics *fgc = &sei->film_grain_characteristics; AVFilmGrainParams *fgp = av_film_grain_params_create_side_data(out); if (!fgp) return AVERROR(ENOMEM); @@ -1404,7 +1406,7 @@ static int 
h264_export_frame_props(H264Context *h) h->avctx->properties |= FF_CODEC_PROPERTY_FILM_GRAIN; } - if (h->sei.picture_timing.timecode_cnt > 0) { + if (h && sei->picture_timing.timecode_cnt > 0) { uint32_t *tc_sd; char tcbuf[AV_TIMECODE_STR_SIZE]; @@ -1415,14 +1417,14 @@ static int h264_export_frame_props(H264Context *h) return AVERROR(ENOMEM); tc_sd = (uint32_t*)tcside->data; - tc_sd[0] = h->sei.picture_timing.timecode_cnt; + tc_sd[0] = sei->picture_timing.timecode_cnt; for (int i = 0; i < tc_sd[0]; i++) { - int drop = h->sei.picture_timing.timecode[i].dropframe; - int hh = h->sei.picture_timing.timecode[i].hours; - int mm = h->sei.picture_timing.timecode[i].minutes; - int ss = h->sei.picture_timing.timecode[i].seconds; - int ff = h->sei.picture_timing.timecode[i].frame; + int drop = sei->picture_timing.timecode[i].dropframe; + int hh = sei->picture_timing.timecode[i].hours; + int mm = sei->picture_timing.timecode[i].minutes; + int ss = sei->picture_timing.timecode[i].seconds; + int ff = sei->picture_timing.timecode[i].frame; tc_sd[i + 1] = av_timecode_get_smpte(h->avctx->framerate, drop, hh, mm, ss, ff); av_timecode_make_smpte_tc_string2(tcbuf, h->avctx->framerate, tc_sd[i + 1], 0, 0); @@ -1817,7 +1819,7 @@ static int h264_field_start(H264Context *h, const H264SliceContext *sl, * field coded frames, since some SEI information is present for each field * and is merged by the SEI parsing code. */ if (!FIELD_PICTURE(h) || !h->first_field || h->missing_fields > 1) { - ret = h264_export_frame_props(h); + ret = ff_h264_export_frame_props(h->avctx, &h->sei, h, h->cur_pic_ptr->f); if (ret < 0) return ret; diff --git a/libavcodec/h264dec.h b/libavcodec/h264dec.h index 9a1ec1bace..38930da4ca 100644 --- a/libavcodec/h264dec.h +++ b/libavcodec/h264dec.h @@ -808,4 +808,6 @@ void ff_h264_free_tables(H264Context *h); void ff_h264_set_erpic(ERPicture *dst, H264Picture *src); +int ff_h264_export_frame_props(AVCodecContext *logctx, H264SEIContext *sei, H264Context *h, AVFrame *out); + #endif /* AVCODEC_H264DEC_H */ -- ffmpeg-codebot _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
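The H.264 side mirrors the HEVC change: with the H264Context argument allowed to be NULL, the frame-prop export can run on an H264SEIContext that was filled from externally obtained SEI payloads. Condensed from the parse_sei_h264() added in the next patch (names simplified and error handling trimmed, so treat it as a sketch rather than the actual implementation):

    #include "libavcodec/get_bits.h"
    #include "libavcodec/h264dec.h"
    #include "libavcodec/h264_sei.h"

    /* hypothetical helper: SEI-only export path for H.264 */
    static int export_h264_sei(AVCodecContext *avctx, const uint8_t *payload,
                               int payload_bits, AVFrame *out)
    {
        H264SEIContext sei = { 0 };
        GetBitContext  gb;

        if (init_get_bits(&gb, payload, payload_bits) < 0)
            return AVERROR_INVALIDDATA;

        /* no param sets and no decoder state: SPS-dependent SEI such as
         * picture timing is stored but not interpreted further */
        ff_h264_sei_decode(&sei, &gb, NULL, avctx);

        /* h == NULL: the interlacing/timecode handling that needs decoder
         * state is skipped, while A/53 captions, AFD, frame packing and
         * display orientation are exported as frame side data */
        return ff_h264_export_frame_props(avctx, &sei, NULL, out);
    }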
* [FFmpeg-devel] [PATCH 6/6] avcodec/qsvdec: Implement SEI parsing for QSV decoders 2022-05-26 8:08 [FFmpeg-devel] [PATCH 0/6] Implement SEI parsing for QSV decoders ffmpegagent ` (4 preceding siblings ...) 2022-05-26 8:08 ` [FFmpeg-devel] [PATCH 5/6] avcodec/h264dec: make h264_export_frame_props() accessible softworkz @ 2022-05-26 8:08 ` softworkz 2022-06-01 5:15 ` Xiang, Haihao 2022-06-01 9:06 ` [FFmpeg-devel] [PATCH v2 0/6] " ffmpegagent 2022-06-01 19:15 ` [FFmpeg-devel] [PATCH 0/6] " Kieran Kunhya 7 siblings, 1 reply; 65+ messages in thread From: softworkz @ 2022-05-26 8:08 UTC (permalink / raw) To: ffmpeg-devel; +Cc: softworkz From: softworkz <softworkz@hotmail.com> Signed-off-by: softworkz <softworkz@hotmail.com> --- libavcodec/qsvdec.c | 233 ++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 233 insertions(+) diff --git a/libavcodec/qsvdec.c b/libavcodec/qsvdec.c index 5fc5bed4c8..7d6a491aa0 100644 --- a/libavcodec/qsvdec.c +++ b/libavcodec/qsvdec.c @@ -49,6 +49,12 @@ #include "hwconfig.h" #include "qsv.h" #include "qsv_internal.h" +#include "h264dec.h" +#include "h264_sei.h" +#include "hevcdec.h" +#include "hevc_ps.h" +#include "hevc_sei.h" +#include "mpeg12.h" static const AVRational mfx_tb = { 1, 90000 }; @@ -60,6 +66,8 @@ static const AVRational mfx_tb = { 1, 90000 }; AV_NOPTS_VALUE : pts_tb.num ? \ av_rescale_q(mfx_pts, mfx_tb, pts_tb) : mfx_pts) +#define PAYLOAD_BUFFER_SIZE 65535 + typedef struct QSVAsyncFrame { mfxSyncPoint *sync; QSVFrame *frame; @@ -101,6 +109,9 @@ typedef struct QSVContext { mfxExtBuffer **ext_buffers; int nb_ext_buffers; + + mfxU8 payload_buffer[PAYLOAD_BUFFER_SIZE]; + Mpeg1Context mpeg_ctx; } QSVContext; static const AVCodecHWConfigInternal *const qsv_hw_configs[] = { @@ -599,6 +610,208 @@ static int qsv_export_film_grain(AVCodecContext *avctx, mfxExtAV1FilmGrainParam return 0; } #endif +static int find_start_offset(mfxU8 data[4]) +{ + if (data[0] == 0 && data[1] == 0 && data[2] == 1) + return 3; + + if (data[0] == 0 && data[1] == 0 && data[2] == 0 && data[3] == 1) + return 4; + + return 0; +} + +static int parse_sei_h264(AVCodecContext* avctx, QSVContext* q, AVFrame* out) +{ + H264SEIContext sei = { 0 }; + GetBitContext gb = { 0 }; + mfxPayload payload = { 0, .Data = &q->payload_buffer[0], .BufSize = sizeof(q->payload_buffer) }; + mfxU64 ts; + int ret; + + while (1) { + int start; + memset(payload.Data, 0, payload.BufSize); + + ret = MFXVideoDECODE_GetPayload(q->session, &ts, &payload); + if (ret != MFX_ERR_NONE) { + av_log(avctx, AV_LOG_WARNING, "error getting SEI payload: %d \n", ret); + return ret; + } + + if (payload.NumBit == 0 || payload.NumBit >= payload.BufSize * 8) { + break; + } + + start = find_start_offset(payload.Data); + + switch (payload.Type) { + case SEI_TYPE_BUFFERING_PERIOD: + case SEI_TYPE_PIC_TIMING: + continue; + } + + if (init_get_bits(&gb, &payload.Data[start], payload.NumBit - start * 8) < 0) + av_log(avctx, AV_LOG_ERROR, "Error initializing bitstream reader"); + else + ret = ff_h264_sei_decode(&sei, &gb, NULL, avctx); + + if (ret < 0) + av_log(avctx, AV_LOG_WARNING, "Error parsing SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); + else + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d Numbits %d\n", payload.Type, payload.NumBit); + } + + if (out) + return ff_h264_export_frame_props(avctx, &sei, NULL, out); + + return 0; +} + +static int parse_sei_hevc(AVCodecContext* avctx, QSVContext* q, QSVFrame* out) +{ + HEVCSEI sei = { 0 }; + HEVCParamSets ps = { 0 }; + GetBitContext gb = { 0 }; + 
mfxPayload payload = { 0, .Data = &q->payload_buffer[0], .BufSize = sizeof(q->payload_buffer) }; + mfxFrameSurface1 *surface = &out->surface; + mfxU64 ts; + int ret, has_logged = 0; + + while (1) { + int start; + memset(payload.Data, 0, payload.BufSize); + + ret = MFXVideoDECODE_GetPayload(q->session, &ts, &payload); + if (ret != MFX_ERR_NONE) { + av_log(avctx, AV_LOG_WARNING, "error getting SEI payload: %d \n", ret); + return 0; + } + + if (payload.NumBit == 0 || payload.NumBit >= payload.BufSize * 8) { + break; + } + + if (!has_logged) { + has_logged = 1; + av_log(avctx, AV_LOG_VERBOSE, "-----------------------------------------\n"); + av_log(avctx, AV_LOG_VERBOSE, "Start reading SEI - payload timestamp: %llu - surface timestamp: %llu\n", ts, surface->Data.TimeStamp); + } + + if (ts != surface->Data.TimeStamp) { + av_log(avctx, AV_LOG_WARNING, "GetPayload timestamp (%llu) does not match surface timestamp: (%llu)\n", ts, surface->Data.TimeStamp); + } + + start = find_start_offset(payload.Data); + + av_log(avctx, AV_LOG_VERBOSE, "parsing SEI type: %3d Numbits %3d Start: %d\n", payload.Type, payload.NumBit, start); + + switch (payload.Type) { + case SEI_TYPE_BUFFERING_PERIOD: + case SEI_TYPE_PIC_TIMING: + continue; + case SEI_TYPE_MASTERING_DISPLAY_COLOUR_VOLUME: + // There seems to be a bug in MSDK + payload.NumBit -= 8; + + break; + case SEI_TYPE_CONTENT_LIGHT_LEVEL_INFO: + // There seems to be a bug in MSDK + payload.NumBit = 48; + + break; + case SEI_TYPE_USER_DATA_REGISTERED_ITU_T_T35: + // There seems to be a bug in MSDK + if (payload.NumBit == 552) + payload.NumBit = 528; + break; + } + + if (init_get_bits(&gb, &payload.Data[start], payload.NumBit - start * 8) < 0) + av_log(avctx, AV_LOG_ERROR, "Error initializing bitstream reader"); + else + ret = ff_hevc_decode_nal_sei(&gb, avctx, &sei, &ps, HEVC_NAL_SEI_PREFIX); + + if (ret < 0) + av_log(avctx, AV_LOG_WARNING, "error parsing SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); + else + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d Numbits %d\n", payload.Type, payload.NumBit); + } + + if (has_logged) { + av_log(avctx, AV_LOG_VERBOSE, "End reading SEI\n"); + } + + if (out && out->frame) + return ff_set_side_data(avctx, &sei, NULL, out->frame); + + return 0; +} + +static int parse_sei_mpeg12(AVCodecContext* avctx, QSVContext* q, AVFrame* out) +{ + Mpeg1Context *mpeg_ctx = &q->mpeg_ctx; + mfxPayload payload = { 0, .Data = &q->payload_buffer[0], .BufSize = sizeof(q->payload_buffer) }; + mfxU64 ts; + int ret; + + while (1) { + int start; + + memset(payload.Data, 0, payload.BufSize); + ret = MFXVideoDECODE_GetPayload(q->session, &ts, &payload); + if (ret != MFX_ERR_NONE) { + av_log(avctx, AV_LOG_WARNING, "error getting SEI payload: %d \n", ret); + return ret; + } + + if (payload.NumBit == 0 || payload.NumBit >= payload.BufSize * 8) { + break; + } + + start = find_start_offset(payload.Data); + + start++; + + ff_mpeg_decode_user_data(avctx, mpeg_ctx, &payload.Data[start], (int)((payload.NumBit + 7) / 8) - start); + + if (ret < 0) + av_log(avctx, AV_LOG_WARNING, "error parsing SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); + else + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d Numbits %d start %d -> %.s\n", payload.Type, payload.NumBit, start, (char *)(&payload.Data[start])); + } + + if (!out) + return 0; + + if (mpeg_ctx->a53_buf_ref) { + + AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_A53_CC, mpeg_ctx->a53_buf_ref); + if (!sd) + 
av_buffer_unref(&mpeg_ctx->a53_buf_ref); + mpeg_ctx->a53_buf_ref = NULL; + } + + if (mpeg_ctx->has_stereo3d) { + AVStereo3D *stereo = av_stereo3d_create_side_data(out); + if (!stereo) + return AVERROR(ENOMEM); + + *stereo = mpeg_ctx->stereo3d; + mpeg_ctx->has_stereo3d = 0; + } + + if (mpeg_ctx->has_afd) { + AVFrameSideData *sd = av_frame_new_side_data(out, AV_FRAME_DATA_AFD, 1); + if (!sd) + return AVERROR(ENOMEM); + + *sd->data = mpeg_ctx->afd; + mpeg_ctx->has_afd = 0; + } + + return 0; +} static int qsv_decode(AVCodecContext *avctx, QSVContext *q, AVFrame *frame, int *got_frame, @@ -636,6 +849,8 @@ static int qsv_decode(AVCodecContext *avctx, QSVContext *q, insurf, &outsurf, sync); if (ret == MFX_WRN_DEVICE_BUSY) av_usleep(500); + else if (avctx->codec_id == AV_CODEC_ID_MPEG1VIDEO || avctx->codec_id == AV_CODEC_ID_MPEG2VIDEO) + parse_sei_mpeg12(avctx, q, NULL); } while (ret == MFX_WRN_DEVICE_BUSY || ret == MFX_ERR_MORE_SURFACE); @@ -677,6 +892,24 @@ static int qsv_decode(AVCodecContext *avctx, QSVContext *q, return AVERROR_BUG; } + switch (avctx->codec_id) { + case AV_CODEC_ID_MPEG1VIDEO: + case AV_CODEC_ID_MPEG2VIDEO: + ret = parse_sei_mpeg12(avctx, q, out_frame->frame); + break; + case AV_CODEC_ID_H264: + ret = parse_sei_h264(avctx, q, out_frame->frame); + break; + case AV_CODEC_ID_HEVC: + ret = parse_sei_hevc(avctx, q, out_frame); + break; + default: + ret = 0; + } + + if (ret < 0) + av_log(avctx, AV_LOG_ERROR, "Error parsing SEI data\n"); + out_frame->queued += 1; aframe = (QSVAsyncFrame){ sync, out_frame }; -- ffmpeg-codebot _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
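Nothing QSV-specific is needed on the consumer side: the parsed SEI surfaces as ordinary AVFrame side data through the public API, exactly as with the native decoders. A minimal sketch of an application checking two of the exported types on frames obtained from avcodec_receive_frame() (illustration only; it covers just A/53 captions and content light level):

    #include <stdio.h>
    #include <libavutil/frame.h>
    #include <libavutil/mastering_display_metadata.h>

    static void inspect_side_data(const AVFrame *frame)
    {
        const AVFrameSideData *sd;

        sd = av_frame_get_side_data(frame, AV_FRAME_DATA_A53_CC);
        if (sd)
            printf("A/53 closed captions: %zu bytes\n", (size_t)sd->size);

        sd = av_frame_get_side_data(frame, AV_FRAME_DATA_CONTENT_LIGHT_LEVEL);
        if (sd) {
            const AVContentLightMetadata *cll =
                (const AVContentLightMetadata *)sd->data;
            printf("MaxCLL=%u MaxFALL=%u\n", cll->MaxCLL, cll->MaxFALL);
        }
    }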
* Re: [FFmpeg-devel] [PATCH 6/6] avcodec/qsvdec: Implement SEI parsing for QSV decoders 2022-05-26 8:08 ` [FFmpeg-devel] [PATCH 6/6] avcodec/qsvdec: Implement SEI parsing for QSV decoders softworkz @ 2022-06-01 5:15 ` Xiang, Haihao 2022-06-01 8:51 ` Soft Works 0 siblings, 1 reply; 65+ messages in thread From: Xiang, Haihao @ 2022-06-01 5:15 UTC (permalink / raw) To: ffmpeg-devel; +Cc: softworkz On Thu, 2022-05-26 at 08:08 +0000, softworkz wrote: > From: softworkz <softworkz@hotmail.com> > > Signed-off-by: softworkz <softworkz@hotmail.com> > --- > libavcodec/qsvdec.c | 233 ++++++++++++++++++++++++++++++++++++++++++++ > 1 file changed, 233 insertions(+) > > diff --git a/libavcodec/qsvdec.c b/libavcodec/qsvdec.c > index 5fc5bed4c8..7d6a491aa0 100644 > --- a/libavcodec/qsvdec.c > +++ b/libavcodec/qsvdec.c > @@ -49,6 +49,12 @@ > #include "hwconfig.h" > #include "qsv.h" > #include "qsv_internal.h" > +#include "h264dec.h" > +#include "h264_sei.h" > +#include "hevcdec.h" > +#include "hevc_ps.h" > +#include "hevc_sei.h" > +#include "mpeg12.h" > > static const AVRational mfx_tb = { 1, 90000 }; > > @@ -60,6 +66,8 @@ static const AVRational mfx_tb = { 1, 90000 }; > AV_NOPTS_VALUE : pts_tb.num ? \ > av_rescale_q(mfx_pts, mfx_tb, pts_tb) : mfx_pts) > > +#define PAYLOAD_BUFFER_SIZE 65535 > + > typedef struct QSVAsyncFrame { > mfxSyncPoint *sync; > QSVFrame *frame; > @@ -101,6 +109,9 @@ typedef struct QSVContext { > > mfxExtBuffer **ext_buffers; > int nb_ext_buffers; > + > + mfxU8 payload_buffer[PAYLOAD_BUFFER_SIZE]; > + Mpeg1Context mpeg_ctx; I wonder why only mpeg1 context is required in QSVContext. > } QSVContext; > > static const AVCodecHWConfigInternal *const qsv_hw_configs[] = { > @@ -599,6 +610,208 @@ static int qsv_export_film_grain(AVCodecContext *avctx, > mfxExtAV1FilmGrainParam > return 0; > } > #endif > +static int find_start_offset(mfxU8 data[4]) > +{ > + if (data[0] == 0 && data[1] == 0 && data[2] == 1) > + return 3; > + > + if (data[0] == 0 && data[1] == 0 && data[2] == 0 && data[3] == 1) > + return 4; > + > + return 0; > +} > + > +static int parse_sei_h264(AVCodecContext* avctx, QSVContext* q, AVFrame* out) > +{ > + H264SEIContext sei = { 0 }; > + GetBitContext gb = { 0 }; > + mfxPayload payload = { 0, .Data = &q->payload_buffer[0], .BufSize = > sizeof(q->payload_buffer) }; > + mfxU64 ts; > + int ret; > + > + while (1) { > + int start; > + memset(payload.Data, 0, payload.BufSize); > + > + ret = MFXVideoDECODE_GetPayload(q->session, &ts, &payload); > + if (ret != MFX_ERR_NONE) { > + av_log(avctx, AV_LOG_WARNING, "error getting SEI payload: %d \n", > ret); Better to use AV_LOG_ERROR to match the description. > + return ret; > + } > + > + if (payload.NumBit == 0 || payload.NumBit >= payload.BufSize * 8) { > + break; > + } > + > + start = find_start_offset(payload.Data); > + > + switch (payload.Type) { > + case SEI_TYPE_BUFFERING_PERIOD: > + case SEI_TYPE_PIC_TIMING: > + continue; > + } > + > + if (init_get_bits(&gb, &payload.Data[start], payload.NumBit - start * > 8) < 0) > + av_log(avctx, AV_LOG_ERROR, "Error initializing bitstream > reader"); > + else I think it should return an error when failed to initialize GetBitContext, and the `else` statement is not needed. > + ret = ff_h264_sei_decode(&sei, &gb, NULL, avctx); > + > + if (ret < 0) > + av_log(avctx, AV_LOG_WARNING, "Error parsing SEI type: > %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); Better to use AV_LOG_ERROR and return an error. Otherwise please use 'warning' instead of 'error' in the description. 
> + else > + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d Numbits %d\n", > payload.Type, payload.NumBit); > + } > + > + if (out) > + return ff_h264_export_frame_props(avctx, &sei, NULL, out); > + > + return 0; > +} > + > +static int parse_sei_hevc(AVCodecContext* avctx, QSVContext* q, QSVFrame* > out) > +{ > + HEVCSEI sei = { 0 }; > + HEVCParamSets ps = { 0 }; > + GetBitContext gb = { 0 }; > + mfxPayload payload = { 0, .Data = &q->payload_buffer[0], .BufSize = > sizeof(q->payload_buffer) }; > + mfxFrameSurface1 *surface = &out->surface; > + mfxU64 ts; > + int ret, has_logged = 0; > + > + while (1) { > + int start; > + memset(payload.Data, 0, payload.BufSize); > + > + ret = MFXVideoDECODE_GetPayload(q->session, &ts, &payload); > + if (ret != MFX_ERR_NONE) { > + av_log(avctx, AV_LOG_WARNING, "error getting SEI payload: %d \n", > ret); > + return 0; It returns an error in parse_sei_h264() when MFXVideoDECODE_GetPayload fails to get the payload. Please make the behavior consistent across the codecs. (I'm fine to return 0 instead of an error to ignore errors in SEI decoding. ) > + } > + > + if (payload.NumBit == 0 || payload.NumBit >= payload.BufSize * 8) { > + break; > + } > + > + if (!has_logged) { > + has_logged = 1; > + av_log(avctx, AV_LOG_VERBOSE, "-------------------------------- > ---------\n"); > + av_log(avctx, AV_LOG_VERBOSE, "Start reading SEI - payload > timestamp: %llu - surface timestamp: %llu\n", ts, surface->Data.TimeStamp); > + } > + > + if (ts != surface->Data.TimeStamp) { > + av_log(avctx, AV_LOG_WARNING, "GetPayload timestamp (%llu) does > not match surface timestamp: (%llu)\n", ts, surface->Data.TimeStamp); > + } > + > + start = find_start_offset(payload.Data); > + > + av_log(avctx, AV_LOG_VERBOSE, "parsing SEI type: %3d Numbits > %3d Start: %d\n", payload.Type, payload.NumBit, start); > + > + switch (payload.Type) { > + case SEI_TYPE_BUFFERING_PERIOD: > + case SEI_TYPE_PIC_TIMING: > + continue; > + case SEI_TYPE_MASTERING_DISPLAY_COLOUR_VOLUME: > + // There seems to be a bug in MSDK > + payload.NumBit -= 8; > + > + break; > + case SEI_TYPE_CONTENT_LIGHT_LEVEL_INFO: > + // There seems to be a bug in MSDK > + payload.NumBit = 48; > + > + break; > + case SEI_TYPE_USER_DATA_REGISTERED_ITU_T_T35: > + // There seems to be a bug in MSDK > + if (payload.NumBit == 552) > + payload.NumBit = 528; > + break; > + } > + > + if (init_get_bits(&gb, &payload.Data[start], payload.NumBit - start * > 8) < 0) > + av_log(avctx, AV_LOG_ERROR, "Error initializing bitstream > reader"); > + else I think it should return an error when failed to initialize GetBitContext and the `else` statement is not needed. 
> + ret = ff_hevc_decode_nal_sei(&gb, avctx, &sei, &ps, > HEVC_NAL_SEI_PREFIX); > + > + if (ret < 0) > + av_log(avctx, AV_LOG_WARNING, "error parsing SEI type: > %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); > + else > + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d Numbits %d\n", > payload.Type, payload.NumBit); > + } > + > + if (has_logged) { > + av_log(avctx, AV_LOG_VERBOSE, "End reading SEI\n"); > + } > + > + if (out && out->frame) > + return ff_set_side_data(avctx, &sei, NULL, out->frame); > + > + return 0; > +} > + > +static int parse_sei_mpeg12(AVCodecContext* avctx, QSVContext* q, AVFrame* > out) > +{ > + Mpeg1Context *mpeg_ctx = &q->mpeg_ctx; > + mfxPayload payload = { 0, .Data = &q->payload_buffer[0], .BufSize = > sizeof(q->payload_buffer) }; > + mfxU64 ts; > + int ret; > + > + while (1) { > + int start; > + > + memset(payload.Data, 0, payload.BufSize); > + ret = MFXVideoDECODE_GetPayload(q->session, &ts, &payload); > + if (ret != MFX_ERR_NONE) { > + av_log(avctx, AV_LOG_WARNING, "error getting SEI payload: %d \n", > ret); WARNING or ERROR ? > + return ret; > + } > + > + if (payload.NumBit == 0 || payload.NumBit >= payload.BufSize * 8) { > + break; > + } > + > + start = find_start_offset(payload.Data); > + > + start++; > + > + ff_mpeg_decode_user_data(avctx, mpeg_ctx, &payload.Data[start], > (int)((payload.NumBit + 7) / 8) - start); > + > + if (ret < 0) Here ret is always MFX_ERR_NONE > + av_log(avctx, AV_LOG_WARNING, "error parsing SEI type: > %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); > + else > + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d Numbits %d > start %d -> %.s\n", payload.Type, payload.NumBit, start, (char > *)(&payload.Data[start])); > + } > + > + if (!out) > + return 0; > + > + if (mpeg_ctx->a53_buf_ref) { > + > + AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, > AV_FRAME_DATA_A53_CC, mpeg_ctx->a53_buf_ref); > + if (!sd) > + av_buffer_unref(&mpeg_ctx->a53_buf_ref); > + mpeg_ctx->a53_buf_ref = NULL; > + } > + > + if (mpeg_ctx->has_stereo3d) { > + AVStereo3D *stereo = av_stereo3d_create_side_data(out); > + if (!stereo) > + return AVERROR(ENOMEM); > + > + *stereo = mpeg_ctx->stereo3d; > + mpeg_ctx->has_stereo3d = 0; > + } > + > + if (mpeg_ctx->has_afd) { > + AVFrameSideData *sd = av_frame_new_side_data(out, AV_FRAME_DATA_AFD, > 1); > + if (!sd) > + return AVERROR(ENOMEM); > + > + *sd->data = mpeg_ctx->afd; > + mpeg_ctx->has_afd = 0; > + } > + > + return 0; > +} > > static int qsv_decode(AVCodecContext *avctx, QSVContext *q, > AVFrame *frame, int *got_frame, > @@ -636,6 +849,8 @@ static int qsv_decode(AVCodecContext *avctx, QSVContext > *q, > insurf, &outsurf, sync); > if (ret == MFX_WRN_DEVICE_BUSY) > av_usleep(500); > + else if (avctx->codec_id == AV_CODEC_ID_MPEG1VIDEO || avctx->codec_id > == AV_CODEC_ID_MPEG2VIDEO) We didn't add qsv decoder for AV_CODEC_ID_MPEG1VIDEO in qsvdec.c > + parse_sei_mpeg12(avctx, q, NULL); > > } while (ret == MFX_WRN_DEVICE_BUSY || ret == MFX_ERR_MORE_SURFACE); > > @@ -677,6 +892,24 @@ static int qsv_decode(AVCodecContext *avctx, QSVContext > *q, > return AVERROR_BUG; > } > > + switch (avctx->codec_id) { > + case AV_CODEC_ID_MPEG1VIDEO: > + case AV_CODEC_ID_MPEG2VIDEO: No support for AV_CODEC_ID_MPEG1VIDEO. 
Thanks Haihao > + ret = parse_sei_mpeg12(avctx, q, out_frame->frame); > + break; > + case AV_CODEC_ID_H264: > + ret = parse_sei_h264(avctx, q, out_frame->frame); > + break; > + case AV_CODEC_ID_HEVC: > + ret = parse_sei_hevc(avctx, q, out_frame); > + break; > + default: > + ret = 0; > + } > + > + if (ret < 0) > + av_log(avctx, AV_LOG_ERROR, "Error parsing SEI data\n"); > + > out_frame->queued += 1; > > aframe = (QSVAsyncFrame){ sync, out_frame }; _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
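The review boils down to two patterns: fail consistently when MFXVideoDECODE_GetPayload or init_get_bits reports an error, and keep the log level and wording in sync with what the code actually does. One possible shape for the loop body of parse_sei_h264() along these lines (illustrative only; as the follow-up below shows, v2 keeps some of these as warnings):

    /* inside the while (1) loop of parse_sei_h264(); gb, sei, payload,
     * start and avctx as in the quoted patch */
    ret = init_get_bits(&gb, &payload.Data[start], payload.NumBit - start * 8);
    if (ret < 0) {
        av_log(avctx, AV_LOG_ERROR, "Error initializing bitstream reader\n");
        return ret;
    }

    ret = ff_h264_sei_decode(&sei, &gb, NULL, avctx);
    if (ret < 0) {
        av_log(avctx, AV_LOG_ERROR, "Failed to parse SEI type %d (%d bits): %d\n",
               payload.Type, payload.NumBit, ret);
        return ret;
    }

    av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d Numbits %d\n",
           payload.Type, payload.NumBit);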
* Re: [FFmpeg-devel] [PATCH 6/6] avcodec/qsvdec: Implement SEI parsing for QSV decoders 2022-06-01 5:15 ` Xiang, Haihao @ 2022-06-01 8:51 ` Soft Works 0 siblings, 0 replies; 65+ messages in thread From: Soft Works @ 2022-06-01 8:51 UTC (permalink / raw) To: Xiang, Haihao, ffmpeg-devel > -----Original Message----- > From: Xiang, Haihao <haihao.xiang@intel.com> > Sent: Wednesday, June 1, 2022 7:16 AM > To: ffmpeg-devel@ffmpeg.org > Cc: softworkz@hotmail.com > Subject: Re: [FFmpeg-devel] [PATCH 6/6] avcodec/qsvdec: Implement SEI > parsing for QSV decoders > > On Thu, 2022-05-26 at 08:08 +0000, softworkz wrote: > > From: softworkz <softworkz@hotmail.com> > > > > Signed-off-by: softworkz <softworkz@hotmail.com> > > --- > > libavcodec/qsvdec.c | 233 > ++++++++++++++++++++++++++++++++++++++++++++ > > 1 file changed, 233 insertions(+) > > > > diff --git a/libavcodec/qsvdec.c b/libavcodec/qsvdec.c > > index 5fc5bed4c8..7d6a491aa0 100644 > > --- a/libavcodec/qsvdec.c > > +++ b/libavcodec/qsvdec.c > > @@ -49,6 +49,12 @@ > > #include "hwconfig.h" > > #include "qsv.h" > > #include "qsv_internal.h" > > +#include "h264dec.h" > > +#include "h264_sei.h" > > +#include "hevcdec.h" > > +#include "hevc_ps.h" > > +#include "hevc_sei.h" > > +#include "mpeg12.h" > > > > static const AVRational mfx_tb = { 1, 90000 }; > > > > @@ -60,6 +66,8 @@ static const AVRational mfx_tb = { 1, 90000 }; > > AV_NOPTS_VALUE : pts_tb.num ? \ > > av_rescale_q(mfx_pts, mfx_tb, pts_tb) : mfx_pts) > > > > +#define PAYLOAD_BUFFER_SIZE 65535 > > + > > typedef struct QSVAsyncFrame { > > mfxSyncPoint *sync; > > QSVFrame *frame; > > @@ -101,6 +109,9 @@ typedef struct QSVContext { > > > > mfxExtBuffer **ext_buffers; > > int nb_ext_buffers; > > + > > + mfxU8 payload_buffer[PAYLOAD_BUFFER_SIZE]; > > + Mpeg1Context mpeg_ctx; > > I wonder why only mpeg1 context is required in QSVContext. This is due to different implementations of SEI (user data in case of mpeg) data decoding. For H264 SEI decoding, we need H264SEIContext only. see ff_h264_sei_decode() Also, we don't need to state information between calls, so we can have that as a stack variable in parse_sei_h264(). Similar applies to HEVC, where we use HEVCSEI and HEVCParamSets. For MPEG1/2 video, we need to preserve state between some user data parsing calls; for that reason it needs to be a member of QsvContext while the others don't. > > } QSVContext; > > > > static const AVCodecHWConfigInternal *const qsv_hw_configs[] = { > > @@ -599,6 +610,208 @@ static int > qsv_export_film_grain(AVCodecContext *avctx, > > mfxExtAV1FilmGrainParam > > return 0; > > } > > #endif > > +static int find_start_offset(mfxU8 data[4]) > > +{ > > + if (data[0] == 0 && data[1] == 0 && data[2] == 1) > > + return 3; > > + > > + if (data[0] == 0 && data[1] == 0 && data[2] == 0 && data[3] == > 1) > > + return 4; > > + > > + return 0; > > +} > > + > > +static int parse_sei_h264(AVCodecContext* avctx, QSVContext* q, > AVFrame* out) > > +{ > > + H264SEIContext sei = { 0 }; > > + GetBitContext gb = { 0 }; > > + mfxPayload payload = { 0, .Data = &q->payload_buffer[0], > .BufSize = > > sizeof(q->payload_buffer) }; > > + mfxU64 ts; > > + int ret; > > + > > + while (1) { > > + int start; > > + memset(payload.Data, 0, payload.BufSize); > > + > > + ret = MFXVideoDECODE_GetPayload(q->session, &ts, > &payload); > > + if (ret != MFX_ERR_NONE) { > > + av_log(avctx, AV_LOG_WARNING, "error getting SEI > payload: %d \n", > > ret); > > Better to use AV_LOG_ERROR to match the description. 
See answer below (sorry, worked on it from bottom to top ;-) > > > + return ret; > > + } > > + > > + if (payload.NumBit == 0 || payload.NumBit >= > payload.BufSize * 8) { > > + break; > > + } > > + > > + start = find_start_offset(payload.Data); > > + > > + switch (payload.Type) { > > + case SEI_TYPE_BUFFERING_PERIOD: > > + case SEI_TYPE_PIC_TIMING: > > + continue; > > + } > > + > > + if (init_get_bits(&gb, &payload.Data[start], > payload.NumBit - start * > > 8) < 0) > > + av_log(avctx, AV_LOG_ERROR, "Error initializing > bitstream > > reader"); > > + else > > I think it should return an error when failed to initialize > GetBitContext, and > the `else` statement is not needed. See answer below. > > > + ret = ff_h264_sei_decode(&sei, &gb, NULL, avctx); > > + > > + if (ret < 0) > > + av_log(avctx, AV_LOG_WARNING, "Error parsing SEI type: > > %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); > > Better to use AV_LOG_ERROR and return an error. Otherwise please use > 'warning' > instead of 'error' in the description. I changed it as "Failed to parse SEI type:..." and kept AV_LOG_WARNING > > + else > > + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d > Numbits %d\n", > > payload.Type, payload.NumBit); > > + } > > + > > + if (out) > > + return ff_h264_export_frame_props(avctx, &sei, NULL, out); > > + > > + return 0; > > +} > > + > > +static int parse_sei_hevc(AVCodecContext* avctx, QSVContext* q, > QSVFrame* > > out) > > +{ > > + HEVCSEI sei = { 0 }; > > + HEVCParamSets ps = { 0 }; > > + GetBitContext gb = { 0 }; > > + mfxPayload payload = { 0, .Data = &q->payload_buffer[0], > .BufSize = > > sizeof(q->payload_buffer) }; > > + mfxFrameSurface1 *surface = &out->surface; > > + mfxU64 ts; > > + int ret, has_logged = 0; > > + > > + while (1) { > > + int start; > > + memset(payload.Data, 0, payload.BufSize); > > + > > + ret = MFXVideoDECODE_GetPayload(q->session, &ts, > &payload); > > + if (ret != MFX_ERR_NONE) { > > + av_log(avctx, AV_LOG_WARNING, "error getting SEI > payload: %d \n", > > ret); > > + return 0; > > It returns an error in parse_sei_h264() when > MFXVideoDECODE_GetPayload fails to > get the payload. Please make the behavior consistent across the > codecs. > (I'm fine to return 0 instead of an error to ignore errors in SEI > decoding. ) I made it consistent and added a special message for the typical error (MFX_ERR_NOT_ENOUGH_BUFFER), even though it is unlikely to happen, now that I've chosen a fairly large fixed-size buffer which is part of the context. 
> > + } > > + > > + if (payload.NumBit == 0 || payload.NumBit >= > payload.BufSize * 8) { > > + break; > > + } > > + > > + if (!has_logged) { > > + has_logged = 1; > > + av_log(avctx, AV_LOG_VERBOSE, "----------------------- > --------- > > ---------\n"); > > + av_log(avctx, AV_LOG_VERBOSE, "Start reading SEI - > payload > > timestamp: %llu - surface timestamp: %llu\n", ts, surface- > >Data.TimeStamp); > > + } > > + > > + if (ts != surface->Data.TimeStamp) { > > + av_log(avctx, AV_LOG_WARNING, "GetPayload timestamp > (%llu) does > > not match surface timestamp: (%llu)\n", ts, surface- > >Data.TimeStamp); > > + } > > + > > + start = find_start_offset(payload.Data); > > + > > + av_log(avctx, AV_LOG_VERBOSE, "parsing SEI type: %3d > Numbits > > %3d Start: %d\n", payload.Type, payload.NumBit, start); > > + > > + switch (payload.Type) { > > + case SEI_TYPE_BUFFERING_PERIOD: > > + case SEI_TYPE_PIC_TIMING: > > + continue; > > + case SEI_TYPE_MASTERING_DISPLAY_COLOUR_VOLUME: > > + // There seems to be a bug in MSDK > > + payload.NumBit -= 8; > > + > > + break; > > + case SEI_TYPE_CONTENT_LIGHT_LEVEL_INFO: > > + // There seems to be a bug in MSDK > > + payload.NumBit = 48; > > + > > + break; > > + case SEI_TYPE_USER_DATA_REGISTERED_ITU_T_T35: > > + // There seems to be a bug in MSDK > > + if (payload.NumBit == 552) > > + payload.NumBit = 528; > > + break; > > + } > > + > > + if (init_get_bits(&gb, &payload.Data[start], > payload.NumBit - start * > > 8) < 0) > > + av_log(avctx, AV_LOG_ERROR, "Error initializing > bitstream > > reader"); > > + else > > I think it should return an error when failed to initialize > GetBitContext and > the `else` statement is not needed. The output from MSDK cannot be 100% trusted as we have seen in the recent discussion with the MSDK team. In case of "normal" bit streams we would of course need to error out, but in this implementation we always get a new buffer while looping, so when there's an error with one of the buffers, we can still continue the loop and get the next buffer. But I changed the code flow now to make it more clear and output just a single error message when init_get_bits() would fail. > > + ret = ff_hevc_decode_nal_sei(&gb, avctx, &sei, &ps, > > HEVC_NAL_SEI_PREFIX); > > + > > + if (ret < 0) > > + av_log(avctx, AV_LOG_WARNING, "error parsing SEI type: > > %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); > > + else > > + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d > Numbits %d\n", > > payload.Type, payload.NumBit); > > + } > > + > > + if (has_logged) { > > + av_log(avctx, AV_LOG_VERBOSE, "End reading SEI\n"); > > + } > > + > > + if (out && out->frame) > > + return ff_set_side_data(avctx, &sei, NULL, out->frame); > > + > > + return 0; > > +} > > + > > +static int parse_sei_mpeg12(AVCodecContext* avctx, QSVContext* q, > AVFrame* > > out) > > +{ > > + Mpeg1Context *mpeg_ctx = &q->mpeg_ctx; > > + mfxPayload payload = { 0, .Data = &q->payload_buffer[0], > .BufSize = > > sizeof(q->payload_buffer) }; > > + mfxU64 ts; > > + int ret; > > + > > + while (1) { > > + int start; > > + > > + memset(payload.Data, 0, payload.BufSize); > > + ret = MFXVideoDECODE_GetPayload(q->session, &ts, > &payload); > > + if (ret != MFX_ERR_NONE) { > > + av_log(avctx, AV_LOG_WARNING, "error getting SEI > payload: %d \n", > > ret); > > WARNING or ERROR ? Fixed. 
> > > + return ret; > > + } > > + > > + if (payload.NumBit == 0 || payload.NumBit >= > payload.BufSize * 8) { > > + break; > > + } > > + > > + start = find_start_offset(payload.Data); > > + > > + start++; > > + > > + ff_mpeg_decode_user_data(avctx, mpeg_ctx, > &payload.Data[start], > > (int)((payload.NumBit + 7) / 8) - start); > > + > > + if (ret < 0) > > Here ret is always MFX_ERR_NONE Right. Dropped. > > > + av_log(avctx, AV_LOG_WARNING, "error parsing SEI type: > > %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); > > + else > > + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d > Numbits %d > > start %d -> %.s\n", payload.Type, payload.NumBit, start, (char > > *)(&payload.Data[start])); > > + } > > + > > + if (!out) > > + return 0; > > + > > + if (mpeg_ctx->a53_buf_ref) { > > + > > + AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, > > AV_FRAME_DATA_A53_CC, mpeg_ctx->a53_buf_ref); > > + if (!sd) > > + av_buffer_unref(&mpeg_ctx->a53_buf_ref); > > + mpeg_ctx->a53_buf_ref = NULL; > > + } > > + > > + if (mpeg_ctx->has_stereo3d) { > > + AVStereo3D *stereo = av_stereo3d_create_side_data(out); > > + if (!stereo) > > + return AVERROR(ENOMEM); > > + > > + *stereo = mpeg_ctx->stereo3d; > > + mpeg_ctx->has_stereo3d = 0; > > + } > > + > > + if (mpeg_ctx->has_afd) { > > + AVFrameSideData *sd = av_frame_new_side_data(out, > AV_FRAME_DATA_AFD, > > 1); > > + if (!sd) > > + return AVERROR(ENOMEM); > > + > > + *sd->data = mpeg_ctx->afd; > > + mpeg_ctx->has_afd = 0; > > + } > > + > > + return 0; > > +} > > > > static int qsv_decode(AVCodecContext *avctx, QSVContext *q, > > AVFrame *frame, int *got_frame, > > @@ -636,6 +849,8 @@ static int qsv_decode(AVCodecContext *avctx, > QSVContext > > *q, > > insurf, &outsurf, > sync); > > if (ret == MFX_WRN_DEVICE_BUSY) > > av_usleep(500); > > + else if (avctx->codec_id == AV_CODEC_ID_MPEG1VIDEO || > avctx->codec_id > > == AV_CODEC_ID_MPEG2VIDEO) > > > We didn't add qsv decoder for AV_CODEC_ID_MPEG1VIDEO in qsvdec.c The decoder (mpeg2_qsv) could still be selected from the command line. > > + parse_sei_mpeg12(avctx, q, NULL); > > > > } while (ret == MFX_WRN_DEVICE_BUSY || ret == > MFX_ERR_MORE_SURFACE); > > > > @@ -677,6 +892,24 @@ static int qsv_decode(AVCodecContext *avctx, > QSVContext > > *q, > > return AVERROR_BUG; > > } > > > > + switch (avctx->codec_id) { > > + case AV_CODEC_ID_MPEG1VIDEO: > > + case AV_CODEC_ID_MPEG2VIDEO: > > No support for AV_CODEC_ID_MPEG1VIDEO. I wasn't sure about that due to the explicit mapping here: https://github.com/ffstaging/FFmpeg/blob/f55c91497d4d16d393ae9c034bd3032a683802ca/libavcodec/qsv.c#L50-L52 I thought that maybe the mpeg2 decoder would be able to handle mp1video as well, but I did a bit of research and it turns out that you're right and that it's not supported. Removed the MPEG1VIDEO constants. Thanks a lot for your review, will submit an update shortly. Best, softworkz _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
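The consistent handling described above amounts to one small fragment shared by the three parse functions: a dedicated message for the only realistically expected failure, MFX_ERR_NOT_ENOUGH_BUFFER (a payload larger than the fixed PAYLOAD_BUFFER_SIZE of 65535 bytes), and a common fallback for everything else. A sketch of that shape (v2 may differ in details such as log levels and return values):

    /* inside the payload loop of the parse_sei_* functions */
    ret = MFXVideoDECODE_GetPayload(q->session, &ts, &payload);
    if (ret == MFX_ERR_NOT_ENOUGH_BUFFER) {
        av_log(avctx, AV_LOG_WARNING,
               "SEI payload larger than the payload buffer, skipping\n");
        continue;
    } else if (ret != MFX_ERR_NONE) {
        av_log(avctx, AV_LOG_WARNING, "Error getting SEI payload: %d\n", ret);
        return 0;
    }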
* [FFmpeg-devel] [PATCH v2 0/6] Implement SEI parsing for QSV decoders 2022-05-26 8:08 [FFmpeg-devel] [PATCH 0/6] Implement SEI parsing for QSV decoders ffmpegagent ` (5 preceding siblings ...) 2022-05-26 8:08 ` [FFmpeg-devel] [PATCH 6/6] avcodec/qsvdec: Implement SEI parsing for QSV decoders softworkz @ 2022-06-01 9:06 ` ffmpegagent 2022-06-01 9:06 ` [FFmpeg-devel] [PATCH v2 1/6] avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() softworkz ` (6 more replies) 2022-06-01 19:15 ` [FFmpeg-devel] [PATCH 0/6] " Kieran Kunhya 7 siblings, 7 replies; 65+ messages in thread From: ffmpegagent @ 2022-06-01 9:06 UTC (permalink / raw) To: ffmpeg-devel; +Cc: softworkz, Xiang, Haihao Missing SEI information has always been a major drawback when using the QSV decoders. I used to think that there's no chance to get at the data without explicit implementation from the MSDK side (or doing something weird like parsing in parallel). It turned out that there's a hardly known api method that provides access to all SEI (h264/hevc) or user data (mpeg2video). This allows to get things like closed captions, frame packing, display orientation, HDR data (mastering display, content light level, etc.) without having to rely on those data being provided by the MSDK as extended buffers. The commit "Implement SEI parsing for QSV decoders" includes some hard-coded workarounds for MSDK bugs which I reported: https://github.com/Intel-Media-SDK/MediaSDK/issues/2597#issuecomment-1072795311 But that doesn't help. Those bugs exist and I'm sharing my workarounds, which are empirically determined by testing a range of files. If someone is interested, I can provide private access to a repository where we have been testing this. Alternatively, I could also leave those workarounds out, and just skip those SEI types. In a previous version of this patchset, there was a concern that payload data might need to be re-ordered. Meanwhile I have researched this carefully and the conclusion is that this is not required. 
My detailed analysis can be found here: https://gist.github.com/softworkz/36c49586a8610813a32270ee3947a932 v2 * qsvdec: make error handling consistent and clear * qsvdec: remove AV_CODEC_ID_MPEG1VIDEO constants * hevcdec: rename function to ff_hevc_set_side_data(), add doc text softworkz (6): avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() avcodec/vpp_qsv: Copy side data from input to output frame avcodec/mpeg12dec: make mpeg_decode_user_data() accessible avcodec/hevcdec: make set_side_data() accessible avcodec/h264dec: make h264_export_frame_props() accessible avcodec/qsvdec: Implement SEI parsing for QSV decoders doc/APIchanges | 4 + libavcodec/h264_slice.c | 98 ++++++++------- libavcodec/h264dec.h | 2 + libavcodec/hevcdec.c | 117 +++++++++--------- libavcodec/hevcdec.h | 9 ++ libavcodec/mpeg12.h | 28 +++++ libavcodec/mpeg12dec.c | 40 +----- libavcodec/qsvdec.c | 234 +++++++++++++++++++++++++++++++++++ libavfilter/qsvvpp.c | 6 + libavfilter/vf_overlay_qsv.c | 19 ++- libavutil/frame.c | 67 ++++++---- libavutil/frame.h | 32 +++++ libavutil/version.h | 2 +- 13 files changed, 485 insertions(+), 173 deletions(-) base-commit: b033913d1c5998a29dfd13e9906dd707ff6eff12 Published-As: https://github.com/ffstaging/FFmpeg/releases/tag/pr-ffstaging-31%2Fsoftworkz%2Fsubmit_qsv_sei-v2 Fetch-It-Via: git fetch https://github.com/ffstaging/FFmpeg pr-ffstaging-31/softworkz/submit_qsv_sei-v2 Pull-Request: https://github.com/ffstaging/FFmpeg/pull/31 Range-diff vs v1: 1: 4ee6cb47db = 1: 4ee6cb47db avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() 2: 3152156c97 = 2: 3152156c97 avcodec/vpp_qsv: Copy side data from input to output frame 3: 8082c3ab84 = 3: 8082c3ab84 avcodec/mpeg12dec: make mpeg_decode_user_data() accessible 4: 485d7f913d ! 4: 306bdaa39c avcodec/hevcdec: make set_side_data() accessible @@ libavcodec/hevcdec.c: error: } -static int set_side_data(HEVCContext *s) -+int ff_set_side_data(AVCodecContext *logctx, HEVCSEI *sei, HEVCContext *s, AVFrame *out) ++int ff_hevc_set_side_data(AVCodecContext *logctx, HEVCSEI *sei, HEVCContext *s, AVFrame *out) { - AVFrame *out = s->ref->frame; - int ret; @@ libavcodec/hevcdec.c: static int hevc_frame_start(HEVCContext *s) } - ret = set_side_data(s); -+ ret = ff_set_side_data(s->avctx, &s->sei, s, s->ref->frame); ++ ret = ff_hevc_set_side_data(s->avctx, &s->sei, s, s->ref->frame); if (ret < 0) goto fail; @@ libavcodec/hevcdec.h: void ff_hevc_hls_residual_coding(HEVCContext *s, int x0, i void ff_hevc_hls_mvd_coding(HEVCContext *s, int x0, int y0, int log2_cb_size); -+int ff_set_side_data(AVCodecContext *logctx, HEVCSEI *sei, HEVCContext *s, AVFrame *out); ++/** ++ * Set the decodec side data to an AVFrame. ++ * @logctx context for logging. ++ * @sei HEVCSEI decoding context, must not be NULL. ++ * @s HEVCContext, can be NULL. ++ * @return < 0 on error, 0 otherwise. ++ */ ++int ff_hevc_set_side_data(AVCodecContext *logctx, HEVCSEI *sei, HEVCContext *s, AVFrame *out); + extern const uint8_t ff_hevc_qpel_extra_before[4]; extern const uint8_t ff_hevc_qpel_extra_after[4]; 5: fb5c3df8e5 = 5: 16f5dfbfd1 avcodec/h264dec: make h264_export_frame_props() accessible 6: dcf08cd7b7 ! 
6: 23de6d2774 avcodec/qsvdec: Implement SEI parsing for QSV decoders @@ libavcodec/qsvdec.c: static int qsv_export_film_grain(AVCodecContext *avctx, mfx + memset(payload.Data, 0, payload.BufSize); + + ret = MFXVideoDECODE_GetPayload(q->session, &ts, &payload); -+ if (ret != MFX_ERR_NONE) { -+ av_log(avctx, AV_LOG_WARNING, "error getting SEI payload: %d \n", ret); -+ return ret; ++ if (ret == MFX_ERR_NOT_ENOUGH_BUFFER) { ++ av_log(avctx, AV_LOG_WARNING, "Warning: Insufficient buffer on GetPayload(). Size: %"PRIu64" Needed: %d\n", sizeof(q->payload_buffer), payload.BufSize); ++ return 0; + } ++ if (ret != MFX_ERR_NONE) ++ return ret; + -+ if (payload.NumBit == 0 || payload.NumBit >= payload.BufSize * 8) { ++ if (payload.NumBit == 0 || payload.NumBit >= payload.BufSize * 8) + break; -+ } + + start = find_start_offset(payload.Data); + @@ libavcodec/qsvdec.c: static int qsv_export_film_grain(AVCodecContext *avctx, mfx + } + + if (init_get_bits(&gb, &payload.Data[start], payload.NumBit - start * 8) < 0) -+ av_log(avctx, AV_LOG_ERROR, "Error initializing bitstream reader"); -+ else ++ av_log(avctx, AV_LOG_ERROR, "Error initializing bitstream reader SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); ++ else { + ret = ff_h264_sei_decode(&sei, &gb, NULL, avctx); + -+ if (ret < 0) -+ av_log(avctx, AV_LOG_WARNING, "Error parsing SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); -+ else -+ av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d Numbits %d\n", payload.Type, payload.NumBit); ++ if (ret < 0) ++ av_log(avctx, AV_LOG_WARNING, "Failed to parse SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); ++ else ++ av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d Numbits %d\n", payload.Type, payload.NumBit); ++ } + } + + if (out) @@ libavcodec/qsvdec.c: static int qsv_export_film_grain(AVCodecContext *avctx, mfx + memset(payload.Data, 0, payload.BufSize); + + ret = MFXVideoDECODE_GetPayload(q->session, &ts, &payload); -+ if (ret != MFX_ERR_NONE) { -+ av_log(avctx, AV_LOG_WARNING, "error getting SEI payload: %d \n", ret); ++ if (ret == MFX_ERR_NOT_ENOUGH_BUFFER) { ++ av_log(avctx, AV_LOG_WARNING, "Warning: Insufficient buffer on GetPayload(). 
Size: %"PRIu64" Needed: %d\n", sizeof(q->payload_buffer), payload.BufSize); + return 0; + } ++ if (ret != MFX_ERR_NONE) ++ return ret; + -+ if (payload.NumBit == 0 || payload.NumBit >= payload.BufSize * 8) { ++ if (payload.NumBit == 0 || payload.NumBit >= payload.BufSize * 8) + break; -+ } + + if (!has_logged) { + has_logged = 1; @@ libavcodec/qsvdec.c: static int qsv_export_film_grain(AVCodecContext *avctx, mfx + } + + if (init_get_bits(&gb, &payload.Data[start], payload.NumBit - start * 8) < 0) -+ av_log(avctx, AV_LOG_ERROR, "Error initializing bitstream reader"); -+ else -+ ret = ff_hevc_decode_nal_sei(&gb, avctx, &sei, &ps, HEVC_NAL_SEI_PREFIX); ++ av_log(avctx, AV_LOG_ERROR, "Error initializing bitstream reader SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); ++ else { ++ ret = ff_h264_sei_decode(&sei, &gb, NULL, avctx); + -+ if (ret < 0) -+ av_log(avctx, AV_LOG_WARNING, "error parsing SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); -+ else -+ av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d Numbits %d\n", payload.Type, payload.NumBit); ++ if (ret < 0) ++ av_log(avctx, AV_LOG_WARNING, "Failed to parse SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); ++ else ++ av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d Numbits %d\n", payload.Type, payload.NumBit); ++ } + } + + if (has_logged) { @@ libavcodec/qsvdec.c: static int qsv_export_film_grain(AVCodecContext *avctx, mfx + } + + if (out && out->frame) -+ return ff_set_side_data(avctx, &sei, NULL, out->frame); ++ return ff_hevc_set_side_data(avctx, &sei, NULL, out->frame); + + return 0; +} @@ libavcodec/qsvdec.c: static int qsv_export_film_grain(AVCodecContext *avctx, mfx + + memset(payload.Data, 0, payload.BufSize); + ret = MFXVideoDECODE_GetPayload(q->session, &ts, &payload); -+ if (ret != MFX_ERR_NONE) { -+ av_log(avctx, AV_LOG_WARNING, "error getting SEI payload: %d \n", ret); -+ return ret; ++ if (ret == MFX_ERR_NOT_ENOUGH_BUFFER) { ++ av_log(avctx, AV_LOG_WARNING, "Warning: Insufficient buffer on GetPayload(). 
Size: %"PRIu64" Needed: %d\n", sizeof(q->payload_buffer), payload.BufSize); ++ return 0; + } ++ if (ret != MFX_ERR_NONE) ++ return ret; + -+ if (payload.NumBit == 0 || payload.NumBit >= payload.BufSize * 8) { ++ if (payload.NumBit == 0 || payload.NumBit >= payload.BufSize * 8) + break; -+ } + + start = find_start_offset(payload.Data); + @@ libavcodec/qsvdec.c: static int qsv_export_film_grain(AVCodecContext *avctx, mfx + + ff_mpeg_decode_user_data(avctx, mpeg_ctx, &payload.Data[start], (int)((payload.NumBit + 7) / 8) - start); + -+ if (ret < 0) -+ av_log(avctx, AV_LOG_WARNING, "error parsing SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); -+ else -+ av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d Numbits %d start %d -> %.s\n", payload.Type, payload.NumBit, start, (char *)(&payload.Data[start])); ++ av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d Numbits %d start %d -> %.s\n", payload.Type, payload.NumBit, start, (char *)(&payload.Data[start])); + } + + if (!out) @@ libavcodec/qsvdec.c: static int qsv_decode(AVCodecContext *avctx, QSVContext *q, insurf, &outsurf, sync); if (ret == MFX_WRN_DEVICE_BUSY) av_usleep(500); -+ else if (avctx->codec_id == AV_CODEC_ID_MPEG1VIDEO || avctx->codec_id == AV_CODEC_ID_MPEG2VIDEO) ++ else if (avctx->codec_id == AV_CODEC_ID_MPEG2VIDEO) + parse_sei_mpeg12(avctx, q, NULL); } while (ret == MFX_WRN_DEVICE_BUSY || ret == MFX_ERR_MORE_SURFACE); @@ libavcodec/qsvdec.c: static int qsv_decode(AVCodecContext *avctx, QSVContext *q, } + switch (avctx->codec_id) { -+ case AV_CODEC_ID_MPEG1VIDEO: + case AV_CODEC_ID_MPEG2VIDEO: + ret = parse_sei_mpeg12(avctx, q, out_frame->frame); + break; @@ libavcodec/qsvdec.c: static int qsv_decode(AVCodecContext *avctx, QSVContext *q, + } + + if (ret < 0) -+ av_log(avctx, AV_LOG_ERROR, "Error parsing SEI data\n"); ++ av_log(avctx, AV_LOG_ERROR, "Error parsing SEI data: %d\n", ret); + out_frame->queued += 1; -- ffmpeg-codebot _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
* [FFmpeg-devel] [PATCH v2 1/6] avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() 2022-06-01 9:06 ` [FFmpeg-devel] [PATCH v2 0/6] " ffmpegagent @ 2022-06-01 9:06 ` softworkz 2022-06-01 9:06 ` [FFmpeg-devel] [PATCH v2 2/6] avcodec/vpp_qsv: Copy side data from input to output frame softworkz ` (5 subsequent siblings) 6 siblings, 0 replies; 65+ messages in thread From: softworkz @ 2022-06-01 9:06 UTC (permalink / raw) To: ffmpeg-devel; +Cc: softworkz, Xiang, Haihao From: softworkz <softworkz@hotmail.com> Signed-off-by: softworkz <softworkz@hotmail.com> Signed-off-by: Anton Khirnov <anton@khirnov.net> --- doc/APIchanges | 4 +++ libavutil/frame.c | 67 +++++++++++++++++++++++++++------------------ libavutil/frame.h | 32 ++++++++++++++++++++++ libavutil/version.h | 2 +- 4 files changed, 78 insertions(+), 27 deletions(-) diff --git a/doc/APIchanges b/doc/APIchanges index 337f1466d8..e5dd6f1e83 100644 --- a/doc/APIchanges +++ b/doc/APIchanges @@ -14,6 +14,10 @@ libavutil: 2021-04-27 API changes, most recent first: +2022-05-26 - xxxxxxxxx - lavu 57.26.100 - frame.h + Add av_frame_remove_all_side_data(), av_frame_copy_side_data(), + AV_FRAME_TRANSFER_SD_COPY, and AV_FRAME_TRANSFER_SD_FILTER. + 2022-05-23 - xxxxxxxxx - lavu 57.25.100 - avutil.h Deprecate av_fopen_utf8() without replacement. diff --git a/libavutil/frame.c b/libavutil/frame.c index fbb869fffa..bfe575612d 100644 --- a/libavutil/frame.c +++ b/libavutil/frame.c @@ -271,9 +271,45 @@ FF_ENABLE_DEPRECATION_WARNINGS return AVERROR(EINVAL); } +void av_frame_remove_all_side_data(AVFrame *frame) +{ + wipe_side_data(frame); +} + +int av_frame_copy_side_data(AVFrame* dst, const AVFrame* src, int flags) +{ + for (unsigned i = 0; i < src->nb_side_data; i++) { + const AVFrameSideData *sd_src = src->side_data[i]; + AVFrameSideData *sd_dst; + if ((flags & AV_FRAME_TRANSFER_SD_FILTER) && + sd_src->type == AV_FRAME_DATA_PANSCAN && + (src->width != dst->width || src->height != dst->height)) + continue; + if (flags & AV_FRAME_TRANSFER_SD_COPY) { + sd_dst = av_frame_new_side_data(dst, sd_src->type, + sd_src->size); + if (!sd_dst) { + wipe_side_data(dst); + return AVERROR(ENOMEM); + } + memcpy(sd_dst->data, sd_src->data, sd_src->size); + } else { + AVBufferRef *ref = av_buffer_ref(sd_src->buf); + sd_dst = av_frame_new_side_data_from_buf(dst, sd_src->type, ref); + if (!sd_dst) { + av_buffer_unref(&ref); + wipe_side_data(dst); + return AVERROR(ENOMEM); + } + } + av_dict_copy(&sd_dst->metadata, sd_src->metadata, 0); + } + return 0; +} + static int frame_copy_props(AVFrame *dst, const AVFrame *src, int force_copy) { - int ret, i; + int ret; dst->key_frame = src->key_frame; dst->pict_type = src->pict_type; @@ -309,31 +345,10 @@ static int frame_copy_props(AVFrame *dst, const AVFrame *src, int force_copy) av_dict_copy(&dst->metadata, src->metadata, 0); - for (i = 0; i < src->nb_side_data; i++) { - const AVFrameSideData *sd_src = src->side_data[i]; - AVFrameSideData *sd_dst; - if ( sd_src->type == AV_FRAME_DATA_PANSCAN - && (src->width != dst->width || src->height != dst->height)) - continue; - if (force_copy) { - sd_dst = av_frame_new_side_data(dst, sd_src->type, - sd_src->size); - if (!sd_dst) { - wipe_side_data(dst); - return AVERROR(ENOMEM); - } - memcpy(sd_dst->data, sd_src->data, sd_src->size); - } else { - AVBufferRef *ref = av_buffer_ref(sd_src->buf); - sd_dst = av_frame_new_side_data_from_buf(dst, sd_src->type, ref); - if (!sd_dst) { - av_buffer_unref(&ref); - wipe_side_data(dst); - return AVERROR(ENOMEM); - } - } - 
av_dict_copy(&sd_dst->metadata, sd_src->metadata, 0); - } + if ((ret = av_frame_copy_side_data(dst, src, + (force_copy ? AV_FRAME_TRANSFER_SD_COPY : 0) | + AV_FRAME_TRANSFER_SD_FILTER) < 0)) + return ret; ret = av_buffer_replace(&dst->opaque_ref, src->opaque_ref); ret |= av_buffer_replace(&dst->private_ref, src->private_ref); diff --git a/libavutil/frame.h b/libavutil/frame.h index 33fac2054c..a868fa70d7 100644 --- a/libavutil/frame.h +++ b/libavutil/frame.h @@ -850,6 +850,30 @@ int av_frame_copy(AVFrame *dst, const AVFrame *src); */ int av_frame_copy_props(AVFrame *dst, const AVFrame *src); + +/** + * Copy side data, rather than creating new references. + */ +#define AV_FRAME_TRANSFER_SD_COPY (1 << 0) +/** + * Filter out side data that does not match dst properties. + */ +#define AV_FRAME_TRANSFER_SD_FILTER (1 << 1) + +/** + * Copy all side-data from src to dst. + * + * @param dst a frame to which the side data should be copied. + * @param src a frame from which to copy the side data. + * @param flags a combination of AV_FRAME_TRANSFER_SD_* + * + * @return >= 0 on success, a negative AVERROR on error. + * + * @note This function will create new references to side data buffers in src, + * unless the AV_FRAME_TRANSFER_SD_COPY flag is passed. + */ +int av_frame_copy_side_data(AVFrame* dst, const AVFrame* src, int flags); + /** * Get the buffer reference a given data plane is stored in. * @@ -901,6 +925,14 @@ AVFrameSideData *av_frame_get_side_data(const AVFrame *frame, */ void av_frame_remove_side_data(AVFrame *frame, enum AVFrameSideDataType type); +/** + * Remove and free all side data instances. + * + * @param frame from which to remove all side data. + */ +void av_frame_remove_all_side_data(AVFrame *frame); + + /** * Flags for frame cropping. diff --git a/libavutil/version.h b/libavutil/version.h index 1b4b41d81f..2c7f4f6b37 100644 --- a/libavutil/version.h +++ b/libavutil/version.h @@ -79,7 +79,7 @@ */ #define LIBAVUTIL_VERSION_MAJOR 57 -#define LIBAVUTIL_VERSION_MINOR 25 +#define LIBAVUTIL_VERSION_MINOR 26 #define LIBAVUTIL_VERSION_MICRO 100 #define LIBAVUTIL_VERSION_INT AV_VERSION_INT(LIBAVUTIL_VERSION_MAJOR, \ -- ffmpeg-codebot _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
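For readers new to the added API, a minimal usage sketch (the helper name propagate_side_data is illustrative and not part of the patch):

    #include "libavutil/frame.h"

    /* Replace dst's side data with deep copies of src's side data, dropping
     * size-dependent entries (currently AV_FRAME_DATA_PANSCAN) when the two
     * frames differ in dimensions. */
    static int propagate_side_data(AVFrame *dst, const AVFrame *src)
    {
        av_frame_remove_all_side_data(dst);
        return av_frame_copy_side_data(dst, src,
                                       AV_FRAME_TRANSFER_SD_COPY |
                                       AV_FRAME_TRANSFER_SD_FILTER);
    }

Without AV_FRAME_TRANSFER_SD_COPY the function creates new references to the source side-data buffers instead of copying the data, which is what the qsvvpp patch in this series relies on.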
* [FFmpeg-devel] [PATCH v2 2/6] avcodec/vpp_qsv: Copy side data from input to output frame 2022-06-01 9:06 ` [FFmpeg-devel] [PATCH v2 0/6] " ffmpegagent 2022-06-01 9:06 ` [FFmpeg-devel] [PATCH v2 1/6] avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() softworkz @ 2022-06-01 9:06 ` softworkz 2022-06-01 9:06 ` [FFmpeg-devel] [PATCH v2 3/6] avcodec/mpeg12dec: make mpeg_decode_user_data() accessible softworkz ` (4 subsequent siblings) 6 siblings, 0 replies; 65+ messages in thread From: softworkz @ 2022-06-01 9:06 UTC (permalink / raw) To: ffmpeg-devel; +Cc: softworkz, Xiang, Haihao From: softworkz <softworkz@hotmail.com> Signed-off-by: softworkz <softworkz@hotmail.com> --- libavfilter/qsvvpp.c | 6 ++++++ libavfilter/vf_overlay_qsv.c | 19 +++++++++++++++---- 2 files changed, 21 insertions(+), 4 deletions(-) diff --git a/libavfilter/qsvvpp.c b/libavfilter/qsvvpp.c index 954f882637..f4bf628073 100644 --- a/libavfilter/qsvvpp.c +++ b/libavfilter/qsvvpp.c @@ -843,6 +843,12 @@ int ff_qsvvpp_filter_frame(QSVVPPContext *s, AVFilterLink *inlink, AVFrame *picr return AVERROR(EAGAIN); break; } + + av_frame_remove_all_side_data(out_frame->frame); + ret = av_frame_copy_side_data(out_frame->frame, in_frame->frame, 0); + if (ret < 0) + return ret; + out_frame->frame->pts = av_rescale_q(out_frame->surface.Data.TimeStamp, default_tb, outlink->time_base); diff --git a/libavfilter/vf_overlay_qsv.c b/libavfilter/vf_overlay_qsv.c index 7e76b39aa9..e15214dbf2 100644 --- a/libavfilter/vf_overlay_qsv.c +++ b/libavfilter/vf_overlay_qsv.c @@ -231,13 +231,24 @@ static int process_frame(FFFrameSync *fs) { AVFilterContext *ctx = fs->parent; QSVOverlayContext *s = fs->opaque; + AVFrame *frame0 = NULL; AVFrame *frame = NULL; - int ret = 0, i; + int ret = 0; - for (i = 0; i < ctx->nb_inputs; i++) { + for (unsigned i = 0; i < ctx->nb_inputs; i++) { ret = ff_framesync_get_frame(fs, i, &frame, 0); - if (ret == 0) - ret = ff_qsvvpp_filter_frame(s->qsv, ctx->inputs[i], frame); + + if (ret == 0) { + if (i == 0) + frame0 = frame; + else { + av_frame_remove_all_side_data(frame); + ret = av_frame_copy_side_data(frame, frame0, 0); + } + + ret = ret < 0 ? ret : ff_qsvvpp_filter_frame(s->qsv, ctx->inputs[i], frame); + } + if (ret < 0 && ret != AVERROR(EAGAIN)) break; } -- ffmpeg-codebot _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
* [FFmpeg-devel] [PATCH v2 3/6] avcodec/mpeg12dec: make mpeg_decode_user_data() accessible 2022-06-01 9:06 ` [FFmpeg-devel] [PATCH v2 0/6] " ffmpegagent 2022-06-01 9:06 ` [FFmpeg-devel] [PATCH v2 1/6] avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() softworkz 2022-06-01 9:06 ` [FFmpeg-devel] [PATCH v2 2/6] avcodec/vpp_qsv: Copy side data from input to output frame softworkz @ 2022-06-01 9:06 ` softworkz 2022-06-01 9:06 ` [FFmpeg-devel] [PATCH v2 4/6] avcodec/hevcdec: make set_side_data() accessible softworkz ` (3 subsequent siblings) 6 siblings, 0 replies; 65+ messages in thread From: softworkz @ 2022-06-01 9:06 UTC (permalink / raw) To: ffmpeg-devel; +Cc: softworkz, Xiang, Haihao From: softworkz <softworkz@hotmail.com> Signed-off-by: softworkz <softworkz@hotmail.com> --- libavcodec/mpeg12.h | 28 ++++++++++++++++++++++++++++ libavcodec/mpeg12dec.c | 40 +++++----------------------------------- 2 files changed, 33 insertions(+), 35 deletions(-) diff --git a/libavcodec/mpeg12.h b/libavcodec/mpeg12.h index e0406b32d9..84a829cdd3 100644 --- a/libavcodec/mpeg12.h +++ b/libavcodec/mpeg12.h @@ -23,6 +23,7 @@ #define AVCODEC_MPEG12_H #include "mpegvideo.h" +#include "libavutil/stereo3d.h" /* Start codes. */ #define SEQ_END_CODE 0x000001b7 @@ -34,6 +35,31 @@ #define EXT_START_CODE 0x000001b5 #define USER_START_CODE 0x000001b2 +typedef struct Mpeg1Context { + MpegEncContext mpeg_enc_ctx; + int mpeg_enc_ctx_allocated; /* true if decoding context allocated */ + int repeat_field; /* true if we must repeat the field */ + AVPanScan pan_scan; /* some temporary storage for the panscan */ + AVStereo3D stereo3d; + int has_stereo3d; + AVBufferRef *a53_buf_ref; + uint8_t afd; + int has_afd; + int slice_count; + unsigned aspect_ratio_info; + AVRational save_aspect; + int save_width, save_height, save_progressive_seq; + int rc_buffer_size; + AVRational frame_rate_ext; /* MPEG-2 specific framerate modificator */ + unsigned frame_rate_index; + int sync; /* Did we reach a sync point like a GOP/SEQ/KEYFrame? */ + int closed_gop; + int tmpgexs; + int first_slice; + int extradata_decoded; + int64_t timecode_frame_start; /*< GOP timecode frame start number, in non drop frame format */ +} Mpeg1Context; + void ff_mpeg12_common_init(MpegEncContext *s); void ff_mpeg1_clean_buffers(MpegEncContext *s); @@ -45,4 +71,6 @@ void ff_mpeg12_find_best_frame_rate(AVRational frame_rate, int *code, int *ext_n, int *ext_d, int nonstandard); +void ff_mpeg_decode_user_data(AVCodecContext *avctx, Mpeg1Context *s1, const uint8_t *p, int buf_size); + #endif /* AVCODEC_MPEG12_H */ diff --git a/libavcodec/mpeg12dec.c b/libavcodec/mpeg12dec.c index e9bde48f7a..11d2b58185 100644 --- a/libavcodec/mpeg12dec.c +++ b/libavcodec/mpeg12dec.c @@ -58,31 +58,6 @@ #define A53_MAX_CC_COUNT 2000 -typedef struct Mpeg1Context { - MpegEncContext mpeg_enc_ctx; - int mpeg_enc_ctx_allocated; /* true if decoding context allocated */ - int repeat_field; /* true if we must repeat the field */ - AVPanScan pan_scan; /* some temporary storage for the panscan */ - AVStereo3D stereo3d; - int has_stereo3d; - AVBufferRef *a53_buf_ref; - uint8_t afd; - int has_afd; - int slice_count; - unsigned aspect_ratio_info; - AVRational save_aspect; - int save_width, save_height, save_progressive_seq; - int rc_buffer_size; - AVRational frame_rate_ext; /* MPEG-2 specific framerate modificator */ - unsigned frame_rate_index; - int sync; /* Did we reach a sync point like a GOP/SEQ/KEYFrame? 
*/ - int closed_gop; - int tmpgexs; - int first_slice; - int extradata_decoded; - int64_t timecode_frame_start; /*< GOP timecode frame start number, in non drop frame format */ -} Mpeg1Context; - #define MB_TYPE_ZERO_MV 0x20000000 static const uint32_t ptype2mb_type[7] = { @@ -2198,11 +2173,9 @@ static int vcr2_init_sequence(AVCodecContext *avctx) return 0; } -static int mpeg_decode_a53_cc(AVCodecContext *avctx, +static int mpeg_decode_a53_cc(AVCodecContext *avctx, Mpeg1Context *s1, const uint8_t *p, int buf_size) { - Mpeg1Context *s1 = avctx->priv_data; - if (buf_size >= 6 && p[0] == 'G' && p[1] == 'A' && p[2] == '9' && p[3] == '4' && p[4] == 3 && (p[5] & 0x40)) { @@ -2333,12 +2306,9 @@ static int mpeg_decode_a53_cc(AVCodecContext *avctx, return 0; } -static void mpeg_decode_user_data(AVCodecContext *avctx, - const uint8_t *p, int buf_size) +void ff_mpeg_decode_user_data(AVCodecContext *avctx, Mpeg1Context *s1, const uint8_t *p, int buf_size) { - Mpeg1Context *s = avctx->priv_data; const uint8_t *buf_end = p + buf_size; - Mpeg1Context *s1 = avctx->priv_data; #if 0 int i; @@ -2352,7 +2322,7 @@ static void mpeg_decode_user_data(AVCodecContext *avctx, int i; for(i=0; i<20; i++) if (!memcmp(p+i, "\0TMPGEXS\0", 9)){ - s->tmpgexs= 1; + s1->tmpgexs= 1; } } /* we parse the DTG active format information */ @@ -2398,7 +2368,7 @@ static void mpeg_decode_user_data(AVCodecContext *avctx, break; } } - } else if (mpeg_decode_a53_cc(avctx, p, buf_size)) { + } else if (mpeg_decode_a53_cc(avctx, s1, p, buf_size)) { return; } } @@ -2590,7 +2560,7 @@ static int decode_chunks(AVCodecContext *avctx, AVFrame *picture, } break; case USER_START_CODE: - mpeg_decode_user_data(avctx, buf_ptr, input_size); + ff_mpeg_decode_user_data(avctx, s, buf_ptr, input_size); break; case GOP_START_CODE: if (last_code == 0) { -- ffmpeg-codebot _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
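As context for why the context struct and parser are exported: the QSV decoder later in this series embeds a Mpeg1Context, runs user data through ff_mpeg_decode_user_data() and then collects the parsed results from the struct. A minimal sketch of that consumption pattern (the helper name consume_user_data is illustrative; only the A/53 caption part is shown):

    #include "avcodec.h"
    #include "mpeg12.h"

    /* Sketch: run the MPEG-1/2 user-data parser on a raw payload and attach
     * any extracted A/53 closed captions to the output frame. */
    static int consume_user_data(AVCodecContext *avctx, Mpeg1Context *m,
                                 const uint8_t *buf, int size, AVFrame *out)
    {
        ff_mpeg_decode_user_data(avctx, m, buf, size);

        if (m->a53_buf_ref) {
            AVFrameSideData *sd = av_frame_new_side_data_from_buf(out,
                                      AV_FRAME_DATA_A53_CC, m->a53_buf_ref);
            if (!sd)
                av_buffer_unref(&m->a53_buf_ref);
            m->a53_buf_ref = NULL;
        }
        return 0;
    }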
* [FFmpeg-devel] [PATCH v2 4/6] avcodec/hevcdec: make set_side_data() accessible 2022-06-01 9:06 ` [FFmpeg-devel] [PATCH v2 0/6] " ffmpegagent ` (2 preceding siblings ...) 2022-06-01 9:06 ` [FFmpeg-devel] [PATCH v2 3/6] avcodec/mpeg12dec: make mpeg_decode_user_data() accessible softworkz @ 2022-06-01 9:06 ` softworkz 2022-06-01 9:06 ` [FFmpeg-devel] [PATCH v2 5/6] avcodec/h264dec: make h264_export_frame_props() accessible softworkz ` (2 subsequent siblings) 6 siblings, 0 replies; 65+ messages in thread From: softworkz @ 2022-06-01 9:06 UTC (permalink / raw) To: ffmpeg-devel; +Cc: softworkz, Xiang, Haihao From: softworkz <softworkz@hotmail.com> Signed-off-by: softworkz <softworkz@hotmail.com> --- libavcodec/hevcdec.c | 117 +++++++++++++++++++++---------------------- libavcodec/hevcdec.h | 9 ++++ 2 files changed, 67 insertions(+), 59 deletions(-) diff --git a/libavcodec/hevcdec.c b/libavcodec/hevcdec.c index f782ea6394..9e9bb48202 100644 --- a/libavcodec/hevcdec.c +++ b/libavcodec/hevcdec.c @@ -2726,23 +2726,22 @@ error: return res; } -static int set_side_data(HEVCContext *s) +int ff_hevc_set_side_data(AVCodecContext *logctx, HEVCSEI *sei, HEVCContext *s, AVFrame *out) { - AVFrame *out = s->ref->frame; - int ret; + int ret = 0; - if (s->sei.frame_packing.present && - s->sei.frame_packing.arrangement_type >= 3 && - s->sei.frame_packing.arrangement_type <= 5 && - s->sei.frame_packing.content_interpretation_type > 0 && - s->sei.frame_packing.content_interpretation_type < 3) { + if (sei->frame_packing.present && + sei->frame_packing.arrangement_type >= 3 && + sei->frame_packing.arrangement_type <= 5 && + sei->frame_packing.content_interpretation_type > 0 && + sei->frame_packing.content_interpretation_type < 3) { AVStereo3D *stereo = av_stereo3d_create_side_data(out); if (!stereo) return AVERROR(ENOMEM); - switch (s->sei.frame_packing.arrangement_type) { + switch (sei->frame_packing.arrangement_type) { case 3: - if (s->sei.frame_packing.quincunx_subsampling) + if (sei->frame_packing.quincunx_subsampling) stereo->type = AV_STEREO3D_SIDEBYSIDE_QUINCUNX; else stereo->type = AV_STEREO3D_SIDEBYSIDE; @@ -2755,21 +2754,21 @@ static int set_side_data(HEVCContext *s) break; } - if (s->sei.frame_packing.content_interpretation_type == 2) + if (sei->frame_packing.content_interpretation_type == 2) stereo->flags = AV_STEREO3D_FLAG_INVERT; - if (s->sei.frame_packing.arrangement_type == 5) { - if (s->sei.frame_packing.current_frame_is_frame0_flag) + if (sei->frame_packing.arrangement_type == 5) { + if (sei->frame_packing.current_frame_is_frame0_flag) stereo->view = AV_STEREO3D_VIEW_LEFT; else stereo->view = AV_STEREO3D_VIEW_RIGHT; } } - if (s->sei.display_orientation.present && - (s->sei.display_orientation.anticlockwise_rotation || - s->sei.display_orientation.hflip || s->sei.display_orientation.vflip)) { - double angle = s->sei.display_orientation.anticlockwise_rotation * 360 / (double) (1 << 16); + if (sei->display_orientation.present && + (sei->display_orientation.anticlockwise_rotation || + sei->display_orientation.hflip || sei->display_orientation.vflip)) { + double angle = sei->display_orientation.anticlockwise_rotation * 360 / (double) (1 << 16); AVFrameSideData *rotation = av_frame_new_side_data(out, AV_FRAME_DATA_DISPLAYMATRIX, sizeof(int32_t) * 9); @@ -2788,17 +2787,17 @@ static int set_side_data(HEVCContext *s) * (1 - 2 * !!s->sei.display_orientation.vflip); av_display_rotation_set((int32_t *)rotation->data, angle); av_display_matrix_flip((int32_t *)rotation->data, - s->sei.display_orientation.hflip, 
- s->sei.display_orientation.vflip); + sei->display_orientation.hflip, + sei->display_orientation.vflip); } // Decrement the mastering display flag when IRAP frame has no_rasl_output_flag=1 // so the side data persists for the entire coded video sequence. - if (s->sei.mastering_display.present > 0 && + if (s && sei->mastering_display.present > 0 && IS_IRAP(s) && s->no_rasl_output_flag) { - s->sei.mastering_display.present--; + sei->mastering_display.present--; } - if (s->sei.mastering_display.present) { + if (sei->mastering_display.present) { // HEVC uses a g,b,r ordering, which we convert to a more natural r,g,b const int mapping[3] = {2, 0, 1}; const int chroma_den = 50000; @@ -2811,25 +2810,25 @@ static int set_side_data(HEVCContext *s) for (i = 0; i < 3; i++) { const int j = mapping[i]; - metadata->display_primaries[i][0].num = s->sei.mastering_display.display_primaries[j][0]; + metadata->display_primaries[i][0].num = sei->mastering_display.display_primaries[j][0]; metadata->display_primaries[i][0].den = chroma_den; - metadata->display_primaries[i][1].num = s->sei.mastering_display.display_primaries[j][1]; + metadata->display_primaries[i][1].num = sei->mastering_display.display_primaries[j][1]; metadata->display_primaries[i][1].den = chroma_den; } - metadata->white_point[0].num = s->sei.mastering_display.white_point[0]; + metadata->white_point[0].num = sei->mastering_display.white_point[0]; metadata->white_point[0].den = chroma_den; - metadata->white_point[1].num = s->sei.mastering_display.white_point[1]; + metadata->white_point[1].num = sei->mastering_display.white_point[1]; metadata->white_point[1].den = chroma_den; - metadata->max_luminance.num = s->sei.mastering_display.max_luminance; + metadata->max_luminance.num = sei->mastering_display.max_luminance; metadata->max_luminance.den = luma_den; - metadata->min_luminance.num = s->sei.mastering_display.min_luminance; + metadata->min_luminance.num = sei->mastering_display.min_luminance; metadata->min_luminance.den = luma_den; metadata->has_luminance = 1; metadata->has_primaries = 1; - av_log(s->avctx, AV_LOG_DEBUG, "Mastering Display Metadata:\n"); - av_log(s->avctx, AV_LOG_DEBUG, + av_log(logctx, AV_LOG_DEBUG, "Mastering Display Metadata:\n"); + av_log(logctx, AV_LOG_DEBUG, "r(%5.4f,%5.4f) g(%5.4f,%5.4f) b(%5.4f %5.4f) wp(%5.4f, %5.4f)\n", av_q2d(metadata->display_primaries[0][0]), av_q2d(metadata->display_primaries[0][1]), @@ -2838,31 +2837,31 @@ static int set_side_data(HEVCContext *s) av_q2d(metadata->display_primaries[2][0]), av_q2d(metadata->display_primaries[2][1]), av_q2d(metadata->white_point[0]), av_q2d(metadata->white_point[1])); - av_log(s->avctx, AV_LOG_DEBUG, + av_log(logctx, AV_LOG_DEBUG, "min_luminance=%f, max_luminance=%f\n", av_q2d(metadata->min_luminance), av_q2d(metadata->max_luminance)); } // Decrement the mastering display flag when IRAP frame has no_rasl_output_flag=1 // so the side data persists for the entire coded video sequence. 
- if (s->sei.content_light.present > 0 && + if (s && sei->content_light.present > 0 && IS_IRAP(s) && s->no_rasl_output_flag) { - s->sei.content_light.present--; + sei->content_light.present--; } - if (s->sei.content_light.present) { + if (sei->content_light.present) { AVContentLightMetadata *metadata = av_content_light_metadata_create_side_data(out); if (!metadata) return AVERROR(ENOMEM); - metadata->MaxCLL = s->sei.content_light.max_content_light_level; - metadata->MaxFALL = s->sei.content_light.max_pic_average_light_level; + metadata->MaxCLL = sei->content_light.max_content_light_level; + metadata->MaxFALL = sei->content_light.max_pic_average_light_level; - av_log(s->avctx, AV_LOG_DEBUG, "Content Light Level Metadata:\n"); - av_log(s->avctx, AV_LOG_DEBUG, "MaxCLL=%d, MaxFALL=%d\n", + av_log(logctx, AV_LOG_DEBUG, "Content Light Level Metadata:\n"); + av_log(logctx, AV_LOG_DEBUG, "MaxCLL=%d, MaxFALL=%d\n", metadata->MaxCLL, metadata->MaxFALL); } - if (s->sei.a53_caption.buf_ref) { - HEVCSEIA53Caption *a53 = &s->sei.a53_caption; + if (sei->a53_caption.buf_ref) { + HEVCSEIA53Caption *a53 = &sei->a53_caption; AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_A53_CC, a53->buf_ref); if (!sd) @@ -2870,8 +2869,8 @@ static int set_side_data(HEVCContext *s) a53->buf_ref = NULL; } - for (int i = 0; i < s->sei.unregistered.nb_buf_ref; i++) { - HEVCSEIUnregistered *unreg = &s->sei.unregistered; + for (int i = 0; i < sei->unregistered.nb_buf_ref; i++) { + HEVCSEIUnregistered *unreg = &sei->unregistered; if (unreg->buf_ref[i]) { AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, @@ -2882,9 +2881,9 @@ static int set_side_data(HEVCContext *s) unreg->buf_ref[i] = NULL; } } - s->sei.unregistered.nb_buf_ref = 0; + sei->unregistered.nb_buf_ref = 0; - if (s->sei.timecode.present) { + if (s && sei->timecode.present) { uint32_t *tc_sd; char tcbuf[AV_TIMECODE_STR_SIZE]; AVFrameSideData *tcside = av_frame_new_side_data(out, AV_FRAME_DATA_S12M_TIMECODE, @@ -2893,25 +2892,25 @@ static int set_side_data(HEVCContext *s) return AVERROR(ENOMEM); tc_sd = (uint32_t*)tcside->data; - tc_sd[0] = s->sei.timecode.num_clock_ts; + tc_sd[0] = sei->timecode.num_clock_ts; for (int i = 0; i < tc_sd[0]; i++) { - int drop = s->sei.timecode.cnt_dropped_flag[i]; - int hh = s->sei.timecode.hours_value[i]; - int mm = s->sei.timecode.minutes_value[i]; - int ss = s->sei.timecode.seconds_value[i]; - int ff = s->sei.timecode.n_frames[i]; + int drop = sei->timecode.cnt_dropped_flag[i]; + int hh = sei->timecode.hours_value[i]; + int mm = sei->timecode.minutes_value[i]; + int ss = sei->timecode.seconds_value[i]; + int ff = sei->timecode.n_frames[i]; tc_sd[i + 1] = av_timecode_get_smpte(s->avctx->framerate, drop, hh, mm, ss, ff); av_timecode_make_smpte_tc_string2(tcbuf, s->avctx->framerate, tc_sd[i + 1], 0, 0); av_dict_set(&out->metadata, "timecode", tcbuf, 0); } - s->sei.timecode.num_clock_ts = 0; + sei->timecode.num_clock_ts = 0; } - if (s->sei.film_grain_characteristics.present) { - HEVCSEIFilmGrainCharacteristics *fgc = &s->sei.film_grain_characteristics; + if (s && sei->film_grain_characteristics.present) { + HEVCSEIFilmGrainCharacteristics *fgc = &sei->film_grain_characteristics; AVFilmGrainParams *fgp = av_film_grain_params_create_side_data(out); if (!fgp) return AVERROR(ENOMEM); @@ -2965,8 +2964,8 @@ static int set_side_data(HEVCContext *s) fgc->present = fgc->persistence_flag; } - if (s->sei.dynamic_hdr_plus.info) { - AVBufferRef *info_ref = av_buffer_ref(s->sei.dynamic_hdr_plus.info); + if 
(sei->dynamic_hdr_plus.info) { + AVBufferRef *info_ref = av_buffer_ref(sei->dynamic_hdr_plus.info); if (!info_ref) return AVERROR(ENOMEM); @@ -2976,7 +2975,7 @@ static int set_side_data(HEVCContext *s) } } - if (s->rpu_buf) { + if (s && s->rpu_buf) { AVFrameSideData *rpu = av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_DOVI_RPU_BUFFER, s->rpu_buf); if (!rpu) return AVERROR(ENOMEM); @@ -2984,10 +2983,10 @@ static int set_side_data(HEVCContext *s) s->rpu_buf = NULL; } - if ((ret = ff_dovi_attach_side_data(&s->dovi_ctx, out)) < 0) + if (s && (ret = ff_dovi_attach_side_data(&s->dovi_ctx, out)) < 0) return ret; - if (s->sei.dynamic_hdr_vivid.info) { + if (s && s->sei.dynamic_hdr_vivid.info) { AVBufferRef *info_ref = av_buffer_ref(s->sei.dynamic_hdr_vivid.info); if (!info_ref) return AVERROR(ENOMEM); @@ -3046,7 +3045,7 @@ static int hevc_frame_start(HEVCContext *s) goto fail; } - ret = set_side_data(s); + ret = ff_hevc_set_side_data(s->avctx, &s->sei, s, s->ref->frame); if (ret < 0) goto fail; diff --git a/libavcodec/hevcdec.h b/libavcodec/hevcdec.h index de861b88b3..cd8cd40da0 100644 --- a/libavcodec/hevcdec.h +++ b/libavcodec/hevcdec.h @@ -690,6 +690,15 @@ void ff_hevc_hls_residual_coding(HEVCContext *s, int x0, int y0, void ff_hevc_hls_mvd_coding(HEVCContext *s, int x0, int y0, int log2_cb_size); +/** + * Set the decodec side data to an AVFrame. + * @logctx context for logging. + * @sei HEVCSEI decoding context, must not be NULL. + * @s HEVCContext, can be NULL. + * @return < 0 on error, 0 otherwise. + */ +int ff_hevc_set_side_data(AVCodecContext *logctx, HEVCSEI *sei, HEVCContext *s, AVFrame *out); + extern const uint8_t ff_hevc_qpel_extra_before[4]; extern const uint8_t ff_hevc_qpel_extra_after[4]; extern const uint8_t ff_hevc_qpel_extra[4]; -- ffmpeg-codebot _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
* [FFmpeg-devel] [PATCH v2 5/6] avcodec/h264dec: make h264_export_frame_props() accessible 2022-06-01 9:06 ` [FFmpeg-devel] [PATCH v2 0/6] " ffmpegagent ` (3 preceding siblings ...) 2022-06-01 9:06 ` [FFmpeg-devel] [PATCH v2 4/6] avcodec/hevcdec: make set_side_data() accessible softworkz @ 2022-06-01 9:06 ` softworkz 2022-06-01 9:06 ` [FFmpeg-devel] [PATCH v2 6/6] avcodec/qsvdec: Implement SEI parsing for QSV decoders softworkz 2022-06-01 18:01 ` [FFmpeg-devel] [PATCH v3 0/6] " ffmpegagent 6 siblings, 0 replies; 65+ messages in thread From: softworkz @ 2022-06-01 9:06 UTC (permalink / raw) To: ffmpeg-devel; +Cc: softworkz, Xiang, Haihao From: softworkz <softworkz@hotmail.com> Signed-off-by: softworkz <softworkz@hotmail.com> --- libavcodec/h264_slice.c | 98 +++++++++++++++++++++-------------------- libavcodec/h264dec.h | 2 + 2 files changed, 52 insertions(+), 48 deletions(-) diff --git a/libavcodec/h264_slice.c b/libavcodec/h264_slice.c index d56722a5c2..f2a4c1c657 100644 --- a/libavcodec/h264_slice.c +++ b/libavcodec/h264_slice.c @@ -1157,11 +1157,10 @@ static int h264_init_ps(H264Context *h, const H264SliceContext *sl, int first_sl return 0; } -static int h264_export_frame_props(H264Context *h) +int ff_h264_export_frame_props(AVCodecContext *logctx, H264SEIContext *sei, H264Context *h, AVFrame *out) { - const SPS *sps = h->ps.sps; - H264Picture *cur = h->cur_pic_ptr; - AVFrame *out = cur->f; + const SPS *sps = h ? h->ps.sps : NULL; + H264Picture *cur = h ? h->cur_pic_ptr : NULL; out->interlaced_frame = 0; out->repeat_pict = 0; @@ -1169,19 +1168,19 @@ static int h264_export_frame_props(H264Context *h) /* Signal interlacing information externally. */ /* Prioritize picture timing SEI information over used * decoding process if it exists. */ - if (h->sei.picture_timing.present) { - int ret = ff_h264_sei_process_picture_timing(&h->sei.picture_timing, sps, - h->avctx); + if (sps && sei->picture_timing.present) { + int ret = ff_h264_sei_process_picture_timing(&sei->picture_timing, sps, + logctx); if (ret < 0) { - av_log(h->avctx, AV_LOG_ERROR, "Error processing a picture timing SEI\n"); - if (h->avctx->err_recognition & AV_EF_EXPLODE) + av_log(logctx, AV_LOG_ERROR, "Error processing a picture timing SEI\n"); + if (logctx->err_recognition & AV_EF_EXPLODE) return ret; - h->sei.picture_timing.present = 0; + sei->picture_timing.present = 0; } } - if (sps->pic_struct_present_flag && h->sei.picture_timing.present) { - H264SEIPictureTiming *pt = &h->sei.picture_timing; + if (h && sps && sps->pic_struct_present_flag && sei->picture_timing.present) { + H264SEIPictureTiming *pt = &sei->picture_timing; switch (pt->pic_struct) { case H264_SEI_PIC_STRUCT_FRAME: break; @@ -1215,21 +1214,23 @@ static int h264_export_frame_props(H264Context *h) if ((pt->ct_type & 3) && pt->pic_struct <= H264_SEI_PIC_STRUCT_BOTTOM_TOP) out->interlaced_frame = (pt->ct_type & (1 << 1)) != 0; - } else { + } else if (h) { /* Derive interlacing flag from used decoding process. */ out->interlaced_frame = FIELD_OR_MBAFF_PICTURE(h); } - h->prev_interlaced_frame = out->interlaced_frame; - if (cur->field_poc[0] != cur->field_poc[1]) { + if (h) + h->prev_interlaced_frame = out->interlaced_frame; + + if (sps && cur->field_poc[0] != cur->field_poc[1]) { /* Derive top_field_first from field pocs. 
*/ out->top_field_first = cur->field_poc[0] < cur->field_poc[1]; - } else { - if (sps->pic_struct_present_flag && h->sei.picture_timing.present) { + } else if (sps) { + if (sps->pic_struct_present_flag && sei->picture_timing.present) { /* Use picture timing SEI information. Even if it is a * information of a past frame, better than nothing. */ - if (h->sei.picture_timing.pic_struct == H264_SEI_PIC_STRUCT_TOP_BOTTOM || - h->sei.picture_timing.pic_struct == H264_SEI_PIC_STRUCT_TOP_BOTTOM_TOP) + if (sei->picture_timing.pic_struct == H264_SEI_PIC_STRUCT_TOP_BOTTOM || + sei->picture_timing.pic_struct == H264_SEI_PIC_STRUCT_TOP_BOTTOM_TOP) out->top_field_first = 1; else out->top_field_first = 0; @@ -1243,11 +1244,11 @@ static int h264_export_frame_props(H264Context *h) } } - if (h->sei.frame_packing.present && - h->sei.frame_packing.arrangement_type <= 6 && - h->sei.frame_packing.content_interpretation_type > 0 && - h->sei.frame_packing.content_interpretation_type < 3) { - H264SEIFramePacking *fp = &h->sei.frame_packing; + if (sei->frame_packing.present && + sei->frame_packing.arrangement_type <= 6 && + sei->frame_packing.content_interpretation_type > 0 && + sei->frame_packing.content_interpretation_type < 3) { + H264SEIFramePacking *fp = &sei->frame_packing; AVStereo3D *stereo = av_stereo3d_create_side_data(out); if (stereo) { switch (fp->arrangement_type) { @@ -1289,11 +1290,11 @@ static int h264_export_frame_props(H264Context *h) } } - if (h->sei.display_orientation.present && - (h->sei.display_orientation.anticlockwise_rotation || - h->sei.display_orientation.hflip || - h->sei.display_orientation.vflip)) { - H264SEIDisplayOrientation *o = &h->sei.display_orientation; + if (sei->display_orientation.present && + (sei->display_orientation.anticlockwise_rotation || + sei->display_orientation.hflip || + sei->display_orientation.vflip)) { + H264SEIDisplayOrientation *o = &sei->display_orientation; double angle = o->anticlockwise_rotation * 360 / (double) (1 << 16); AVFrameSideData *rotation = av_frame_new_side_data(out, AV_FRAME_DATA_DISPLAYMATRIX, @@ -1314,29 +1315,30 @@ static int h264_export_frame_props(H264Context *h) } } - if (h->sei.afd.present) { + if (sei->afd.present) { AVFrameSideData *sd = av_frame_new_side_data(out, AV_FRAME_DATA_AFD, sizeof(uint8_t)); if (sd) { - *sd->data = h->sei.afd.active_format_description; - h->sei.afd.present = 0; + *sd->data = sei->afd.active_format_description; + sei->afd.present = 0; } } - if (h->sei.a53_caption.buf_ref) { - H264SEIA53Caption *a53 = &h->sei.a53_caption; + if (sei->a53_caption.buf_ref) { + H264SEIA53Caption *a53 = &sei->a53_caption; AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_A53_CC, a53->buf_ref); if (!sd) av_buffer_unref(&a53->buf_ref); a53->buf_ref = NULL; - h->avctx->properties |= FF_CODEC_PROPERTY_CLOSED_CAPTIONS; + if (h) + h->avctx->properties |= FF_CODEC_PROPERTY_CLOSED_CAPTIONS; } - for (int i = 0; i < h->sei.unregistered.nb_buf_ref; i++) { - H264SEIUnregistered *unreg = &h->sei.unregistered; + for (int i = 0; i < sei->unregistered.nb_buf_ref; i++) { + H264SEIUnregistered *unreg = &sei->unregistered; if (unreg->buf_ref[i]) { AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, @@ -1347,10 +1349,10 @@ static int h264_export_frame_props(H264Context *h) unreg->buf_ref[i] = NULL; } } - h->sei.unregistered.nb_buf_ref = 0; + sei->unregistered.nb_buf_ref = 0; - if (h->sei.film_grain_characteristics.present) { - H264SEIFilmGrainCharacteristics *fgc = &h->sei.film_grain_characteristics; + if (h && sps && 
sei->film_grain_characteristics.present) { + H264SEIFilmGrainCharacteristics *fgc = &sei->film_grain_characteristics; AVFilmGrainParams *fgp = av_film_grain_params_create_side_data(out); if (!fgp) return AVERROR(ENOMEM); @@ -1404,7 +1406,7 @@ static int h264_export_frame_props(H264Context *h) h->avctx->properties |= FF_CODEC_PROPERTY_FILM_GRAIN; } - if (h->sei.picture_timing.timecode_cnt > 0) { + if (h && sei->picture_timing.timecode_cnt > 0) { uint32_t *tc_sd; char tcbuf[AV_TIMECODE_STR_SIZE]; @@ -1415,14 +1417,14 @@ static int h264_export_frame_props(H264Context *h) return AVERROR(ENOMEM); tc_sd = (uint32_t*)tcside->data; - tc_sd[0] = h->sei.picture_timing.timecode_cnt; + tc_sd[0] = sei->picture_timing.timecode_cnt; for (int i = 0; i < tc_sd[0]; i++) { - int drop = h->sei.picture_timing.timecode[i].dropframe; - int hh = h->sei.picture_timing.timecode[i].hours; - int mm = h->sei.picture_timing.timecode[i].minutes; - int ss = h->sei.picture_timing.timecode[i].seconds; - int ff = h->sei.picture_timing.timecode[i].frame; + int drop = sei->picture_timing.timecode[i].dropframe; + int hh = sei->picture_timing.timecode[i].hours; + int mm = sei->picture_timing.timecode[i].minutes; + int ss = sei->picture_timing.timecode[i].seconds; + int ff = sei->picture_timing.timecode[i].frame; tc_sd[i + 1] = av_timecode_get_smpte(h->avctx->framerate, drop, hh, mm, ss, ff); av_timecode_make_smpte_tc_string2(tcbuf, h->avctx->framerate, tc_sd[i + 1], 0, 0); @@ -1817,7 +1819,7 @@ static int h264_field_start(H264Context *h, const H264SliceContext *sl, * field coded frames, since some SEI information is present for each field * and is merged by the SEI parsing code. */ if (!FIELD_PICTURE(h) || !h->first_field || h->missing_fields > 1) { - ret = h264_export_frame_props(h); + ret = ff_h264_export_frame_props(h->avctx, &h->sei, h, h->cur_pic_ptr->f); if (ret < 0) return ret; diff --git a/libavcodec/h264dec.h b/libavcodec/h264dec.h index 9a1ec1bace..38930da4ca 100644 --- a/libavcodec/h264dec.h +++ b/libavcodec/h264dec.h @@ -808,4 +808,6 @@ void ff_h264_free_tables(H264Context *h); void ff_h264_set_erpic(ERPicture *dst, H264Picture *src); +int ff_h264_export_frame_props(AVCodecContext *logctx, H264SEIContext *sei, H264Context *h, AVFrame *out); + #endif /* AVCODEC_H264DEC_H */ -- ffmpeg-codebot _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
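A note on the relaxed signature: when ff_h264_export_frame_props() is called with h == NULL, as the QSV decoder in the next patch does, only the SEI-derived properties that need no decoder state are attached (frame packing, display orientation, AFD, A/53 captions, unregistered user data); picture timing, interlacing flags, film grain and timecodes are skipped. A tiny hedged sketch (the wrapper name is illustrative):

    /* Sketch: attach SEI-derived side data without an H264Context.
     * Assumes `sei` was filled by ff_h264_sei_decode() beforehand. */
    static int export_sei_props(AVCodecContext *avctx, H264SEIContext *sei, AVFrame *out)
    {
        return ff_h264_export_frame_props(avctx, sei, NULL, out);
    }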
* [FFmpeg-devel] [PATCH v2 6/6] avcodec/qsvdec: Implement SEI parsing for QSV decoders 2022-06-01 9:06 ` [FFmpeg-devel] [PATCH v2 0/6] " ffmpegagent ` (4 preceding siblings ...) 2022-06-01 9:06 ` [FFmpeg-devel] [PATCH v2 5/6] avcodec/h264dec: make h264_export_frame_props() accessible softworkz @ 2022-06-01 9:06 ` softworkz 2022-06-01 17:20 ` Xiang, Haihao 2022-06-01 18:01 ` [FFmpeg-devel] [PATCH v3 0/6] " ffmpegagent 6 siblings, 1 reply; 65+ messages in thread From: softworkz @ 2022-06-01 9:06 UTC (permalink / raw) To: ffmpeg-devel; +Cc: softworkz, Xiang, Haihao From: softworkz <softworkz@hotmail.com> Signed-off-by: softworkz <softworkz@hotmail.com> --- libavcodec/qsvdec.c | 234 ++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 234 insertions(+) diff --git a/libavcodec/qsvdec.c b/libavcodec/qsvdec.c index 5fc5bed4c8..3fc5dc3f20 100644 --- a/libavcodec/qsvdec.c +++ b/libavcodec/qsvdec.c @@ -49,6 +49,12 @@ #include "hwconfig.h" #include "qsv.h" #include "qsv_internal.h" +#include "h264dec.h" +#include "h264_sei.h" +#include "hevcdec.h" +#include "hevc_ps.h" +#include "hevc_sei.h" +#include "mpeg12.h" static const AVRational mfx_tb = { 1, 90000 }; @@ -60,6 +66,8 @@ static const AVRational mfx_tb = { 1, 90000 }; AV_NOPTS_VALUE : pts_tb.num ? \ av_rescale_q(mfx_pts, mfx_tb, pts_tb) : mfx_pts) +#define PAYLOAD_BUFFER_SIZE 65535 + typedef struct QSVAsyncFrame { mfxSyncPoint *sync; QSVFrame *frame; @@ -101,6 +109,9 @@ typedef struct QSVContext { mfxExtBuffer **ext_buffers; int nb_ext_buffers; + + mfxU8 payload_buffer[PAYLOAD_BUFFER_SIZE]; + Mpeg1Context mpeg_ctx; } QSVContext; static const AVCodecHWConfigInternal *const qsv_hw_configs[] = { @@ -599,6 +610,210 @@ static int qsv_export_film_grain(AVCodecContext *avctx, mfxExtAV1FilmGrainParam return 0; } #endif +static int find_start_offset(mfxU8 data[4]) +{ + if (data[0] == 0 && data[1] == 0 && data[2] == 1) + return 3; + + if (data[0] == 0 && data[1] == 0 && data[2] == 0 && data[3] == 1) + return 4; + + return 0; +} + +static int parse_sei_h264(AVCodecContext* avctx, QSVContext* q, AVFrame* out) +{ + H264SEIContext sei = { 0 }; + GetBitContext gb = { 0 }; + mfxPayload payload = { 0, .Data = &q->payload_buffer[0], .BufSize = sizeof(q->payload_buffer) }; + mfxU64 ts; + int ret; + + while (1) { + int start; + memset(payload.Data, 0, payload.BufSize); + + ret = MFXVideoDECODE_GetPayload(q->session, &ts, &payload); + if (ret == MFX_ERR_NOT_ENOUGH_BUFFER) { + av_log(avctx, AV_LOG_WARNING, "Warning: Insufficient buffer on GetPayload(). 
Size: %"PRIu64" Needed: %d\n", sizeof(q->payload_buffer), payload.BufSize); + return 0; + } + if (ret != MFX_ERR_NONE) + return ret; + + if (payload.NumBit == 0 || payload.NumBit >= payload.BufSize * 8) + break; + + start = find_start_offset(payload.Data); + + switch (payload.Type) { + case SEI_TYPE_BUFFERING_PERIOD: + case SEI_TYPE_PIC_TIMING: + continue; + } + + if (init_get_bits(&gb, &payload.Data[start], payload.NumBit - start * 8) < 0) + av_log(avctx, AV_LOG_ERROR, "Error initializing bitstream reader SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); + else { + ret = ff_h264_sei_decode(&sei, &gb, NULL, avctx); + + if (ret < 0) + av_log(avctx, AV_LOG_WARNING, "Failed to parse SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); + else + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d Numbits %d\n", payload.Type, payload.NumBit); + } + } + + if (out) + return ff_h264_export_frame_props(avctx, &sei, NULL, out); + + return 0; +} + +static int parse_sei_hevc(AVCodecContext* avctx, QSVContext* q, QSVFrame* out) +{ + HEVCSEI sei = { 0 }; + HEVCParamSets ps = { 0 }; + GetBitContext gb = { 0 }; + mfxPayload payload = { 0, .Data = &q->payload_buffer[0], .BufSize = sizeof(q->payload_buffer) }; + mfxFrameSurface1 *surface = &out->surface; + mfxU64 ts; + int ret, has_logged = 0; + + while (1) { + int start; + memset(payload.Data, 0, payload.BufSize); + + ret = MFXVideoDECODE_GetPayload(q->session, &ts, &payload); + if (ret == MFX_ERR_NOT_ENOUGH_BUFFER) { + av_log(avctx, AV_LOG_WARNING, "Warning: Insufficient buffer on GetPayload(). Size: %"PRIu64" Needed: %d\n", sizeof(q->payload_buffer), payload.BufSize); + return 0; + } + if (ret != MFX_ERR_NONE) + return ret; + + if (payload.NumBit == 0 || payload.NumBit >= payload.BufSize * 8) + break; + + if (!has_logged) { + has_logged = 1; + av_log(avctx, AV_LOG_VERBOSE, "-----------------------------------------\n"); + av_log(avctx, AV_LOG_VERBOSE, "Start reading SEI - payload timestamp: %llu - surface timestamp: %llu\n", ts, surface->Data.TimeStamp); + } + + if (ts != surface->Data.TimeStamp) { + av_log(avctx, AV_LOG_WARNING, "GetPayload timestamp (%llu) does not match surface timestamp: (%llu)\n", ts, surface->Data.TimeStamp); + } + + start = find_start_offset(payload.Data); + + av_log(avctx, AV_LOG_VERBOSE, "parsing SEI type: %3d Numbits %3d Start: %d\n", payload.Type, payload.NumBit, start); + + switch (payload.Type) { + case SEI_TYPE_BUFFERING_PERIOD: + case SEI_TYPE_PIC_TIMING: + continue; + case SEI_TYPE_MASTERING_DISPLAY_COLOUR_VOLUME: + // There seems to be a bug in MSDK + payload.NumBit -= 8; + + break; + case SEI_TYPE_CONTENT_LIGHT_LEVEL_INFO: + // There seems to be a bug in MSDK + payload.NumBit = 48; + + break; + case SEI_TYPE_USER_DATA_REGISTERED_ITU_T_T35: + // There seems to be a bug in MSDK + if (payload.NumBit == 552) + payload.NumBit = 528; + break; + } + + if (init_get_bits(&gb, &payload.Data[start], payload.NumBit - start * 8) < 0) + av_log(avctx, AV_LOG_ERROR, "Error initializing bitstream reader SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); + else { + ret = ff_h264_sei_decode(&sei, &gb, NULL, avctx); + + if (ret < 0) + av_log(avctx, AV_LOG_WARNING, "Failed to parse SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); + else + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d Numbits %d\n", payload.Type, payload.NumBit); + } + } + + if (has_logged) { + av_log(avctx, AV_LOG_VERBOSE, "End reading SEI\n"); + } + + if (out && out->frame) + return 
ff_hevc_set_side_data(avctx, &sei, NULL, out->frame); + + return 0; +} + +static int parse_sei_mpeg12(AVCodecContext* avctx, QSVContext* q, AVFrame* out) +{ + Mpeg1Context *mpeg_ctx = &q->mpeg_ctx; + mfxPayload payload = { 0, .Data = &q->payload_buffer[0], .BufSize = sizeof(q->payload_buffer) }; + mfxU64 ts; + int ret; + + while (1) { + int start; + + memset(payload.Data, 0, payload.BufSize); + ret = MFXVideoDECODE_GetPayload(q->session, &ts, &payload); + if (ret == MFX_ERR_NOT_ENOUGH_BUFFER) { + av_log(avctx, AV_LOG_WARNING, "Warning: Insufficient buffer on GetPayload(). Size: %"PRIu64" Needed: %d\n", sizeof(q->payload_buffer), payload.BufSize); + return 0; + } + if (ret != MFX_ERR_NONE) + return ret; + + if (payload.NumBit == 0 || payload.NumBit >= payload.BufSize * 8) + break; + + start = find_start_offset(payload.Data); + + start++; + + ff_mpeg_decode_user_data(avctx, mpeg_ctx, &payload.Data[start], (int)((payload.NumBit + 7) / 8) - start); + + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d Numbits %d start %d -> %.s\n", payload.Type, payload.NumBit, start, (char *)(&payload.Data[start])); + } + + if (!out) + return 0; + + if (mpeg_ctx->a53_buf_ref) { + + AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_A53_CC, mpeg_ctx->a53_buf_ref); + if (!sd) + av_buffer_unref(&mpeg_ctx->a53_buf_ref); + mpeg_ctx->a53_buf_ref = NULL; + } + + if (mpeg_ctx->has_stereo3d) { + AVStereo3D *stereo = av_stereo3d_create_side_data(out); + if (!stereo) + return AVERROR(ENOMEM); + + *stereo = mpeg_ctx->stereo3d; + mpeg_ctx->has_stereo3d = 0; + } + + if (mpeg_ctx->has_afd) { + AVFrameSideData *sd = av_frame_new_side_data(out, AV_FRAME_DATA_AFD, 1); + if (!sd) + return AVERROR(ENOMEM); + + *sd->data = mpeg_ctx->afd; + mpeg_ctx->has_afd = 0; + } + + return 0; +} static int qsv_decode(AVCodecContext *avctx, QSVContext *q, AVFrame *frame, int *got_frame, @@ -636,6 +851,8 @@ static int qsv_decode(AVCodecContext *avctx, QSVContext *q, insurf, &outsurf, sync); if (ret == MFX_WRN_DEVICE_BUSY) av_usleep(500); + else if (avctx->codec_id == AV_CODEC_ID_MPEG2VIDEO) + parse_sei_mpeg12(avctx, q, NULL); } while (ret == MFX_WRN_DEVICE_BUSY || ret == MFX_ERR_MORE_SURFACE); @@ -677,6 +894,23 @@ static int qsv_decode(AVCodecContext *avctx, QSVContext *q, return AVERROR_BUG; } + switch (avctx->codec_id) { + case AV_CODEC_ID_MPEG2VIDEO: + ret = parse_sei_mpeg12(avctx, q, out_frame->frame); + break; + case AV_CODEC_ID_H264: + ret = parse_sei_h264(avctx, q, out_frame->frame); + break; + case AV_CODEC_ID_HEVC: + ret = parse_sei_hevc(avctx, q, out_frame); + break; + default: + ret = 0; + } + + if (ret < 0) + av_log(avctx, AV_LOG_ERROR, "Error parsing SEI data: %d\n", ret); + out_frame->queued += 1; aframe = (QSVAsyncFrame){ sync, out_frame }; -- ffmpeg-codebot _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
* Re: [FFmpeg-devel] [PATCH v2 6/6] avcodec/qsvdec: Implement SEI parsing for QSV decoders 2022-06-01 9:06 ` [FFmpeg-devel] [PATCH v2 6/6] avcodec/qsvdec: Implement SEI parsing for QSV decoders softworkz @ 2022-06-01 17:20 ` Xiang, Haihao 2022-06-01 17:50 ` Soft Works 0 siblings, 1 reply; 65+ messages in thread From: Xiang, Haihao @ 2022-06-01 17:20 UTC (permalink / raw) To: ffmpeg-devel; +Cc: softworkz, haihao.xiang-at-intel.com On Wed, 2022-06-01 at 09:06 +0000, softworkz wrote: > From: softworkz <softworkz@hotmail.com> > > Signed-off-by: softworkz <softworkz@hotmail.com> > --- > libavcodec/qsvdec.c | 234 ++++++++++++++++++++++++++++++++++++++++++++ > 1 file changed, 234 insertions(+) > > diff --git a/libavcodec/qsvdec.c b/libavcodec/qsvdec.c > index 5fc5bed4c8..3fc5dc3f20 100644 > --- a/libavcodec/qsvdec.c > +++ b/libavcodec/qsvdec.c > @@ -49,6 +49,12 @@ > #include "hwconfig.h" > #include "qsv.h" > #include "qsv_internal.h" > +#include "h264dec.h" > +#include "h264_sei.h" > +#include "hevcdec.h" > +#include "hevc_ps.h" > +#include "hevc_sei.h" > +#include "mpeg12.h" > > [...] > + > +static int parse_sei_hevc(AVCodecContext* avctx, QSVContext* q, QSVFrame* > out) > +{ > + HEVCSEI sei = { 0 }; > + HEVCParamSets ps = { 0 }; ps is not used. > + GetBitContext gb = { 0 }; > + mfxPayload payload = { 0, .Data = &q->payload_buffer[0], .BufSize = > sizeof(q->payload_buffer) }; > + mfxFrameSurface1 *surface = &out->surface; > + mfxU64 ts; > + int ret, has_logged = 0; > + > + while (1) { > + int start; > + memset(payload.Data, 0, payload.BufSize); > + > + ret = MFXVideoDECODE_GetPayload(q->session, &ts, &payload); > + if (ret == MFX_ERR_NOT_ENOUGH_BUFFER) { > + av_log(avctx, AV_LOG_WARNING, "Warning: Insufficient buffer on > GetPayload(). Size: %"PRIu64" Needed: %d\n", sizeof(q->payload_buffer), > payload.BufSize); > + return 0; > + } > + if (ret != MFX_ERR_NONE) > + return ret; > + > + if (payload.NumBit == 0 || payload.NumBit >= payload.BufSize * 8) > + break; > + > + if (!has_logged) { > + has_logged = 1; > + av_log(avctx, AV_LOG_VERBOSE, "-------------------------------- > ---------\n"); > + av_log(avctx, AV_LOG_VERBOSE, "Start reading SEI - payload > timestamp: %llu - surface timestamp: %llu\n", ts, surface->Data.TimeStamp); > + } > + > + if (ts != surface->Data.TimeStamp) { > + av_log(avctx, AV_LOG_WARNING, "GetPayload timestamp (%llu) does > not match surface timestamp: (%llu)\n", ts, surface->Data.TimeStamp); > + } > + > + start = find_start_offset(payload.Data); > + > + av_log(avctx, AV_LOG_VERBOSE, "parsing SEI type: %3d Numbits > %3d Start: %d\n", payload.Type, payload.NumBit, start); > + > + switch (payload.Type) { > + case SEI_TYPE_BUFFERING_PERIOD: > + case SEI_TYPE_PIC_TIMING: > + continue; > + case SEI_TYPE_MASTERING_DISPLAY_COLOUR_VOLUME: > + // There seems to be a bug in MSDK > + payload.NumBit -= 8; > + > + break; > + case SEI_TYPE_CONTENT_LIGHT_LEVEL_INFO: > + // There seems to be a bug in MSDK > + payload.NumBit = 48; > + > + break; > + case SEI_TYPE_USER_DATA_REGISTERED_ITU_T_T35: > + // There seems to be a bug in MSDK > + if (payload.NumBit == 552) > + payload.NumBit = 528; > + break; > + } > + > + if (init_get_bits(&gb, &payload.Data[start], payload.NumBit - start * > 8) < 0) > + av_log(avctx, AV_LOG_ERROR, "Error initializing bitstream reader > SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); > + else { > + ret = ff_h264_sei_decode(&sei, &gb, NULL, avctx); The type of &sei is HEVCSEI *, however the type of the first argument of ff_h264_sei_decode() 
is H264SEIContext *. ff_h264_sei_decode() can't be used to parse hevc sei. BRs Haihao > + > + if (ret < 0) > + av_log(avctx, AV_LOG_WARNING, "Failed to parse SEI type: > %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); > + else > + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d Numbits > %d\n", payload.Type, payload.NumBit); > + } > + } > + > + if (has_logged) { > + av_log(avctx, AV_LOG_VERBOSE, "End reading SEI\n"); > + } > + > + if (out && out->frame) > + return ff_hevc_set_side_data(avctx, &sei, NULL, out->frame); > + > + return 0; > +} > + > +static int parse_sei_mpeg12(AVCodecContext* avctx, QSVContext* q, AVFrame* > out) > +{ > + Mpeg1Context *mpeg_ctx = &q->mpeg_ctx; > + mfxPayload payload = { 0, .Data = &q->payload_buffer[0], .BufSize = > sizeof(q->payload_buffer) }; > + mfxU64 ts; > + int ret; > + > + while (1) { > + int start; > + > + memset(payload.Data, 0, payload.BufSize); > + ret = MFXVideoDECODE_GetPayload(q->session, &ts, &payload); > + if (ret == MFX_ERR_NOT_ENOUGH_BUFFER) { > + av_log(avctx, AV_LOG_WARNING, "Warning: Insufficient buffer on > GetPayload(). Size: %"PRIu64" Needed: %d\n", sizeof(q->payload_buffer), > payload.BufSize); > + return 0; > + } > + if (ret != MFX_ERR_NONE) > + return ret; > + > + if (payload.NumBit == 0 || payload.NumBit >= payload.BufSize * 8) > + break; > + > + start = find_start_offset(payload.Data); > + > + start++; > + > + ff_mpeg_decode_user_data(avctx, mpeg_ctx, &payload.Data[start], > (int)((payload.NumBit + 7) / 8) - start); > + > + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d Numbits %d start %d > -> %.s\n", payload.Type, payload.NumBit, start, (char > *)(&payload.Data[start])); > + } > + > + if (!out) > + return 0; > + > + if (mpeg_ctx->a53_buf_ref) { > + > + AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, > AV_FRAME_DATA_A53_CC, mpeg_ctx->a53_buf_ref); > + if (!sd) > + av_buffer_unref(&mpeg_ctx->a53_buf_ref); > + mpeg_ctx->a53_buf_ref = NULL; > + } > + > + if (mpeg_ctx->has_stereo3d) { > + AVStereo3D *stereo = av_stereo3d_create_side_data(out); > + if (!stereo) > + return AVERROR(ENOMEM); > + > + *stereo = mpeg_ctx->stereo3d; > + mpeg_ctx->has_stereo3d = 0; > + } > + > + if (mpeg_ctx->has_afd) { > + AVFrameSideData *sd = av_frame_new_side_data(out, AV_FRAME_DATA_AFD, > 1); > + if (!sd) > + return AVERROR(ENOMEM); > + > + *sd->data = mpeg_ctx->afd; > + mpeg_ctx->has_afd = 0; > + } > + > + return 0; > +} > > static int qsv_decode(AVCodecContext *avctx, QSVContext *q, > AVFrame *frame, int *got_frame, > @@ -636,6 +851,8 @@ static int qsv_decode(AVCodecContext *avctx, QSVContext > *q, > insurf, &outsurf, sync); > if (ret == MFX_WRN_DEVICE_BUSY) > av_usleep(500); > + else if (avctx->codec_id == AV_CODEC_ID_MPEG2VIDEO) > + parse_sei_mpeg12(avctx, q, NULL); > > } while (ret == MFX_WRN_DEVICE_BUSY || ret == MFX_ERR_MORE_SURFACE); > > @@ -677,6 +894,23 @@ static int qsv_decode(AVCodecContext *avctx, QSVContext > *q, > return AVERROR_BUG; > } > > + switch (avctx->codec_id) { > + case AV_CODEC_ID_MPEG2VIDEO: > + ret = parse_sei_mpeg12(avctx, q, out_frame->frame); > + break; > + case AV_CODEC_ID_H264: > + ret = parse_sei_h264(avctx, q, out_frame->frame); > + break; > + case AV_CODEC_ID_HEVC: > + ret = parse_sei_hevc(avctx, q, out_frame); > + break; > + default: > + ret = 0; > + } > + > + if (ret < 0) > + av_log(avctx, AV_LOG_ERROR, "Error parsing SEI data: %d\n", ret); > + > out_frame->queued += 1; > > aframe = (QSVAsyncFrame){ sync, out_frame }; _______________________________________________ ffmpeg-devel mailing 
list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
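For clarity, this is the direction the fix takes in v3 (see the range-diff in the v3 cover letter below): the HEVC path keeps the same payload loop but hands the bitstream to the HEVC SEI reader instead of ff_h264_sei_decode(). A condensed sketch, reusing the locals (gb, sei, ps, payload, avctx, ret) already declared in parse_sei_hevc(); the shortened log strings are illustrative only.

    /* Corrected HEVC branch (as per v3): parse with the HEVC SEI reader. */
    if (init_get_bits(&gb, &payload.Data[start], payload.NumBit - start * 8) < 0) {
        av_log(avctx, AV_LOG_ERROR, "Error initializing bitstream reader\n");
    } else {
        ret = ff_hevc_decode_nal_sei(&gb, avctx, &sei, &ps, HEVC_NAL_SEI_PREFIX);
        if (ret < 0)
            av_log(avctx, AV_LOG_WARNING, "Failed to parse SEI type %d\n", payload.Type);
    }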
* Re: [FFmpeg-devel] [PATCH v2 6/6] avcodec/qsvdec: Implement SEI parsing for QSV decoders 2022-06-01 17:20 ` Xiang, Haihao @ 2022-06-01 17:50 ` Soft Works 0 siblings, 0 replies; 65+ messages in thread From: Soft Works @ 2022-06-01 17:50 UTC (permalink / raw) To: Xiang, Haihao, ffmpeg-devel; +Cc: haihao.xiang-at-intel.com > -----Original Message----- > From: Xiang, Haihao <haihao.xiang@intel.com> > Sent: Wednesday, June 1, 2022 7:20 PM > To: ffmpeg-devel@ffmpeg.org > Cc: haihao.xiang-at-intel.com@ffmpeg.org; softworkz@hotmail.com > Subject: Re: [FFmpeg-devel] [PATCH v2 6/6] avcodec/qsvdec: Implement SEI > parsing for QSV decoders > > On Wed, 2022-06-01 at 09:06 +0000, softworkz wrote: > > From: softworkz <softworkz@hotmail.com> > > > > Signed-off-by: softworkz <softworkz@hotmail.com> > > --- > > libavcodec/qsvdec.c | 234 ++++++++++++++++++++++++++++++++++++++++++++ > > 1 file changed, 234 insertions(+) > > > > diff --git a/libavcodec/qsvdec.c b/libavcodec/qsvdec.c > > index 5fc5bed4c8..3fc5dc3f20 100644 > > --- a/libavcodec/qsvdec.c > > +++ b/libavcodec/qsvdec.c > > @@ -49,6 +49,12 @@ > > #include "hwconfig.h" > > #include "qsv.h" > > #include "qsv_internal.h" > > +#include "h264dec.h" > > +#include "h264_sei.h" > > +#include "hevcdec.h" > > +#include "hevc_ps.h" > > +#include "hevc_sei.h" > > +#include "mpeg12.h" > > > > > > [...] > > 8) < 0) > > + av_log(avctx, AV_LOG_ERROR, "Error initializing bitstream > reader > > SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); > > + else { > > + ret = ff_h264_sei_decode(&sei, &gb, NULL, avctx); > > The type of &sei is HEVCSEI *, however the type of the first argument of > ff_h264_sei_decode() is H264SEIContext *. ff_h264_sei_decode() can't be used > to > parse hevc sei. Oops - I know how it happened but I wonder that it compiled without error.. Anyway, it's fixed now. > ps is not used. Now it is :-) Thanks a lot, softworkz _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
* [FFmpeg-devel] [PATCH v3 0/6] Implement SEI parsing for QSV decoders 2022-06-01 9:06 ` [FFmpeg-devel] [PATCH v2 0/6] " ffmpegagent ` (5 preceding siblings ...) 2022-06-01 9:06 ` [FFmpeg-devel] [PATCH v2 6/6] avcodec/qsvdec: Implement SEI parsing for QSV decoders softworkz @ 2022-06-01 18:01 ` ffmpegagent 2022-06-01 18:01 ` [FFmpeg-devel] [PATCH v3 1/6] avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() softworkz ` (6 more replies) 6 siblings, 7 replies; 65+ messages in thread From: ffmpegagent @ 2022-06-01 18:01 UTC (permalink / raw) To: ffmpeg-devel; +Cc: softworkz, Xiang, Haihao Missing SEI information has always been a major drawback when using the QSV decoders. I used to think that there's no chance to get at the data without explicit implementation from the MSDK side (or doing something weird like parsing in parallel). It turned out that there's a hardly known api method that provides access to all SEI (h264/hevc) or user data (mpeg2video). This allows to get things like closed captions, frame packing, display orientation, HDR data (mastering display, content light level, etc.) without having to rely on those data being provided by the MSDK as extended buffers. The commit "Implement SEI parsing for QSV decoders" includes some hard-coded workarounds for MSDK bugs which I reported: https://github.com/Intel-Media-SDK/MediaSDK/issues/2597#issuecomment-1072795311 But that doesn't help. Those bugs exist and I'm sharing my workarounds, which are empirically determined by testing a range of files. If someone is interested, I can provide private access to a repository where we have been testing this. Alternatively, I could also leave those workarounds out, and just skip those SEI types. In a previous version of this patchset, there was a concern that payload data might need to be re-ordered. Meanwhile I have researched this carefully and the conclusion is that this is not required. 
My detailed analysis can be found here: https://gist.github.com/softworkz/36c49586a8610813a32270ee3947a932 v2 * qsvdec: make error handling consistent and clear * qsvdec: remove AV_CODEC_ID_MPEG1VIDEO constants * hevcdec: rename function to ff_hevc_set_side_data(), add doc text v3 * qsvdec: fix c/p error softworkz (6): avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() avcodec/vpp_qsv: Copy side data from input to output frame avcodec/mpeg12dec: make mpeg_decode_user_data() accessible avcodec/hevcdec: make set_side_data() accessible avcodec/h264dec: make h264_export_frame_props() accessible avcodec/qsvdec: Implement SEI parsing for QSV decoders doc/APIchanges | 4 + libavcodec/h264_slice.c | 98 ++++++++------- libavcodec/h264dec.h | 2 + libavcodec/hevcdec.c | 117 +++++++++--------- libavcodec/hevcdec.h | 9 ++ libavcodec/mpeg12.h | 28 +++++ libavcodec/mpeg12dec.c | 40 +----- libavcodec/qsvdec.c | 234 +++++++++++++++++++++++++++++++++++ libavfilter/qsvvpp.c | 6 + libavfilter/vf_overlay_qsv.c | 19 ++- libavutil/frame.c | 67 ++++++---- libavutil/frame.h | 32 +++++ libavutil/version.h | 2 +- 13 files changed, 485 insertions(+), 173 deletions(-) base-commit: 77b529fbd228fe30a870e3157f051885b436ad92 Published-As: https://github.com/ffstaging/FFmpeg/releases/tag/pr-ffstaging-31%2Fsoftworkz%2Fsubmit_qsv_sei-v3 Fetch-It-Via: git fetch https://github.com/ffstaging/FFmpeg pr-ffstaging-31/softworkz/submit_qsv_sei-v3 Pull-Request: https://github.com/ffstaging/FFmpeg/pull/31 Range-diff vs v2: 1: 4ee6cb47db = 1: c442597a35 avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() 2: 3152156c97 = 2: 6f50d0bd57 avcodec/vpp_qsv: Copy side data from input to output frame 3: 8082c3ab84 = 3: f682b1d695 avcodec/mpeg12dec: make mpeg_decode_user_data() accessible 4: 306bdaa39c = 4: 995d835233 avcodec/hevcdec: make set_side_data() accessible 5: 16f5dfbfd1 = 5: ac8dc06395 avcodec/h264dec: make h264_export_frame_props() accessible 6: 23de6d2774 ! 6: 27c3dded4d avcodec/qsvdec: Implement SEI parsing for QSV decoders @@ libavcodec/qsvdec.c: static int qsv_export_film_grain(AVCodecContext *avctx, mfx + if (init_get_bits(&gb, &payload.Data[start], payload.NumBit - start * 8) < 0) + av_log(avctx, AV_LOG_ERROR, "Error initializing bitstream reader SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); + else { -+ ret = ff_h264_sei_decode(&sei, &gb, NULL, avctx); ++ ret = ff_hevc_decode_nal_sei(&gb, avctx, &sei, &ps, HEVC_NAL_SEI_PREFIX); + + if (ret < 0) + av_log(avctx, AV_LOG_WARNING, "Failed to parse SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); -- ffmpeg-codebot _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
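To make the cover letter's point concrete: the "hardly known api method" is MFXVideoDECODE_GetPayload(), which the patch drains in a loop per decoded surface. A reduced sketch of that loop, assuming the QSVContext fields (session, payload_buffer) introduced by patch 6/6; error handling is trimmed to the essentials.

    /* Reduced sketch of the payload retrieval loop from patch 6/6. */
    mfxPayload payload = { 0, .Data    = &q->payload_buffer[0],
                              .BufSize = sizeof(q->payload_buffer) };
    mfxU64 ts;
    int ret;

    while (1) {
        memset(payload.Data, 0, payload.BufSize);
        ret = MFXVideoDECODE_GetPayload(q->session, &ts, &payload);
        if (ret != MFX_ERR_NONE)
            break;  /* also covers MFX_ERR_NOT_ENOUGH_BUFFER, which the patch logs */
        if (payload.NumBit == 0 || payload.NumBit >= payload.BufSize * 8)
            break;  /* no further payloads queued for this surface */
        /* payload.Type / payload.Data now hold one raw SEI (or user data) message */
    }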
* [FFmpeg-devel] [PATCH v3 1/6] avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() 2022-06-01 18:01 ` [FFmpeg-devel] [PATCH v3 0/6] " ffmpegagent @ 2022-06-01 18:01 ` softworkz 2022-06-24 7:01 ` Xiang, Haihao 2022-06-01 18:01 ` [FFmpeg-devel] [PATCH v3 2/6] avcodec/vpp_qsv: Copy side data from input to output frame softworkz ` (5 subsequent siblings) 6 siblings, 1 reply; 65+ messages in thread From: softworkz @ 2022-06-01 18:01 UTC (permalink / raw) To: ffmpeg-devel; +Cc: softworkz, Xiang, Haihao From: softworkz <softworkz@hotmail.com> Signed-off-by: softworkz <softworkz@hotmail.com> Signed-off-by: Anton Khirnov <anton@khirnov.net> --- doc/APIchanges | 4 +++ libavutil/frame.c | 67 +++++++++++++++++++++++++++------------------ libavutil/frame.h | 32 ++++++++++++++++++++++ libavutil/version.h | 2 +- 4 files changed, 78 insertions(+), 27 deletions(-) diff --git a/doc/APIchanges b/doc/APIchanges index 337f1466d8..e5dd6f1e83 100644 --- a/doc/APIchanges +++ b/doc/APIchanges @@ -14,6 +14,10 @@ libavutil: 2021-04-27 API changes, most recent first: +2022-05-26 - xxxxxxxxx - lavu 57.26.100 - frame.h + Add av_frame_remove_all_side_data(), av_frame_copy_side_data(), + AV_FRAME_TRANSFER_SD_COPY, and AV_FRAME_TRANSFER_SD_FILTER. + 2022-05-23 - xxxxxxxxx - lavu 57.25.100 - avutil.h Deprecate av_fopen_utf8() without replacement. diff --git a/libavutil/frame.c b/libavutil/frame.c index fbb869fffa..bfe575612d 100644 --- a/libavutil/frame.c +++ b/libavutil/frame.c @@ -271,9 +271,45 @@ FF_ENABLE_DEPRECATION_WARNINGS return AVERROR(EINVAL); } +void av_frame_remove_all_side_data(AVFrame *frame) +{ + wipe_side_data(frame); +} + +int av_frame_copy_side_data(AVFrame* dst, const AVFrame* src, int flags) +{ + for (unsigned i = 0; i < src->nb_side_data; i++) { + const AVFrameSideData *sd_src = src->side_data[i]; + AVFrameSideData *sd_dst; + if ((flags & AV_FRAME_TRANSFER_SD_FILTER) && + sd_src->type == AV_FRAME_DATA_PANSCAN && + (src->width != dst->width || src->height != dst->height)) + continue; + if (flags & AV_FRAME_TRANSFER_SD_COPY) { + sd_dst = av_frame_new_side_data(dst, sd_src->type, + sd_src->size); + if (!sd_dst) { + wipe_side_data(dst); + return AVERROR(ENOMEM); + } + memcpy(sd_dst->data, sd_src->data, sd_src->size); + } else { + AVBufferRef *ref = av_buffer_ref(sd_src->buf); + sd_dst = av_frame_new_side_data_from_buf(dst, sd_src->type, ref); + if (!sd_dst) { + av_buffer_unref(&ref); + wipe_side_data(dst); + return AVERROR(ENOMEM); + } + } + av_dict_copy(&sd_dst->metadata, sd_src->metadata, 0); + } + return 0; +} + static int frame_copy_props(AVFrame *dst, const AVFrame *src, int force_copy) { - int ret, i; + int ret; dst->key_frame = src->key_frame; dst->pict_type = src->pict_type; @@ -309,31 +345,10 @@ static int frame_copy_props(AVFrame *dst, const AVFrame *src, int force_copy) av_dict_copy(&dst->metadata, src->metadata, 0); - for (i = 0; i < src->nb_side_data; i++) { - const AVFrameSideData *sd_src = src->side_data[i]; - AVFrameSideData *sd_dst; - if ( sd_src->type == AV_FRAME_DATA_PANSCAN - && (src->width != dst->width || src->height != dst->height)) - continue; - if (force_copy) { - sd_dst = av_frame_new_side_data(dst, sd_src->type, - sd_src->size); - if (!sd_dst) { - wipe_side_data(dst); - return AVERROR(ENOMEM); - } - memcpy(sd_dst->data, sd_src->data, sd_src->size); - } else { - AVBufferRef *ref = av_buffer_ref(sd_src->buf); - sd_dst = av_frame_new_side_data_from_buf(dst, sd_src->type, ref); - if (!sd_dst) { - av_buffer_unref(&ref); - wipe_side_data(dst); - return 
AVERROR(ENOMEM); - } - } - av_dict_copy(&sd_dst->metadata, sd_src->metadata, 0); - } + if ((ret = av_frame_copy_side_data(dst, src, + (force_copy ? AV_FRAME_TRANSFER_SD_COPY : 0) | + AV_FRAME_TRANSFER_SD_FILTER) < 0)) + return ret; ret = av_buffer_replace(&dst->opaque_ref, src->opaque_ref); ret |= av_buffer_replace(&dst->private_ref, src->private_ref); diff --git a/libavutil/frame.h b/libavutil/frame.h index 33fac2054c..a868fa70d7 100644 --- a/libavutil/frame.h +++ b/libavutil/frame.h @@ -850,6 +850,30 @@ int av_frame_copy(AVFrame *dst, const AVFrame *src); */ int av_frame_copy_props(AVFrame *dst, const AVFrame *src); + +/** + * Copy side data, rather than creating new references. + */ +#define AV_FRAME_TRANSFER_SD_COPY (1 << 0) +/** + * Filter out side data that does not match dst properties. + */ +#define AV_FRAME_TRANSFER_SD_FILTER (1 << 1) + +/** + * Copy all side-data from src to dst. + * + * @param dst a frame to which the side data should be copied. + * @param src a frame from which to copy the side data. + * @param flags a combination of AV_FRAME_TRANSFER_SD_* + * + * @return >= 0 on success, a negative AVERROR on error. + * + * @note This function will create new references to side data buffers in src, + * unless the AV_FRAME_TRANSFER_SD_COPY flag is passed. + */ +int av_frame_copy_side_data(AVFrame* dst, const AVFrame* src, int flags); + /** * Get the buffer reference a given data plane is stored in. * @@ -901,6 +925,14 @@ AVFrameSideData *av_frame_get_side_data(const AVFrame *frame, */ void av_frame_remove_side_data(AVFrame *frame, enum AVFrameSideDataType type); +/** + * Remove and free all side data instances. + * + * @param frame from which to remove all side data. + */ +void av_frame_remove_all_side_data(AVFrame *frame); + + /** * Flags for frame cropping. diff --git a/libavutil/version.h b/libavutil/version.h index 1b4b41d81f..2c7f4f6b37 100644 --- a/libavutil/version.h +++ b/libavutil/version.h @@ -79,7 +79,7 @@ */ #define LIBAVUTIL_VERSION_MAJOR 57 -#define LIBAVUTIL_VERSION_MINOR 25 +#define LIBAVUTIL_VERSION_MINOR 26 #define LIBAVUTIL_VERSION_MICRO 100 #define LIBAVUTIL_VERSION_INT AV_VERSION_INT(LIBAVUTIL_VERSION_MAJOR, \ -- ffmpeg-codebot _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
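A short usage sketch of the two helpers this patch introduces, written as a caller outside frame.c would use them; dst and src are placeholder AVFrame pointers owned by the hypothetical caller.

    /* Drop whatever side data dst carries, then re-attach src's side data.
     * With AV_FRAME_TRANSFER_SD_COPY the payloads are deep-copied; without it,
     * only new AVBufferRef references are created (cheap). */
    int ret;

    av_frame_remove_all_side_data(dst);
    ret = av_frame_copy_side_data(dst, src,
                                  AV_FRAME_TRANSFER_SD_COPY |
                                  AV_FRAME_TRANSFER_SD_FILTER);
    if (ret < 0)
        return ret;   /* typically AVERROR(ENOMEM) */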
* Re: [FFmpeg-devel] [PATCH v3 1/6] avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() 2022-06-01 18:01 ` [FFmpeg-devel] [PATCH v3 1/6] avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() softworkz @ 2022-06-24 7:01 ` Xiang, Haihao 2022-06-26 23:35 ` Soft Works 0 siblings, 1 reply; 65+ messages in thread From: Xiang, Haihao @ 2022-06-24 7:01 UTC (permalink / raw) To: ffmpeg-devel; +Cc: softworkz, haihao.xiang-at-intel.com On Wed, 2022-06-01 at 18:01 +0000, softworkz wrote: > From: softworkz <softworkz@hotmail.com> > > Signed-off-by: softworkz <softworkz@hotmail.com> > Signed-off-by: Anton Khirnov <anton@khirnov.net> > --- > doc/APIchanges | 4 +++ > libavutil/frame.c | 67 +++++++++++++++++++++++++++------------------ > libavutil/frame.h | 32 ++++++++++++++++++++++ > libavutil/version.h | 2 +- > 4 files changed, 78 insertions(+), 27 deletions(-) > > diff --git a/doc/APIchanges b/doc/APIchanges > index 337f1466d8..e5dd6f1e83 100644 > --- a/doc/APIchanges > +++ b/doc/APIchanges > @@ -14,6 +14,10 @@ libavutil: 2021-04-27 > > API changes, most recent first: > > +2022-05-26 - xxxxxxxxx - lavu 57.26.100 - frame.h > + Add av_frame_remove_all_side_data(), av_frame_copy_side_data(), > + AV_FRAME_TRANSFER_SD_COPY, and AV_FRAME_TRANSFER_SD_FILTER. > + > 2022-05-23 - xxxxxxxxx - lavu 57.25.100 - avutil.h > Deprecate av_fopen_utf8() without replacement. > > diff --git a/libavutil/frame.c b/libavutil/frame.c > index fbb869fffa..bfe575612d 100644 > --- a/libavutil/frame.c > +++ b/libavutil/frame.c > @@ -271,9 +271,45 @@ FF_ENABLE_DEPRECATION_WARNINGS > return AVERROR(EINVAL); > } > > +void av_frame_remove_all_side_data(AVFrame *frame) > +{ > + wipe_side_data(frame); > +} > + > +int av_frame_copy_side_data(AVFrame* dst, const AVFrame* src, int flags) > +{ > + for (unsigned i = 0; i < src->nb_side_data; i++) { > + const AVFrameSideData *sd_src = src->side_data[i]; > + AVFrameSideData *sd_dst; > + if ((flags & AV_FRAME_TRANSFER_SD_FILTER) && > + sd_src->type == AV_FRAME_DATA_PANSCAN && > + (src->width != dst->width || src->height != dst->height)) > + continue; > + if (flags & AV_FRAME_TRANSFER_SD_COPY) { > + sd_dst = av_frame_new_side_data(dst, sd_src->type, > + sd_src->size); > + if (!sd_dst) { > + wipe_side_data(dst); > + return AVERROR(ENOMEM); > + } > + memcpy(sd_dst->data, sd_src->data, sd_src->size); > + } else { > + AVBufferRef *ref = av_buffer_ref(sd_src->buf); > + sd_dst = av_frame_new_side_data_from_buf(dst, sd_src->type, ref); > + if (!sd_dst) { > + av_buffer_unref(&ref); > + wipe_side_data(dst); > + return AVERROR(ENOMEM); > + } > + } > + av_dict_copy(&sd_dst->metadata, sd_src->metadata, 0); > + } > + return 0; > +} > + > static int frame_copy_props(AVFrame *dst, const AVFrame *src, int force_copy) > { > - int ret, i; > + int ret; > > dst->key_frame = src->key_frame; > dst->pict_type = src->pict_type; > @@ -309,31 +345,10 @@ static int frame_copy_props(AVFrame *dst, const AVFrame > *src, int force_copy) > > av_dict_copy(&dst->metadata, src->metadata, 0); > > - for (i = 0; i < src->nb_side_data; i++) { > - const AVFrameSideData *sd_src = src->side_data[i]; > - AVFrameSideData *sd_dst; > - if ( sd_src->type == AV_FRAME_DATA_PANSCAN > - && (src->width != dst->width || src->height != dst->height)) > - continue; > - if (force_copy) { > - sd_dst = av_frame_new_side_data(dst, sd_src->type, > - sd_src->size); > - if (!sd_dst) { > - wipe_side_data(dst); > - return AVERROR(ENOMEM); > - } > - memcpy(sd_dst->data, sd_src->data, sd_src->size); > - } else 
{ > - AVBufferRef *ref = av_buffer_ref(sd_src->buf); > - sd_dst = av_frame_new_side_data_from_buf(dst, sd_src->type, ref); > - if (!sd_dst) { > - av_buffer_unref(&ref); > - wipe_side_data(dst); > - return AVERROR(ENOMEM); > - } > - } > - av_dict_copy(&sd_dst->metadata, sd_src->metadata, 0); > - } > + if ((ret = av_frame_copy_side_data(dst, src, > + (force_copy ? AV_FRAME_TRANSFER_SD_COPY : 0) | > + AV_FRAME_TRANSFER_SD_FILTER) < 0)) > + return ret; > > ret = av_buffer_replace(&dst->opaque_ref, src->opaque_ref); > ret |= av_buffer_replace(&dst->private_ref, src->private_ref); > diff --git a/libavutil/frame.h b/libavutil/frame.h > index 33fac2054c..a868fa70d7 100644 > --- a/libavutil/frame.h > +++ b/libavutil/frame.h > @@ -850,6 +850,30 @@ int av_frame_copy(AVFrame *dst, const AVFrame *src); > */ > int av_frame_copy_props(AVFrame *dst, const AVFrame *src); > > + > +/** > + * Copy side data, rather than creating new references. > + */ > +#define AV_FRAME_TRANSFER_SD_COPY (1 << 0) > +/** > + * Filter out side data that does not match dst properties. > + */ > +#define AV_FRAME_TRANSFER_SD_FILTER (1 << 1) > + > +/** > + * Copy all side-data from src to dst. > + * > + * @param dst a frame to which the side data should be copied. > + * @param src a frame from which to copy the side data. > + * @param flags a combination of AV_FRAME_TRANSFER_SD_* > + * > + * @return >= 0 on success, a negative AVERROR on error. Can it return a positive value on success ? I only see 0 is returned on success in av_frame_copy_side_data(). May I miss something about your patch ? Thanks Haihao > + * > + * @note This function will create new references to side data buffers in > src, > + * unless the AV_FRAME_TRANSFER_SD_COPY flag is passed. > + */ > +int av_frame_copy_side_data(AVFrame* dst, const AVFrame* src, int flags); > + > /** > * Get the buffer reference a given data plane is stored in. > * > @@ -901,6 +925,14 @@ AVFrameSideData *av_frame_get_side_data(const AVFrame > *frame, > */ > void av_frame_remove_side_data(AVFrame *frame, enum AVFrameSideDataType > type); > > +/** > + * Remove and free all side data instances. > + * > + * @param frame from which to remove all side data. > + */ > +void av_frame_remove_all_side_data(AVFrame *frame); > + > + > > /** > * Flags for frame cropping. > diff --git a/libavutil/version.h b/libavutil/version.h > index 1b4b41d81f..2c7f4f6b37 100644 > --- a/libavutil/version.h > +++ b/libavutil/version.h > @@ -79,7 +79,7 @@ > */ > > #define LIBAVUTIL_VERSION_MAJOR 57 > -#define LIBAVUTIL_VERSION_MINOR 25 > +#define LIBAVUTIL_VERSION_MINOR 26 > #define LIBAVUTIL_VERSION_MICRO 100 > > #define LIBAVUTIL_VERSION_INT AV_VERSION_INT(LIBAVUTIL_VERSION_MAJOR, \ _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
* Re: [FFmpeg-devel] [PATCH v3 1/6] avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() 2022-06-24 7:01 ` Xiang, Haihao @ 2022-06-26 23:35 ` Soft Works 0 siblings, 0 replies; 65+ messages in thread From: Soft Works @ 2022-06-26 23:35 UTC (permalink / raw) To: Xiang, Haihao, ffmpeg-devel; +Cc: haihao.xiang-at-intel.com > -----Original Message----- > From: Xiang, Haihao <haihao.xiang@intel.com> > Sent: Friday, June 24, 2022 9:02 AM > To: ffmpeg-devel@ffmpeg.org > Cc: haihao.xiang-at-intel.com@ffmpeg.org; softworkz@hotmail.com > Subject: Re: [FFmpeg-devel] [PATCH v3 1/6] avutil/frame: Add > av_frame_copy_side_data() and av_frame_remove_all_side_data() > > On Wed, 2022-06-01 at 18:01 +0000, softworkz wrote: > > From: softworkz <softworkz@hotmail.com> > > > > Signed-off-by: softworkz <softworkz@hotmail.com> > > Signed-off-by: Anton Khirnov <anton@khirnov.net> > > --- > > doc/APIchanges | 4 +++ > > libavutil/frame.c | 67 +++++++++++++++++++++++++++-------------- > ---- > > libavutil/frame.h | 32 ++++++++++++++++++++++ > > libavutil/version.h | 2 +- > > 4 files changed, 78 insertions(+), 27 deletions(-) > > > > diff --git a/doc/APIchanges b/doc/APIchanges > > index 337f1466d8..e5dd6f1e83 100644 > > --- a/doc/APIchanges > > +++ b/doc/APIchanges > > @@ -14,6 +14,10 @@ libavutil: 2021-04-27 > > > > API changes, most recent first: > > > > +2022-05-26 - xxxxxxxxx - lavu 57.26.100 - frame.h > > + Add av_frame_remove_all_side_data(), av_frame_copy_side_data(), > > + AV_FRAME_TRANSFER_SD_COPY, and AV_FRAME_TRANSFER_SD_FILTER. > > + > > 2022-05-23 - xxxxxxxxx - lavu 57.25.100 - avutil.h > > Deprecate av_fopen_utf8() without replacement. > > > > diff --git a/libavutil/frame.c b/libavutil/frame.c > > index fbb869fffa..bfe575612d 100644 > > --- a/libavutil/frame.c > > +++ b/libavutil/frame.c > > @@ -271,9 +271,45 @@ FF_ENABLE_DEPRECATION_WARNINGS > > return AVERROR(EINVAL); > > } > > > > +void av_frame_remove_all_side_data(AVFrame *frame) > > +{ > > + wipe_side_data(frame); > > +} > > + > > +int av_frame_copy_side_data(AVFrame* dst, const AVFrame* src, int > flags) > > +{ > > + for (unsigned i = 0; i < src->nb_side_data; i++) { > > + const AVFrameSideData *sd_src = src->side_data[i]; > > + AVFrameSideData *sd_dst; > > + if ((flags & AV_FRAME_TRANSFER_SD_FILTER) && > > + sd_src->type == AV_FRAME_DATA_PANSCAN && > > + (src->width != dst->width || src->height != dst- > >height)) > > + continue; > > + if (flags & AV_FRAME_TRANSFER_SD_COPY) { > > + sd_dst = av_frame_new_side_data(dst, sd_src->type, > > + sd_src->size); > > + if (!sd_dst) { > > + wipe_side_data(dst); > > + return AVERROR(ENOMEM); > > + } > > + memcpy(sd_dst->data, sd_src->data, sd_src->size); > > + } else { > > + AVBufferRef *ref = av_buffer_ref(sd_src->buf); > > + sd_dst = av_frame_new_side_data_from_buf(dst, sd_src- > >type, ref); > > + if (!sd_dst) { > > + av_buffer_unref(&ref); > > + wipe_side_data(dst); > > + return AVERROR(ENOMEM); > > + } > > + } > > + av_dict_copy(&sd_dst->metadata, sd_src->metadata, 0); > > + } > > + return 0; > > +} > > + > > static int frame_copy_props(AVFrame *dst, const AVFrame *src, int > force_copy) > > { > > - int ret, i; > > + int ret; > > > > dst->key_frame = src->key_frame; > > dst->pict_type = src->pict_type; > > @@ -309,31 +345,10 @@ static int frame_copy_props(AVFrame *dst, > const AVFrame > > *src, int force_copy) > > > > av_dict_copy(&dst->metadata, src->metadata, 0); > > > > - for (i = 0; i < src->nb_side_data; i++) { > > - const AVFrameSideData *sd_src = src->side_data[i]; > > 
- AVFrameSideData *sd_dst; > > - if ( sd_src->type == AV_FRAME_DATA_PANSCAN > > - && (src->width != dst->width || src->height != dst- > >height)) > > - continue; > > - if (force_copy) { > > - sd_dst = av_frame_new_side_data(dst, sd_src->type, > > - sd_src->size); > > - if (!sd_dst) { > > - wipe_side_data(dst); > > - return AVERROR(ENOMEM); > > - } > > - memcpy(sd_dst->data, sd_src->data, sd_src->size); > > - } else { > > - AVBufferRef *ref = av_buffer_ref(sd_src->buf); > > - sd_dst = av_frame_new_side_data_from_buf(dst, sd_src- > >type, ref); > > - if (!sd_dst) { > > - av_buffer_unref(&ref); > > - wipe_side_data(dst); > > - return AVERROR(ENOMEM); > > - } > > - } > > - av_dict_copy(&sd_dst->metadata, sd_src->metadata, 0); > > - } > > + if ((ret = av_frame_copy_side_data(dst, src, > > + (force_copy ? AV_FRAME_TRANSFER_SD_COPY : 0) | > > + AV_FRAME_TRANSFER_SD_FILTER) < 0)) > > + return ret; > > > > ret = av_buffer_replace(&dst->opaque_ref, src->opaque_ref); > > ret |= av_buffer_replace(&dst->private_ref, src->private_ref); > > diff --git a/libavutil/frame.h b/libavutil/frame.h > > index 33fac2054c..a868fa70d7 100644 > > --- a/libavutil/frame.h > > +++ b/libavutil/frame.h > > @@ -850,6 +850,30 @@ int av_frame_copy(AVFrame *dst, const AVFrame > *src); > > */ > > int av_frame_copy_props(AVFrame *dst, const AVFrame *src); > > > > + > > +/** > > + * Copy side data, rather than creating new references. > > + */ > > +#define AV_FRAME_TRANSFER_SD_COPY (1 << 0) > > +/** > > + * Filter out side data that does not match dst properties. > > + */ > > +#define AV_FRAME_TRANSFER_SD_FILTER (1 << 1) > > + > > +/** > > + * Copy all side-data from src to dst. > > + * > > + * @param dst a frame to which the side data should be copied. > > + * @param src a frame from which to copy the side data. > > + * @param flags a combination of AV_FRAME_TRANSFER_SD_* > > + * > > + * @return >= 0 on success, a negative AVERROR on error. > > Can it return a positive value on success ? I only see 0 is returned > on success > in av_frame_copy_side_data(). May I miss something about your patch ? I guess I had just copied that message from av_frame_copy() or av_frame_apply_cropping(). Both don't seem to return positive values either. But I don't want to talk it away - I'll submit an update with the typical doc message. Thanks, softworkz _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
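The practical consequence of the ">= 0 on success" wording discussed above is only that callers should test for negative values rather than for non-zero, e.g. (dst and src being placeholder frames):

    if ((ret = av_frame_copy_side_data(dst, src, 0)) < 0)
        goto fail;   /* negative AVERROR; any non-negative value is success */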
* [FFmpeg-devel] [PATCH v3 2/6] avcodec/vpp_qsv: Copy side data from input to output frame 2022-06-01 18:01 ` [FFmpeg-devel] [PATCH v3 0/6] " ffmpegagent 2022-06-01 18:01 ` [FFmpeg-devel] [PATCH v3 1/6] avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() softworkz @ 2022-06-01 18:01 ` softworkz 2022-06-01 18:01 ` [FFmpeg-devel] [PATCH v3 3/6] avcodec/mpeg12dec: make mpeg_decode_user_data() accessible softworkz ` (4 subsequent siblings) 6 siblings, 0 replies; 65+ messages in thread From: softworkz @ 2022-06-01 18:01 UTC (permalink / raw) To: ffmpeg-devel; +Cc: softworkz, Xiang, Haihao From: softworkz <softworkz@hotmail.com> Signed-off-by: softworkz <softworkz@hotmail.com> --- libavfilter/qsvvpp.c | 6 ++++++ libavfilter/vf_overlay_qsv.c | 19 +++++++++++++++---- 2 files changed, 21 insertions(+), 4 deletions(-) diff --git a/libavfilter/qsvvpp.c b/libavfilter/qsvvpp.c index 954f882637..f4bf628073 100644 --- a/libavfilter/qsvvpp.c +++ b/libavfilter/qsvvpp.c @@ -843,6 +843,12 @@ int ff_qsvvpp_filter_frame(QSVVPPContext *s, AVFilterLink *inlink, AVFrame *picr return AVERROR(EAGAIN); break; } + + av_frame_remove_all_side_data(out_frame->frame); + ret = av_frame_copy_side_data(out_frame->frame, in_frame->frame, 0); + if (ret < 0) + return ret; + out_frame->frame->pts = av_rescale_q(out_frame->surface.Data.TimeStamp, default_tb, outlink->time_base); diff --git a/libavfilter/vf_overlay_qsv.c b/libavfilter/vf_overlay_qsv.c index 7e76b39aa9..e15214dbf2 100644 --- a/libavfilter/vf_overlay_qsv.c +++ b/libavfilter/vf_overlay_qsv.c @@ -231,13 +231,24 @@ static int process_frame(FFFrameSync *fs) { AVFilterContext *ctx = fs->parent; QSVOverlayContext *s = fs->opaque; + AVFrame *frame0 = NULL; AVFrame *frame = NULL; - int ret = 0, i; + int ret = 0; - for (i = 0; i < ctx->nb_inputs; i++) { + for (unsigned i = 0; i < ctx->nb_inputs; i++) { ret = ff_framesync_get_frame(fs, i, &frame, 0); - if (ret == 0) - ret = ff_qsvvpp_filter_frame(s->qsv, ctx->inputs[i], frame); + + if (ret == 0) { + if (i == 0) + frame0 = frame; + else { + av_frame_remove_all_side_data(frame); + ret = av_frame_copy_side_data(frame, frame0, 0); + } + + ret = ret < 0 ? ret : ff_qsvvpp_filter_frame(s->qsv, ctx->inputs[i], frame); + } + if (ret < 0 && ret != AVERROR(EAGAIN)) break; } -- ffmpeg-codebot _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
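Both hunks follow the same pattern: a QSV filter's freshly created output frame carries no side data of its own, so it is re-attached from the relevant input frame before the frame leaves the filter. Condensed from the ff_qsvvpp_filter_frame() hunk above; flags=0 means new buffer references are created rather than deep copies.

    av_frame_remove_all_side_data(out_frame->frame);
    ret = av_frame_copy_side_data(out_frame->frame, in_frame->frame, 0);
    if (ret < 0)
        return ret;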
* [FFmpeg-devel] [PATCH v3 3/6] avcodec/mpeg12dec: make mpeg_decode_user_data() accessible 2022-06-01 18:01 ` [FFmpeg-devel] [PATCH v3 0/6] " ffmpegagent 2022-06-01 18:01 ` [FFmpeg-devel] [PATCH v3 1/6] avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() softworkz 2022-06-01 18:01 ` [FFmpeg-devel] [PATCH v3 2/6] avcodec/vpp_qsv: Copy side data from input to output frame softworkz @ 2022-06-01 18:01 ` softworkz 2022-06-01 18:01 ` [FFmpeg-devel] [PATCH v3 4/6] avcodec/hevcdec: make set_side_data() accessible softworkz ` (3 subsequent siblings) 6 siblings, 0 replies; 65+ messages in thread From: softworkz @ 2022-06-01 18:01 UTC (permalink / raw) To: ffmpeg-devel; +Cc: softworkz, Xiang, Haihao From: softworkz <softworkz@hotmail.com> Signed-off-by: softworkz <softworkz@hotmail.com> --- libavcodec/mpeg12.h | 28 ++++++++++++++++++++++++++++ libavcodec/mpeg12dec.c | 40 +++++----------------------------------- 2 files changed, 33 insertions(+), 35 deletions(-) diff --git a/libavcodec/mpeg12.h b/libavcodec/mpeg12.h index e0406b32d9..84a829cdd3 100644 --- a/libavcodec/mpeg12.h +++ b/libavcodec/mpeg12.h @@ -23,6 +23,7 @@ #define AVCODEC_MPEG12_H #include "mpegvideo.h" +#include "libavutil/stereo3d.h" /* Start codes. */ #define SEQ_END_CODE 0x000001b7 @@ -34,6 +35,31 @@ #define EXT_START_CODE 0x000001b5 #define USER_START_CODE 0x000001b2 +typedef struct Mpeg1Context { + MpegEncContext mpeg_enc_ctx; + int mpeg_enc_ctx_allocated; /* true if decoding context allocated */ + int repeat_field; /* true if we must repeat the field */ + AVPanScan pan_scan; /* some temporary storage for the panscan */ + AVStereo3D stereo3d; + int has_stereo3d; + AVBufferRef *a53_buf_ref; + uint8_t afd; + int has_afd; + int slice_count; + unsigned aspect_ratio_info; + AVRational save_aspect; + int save_width, save_height, save_progressive_seq; + int rc_buffer_size; + AVRational frame_rate_ext; /* MPEG-2 specific framerate modificator */ + unsigned frame_rate_index; + int sync; /* Did we reach a sync point like a GOP/SEQ/KEYFrame? */ + int closed_gop; + int tmpgexs; + int first_slice; + int extradata_decoded; + int64_t timecode_frame_start; /*< GOP timecode frame start number, in non drop frame format */ +} Mpeg1Context; + void ff_mpeg12_common_init(MpegEncContext *s); void ff_mpeg1_clean_buffers(MpegEncContext *s); @@ -45,4 +71,6 @@ void ff_mpeg12_find_best_frame_rate(AVRational frame_rate, int *code, int *ext_n, int *ext_d, int nonstandard); +void ff_mpeg_decode_user_data(AVCodecContext *avctx, Mpeg1Context *s1, const uint8_t *p, int buf_size); + #endif /* AVCODEC_MPEG12_H */ diff --git a/libavcodec/mpeg12dec.c b/libavcodec/mpeg12dec.c index e9bde48f7a..11d2b58185 100644 --- a/libavcodec/mpeg12dec.c +++ b/libavcodec/mpeg12dec.c @@ -58,31 +58,6 @@ #define A53_MAX_CC_COUNT 2000 -typedef struct Mpeg1Context { - MpegEncContext mpeg_enc_ctx; - int mpeg_enc_ctx_allocated; /* true if decoding context allocated */ - int repeat_field; /* true if we must repeat the field */ - AVPanScan pan_scan; /* some temporary storage for the panscan */ - AVStereo3D stereo3d; - int has_stereo3d; - AVBufferRef *a53_buf_ref; - uint8_t afd; - int has_afd; - int slice_count; - unsigned aspect_ratio_info; - AVRational save_aspect; - int save_width, save_height, save_progressive_seq; - int rc_buffer_size; - AVRational frame_rate_ext; /* MPEG-2 specific framerate modificator */ - unsigned frame_rate_index; - int sync; /* Did we reach a sync point like a GOP/SEQ/KEYFrame? 
*/ - int closed_gop; - int tmpgexs; - int first_slice; - int extradata_decoded; - int64_t timecode_frame_start; /*< GOP timecode frame start number, in non drop frame format */ -} Mpeg1Context; - #define MB_TYPE_ZERO_MV 0x20000000 static const uint32_t ptype2mb_type[7] = { @@ -2198,11 +2173,9 @@ static int vcr2_init_sequence(AVCodecContext *avctx) return 0; } -static int mpeg_decode_a53_cc(AVCodecContext *avctx, +static int mpeg_decode_a53_cc(AVCodecContext *avctx, Mpeg1Context *s1, const uint8_t *p, int buf_size) { - Mpeg1Context *s1 = avctx->priv_data; - if (buf_size >= 6 && p[0] == 'G' && p[1] == 'A' && p[2] == '9' && p[3] == '4' && p[4] == 3 && (p[5] & 0x40)) { @@ -2333,12 +2306,9 @@ static int mpeg_decode_a53_cc(AVCodecContext *avctx, return 0; } -static void mpeg_decode_user_data(AVCodecContext *avctx, - const uint8_t *p, int buf_size) +void ff_mpeg_decode_user_data(AVCodecContext *avctx, Mpeg1Context *s1, const uint8_t *p, int buf_size) { - Mpeg1Context *s = avctx->priv_data; const uint8_t *buf_end = p + buf_size; - Mpeg1Context *s1 = avctx->priv_data; #if 0 int i; @@ -2352,7 +2322,7 @@ static void mpeg_decode_user_data(AVCodecContext *avctx, int i; for(i=0; i<20; i++) if (!memcmp(p+i, "\0TMPGEXS\0", 9)){ - s->tmpgexs= 1; + s1->tmpgexs= 1; } } /* we parse the DTG active format information */ @@ -2398,7 +2368,7 @@ static void mpeg_decode_user_data(AVCodecContext *avctx, break; } } - } else if (mpeg_decode_a53_cc(avctx, p, buf_size)) { + } else if (mpeg_decode_a53_cc(avctx, s1, p, buf_size)) { return; } } @@ -2590,7 +2560,7 @@ static int decode_chunks(AVCodecContext *avctx, AVFrame *picture, } break; case USER_START_CODE: - mpeg_decode_user_data(avctx, buf_ptr, input_size); + ff_mpeg_decode_user_data(avctx, s, buf_ptr, input_size); break; case GOP_START_CODE: if (last_code == 0) { -- ffmpeg-codebot _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
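The reason for exporting the function and the Mpeg1Context definition shows up in patch 6/6: an external caller keeps its own Mpeg1Context, feeds each user-data payload through ff_mpeg_decode_user_data(), then moves the collected results onto the output frame. Condensed sketch; mpeg_ctx is assumed to be a zero-initialized Mpeg1Context embedded in the caller's state, and data/size describe one payload.

    ff_mpeg_decode_user_data(avctx, mpeg_ctx, data, size);

    if (mpeg_ctx->a53_buf_ref) {
        AVFrameSideData *sd = av_frame_new_side_data_from_buf(out,
                AV_FRAME_DATA_A53_CC, mpeg_ctx->a53_buf_ref);
        if (!sd)
            av_buffer_unref(&mpeg_ctx->a53_buf_ref);
        mpeg_ctx->a53_buf_ref = NULL;   /* ownership moved to the frame */
    }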
* [FFmpeg-devel] [PATCH v3 4/6] avcodec/hevcdec: make set_side_data() accessible 2022-06-01 18:01 ` [FFmpeg-devel] [PATCH v3 0/6] " ffmpegagent ` (2 preceding siblings ...) 2022-06-01 18:01 ` [FFmpeg-devel] [PATCH v3 3/6] avcodec/mpeg12dec: make mpeg_decode_user_data() accessible softworkz @ 2022-06-01 18:01 ` softworkz 2022-06-01 18:01 ` [FFmpeg-devel] [PATCH v3 5/6] avcodec/h264dec: make h264_export_frame_props() accessible softworkz ` (2 subsequent siblings) 6 siblings, 0 replies; 65+ messages in thread From: softworkz @ 2022-06-01 18:01 UTC (permalink / raw) To: ffmpeg-devel; +Cc: softworkz, Xiang, Haihao From: softworkz <softworkz@hotmail.com> Signed-off-by: softworkz <softworkz@hotmail.com> --- libavcodec/hevcdec.c | 117 +++++++++++++++++++++---------------------- libavcodec/hevcdec.h | 9 ++++ 2 files changed, 67 insertions(+), 59 deletions(-) diff --git a/libavcodec/hevcdec.c b/libavcodec/hevcdec.c index f782ea6394..9e9bb48202 100644 --- a/libavcodec/hevcdec.c +++ b/libavcodec/hevcdec.c @@ -2726,23 +2726,22 @@ error: return res; } -static int set_side_data(HEVCContext *s) +int ff_hevc_set_side_data(AVCodecContext *logctx, HEVCSEI *sei, HEVCContext *s, AVFrame *out) { - AVFrame *out = s->ref->frame; - int ret; + int ret = 0; - if (s->sei.frame_packing.present && - s->sei.frame_packing.arrangement_type >= 3 && - s->sei.frame_packing.arrangement_type <= 5 && - s->sei.frame_packing.content_interpretation_type > 0 && - s->sei.frame_packing.content_interpretation_type < 3) { + if (sei->frame_packing.present && + sei->frame_packing.arrangement_type >= 3 && + sei->frame_packing.arrangement_type <= 5 && + sei->frame_packing.content_interpretation_type > 0 && + sei->frame_packing.content_interpretation_type < 3) { AVStereo3D *stereo = av_stereo3d_create_side_data(out); if (!stereo) return AVERROR(ENOMEM); - switch (s->sei.frame_packing.arrangement_type) { + switch (sei->frame_packing.arrangement_type) { case 3: - if (s->sei.frame_packing.quincunx_subsampling) + if (sei->frame_packing.quincunx_subsampling) stereo->type = AV_STEREO3D_SIDEBYSIDE_QUINCUNX; else stereo->type = AV_STEREO3D_SIDEBYSIDE; @@ -2755,21 +2754,21 @@ static int set_side_data(HEVCContext *s) break; } - if (s->sei.frame_packing.content_interpretation_type == 2) + if (sei->frame_packing.content_interpretation_type == 2) stereo->flags = AV_STEREO3D_FLAG_INVERT; - if (s->sei.frame_packing.arrangement_type == 5) { - if (s->sei.frame_packing.current_frame_is_frame0_flag) + if (sei->frame_packing.arrangement_type == 5) { + if (sei->frame_packing.current_frame_is_frame0_flag) stereo->view = AV_STEREO3D_VIEW_LEFT; else stereo->view = AV_STEREO3D_VIEW_RIGHT; } } - if (s->sei.display_orientation.present && - (s->sei.display_orientation.anticlockwise_rotation || - s->sei.display_orientation.hflip || s->sei.display_orientation.vflip)) { - double angle = s->sei.display_orientation.anticlockwise_rotation * 360 / (double) (1 << 16); + if (sei->display_orientation.present && + (sei->display_orientation.anticlockwise_rotation || + sei->display_orientation.hflip || sei->display_orientation.vflip)) { + double angle = sei->display_orientation.anticlockwise_rotation * 360 / (double) (1 << 16); AVFrameSideData *rotation = av_frame_new_side_data(out, AV_FRAME_DATA_DISPLAYMATRIX, sizeof(int32_t) * 9); @@ -2788,17 +2787,17 @@ static int set_side_data(HEVCContext *s) * (1 - 2 * !!s->sei.display_orientation.vflip); av_display_rotation_set((int32_t *)rotation->data, angle); av_display_matrix_flip((int32_t *)rotation->data, - 
s->sei.display_orientation.hflip, - s->sei.display_orientation.vflip); + sei->display_orientation.hflip, + sei->display_orientation.vflip); } // Decrement the mastering display flag when IRAP frame has no_rasl_output_flag=1 // so the side data persists for the entire coded video sequence. - if (s->sei.mastering_display.present > 0 && + if (s && sei->mastering_display.present > 0 && IS_IRAP(s) && s->no_rasl_output_flag) { - s->sei.mastering_display.present--; + sei->mastering_display.present--; } - if (s->sei.mastering_display.present) { + if (sei->mastering_display.present) { // HEVC uses a g,b,r ordering, which we convert to a more natural r,g,b const int mapping[3] = {2, 0, 1}; const int chroma_den = 50000; @@ -2811,25 +2810,25 @@ static int set_side_data(HEVCContext *s) for (i = 0; i < 3; i++) { const int j = mapping[i]; - metadata->display_primaries[i][0].num = s->sei.mastering_display.display_primaries[j][0]; + metadata->display_primaries[i][0].num = sei->mastering_display.display_primaries[j][0]; metadata->display_primaries[i][0].den = chroma_den; - metadata->display_primaries[i][1].num = s->sei.mastering_display.display_primaries[j][1]; + metadata->display_primaries[i][1].num = sei->mastering_display.display_primaries[j][1]; metadata->display_primaries[i][1].den = chroma_den; } - metadata->white_point[0].num = s->sei.mastering_display.white_point[0]; + metadata->white_point[0].num = sei->mastering_display.white_point[0]; metadata->white_point[0].den = chroma_den; - metadata->white_point[1].num = s->sei.mastering_display.white_point[1]; + metadata->white_point[1].num = sei->mastering_display.white_point[1]; metadata->white_point[1].den = chroma_den; - metadata->max_luminance.num = s->sei.mastering_display.max_luminance; + metadata->max_luminance.num = sei->mastering_display.max_luminance; metadata->max_luminance.den = luma_den; - metadata->min_luminance.num = s->sei.mastering_display.min_luminance; + metadata->min_luminance.num = sei->mastering_display.min_luminance; metadata->min_luminance.den = luma_den; metadata->has_luminance = 1; metadata->has_primaries = 1; - av_log(s->avctx, AV_LOG_DEBUG, "Mastering Display Metadata:\n"); - av_log(s->avctx, AV_LOG_DEBUG, + av_log(logctx, AV_LOG_DEBUG, "Mastering Display Metadata:\n"); + av_log(logctx, AV_LOG_DEBUG, "r(%5.4f,%5.4f) g(%5.4f,%5.4f) b(%5.4f %5.4f) wp(%5.4f, %5.4f)\n", av_q2d(metadata->display_primaries[0][0]), av_q2d(metadata->display_primaries[0][1]), @@ -2838,31 +2837,31 @@ static int set_side_data(HEVCContext *s) av_q2d(metadata->display_primaries[2][0]), av_q2d(metadata->display_primaries[2][1]), av_q2d(metadata->white_point[0]), av_q2d(metadata->white_point[1])); - av_log(s->avctx, AV_LOG_DEBUG, + av_log(logctx, AV_LOG_DEBUG, "min_luminance=%f, max_luminance=%f\n", av_q2d(metadata->min_luminance), av_q2d(metadata->max_luminance)); } // Decrement the mastering display flag when IRAP frame has no_rasl_output_flag=1 // so the side data persists for the entire coded video sequence. 
- if (s->sei.content_light.present > 0 && + if (s && sei->content_light.present > 0 && IS_IRAP(s) && s->no_rasl_output_flag) { - s->sei.content_light.present--; + sei->content_light.present--; } - if (s->sei.content_light.present) { + if (sei->content_light.present) { AVContentLightMetadata *metadata = av_content_light_metadata_create_side_data(out); if (!metadata) return AVERROR(ENOMEM); - metadata->MaxCLL = s->sei.content_light.max_content_light_level; - metadata->MaxFALL = s->sei.content_light.max_pic_average_light_level; + metadata->MaxCLL = sei->content_light.max_content_light_level; + metadata->MaxFALL = sei->content_light.max_pic_average_light_level; - av_log(s->avctx, AV_LOG_DEBUG, "Content Light Level Metadata:\n"); - av_log(s->avctx, AV_LOG_DEBUG, "MaxCLL=%d, MaxFALL=%d\n", + av_log(logctx, AV_LOG_DEBUG, "Content Light Level Metadata:\n"); + av_log(logctx, AV_LOG_DEBUG, "MaxCLL=%d, MaxFALL=%d\n", metadata->MaxCLL, metadata->MaxFALL); } - if (s->sei.a53_caption.buf_ref) { - HEVCSEIA53Caption *a53 = &s->sei.a53_caption; + if (sei->a53_caption.buf_ref) { + HEVCSEIA53Caption *a53 = &sei->a53_caption; AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_A53_CC, a53->buf_ref); if (!sd) @@ -2870,8 +2869,8 @@ static int set_side_data(HEVCContext *s) a53->buf_ref = NULL; } - for (int i = 0; i < s->sei.unregistered.nb_buf_ref; i++) { - HEVCSEIUnregistered *unreg = &s->sei.unregistered; + for (int i = 0; i < sei->unregistered.nb_buf_ref; i++) { + HEVCSEIUnregistered *unreg = &sei->unregistered; if (unreg->buf_ref[i]) { AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, @@ -2882,9 +2881,9 @@ static int set_side_data(HEVCContext *s) unreg->buf_ref[i] = NULL; } } - s->sei.unregistered.nb_buf_ref = 0; + sei->unregistered.nb_buf_ref = 0; - if (s->sei.timecode.present) { + if (s && sei->timecode.present) { uint32_t *tc_sd; char tcbuf[AV_TIMECODE_STR_SIZE]; AVFrameSideData *tcside = av_frame_new_side_data(out, AV_FRAME_DATA_S12M_TIMECODE, @@ -2893,25 +2892,25 @@ static int set_side_data(HEVCContext *s) return AVERROR(ENOMEM); tc_sd = (uint32_t*)tcside->data; - tc_sd[0] = s->sei.timecode.num_clock_ts; + tc_sd[0] = sei->timecode.num_clock_ts; for (int i = 0; i < tc_sd[0]; i++) { - int drop = s->sei.timecode.cnt_dropped_flag[i]; - int hh = s->sei.timecode.hours_value[i]; - int mm = s->sei.timecode.minutes_value[i]; - int ss = s->sei.timecode.seconds_value[i]; - int ff = s->sei.timecode.n_frames[i]; + int drop = sei->timecode.cnt_dropped_flag[i]; + int hh = sei->timecode.hours_value[i]; + int mm = sei->timecode.minutes_value[i]; + int ss = sei->timecode.seconds_value[i]; + int ff = sei->timecode.n_frames[i]; tc_sd[i + 1] = av_timecode_get_smpte(s->avctx->framerate, drop, hh, mm, ss, ff); av_timecode_make_smpte_tc_string2(tcbuf, s->avctx->framerate, tc_sd[i + 1], 0, 0); av_dict_set(&out->metadata, "timecode", tcbuf, 0); } - s->sei.timecode.num_clock_ts = 0; + sei->timecode.num_clock_ts = 0; } - if (s->sei.film_grain_characteristics.present) { - HEVCSEIFilmGrainCharacteristics *fgc = &s->sei.film_grain_characteristics; + if (s && sei->film_grain_characteristics.present) { + HEVCSEIFilmGrainCharacteristics *fgc = &sei->film_grain_characteristics; AVFilmGrainParams *fgp = av_film_grain_params_create_side_data(out); if (!fgp) return AVERROR(ENOMEM); @@ -2965,8 +2964,8 @@ static int set_side_data(HEVCContext *s) fgc->present = fgc->persistence_flag; } - if (s->sei.dynamic_hdr_plus.info) { - AVBufferRef *info_ref = av_buffer_ref(s->sei.dynamic_hdr_plus.info); + if 
(sei->dynamic_hdr_plus.info) { + AVBufferRef *info_ref = av_buffer_ref(sei->dynamic_hdr_plus.info); if (!info_ref) return AVERROR(ENOMEM); @@ -2976,7 +2975,7 @@ static int set_side_data(HEVCContext *s) } } - if (s->rpu_buf) { + if (s && s->rpu_buf) { AVFrameSideData *rpu = av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_DOVI_RPU_BUFFER, s->rpu_buf); if (!rpu) return AVERROR(ENOMEM); @@ -2984,10 +2983,10 @@ static int set_side_data(HEVCContext *s) s->rpu_buf = NULL; } - if ((ret = ff_dovi_attach_side_data(&s->dovi_ctx, out)) < 0) + if (s && (ret = ff_dovi_attach_side_data(&s->dovi_ctx, out)) < 0) return ret; - if (s->sei.dynamic_hdr_vivid.info) { + if (s && s->sei.dynamic_hdr_vivid.info) { AVBufferRef *info_ref = av_buffer_ref(s->sei.dynamic_hdr_vivid.info); if (!info_ref) return AVERROR(ENOMEM); @@ -3046,7 +3045,7 @@ static int hevc_frame_start(HEVCContext *s) goto fail; } - ret = set_side_data(s); + ret = ff_hevc_set_side_data(s->avctx, &s->sei, s, s->ref->frame); if (ret < 0) goto fail; diff --git a/libavcodec/hevcdec.h b/libavcodec/hevcdec.h index de861b88b3..cd8cd40da0 100644 --- a/libavcodec/hevcdec.h +++ b/libavcodec/hevcdec.h @@ -690,6 +690,15 @@ void ff_hevc_hls_residual_coding(HEVCContext *s, int x0, int y0, void ff_hevc_hls_mvd_coding(HEVCContext *s, int x0, int y0, int log2_cb_size); +/** + * Set the decodec side data to an AVFrame. + * @logctx context for logging. + * @sei HEVCSEI decoding context, must not be NULL. + * @s HEVCContext, can be NULL. + * @return < 0 on error, 0 otherwise. + */ +int ff_hevc_set_side_data(AVCodecContext *logctx, HEVCSEI *sei, HEVCContext *s, AVFrame *out); + extern const uint8_t ff_hevc_qpel_extra_before[4]; extern const uint8_t ff_hevc_qpel_extra_after[4]; extern const uint8_t ff_hevc_qpel_extra[4]; -- ffmpeg-codebot _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
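What the NULL-tolerant signature buys is visible in the qsvdec usage: the exporter can be driven with nothing but an externally parsed HEVCSEI, while the branches that genuinely need decoder state (IRAP handling, timecode, film grain, DOVI/RPU) are guarded by "s &&" and simply skipped. Sketch of the external call as made from parse_sei_hevc() in patch 6/6; "frame" stands for the destination AVFrame.

    HEVCSEI sei = { 0 };
    /* ... sei filled by parsing each mfxPayload, see patch 6/6 ... */
    ret = ff_hevc_set_side_data(avctx, &sei, NULL, frame);
    if (ret < 0)
        av_log(avctx, AV_LOG_ERROR, "Exporting HEVC SEI as side data failed\n");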
* [FFmpeg-devel] [PATCH v3 5/6] avcodec/h264dec: make h264_export_frame_props() accessible 2022-06-01 18:01 ` [FFmpeg-devel] [PATCH v3 0/6] " ffmpegagent ` (3 preceding siblings ...) 2022-06-01 18:01 ` [FFmpeg-devel] [PATCH v3 4/6] avcodec/hevcdec: make set_side_data() accessible softworkz @ 2022-06-01 18:01 ` softworkz 2022-06-01 18:01 ` [FFmpeg-devel] [PATCH v3 6/6] avcodec/qsvdec: Implement SEI parsing for QSV decoders softworkz 2022-06-26 23:41 ` [FFmpeg-devel] [PATCH v4 0/6] " ffmpegagent 6 siblings, 0 replies; 65+ messages in thread From: softworkz @ 2022-06-01 18:01 UTC (permalink / raw) To: ffmpeg-devel; +Cc: softworkz, Xiang, Haihao From: softworkz <softworkz@hotmail.com> Signed-off-by: softworkz <softworkz@hotmail.com> --- libavcodec/h264_slice.c | 98 +++++++++++++++++++++-------------------- libavcodec/h264dec.h | 2 + 2 files changed, 52 insertions(+), 48 deletions(-) diff --git a/libavcodec/h264_slice.c b/libavcodec/h264_slice.c index d56722a5c2..f2a4c1c657 100644 --- a/libavcodec/h264_slice.c +++ b/libavcodec/h264_slice.c @@ -1157,11 +1157,10 @@ static int h264_init_ps(H264Context *h, const H264SliceContext *sl, int first_sl return 0; } -static int h264_export_frame_props(H264Context *h) +int ff_h264_export_frame_props(AVCodecContext *logctx, H264SEIContext *sei, H264Context *h, AVFrame *out) { - const SPS *sps = h->ps.sps; - H264Picture *cur = h->cur_pic_ptr; - AVFrame *out = cur->f; + const SPS *sps = h ? h->ps.sps : NULL; + H264Picture *cur = h ? h->cur_pic_ptr : NULL; out->interlaced_frame = 0; out->repeat_pict = 0; @@ -1169,19 +1168,19 @@ static int h264_export_frame_props(H264Context *h) /* Signal interlacing information externally. */ /* Prioritize picture timing SEI information over used * decoding process if it exists. */ - if (h->sei.picture_timing.present) { - int ret = ff_h264_sei_process_picture_timing(&h->sei.picture_timing, sps, - h->avctx); + if (sps && sei->picture_timing.present) { + int ret = ff_h264_sei_process_picture_timing(&sei->picture_timing, sps, + logctx); if (ret < 0) { - av_log(h->avctx, AV_LOG_ERROR, "Error processing a picture timing SEI\n"); - if (h->avctx->err_recognition & AV_EF_EXPLODE) + av_log(logctx, AV_LOG_ERROR, "Error processing a picture timing SEI\n"); + if (logctx->err_recognition & AV_EF_EXPLODE) return ret; - h->sei.picture_timing.present = 0; + sei->picture_timing.present = 0; } } - if (sps->pic_struct_present_flag && h->sei.picture_timing.present) { - H264SEIPictureTiming *pt = &h->sei.picture_timing; + if (h && sps && sps->pic_struct_present_flag && sei->picture_timing.present) { + H264SEIPictureTiming *pt = &sei->picture_timing; switch (pt->pic_struct) { case H264_SEI_PIC_STRUCT_FRAME: break; @@ -1215,21 +1214,23 @@ static int h264_export_frame_props(H264Context *h) if ((pt->ct_type & 3) && pt->pic_struct <= H264_SEI_PIC_STRUCT_BOTTOM_TOP) out->interlaced_frame = (pt->ct_type & (1 << 1)) != 0; - } else { + } else if (h) { /* Derive interlacing flag from used decoding process. */ out->interlaced_frame = FIELD_OR_MBAFF_PICTURE(h); } - h->prev_interlaced_frame = out->interlaced_frame; - if (cur->field_poc[0] != cur->field_poc[1]) { + if (h) + h->prev_interlaced_frame = out->interlaced_frame; + + if (sps && cur->field_poc[0] != cur->field_poc[1]) { /* Derive top_field_first from field pocs. 
*/ out->top_field_first = cur->field_poc[0] < cur->field_poc[1]; - } else { - if (sps->pic_struct_present_flag && h->sei.picture_timing.present) { + } else if (sps) { + if (sps->pic_struct_present_flag && sei->picture_timing.present) { /* Use picture timing SEI information. Even if it is a * information of a past frame, better than nothing. */ - if (h->sei.picture_timing.pic_struct == H264_SEI_PIC_STRUCT_TOP_BOTTOM || - h->sei.picture_timing.pic_struct == H264_SEI_PIC_STRUCT_TOP_BOTTOM_TOP) + if (sei->picture_timing.pic_struct == H264_SEI_PIC_STRUCT_TOP_BOTTOM || + sei->picture_timing.pic_struct == H264_SEI_PIC_STRUCT_TOP_BOTTOM_TOP) out->top_field_first = 1; else out->top_field_first = 0; @@ -1243,11 +1244,11 @@ static int h264_export_frame_props(H264Context *h) } } - if (h->sei.frame_packing.present && - h->sei.frame_packing.arrangement_type <= 6 && - h->sei.frame_packing.content_interpretation_type > 0 && - h->sei.frame_packing.content_interpretation_type < 3) { - H264SEIFramePacking *fp = &h->sei.frame_packing; + if (sei->frame_packing.present && + sei->frame_packing.arrangement_type <= 6 && + sei->frame_packing.content_interpretation_type > 0 && + sei->frame_packing.content_interpretation_type < 3) { + H264SEIFramePacking *fp = &sei->frame_packing; AVStereo3D *stereo = av_stereo3d_create_side_data(out); if (stereo) { switch (fp->arrangement_type) { @@ -1289,11 +1290,11 @@ static int h264_export_frame_props(H264Context *h) } } - if (h->sei.display_orientation.present && - (h->sei.display_orientation.anticlockwise_rotation || - h->sei.display_orientation.hflip || - h->sei.display_orientation.vflip)) { - H264SEIDisplayOrientation *o = &h->sei.display_orientation; + if (sei->display_orientation.present && + (sei->display_orientation.anticlockwise_rotation || + sei->display_orientation.hflip || + sei->display_orientation.vflip)) { + H264SEIDisplayOrientation *o = &sei->display_orientation; double angle = o->anticlockwise_rotation * 360 / (double) (1 << 16); AVFrameSideData *rotation = av_frame_new_side_data(out, AV_FRAME_DATA_DISPLAYMATRIX, @@ -1314,29 +1315,30 @@ static int h264_export_frame_props(H264Context *h) } } - if (h->sei.afd.present) { + if (sei->afd.present) { AVFrameSideData *sd = av_frame_new_side_data(out, AV_FRAME_DATA_AFD, sizeof(uint8_t)); if (sd) { - *sd->data = h->sei.afd.active_format_description; - h->sei.afd.present = 0; + *sd->data = sei->afd.active_format_description; + sei->afd.present = 0; } } - if (h->sei.a53_caption.buf_ref) { - H264SEIA53Caption *a53 = &h->sei.a53_caption; + if (sei->a53_caption.buf_ref) { + H264SEIA53Caption *a53 = &sei->a53_caption; AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_A53_CC, a53->buf_ref); if (!sd) av_buffer_unref(&a53->buf_ref); a53->buf_ref = NULL; - h->avctx->properties |= FF_CODEC_PROPERTY_CLOSED_CAPTIONS; + if (h) + h->avctx->properties |= FF_CODEC_PROPERTY_CLOSED_CAPTIONS; } - for (int i = 0; i < h->sei.unregistered.nb_buf_ref; i++) { - H264SEIUnregistered *unreg = &h->sei.unregistered; + for (int i = 0; i < sei->unregistered.nb_buf_ref; i++) { + H264SEIUnregistered *unreg = &sei->unregistered; if (unreg->buf_ref[i]) { AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, @@ -1347,10 +1349,10 @@ static int h264_export_frame_props(H264Context *h) unreg->buf_ref[i] = NULL; } } - h->sei.unregistered.nb_buf_ref = 0; + sei->unregistered.nb_buf_ref = 0; - if (h->sei.film_grain_characteristics.present) { - H264SEIFilmGrainCharacteristics *fgc = &h->sei.film_grain_characteristics; + if (h && sps && 
sei->film_grain_characteristics.present) { + H264SEIFilmGrainCharacteristics *fgc = &sei->film_grain_characteristics; AVFilmGrainParams *fgp = av_film_grain_params_create_side_data(out); if (!fgp) return AVERROR(ENOMEM); @@ -1404,7 +1406,7 @@ static int h264_export_frame_props(H264Context *h) h->avctx->properties |= FF_CODEC_PROPERTY_FILM_GRAIN; } - if (h->sei.picture_timing.timecode_cnt > 0) { + if (h && sei->picture_timing.timecode_cnt > 0) { uint32_t *tc_sd; char tcbuf[AV_TIMECODE_STR_SIZE]; @@ -1415,14 +1417,14 @@ static int h264_export_frame_props(H264Context *h) return AVERROR(ENOMEM); tc_sd = (uint32_t*)tcside->data; - tc_sd[0] = h->sei.picture_timing.timecode_cnt; + tc_sd[0] = sei->picture_timing.timecode_cnt; for (int i = 0; i < tc_sd[0]; i++) { - int drop = h->sei.picture_timing.timecode[i].dropframe; - int hh = h->sei.picture_timing.timecode[i].hours; - int mm = h->sei.picture_timing.timecode[i].minutes; - int ss = h->sei.picture_timing.timecode[i].seconds; - int ff = h->sei.picture_timing.timecode[i].frame; + int drop = sei->picture_timing.timecode[i].dropframe; + int hh = sei->picture_timing.timecode[i].hours; + int mm = sei->picture_timing.timecode[i].minutes; + int ss = sei->picture_timing.timecode[i].seconds; + int ff = sei->picture_timing.timecode[i].frame; tc_sd[i + 1] = av_timecode_get_smpte(h->avctx->framerate, drop, hh, mm, ss, ff); av_timecode_make_smpte_tc_string2(tcbuf, h->avctx->framerate, tc_sd[i + 1], 0, 0); @@ -1817,7 +1819,7 @@ static int h264_field_start(H264Context *h, const H264SliceContext *sl, * field coded frames, since some SEI information is present for each field * and is merged by the SEI parsing code. */ if (!FIELD_PICTURE(h) || !h->first_field || h->missing_fields > 1) { - ret = h264_export_frame_props(h); + ret = ff_h264_export_frame_props(h->avctx, &h->sei, h, h->cur_pic_ptr->f); if (ret < 0) return ret; diff --git a/libavcodec/h264dec.h b/libavcodec/h264dec.h index 9a1ec1bace..38930da4ca 100644 --- a/libavcodec/h264dec.h +++ b/libavcodec/h264dec.h @@ -808,4 +808,6 @@ void ff_h264_free_tables(H264Context *h); void ff_h264_set_erpic(ERPicture *dst, H264Picture *src); +int ff_h264_export_frame_props(AVCodecContext *logctx, H264SEIContext *sei, H264Context *h, AVFrame *out); + #endif /* AVCODEC_H264DEC_H */ -- ffmpeg-codebot _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
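The display-orientation SEI carries the rotation as a fixed-point count of 1/65536ths of a full turn, which the export code converts to degrees before filling the 3x3 display matrix. A standalone illustration of that conversion with a made-up rotation value (16384, i.e. a quarter turn), using only public libavutil helpers:

    #include <stdio.h>
    #include <stdint.h>
    #include "libavutil/display.h"

    int main(void)
    {
        /* anticlockwise_rotation as coded in the SEI: units of 1/65536 turn */
        unsigned anticlockwise_rotation = 16384;
        double angle = anticlockwise_rotation * 360 / (double)(1 << 16);

        int32_t matrix[9];
        av_display_rotation_set(matrix, angle);       /* rotation part */
        av_display_matrix_flip(matrix, 0, 1);         /* hflip = 0, vflip = 1 */

        printf("angle = %f degrees\n", angle);        /* prints 90.000000 */
        return 0;
    }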
* [FFmpeg-devel] [PATCH v3 6/6] avcodec/qsvdec: Implement SEI parsing for QSV decoders 2022-06-01 18:01 ` [FFmpeg-devel] [PATCH v3 0/6] " ffmpegagent ` (4 preceding siblings ...) 2022-06-01 18:01 ` [FFmpeg-devel] [PATCH v3 5/6] avcodec/h264dec: make h264_export_frame_props() accessible softworkz @ 2022-06-01 18:01 ` softworkz 2022-06-26 23:41 ` [FFmpeg-devel] [PATCH v4 0/6] " ffmpegagent 6 siblings, 0 replies; 65+ messages in thread From: softworkz @ 2022-06-01 18:01 UTC (permalink / raw) To: ffmpeg-devel; +Cc: softworkz, Xiang, Haihao From: softworkz <softworkz@hotmail.com> Signed-off-by: softworkz <softworkz@hotmail.com> --- libavcodec/qsvdec.c | 234 ++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 234 insertions(+) diff --git a/libavcodec/qsvdec.c b/libavcodec/qsvdec.c index 5fc5bed4c8..e854f363ec 100644 --- a/libavcodec/qsvdec.c +++ b/libavcodec/qsvdec.c @@ -49,6 +49,12 @@ #include "hwconfig.h" #include "qsv.h" #include "qsv_internal.h" +#include "h264dec.h" +#include "h264_sei.h" +#include "hevcdec.h" +#include "hevc_ps.h" +#include "hevc_sei.h" +#include "mpeg12.h" static const AVRational mfx_tb = { 1, 90000 }; @@ -60,6 +66,8 @@ static const AVRational mfx_tb = { 1, 90000 }; AV_NOPTS_VALUE : pts_tb.num ? \ av_rescale_q(mfx_pts, mfx_tb, pts_tb) : mfx_pts) +#define PAYLOAD_BUFFER_SIZE 65535 + typedef struct QSVAsyncFrame { mfxSyncPoint *sync; QSVFrame *frame; @@ -101,6 +109,9 @@ typedef struct QSVContext { mfxExtBuffer **ext_buffers; int nb_ext_buffers; + + mfxU8 payload_buffer[PAYLOAD_BUFFER_SIZE]; + Mpeg1Context mpeg_ctx; } QSVContext; static const AVCodecHWConfigInternal *const qsv_hw_configs[] = { @@ -599,6 +610,210 @@ static int qsv_export_film_grain(AVCodecContext *avctx, mfxExtAV1FilmGrainParam return 0; } #endif +static int find_start_offset(mfxU8 data[4]) +{ + if (data[0] == 0 && data[1] == 0 && data[2] == 1) + return 3; + + if (data[0] == 0 && data[1] == 0 && data[2] == 0 && data[3] == 1) + return 4; + + return 0; +} + +static int parse_sei_h264(AVCodecContext* avctx, QSVContext* q, AVFrame* out) +{ + H264SEIContext sei = { 0 }; + GetBitContext gb = { 0 }; + mfxPayload payload = { 0, .Data = &q->payload_buffer[0], .BufSize = sizeof(q->payload_buffer) }; + mfxU64 ts; + int ret; + + while (1) { + int start; + memset(payload.Data, 0, payload.BufSize); + + ret = MFXVideoDECODE_GetPayload(q->session, &ts, &payload); + if (ret == MFX_ERR_NOT_ENOUGH_BUFFER) { + av_log(avctx, AV_LOG_WARNING, "Warning: Insufficient buffer on GetPayload(). 
Size: %"PRIu64" Needed: %d\n", sizeof(q->payload_buffer), payload.BufSize); + return 0; + } + if (ret != MFX_ERR_NONE) + return ret; + + if (payload.NumBit == 0 || payload.NumBit >= payload.BufSize * 8) + break; + + start = find_start_offset(payload.Data); + + switch (payload.Type) { + case SEI_TYPE_BUFFERING_PERIOD: + case SEI_TYPE_PIC_TIMING: + continue; + } + + if (init_get_bits(&gb, &payload.Data[start], payload.NumBit - start * 8) < 0) + av_log(avctx, AV_LOG_ERROR, "Error initializing bitstream reader SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); + else { + ret = ff_h264_sei_decode(&sei, &gb, NULL, avctx); + + if (ret < 0) + av_log(avctx, AV_LOG_WARNING, "Failed to parse SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); + else + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d Numbits %d\n", payload.Type, payload.NumBit); + } + } + + if (out) + return ff_h264_export_frame_props(avctx, &sei, NULL, out); + + return 0; +} + +static int parse_sei_hevc(AVCodecContext* avctx, QSVContext* q, QSVFrame* out) +{ + HEVCSEI sei = { 0 }; + HEVCParamSets ps = { 0 }; + GetBitContext gb = { 0 }; + mfxPayload payload = { 0, .Data = &q->payload_buffer[0], .BufSize = sizeof(q->payload_buffer) }; + mfxFrameSurface1 *surface = &out->surface; + mfxU64 ts; + int ret, has_logged = 0; + + while (1) { + int start; + memset(payload.Data, 0, payload.BufSize); + + ret = MFXVideoDECODE_GetPayload(q->session, &ts, &payload); + if (ret == MFX_ERR_NOT_ENOUGH_BUFFER) { + av_log(avctx, AV_LOG_WARNING, "Warning: Insufficient buffer on GetPayload(). Size: %"PRIu64" Needed: %d\n", sizeof(q->payload_buffer), payload.BufSize); + return 0; + } + if (ret != MFX_ERR_NONE) + return ret; + + if (payload.NumBit == 0 || payload.NumBit >= payload.BufSize * 8) + break; + + if (!has_logged) { + has_logged = 1; + av_log(avctx, AV_LOG_VERBOSE, "-----------------------------------------\n"); + av_log(avctx, AV_LOG_VERBOSE, "Start reading SEI - payload timestamp: %llu - surface timestamp: %llu\n", ts, surface->Data.TimeStamp); + } + + if (ts != surface->Data.TimeStamp) { + av_log(avctx, AV_LOG_WARNING, "GetPayload timestamp (%llu) does not match surface timestamp: (%llu)\n", ts, surface->Data.TimeStamp); + } + + start = find_start_offset(payload.Data); + + av_log(avctx, AV_LOG_VERBOSE, "parsing SEI type: %3d Numbits %3d Start: %d\n", payload.Type, payload.NumBit, start); + + switch (payload.Type) { + case SEI_TYPE_BUFFERING_PERIOD: + case SEI_TYPE_PIC_TIMING: + continue; + case SEI_TYPE_MASTERING_DISPLAY_COLOUR_VOLUME: + // There seems to be a bug in MSDK + payload.NumBit -= 8; + + break; + case SEI_TYPE_CONTENT_LIGHT_LEVEL_INFO: + // There seems to be a bug in MSDK + payload.NumBit = 48; + + break; + case SEI_TYPE_USER_DATA_REGISTERED_ITU_T_T35: + // There seems to be a bug in MSDK + if (payload.NumBit == 552) + payload.NumBit = 528; + break; + } + + if (init_get_bits(&gb, &payload.Data[start], payload.NumBit - start * 8) < 0) + av_log(avctx, AV_LOG_ERROR, "Error initializing bitstream reader SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); + else { + ret = ff_hevc_decode_nal_sei(&gb, avctx, &sei, &ps, HEVC_NAL_SEI_PREFIX); + + if (ret < 0) + av_log(avctx, AV_LOG_WARNING, "Failed to parse SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); + else + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d Numbits %d\n", payload.Type, payload.NumBit); + } + } + + if (has_logged) { + av_log(avctx, AV_LOG_VERBOSE, "End reading SEI\n"); + } + + if (out 
&& out->frame) + return ff_hevc_set_side_data(avctx, &sei, NULL, out->frame); + + return 0; +} + +static int parse_sei_mpeg12(AVCodecContext* avctx, QSVContext* q, AVFrame* out) +{ + Mpeg1Context *mpeg_ctx = &q->mpeg_ctx; + mfxPayload payload = { 0, .Data = &q->payload_buffer[0], .BufSize = sizeof(q->payload_buffer) }; + mfxU64 ts; + int ret; + + while (1) { + int start; + + memset(payload.Data, 0, payload.BufSize); + ret = MFXVideoDECODE_GetPayload(q->session, &ts, &payload); + if (ret == MFX_ERR_NOT_ENOUGH_BUFFER) { + av_log(avctx, AV_LOG_WARNING, "Warning: Insufficient buffer on GetPayload(). Size: %"PRIu64" Needed: %d\n", sizeof(q->payload_buffer), payload.BufSize); + return 0; + } + if (ret != MFX_ERR_NONE) + return ret; + + if (payload.NumBit == 0 || payload.NumBit >= payload.BufSize * 8) + break; + + start = find_start_offset(payload.Data); + + start++; + + ff_mpeg_decode_user_data(avctx, mpeg_ctx, &payload.Data[start], (int)((payload.NumBit + 7) / 8) - start); + + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d Numbits %d start %d -> %.s\n", payload.Type, payload.NumBit, start, (char *)(&payload.Data[start])); + } + + if (!out) + return 0; + + if (mpeg_ctx->a53_buf_ref) { + + AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_A53_CC, mpeg_ctx->a53_buf_ref); + if (!sd) + av_buffer_unref(&mpeg_ctx->a53_buf_ref); + mpeg_ctx->a53_buf_ref = NULL; + } + + if (mpeg_ctx->has_stereo3d) { + AVStereo3D *stereo = av_stereo3d_create_side_data(out); + if (!stereo) + return AVERROR(ENOMEM); + + *stereo = mpeg_ctx->stereo3d; + mpeg_ctx->has_stereo3d = 0; + } + + if (mpeg_ctx->has_afd) { + AVFrameSideData *sd = av_frame_new_side_data(out, AV_FRAME_DATA_AFD, 1); + if (!sd) + return AVERROR(ENOMEM); + + *sd->data = mpeg_ctx->afd; + mpeg_ctx->has_afd = 0; + } + + return 0; +} static int qsv_decode(AVCodecContext *avctx, QSVContext *q, AVFrame *frame, int *got_frame, @@ -636,6 +851,8 @@ static int qsv_decode(AVCodecContext *avctx, QSVContext *q, insurf, &outsurf, sync); if (ret == MFX_WRN_DEVICE_BUSY) av_usleep(500); + else if (avctx->codec_id == AV_CODEC_ID_MPEG2VIDEO) + parse_sei_mpeg12(avctx, q, NULL); } while (ret == MFX_WRN_DEVICE_BUSY || ret == MFX_ERR_MORE_SURFACE); @@ -677,6 +894,23 @@ static int qsv_decode(AVCodecContext *avctx, QSVContext *q, return AVERROR_BUG; } + switch (avctx->codec_id) { + case AV_CODEC_ID_MPEG2VIDEO: + ret = parse_sei_mpeg12(avctx, q, out_frame->frame); + break; + case AV_CODEC_ID_H264: + ret = parse_sei_h264(avctx, q, out_frame->frame); + break; + case AV_CODEC_ID_HEVC: + ret = parse_sei_hevc(avctx, q, out_frame); + break; + default: + ret = 0; + } + + if (ret < 0) + av_log(avctx, AV_LOG_ERROR, "Error parsing SEI data: %d\n", ret); + out_frame->queued += 1; aframe = (QSVAsyncFrame){ sync, out_frame }; -- ffmpeg-codebot _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
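find_start_offset() only inspects the first four bytes that MSDK returns for a payload: it reports how many bytes to skip when the data is prefixed with a 3- or 4-byte Annex B start code, and 0 when the SEI payload starts immediately. A self-contained check of that behaviour — the function body is copied from the patch, the mfxU8 typedef is a stand-in so the snippet builds without the MSDK headers, and the test vectors are made up:

    #include <assert.h>
    #include <stdint.h>

    typedef uint8_t mfxU8;   /* stand-in for the MSDK type */

    static int find_start_offset(mfxU8 data[4])
    {
        if (data[0] == 0 && data[1] == 0 && data[2] == 1)
            return 3;

        if (data[0] == 0 && data[1] == 0 && data[2] == 0 && data[3] == 1)
            return 4;

        return 0;
    }

    int main(void)
    {
        mfxU8 short_sc[4] = { 0, 0, 1, 0x06 };       /* 3-byte start code   */
        mfxU8 long_sc[4]  = { 0, 0, 0, 1 };          /* 4-byte start code   */
        mfxU8 raw[4]      = { 0x06, 0x05, 0x10, 0 }; /* no start code       */

        assert(find_start_offset(short_sc) == 3);
        assert(find_start_offset(long_sc)  == 4);
        assert(find_start_offset(raw)      == 0);
        return 0;
    }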
* [FFmpeg-devel] [PATCH v4 0/6] Implement SEI parsing for QSV decoders 2022-06-01 18:01 ` [FFmpeg-devel] [PATCH v3 0/6] " ffmpegagent ` (5 preceding siblings ...) 2022-06-01 18:01 ` [FFmpeg-devel] [PATCH v3 6/6] avcodec/qsvdec: Implement SEI parsing for QSV decoders softworkz @ 2022-06-26 23:41 ` ffmpegagent 2022-06-26 23:41 ` [FFmpeg-devel] [PATCH v4 1/6] avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() softworkz ` (7 more replies) 6 siblings, 8 replies; 65+ messages in thread From: ffmpegagent @ 2022-06-26 23:41 UTC (permalink / raw) To: ffmpeg-devel; +Cc: Kieran Kunhya, softworkz, Xiang, Haihao Missing SEI information has always been a major drawback when using the QSV decoders. I used to think that there's no chance to get at the data without explicit implementation from the MSDK side (or doing something weird like parsing in parallel). It turned out that there's a hardly known api method that provides access to all SEI (h264/hevc) or user data (mpeg2video). This allows to get things like closed captions, frame packing, display orientation, HDR data (mastering display, content light level, etc.) without having to rely on those data being provided by the MSDK as extended buffers. The commit "Implement SEI parsing for QSV decoders" includes some hard-coded workarounds for MSDK bugs which I reported: https://github.com/Intel-Media-SDK/MediaSDK/issues/2597#issuecomment-1072795311 But that doesn't help. Those bugs exist and I'm sharing my workarounds, which are empirically determined by testing a range of files. If someone is interested, I can provide private access to a repository where we have been testing this. Alternatively, I could also leave those workarounds out, and just skip those SEI types. In a previous version of this patchset, there was a concern that payload data might need to be re-ordered. Meanwhile I have researched this carefully and the conclusion is that this is not required. 
My detailed analysis can be found here: https://gist.github.com/softworkz/36c49586a8610813a32270ee3947a932 v3 * frame.h: clarify doc text for av_frame_copy_side_data() v2 * qsvdec: make error handling consistent and clear * qsvdec: remove AV_CODEC_ID_MPEG1VIDEO constants * hevcdec: rename function to ff_hevc_set_side_data(), add doc text v3 * qsvdec: fix c/p error softworkz (6): avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() avcodec/vpp_qsv: Copy side data from input to output frame avcodec/mpeg12dec: make mpeg_decode_user_data() accessible avcodec/hevcdec: make set_side_data() accessible avcodec/h264dec: make h264_export_frame_props() accessible avcodec/qsvdec: Implement SEI parsing for QSV decoders doc/APIchanges | 4 + libavcodec/h264_slice.c | 98 ++++++++------- libavcodec/h264dec.h | 2 + libavcodec/hevcdec.c | 117 +++++++++--------- libavcodec/hevcdec.h | 9 ++ libavcodec/mpeg12.h | 28 +++++ libavcodec/mpeg12dec.c | 40 +----- libavcodec/qsvdec.c | 234 +++++++++++++++++++++++++++++++++++ libavfilter/qsvvpp.c | 6 + libavfilter/vf_overlay_qsv.c | 19 ++- libavutil/frame.c | 67 ++++++---- libavutil/frame.h | 32 +++++ libavutil/version.h | 2 +- 13 files changed, 485 insertions(+), 173 deletions(-) base-commit: 6a82412bf33108111eb3f63076fd5a51349ae114 Published-As: https://github.com/ffstaging/FFmpeg/releases/tag/pr-ffstaging-31%2Fsoftworkz%2Fsubmit_qsv_sei-v4 Fetch-It-Via: git fetch https://github.com/ffstaging/FFmpeg pr-ffstaging-31/softworkz/submit_qsv_sei-v4 Pull-Request: https://github.com/ffstaging/FFmpeg/pull/31 Range-diff vs v3: 1: c442597a35 ! 1: 7656477360 avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() @@ doc/APIchanges: libavutil: 2021-04-27 API changes, most recent first: -+2022-05-26 - xxxxxxxxx - lavu 57.26.100 - frame.h ++2022-05-26 - xxxxxxxxx - lavu 57.28.100 - frame.h + Add av_frame_remove_all_side_data(), av_frame_copy_side_data(), + AV_FRAME_TRANSFER_SD_COPY, and AV_FRAME_TRANSFER_SD_FILTER. + - 2022-05-23 - xxxxxxxxx - lavu 57.25.100 - avutil.h - Deprecate av_fopen_utf8() without replacement. - + 2022-06-12 - xxxxxxxxxx - lavf 59.25.100 - avio.h + Add avio_vprintf(), similar to avio_printf() but allow to use it + from within a function taking a variable argument list as input. ## libavutil/frame.c ## @@ libavutil/frame.c: FF_ENABLE_DEPRECATION_WARNINGS @@ libavutil/frame.h: int av_frame_copy(AVFrame *dst, const AVFrame *src); + * @param src a frame from which to copy the side data. + * @param flags a combination of AV_FRAME_TRANSFER_SD_* + * -+ * @return >= 0 on success, a negative AVERROR on error. ++ * @return 0 on success, a negative AVERROR on error. + * + * @note This function will create new references to side data buffers in src, + * unless the AV_FRAME_TRANSFER_SD_COPY flag is passed. 
@@ libavutil/version.h */ #define LIBAVUTIL_VERSION_MAJOR 57 --#define LIBAVUTIL_VERSION_MINOR 25 -+#define LIBAVUTIL_VERSION_MINOR 26 +-#define LIBAVUTIL_VERSION_MINOR 27 ++#define LIBAVUTIL_VERSION_MINOR 28 #define LIBAVUTIL_VERSION_MICRO 100 #define LIBAVUTIL_VERSION_INT AV_VERSION_INT(LIBAVUTIL_VERSION_MAJOR, \ 2: 6f50d0bd57 = 2: 06976606c5 avcodec/vpp_qsv: Copy side data from input to output frame 3: f682b1d695 = 3: 320a8a535c avcodec/mpeg12dec: make mpeg_decode_user_data() accessible 4: 995d835233 = 4: e58ad6564f avcodec/hevcdec: make set_side_data() accessible 5: ac8dc06395 = 5: a57bfaebb9 avcodec/h264dec: make h264_export_frame_props() accessible 6: 27c3dded4d = 6: 3f2588563e avcodec/qsvdec: Implement SEI parsing for QSV decoders -- ffmpeg-codebot _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
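For orientation before the individual v4 patches: the heart of patch 6/6 is a polling loop around MFXVideoDECODE_GetPayload(). Condensed to its H.264 form below, with the logging, the not-enough-buffer handling and the per-type MSDK workarounds trimmed; collect_h264_sei() is an illustrative name and the snippet assumes the includes and QSVContext definition from qsvdec.c:

    /* Poll MSDK for the SEI payloads that belong to the current frame and
     * feed each one to libavcodec's regular H.264 SEI parser. */
    static int collect_h264_sei(AVCodecContext *avctx, QSVContext *q, AVFrame *out)
    {
        H264SEIContext sei = { 0 };
        GetBitContext gb = { 0 };
        mfxPayload payload = { .Data = q->payload_buffer,
                               .BufSize = sizeof(q->payload_buffer) };
        mfxU64 ts;

        while (1) {
            int start, ret = MFXVideoDECODE_GetPayload(q->session, &ts, &payload);
            if (ret != MFX_ERR_NONE)
                return ret;
            if (payload.NumBit == 0 || payload.NumBit >= payload.BufSize * 8)
                break;                                /* no more SEI messages */

            if (payload.Type == SEI_TYPE_BUFFERING_PERIOD ||
                payload.Type == SEI_TYPE_PIC_TIMING)
                continue;                             /* not exported as side data */

            start = find_start_offset(payload.Data);  /* skip Annex B start code */
            if (init_get_bits(&gb, &payload.Data[start],
                              payload.NumBit - start * 8) >= 0)
                ff_h264_sei_decode(&sei, &gb, NULL, avctx);
        }

        /* Export the collected data onto the output frame (h == NULL). */
        return out ? ff_h264_export_frame_props(avctx, &sei, NULL, out) : 0;
    }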
* [FFmpeg-devel] [PATCH v4 1/6] avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() 2022-06-26 23:41 ` [FFmpeg-devel] [PATCH v4 0/6] " ffmpegagent @ 2022-06-26 23:41 ` softworkz 2022-06-26 23:41 ` [FFmpeg-devel] [PATCH v4 2/6] avcodec/vpp_qsv: Copy side data from input to output frame softworkz ` (6 subsequent siblings) 7 siblings, 0 replies; 65+ messages in thread From: softworkz @ 2022-06-26 23:41 UTC (permalink / raw) To: ffmpeg-devel; +Cc: Kieran Kunhya, softworkz, Xiang, Haihao From: softworkz <softworkz@hotmail.com> Signed-off-by: softworkz <softworkz@hotmail.com> Signed-off-by: Anton Khirnov <anton@khirnov.net> --- doc/APIchanges | 4 +++ libavutil/frame.c | 67 +++++++++++++++++++++++++++------------------ libavutil/frame.h | 32 ++++++++++++++++++++++ libavutil/version.h | 2 +- 4 files changed, 78 insertions(+), 27 deletions(-) diff --git a/doc/APIchanges b/doc/APIchanges index 20b944933a..6b5bf61d85 100644 --- a/doc/APIchanges +++ b/doc/APIchanges @@ -14,6 +14,10 @@ libavutil: 2021-04-27 API changes, most recent first: +2022-05-26 - xxxxxxxxx - lavu 57.28.100 - frame.h + Add av_frame_remove_all_side_data(), av_frame_copy_side_data(), + AV_FRAME_TRANSFER_SD_COPY, and AV_FRAME_TRANSFER_SD_FILTER. + 2022-06-12 - xxxxxxxxxx - lavf 59.25.100 - avio.h Add avio_vprintf(), similar to avio_printf() but allow to use it from within a function taking a variable argument list as input. diff --git a/libavutil/frame.c b/libavutil/frame.c index 4c16488c66..5d34fde904 100644 --- a/libavutil/frame.c +++ b/libavutil/frame.c @@ -271,9 +271,45 @@ FF_ENABLE_DEPRECATION_WARNINGS return AVERROR(EINVAL); } +void av_frame_remove_all_side_data(AVFrame *frame) +{ + wipe_side_data(frame); +} + +int av_frame_copy_side_data(AVFrame* dst, const AVFrame* src, int flags) +{ + for (unsigned i = 0; i < src->nb_side_data; i++) { + const AVFrameSideData *sd_src = src->side_data[i]; + AVFrameSideData *sd_dst; + if ((flags & AV_FRAME_TRANSFER_SD_FILTER) && + sd_src->type == AV_FRAME_DATA_PANSCAN && + (src->width != dst->width || src->height != dst->height)) + continue; + if (flags & AV_FRAME_TRANSFER_SD_COPY) { + sd_dst = av_frame_new_side_data(dst, sd_src->type, + sd_src->size); + if (!sd_dst) { + wipe_side_data(dst); + return AVERROR(ENOMEM); + } + memcpy(sd_dst->data, sd_src->data, sd_src->size); + } else { + AVBufferRef *ref = av_buffer_ref(sd_src->buf); + sd_dst = av_frame_new_side_data_from_buf(dst, sd_src->type, ref); + if (!sd_dst) { + av_buffer_unref(&ref); + wipe_side_data(dst); + return AVERROR(ENOMEM); + } + } + av_dict_copy(&sd_dst->metadata, sd_src->metadata, 0); + } + return 0; +} + static int frame_copy_props(AVFrame *dst, const AVFrame *src, int force_copy) { - int ret, i; + int ret; dst->key_frame = src->key_frame; dst->pict_type = src->pict_type; @@ -309,31 +345,10 @@ static int frame_copy_props(AVFrame *dst, const AVFrame *src, int force_copy) av_dict_copy(&dst->metadata, src->metadata, 0); - for (i = 0; i < src->nb_side_data; i++) { - const AVFrameSideData *sd_src = src->side_data[i]; - AVFrameSideData *sd_dst; - if ( sd_src->type == AV_FRAME_DATA_PANSCAN - && (src->width != dst->width || src->height != dst->height)) - continue; - if (force_copy) { - sd_dst = av_frame_new_side_data(dst, sd_src->type, - sd_src->size); - if (!sd_dst) { - wipe_side_data(dst); - return AVERROR(ENOMEM); - } - memcpy(sd_dst->data, sd_src->data, sd_src->size); - } else { - AVBufferRef *ref = av_buffer_ref(sd_src->buf); - sd_dst = av_frame_new_side_data_from_buf(dst, sd_src->type, ref); - if 
(!sd_dst) { - av_buffer_unref(&ref); - wipe_side_data(dst); - return AVERROR(ENOMEM); - } - } - av_dict_copy(&sd_dst->metadata, sd_src->metadata, 0); - } + if ((ret = av_frame_copy_side_data(dst, src, + (force_copy ? AV_FRAME_TRANSFER_SD_COPY : 0) | + AV_FRAME_TRANSFER_SD_FILTER) < 0)) + return ret; ret = av_buffer_replace(&dst->opaque_ref, src->opaque_ref); ret |= av_buffer_replace(&dst->private_ref, src->private_ref); diff --git a/libavutil/frame.h b/libavutil/frame.h index 33fac2054c..f72b6fae71 100644 --- a/libavutil/frame.h +++ b/libavutil/frame.h @@ -850,6 +850,30 @@ int av_frame_copy(AVFrame *dst, const AVFrame *src); */ int av_frame_copy_props(AVFrame *dst, const AVFrame *src); + +/** + * Copy side data, rather than creating new references. + */ +#define AV_FRAME_TRANSFER_SD_COPY (1 << 0) +/** + * Filter out side data that does not match dst properties. + */ +#define AV_FRAME_TRANSFER_SD_FILTER (1 << 1) + +/** + * Copy all side-data from src to dst. + * + * @param dst a frame to which the side data should be copied. + * @param src a frame from which to copy the side data. + * @param flags a combination of AV_FRAME_TRANSFER_SD_* + * + * @return 0 on success, a negative AVERROR on error. + * + * @note This function will create new references to side data buffers in src, + * unless the AV_FRAME_TRANSFER_SD_COPY flag is passed. + */ +int av_frame_copy_side_data(AVFrame* dst, const AVFrame* src, int flags); + /** * Get the buffer reference a given data plane is stored in. * @@ -901,6 +925,14 @@ AVFrameSideData *av_frame_get_side_data(const AVFrame *frame, */ void av_frame_remove_side_data(AVFrame *frame, enum AVFrameSideDataType type); +/** + * Remove and free all side data instances. + * + * @param frame from which to remove all side data. + */ +void av_frame_remove_all_side_data(AVFrame *frame); + + /** * Flags for frame cropping. diff --git a/libavutil/version.h b/libavutil/version.h index 2e9e02dda8..87178e9e9a 100644 --- a/libavutil/version.h +++ b/libavutil/version.h @@ -79,7 +79,7 @@ */ #define LIBAVUTIL_VERSION_MAJOR 57 -#define LIBAVUTIL_VERSION_MINOR 27 +#define LIBAVUTIL_VERSION_MINOR 28 #define LIBAVUTIL_VERSION_MICRO 100 #define LIBAVUTIL_VERSION_INT AV_VERSION_INT(LIBAVUTIL_VERSION_MAJOR, \ -- ffmpeg-codebot _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
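As a usage sketch for the new calls: AV_FRAME_TRANSFER_SD_COPY forces deep copies instead of new buffer references, and AV_FRAME_TRANSFER_SD_FILTER drops side data that no longer applies to the destination (currently pan-scan when the dimensions differ). A minimal caller, with clone_side_data() as an illustrative name:

    #include "libavutil/frame.h"

    /* Replace dst's side data with deep copies of src's, skipping entries
     * that do not match dst's geometry. */
    static int clone_side_data(AVFrame *dst, const AVFrame *src)
    {
        av_frame_remove_all_side_data(dst);
        return av_frame_copy_side_data(dst, src,
                                       AV_FRAME_TRANSFER_SD_COPY |
                                       AV_FRAME_TRANSFER_SD_FILTER);
    }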
* [FFmpeg-devel] [PATCH v4 2/6] avcodec/vpp_qsv: Copy side data from input to output frame 2022-06-26 23:41 ` [FFmpeg-devel] [PATCH v4 0/6] " ffmpegagent 2022-06-26 23:41 ` [FFmpeg-devel] [PATCH v4 1/6] avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() softworkz @ 2022-06-26 23:41 ` softworkz 2022-06-26 23:41 ` [FFmpeg-devel] [PATCH v4 3/6] avcodec/mpeg12dec: make mpeg_decode_user_data() accessible softworkz ` (5 subsequent siblings) 7 siblings, 0 replies; 65+ messages in thread From: softworkz @ 2022-06-26 23:41 UTC (permalink / raw) To: ffmpeg-devel; +Cc: Kieran Kunhya, softworkz, Xiang, Haihao From: softworkz <softworkz@hotmail.com> Signed-off-by: softworkz <softworkz@hotmail.com> --- libavfilter/qsvvpp.c | 6 ++++++ libavfilter/vf_overlay_qsv.c | 19 +++++++++++++++---- 2 files changed, 21 insertions(+), 4 deletions(-) diff --git a/libavfilter/qsvvpp.c b/libavfilter/qsvvpp.c index 954f882637..f4bf628073 100644 --- a/libavfilter/qsvvpp.c +++ b/libavfilter/qsvvpp.c @@ -843,6 +843,12 @@ int ff_qsvvpp_filter_frame(QSVVPPContext *s, AVFilterLink *inlink, AVFrame *picr return AVERROR(EAGAIN); break; } + + av_frame_remove_all_side_data(out_frame->frame); + ret = av_frame_copy_side_data(out_frame->frame, in_frame->frame, 0); + if (ret < 0) + return ret; + out_frame->frame->pts = av_rescale_q(out_frame->surface.Data.TimeStamp, default_tb, outlink->time_base); diff --git a/libavfilter/vf_overlay_qsv.c b/libavfilter/vf_overlay_qsv.c index 7e76b39aa9..e15214dbf2 100644 --- a/libavfilter/vf_overlay_qsv.c +++ b/libavfilter/vf_overlay_qsv.c @@ -231,13 +231,24 @@ static int process_frame(FFFrameSync *fs) { AVFilterContext *ctx = fs->parent; QSVOverlayContext *s = fs->opaque; + AVFrame *frame0 = NULL; AVFrame *frame = NULL; - int ret = 0, i; + int ret = 0; - for (i = 0; i < ctx->nb_inputs; i++) { + for (unsigned i = 0; i < ctx->nb_inputs; i++) { ret = ff_framesync_get_frame(fs, i, &frame, 0); - if (ret == 0) - ret = ff_qsvvpp_filter_frame(s->qsv, ctx->inputs[i], frame); + + if (ret == 0) { + if (i == 0) + frame0 = frame; + else { + av_frame_remove_all_side_data(frame); + ret = av_frame_copy_side_data(frame, frame0, 0); + } + + ret = ret < 0 ? ret : ff_qsvvpp_filter_frame(s->qsv, ctx->inputs[i], frame); + } + if (ret < 0 && ret != AVERROR(EAGAIN)) break; } -- ffmpeg-codebot _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
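Both hunks pass flags == 0, so the output frame gets new references to the same side-data buffers rather than deep copies, which is cheap for per-frame metadata such as A53 captions or HDR payloads. To confirm that a particular type survived the trip through the VPP, a check along these lines is enough (illustrative helper, not part of the patch):

    #include "libavutil/frame.h"

    /* Returns non-zero if the filtered frame still carries the given type,
     * e.g. AV_FRAME_DATA_A53_CC or AV_FRAME_DATA_MASTERING_DISPLAY_METADATA. */
    static int has_side_data(const AVFrame *frame, enum AVFrameSideDataType type)
    {
        return av_frame_get_side_data(frame, type) != NULL;
    }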
* [FFmpeg-devel] [PATCH v4 3/6] avcodec/mpeg12dec: make mpeg_decode_user_data() accessible 2022-06-26 23:41 ` [FFmpeg-devel] [PATCH v4 0/6] " ffmpegagent 2022-06-26 23:41 ` [FFmpeg-devel] [PATCH v4 1/6] avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() softworkz 2022-06-26 23:41 ` [FFmpeg-devel] [PATCH v4 2/6] avcodec/vpp_qsv: Copy side data from input to output frame softworkz @ 2022-06-26 23:41 ` softworkz 2022-06-26 23:41 ` [FFmpeg-devel] [PATCH v4 4/6] avcodec/hevcdec: make set_side_data() accessible softworkz ` (4 subsequent siblings) 7 siblings, 0 replies; 65+ messages in thread From: softworkz @ 2022-06-26 23:41 UTC (permalink / raw) To: ffmpeg-devel; +Cc: Kieran Kunhya, softworkz, Xiang, Haihao From: softworkz <softworkz@hotmail.com> Signed-off-by: softworkz <softworkz@hotmail.com> --- libavcodec/mpeg12.h | 28 ++++++++++++++++++++++++++++ libavcodec/mpeg12dec.c | 40 +++++----------------------------------- 2 files changed, 33 insertions(+), 35 deletions(-) diff --git a/libavcodec/mpeg12.h b/libavcodec/mpeg12.h index e0406b32d9..84a829cdd3 100644 --- a/libavcodec/mpeg12.h +++ b/libavcodec/mpeg12.h @@ -23,6 +23,7 @@ #define AVCODEC_MPEG12_H #include "mpegvideo.h" +#include "libavutil/stereo3d.h" /* Start codes. */ #define SEQ_END_CODE 0x000001b7 @@ -34,6 +35,31 @@ #define EXT_START_CODE 0x000001b5 #define USER_START_CODE 0x000001b2 +typedef struct Mpeg1Context { + MpegEncContext mpeg_enc_ctx; + int mpeg_enc_ctx_allocated; /* true if decoding context allocated */ + int repeat_field; /* true if we must repeat the field */ + AVPanScan pan_scan; /* some temporary storage for the panscan */ + AVStereo3D stereo3d; + int has_stereo3d; + AVBufferRef *a53_buf_ref; + uint8_t afd; + int has_afd; + int slice_count; + unsigned aspect_ratio_info; + AVRational save_aspect; + int save_width, save_height, save_progressive_seq; + int rc_buffer_size; + AVRational frame_rate_ext; /* MPEG-2 specific framerate modificator */ + unsigned frame_rate_index; + int sync; /* Did we reach a sync point like a GOP/SEQ/KEYFrame? */ + int closed_gop; + int tmpgexs; + int first_slice; + int extradata_decoded; + int64_t timecode_frame_start; /*< GOP timecode frame start number, in non drop frame format */ +} Mpeg1Context; + void ff_mpeg12_common_init(MpegEncContext *s); void ff_mpeg1_clean_buffers(MpegEncContext *s); @@ -45,4 +71,6 @@ void ff_mpeg12_find_best_frame_rate(AVRational frame_rate, int *code, int *ext_n, int *ext_d, int nonstandard); +void ff_mpeg_decode_user_data(AVCodecContext *avctx, Mpeg1Context *s1, const uint8_t *p, int buf_size); + #endif /* AVCODEC_MPEG12_H */ diff --git a/libavcodec/mpeg12dec.c b/libavcodec/mpeg12dec.c index e9bde48f7a..11d2b58185 100644 --- a/libavcodec/mpeg12dec.c +++ b/libavcodec/mpeg12dec.c @@ -58,31 +58,6 @@ #define A53_MAX_CC_COUNT 2000 -typedef struct Mpeg1Context { - MpegEncContext mpeg_enc_ctx; - int mpeg_enc_ctx_allocated; /* true if decoding context allocated */ - int repeat_field; /* true if we must repeat the field */ - AVPanScan pan_scan; /* some temporary storage for the panscan */ - AVStereo3D stereo3d; - int has_stereo3d; - AVBufferRef *a53_buf_ref; - uint8_t afd; - int has_afd; - int slice_count; - unsigned aspect_ratio_info; - AVRational save_aspect; - int save_width, save_height, save_progressive_seq; - int rc_buffer_size; - AVRational frame_rate_ext; /* MPEG-2 specific framerate modificator */ - unsigned frame_rate_index; - int sync; /* Did we reach a sync point like a GOP/SEQ/KEYFrame? 
*/ - int closed_gop; - int tmpgexs; - int first_slice; - int extradata_decoded; - int64_t timecode_frame_start; /*< GOP timecode frame start number, in non drop frame format */ -} Mpeg1Context; - #define MB_TYPE_ZERO_MV 0x20000000 static const uint32_t ptype2mb_type[7] = { @@ -2198,11 +2173,9 @@ static int vcr2_init_sequence(AVCodecContext *avctx) return 0; } -static int mpeg_decode_a53_cc(AVCodecContext *avctx, +static int mpeg_decode_a53_cc(AVCodecContext *avctx, Mpeg1Context *s1, const uint8_t *p, int buf_size) { - Mpeg1Context *s1 = avctx->priv_data; - if (buf_size >= 6 && p[0] == 'G' && p[1] == 'A' && p[2] == '9' && p[3] == '4' && p[4] == 3 && (p[5] & 0x40)) { @@ -2333,12 +2306,9 @@ static int mpeg_decode_a53_cc(AVCodecContext *avctx, return 0; } -static void mpeg_decode_user_data(AVCodecContext *avctx, - const uint8_t *p, int buf_size) +void ff_mpeg_decode_user_data(AVCodecContext *avctx, Mpeg1Context *s1, const uint8_t *p, int buf_size) { - Mpeg1Context *s = avctx->priv_data; const uint8_t *buf_end = p + buf_size; - Mpeg1Context *s1 = avctx->priv_data; #if 0 int i; @@ -2352,7 +2322,7 @@ static void mpeg_decode_user_data(AVCodecContext *avctx, int i; for(i=0; i<20; i++) if (!memcmp(p+i, "\0TMPGEXS\0", 9)){ - s->tmpgexs= 1; + s1->tmpgexs= 1; } } /* we parse the DTG active format information */ @@ -2398,7 +2368,7 @@ static void mpeg_decode_user_data(AVCodecContext *avctx, break; } } - } else if (mpeg_decode_a53_cc(avctx, p, buf_size)) { + } else if (mpeg_decode_a53_cc(avctx, s1, p, buf_size)) { return; } } @@ -2590,7 +2560,7 @@ static int decode_chunks(AVCodecContext *avctx, AVFrame *picture, } break; case USER_START_CODE: - mpeg_decode_user_data(avctx, buf_ptr, input_size); + ff_mpeg_decode_user_data(avctx, s, buf_ptr, input_size); break; case GOP_START_CODE: if (last_code == 0) { -- ffmpeg-codebot _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
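Moving Mpeg1Context into mpeg12.h is what lets a caller outside mpeg12dec.c keep its own instance, feed it the bytes that follow a user_data start code, and afterwards read the results from a53_buf_ref, afd/has_afd and stereo3d/has_stereo3d — which is how patch 6/6 handles MPEG-2. A reduced sketch; scan_user_data() is an illustrative name and buf/size are assumed to come from the demuxer or from MSDK:

    #include "mpeg12.h"

    /* Parse one user_data chunk with the now-exported helper and report
     * whether A53 closed captions were found in it. */
    static int scan_user_data(AVCodecContext *avctx, Mpeg1Context *mpeg_ctx,
                              const uint8_t *buf, int size)
    {
        ff_mpeg_decode_user_data(avctx, mpeg_ctx, buf, size);
        return mpeg_ctx->a53_buf_ref != NULL;
    }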
* [FFmpeg-devel] [PATCH v4 4/6] avcodec/hevcdec: make set_side_data() accessible 2022-06-26 23:41 ` [FFmpeg-devel] [PATCH v4 0/6] " ffmpegagent ` (2 preceding siblings ...) 2022-06-26 23:41 ` [FFmpeg-devel] [PATCH v4 3/6] avcodec/mpeg12dec: make mpeg_decode_user_data() accessible softworkz @ 2022-06-26 23:41 ` softworkz 2022-06-26 23:41 ` [FFmpeg-devel] [PATCH v4 5/6] avcodec/h264dec: make h264_export_frame_props() accessible softworkz ` (3 subsequent siblings) 7 siblings, 0 replies; 65+ messages in thread From: softworkz @ 2022-06-26 23:41 UTC (permalink / raw) To: ffmpeg-devel; +Cc: Kieran Kunhya, softworkz, Xiang, Haihao From: softworkz <softworkz@hotmail.com> Signed-off-by: softworkz <softworkz@hotmail.com> --- libavcodec/hevcdec.c | 117 +++++++++++++++++++++---------------------- libavcodec/hevcdec.h | 9 ++++ 2 files changed, 67 insertions(+), 59 deletions(-) diff --git a/libavcodec/hevcdec.c b/libavcodec/hevcdec.c index e84c30dd13..b4d8db8c6b 100644 --- a/libavcodec/hevcdec.c +++ b/libavcodec/hevcdec.c @@ -2726,23 +2726,22 @@ error: return res; } -static int set_side_data(HEVCContext *s) +int ff_hevc_set_side_data(AVCodecContext *logctx, HEVCSEI *sei, HEVCContext *s, AVFrame *out) { - AVFrame *out = s->ref->frame; - int ret; + int ret = 0; - if (s->sei.frame_packing.present && - s->sei.frame_packing.arrangement_type >= 3 && - s->sei.frame_packing.arrangement_type <= 5 && - s->sei.frame_packing.content_interpretation_type > 0 && - s->sei.frame_packing.content_interpretation_type < 3) { + if (sei->frame_packing.present && + sei->frame_packing.arrangement_type >= 3 && + sei->frame_packing.arrangement_type <= 5 && + sei->frame_packing.content_interpretation_type > 0 && + sei->frame_packing.content_interpretation_type < 3) { AVStereo3D *stereo = av_stereo3d_create_side_data(out); if (!stereo) return AVERROR(ENOMEM); - switch (s->sei.frame_packing.arrangement_type) { + switch (sei->frame_packing.arrangement_type) { case 3: - if (s->sei.frame_packing.quincunx_subsampling) + if (sei->frame_packing.quincunx_subsampling) stereo->type = AV_STEREO3D_SIDEBYSIDE_QUINCUNX; else stereo->type = AV_STEREO3D_SIDEBYSIDE; @@ -2755,21 +2754,21 @@ static int set_side_data(HEVCContext *s) break; } - if (s->sei.frame_packing.content_interpretation_type == 2) + if (sei->frame_packing.content_interpretation_type == 2) stereo->flags = AV_STEREO3D_FLAG_INVERT; - if (s->sei.frame_packing.arrangement_type == 5) { - if (s->sei.frame_packing.current_frame_is_frame0_flag) + if (sei->frame_packing.arrangement_type == 5) { + if (sei->frame_packing.current_frame_is_frame0_flag) stereo->view = AV_STEREO3D_VIEW_LEFT; else stereo->view = AV_STEREO3D_VIEW_RIGHT; } } - if (s->sei.display_orientation.present && - (s->sei.display_orientation.anticlockwise_rotation || - s->sei.display_orientation.hflip || s->sei.display_orientation.vflip)) { - double angle = s->sei.display_orientation.anticlockwise_rotation * 360 / (double) (1 << 16); + if (sei->display_orientation.present && + (sei->display_orientation.anticlockwise_rotation || + sei->display_orientation.hflip || sei->display_orientation.vflip)) { + double angle = sei->display_orientation.anticlockwise_rotation * 360 / (double) (1 << 16); AVFrameSideData *rotation = av_frame_new_side_data(out, AV_FRAME_DATA_DISPLAYMATRIX, sizeof(int32_t) * 9); @@ -2788,17 +2787,17 @@ static int set_side_data(HEVCContext *s) * (1 - 2 * !!s->sei.display_orientation.vflip); av_display_rotation_set((int32_t *)rotation->data, angle); av_display_matrix_flip((int32_t *)rotation->data, - 
s->sei.display_orientation.hflip, - s->sei.display_orientation.vflip); + sei->display_orientation.hflip, + sei->display_orientation.vflip); } // Decrement the mastering display flag when IRAP frame has no_rasl_output_flag=1 // so the side data persists for the entire coded video sequence. - if (s->sei.mastering_display.present > 0 && + if (s && sei->mastering_display.present > 0 && IS_IRAP(s) && s->no_rasl_output_flag) { - s->sei.mastering_display.present--; + sei->mastering_display.present--; } - if (s->sei.mastering_display.present) { + if (sei->mastering_display.present) { // HEVC uses a g,b,r ordering, which we convert to a more natural r,g,b const int mapping[3] = {2, 0, 1}; const int chroma_den = 50000; @@ -2811,25 +2810,25 @@ static int set_side_data(HEVCContext *s) for (i = 0; i < 3; i++) { const int j = mapping[i]; - metadata->display_primaries[i][0].num = s->sei.mastering_display.display_primaries[j][0]; + metadata->display_primaries[i][0].num = sei->mastering_display.display_primaries[j][0]; metadata->display_primaries[i][0].den = chroma_den; - metadata->display_primaries[i][1].num = s->sei.mastering_display.display_primaries[j][1]; + metadata->display_primaries[i][1].num = sei->mastering_display.display_primaries[j][1]; metadata->display_primaries[i][1].den = chroma_den; } - metadata->white_point[0].num = s->sei.mastering_display.white_point[0]; + metadata->white_point[0].num = sei->mastering_display.white_point[0]; metadata->white_point[0].den = chroma_den; - metadata->white_point[1].num = s->sei.mastering_display.white_point[1]; + metadata->white_point[1].num = sei->mastering_display.white_point[1]; metadata->white_point[1].den = chroma_den; - metadata->max_luminance.num = s->sei.mastering_display.max_luminance; + metadata->max_luminance.num = sei->mastering_display.max_luminance; metadata->max_luminance.den = luma_den; - metadata->min_luminance.num = s->sei.mastering_display.min_luminance; + metadata->min_luminance.num = sei->mastering_display.min_luminance; metadata->min_luminance.den = luma_den; metadata->has_luminance = 1; metadata->has_primaries = 1; - av_log(s->avctx, AV_LOG_DEBUG, "Mastering Display Metadata:\n"); - av_log(s->avctx, AV_LOG_DEBUG, + av_log(logctx, AV_LOG_DEBUG, "Mastering Display Metadata:\n"); + av_log(logctx, AV_LOG_DEBUG, "r(%5.4f,%5.4f) g(%5.4f,%5.4f) b(%5.4f %5.4f) wp(%5.4f, %5.4f)\n", av_q2d(metadata->display_primaries[0][0]), av_q2d(metadata->display_primaries[0][1]), @@ -2838,31 +2837,31 @@ static int set_side_data(HEVCContext *s) av_q2d(metadata->display_primaries[2][0]), av_q2d(metadata->display_primaries[2][1]), av_q2d(metadata->white_point[0]), av_q2d(metadata->white_point[1])); - av_log(s->avctx, AV_LOG_DEBUG, + av_log(logctx, AV_LOG_DEBUG, "min_luminance=%f, max_luminance=%f\n", av_q2d(metadata->min_luminance), av_q2d(metadata->max_luminance)); } // Decrement the mastering display flag when IRAP frame has no_rasl_output_flag=1 // so the side data persists for the entire coded video sequence. 
- if (s->sei.content_light.present > 0 && + if (s && sei->content_light.present > 0 && IS_IRAP(s) && s->no_rasl_output_flag) { - s->sei.content_light.present--; + sei->content_light.present--; } - if (s->sei.content_light.present) { + if (sei->content_light.present) { AVContentLightMetadata *metadata = av_content_light_metadata_create_side_data(out); if (!metadata) return AVERROR(ENOMEM); - metadata->MaxCLL = s->sei.content_light.max_content_light_level; - metadata->MaxFALL = s->sei.content_light.max_pic_average_light_level; + metadata->MaxCLL = sei->content_light.max_content_light_level; + metadata->MaxFALL = sei->content_light.max_pic_average_light_level; - av_log(s->avctx, AV_LOG_DEBUG, "Content Light Level Metadata:\n"); - av_log(s->avctx, AV_LOG_DEBUG, "MaxCLL=%d, MaxFALL=%d\n", + av_log(logctx, AV_LOG_DEBUG, "Content Light Level Metadata:\n"); + av_log(logctx, AV_LOG_DEBUG, "MaxCLL=%d, MaxFALL=%d\n", metadata->MaxCLL, metadata->MaxFALL); } - if (s->sei.a53_caption.buf_ref) { - HEVCSEIA53Caption *a53 = &s->sei.a53_caption; + if (sei->a53_caption.buf_ref) { + HEVCSEIA53Caption *a53 = &sei->a53_caption; AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_A53_CC, a53->buf_ref); if (!sd) @@ -2870,8 +2869,8 @@ static int set_side_data(HEVCContext *s) a53->buf_ref = NULL; } - for (int i = 0; i < s->sei.unregistered.nb_buf_ref; i++) { - HEVCSEIUnregistered *unreg = &s->sei.unregistered; + for (int i = 0; i < sei->unregistered.nb_buf_ref; i++) { + HEVCSEIUnregistered *unreg = &sei->unregistered; if (unreg->buf_ref[i]) { AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, @@ -2882,9 +2881,9 @@ static int set_side_data(HEVCContext *s) unreg->buf_ref[i] = NULL; } } - s->sei.unregistered.nb_buf_ref = 0; + sei->unregistered.nb_buf_ref = 0; - if (s->sei.timecode.present) { + if (s && sei->timecode.present) { uint32_t *tc_sd; char tcbuf[AV_TIMECODE_STR_SIZE]; AVFrameSideData *tcside = av_frame_new_side_data(out, AV_FRAME_DATA_S12M_TIMECODE, @@ -2893,25 +2892,25 @@ static int set_side_data(HEVCContext *s) return AVERROR(ENOMEM); tc_sd = (uint32_t*)tcside->data; - tc_sd[0] = s->sei.timecode.num_clock_ts; + tc_sd[0] = sei->timecode.num_clock_ts; for (int i = 0; i < tc_sd[0]; i++) { - int drop = s->sei.timecode.cnt_dropped_flag[i]; - int hh = s->sei.timecode.hours_value[i]; - int mm = s->sei.timecode.minutes_value[i]; - int ss = s->sei.timecode.seconds_value[i]; - int ff = s->sei.timecode.n_frames[i]; + int drop = sei->timecode.cnt_dropped_flag[i]; + int hh = sei->timecode.hours_value[i]; + int mm = sei->timecode.minutes_value[i]; + int ss = sei->timecode.seconds_value[i]; + int ff = sei->timecode.n_frames[i]; tc_sd[i + 1] = av_timecode_get_smpte(s->avctx->framerate, drop, hh, mm, ss, ff); av_timecode_make_smpte_tc_string2(tcbuf, s->avctx->framerate, tc_sd[i + 1], 0, 0); av_dict_set(&out->metadata, "timecode", tcbuf, 0); } - s->sei.timecode.num_clock_ts = 0; + sei->timecode.num_clock_ts = 0; } - if (s->sei.film_grain_characteristics.present) { - HEVCSEIFilmGrainCharacteristics *fgc = &s->sei.film_grain_characteristics; + if (s && sei->film_grain_characteristics.present) { + HEVCSEIFilmGrainCharacteristics *fgc = &sei->film_grain_characteristics; AVFilmGrainParams *fgp = av_film_grain_params_create_side_data(out); if (!fgp) return AVERROR(ENOMEM); @@ -2965,8 +2964,8 @@ static int set_side_data(HEVCContext *s) fgc->present = fgc->persistence_flag; } - if (s->sei.dynamic_hdr_plus.info) { - AVBufferRef *info_ref = av_buffer_ref(s->sei.dynamic_hdr_plus.info); + if 
(sei->dynamic_hdr_plus.info) { + AVBufferRef *info_ref = av_buffer_ref(sei->dynamic_hdr_plus.info); if (!info_ref) return AVERROR(ENOMEM); @@ -2976,7 +2975,7 @@ static int set_side_data(HEVCContext *s) } } - if (s->rpu_buf) { + if (s && s->rpu_buf) { AVFrameSideData *rpu = av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_DOVI_RPU_BUFFER, s->rpu_buf); if (!rpu) return AVERROR(ENOMEM); @@ -2984,10 +2983,10 @@ static int set_side_data(HEVCContext *s) s->rpu_buf = NULL; } - if ((ret = ff_dovi_attach_side_data(&s->dovi_ctx, out)) < 0) + if (s && (ret = ff_dovi_attach_side_data(&s->dovi_ctx, out)) < 0) return ret; - if (s->sei.dynamic_hdr_vivid.info) { + if (s && s->sei.dynamic_hdr_vivid.info) { AVBufferRef *info_ref = av_buffer_ref(s->sei.dynamic_hdr_vivid.info); if (!info_ref) return AVERROR(ENOMEM); @@ -3046,7 +3045,7 @@ static int hevc_frame_start(HEVCContext *s) goto fail; } - ret = set_side_data(s); + ret = ff_hevc_set_side_data(s->avctx, &s->sei, s, s->ref->frame); if (ret < 0) goto fail; diff --git a/libavcodec/hevcdec.h b/libavcodec/hevcdec.h index de861b88b3..cd8cd40da0 100644 --- a/libavcodec/hevcdec.h +++ b/libavcodec/hevcdec.h @@ -690,6 +690,15 @@ void ff_hevc_hls_residual_coding(HEVCContext *s, int x0, int y0, void ff_hevc_hls_mvd_coding(HEVCContext *s, int x0, int y0, int log2_cb_size); +/** + * Set the decodec side data to an AVFrame. + * @logctx context for logging. + * @sei HEVCSEI decoding context, must not be NULL. + * @s HEVCContext, can be NULL. + * @return < 0 on error, 0 otherwise. + */ +int ff_hevc_set_side_data(AVCodecContext *logctx, HEVCSEI *sei, HEVCContext *s, AVFrame *out); + extern const uint8_t ff_hevc_qpel_extra_before[4]; extern const uint8_t ff_hevc_qpel_extra_after[4]; extern const uint8_t ff_hevc_qpel_extra[4]; -- ffmpeg-codebot _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
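For reference on the constants above (these come from the HEVC mastering-display SEI semantics, not from the patch itself): the primaries are coded in increments of 0.00002 and the luminance values in increments of 0.0001 cd/m², which is why ff_hevc_set_side_data() builds its rationals with chroma_den = 50000 and luma_den = 10000. A tiny standalone check of the luminance scaling:

    #include <stdio.h>
    #include "libavutil/rational.h"

    int main(void)
    {
        /* max_display_mastering_luminance of 10000000 in the SEI
         * corresponds to 1000 cd/m^2. */
        AVRational max_luminance = { 10000000, 10000 };
        printf("%g cd/m^2\n", av_q2d(max_luminance));   /* prints 1000 */
        return 0;
    }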
* [FFmpeg-devel] [PATCH v4 5/6] avcodec/h264dec: make h264_export_frame_props() accessible 2022-06-26 23:41 ` [FFmpeg-devel] [PATCH v4 0/6] " ffmpegagent ` (3 preceding siblings ...) 2022-06-26 23:41 ` [FFmpeg-devel] [PATCH v4 4/6] avcodec/hevcdec: make set_side_data() accessible softworkz @ 2022-06-26 23:41 ` softworkz 2022-06-26 23:41 ` [FFmpeg-devel] [PATCH v4 6/6] avcodec/qsvdec: Implement SEI parsing for QSV decoders softworkz ` (2 subsequent siblings) 7 siblings, 0 replies; 65+ messages in thread From: softworkz @ 2022-06-26 23:41 UTC (permalink / raw) To: ffmpeg-devel; +Cc: Kieran Kunhya, softworkz, Xiang, Haihao From: softworkz <softworkz@hotmail.com> Signed-off-by: softworkz <softworkz@hotmail.com> --- libavcodec/h264_slice.c | 98 +++++++++++++++++++++-------------------- libavcodec/h264dec.h | 2 + 2 files changed, 52 insertions(+), 48 deletions(-) diff --git a/libavcodec/h264_slice.c b/libavcodec/h264_slice.c index d56722a5c2..f2a4c1c657 100644 --- a/libavcodec/h264_slice.c +++ b/libavcodec/h264_slice.c @@ -1157,11 +1157,10 @@ static int h264_init_ps(H264Context *h, const H264SliceContext *sl, int first_sl return 0; } -static int h264_export_frame_props(H264Context *h) +int ff_h264_export_frame_props(AVCodecContext *logctx, H264SEIContext *sei, H264Context *h, AVFrame *out) { - const SPS *sps = h->ps.sps; - H264Picture *cur = h->cur_pic_ptr; - AVFrame *out = cur->f; + const SPS *sps = h ? h->ps.sps : NULL; + H264Picture *cur = h ? h->cur_pic_ptr : NULL; out->interlaced_frame = 0; out->repeat_pict = 0; @@ -1169,19 +1168,19 @@ static int h264_export_frame_props(H264Context *h) /* Signal interlacing information externally. */ /* Prioritize picture timing SEI information over used * decoding process if it exists. */ - if (h->sei.picture_timing.present) { - int ret = ff_h264_sei_process_picture_timing(&h->sei.picture_timing, sps, - h->avctx); + if (sps && sei->picture_timing.present) { + int ret = ff_h264_sei_process_picture_timing(&sei->picture_timing, sps, + logctx); if (ret < 0) { - av_log(h->avctx, AV_LOG_ERROR, "Error processing a picture timing SEI\n"); - if (h->avctx->err_recognition & AV_EF_EXPLODE) + av_log(logctx, AV_LOG_ERROR, "Error processing a picture timing SEI\n"); + if (logctx->err_recognition & AV_EF_EXPLODE) return ret; - h->sei.picture_timing.present = 0; + sei->picture_timing.present = 0; } } - if (sps->pic_struct_present_flag && h->sei.picture_timing.present) { - H264SEIPictureTiming *pt = &h->sei.picture_timing; + if (h && sps && sps->pic_struct_present_flag && sei->picture_timing.present) { + H264SEIPictureTiming *pt = &sei->picture_timing; switch (pt->pic_struct) { case H264_SEI_PIC_STRUCT_FRAME: break; @@ -1215,21 +1214,23 @@ static int h264_export_frame_props(H264Context *h) if ((pt->ct_type & 3) && pt->pic_struct <= H264_SEI_PIC_STRUCT_BOTTOM_TOP) out->interlaced_frame = (pt->ct_type & (1 << 1)) != 0; - } else { + } else if (h) { /* Derive interlacing flag from used decoding process. */ out->interlaced_frame = FIELD_OR_MBAFF_PICTURE(h); } - h->prev_interlaced_frame = out->interlaced_frame; - if (cur->field_poc[0] != cur->field_poc[1]) { + if (h) + h->prev_interlaced_frame = out->interlaced_frame; + + if (sps && cur->field_poc[0] != cur->field_poc[1]) { /* Derive top_field_first from field pocs. 
*/ out->top_field_first = cur->field_poc[0] < cur->field_poc[1]; - } else { - if (sps->pic_struct_present_flag && h->sei.picture_timing.present) { + } else if (sps) { + if (sps->pic_struct_present_flag && sei->picture_timing.present) { /* Use picture timing SEI information. Even if it is a * information of a past frame, better than nothing. */ - if (h->sei.picture_timing.pic_struct == H264_SEI_PIC_STRUCT_TOP_BOTTOM || - h->sei.picture_timing.pic_struct == H264_SEI_PIC_STRUCT_TOP_BOTTOM_TOP) + if (sei->picture_timing.pic_struct == H264_SEI_PIC_STRUCT_TOP_BOTTOM || + sei->picture_timing.pic_struct == H264_SEI_PIC_STRUCT_TOP_BOTTOM_TOP) out->top_field_first = 1; else out->top_field_first = 0; @@ -1243,11 +1244,11 @@ static int h264_export_frame_props(H264Context *h) } } - if (h->sei.frame_packing.present && - h->sei.frame_packing.arrangement_type <= 6 && - h->sei.frame_packing.content_interpretation_type > 0 && - h->sei.frame_packing.content_interpretation_type < 3) { - H264SEIFramePacking *fp = &h->sei.frame_packing; + if (sei->frame_packing.present && + sei->frame_packing.arrangement_type <= 6 && + sei->frame_packing.content_interpretation_type > 0 && + sei->frame_packing.content_interpretation_type < 3) { + H264SEIFramePacking *fp = &sei->frame_packing; AVStereo3D *stereo = av_stereo3d_create_side_data(out); if (stereo) { switch (fp->arrangement_type) { @@ -1289,11 +1290,11 @@ static int h264_export_frame_props(H264Context *h) } } - if (h->sei.display_orientation.present && - (h->sei.display_orientation.anticlockwise_rotation || - h->sei.display_orientation.hflip || - h->sei.display_orientation.vflip)) { - H264SEIDisplayOrientation *o = &h->sei.display_orientation; + if (sei->display_orientation.present && + (sei->display_orientation.anticlockwise_rotation || + sei->display_orientation.hflip || + sei->display_orientation.vflip)) { + H264SEIDisplayOrientation *o = &sei->display_orientation; double angle = o->anticlockwise_rotation * 360 / (double) (1 << 16); AVFrameSideData *rotation = av_frame_new_side_data(out, AV_FRAME_DATA_DISPLAYMATRIX, @@ -1314,29 +1315,30 @@ static int h264_export_frame_props(H264Context *h) } } - if (h->sei.afd.present) { + if (sei->afd.present) { AVFrameSideData *sd = av_frame_new_side_data(out, AV_FRAME_DATA_AFD, sizeof(uint8_t)); if (sd) { - *sd->data = h->sei.afd.active_format_description; - h->sei.afd.present = 0; + *sd->data = sei->afd.active_format_description; + sei->afd.present = 0; } } - if (h->sei.a53_caption.buf_ref) { - H264SEIA53Caption *a53 = &h->sei.a53_caption; + if (sei->a53_caption.buf_ref) { + H264SEIA53Caption *a53 = &sei->a53_caption; AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_A53_CC, a53->buf_ref); if (!sd) av_buffer_unref(&a53->buf_ref); a53->buf_ref = NULL; - h->avctx->properties |= FF_CODEC_PROPERTY_CLOSED_CAPTIONS; + if (h) + h->avctx->properties |= FF_CODEC_PROPERTY_CLOSED_CAPTIONS; } - for (int i = 0; i < h->sei.unregistered.nb_buf_ref; i++) { - H264SEIUnregistered *unreg = &h->sei.unregistered; + for (int i = 0; i < sei->unregistered.nb_buf_ref; i++) { + H264SEIUnregistered *unreg = &sei->unregistered; if (unreg->buf_ref[i]) { AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, @@ -1347,10 +1349,10 @@ static int h264_export_frame_props(H264Context *h) unreg->buf_ref[i] = NULL; } } - h->sei.unregistered.nb_buf_ref = 0; + sei->unregistered.nb_buf_ref = 0; - if (h->sei.film_grain_characteristics.present) { - H264SEIFilmGrainCharacteristics *fgc = &h->sei.film_grain_characteristics; + if (h && sps && 
sei->film_grain_characteristics.present) { + H264SEIFilmGrainCharacteristics *fgc = &sei->film_grain_characteristics; AVFilmGrainParams *fgp = av_film_grain_params_create_side_data(out); if (!fgp) return AVERROR(ENOMEM); @@ -1404,7 +1406,7 @@ static int h264_export_frame_props(H264Context *h) h->avctx->properties |= FF_CODEC_PROPERTY_FILM_GRAIN; } - if (h->sei.picture_timing.timecode_cnt > 0) { + if (h && sei->picture_timing.timecode_cnt > 0) { uint32_t *tc_sd; char tcbuf[AV_TIMECODE_STR_SIZE]; @@ -1415,14 +1417,14 @@ static int h264_export_frame_props(H264Context *h) return AVERROR(ENOMEM); tc_sd = (uint32_t*)tcside->data; - tc_sd[0] = h->sei.picture_timing.timecode_cnt; + tc_sd[0] = sei->picture_timing.timecode_cnt; for (int i = 0; i < tc_sd[0]; i++) { - int drop = h->sei.picture_timing.timecode[i].dropframe; - int hh = h->sei.picture_timing.timecode[i].hours; - int mm = h->sei.picture_timing.timecode[i].minutes; - int ss = h->sei.picture_timing.timecode[i].seconds; - int ff = h->sei.picture_timing.timecode[i].frame; + int drop = sei->picture_timing.timecode[i].dropframe; + int hh = sei->picture_timing.timecode[i].hours; + int mm = sei->picture_timing.timecode[i].minutes; + int ss = sei->picture_timing.timecode[i].seconds; + int ff = sei->picture_timing.timecode[i].frame; tc_sd[i + 1] = av_timecode_get_smpte(h->avctx->framerate, drop, hh, mm, ss, ff); av_timecode_make_smpte_tc_string2(tcbuf, h->avctx->framerate, tc_sd[i + 1], 0, 0); @@ -1817,7 +1819,7 @@ static int h264_field_start(H264Context *h, const H264SliceContext *sl, * field coded frames, since some SEI information is present for each field * and is merged by the SEI parsing code. */ if (!FIELD_PICTURE(h) || !h->first_field || h->missing_fields > 1) { - ret = h264_export_frame_props(h); + ret = ff_h264_export_frame_props(h->avctx, &h->sei, h, h->cur_pic_ptr->f); if (ret < 0) return ret; diff --git a/libavcodec/h264dec.h b/libavcodec/h264dec.h index 9a1ec1bace..38930da4ca 100644 --- a/libavcodec/h264dec.h +++ b/libavcodec/h264dec.h @@ -808,4 +808,6 @@ void ff_h264_free_tables(H264Context *h); void ff_h264_set_erpic(ERPicture *dst, H264Picture *src); +int ff_h264_export_frame_props(AVCodecContext *logctx, H264SEIContext *sei, H264Context *h, AVFrame *out); + #endif /* AVCODEC_H264DEC_H */ -- ffmpeg-codebot _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
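One detail worth noting in the A53 and unregistered-user-data branches is the buffer handover: av_frame_new_side_data_from_buf() takes ownership of the AVBufferRef on success, so the reference is only unreffed when attaching fails, and the SEI context's pointer is cleared in either case so the same buffer cannot be attached twice. As an isolated sketch of that pattern (attach_cc_buffer() is an illustrative name):

    #include "libavutil/frame.h"

    /* Hand an A53 caption buffer over to the frame, mirroring the
     * ownership rules used in ff_h264_export_frame_props(). */
    static void attach_cc_buffer(AVFrame *out, AVBufferRef **buf_ref)
    {
        AVFrameSideData *sd =
            av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_A53_CC, *buf_ref);
        if (!sd)
            av_buffer_unref(buf_ref);
        *buf_ref = NULL;
    }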
* [FFmpeg-devel] [PATCH v4 6/6] avcodec/qsvdec: Implement SEI parsing for QSV decoders 2022-06-26 23:41 ` [FFmpeg-devel] [PATCH v4 0/6] " ffmpegagent ` (4 preceding siblings ...) 2022-06-26 23:41 ` [FFmpeg-devel] [PATCH v4 5/6] avcodec/h264dec: make h264_export_frame_props() accessible softworkz @ 2022-06-26 23:41 ` softworkz 2022-06-28 4:16 ` Andreas Rheinhardt 2022-06-27 4:18 ` [FFmpeg-devel] [PATCH v4 0/6] " Xiang, Haihao 2022-07-01 20:48 ` [FFmpeg-devel] [PATCH v5 " ffmpegagent 7 siblings, 1 reply; 65+ messages in thread From: softworkz @ 2022-06-26 23:41 UTC (permalink / raw) To: ffmpeg-devel; +Cc: Kieran Kunhya, softworkz, Xiang, Haihao From: softworkz <softworkz@hotmail.com> Signed-off-by: softworkz <softworkz@hotmail.com> --- libavcodec/qsvdec.c | 234 ++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 234 insertions(+) diff --git a/libavcodec/qsvdec.c b/libavcodec/qsvdec.c index 5fc5bed4c8..e854f363ec 100644 --- a/libavcodec/qsvdec.c +++ b/libavcodec/qsvdec.c @@ -49,6 +49,12 @@ #include "hwconfig.h" #include "qsv.h" #include "qsv_internal.h" +#include "h264dec.h" +#include "h264_sei.h" +#include "hevcdec.h" +#include "hevc_ps.h" +#include "hevc_sei.h" +#include "mpeg12.h" static const AVRational mfx_tb = { 1, 90000 }; @@ -60,6 +66,8 @@ static const AVRational mfx_tb = { 1, 90000 }; AV_NOPTS_VALUE : pts_tb.num ? \ av_rescale_q(mfx_pts, mfx_tb, pts_tb) : mfx_pts) +#define PAYLOAD_BUFFER_SIZE 65535 + typedef struct QSVAsyncFrame { mfxSyncPoint *sync; QSVFrame *frame; @@ -101,6 +109,9 @@ typedef struct QSVContext { mfxExtBuffer **ext_buffers; int nb_ext_buffers; + + mfxU8 payload_buffer[PAYLOAD_BUFFER_SIZE]; + Mpeg1Context mpeg_ctx; } QSVContext; static const AVCodecHWConfigInternal *const qsv_hw_configs[] = { @@ -599,6 +610,210 @@ static int qsv_export_film_grain(AVCodecContext *avctx, mfxExtAV1FilmGrainParam return 0; } #endif +static int find_start_offset(mfxU8 data[4]) +{ + if (data[0] == 0 && data[1] == 0 && data[2] == 1) + return 3; + + if (data[0] == 0 && data[1] == 0 && data[2] == 0 && data[3] == 1) + return 4; + + return 0; +} + +static int parse_sei_h264(AVCodecContext* avctx, QSVContext* q, AVFrame* out) +{ + H264SEIContext sei = { 0 }; + GetBitContext gb = { 0 }; + mfxPayload payload = { 0, .Data = &q->payload_buffer[0], .BufSize = sizeof(q->payload_buffer) }; + mfxU64 ts; + int ret; + + while (1) { + int start; + memset(payload.Data, 0, payload.BufSize); + + ret = MFXVideoDECODE_GetPayload(q->session, &ts, &payload); + if (ret == MFX_ERR_NOT_ENOUGH_BUFFER) { + av_log(avctx, AV_LOG_WARNING, "Warning: Insufficient buffer on GetPayload(). 
Size: %"PRIu64" Needed: %d\n", sizeof(q->payload_buffer), payload.BufSize); + return 0; + } + if (ret != MFX_ERR_NONE) + return ret; + + if (payload.NumBit == 0 || payload.NumBit >= payload.BufSize * 8) + break; + + start = find_start_offset(payload.Data); + + switch (payload.Type) { + case SEI_TYPE_BUFFERING_PERIOD: + case SEI_TYPE_PIC_TIMING: + continue; + } + + if (init_get_bits(&gb, &payload.Data[start], payload.NumBit - start * 8) < 0) + av_log(avctx, AV_LOG_ERROR, "Error initializing bitstream reader SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); + else { + ret = ff_h264_sei_decode(&sei, &gb, NULL, avctx); + + if (ret < 0) + av_log(avctx, AV_LOG_WARNING, "Failed to parse SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); + else + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d Numbits %d\n", payload.Type, payload.NumBit); + } + } + + if (out) + return ff_h264_export_frame_props(avctx, &sei, NULL, out); + + return 0; +} + +static int parse_sei_hevc(AVCodecContext* avctx, QSVContext* q, QSVFrame* out) +{ + HEVCSEI sei = { 0 }; + HEVCParamSets ps = { 0 }; + GetBitContext gb = { 0 }; + mfxPayload payload = { 0, .Data = &q->payload_buffer[0], .BufSize = sizeof(q->payload_buffer) }; + mfxFrameSurface1 *surface = &out->surface; + mfxU64 ts; + int ret, has_logged = 0; + + while (1) { + int start; + memset(payload.Data, 0, payload.BufSize); + + ret = MFXVideoDECODE_GetPayload(q->session, &ts, &payload); + if (ret == MFX_ERR_NOT_ENOUGH_BUFFER) { + av_log(avctx, AV_LOG_WARNING, "Warning: Insufficient buffer on GetPayload(). Size: %"PRIu64" Needed: %d\n", sizeof(q->payload_buffer), payload.BufSize); + return 0; + } + if (ret != MFX_ERR_NONE) + return ret; + + if (payload.NumBit == 0 || payload.NumBit >= payload.BufSize * 8) + break; + + if (!has_logged) { + has_logged = 1; + av_log(avctx, AV_LOG_VERBOSE, "-----------------------------------------\n"); + av_log(avctx, AV_LOG_VERBOSE, "Start reading SEI - payload timestamp: %llu - surface timestamp: %llu\n", ts, surface->Data.TimeStamp); + } + + if (ts != surface->Data.TimeStamp) { + av_log(avctx, AV_LOG_WARNING, "GetPayload timestamp (%llu) does not match surface timestamp: (%llu)\n", ts, surface->Data.TimeStamp); + } + + start = find_start_offset(payload.Data); + + av_log(avctx, AV_LOG_VERBOSE, "parsing SEI type: %3d Numbits %3d Start: %d\n", payload.Type, payload.NumBit, start); + + switch (payload.Type) { + case SEI_TYPE_BUFFERING_PERIOD: + case SEI_TYPE_PIC_TIMING: + continue; + case SEI_TYPE_MASTERING_DISPLAY_COLOUR_VOLUME: + // There seems to be a bug in MSDK + payload.NumBit -= 8; + + break; + case SEI_TYPE_CONTENT_LIGHT_LEVEL_INFO: + // There seems to be a bug in MSDK + payload.NumBit = 48; + + break; + case SEI_TYPE_USER_DATA_REGISTERED_ITU_T_T35: + // There seems to be a bug in MSDK + if (payload.NumBit == 552) + payload.NumBit = 528; + break; + } + + if (init_get_bits(&gb, &payload.Data[start], payload.NumBit - start * 8) < 0) + av_log(avctx, AV_LOG_ERROR, "Error initializing bitstream reader SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); + else { + ret = ff_hevc_decode_nal_sei(&gb, avctx, &sei, &ps, HEVC_NAL_SEI_PREFIX); + + if (ret < 0) + av_log(avctx, AV_LOG_WARNING, "Failed to parse SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); + else + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d Numbits %d\n", payload.Type, payload.NumBit); + } + } + + if (has_logged) { + av_log(avctx, AV_LOG_VERBOSE, "End reading SEI\n"); + } + + if (out 
&& out->frame) + return ff_hevc_set_side_data(avctx, &sei, NULL, out->frame); + + return 0; +} + +static int parse_sei_mpeg12(AVCodecContext* avctx, QSVContext* q, AVFrame* out) +{ + Mpeg1Context *mpeg_ctx = &q->mpeg_ctx; + mfxPayload payload = { 0, .Data = &q->payload_buffer[0], .BufSize = sizeof(q->payload_buffer) }; + mfxU64 ts; + int ret; + + while (1) { + int start; + + memset(payload.Data, 0, payload.BufSize); + ret = MFXVideoDECODE_GetPayload(q->session, &ts, &payload); + if (ret == MFX_ERR_NOT_ENOUGH_BUFFER) { + av_log(avctx, AV_LOG_WARNING, "Warning: Insufficient buffer on GetPayload(). Size: %"PRIu64" Needed: %d\n", sizeof(q->payload_buffer), payload.BufSize); + return 0; + } + if (ret != MFX_ERR_NONE) + return ret; + + if (payload.NumBit == 0 || payload.NumBit >= payload.BufSize * 8) + break; + + start = find_start_offset(payload.Data); + + start++; + + ff_mpeg_decode_user_data(avctx, mpeg_ctx, &payload.Data[start], (int)((payload.NumBit + 7) / 8) - start); + + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d Numbits %d start %d -> %.s\n", payload.Type, payload.NumBit, start, (char *)(&payload.Data[start])); + } + + if (!out) + return 0; + + if (mpeg_ctx->a53_buf_ref) { + + AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_A53_CC, mpeg_ctx->a53_buf_ref); + if (!sd) + av_buffer_unref(&mpeg_ctx->a53_buf_ref); + mpeg_ctx->a53_buf_ref = NULL; + } + + if (mpeg_ctx->has_stereo3d) { + AVStereo3D *stereo = av_stereo3d_create_side_data(out); + if (!stereo) + return AVERROR(ENOMEM); + + *stereo = mpeg_ctx->stereo3d; + mpeg_ctx->has_stereo3d = 0; + } + + if (mpeg_ctx->has_afd) { + AVFrameSideData *sd = av_frame_new_side_data(out, AV_FRAME_DATA_AFD, 1); + if (!sd) + return AVERROR(ENOMEM); + + *sd->data = mpeg_ctx->afd; + mpeg_ctx->has_afd = 0; + } + + return 0; +} static int qsv_decode(AVCodecContext *avctx, QSVContext *q, AVFrame *frame, int *got_frame, @@ -636,6 +851,8 @@ static int qsv_decode(AVCodecContext *avctx, QSVContext *q, insurf, &outsurf, sync); if (ret == MFX_WRN_DEVICE_BUSY) av_usleep(500); + else if (avctx->codec_id == AV_CODEC_ID_MPEG2VIDEO) + parse_sei_mpeg12(avctx, q, NULL); } while (ret == MFX_WRN_DEVICE_BUSY || ret == MFX_ERR_MORE_SURFACE); @@ -677,6 +894,23 @@ static int qsv_decode(AVCodecContext *avctx, QSVContext *q, return AVERROR_BUG; } + switch (avctx->codec_id) { + case AV_CODEC_ID_MPEG2VIDEO: + ret = parse_sei_mpeg12(avctx, q, out_frame->frame); + break; + case AV_CODEC_ID_H264: + ret = parse_sei_h264(avctx, q, out_frame->frame); + break; + case AV_CODEC_ID_HEVC: + ret = parse_sei_hevc(avctx, q, out_frame); + break; + default: + ret = 0; + } + + if (ret < 0) + av_log(avctx, AV_LOG_ERROR, "Error parsing SEI data: %d\n", ret); + out_frame->queued += 1; aframe = (QSVAsyncFrame){ sync, out_frame }; -- ffmpeg-codebot _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
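All three parse_sei_* helpers above are built around the same payload drain loop; stripped of the codec-specific parsing and the MSDK bug workarounds, the core pattern looks roughly like the sketch below (drain_payloads is a hypothetical name, error reporting omitted):

#include <string.h>
#include <mfxvideo.h>

/* Simplified drain loop: MFXVideoDECODE_GetPayload() hands back one SEI or
 * user-data payload per call; a NumBit of 0 (or one that would not fit the
 * buffer) means there is nothing left for the current frame. */
static void drain_payloads(mfxSession session, mfxU8 *buf, mfxU16 buf_size)
{
    mfxPayload payload = { .Data = buf, .BufSize = buf_size };
    mfxU64 ts;

    for (;;) {
        memset(payload.Data, 0, payload.BufSize);
        if (MFXVideoDECODE_GetPayload(session, &ts, &payload) != MFX_ERR_NONE)
            break;
        if (payload.NumBit == 0 || payload.NumBit >= payload.BufSize * 8)
            break;
        /* payload.Type and payload.Data (after the start-code offset found
         * by find_start_offset()) would be handed to the per-codec SEI
         * parser here. */
    }
}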
* Re: [FFmpeg-devel] [PATCH v4 6/6] avcodec/qsvdec: Implement SEI parsing for QSV decoders 2022-06-26 23:41 ` [FFmpeg-devel] [PATCH v4 6/6] avcodec/qsvdec: Implement SEI parsing for QSV decoders softworkz @ 2022-06-28 4:16 ` Andreas Rheinhardt 2022-06-28 5:25 ` Soft Works 0 siblings, 1 reply; 65+ messages in thread From: Andreas Rheinhardt @ 2022-06-28 4:16 UTC (permalink / raw) To: ffmpeg-devel softworkz: > From: softworkz <softworkz@hotmail.com> > > Signed-off-by: softworkz <softworkz@hotmail.com> > --- > libavcodec/qsvdec.c | 234 ++++++++++++++++++++++++++++++++++++++++++++ > 1 file changed, 234 insertions(+) > > diff --git a/libavcodec/qsvdec.c b/libavcodec/qsvdec.c > index 5fc5bed4c8..e854f363ec 100644 > --- a/libavcodec/qsvdec.c > +++ b/libavcodec/qsvdec.c > @@ -49,6 +49,12 @@ > #include "hwconfig.h" > #include "qsv.h" > #include "qsv_internal.h" > +#include "h264dec.h" > +#include "h264_sei.h" > +#include "hevcdec.h" > +#include "hevc_ps.h" > +#include "hevc_sei.h" > +#include "mpeg12.h" > > static const AVRational mfx_tb = { 1, 90000 }; > > @@ -60,6 +66,8 @@ static const AVRational mfx_tb = { 1, 90000 }; > AV_NOPTS_VALUE : pts_tb.num ? \ > av_rescale_q(mfx_pts, mfx_tb, pts_tb) : mfx_pts) > > +#define PAYLOAD_BUFFER_SIZE 65535 > + > typedef struct QSVAsyncFrame { > mfxSyncPoint *sync; > QSVFrame *frame; > @@ -101,6 +109,9 @@ typedef struct QSVContext { > > mfxExtBuffer **ext_buffers; > int nb_ext_buffers; > + > + mfxU8 payload_buffer[PAYLOAD_BUFFER_SIZE]; > + Mpeg1Context mpeg_ctx; > } QSVContext; > > static const AVCodecHWConfigInternal *const qsv_hw_configs[] = { > @@ -599,6 +610,210 @@ static int qsv_export_film_grain(AVCodecContext *avctx, mfxExtAV1FilmGrainParam > return 0; > } > #endif > +static int find_start_offset(mfxU8 data[4]) > +{ > + if (data[0] == 0 && data[1] == 0 && data[2] == 1) > + return 3; > + > + if (data[0] == 0 && data[1] == 0 && data[2] == 0 && data[3] == 1) > + return 4; > + > + return 0; > +} > + > +static int parse_sei_h264(AVCodecContext* avctx, QSVContext* q, AVFrame* out) > +{ > + H264SEIContext sei = { 0 }; > + GetBitContext gb = { 0 }; > + mfxPayload payload = { 0, .Data = &q->payload_buffer[0], .BufSize = sizeof(q->payload_buffer) }; > + mfxU64 ts; > + int ret; > + > + while (1) { > + int start; > + memset(payload.Data, 0, payload.BufSize); > + > + ret = MFXVideoDECODE_GetPayload(q->session, &ts, &payload); > + if (ret == MFX_ERR_NOT_ENOUGH_BUFFER) { > + av_log(avctx, AV_LOG_WARNING, "Warning: Insufficient buffer on GetPayload(). 
Size: %"PRIu64" Needed: %d\n", sizeof(q->payload_buffer), payload.BufSize); > + return 0; > + } > + if (ret != MFX_ERR_NONE) > + return ret; > + > + if (payload.NumBit == 0 || payload.NumBit >= payload.BufSize * 8) > + break; > + > + start = find_start_offset(payload.Data); > + > + switch (payload.Type) { > + case SEI_TYPE_BUFFERING_PERIOD: > + case SEI_TYPE_PIC_TIMING: > + continue; > + } > + > + if (init_get_bits(&gb, &payload.Data[start], payload.NumBit - start * 8) < 0) > + av_log(avctx, AV_LOG_ERROR, "Error initializing bitstream reader SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); > + else { > + ret = ff_h264_sei_decode(&sei, &gb, NULL, avctx); > + > + if (ret < 0) > + av_log(avctx, AV_LOG_WARNING, "Failed to parse SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); > + else > + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d Numbits %d\n", payload.Type, payload.NumBit); > + } > + } > + > + if (out) > + return ff_h264_export_frame_props(avctx, &sei, NULL, out); > + > + return 0; > +} > + > +static int parse_sei_hevc(AVCodecContext* avctx, QSVContext* q, QSVFrame* out) > +{ > + HEVCSEI sei = { 0 }; > + HEVCParamSets ps = { 0 }; > + GetBitContext gb = { 0 }; > + mfxPayload payload = { 0, .Data = &q->payload_buffer[0], .BufSize = sizeof(q->payload_buffer) }; > + mfxFrameSurface1 *surface = &out->surface; > + mfxU64 ts; > + int ret, has_logged = 0; > + > + while (1) { > + int start; > + memset(payload.Data, 0, payload.BufSize); > + > + ret = MFXVideoDECODE_GetPayload(q->session, &ts, &payload); > + if (ret == MFX_ERR_NOT_ENOUGH_BUFFER) { > + av_log(avctx, AV_LOG_WARNING, "Warning: Insufficient buffer on GetPayload(). Size: %"PRIu64" Needed: %d\n", sizeof(q->payload_buffer), payload.BufSize); > + return 0; > + } > + if (ret != MFX_ERR_NONE) > + return ret; > + > + if (payload.NumBit == 0 || payload.NumBit >= payload.BufSize * 8) > + break; > + > + if (!has_logged) { > + has_logged = 1; > + av_log(avctx, AV_LOG_VERBOSE, "-----------------------------------------\n"); > + av_log(avctx, AV_LOG_VERBOSE, "Start reading SEI - payload timestamp: %llu - surface timestamp: %llu\n", ts, surface->Data.TimeStamp); > + } > + > + if (ts != surface->Data.TimeStamp) { > + av_log(avctx, AV_LOG_WARNING, "GetPayload timestamp (%llu) does not match surface timestamp: (%llu)\n", ts, surface->Data.TimeStamp); > + } > + > + start = find_start_offset(payload.Data); > + > + av_log(avctx, AV_LOG_VERBOSE, "parsing SEI type: %3d Numbits %3d Start: %d\n", payload.Type, payload.NumBit, start); > + > + switch (payload.Type) { > + case SEI_TYPE_BUFFERING_PERIOD: > + case SEI_TYPE_PIC_TIMING: > + continue; > + case SEI_TYPE_MASTERING_DISPLAY_COLOUR_VOLUME: > + // There seems to be a bug in MSDK > + payload.NumBit -= 8; > + > + break; > + case SEI_TYPE_CONTENT_LIGHT_LEVEL_INFO: > + // There seems to be a bug in MSDK > + payload.NumBit = 48; > + > + break; > + case SEI_TYPE_USER_DATA_REGISTERED_ITU_T_T35: > + // There seems to be a bug in MSDK > + if (payload.NumBit == 552) > + payload.NumBit = 528; > + break; > + } > + > + if (init_get_bits(&gb, &payload.Data[start], payload.NumBit - start * 8) < 0) > + av_log(avctx, AV_LOG_ERROR, "Error initializing bitstream reader SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); > + else { > + ret = ff_hevc_decode_nal_sei(&gb, avctx, &sei, &ps, HEVC_NAL_SEI_PREFIX); > + > + if (ret < 0) > + av_log(avctx, AV_LOG_WARNING, "Failed to parse SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); > 
+ else > + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d Numbits %d\n", payload.Type, payload.NumBit); > + } > + } > + > + if (has_logged) { > + av_log(avctx, AV_LOG_VERBOSE, "End reading SEI\n"); > + } > + > + if (out && out->frame) > + return ff_hevc_set_side_data(avctx, &sei, NULL, out->frame); > + > + return 0; > +} > + > +static int parse_sei_mpeg12(AVCodecContext* avctx, QSVContext* q, AVFrame* out) > +{ > + Mpeg1Context *mpeg_ctx = &q->mpeg_ctx; > + mfxPayload payload = { 0, .Data = &q->payload_buffer[0], .BufSize = sizeof(q->payload_buffer) }; > + mfxU64 ts; > + int ret; > + > + while (1) { > + int start; > + > + memset(payload.Data, 0, payload.BufSize); > + ret = MFXVideoDECODE_GetPayload(q->session, &ts, &payload); > + if (ret == MFX_ERR_NOT_ENOUGH_BUFFER) { > + av_log(avctx, AV_LOG_WARNING, "Warning: Insufficient buffer on GetPayload(). Size: %"PRIu64" Needed: %d\n", sizeof(q->payload_buffer), payload.BufSize); > + return 0; > + } > + if (ret != MFX_ERR_NONE) > + return ret; > + > + if (payload.NumBit == 0 || payload.NumBit >= payload.BufSize * 8) > + break; > + > + start = find_start_offset(payload.Data); > + > + start++; > + > + ff_mpeg_decode_user_data(avctx, mpeg_ctx, &payload.Data[start], (int)((payload.NumBit + 7) / 8) - start); > + > + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d Numbits %d start %d -> %.s\n", payload.Type, payload.NumBit, start, (char *)(&payload.Data[start])); > + } > + > + if (!out) > + return 0; > + > + if (mpeg_ctx->a53_buf_ref) { > + > + AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_A53_CC, mpeg_ctx->a53_buf_ref); > + if (!sd) > + av_buffer_unref(&mpeg_ctx->a53_buf_ref); > + mpeg_ctx->a53_buf_ref = NULL; > + } > + > + if (mpeg_ctx->has_stereo3d) { > + AVStereo3D *stereo = av_stereo3d_create_side_data(out); > + if (!stereo) > + return AVERROR(ENOMEM); > + > + *stereo = mpeg_ctx->stereo3d; > + mpeg_ctx->has_stereo3d = 0; > + } > + > + if (mpeg_ctx->has_afd) { > + AVFrameSideData *sd = av_frame_new_side_data(out, AV_FRAME_DATA_AFD, 1); > + if (!sd) > + return AVERROR(ENOMEM); > + > + *sd->data = mpeg_ctx->afd; > + mpeg_ctx->has_afd = 0; > + } > + > + return 0; > +} > > static int qsv_decode(AVCodecContext *avctx, QSVContext *q, > AVFrame *frame, int *got_frame, > @@ -636,6 +851,8 @@ static int qsv_decode(AVCodecContext *avctx, QSVContext *q, > insurf, &outsurf, sync); > if (ret == MFX_WRN_DEVICE_BUSY) > av_usleep(500); > + else if (avctx->codec_id == AV_CODEC_ID_MPEG2VIDEO) > + parse_sei_mpeg12(avctx, q, NULL); > > } while (ret == MFX_WRN_DEVICE_BUSY || ret == MFX_ERR_MORE_SURFACE); > > @@ -677,6 +894,23 @@ static int qsv_decode(AVCodecContext *avctx, QSVContext *q, > return AVERROR_BUG; > } > > + switch (avctx->codec_id) { > + case AV_CODEC_ID_MPEG2VIDEO: > + ret = parse_sei_mpeg12(avctx, q, out_frame->frame); > + break; > + case AV_CODEC_ID_H264: > + ret = parse_sei_h264(avctx, q, out_frame->frame); > + break; > + case AV_CODEC_ID_HEVC: > + ret = parse_sei_hevc(avctx, q, out_frame); > + break; > + default: > + ret = 0; > + } > + > + if (ret < 0) > + av_log(avctx, AV_LOG_ERROR, "Error parsing SEI data: %d\n", ret); > + > out_frame->queued += 1; > > aframe = (QSVAsyncFrame){ sync, out_frame }; You completely forgot necessary changes to configure/the Makefile. The way you are doing it here means that you basically have the qsv decoders to rely on the H.264/HEVC/MPEG-1/2 decoders which is way too much. 
- Andreas _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
* Re: [FFmpeg-devel] [PATCH v4 6/6] avcodec/qsvdec: Implement SEI parsing for QSV decoders 2022-06-28 4:16 ` Andreas Rheinhardt @ 2022-06-28 5:25 ` Soft Works 0 siblings, 0 replies; 65+ messages in thread From: Soft Works @ 2022-06-28 5:25 UTC (permalink / raw) To: FFmpeg development discussions and patches > -----Original Message----- > From: ffmpeg-devel <ffmpeg-devel-bounces@ffmpeg.org> On Behalf Of > Andreas Rheinhardt > Sent: Tuesday, June 28, 2022 6:17 AM > To: ffmpeg-devel@ffmpeg.org > Subject: Re: [FFmpeg-devel] [PATCH v4 6/6] avcodec/qsvdec: Implement > SEI parsing for QSV decoders > > softworkz: > > From: softworkz <softworkz@hotmail.com> > > > > Signed-off-by: softworkz <softworkz@hotmail.com> > > --- > > libavcodec/qsvdec.c | 234 > ++++++++++++++++++++++++++++++++++++++++++++ > > 1 file changed, 234 insertions(+) > > > > diff --git a/libavcodec/qsvdec.c b/libavcodec/qsvdec.c > > index 5fc5bed4c8..e854f363ec 100644 > > --- a/libavcodec/qsvdec.c > > +++ b/libavcodec/qsvdec.c > > @@ -49,6 +49,12 @@ > > #include "hwconfig.h" > > #include "qsv.h" > > #include "qsv_internal.h" > > +#include "h264dec.h" > > +#include "h264_sei.h" > > +#include "hevcdec.h" > > +#include "hevc_ps.h" > > +#include "hevc_sei.h" > > +#include "mpeg12.h" > > > > static const AVRational mfx_tb = { 1, 90000 }; > > > > @@ -60,6 +66,8 @@ static const AVRational mfx_tb = { 1, 90000 }; > > AV_NOPTS_VALUE : pts_tb.num ? \ > > av_rescale_q(mfx_pts, mfx_tb, pts_tb) : mfx_pts) > > > > +#define PAYLOAD_BUFFER_SIZE 65535 > > + > > typedef struct QSVAsyncFrame { > > mfxSyncPoint *sync; > > QSVFrame *frame; > > @@ -101,6 +109,9 @@ typedef struct QSVContext { > > > > mfxExtBuffer **ext_buffers; > > int nb_ext_buffers; > > + > > + mfxU8 payload_buffer[PAYLOAD_BUFFER_SIZE]; > > + Mpeg1Context mpeg_ctx; > > } QSVContext; > > > > static const AVCodecHWConfigInternal *const qsv_hw_configs[] = { > > @@ -599,6 +610,210 @@ static int > qsv_export_film_grain(AVCodecContext *avctx, mfxExtAV1FilmGrainParam > > return 0; > > } > > #endif > > +static int find_start_offset(mfxU8 data[4]) > > +{ > > + if (data[0] == 0 && data[1] == 0 && data[2] == 1) > > + return 3; > > + > > + if (data[0] == 0 && data[1] == 0 && data[2] == 0 && data[3] == > 1) > > + return 4; > > + > > + return 0; > > +} > > + > > +static int parse_sei_h264(AVCodecContext* avctx, QSVContext* q, > AVFrame* out) > > +{ > > + H264SEIContext sei = { 0 }; > > + GetBitContext gb = { 0 }; > > + mfxPayload payload = { 0, .Data = &q->payload_buffer[0], > .BufSize = sizeof(q->payload_buffer) }; > > + mfxU64 ts; > > + int ret; > > + > > + while (1) { > > + int start; > > + memset(payload.Data, 0, payload.BufSize); > > + > > + ret = MFXVideoDECODE_GetPayload(q->session, &ts, > &payload); > > + if (ret == MFX_ERR_NOT_ENOUGH_BUFFER) { > > + av_log(avctx, AV_LOG_WARNING, "Warning: Insufficient > buffer on GetPayload(). 
Size: %"PRIu64" Needed: %d\n", sizeof(q- > >payload_buffer), payload.BufSize); > > + return 0; > > + } > > + if (ret != MFX_ERR_NONE) > > + return ret; > > + > > + if (payload.NumBit == 0 || payload.NumBit >= > payload.BufSize * 8) > > + break; > > + > > + start = find_start_offset(payload.Data); > > + > > + switch (payload.Type) { > > + case SEI_TYPE_BUFFERING_PERIOD: > > + case SEI_TYPE_PIC_TIMING: > > + continue; > > + } > > + > > + if (init_get_bits(&gb, &payload.Data[start], > payload.NumBit - start * 8) < 0) > > + av_log(avctx, AV_LOG_ERROR, "Error initializing > bitstream reader SEI type: %d Numbits %d error: %d\n", payload.Type, > payload.NumBit, ret); > > + else { > > + ret = ff_h264_sei_decode(&sei, &gb, NULL, avctx); > > + > > + if (ret < 0) > > + av_log(avctx, AV_LOG_WARNING, "Failed to parse SEI > type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, > ret); > > + else > > + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d > Numbits %d\n", payload.Type, payload.NumBit); > > + } > > + } > > + > > + if (out) > > + return ff_h264_export_frame_props(avctx, &sei, NULL, out); > > + > > + return 0; > > +} > > + > > +static int parse_sei_hevc(AVCodecContext* avctx, QSVContext* q, > QSVFrame* out) > > +{ > > + HEVCSEI sei = { 0 }; > > + HEVCParamSets ps = { 0 }; > > + GetBitContext gb = { 0 }; > > + mfxPayload payload = { 0, .Data = &q->payload_buffer[0], > .BufSize = sizeof(q->payload_buffer) }; > > + mfxFrameSurface1 *surface = &out->surface; > > + mfxU64 ts; > > + int ret, has_logged = 0; > > + > > + while (1) { > > + int start; > > + memset(payload.Data, 0, payload.BufSize); > > + > > + ret = MFXVideoDECODE_GetPayload(q->session, &ts, > &payload); > > + if (ret == MFX_ERR_NOT_ENOUGH_BUFFER) { > > + av_log(avctx, AV_LOG_WARNING, "Warning: Insufficient > buffer on GetPayload(). 
Size: %"PRIu64" Needed: %d\n", sizeof(q- > >payload_buffer), payload.BufSize); > > + return 0; > > + } > > + if (ret != MFX_ERR_NONE) > > + return ret; > > + > > + if (payload.NumBit == 0 || payload.NumBit >= > payload.BufSize * 8) > > + break; > > + > > + if (!has_logged) { > > + has_logged = 1; > > + av_log(avctx, AV_LOG_VERBOSE, "----------------------- > ------------------\n"); > > + av_log(avctx, AV_LOG_VERBOSE, "Start reading SEI - > payload timestamp: %llu - surface timestamp: %llu\n", ts, surface- > >Data.TimeStamp); > > + } > > + > > + if (ts != surface->Data.TimeStamp) { > > + av_log(avctx, AV_LOG_WARNING, "GetPayload timestamp > (%llu) does not match surface timestamp: (%llu)\n", ts, surface- > >Data.TimeStamp); > > + } > > + > > + start = find_start_offset(payload.Data); > > + > > + av_log(avctx, AV_LOG_VERBOSE, "parsing SEI type: %3d > Numbits %3d Start: %d\n", payload.Type, payload.NumBit, start); > > + > > + switch (payload.Type) { > > + case SEI_TYPE_BUFFERING_PERIOD: > > + case SEI_TYPE_PIC_TIMING: > > + continue; > > + case SEI_TYPE_MASTERING_DISPLAY_COLOUR_VOLUME: > > + // There seems to be a bug in MSDK > > + payload.NumBit -= 8; > > + > > + break; > > + case SEI_TYPE_CONTENT_LIGHT_LEVEL_INFO: > > + // There seems to be a bug in MSDK > > + payload.NumBit = 48; > > + > > + break; > > + case SEI_TYPE_USER_DATA_REGISTERED_ITU_T_T35: > > + // There seems to be a bug in MSDK > > + if (payload.NumBit == 552) > > + payload.NumBit = 528; > > + break; > > + } > > + > > + if (init_get_bits(&gb, &payload.Data[start], > payload.NumBit - start * 8) < 0) > > + av_log(avctx, AV_LOG_ERROR, "Error initializing > bitstream reader SEI type: %d Numbits %d error: %d\n", payload.Type, > payload.NumBit, ret); > > + else { > > + ret = ff_hevc_decode_nal_sei(&gb, avctx, &sei, &ps, > HEVC_NAL_SEI_PREFIX); > > + > > + if (ret < 0) > > + av_log(avctx, AV_LOG_WARNING, "Failed to parse SEI > type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, > ret); > > + else > > + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d > Numbits %d\n", payload.Type, payload.NumBit); > > + } > > + } > > + > > + if (has_logged) { > > + av_log(avctx, AV_LOG_VERBOSE, "End reading SEI\n"); > > + } > > + > > + if (out && out->frame) > > + return ff_hevc_set_side_data(avctx, &sei, NULL, out- > >frame); > > + > > + return 0; > > +} > > + > > +static int parse_sei_mpeg12(AVCodecContext* avctx, QSVContext* q, > AVFrame* out) > > +{ > > + Mpeg1Context *mpeg_ctx = &q->mpeg_ctx; > > + mfxPayload payload = { 0, .Data = &q->payload_buffer[0], > .BufSize = sizeof(q->payload_buffer) }; > > + mfxU64 ts; > > + int ret; > > + > > + while (1) { > > + int start; > > + > > + memset(payload.Data, 0, payload.BufSize); > > + ret = MFXVideoDECODE_GetPayload(q->session, &ts, > &payload); > > + if (ret == MFX_ERR_NOT_ENOUGH_BUFFER) { > > + av_log(avctx, AV_LOG_WARNING, "Warning: Insufficient > buffer on GetPayload(). 
Size: %"PRIu64" Needed: %d\n", sizeof(q- > >payload_buffer), payload.BufSize); > > + return 0; > > + } > > + if (ret != MFX_ERR_NONE) > > + return ret; > > + > > + if (payload.NumBit == 0 || payload.NumBit >= > payload.BufSize * 8) > > + break; > > + > > + start = find_start_offset(payload.Data); > > + > > + start++; > > + > > + ff_mpeg_decode_user_data(avctx, mpeg_ctx, > &payload.Data[start], (int)((payload.NumBit + 7) / 8) - start); > > + > > + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d Numbits > %d start %d -> %.s\n", payload.Type, payload.NumBit, start, (char > *)(&payload.Data[start])); > > + } > > + > > + if (!out) > > + return 0; > > + > > + if (mpeg_ctx->a53_buf_ref) { > > + > > + AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, > AV_FRAME_DATA_A53_CC, mpeg_ctx->a53_buf_ref); > > + if (!sd) > > + av_buffer_unref(&mpeg_ctx->a53_buf_ref); > > + mpeg_ctx->a53_buf_ref = NULL; > > + } > > + > > + if (mpeg_ctx->has_stereo3d) { > > + AVStereo3D *stereo = av_stereo3d_create_side_data(out); > > + if (!stereo) > > + return AVERROR(ENOMEM); > > + > > + *stereo = mpeg_ctx->stereo3d; > > + mpeg_ctx->has_stereo3d = 0; > > + } > > + > > + if (mpeg_ctx->has_afd) { > > + AVFrameSideData *sd = av_frame_new_side_data(out, > AV_FRAME_DATA_AFD, 1); > > + if (!sd) > > + return AVERROR(ENOMEM); > > + > > + *sd->data = mpeg_ctx->afd; > > + mpeg_ctx->has_afd = 0; > > + } > > + > > + return 0; > > +} > > > > static int qsv_decode(AVCodecContext *avctx, QSVContext *q, > > AVFrame *frame, int *got_frame, > > @@ -636,6 +851,8 @@ static int qsv_decode(AVCodecContext *avctx, > QSVContext *q, > > insurf, &outsurf, > sync); > > if (ret == MFX_WRN_DEVICE_BUSY) > > av_usleep(500); > > + else if (avctx->codec_id == AV_CODEC_ID_MPEG2VIDEO) > > + parse_sei_mpeg12(avctx, q, NULL); > > > > } while (ret == MFX_WRN_DEVICE_BUSY || ret == > MFX_ERR_MORE_SURFACE); > > > > @@ -677,6 +894,23 @@ static int qsv_decode(AVCodecContext *avctx, > QSVContext *q, > > return AVERROR_BUG; > > } > > > > + switch (avctx->codec_id) { > > + case AV_CODEC_ID_MPEG2VIDEO: > > + ret = parse_sei_mpeg12(avctx, q, out_frame->frame); > > + break; > > + case AV_CODEC_ID_H264: > > + ret = parse_sei_h264(avctx, q, out_frame->frame); > > + break; > > + case AV_CODEC_ID_HEVC: > > + ret = parse_sei_hevc(avctx, q, out_frame); > > + break; > > + default: > > + ret = 0; > > + } > > + > > + if (ret < 0) > > + av_log(avctx, AV_LOG_ERROR, "Error parsing SEI data: > %d\n", ret); > > + > > out_frame->queued += 1; > > > > aframe = (QSVAsyncFrame){ sync, out_frame }; > > You completely forgot necessary changes to configure/the Makefile. > The > way you are doing it here means that you basically have the qsv > decoders > to rely on the H.264/HEVC/MPEG-1/2 decoders which is way too much. You are referring to the hypothetical case where one would disable one of the sw decoders while having a qsv decoder enabled, right? The SEI parsing code is not trivial and tied to those decoders (means using these contexts). It would be not a straightforward task to extract/separate those parts, that's why I preferred to just make that functionality accessible. I wouldn't mind when the QSV decoders would be dependent on those decoders being included in compilation, even more when considering that so many other hwaccel decoders have the same dependencies; DXVA2, D3D11VA, NVDEC, VAAPI. The question would be whether to not build the qsv decoders when the sw decoders are deselected or whether to build the sw decoder code even these are disabled. 
AFAIU, both would be possible? Or would you have a better idea? Thanks, sw _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
* Re: [FFmpeg-devel] [PATCH v4 0/6] Implement SEI parsing for QSV decoders 2022-06-26 23:41 ` [FFmpeg-devel] [PATCH v4 0/6] " ffmpegagent ` (5 preceding siblings ...) 2022-06-26 23:41 ` [FFmpeg-devel] [PATCH v4 6/6] avcodec/qsvdec: Implement SEI parsing for QSV decoders softworkz @ 2022-06-27 4:18 ` Xiang, Haihao 2022-07-01 20:48 ` [FFmpeg-devel] [PATCH v5 " ffmpegagent 7 siblings, 0 replies; 65+ messages in thread From: Xiang, Haihao @ 2022-06-27 4:18 UTC (permalink / raw) To: ffmpeg-devel; +Cc: softworkz, kierank, haihao.xiang-at-intel.com On Sun, 2022-06-26 at 23:41 +0000, ffmpegagent wrote: > Missing SEI information has always been a major drawback when using the QSV > decoders. I used to think that there's no chance to get at the data without > explicit implementation from the MSDK side (or doing something weird like > parsing in parallel). It turned out that there's a hardly known api method > that provides access to all SEI (h264/hevc) or user data (mpeg2video). > > This allows to get things like closed captions, frame packing, display > orientation, HDR data (mastering display, content light level, etc.) without > having to rely on those data being provided by the MSDK as extended buffers. > > The commit "Implement SEI parsing for QSV decoders" includes some hard-coded > workarounds for MSDK bugs which I reported: > https://github.com/Intel-Media-SDK/MediaSDK/issues/2597#issuecomment-1072795311 > > But that doesn't help. Those bugs exist and I'm sharing my workarounds, > which are empirically determined by testing a range of files. If someone is > interested, I can provide private access to a repository where we have been > testing this. Alternatively, I could also leave those workarounds out, and > just skip those SEI types. > > In a previous version of this patchset, there was a concern that payload > data might need to be re-ordered. Meanwhile I have researched this carefully > and the conclusion is that this is not required. 
> > My detailed analysis can be found here: > https://gist.github.com/softworkz/36c49586a8610813a32270ee3947a932 > > v3 > > * frame.h: clarify doc text for av_frame_copy_side_data() > > v2 > > * qsvdec: make error handling consistent and clear > * qsvdec: remove AV_CODEC_ID_MPEG1VIDEO constants > * hevcdec: rename function to ff_hevc_set_side_data(), add doc text > > v3 > > * qsvdec: fix c/p error > > softworkz (6): > avutil/frame: Add av_frame_copy_side_data() and > av_frame_remove_all_side_data() > avcodec/vpp_qsv: Copy side data from input to output frame > avcodec/mpeg12dec: make mpeg_decode_user_data() accessible > avcodec/hevcdec: make set_side_data() accessible > avcodec/h264dec: make h264_export_frame_props() accessible > avcodec/qsvdec: Implement SEI parsing for QSV decoders > > doc/APIchanges | 4 + > libavcodec/h264_slice.c | 98 ++++++++------- > libavcodec/h264dec.h | 2 + > libavcodec/hevcdec.c | 117 +++++++++--------- > libavcodec/hevcdec.h | 9 ++ > libavcodec/mpeg12.h | 28 +++++ > libavcodec/mpeg12dec.c | 40 +----- > libavcodec/qsvdec.c | 234 +++++++++++++++++++++++++++++++++++ > libavfilter/qsvvpp.c | 6 + > libavfilter/vf_overlay_qsv.c | 19 ++- > libavutil/frame.c | 67 ++++++---- > libavutil/frame.h | 32 +++++ > libavutil/version.h | 2 +- > 13 files changed, 485 insertions(+), 173 deletions(-) > > > base-commit: 6a82412bf33108111eb3f63076fd5a51349ae114 > Published-As: > https://github.com/ffstaging/FFmpeg/releases/tag/pr-ffstaging-31%2Fsoftworkz%2Fsubmit_qsv_sei-v4 > Fetch-It-Via: git fetch https://github.com/ffstaging/FFmpeg pr-ffstaging- > 31/softworkz/submit_qsv_sei-v4 > Pull-Request: https://github.com/ffstaging/FFmpeg/pull/31 > > Range-diff vs v3: > > 1: c442597a35 ! 1: 7656477360 avutil/frame: Add av_frame_copy_side_data() > and av_frame_remove_all_side_data() > @@ doc/APIchanges: libavutil: 2021-04-27 > > API changes, most recent first: > > -+2022-05-26 - xxxxxxxxx - lavu 57.26.100 - frame.h > ++2022-05-26 - xxxxxxxxx - lavu 57.28.100 - frame.h > + Add av_frame_remove_all_side_data(), av_frame_copy_side_data(), > + AV_FRAME_TRANSFER_SD_COPY, and AV_FRAME_TRANSFER_SD_FILTER. > + > - 2022-05-23 - xxxxxxxxx - lavu 57.25.100 - avutil.h > - Deprecate av_fopen_utf8() without replacement. > - > + 2022-06-12 - xxxxxxxxxx - lavf 59.25.100 - avio.h > + Add avio_vprintf(), similar to avio_printf() but allow to use it > + from within a function taking a variable argument list as input. > > ## libavutil/frame.c ## > @@ libavutil/frame.c: FF_ENABLE_DEPRECATION_WARNINGS > @@ libavutil/frame.h: int av_frame_copy(AVFrame *dst, const AVFrame > *src); > + * @param src a frame from which to copy the side data. > + * @param flags a combination of AV_FRAME_TRANSFER_SD_* > + * > -+ * @return >= 0 on success, a negative AVERROR on error. > ++ * @return 0 on success, a negative AVERROR on error. > + * > + * @note This function will create new references to side data buffers > in src, > + * unless the AV_FRAME_TRANSFER_SD_COPY flag is passed. 
> @@ libavutil/version.h > */ > > #define LIBAVUTIL_VERSION_MAJOR 57 > --#define LIBAVUTIL_VERSION_MINOR 25 > -+#define LIBAVUTIL_VERSION_MINOR 26 > +-#define LIBAVUTIL_VERSION_MINOR 27 > ++#define LIBAVUTIL_VERSION_MINOR 28 > #define LIBAVUTIL_VERSION_MICRO 100 > > #define LIBAVUTIL_VERSION_INT AV_VERSION_INT(LIBAVUTIL_VERSION_MAJOR, > \ > 2: 6f50d0bd57 = 2: 06976606c5 avcodec/vpp_qsv: Copy side data from input to > output frame > 3: f682b1d695 = 3: 320a8a535c avcodec/mpeg12dec: make > mpeg_decode_user_data() accessible > 4: 995d835233 = 4: e58ad6564f avcodec/hevcdec: make set_side_data() > accessible > 5: ac8dc06395 = 5: a57bfaebb9 avcodec/h264dec: make > h264_export_frame_props() accessible > 6: 27c3dded4d = 6: 3f2588563e avcodec/qsvdec: Implement SEI parsing for QSV > decoders Patchset LGTM, -Haihao _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
* [FFmpeg-devel] [PATCH v5 0/6] Implement SEI parsing for QSV decoders 2022-06-26 23:41 ` [FFmpeg-devel] [PATCH v4 0/6] " ffmpegagent ` (6 preceding siblings ...) 2022-06-27 4:18 ` [FFmpeg-devel] [PATCH v4 0/6] " Xiang, Haihao @ 2022-07-01 20:48 ` ffmpegagent 2022-07-01 20:48 ` [FFmpeg-devel] [PATCH v5 1/6] avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() softworkz ` (7 more replies) 7 siblings, 8 replies; 65+ messages in thread From: ffmpegagent @ 2022-07-01 20:48 UTC (permalink / raw) To: ffmpeg-devel; +Cc: Kieran Kunhya, softworkz, Xiang, Haihao, Andreas Rheinhardt Missing SEI information has always been a major drawback when using the QSV decoders. I used to think that there's no chance to get at the data without explicit implementation from the MSDK side (or doing something weird like parsing in parallel). It turned out that there's a hardly known api method that provides access to all SEI (h264/hevc) or user data (mpeg2video). This allows to get things like closed captions, frame packing, display orientation, HDR data (mastering display, content light level, etc.) without having to rely on those data being provided by the MSDK as extended buffers. The commit "Implement SEI parsing for QSV decoders" includes some hard-coded workarounds for MSDK bugs which I reported: https://github.com/Intel-Media-SDK/MediaSDK/issues/2597#issuecomment-1072795311 But that doesn't help. Those bugs exist and I'm sharing my workarounds, which are empirically determined by testing a range of files. If someone is interested, I can provide private access to a repository where we have been testing this. Alternatively, I could also leave those workarounds out, and just skip those SEI types. In a previous version of this patchset, there was a concern that payload data might need to be re-ordered. Meanwhile I have researched this carefully and the conclusion is that this is not required. 
My detailed analysis can be found here: https://gist.github.com/softworkz/36c49586a8610813a32270ee3947a932 v4 * add new dependencies in makefile Now, build still works when someone uses configure --disable-decoder=h264 --disable-decoder=hevc --disable-decoder=mpegvideo --disable-decoder=mpeg1video --disable-decoder=mpeg2video --enable-libmfx v3 * frame.h: clarify doc text for av_frame_copy_side_data() v2 * qsvdec: make error handling consistent and clear * qsvdec: remove AV_CODEC_ID_MPEG1VIDEO constants * hevcdec: rename function to ff_hevc_set_side_data(), add doc text v3 * qsvdec: fix c/p error softworkz (6): avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() avcodec/vpp_qsv: Copy side data from input to output frame avcodec/mpeg12dec: make mpeg_decode_user_data() accessible avcodec/hevcdec: make set_side_data() accessible avcodec/h264dec: make h264_export_frame_props() accessible avcodec/qsvdec: Implement SEI parsing for QSV decoders doc/APIchanges | 4 + libavcodec/Makefile | 6 +- libavcodec/h264_slice.c | 98 ++++++++------- libavcodec/h264dec.h | 2 + libavcodec/hevcdec.c | 117 +++++++++--------- libavcodec/hevcdec.h | 9 ++ libavcodec/hevcdsp.c | 4 + libavcodec/mpeg12.h | 28 +++++ libavcodec/mpeg12dec.c | 40 +----- libavcodec/qsvdec.c | 234 +++++++++++++++++++++++++++++++++++ libavfilter/qsvvpp.c | 6 + libavfilter/vf_overlay_qsv.c | 19 ++- libavutil/frame.c | 67 ++++++---- libavutil/frame.h | 32 +++++ libavutil/version.h | 2 +- 15 files changed, 494 insertions(+), 174 deletions(-) base-commit: 6a82412bf33108111eb3f63076fd5a51349ae114 Published-As: https://github.com/ffstaging/FFmpeg/releases/tag/pr-ffstaging-31%2Fsoftworkz%2Fsubmit_qsv_sei-v5 Fetch-It-Via: git fetch https://github.com/ffstaging/FFmpeg pr-ffstaging-31/softworkz/submit_qsv_sei-v5 Pull-Request: https://github.com/ffstaging/FFmpeg/pull/31 Range-diff vs v4: 1: 7656477360 = 1: 7656477360 avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() 2: 06976606c5 = 2: 06976606c5 avcodec/vpp_qsv: Copy side data from input to output frame 3: 320a8a535c = 3: 320a8a535c avcodec/mpeg12dec: make mpeg_decode_user_data() accessible 4: e58ad6564f = 4: e58ad6564f avcodec/hevcdec: make set_side_data() accessible 5: a57bfaebb9 = 5: 4c0b6eb4cb avcodec/h264dec: make h264_export_frame_props() accessible 6: 3f2588563e ! 
6: 19bc00be4d avcodec/qsvdec: Implement SEI parsing for QSV decoders @@ Commit message Signed-off-by: softworkz <softworkz@hotmail.com> + ## libavcodec/Makefile ## +@@ libavcodec/Makefile: OBJS-$(CONFIG_MSS34DSP) += mss34dsp.o + OBJS-$(CONFIG_PIXBLOCKDSP) += pixblockdsp.o + OBJS-$(CONFIG_QPELDSP) += qpeldsp.o + OBJS-$(CONFIG_QSV) += qsv.o +-OBJS-$(CONFIG_QSVDEC) += qsvdec.o ++OBJS-$(CONFIG_QSVDEC) += qsvdec.o h264_slice.o h264_cabac.o h264_cavlc.o \ ++ h264_direct.o h264_mb.o h264_picture.o h264_loopfilter.o \ ++ h264dec.o h264_refs.o cabac.o hevcdec.o hevc_refs.o \ ++ hevc_filter.o hevc_cabac.o hevc_mvs.o hevcpred.o hevcdsp.o \ ++ h274.o dovi_rpu.o mpeg12dec.o + OBJS-$(CONFIG_QSVENC) += qsvenc.o + OBJS-$(CONFIG_RANGECODER) += rangecoder.o + OBJS-$(CONFIG_RDFT) += rdft.o + + ## libavcodec/hevcdsp.c ## +@@ + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA + */ + ++#include "config_components.h" ++ + #include "hevcdsp.h" + + static const int8_t transform[32][32] = { +@@ libavcodec/hevcdsp.c: int i = 0; + break; + } + ++#if CONFIG_HEVC_DECODER + #if ARCH_AARCH64 + ff_hevc_dsp_init_aarch64(hevcdsp, bit_depth); + #elif ARCH_ARM +@@ libavcodec/hevcdsp.c: int i = 0; + #elif ARCH_LOONGARCH + ff_hevc_dsp_init_loongarch(hevcdsp, bit_depth); + #endif ++#endif + } + ## libavcodec/qsvdec.c ## @@ #include "hwconfig.h" -- ffmpeg-codebot _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
* [FFmpeg-devel] [PATCH v5 1/6] avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() 2022-07-01 20:48 ` [FFmpeg-devel] [PATCH v5 " ffmpegagent @ 2022-07-01 20:48 ` softworkz 2022-07-01 20:48 ` [FFmpeg-devel] [PATCH v5 2/6] avcodec/vpp_qsv: Copy side data from input to output frame softworkz ` (6 subsequent siblings) 7 siblings, 0 replies; 65+ messages in thread From: softworkz @ 2022-07-01 20:48 UTC (permalink / raw) To: ffmpeg-devel; +Cc: Kieran Kunhya, softworkz, Xiang, Haihao, Andreas Rheinhardt From: softworkz <softworkz@hotmail.com> Signed-off-by: softworkz <softworkz@hotmail.com> Signed-off-by: Anton Khirnov <anton@khirnov.net> --- doc/APIchanges | 4 +++ libavutil/frame.c | 67 +++++++++++++++++++++++++++------------------ libavutil/frame.h | 32 ++++++++++++++++++++++ libavutil/version.h | 2 +- 4 files changed, 78 insertions(+), 27 deletions(-) diff --git a/doc/APIchanges b/doc/APIchanges index 20b944933a..6b5bf61d85 100644 --- a/doc/APIchanges +++ b/doc/APIchanges @@ -14,6 +14,10 @@ libavutil: 2021-04-27 API changes, most recent first: +2022-05-26 - xxxxxxxxx - lavu 57.28.100 - frame.h + Add av_frame_remove_all_side_data(), av_frame_copy_side_data(), + AV_FRAME_TRANSFER_SD_COPY, and AV_FRAME_TRANSFER_SD_FILTER. + 2022-06-12 - xxxxxxxxxx - lavf 59.25.100 - avio.h Add avio_vprintf(), similar to avio_printf() but allow to use it from within a function taking a variable argument list as input. diff --git a/libavutil/frame.c b/libavutil/frame.c index 4c16488c66..5d34fde904 100644 --- a/libavutil/frame.c +++ b/libavutil/frame.c @@ -271,9 +271,45 @@ FF_ENABLE_DEPRECATION_WARNINGS return AVERROR(EINVAL); } +void av_frame_remove_all_side_data(AVFrame *frame) +{ + wipe_side_data(frame); +} + +int av_frame_copy_side_data(AVFrame* dst, const AVFrame* src, int flags) +{ + for (unsigned i = 0; i < src->nb_side_data; i++) { + const AVFrameSideData *sd_src = src->side_data[i]; + AVFrameSideData *sd_dst; + if ((flags & AV_FRAME_TRANSFER_SD_FILTER) && + sd_src->type == AV_FRAME_DATA_PANSCAN && + (src->width != dst->width || src->height != dst->height)) + continue; + if (flags & AV_FRAME_TRANSFER_SD_COPY) { + sd_dst = av_frame_new_side_data(dst, sd_src->type, + sd_src->size); + if (!sd_dst) { + wipe_side_data(dst); + return AVERROR(ENOMEM); + } + memcpy(sd_dst->data, sd_src->data, sd_src->size); + } else { + AVBufferRef *ref = av_buffer_ref(sd_src->buf); + sd_dst = av_frame_new_side_data_from_buf(dst, sd_src->type, ref); + if (!sd_dst) { + av_buffer_unref(&ref); + wipe_side_data(dst); + return AVERROR(ENOMEM); + } + } + av_dict_copy(&sd_dst->metadata, sd_src->metadata, 0); + } + return 0; +} + static int frame_copy_props(AVFrame *dst, const AVFrame *src, int force_copy) { - int ret, i; + int ret; dst->key_frame = src->key_frame; dst->pict_type = src->pict_type; @@ -309,31 +345,10 @@ static int frame_copy_props(AVFrame *dst, const AVFrame *src, int force_copy) av_dict_copy(&dst->metadata, src->metadata, 0); - for (i = 0; i < src->nb_side_data; i++) { - const AVFrameSideData *sd_src = src->side_data[i]; - AVFrameSideData *sd_dst; - if ( sd_src->type == AV_FRAME_DATA_PANSCAN - && (src->width != dst->width || src->height != dst->height)) - continue; - if (force_copy) { - sd_dst = av_frame_new_side_data(dst, sd_src->type, - sd_src->size); - if (!sd_dst) { - wipe_side_data(dst); - return AVERROR(ENOMEM); - } - memcpy(sd_dst->data, sd_src->data, sd_src->size); - } else { - AVBufferRef *ref = av_buffer_ref(sd_src->buf); - sd_dst = av_frame_new_side_data_from_buf(dst, sd_src->type, 
ref); - if (!sd_dst) { - av_buffer_unref(&ref); - wipe_side_data(dst); - return AVERROR(ENOMEM); - } - } - av_dict_copy(&sd_dst->metadata, sd_src->metadata, 0); - } + if ((ret = av_frame_copy_side_data(dst, src, + (force_copy ? AV_FRAME_TRANSFER_SD_COPY : 0) | + AV_FRAME_TRANSFER_SD_FILTER) < 0)) + return ret; ret = av_buffer_replace(&dst->opaque_ref, src->opaque_ref); ret |= av_buffer_replace(&dst->private_ref, src->private_ref); diff --git a/libavutil/frame.h b/libavutil/frame.h index 33fac2054c..f72b6fae71 100644 --- a/libavutil/frame.h +++ b/libavutil/frame.h @@ -850,6 +850,30 @@ int av_frame_copy(AVFrame *dst, const AVFrame *src); */ int av_frame_copy_props(AVFrame *dst, const AVFrame *src); + +/** + * Copy side data, rather than creating new references. + */ +#define AV_FRAME_TRANSFER_SD_COPY (1 << 0) +/** + * Filter out side data that does not match dst properties. + */ +#define AV_FRAME_TRANSFER_SD_FILTER (1 << 1) + +/** + * Copy all side-data from src to dst. + * + * @param dst a frame to which the side data should be copied. + * @param src a frame from which to copy the side data. + * @param flags a combination of AV_FRAME_TRANSFER_SD_* + * + * @return 0 on success, a negative AVERROR on error. + * + * @note This function will create new references to side data buffers in src, + * unless the AV_FRAME_TRANSFER_SD_COPY flag is passed. + */ +int av_frame_copy_side_data(AVFrame* dst, const AVFrame* src, int flags); + /** * Get the buffer reference a given data plane is stored in. * @@ -901,6 +925,14 @@ AVFrameSideData *av_frame_get_side_data(const AVFrame *frame, */ void av_frame_remove_side_data(AVFrame *frame, enum AVFrameSideDataType type); +/** + * Remove and free all side data instances. + * + * @param frame from which to remove all side data. + */ +void av_frame_remove_all_side_data(AVFrame *frame); + + /** * Flags for frame cropping. diff --git a/libavutil/version.h b/libavutil/version.h index 2e9e02dda8..87178e9e9a 100644 --- a/libavutil/version.h +++ b/libavutil/version.h @@ -79,7 +79,7 @@ */ #define LIBAVUTIL_VERSION_MAJOR 57 -#define LIBAVUTIL_VERSION_MINOR 27 +#define LIBAVUTIL_VERSION_MINOR 28 #define LIBAVUTIL_VERSION_MICRO 100 #define LIBAVUTIL_VERSION_INT AV_VERSION_INT(LIBAVUTIL_VERSION_MAJOR, \ -- ffmpeg-codebot _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
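A quick usage sketch for the two new helpers (the clone_side_data wrapper is hypothetical, not part of the patch): with AV_FRAME_TRANSFER_SD_COPY the side data payloads are deep-copied rather than re-referenced, and AV_FRAME_TRANSFER_SD_FILTER drops AV_FRAME_DATA_PANSCAN when the two frames differ in size.

#include "libavutil/frame.h"

/* Replace whatever side data dst currently carries with copies of
 * everything attached to src. */
static int clone_side_data(AVFrame *dst, const AVFrame *src)
{
    av_frame_remove_all_side_data(dst);
    return av_frame_copy_side_data(dst, src,
                                   AV_FRAME_TRANSFER_SD_COPY |
                                   AV_FRAME_TRANSFER_SD_FILTER);
}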
* [FFmpeg-devel] [PATCH v5 2/6] avcodec/vpp_qsv: Copy side data from input to output frame 2022-07-01 20:48 ` [FFmpeg-devel] [PATCH v5 " ffmpegagent 2022-07-01 20:48 ` [FFmpeg-devel] [PATCH v5 1/6] avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() softworkz @ 2022-07-01 20:48 ` softworkz 2022-07-01 20:48 ` [FFmpeg-devel] [PATCH v5 3/6] avcodec/mpeg12dec: make mpeg_decode_user_data() accessible softworkz ` (5 subsequent siblings) 7 siblings, 0 replies; 65+ messages in thread From: softworkz @ 2022-07-01 20:48 UTC (permalink / raw) To: ffmpeg-devel; +Cc: Kieran Kunhya, softworkz, Xiang, Haihao, Andreas Rheinhardt From: softworkz <softworkz@hotmail.com> Signed-off-by: softworkz <softworkz@hotmail.com> --- libavfilter/qsvvpp.c | 6 ++++++ libavfilter/vf_overlay_qsv.c | 19 +++++++++++++++---- 2 files changed, 21 insertions(+), 4 deletions(-) diff --git a/libavfilter/qsvvpp.c b/libavfilter/qsvvpp.c index 954f882637..f4bf628073 100644 --- a/libavfilter/qsvvpp.c +++ b/libavfilter/qsvvpp.c @@ -843,6 +843,12 @@ int ff_qsvvpp_filter_frame(QSVVPPContext *s, AVFilterLink *inlink, AVFrame *picr return AVERROR(EAGAIN); break; } + + av_frame_remove_all_side_data(out_frame->frame); + ret = av_frame_copy_side_data(out_frame->frame, in_frame->frame, 0); + if (ret < 0) + return ret; + out_frame->frame->pts = av_rescale_q(out_frame->surface.Data.TimeStamp, default_tb, outlink->time_base); diff --git a/libavfilter/vf_overlay_qsv.c b/libavfilter/vf_overlay_qsv.c index 7e76b39aa9..e15214dbf2 100644 --- a/libavfilter/vf_overlay_qsv.c +++ b/libavfilter/vf_overlay_qsv.c @@ -231,13 +231,24 @@ static int process_frame(FFFrameSync *fs) { AVFilterContext *ctx = fs->parent; QSVOverlayContext *s = fs->opaque; + AVFrame *frame0 = NULL; AVFrame *frame = NULL; - int ret = 0, i; + int ret = 0; - for (i = 0; i < ctx->nb_inputs; i++) { + for (unsigned i = 0; i < ctx->nb_inputs; i++) { ret = ff_framesync_get_frame(fs, i, &frame, 0); - if (ret == 0) - ret = ff_qsvvpp_filter_frame(s->qsv, ctx->inputs[i], frame); + + if (ret == 0) { + if (i == 0) + frame0 = frame; + else { + av_frame_remove_all_side_data(frame); + ret = av_frame_copy_side_data(frame, frame0, 0); + } + + ret = ret < 0 ? ret : ff_qsvvpp_filter_frame(s->qsv, ctx->inputs[i], frame); + } + if (ret < 0 && ret != AVERROR(EAGAIN)) break; } -- ffmpeg-codebot _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
* [FFmpeg-devel] [PATCH v5 3/6] avcodec/mpeg12dec: make mpeg_decode_user_data() accessible 2022-07-01 20:48 ` [FFmpeg-devel] [PATCH v5 " ffmpegagent 2022-07-01 20:48 ` [FFmpeg-devel] [PATCH v5 1/6] avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() softworkz 2022-07-01 20:48 ` [FFmpeg-devel] [PATCH v5 2/6] avcodec/vpp_qsv: Copy side data from input to output frame softworkz @ 2022-07-01 20:48 ` softworkz 2022-07-01 20:48 ` [FFmpeg-devel] [PATCH v5 4/6] avcodec/hevcdec: make set_side_data() accessible softworkz ` (4 subsequent siblings) 7 siblings, 0 replies; 65+ messages in thread From: softworkz @ 2022-07-01 20:48 UTC (permalink / raw) To: ffmpeg-devel; +Cc: Kieran Kunhya, softworkz, Xiang, Haihao, Andreas Rheinhardt From: softworkz <softworkz@hotmail.com> Signed-off-by: softworkz <softworkz@hotmail.com> --- libavcodec/mpeg12.h | 28 ++++++++++++++++++++++++++++ libavcodec/mpeg12dec.c | 40 +++++----------------------------------- 2 files changed, 33 insertions(+), 35 deletions(-) diff --git a/libavcodec/mpeg12.h b/libavcodec/mpeg12.h index e0406b32d9..84a829cdd3 100644 --- a/libavcodec/mpeg12.h +++ b/libavcodec/mpeg12.h @@ -23,6 +23,7 @@ #define AVCODEC_MPEG12_H #include "mpegvideo.h" +#include "libavutil/stereo3d.h" /* Start codes. */ #define SEQ_END_CODE 0x000001b7 @@ -34,6 +35,31 @@ #define EXT_START_CODE 0x000001b5 #define USER_START_CODE 0x000001b2 +typedef struct Mpeg1Context { + MpegEncContext mpeg_enc_ctx; + int mpeg_enc_ctx_allocated; /* true if decoding context allocated */ + int repeat_field; /* true if we must repeat the field */ + AVPanScan pan_scan; /* some temporary storage for the panscan */ + AVStereo3D stereo3d; + int has_stereo3d; + AVBufferRef *a53_buf_ref; + uint8_t afd; + int has_afd; + int slice_count; + unsigned aspect_ratio_info; + AVRational save_aspect; + int save_width, save_height, save_progressive_seq; + int rc_buffer_size; + AVRational frame_rate_ext; /* MPEG-2 specific framerate modificator */ + unsigned frame_rate_index; + int sync; /* Did we reach a sync point like a GOP/SEQ/KEYFrame? */ + int closed_gop; + int tmpgexs; + int first_slice; + int extradata_decoded; + int64_t timecode_frame_start; /*< GOP timecode frame start number, in non drop frame format */ +} Mpeg1Context; + void ff_mpeg12_common_init(MpegEncContext *s); void ff_mpeg1_clean_buffers(MpegEncContext *s); @@ -45,4 +71,6 @@ void ff_mpeg12_find_best_frame_rate(AVRational frame_rate, int *code, int *ext_n, int *ext_d, int nonstandard); +void ff_mpeg_decode_user_data(AVCodecContext *avctx, Mpeg1Context *s1, const uint8_t *p, int buf_size); + #endif /* AVCODEC_MPEG12_H */ diff --git a/libavcodec/mpeg12dec.c b/libavcodec/mpeg12dec.c index e9bde48f7a..11d2b58185 100644 --- a/libavcodec/mpeg12dec.c +++ b/libavcodec/mpeg12dec.c @@ -58,31 +58,6 @@ #define A53_MAX_CC_COUNT 2000 -typedef struct Mpeg1Context { - MpegEncContext mpeg_enc_ctx; - int mpeg_enc_ctx_allocated; /* true if decoding context allocated */ - int repeat_field; /* true if we must repeat the field */ - AVPanScan pan_scan; /* some temporary storage for the panscan */ - AVStereo3D stereo3d; - int has_stereo3d; - AVBufferRef *a53_buf_ref; - uint8_t afd; - int has_afd; - int slice_count; - unsigned aspect_ratio_info; - AVRational save_aspect; - int save_width, save_height, save_progressive_seq; - int rc_buffer_size; - AVRational frame_rate_ext; /* MPEG-2 specific framerate modificator */ - unsigned frame_rate_index; - int sync; /* Did we reach a sync point like a GOP/SEQ/KEYFrame? 
*/ - int closed_gop; - int tmpgexs; - int first_slice; - int extradata_decoded; - int64_t timecode_frame_start; /*< GOP timecode frame start number, in non drop frame format */ -} Mpeg1Context; - #define MB_TYPE_ZERO_MV 0x20000000 static const uint32_t ptype2mb_type[7] = { @@ -2198,11 +2173,9 @@ static int vcr2_init_sequence(AVCodecContext *avctx) return 0; } -static int mpeg_decode_a53_cc(AVCodecContext *avctx, +static int mpeg_decode_a53_cc(AVCodecContext *avctx, Mpeg1Context *s1, const uint8_t *p, int buf_size) { - Mpeg1Context *s1 = avctx->priv_data; - if (buf_size >= 6 && p[0] == 'G' && p[1] == 'A' && p[2] == '9' && p[3] == '4' && p[4] == 3 && (p[5] & 0x40)) { @@ -2333,12 +2306,9 @@ static int mpeg_decode_a53_cc(AVCodecContext *avctx, return 0; } -static void mpeg_decode_user_data(AVCodecContext *avctx, - const uint8_t *p, int buf_size) +void ff_mpeg_decode_user_data(AVCodecContext *avctx, Mpeg1Context *s1, const uint8_t *p, int buf_size) { - Mpeg1Context *s = avctx->priv_data; const uint8_t *buf_end = p + buf_size; - Mpeg1Context *s1 = avctx->priv_data; #if 0 int i; @@ -2352,7 +2322,7 @@ static void mpeg_decode_user_data(AVCodecContext *avctx, int i; for(i=0; i<20; i++) if (!memcmp(p+i, "\0TMPGEXS\0", 9)){ - s->tmpgexs= 1; + s1->tmpgexs= 1; } } /* we parse the DTG active format information */ @@ -2398,7 +2368,7 @@ static void mpeg_decode_user_data(AVCodecContext *avctx, break; } } - } else if (mpeg_decode_a53_cc(avctx, p, buf_size)) { + } else if (mpeg_decode_a53_cc(avctx, s1, p, buf_size)) { return; } } @@ -2590,7 +2560,7 @@ static int decode_chunks(AVCodecContext *avctx, AVFrame *picture, } break; case USER_START_CODE: - mpeg_decode_user_data(avctx, buf_ptr, input_size); + ff_mpeg_decode_user_data(avctx, s, buf_ptr, input_size); break; case GOP_START_CODE: if (last_code == 0) { -- ffmpeg-codebot _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
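For reference, a minimal sketch of how the exported parser is consumed from outside mpeg12dec.c (this mirrors what patch 6/6 does in qsvdec.c; the parse_user_data wrapper name is hypothetical): the caller owns a zero-initialized Mpeg1Context, feeds raw user-data bytes into it, and then reads the results back out of the context fields.

#include <stdint.h>
#include "libavcodec/mpeg12.h"

/* Hypothetical external caller: ctx must be zero-initialized once and kept
 * alive across calls; ff_mpeg_decode_user_data() fills its a53_buf_ref,
 * stereo3d/has_stereo3d and afd/has_afd fields as payloads arrive. */
static void parse_user_data(AVCodecContext *avctx, Mpeg1Context *ctx,
                            const uint8_t *data, int size)
{
    ff_mpeg_decode_user_data(avctx, ctx, data, size);
}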
* [FFmpeg-devel] [PATCH v5 4/6] avcodec/hevcdec: make set_side_data() accessible 2022-07-01 20:48 ` [FFmpeg-devel] [PATCH v5 " ffmpegagent ` (2 preceding siblings ...) 2022-07-01 20:48 ` [FFmpeg-devel] [PATCH v5 3/6] avcodec/mpeg12dec: make mpeg_decode_user_data() accessible softworkz @ 2022-07-01 20:48 ` softworkz 2022-07-01 20:48 ` [FFmpeg-devel] [PATCH v5 5/6] avcodec/h264dec: make h264_export_frame_props() accessible softworkz ` (3 subsequent siblings) 7 siblings, 0 replies; 65+ messages in thread From: softworkz @ 2022-07-01 20:48 UTC (permalink / raw) To: ffmpeg-devel; +Cc: Kieran Kunhya, softworkz, Xiang, Haihao, Andreas Rheinhardt From: softworkz <softworkz@hotmail.com> Signed-off-by: softworkz <softworkz@hotmail.com> --- libavcodec/hevcdec.c | 117 +++++++++++++++++++++---------------------- libavcodec/hevcdec.h | 9 ++++ 2 files changed, 67 insertions(+), 59 deletions(-) diff --git a/libavcodec/hevcdec.c b/libavcodec/hevcdec.c index e84c30dd13..b4d8db8c6b 100644 --- a/libavcodec/hevcdec.c +++ b/libavcodec/hevcdec.c @@ -2726,23 +2726,22 @@ error: return res; } -static int set_side_data(HEVCContext *s) +int ff_hevc_set_side_data(AVCodecContext *logctx, HEVCSEI *sei, HEVCContext *s, AVFrame *out) { - AVFrame *out = s->ref->frame; - int ret; + int ret = 0; - if (s->sei.frame_packing.present && - s->sei.frame_packing.arrangement_type >= 3 && - s->sei.frame_packing.arrangement_type <= 5 && - s->sei.frame_packing.content_interpretation_type > 0 && - s->sei.frame_packing.content_interpretation_type < 3) { + if (sei->frame_packing.present && + sei->frame_packing.arrangement_type >= 3 && + sei->frame_packing.arrangement_type <= 5 && + sei->frame_packing.content_interpretation_type > 0 && + sei->frame_packing.content_interpretation_type < 3) { AVStereo3D *stereo = av_stereo3d_create_side_data(out); if (!stereo) return AVERROR(ENOMEM); - switch (s->sei.frame_packing.arrangement_type) { + switch (sei->frame_packing.arrangement_type) { case 3: - if (s->sei.frame_packing.quincunx_subsampling) + if (sei->frame_packing.quincunx_subsampling) stereo->type = AV_STEREO3D_SIDEBYSIDE_QUINCUNX; else stereo->type = AV_STEREO3D_SIDEBYSIDE; @@ -2755,21 +2754,21 @@ static int set_side_data(HEVCContext *s) break; } - if (s->sei.frame_packing.content_interpretation_type == 2) + if (sei->frame_packing.content_interpretation_type == 2) stereo->flags = AV_STEREO3D_FLAG_INVERT; - if (s->sei.frame_packing.arrangement_type == 5) { - if (s->sei.frame_packing.current_frame_is_frame0_flag) + if (sei->frame_packing.arrangement_type == 5) { + if (sei->frame_packing.current_frame_is_frame0_flag) stereo->view = AV_STEREO3D_VIEW_LEFT; else stereo->view = AV_STEREO3D_VIEW_RIGHT; } } - if (s->sei.display_orientation.present && - (s->sei.display_orientation.anticlockwise_rotation || - s->sei.display_orientation.hflip || s->sei.display_orientation.vflip)) { - double angle = s->sei.display_orientation.anticlockwise_rotation * 360 / (double) (1 << 16); + if (sei->display_orientation.present && + (sei->display_orientation.anticlockwise_rotation || + sei->display_orientation.hflip || sei->display_orientation.vflip)) { + double angle = sei->display_orientation.anticlockwise_rotation * 360 / (double) (1 << 16); AVFrameSideData *rotation = av_frame_new_side_data(out, AV_FRAME_DATA_DISPLAYMATRIX, sizeof(int32_t) * 9); @@ -2788,17 +2787,17 @@ static int set_side_data(HEVCContext *s) * (1 - 2 * !!s->sei.display_orientation.vflip); av_display_rotation_set((int32_t *)rotation->data, angle); av_display_matrix_flip((int32_t *)rotation->data, 
- s->sei.display_orientation.hflip, - s->sei.display_orientation.vflip); + sei->display_orientation.hflip, + sei->display_orientation.vflip); } // Decrement the mastering display flag when IRAP frame has no_rasl_output_flag=1 // so the side data persists for the entire coded video sequence. - if (s->sei.mastering_display.present > 0 && + if (s && sei->mastering_display.present > 0 && IS_IRAP(s) && s->no_rasl_output_flag) { - s->sei.mastering_display.present--; + sei->mastering_display.present--; } - if (s->sei.mastering_display.present) { + if (sei->mastering_display.present) { // HEVC uses a g,b,r ordering, which we convert to a more natural r,g,b const int mapping[3] = {2, 0, 1}; const int chroma_den = 50000; @@ -2811,25 +2810,25 @@ static int set_side_data(HEVCContext *s) for (i = 0; i < 3; i++) { const int j = mapping[i]; - metadata->display_primaries[i][0].num = s->sei.mastering_display.display_primaries[j][0]; + metadata->display_primaries[i][0].num = sei->mastering_display.display_primaries[j][0]; metadata->display_primaries[i][0].den = chroma_den; - metadata->display_primaries[i][1].num = s->sei.mastering_display.display_primaries[j][1]; + metadata->display_primaries[i][1].num = sei->mastering_display.display_primaries[j][1]; metadata->display_primaries[i][1].den = chroma_den; } - metadata->white_point[0].num = s->sei.mastering_display.white_point[0]; + metadata->white_point[0].num = sei->mastering_display.white_point[0]; metadata->white_point[0].den = chroma_den; - metadata->white_point[1].num = s->sei.mastering_display.white_point[1]; + metadata->white_point[1].num = sei->mastering_display.white_point[1]; metadata->white_point[1].den = chroma_den; - metadata->max_luminance.num = s->sei.mastering_display.max_luminance; + metadata->max_luminance.num = sei->mastering_display.max_luminance; metadata->max_luminance.den = luma_den; - metadata->min_luminance.num = s->sei.mastering_display.min_luminance; + metadata->min_luminance.num = sei->mastering_display.min_luminance; metadata->min_luminance.den = luma_den; metadata->has_luminance = 1; metadata->has_primaries = 1; - av_log(s->avctx, AV_LOG_DEBUG, "Mastering Display Metadata:\n"); - av_log(s->avctx, AV_LOG_DEBUG, + av_log(logctx, AV_LOG_DEBUG, "Mastering Display Metadata:\n"); + av_log(logctx, AV_LOG_DEBUG, "r(%5.4f,%5.4f) g(%5.4f,%5.4f) b(%5.4f %5.4f) wp(%5.4f, %5.4f)\n", av_q2d(metadata->display_primaries[0][0]), av_q2d(metadata->display_primaries[0][1]), @@ -2838,31 +2837,31 @@ static int set_side_data(HEVCContext *s) av_q2d(metadata->display_primaries[2][0]), av_q2d(metadata->display_primaries[2][1]), av_q2d(metadata->white_point[0]), av_q2d(metadata->white_point[1])); - av_log(s->avctx, AV_LOG_DEBUG, + av_log(logctx, AV_LOG_DEBUG, "min_luminance=%f, max_luminance=%f\n", av_q2d(metadata->min_luminance), av_q2d(metadata->max_luminance)); } // Decrement the mastering display flag when IRAP frame has no_rasl_output_flag=1 // so the side data persists for the entire coded video sequence. 
- if (s->sei.content_light.present > 0 && + if (s && sei->content_light.present > 0 && IS_IRAP(s) && s->no_rasl_output_flag) { - s->sei.content_light.present--; + sei->content_light.present--; } - if (s->sei.content_light.present) { + if (sei->content_light.present) { AVContentLightMetadata *metadata = av_content_light_metadata_create_side_data(out); if (!metadata) return AVERROR(ENOMEM); - metadata->MaxCLL = s->sei.content_light.max_content_light_level; - metadata->MaxFALL = s->sei.content_light.max_pic_average_light_level; + metadata->MaxCLL = sei->content_light.max_content_light_level; + metadata->MaxFALL = sei->content_light.max_pic_average_light_level; - av_log(s->avctx, AV_LOG_DEBUG, "Content Light Level Metadata:\n"); - av_log(s->avctx, AV_LOG_DEBUG, "MaxCLL=%d, MaxFALL=%d\n", + av_log(logctx, AV_LOG_DEBUG, "Content Light Level Metadata:\n"); + av_log(logctx, AV_LOG_DEBUG, "MaxCLL=%d, MaxFALL=%d\n", metadata->MaxCLL, metadata->MaxFALL); } - if (s->sei.a53_caption.buf_ref) { - HEVCSEIA53Caption *a53 = &s->sei.a53_caption; + if (sei->a53_caption.buf_ref) { + HEVCSEIA53Caption *a53 = &sei->a53_caption; AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_A53_CC, a53->buf_ref); if (!sd) @@ -2870,8 +2869,8 @@ static int set_side_data(HEVCContext *s) a53->buf_ref = NULL; } - for (int i = 0; i < s->sei.unregistered.nb_buf_ref; i++) { - HEVCSEIUnregistered *unreg = &s->sei.unregistered; + for (int i = 0; i < sei->unregistered.nb_buf_ref; i++) { + HEVCSEIUnregistered *unreg = &sei->unregistered; if (unreg->buf_ref[i]) { AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, @@ -2882,9 +2881,9 @@ static int set_side_data(HEVCContext *s) unreg->buf_ref[i] = NULL; } } - s->sei.unregistered.nb_buf_ref = 0; + sei->unregistered.nb_buf_ref = 0; - if (s->sei.timecode.present) { + if (s && sei->timecode.present) { uint32_t *tc_sd; char tcbuf[AV_TIMECODE_STR_SIZE]; AVFrameSideData *tcside = av_frame_new_side_data(out, AV_FRAME_DATA_S12M_TIMECODE, @@ -2893,25 +2892,25 @@ static int set_side_data(HEVCContext *s) return AVERROR(ENOMEM); tc_sd = (uint32_t*)tcside->data; - tc_sd[0] = s->sei.timecode.num_clock_ts; + tc_sd[0] = sei->timecode.num_clock_ts; for (int i = 0; i < tc_sd[0]; i++) { - int drop = s->sei.timecode.cnt_dropped_flag[i]; - int hh = s->sei.timecode.hours_value[i]; - int mm = s->sei.timecode.minutes_value[i]; - int ss = s->sei.timecode.seconds_value[i]; - int ff = s->sei.timecode.n_frames[i]; + int drop = sei->timecode.cnt_dropped_flag[i]; + int hh = sei->timecode.hours_value[i]; + int mm = sei->timecode.minutes_value[i]; + int ss = sei->timecode.seconds_value[i]; + int ff = sei->timecode.n_frames[i]; tc_sd[i + 1] = av_timecode_get_smpte(s->avctx->framerate, drop, hh, mm, ss, ff); av_timecode_make_smpte_tc_string2(tcbuf, s->avctx->framerate, tc_sd[i + 1], 0, 0); av_dict_set(&out->metadata, "timecode", tcbuf, 0); } - s->sei.timecode.num_clock_ts = 0; + sei->timecode.num_clock_ts = 0; } - if (s->sei.film_grain_characteristics.present) { - HEVCSEIFilmGrainCharacteristics *fgc = &s->sei.film_grain_characteristics; + if (s && sei->film_grain_characteristics.present) { + HEVCSEIFilmGrainCharacteristics *fgc = &sei->film_grain_characteristics; AVFilmGrainParams *fgp = av_film_grain_params_create_side_data(out); if (!fgp) return AVERROR(ENOMEM); @@ -2965,8 +2964,8 @@ static int set_side_data(HEVCContext *s) fgc->present = fgc->persistence_flag; } - if (s->sei.dynamic_hdr_plus.info) { - AVBufferRef *info_ref = av_buffer_ref(s->sei.dynamic_hdr_plus.info); + if 
(sei->dynamic_hdr_plus.info) { + AVBufferRef *info_ref = av_buffer_ref(sei->dynamic_hdr_plus.info); if (!info_ref) return AVERROR(ENOMEM); @@ -2976,7 +2975,7 @@ static int set_side_data(HEVCContext *s) } } - if (s->rpu_buf) { + if (s && s->rpu_buf) { AVFrameSideData *rpu = av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_DOVI_RPU_BUFFER, s->rpu_buf); if (!rpu) return AVERROR(ENOMEM); @@ -2984,10 +2983,10 @@ static int set_side_data(HEVCContext *s) s->rpu_buf = NULL; } - if ((ret = ff_dovi_attach_side_data(&s->dovi_ctx, out)) < 0) + if (s && (ret = ff_dovi_attach_side_data(&s->dovi_ctx, out)) < 0) return ret; - if (s->sei.dynamic_hdr_vivid.info) { + if (s && s->sei.dynamic_hdr_vivid.info) { AVBufferRef *info_ref = av_buffer_ref(s->sei.dynamic_hdr_vivid.info); if (!info_ref) return AVERROR(ENOMEM); @@ -3046,7 +3045,7 @@ static int hevc_frame_start(HEVCContext *s) goto fail; } - ret = set_side_data(s); + ret = ff_hevc_set_side_data(s->avctx, &s->sei, s, s->ref->frame); if (ret < 0) goto fail; diff --git a/libavcodec/hevcdec.h b/libavcodec/hevcdec.h index de861b88b3..cd8cd40da0 100644 --- a/libavcodec/hevcdec.h +++ b/libavcodec/hevcdec.h @@ -690,6 +690,15 @@ void ff_hevc_hls_residual_coding(HEVCContext *s, int x0, int y0, void ff_hevc_hls_mvd_coding(HEVCContext *s, int x0, int y0, int log2_cb_size); +/** + * Set the decodec side data to an AVFrame. + * @logctx context for logging. + * @sei HEVCSEI decoding context, must not be NULL. + * @s HEVCContext, can be NULL. + * @return < 0 on error, 0 otherwise. + */ +int ff_hevc_set_side_data(AVCodecContext *logctx, HEVCSEI *sei, HEVCContext *s, AVFrame *out); + extern const uint8_t ff_hevc_qpel_extra_before[4]; extern const uint8_t ff_hevc_qpel_extra_after[4]; extern const uint8_t ff_hevc_qpel_extra[4]; -- ffmpeg-codebot _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
* [FFmpeg-devel] [PATCH v5 5/6] avcodec/h264dec: make h264_export_frame_props() accessible 2022-07-01 20:48 ` [FFmpeg-devel] [PATCH v5 " ffmpegagent ` (3 preceding siblings ...) 2022-07-01 20:48 ` [FFmpeg-devel] [PATCH v5 4/6] avcodec/hevcdec: make set_side_data() accessible softworkz @ 2022-07-01 20:48 ` softworkz 2022-07-01 20:48 ` [FFmpeg-devel] [PATCH v5 6/6] avcodec/qsvdec: Implement SEI parsing for QSV decoders softworkz ` (2 subsequent siblings) 7 siblings, 0 replies; 65+ messages in thread From: softworkz @ 2022-07-01 20:48 UTC (permalink / raw) To: ffmpeg-devel; +Cc: Kieran Kunhya, softworkz, Xiang, Haihao, Andreas Rheinhardt From: softworkz <softworkz@hotmail.com> Signed-off-by: softworkz <softworkz@hotmail.com> --- libavcodec/h264_slice.c | 98 +++++++++++++++++++++-------------------- libavcodec/h264dec.h | 2 + 2 files changed, 52 insertions(+), 48 deletions(-) diff --git a/libavcodec/h264_slice.c b/libavcodec/h264_slice.c index d56722a5c2..f2a4c1c657 100644 --- a/libavcodec/h264_slice.c +++ b/libavcodec/h264_slice.c @@ -1157,11 +1157,10 @@ static int h264_init_ps(H264Context *h, const H264SliceContext *sl, int first_sl return 0; } -static int h264_export_frame_props(H264Context *h) +int ff_h264_export_frame_props(AVCodecContext *logctx, H264SEIContext *sei, H264Context *h, AVFrame *out) { - const SPS *sps = h->ps.sps; - H264Picture *cur = h->cur_pic_ptr; - AVFrame *out = cur->f; + const SPS *sps = h ? h->ps.sps : NULL; + H264Picture *cur = h ? h->cur_pic_ptr : NULL; out->interlaced_frame = 0; out->repeat_pict = 0; @@ -1169,19 +1168,19 @@ static int h264_export_frame_props(H264Context *h) /* Signal interlacing information externally. */ /* Prioritize picture timing SEI information over used * decoding process if it exists. */ - if (h->sei.picture_timing.present) { - int ret = ff_h264_sei_process_picture_timing(&h->sei.picture_timing, sps, - h->avctx); + if (sps && sei->picture_timing.present) { + int ret = ff_h264_sei_process_picture_timing(&sei->picture_timing, sps, + logctx); if (ret < 0) { - av_log(h->avctx, AV_LOG_ERROR, "Error processing a picture timing SEI\n"); - if (h->avctx->err_recognition & AV_EF_EXPLODE) + av_log(logctx, AV_LOG_ERROR, "Error processing a picture timing SEI\n"); + if (logctx->err_recognition & AV_EF_EXPLODE) return ret; - h->sei.picture_timing.present = 0; + sei->picture_timing.present = 0; } } - if (sps->pic_struct_present_flag && h->sei.picture_timing.present) { - H264SEIPictureTiming *pt = &h->sei.picture_timing; + if (h && sps && sps->pic_struct_present_flag && sei->picture_timing.present) { + H264SEIPictureTiming *pt = &sei->picture_timing; switch (pt->pic_struct) { case H264_SEI_PIC_STRUCT_FRAME: break; @@ -1215,21 +1214,23 @@ static int h264_export_frame_props(H264Context *h) if ((pt->ct_type & 3) && pt->pic_struct <= H264_SEI_PIC_STRUCT_BOTTOM_TOP) out->interlaced_frame = (pt->ct_type & (1 << 1)) != 0; - } else { + } else if (h) { /* Derive interlacing flag from used decoding process. */ out->interlaced_frame = FIELD_OR_MBAFF_PICTURE(h); } - h->prev_interlaced_frame = out->interlaced_frame; - if (cur->field_poc[0] != cur->field_poc[1]) { + if (h) + h->prev_interlaced_frame = out->interlaced_frame; + + if (sps && cur->field_poc[0] != cur->field_poc[1]) { /* Derive top_field_first from field pocs. 
*/ out->top_field_first = cur->field_poc[0] < cur->field_poc[1]; - } else { - if (sps->pic_struct_present_flag && h->sei.picture_timing.present) { + } else if (sps) { + if (sps->pic_struct_present_flag && sei->picture_timing.present) { /* Use picture timing SEI information. Even if it is a * information of a past frame, better than nothing. */ - if (h->sei.picture_timing.pic_struct == H264_SEI_PIC_STRUCT_TOP_BOTTOM || - h->sei.picture_timing.pic_struct == H264_SEI_PIC_STRUCT_TOP_BOTTOM_TOP) + if (sei->picture_timing.pic_struct == H264_SEI_PIC_STRUCT_TOP_BOTTOM || + sei->picture_timing.pic_struct == H264_SEI_PIC_STRUCT_TOP_BOTTOM_TOP) out->top_field_first = 1; else out->top_field_first = 0; @@ -1243,11 +1244,11 @@ static int h264_export_frame_props(H264Context *h) } } - if (h->sei.frame_packing.present && - h->sei.frame_packing.arrangement_type <= 6 && - h->sei.frame_packing.content_interpretation_type > 0 && - h->sei.frame_packing.content_interpretation_type < 3) { - H264SEIFramePacking *fp = &h->sei.frame_packing; + if (sei->frame_packing.present && + sei->frame_packing.arrangement_type <= 6 && + sei->frame_packing.content_interpretation_type > 0 && + sei->frame_packing.content_interpretation_type < 3) { + H264SEIFramePacking *fp = &sei->frame_packing; AVStereo3D *stereo = av_stereo3d_create_side_data(out); if (stereo) { switch (fp->arrangement_type) { @@ -1289,11 +1290,11 @@ static int h264_export_frame_props(H264Context *h) } } - if (h->sei.display_orientation.present && - (h->sei.display_orientation.anticlockwise_rotation || - h->sei.display_orientation.hflip || - h->sei.display_orientation.vflip)) { - H264SEIDisplayOrientation *o = &h->sei.display_orientation; + if (sei->display_orientation.present && + (sei->display_orientation.anticlockwise_rotation || + sei->display_orientation.hflip || + sei->display_orientation.vflip)) { + H264SEIDisplayOrientation *o = &sei->display_orientation; double angle = o->anticlockwise_rotation * 360 / (double) (1 << 16); AVFrameSideData *rotation = av_frame_new_side_data(out, AV_FRAME_DATA_DISPLAYMATRIX, @@ -1314,29 +1315,30 @@ static int h264_export_frame_props(H264Context *h) } } - if (h->sei.afd.present) { + if (sei->afd.present) { AVFrameSideData *sd = av_frame_new_side_data(out, AV_FRAME_DATA_AFD, sizeof(uint8_t)); if (sd) { - *sd->data = h->sei.afd.active_format_description; - h->sei.afd.present = 0; + *sd->data = sei->afd.active_format_description; + sei->afd.present = 0; } } - if (h->sei.a53_caption.buf_ref) { - H264SEIA53Caption *a53 = &h->sei.a53_caption; + if (sei->a53_caption.buf_ref) { + H264SEIA53Caption *a53 = &sei->a53_caption; AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_A53_CC, a53->buf_ref); if (!sd) av_buffer_unref(&a53->buf_ref); a53->buf_ref = NULL; - h->avctx->properties |= FF_CODEC_PROPERTY_CLOSED_CAPTIONS; + if (h) + h->avctx->properties |= FF_CODEC_PROPERTY_CLOSED_CAPTIONS; } - for (int i = 0; i < h->sei.unregistered.nb_buf_ref; i++) { - H264SEIUnregistered *unreg = &h->sei.unregistered; + for (int i = 0; i < sei->unregistered.nb_buf_ref; i++) { + H264SEIUnregistered *unreg = &sei->unregistered; if (unreg->buf_ref[i]) { AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, @@ -1347,10 +1349,10 @@ static int h264_export_frame_props(H264Context *h) unreg->buf_ref[i] = NULL; } } - h->sei.unregistered.nb_buf_ref = 0; + sei->unregistered.nb_buf_ref = 0; - if (h->sei.film_grain_characteristics.present) { - H264SEIFilmGrainCharacteristics *fgc = &h->sei.film_grain_characteristics; + if (h && sps && 
sei->film_grain_characteristics.present) { + H264SEIFilmGrainCharacteristics *fgc = &sei->film_grain_characteristics; AVFilmGrainParams *fgp = av_film_grain_params_create_side_data(out); if (!fgp) return AVERROR(ENOMEM); @@ -1404,7 +1406,7 @@ static int h264_export_frame_props(H264Context *h) h->avctx->properties |= FF_CODEC_PROPERTY_FILM_GRAIN; } - if (h->sei.picture_timing.timecode_cnt > 0) { + if (h && sei->picture_timing.timecode_cnt > 0) { uint32_t *tc_sd; char tcbuf[AV_TIMECODE_STR_SIZE]; @@ -1415,14 +1417,14 @@ static int h264_export_frame_props(H264Context *h) return AVERROR(ENOMEM); tc_sd = (uint32_t*)tcside->data; - tc_sd[0] = h->sei.picture_timing.timecode_cnt; + tc_sd[0] = sei->picture_timing.timecode_cnt; for (int i = 0; i < tc_sd[0]; i++) { - int drop = h->sei.picture_timing.timecode[i].dropframe; - int hh = h->sei.picture_timing.timecode[i].hours; - int mm = h->sei.picture_timing.timecode[i].minutes; - int ss = h->sei.picture_timing.timecode[i].seconds; - int ff = h->sei.picture_timing.timecode[i].frame; + int drop = sei->picture_timing.timecode[i].dropframe; + int hh = sei->picture_timing.timecode[i].hours; + int mm = sei->picture_timing.timecode[i].minutes; + int ss = sei->picture_timing.timecode[i].seconds; + int ff = sei->picture_timing.timecode[i].frame; tc_sd[i + 1] = av_timecode_get_smpte(h->avctx->framerate, drop, hh, mm, ss, ff); av_timecode_make_smpte_tc_string2(tcbuf, h->avctx->framerate, tc_sd[i + 1], 0, 0); @@ -1817,7 +1819,7 @@ static int h264_field_start(H264Context *h, const H264SliceContext *sl, * field coded frames, since some SEI information is present for each field * and is merged by the SEI parsing code. */ if (!FIELD_PICTURE(h) || !h->first_field || h->missing_fields > 1) { - ret = h264_export_frame_props(h); + ret = ff_h264_export_frame_props(h->avctx, &h->sei, h, h->cur_pic_ptr->f); if (ret < 0) return ret; diff --git a/libavcodec/h264dec.h b/libavcodec/h264dec.h index 9a1ec1bace..38930da4ca 100644 --- a/libavcodec/h264dec.h +++ b/libavcodec/h264dec.h @@ -808,4 +808,6 @@ void ff_h264_free_tables(H264Context *h); void ff_h264_set_erpic(ERPicture *dst, H264Picture *src); +int ff_h264_export_frame_props(AVCodecContext *logctx, H264SEIContext *sei, H264Context *h, AVFrame *out); + #endif /* AVCODEC_H264DEC_H */ -- ffmpeg-codebot _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
* [FFmpeg-devel] [PATCH v5 6/6] avcodec/qsvdec: Implement SEI parsing for QSV decoders 2022-07-01 20:48 ` [FFmpeg-devel] [PATCH v5 " ffmpegagent ` (4 preceding siblings ...) 2022-07-01 20:48 ` [FFmpeg-devel] [PATCH v5 5/6] avcodec/h264dec: make h264_export_frame_props() accessible softworkz @ 2022-07-01 20:48 ` softworkz 2022-07-19 6:55 ` [FFmpeg-devel] [PATCH v5 0/6] " Xiang, Haihao 2022-10-25 4:03 ` [FFmpeg-devel] [PATCH v6 0/3] " ffmpegagent 7 siblings, 0 replies; 65+ messages in thread From: softworkz @ 2022-07-01 20:48 UTC (permalink / raw) To: ffmpeg-devel; +Cc: Kieran Kunhya, softworkz, Xiang, Haihao, Andreas Rheinhardt From: softworkz <softworkz@hotmail.com> Signed-off-by: softworkz <softworkz@hotmail.com> --- libavcodec/Makefile | 6 +- libavcodec/hevcdsp.c | 4 + libavcodec/qsvdec.c | 234 +++++++++++++++++++++++++++++++++++++++++++ 3 files changed, 243 insertions(+), 1 deletion(-) diff --git a/libavcodec/Makefile b/libavcodec/Makefile index 3b8f7b5e01..7b98391745 100644 --- a/libavcodec/Makefile +++ b/libavcodec/Makefile @@ -144,7 +144,11 @@ OBJS-$(CONFIG_MSS34DSP) += mss34dsp.o OBJS-$(CONFIG_PIXBLOCKDSP) += pixblockdsp.o OBJS-$(CONFIG_QPELDSP) += qpeldsp.o OBJS-$(CONFIG_QSV) += qsv.o -OBJS-$(CONFIG_QSVDEC) += qsvdec.o +OBJS-$(CONFIG_QSVDEC) += qsvdec.o h264_slice.o h264_cabac.o h264_cavlc.o \ + h264_direct.o h264_mb.o h264_picture.o h264_loopfilter.o \ + h264dec.o h264_refs.o cabac.o hevcdec.o hevc_refs.o \ + hevc_filter.o hevc_cabac.o hevc_mvs.o hevcpred.o hevcdsp.o \ + h274.o dovi_rpu.o mpeg12dec.o OBJS-$(CONFIG_QSVENC) += qsvenc.o OBJS-$(CONFIG_RANGECODER) += rangecoder.o OBJS-$(CONFIG_RDFT) += rdft.o diff --git a/libavcodec/hevcdsp.c b/libavcodec/hevcdsp.c index 2ca551df1d..c7a436d30f 100644 --- a/libavcodec/hevcdsp.c +++ b/libavcodec/hevcdsp.c @@ -22,6 +22,8 @@ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA */ +#include "config_components.h" + #include "hevcdsp.h" static const int8_t transform[32][32] = { @@ -257,6 +259,7 @@ int i = 0; break; } +#if CONFIG_HEVC_DECODER #if ARCH_AARCH64 ff_hevc_dsp_init_aarch64(hevcdsp, bit_depth); #elif ARCH_ARM @@ -270,4 +273,5 @@ int i = 0; #elif ARCH_LOONGARCH ff_hevc_dsp_init_loongarch(hevcdsp, bit_depth); #endif +#endif } diff --git a/libavcodec/qsvdec.c b/libavcodec/qsvdec.c index 5fc5bed4c8..e854f363ec 100644 --- a/libavcodec/qsvdec.c +++ b/libavcodec/qsvdec.c @@ -49,6 +49,12 @@ #include "hwconfig.h" #include "qsv.h" #include "qsv_internal.h" +#include "h264dec.h" +#include "h264_sei.h" +#include "hevcdec.h" +#include "hevc_ps.h" +#include "hevc_sei.h" +#include "mpeg12.h" static const AVRational mfx_tb = { 1, 90000 }; @@ -60,6 +66,8 @@ static const AVRational mfx_tb = { 1, 90000 }; AV_NOPTS_VALUE : pts_tb.num ? 
\ av_rescale_q(mfx_pts, mfx_tb, pts_tb) : mfx_pts) +#define PAYLOAD_BUFFER_SIZE 65535 + typedef struct QSVAsyncFrame { mfxSyncPoint *sync; QSVFrame *frame; @@ -101,6 +109,9 @@ typedef struct QSVContext { mfxExtBuffer **ext_buffers; int nb_ext_buffers; + + mfxU8 payload_buffer[PAYLOAD_BUFFER_SIZE]; + Mpeg1Context mpeg_ctx; } QSVContext; static const AVCodecHWConfigInternal *const qsv_hw_configs[] = { @@ -599,6 +610,210 @@ static int qsv_export_film_grain(AVCodecContext *avctx, mfxExtAV1FilmGrainParam return 0; } #endif +static int find_start_offset(mfxU8 data[4]) +{ + if (data[0] == 0 && data[1] == 0 && data[2] == 1) + return 3; + + if (data[0] == 0 && data[1] == 0 && data[2] == 0 && data[3] == 1) + return 4; + + return 0; +} + +static int parse_sei_h264(AVCodecContext* avctx, QSVContext* q, AVFrame* out) +{ + H264SEIContext sei = { 0 }; + GetBitContext gb = { 0 }; + mfxPayload payload = { 0, .Data = &q->payload_buffer[0], .BufSize = sizeof(q->payload_buffer) }; + mfxU64 ts; + int ret; + + while (1) { + int start; + memset(payload.Data, 0, payload.BufSize); + + ret = MFXVideoDECODE_GetPayload(q->session, &ts, &payload); + if (ret == MFX_ERR_NOT_ENOUGH_BUFFER) { + av_log(avctx, AV_LOG_WARNING, "Warning: Insufficient buffer on GetPayload(). Size: %"PRIu64" Needed: %d\n", sizeof(q->payload_buffer), payload.BufSize); + return 0; + } + if (ret != MFX_ERR_NONE) + return ret; + + if (payload.NumBit == 0 || payload.NumBit >= payload.BufSize * 8) + break; + + start = find_start_offset(payload.Data); + + switch (payload.Type) { + case SEI_TYPE_BUFFERING_PERIOD: + case SEI_TYPE_PIC_TIMING: + continue; + } + + if (init_get_bits(&gb, &payload.Data[start], payload.NumBit - start * 8) < 0) + av_log(avctx, AV_LOG_ERROR, "Error initializing bitstream reader SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); + else { + ret = ff_h264_sei_decode(&sei, &gb, NULL, avctx); + + if (ret < 0) + av_log(avctx, AV_LOG_WARNING, "Failed to parse SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); + else + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d Numbits %d\n", payload.Type, payload.NumBit); + } + } + + if (out) + return ff_h264_export_frame_props(avctx, &sei, NULL, out); + + return 0; +} + +static int parse_sei_hevc(AVCodecContext* avctx, QSVContext* q, QSVFrame* out) +{ + HEVCSEI sei = { 0 }; + HEVCParamSets ps = { 0 }; + GetBitContext gb = { 0 }; + mfxPayload payload = { 0, .Data = &q->payload_buffer[0], .BufSize = sizeof(q->payload_buffer) }; + mfxFrameSurface1 *surface = &out->surface; + mfxU64 ts; + int ret, has_logged = 0; + + while (1) { + int start; + memset(payload.Data, 0, payload.BufSize); + + ret = MFXVideoDECODE_GetPayload(q->session, &ts, &payload); + if (ret == MFX_ERR_NOT_ENOUGH_BUFFER) { + av_log(avctx, AV_LOG_WARNING, "Warning: Insufficient buffer on GetPayload(). 
Size: %"PRIu64" Needed: %d\n", sizeof(q->payload_buffer), payload.BufSize); + return 0; + } + if (ret != MFX_ERR_NONE) + return ret; + + if (payload.NumBit == 0 || payload.NumBit >= payload.BufSize * 8) + break; + + if (!has_logged) { + has_logged = 1; + av_log(avctx, AV_LOG_VERBOSE, "-----------------------------------------\n"); + av_log(avctx, AV_LOG_VERBOSE, "Start reading SEI - payload timestamp: %llu - surface timestamp: %llu\n", ts, surface->Data.TimeStamp); + } + + if (ts != surface->Data.TimeStamp) { + av_log(avctx, AV_LOG_WARNING, "GetPayload timestamp (%llu) does not match surface timestamp: (%llu)\n", ts, surface->Data.TimeStamp); + } + + start = find_start_offset(payload.Data); + + av_log(avctx, AV_LOG_VERBOSE, "parsing SEI type: %3d Numbits %3d Start: %d\n", payload.Type, payload.NumBit, start); + + switch (payload.Type) { + case SEI_TYPE_BUFFERING_PERIOD: + case SEI_TYPE_PIC_TIMING: + continue; + case SEI_TYPE_MASTERING_DISPLAY_COLOUR_VOLUME: + // There seems to be a bug in MSDK + payload.NumBit -= 8; + + break; + case SEI_TYPE_CONTENT_LIGHT_LEVEL_INFO: + // There seems to be a bug in MSDK + payload.NumBit = 48; + + break; + case SEI_TYPE_USER_DATA_REGISTERED_ITU_T_T35: + // There seems to be a bug in MSDK + if (payload.NumBit == 552) + payload.NumBit = 528; + break; + } + + if (init_get_bits(&gb, &payload.Data[start], payload.NumBit - start * 8) < 0) + av_log(avctx, AV_LOG_ERROR, "Error initializing bitstream reader SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); + else { + ret = ff_hevc_decode_nal_sei(&gb, avctx, &sei, &ps, HEVC_NAL_SEI_PREFIX); + + if (ret < 0) + av_log(avctx, AV_LOG_WARNING, "Failed to parse SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); + else + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d Numbits %d\n", payload.Type, payload.NumBit); + } + } + + if (has_logged) { + av_log(avctx, AV_LOG_VERBOSE, "End reading SEI\n"); + } + + if (out && out->frame) + return ff_hevc_set_side_data(avctx, &sei, NULL, out->frame); + + return 0; +} + +static int parse_sei_mpeg12(AVCodecContext* avctx, QSVContext* q, AVFrame* out) +{ + Mpeg1Context *mpeg_ctx = &q->mpeg_ctx; + mfxPayload payload = { 0, .Data = &q->payload_buffer[0], .BufSize = sizeof(q->payload_buffer) }; + mfxU64 ts; + int ret; + + while (1) { + int start; + + memset(payload.Data, 0, payload.BufSize); + ret = MFXVideoDECODE_GetPayload(q->session, &ts, &payload); + if (ret == MFX_ERR_NOT_ENOUGH_BUFFER) { + av_log(avctx, AV_LOG_WARNING, "Warning: Insufficient buffer on GetPayload(). 
Size: %"PRIu64" Needed: %d\n", sizeof(q->payload_buffer), payload.BufSize); + return 0; + } + if (ret != MFX_ERR_NONE) + return ret; + + if (payload.NumBit == 0 || payload.NumBit >= payload.BufSize * 8) + break; + + start = find_start_offset(payload.Data); + + start++; + + ff_mpeg_decode_user_data(avctx, mpeg_ctx, &payload.Data[start], (int)((payload.NumBit + 7) / 8) - start); + + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d Numbits %d start %d -> %.s\n", payload.Type, payload.NumBit, start, (char *)(&payload.Data[start])); + } + + if (!out) + return 0; + + if (mpeg_ctx->a53_buf_ref) { + + AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_A53_CC, mpeg_ctx->a53_buf_ref); + if (!sd) + av_buffer_unref(&mpeg_ctx->a53_buf_ref); + mpeg_ctx->a53_buf_ref = NULL; + } + + if (mpeg_ctx->has_stereo3d) { + AVStereo3D *stereo = av_stereo3d_create_side_data(out); + if (!stereo) + return AVERROR(ENOMEM); + + *stereo = mpeg_ctx->stereo3d; + mpeg_ctx->has_stereo3d = 0; + } + + if (mpeg_ctx->has_afd) { + AVFrameSideData *sd = av_frame_new_side_data(out, AV_FRAME_DATA_AFD, 1); + if (!sd) + return AVERROR(ENOMEM); + + *sd->data = mpeg_ctx->afd; + mpeg_ctx->has_afd = 0; + } + + return 0; +} static int qsv_decode(AVCodecContext *avctx, QSVContext *q, AVFrame *frame, int *got_frame, @@ -636,6 +851,8 @@ static int qsv_decode(AVCodecContext *avctx, QSVContext *q, insurf, &outsurf, sync); if (ret == MFX_WRN_DEVICE_BUSY) av_usleep(500); + else if (avctx->codec_id == AV_CODEC_ID_MPEG2VIDEO) + parse_sei_mpeg12(avctx, q, NULL); } while (ret == MFX_WRN_DEVICE_BUSY || ret == MFX_ERR_MORE_SURFACE); @@ -677,6 +894,23 @@ static int qsv_decode(AVCodecContext *avctx, QSVContext *q, return AVERROR_BUG; } + switch (avctx->codec_id) { + case AV_CODEC_ID_MPEG2VIDEO: + ret = parse_sei_mpeg12(avctx, q, out_frame->frame); + break; + case AV_CODEC_ID_H264: + ret = parse_sei_h264(avctx, q, out_frame->frame); + break; + case AV_CODEC_ID_HEVC: + ret = parse_sei_hevc(avctx, q, out_frame); + break; + default: + ret = 0; + } + + if (ret < 0) + av_log(avctx, AV_LOG_ERROR, "Error parsing SEI data: %d\n", ret); + out_frame->queued += 1; aframe = (QSVAsyncFrame){ sync, out_frame }; -- ffmpeg-codebot _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
* Re: [FFmpeg-devel] [PATCH v5 0/6] Implement SEI parsing for QSV decoders 2022-07-01 20:48 ` [FFmpeg-devel] [PATCH v5 " ffmpegagent ` (5 preceding siblings ...) 2022-07-01 20:48 ` [FFmpeg-devel] [PATCH v5 6/6] avcodec/qsvdec: Implement SEI parsing for QSV decoders softworkz @ 2022-07-19 6:55 ` Xiang, Haihao 2022-07-21 21:06 ` Soft Works 2022-07-21 21:56 ` Andreas Rheinhardt 2022-10-25 4:03 ` [FFmpeg-devel] [PATCH v6 0/3] " ffmpegagent 7 siblings, 2 replies; 65+ messages in thread From: Xiang, Haihao @ 2022-07-19 6:55 UTC (permalink / raw) To: ffmpeg-devel Cc: softworkz, kierank, haihao.xiang-at-intel.com, andreas.rheinhardt On Fri, 2022-07-01 at 20:48 +0000, ffmpegagent wrote: > Missing SEI information has always been a major drawback when using the QSV > decoders. I used to think that there's no chance to get at the data without > explicit implementation from the MSDK side (or doing something weird like > parsing in parallel). It turned out that there's a hardly known api method > that provides access to all SEI (h264/hevc) or user data (mpeg2video). > > This allows to get things like closed captions, frame packing, display > orientation, HDR data (mastering display, content light level, etc.) without > having to rely on those data being provided by the MSDK as extended buffers. > > The commit "Implement SEI parsing for QSV decoders" includes some hard-coded > workarounds for MSDK bugs which I reported: > https://github.com/Intel-Media-SDK/MediaSDK/issues/2597#issuecomment-1072795311 > > But that doesn't help. Those bugs exist and I'm sharing my workarounds, > which are empirically determined by testing a range of files. If someone is > interested, I can provide private access to a repository where we have been > testing this. Alternatively, I could also leave those workarounds out, and > just skip those SEI types. > > In a previous version of this patchset, there was a concern that payload > data might need to be re-ordered. Meanwhile I have researched this carefully > and the conclusion is that this is not required. 
> > My detailed analysis can be found here: > https://gist.github.com/softworkz/36c49586a8610813a32270ee3947a932 > > v4 > > * add new dependencies in makefile Now, build still works when someone uses > configure --disable-decoder=h264 --disable-decoder=hevc > --disable-decoder=mpegvideo --disable-decoder=mpeg1video > --disable-decoder=mpeg2video --enable-libmfx > > v3 > > * frame.h: clarify doc text for av_frame_copy_side_data() > > v2 > > * qsvdec: make error handling consistent and clear > * qsvdec: remove AV_CODEC_ID_MPEG1VIDEO constants > * hevcdec: rename function to ff_hevc_set_side_data(), add doc text > > v3 > > * qsvdec: fix c/p error > > softworkz (6): > avutil/frame: Add av_frame_copy_side_data() and > av_frame_remove_all_side_data() > avcodec/vpp_qsv: Copy side data from input to output frame > avcodec/mpeg12dec: make mpeg_decode_user_data() accessible > avcodec/hevcdec: make set_side_data() accessible > avcodec/h264dec: make h264_export_frame_props() accessible > avcodec/qsvdec: Implement SEI parsing for QSV decoders > > doc/APIchanges | 4 + > libavcodec/Makefile | 6 +- > libavcodec/h264_slice.c | 98 ++++++++------- > libavcodec/h264dec.h | 2 + > libavcodec/hevcdec.c | 117 +++++++++--------- > libavcodec/hevcdec.h | 9 ++ > libavcodec/hevcdsp.c | 4 + > libavcodec/mpeg12.h | 28 +++++ > libavcodec/mpeg12dec.c | 40 +----- > libavcodec/qsvdec.c | 234 +++++++++++++++++++++++++++++++++++ > libavfilter/qsvvpp.c | 6 + > libavfilter/vf_overlay_qsv.c | 19 ++- > libavutil/frame.c | 67 ++++++---- > libavutil/frame.h | 32 +++++ > libavutil/version.h | 2 +- > 15 files changed, 494 insertions(+), 174 deletions(-) > > > base-commit: 6a82412bf33108111eb3f63076fd5a51349ae114 > Published-As: > https://github.com/ffstaging/FFmpeg/releases/tag/pr-ffstaging-31%2Fsoftworkz%2Fsubmit_qsv_sei-v5 > Fetch-It-Via: git fetch https://github.com/ffstaging/FFmpeg pr-ffstaging- > 31/softworkz/submit_qsv_sei-v5 > Pull-Request: https://github.com/ffstaging/FFmpeg/pull/31 > > Range-diff vs v4: > > 1: 7656477360 = 1: 7656477360 avutil/frame: Add av_frame_copy_side_data() > and av_frame_remove_all_side_data() > 2: 06976606c5 = 2: 06976606c5 avcodec/vpp_qsv: Copy side data from input to > output frame > 3: 320a8a535c = 3: 320a8a535c avcodec/mpeg12dec: make > mpeg_decode_user_data() accessible > 4: e58ad6564f = 4: e58ad6564f avcodec/hevcdec: make set_side_data() > accessible > 5: a57bfaebb9 = 5: 4c0b6eb4cb avcodec/h264dec: make > h264_export_frame_props() accessible > 6: 3f2588563e ! 
6: 19bc00be4d avcodec/qsvdec: Implement SEI parsing for QSV > decoders > @@ Commit message > > Signed-off-by: softworkz <softworkz@hotmail.com> > > + ## libavcodec/Makefile ## > +@@ libavcodec/Makefile: OBJS-$(CONFIG_MSS34DSP) += > mss34dsp.o > + OBJS-$(CONFIG_PIXBLOCKDSP) += pixblockdsp.o > + OBJS-$(CONFIG_QPELDSP) += qpeldsp.o > + OBJS-$(CONFIG_QSV) += qsv.o > +-OBJS-$(CONFIG_QSVDEC) += qsvdec.o > ++OBJS-$(CONFIG_QSVDEC) += qsvdec.o h264_slice.o > h264_cabac.o h264_cavlc.o \ > ++ h264_direct.o h264_mb.o > h264_picture.o h264_loopfilter.o \ > ++ h264dec.o h264_refs.o cabac.o > hevcdec.o hevc_refs.o \ > ++ > hevc_filter.o hevc_cabac.o hevc_mvs.o hevcpred.o hevcdsp.o \ > ++ > h274.o dovi_rpu.o mpeg12dec.o > + OBJS-$(CONFIG_QSVENC) += qsvenc.o > + OBJS-$(CONFIG_RANGECODER) += rangecoder.o > + OBJS-$(CONFIG_RDFT) += rdft.o > + > + ## libavcodec/hevcdsp.c ## > +@@ > + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110- > 1301 USA > + */ > + > ++#include "config_components.h" > ++ > + #include "hevcdsp.h" > + > + static const int8_t transform[32][32] = { > +@@ libavcodec/hevcdsp.c: int i = 0; > + break; > + } > + > ++#if CONFIG_HEVC_DECODER > + #if ARCH_AARCH64 > + ff_hevc_dsp_init_aarch64(hevcdsp, bit_depth); > + #elif ARCH_ARM > +@@ libavcodec/hevcdsp.c: int i = 0; > + #elif ARCH_LOONGARCH > + ff_hevc_dsp_init_loongarch(hevcdsp, bit_depth); > + #endif > ++#endif > + } > + > ## libavcodec/qsvdec.c ## > @@ > #include "hwconfig.h" Is there any comment on this patchset ? If not, I'd like to merge it to make QSV decoders works with SEI info. Thanks Haihao > _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
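The "hardly known api method" referenced in the quoted cover letter is MFXVideoDECODE_GetPayload(). Condensed from patch 6/6, the drain loop has roughly this shape (buffer sizing, the per-iteration memset and the MSDK workarounds from the patch are left out; this is a sketch, not a drop-in replacement):

#include <mfxvideo.h>

/* Sketch: after decoding, every SEI message (H.264/HEVC) or user-data
 * chunk (MPEG-2) can be fetched from the session with its timestamp. */
static void drain_payloads(mfxSession session, mfxU8 *buf, mfxU16 buf_size)
{
    mfxPayload payload = { .Data = buf, .BufSize = buf_size };
    mfxU64 ts;

    for (;;) {
        mfxStatus sts = MFXVideoDECODE_GetPayload(session, &ts, &payload);
        if (sts != MFX_ERR_NONE || payload.NumBit == 0)
            break;
        /* payload.Type, payload.NumBit and payload.Data describe one
         * message; the patch hands it to the existing h264/hevc/mpeg12
         * parsers to turn it into frame side data. */
    }
}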
* Re: [FFmpeg-devel] [PATCH v5 0/6] Implement SEI parsing for QSV decoders 2022-07-19 6:55 ` [FFmpeg-devel] [PATCH v5 0/6] " Xiang, Haihao @ 2022-07-21 21:06 ` Soft Works 2022-07-21 21:56 ` Andreas Rheinhardt 1 sibling, 0 replies; 65+ messages in thread From: Soft Works @ 2022-07-21 21:06 UTC (permalink / raw) To: Xiang, Haihao, ffmpeg-devel Cc: kierank, haihao.xiang-at-intel.com, andreas.rheinhardt > -----Original Message----- > From: Xiang, Haihao <haihao.xiang@intel.com> > Sent: Tuesday, July 19, 2022 8:55 AM > To: ffmpeg-devel@ffmpeg.org > Cc: andreas.rheinhardt@outlook.com; kierank@obe.tv; haihao.xiang-at- > intel.com@ffmpeg.org; softworkz@hotmail.com > Subject: Re: [FFmpeg-devel] [PATCH v5 0/6] Implement SEI parsing for > QSV decoders > > On Fri, 2022-07-01 at 20:48 +0000, ffmpegagent wrote: > > Missing SEI information has always been a major drawback when using > the QSV > > decoders. I used to think that there's no chance to get at the data > without > > explicit implementation from the MSDK side (or doing something > weird like > > parsing in parallel). It turned out that there's a hardly known api > method > > that provides access to all SEI (h264/hevc) or user data > (mpeg2video). > > > > This allows to get things like closed captions, frame packing, > display > > orientation, HDR data (mastering display, content light level, > etc.) without > > having to rely on those data being provided by the MSDK as extended > buffers. > > > > The commit "Implement SEI parsing for QSV decoders" includes some > hard-coded > > workarounds for MSDK bugs which I reported: > > > https://github.com/Intel-Media-SDK/MediaSDK/issues/2597#issuecomment- > 1072795311 > > > > But that doesn't help. Those bugs exist and I'm sharing my > workarounds, > > which are empirically determined by testing a range of files. If > someone is > > interested, I can provide private access to a repository where we > have been > > testing this. Alternatively, I could also leave those workarounds > out, and > > just skip those SEI types. > > > > In a previous version of this patchset, there was a concern that > payload > > data might need to be re-ordered. Meanwhile I have researched this > carefully > > and the conclusion is that this is not required. 
> > > > My detailed analysis can be found here: > > https://gist.github.com/softworkz/36c49586a8610813a32270ee3947a932 > > > > v4 > > > > * add new dependencies in makefile Now, build still works when > someone uses > > configure --disable-decoder=h264 --disable-decoder=hevc > > --disable-decoder=mpegvideo --disable-decoder=mpeg1video > > --disable-decoder=mpeg2video --enable-libmfx > > > > v3 > > > > * frame.h: clarify doc text for av_frame_copy_side_data() > > > > v2 > > > > * qsvdec: make error handling consistent and clear > > * qsvdec: remove AV_CODEC_ID_MPEG1VIDEO constants > > * hevcdec: rename function to ff_hevc_set_side_data(), add doc > text > > > > v3 > > > > * qsvdec: fix c/p error > > > > softworkz (6): > > avutil/frame: Add av_frame_copy_side_data() and > > av_frame_remove_all_side_data() > > avcodec/vpp_qsv: Copy side data from input to output frame > > avcodec/mpeg12dec: make mpeg_decode_user_data() accessible > > avcodec/hevcdec: make set_side_data() accessible > > avcodec/h264dec: make h264_export_frame_props() accessible > > avcodec/qsvdec: Implement SEI parsing for QSV decoders > > > > doc/APIchanges | 4 + > > libavcodec/Makefile | 6 +- > > libavcodec/h264_slice.c | 98 ++++++++------- > > libavcodec/h264dec.h | 2 + > > libavcodec/hevcdec.c | 117 +++++++++--------- > > libavcodec/hevcdec.h | 9 ++ > > libavcodec/hevcdsp.c | 4 + > > libavcodec/mpeg12.h | 28 +++++ > > libavcodec/mpeg12dec.c | 40 +----- > > libavcodec/qsvdec.c | 234 > +++++++++++++++++++++++++++++++++++ > > libavfilter/qsvvpp.c | 6 + > > libavfilter/vf_overlay_qsv.c | 19 ++- > > libavutil/frame.c | 67 ++++++---- > > libavutil/frame.h | 32 +++++ > > libavutil/version.h | 2 +- > > 15 files changed, 494 insertions(+), 174 deletions(-) > > > > > > base-commit: 6a82412bf33108111eb3f63076fd5a51349ae114 > > Published-As: > > https://github.com/ffstaging/FFmpeg/releases/tag/pr-ffstaging- > 31%2Fsoftworkz%2Fsubmit_qsv_sei-v5 > > Fetch-It-Via: git fetch https://github.com/ffstaging/FFmpeg pr- > ffstaging- > > 31/softworkz/submit_qsv_sei-v5 > > Pull-Request: https://github.com/ffstaging/FFmpeg/pull/31 > > > > Range-diff vs v4: > > > > 1: 7656477360 = 1: 7656477360 avutil/frame: Add > av_frame_copy_side_data() > > and av_frame_remove_all_side_data() > > 2: 06976606c5 = 2: 06976606c5 avcodec/vpp_qsv: Copy side data > from input to > > output frame > > 3: 320a8a535c = 3: 320a8a535c avcodec/mpeg12dec: make > > mpeg_decode_user_data() accessible > > 4: e58ad6564f = 4: e58ad6564f avcodec/hevcdec: make > set_side_data() > > accessible > > 5: a57bfaebb9 = 5: 4c0b6eb4cb avcodec/h264dec: make > > h264_export_frame_props() accessible > > 6: 3f2588563e ! 
6: 19bc00be4d avcodec/qsvdec: Implement SEI > parsing for QSV > > decoders > > @@ Commit message > > > > Signed-off-by: softworkz <softworkz@hotmail.com> > > > > + ## libavcodec/Makefile ## > > +@@ libavcodec/Makefile: OBJS-$(CONFIG_MSS34DSP) > += > > mss34dsp.o > > + OBJS-$(CONFIG_PIXBLOCKDSP) += pixblockdsp.o > > + OBJS-$(CONFIG_QPELDSP) += qpeldsp.o > > + OBJS-$(CONFIG_QSV) += qsv.o > > +-OBJS-$(CONFIG_QSVDEC) += qsvdec.o > > ++OBJS-$(CONFIG_QSVDEC) += qsvdec.o > h264_slice.o > > h264_cabac.o h264_cavlc.o \ > > ++ h264_direct.o > h264_mb.o > > h264_picture.o h264_loopfilter.o \ > > ++ h264dec.o > h264_refs.o cabac.o > > hevcdec.o hevc_refs.o \ > > ++ > > > hevc_filter.o hevc_cabac.o hevc_mvs.o hevcpred.o hevcdsp.o \ > > ++ > > > h274.o dovi_rpu.o mpeg12dec.o > > + OBJS-$(CONFIG_QSVENC) += qsvenc.o > > + OBJS-$(CONFIG_RANGECODER) += rangecoder.o > > + OBJS-$(CONFIG_RDFT) += rdft.o > > + > > + ## libavcodec/hevcdsp.c ## > > +@@ > > + * Foundation, Inc., 51 Franklin Street, Fifth Floor, > Boston, MA 02110- > > 1301 USA > > + */ > > + > > ++#include "config_components.h" > > ++ > > + #include "hevcdsp.h" > > + > > + static const int8_t transform[32][32] = { > > +@@ libavcodec/hevcdsp.c: int i = 0; > > + break; > > + } > > + > > ++#if CONFIG_HEVC_DECODER > > + #if ARCH_AARCH64 > > + ff_hevc_dsp_init_aarch64(hevcdsp, bit_depth); > > + #elif ARCH_ARM > > +@@ libavcodec/hevcdsp.c: int i = 0; > > + #elif ARCH_LOONGARCH > > + ff_hevc_dsp_init_loongarch(hevcdsp, bit_depth); > > + #endif > > ++#endif > > + } > > + > > ## libavcodec/qsvdec.c ## > > @@ > > #include "hwconfig.h" > > > Is there any comment on this patchset ? If not, I'd like to merge it > to make QSV > decoders works with SEI info. > > Thanks > Haihao There's a conflicting (but reasonable and useful) patchset from Andreas: https://patchwork.ffmpeg.org/project/ffmpeg/list/?series=6959 Even though this is open for so long already, it think it makes more sense to merge Andreas' first and adapt mine to be applied on top of it. @Andreas - any news on that patchset of yours? Thanks, softworkz _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
* Re: [FFmpeg-devel] [PATCH v5 0/6] Implement SEI parsing for QSV decoders 2022-07-19 6:55 ` [FFmpeg-devel] [PATCH v5 0/6] " Xiang, Haihao 2022-07-21 21:06 ` Soft Works @ 2022-07-21 21:56 ` Andreas Rheinhardt 2022-10-21 7:42 ` Soft Works 1 sibling, 1 reply; 65+ messages in thread From: Andreas Rheinhardt @ 2022-07-21 21:56 UTC (permalink / raw) To: Xiang, Haihao, ffmpeg-devel Xiang, Haihao: > On Fri, 2022-07-01 at 20:48 +0000, ffmpegagent wrote: >> Missing SEI information has always been a major drawback when using the QSV >> decoders. I used to think that there's no chance to get at the data without >> explicit implementation from the MSDK side (or doing something weird like >> parsing in parallel). It turned out that there's a hardly known api method >> that provides access to all SEI (h264/hevc) or user data (mpeg2video). >> >> This allows to get things like closed captions, frame packing, display >> orientation, HDR data (mastering display, content light level, etc.) without >> having to rely on those data being provided by the MSDK as extended buffers. >> >> The commit "Implement SEI parsing for QSV decoders" includes some hard-coded >> workarounds for MSDK bugs which I reported: >> > https://github.com/Intel-Media-SDK/MediaSDK/issues/2597#issuecomment-1072795311 >> >> But that doesn't help. Those bugs exist and I'm sharing my workarounds, >> which are empirically determined by testing a range of files. If someone is >> interested, I can provide private access to a repository where we have been >> testing this. Alternatively, I could also leave those workarounds out, and >> just skip those SEI types. >> >> In a previous version of this patchset, there was a concern that payload >> data might need to be re-ordered. Meanwhile I have researched this carefully >> and the conclusion is that this is not required. 
>> >> My detailed analysis can be found here: >> https://gist.github.com/softworkz/36c49586a8610813a32270ee3947a932 >> >> v4 >> >> * add new dependencies in makefile Now, build still works when someone uses >> configure --disable-decoder=h264 --disable-decoder=hevc >> --disable-decoder=mpegvideo --disable-decoder=mpeg1video >> --disable-decoder=mpeg2video --enable-libmfx >> >> v3 >> >> * frame.h: clarify doc text for av_frame_copy_side_data() >> >> v2 >> >> * qsvdec: make error handling consistent and clear >> * qsvdec: remove AV_CODEC_ID_MPEG1VIDEO constants >> * hevcdec: rename function to ff_hevc_set_side_data(), add doc text >> >> v3 >> >> * qsvdec: fix c/p error >> >> softworkz (6): >> avutil/frame: Add av_frame_copy_side_data() and >> av_frame_remove_all_side_data() >> avcodec/vpp_qsv: Copy side data from input to output frame >> avcodec/mpeg12dec: make mpeg_decode_user_data() accessible >> avcodec/hevcdec: make set_side_data() accessible >> avcodec/h264dec: make h264_export_frame_props() accessible >> avcodec/qsvdec: Implement SEI parsing for QSV decoders >> >> doc/APIchanges | 4 + >> libavcodec/Makefile | 6 +- >> libavcodec/h264_slice.c | 98 ++++++++------- >> libavcodec/h264dec.h | 2 + >> libavcodec/hevcdec.c | 117 +++++++++--------- >> libavcodec/hevcdec.h | 9 ++ >> libavcodec/hevcdsp.c | 4 + >> libavcodec/mpeg12.h | 28 +++++ >> libavcodec/mpeg12dec.c | 40 +----- >> libavcodec/qsvdec.c | 234 +++++++++++++++++++++++++++++++++++ >> libavfilter/qsvvpp.c | 6 + >> libavfilter/vf_overlay_qsv.c | 19 ++- >> libavutil/frame.c | 67 ++++++---- >> libavutil/frame.h | 32 +++++ >> libavutil/version.h | 2 +- >> 15 files changed, 494 insertions(+), 174 deletions(-) >> >> >> base-commit: 6a82412bf33108111eb3f63076fd5a51349ae114 >> Published-As: >> https://github.com/ffstaging/FFmpeg/releases/tag/pr-ffstaging-31%2Fsoftworkz%2Fsubmit_qsv_sei-v5 >> Fetch-It-Via: git fetch https://github.com/ffstaging/FFmpeg pr-ffstaging- >> 31/softworkz/submit_qsv_sei-v5 >> Pull-Request: https://github.com/ffstaging/FFmpeg/pull/31 >> >> Range-diff vs v4: >> >> 1: 7656477360 = 1: 7656477360 avutil/frame: Add av_frame_copy_side_data() >> and av_frame_remove_all_side_data() >> 2: 06976606c5 = 2: 06976606c5 avcodec/vpp_qsv: Copy side data from input to >> output frame >> 3: 320a8a535c = 3: 320a8a535c avcodec/mpeg12dec: make >> mpeg_decode_user_data() accessible >> 4: e58ad6564f = 4: e58ad6564f avcodec/hevcdec: make set_side_data() >> accessible >> 5: a57bfaebb9 = 5: 4c0b6eb4cb avcodec/h264dec: make >> h264_export_frame_props() accessible >> 6: 3f2588563e ! 
6: 19bc00be4d avcodec/qsvdec: Implement SEI parsing for QSV >> decoders >> @@ Commit message >> >> Signed-off-by: softworkz <softworkz@hotmail.com> >> >> + ## libavcodec/Makefile ## >> +@@ libavcodec/Makefile: OBJS-$(CONFIG_MSS34DSP) += >> mss34dsp.o >> + OBJS-$(CONFIG_PIXBLOCKDSP) += pixblockdsp.o >> + OBJS-$(CONFIG_QPELDSP) += qpeldsp.o >> + OBJS-$(CONFIG_QSV) += qsv.o >> +-OBJS-$(CONFIG_QSVDEC) += qsvdec.o >> ++OBJS-$(CONFIG_QSVDEC) += qsvdec.o h264_slice.o >> h264_cabac.o h264_cavlc.o \ >> ++ h264_direct.o h264_mb.o >> h264_picture.o h264_loopfilter.o \ >> ++ h264dec.o h264_refs.o cabac.o >> hevcdec.o hevc_refs.o \ >> ++ >> hevc_filter.o hevc_cabac.o hevc_mvs.o hevcpred.o hevcdsp.o \ >> ++ >> h274.o dovi_rpu.o mpeg12dec.o >> + OBJS-$(CONFIG_QSVENC) += qsvenc.o >> + OBJS-$(CONFIG_RANGECODER) += rangecoder.o >> + OBJS-$(CONFIG_RDFT) += rdft.o >> + >> + ## libavcodec/hevcdsp.c ## >> +@@ >> + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110- >> 1301 USA >> + */ >> + >> ++#include "config_components.h" >> ++ >> + #include "hevcdsp.h" >> + >> + static const int8_t transform[32][32] = { >> +@@ libavcodec/hevcdsp.c: int i = 0; >> + break; >> + } >> + >> ++#if CONFIG_HEVC_DECODER >> + #if ARCH_AARCH64 >> + ff_hevc_dsp_init_aarch64(hevcdsp, bit_depth); >> + #elif ARCH_ARM >> +@@ libavcodec/hevcdsp.c: int i = 0; >> + #elif ARCH_LOONGARCH >> + ff_hevc_dsp_init_loongarch(hevcdsp, bit_depth); >> + #endif >> ++#endif >> + } >> + >> ## libavcodec/qsvdec.c ## >> @@ >> #include "hwconfig.h" > > > Is there any comment on this patchset ? If not, I'd like to merge it to make QSV > decoders works with SEI info. > > Thanks > Haihao > This patchset has several issues, namely: 1. It tries to share the functions that are used for processing user/SEI data as they are, even the parts that are not intended to be used by QSV (like the picture structure stuff for H.264 or tmpgexs in case of MPEG-1/2). 2. It tries to keep the functions where they are, leading to the insanely long Makefile line in patch 6/6 (which I believe to be still incomplete: mpeg12dec.o pulls in mpegvideo.o mpegvideo_dec.o (which in turn pull in lots of dsp stuff) and where is h264dsp.o? (it seems like there is a reliance on the H.264 parser for this)). This is the opposite of modularity. 3. It just puts a huge Mpeg1Context in the QSVContext, although only a miniscule part of it is actually used. One should use a small context of its own instead. 4. It does not take into account that buffers need to be padded to be usable by the GetBit-API. (I have made an attempt to factor out the common parts of H.264 and H.265 SEI handling, which should make this here much easier.) - Andreas _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
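Of the review points above, 4 is the most mechanical to address: the bytes returned by GetPayload() must be copied into a zero-padded buffer before they are handed to init_get_bits8(). A possible shape, with hypothetical helper and scratch-buffer names (a sketch, assuming the scratch buffer is at least AV_INPUT_BUFFER_PADDING_SIZE bytes larger than the largest expected payload):

#include <string.h>
#include "avcodec.h"   /* AV_INPUT_BUFFER_PADDING_SIZE */
#include "get_bits.h"

/* Hypothetical helper: expose payload bytes to the GetBit API only through
 * a copy that carries the zero padding init_get_bits8() requires. */
static int init_get_bits_padded(GetBitContext *gb,
                                const uint8_t *data, int size,
                                uint8_t *scratch, int scratch_size)
{
    if (size < 0 || size > scratch_size - AV_INPUT_BUFFER_PADDING_SIZE)
        return AVERROR(EINVAL);

    memcpy(scratch, data, size);
    memset(scratch + size, 0, AV_INPUT_BUFFER_PADDING_SIZE);

    return init_get_bits8(gb, scratch, size);
}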
* Re: [FFmpeg-devel] [PATCH v5 0/6] Implement SEI parsing for QSV decoders 2022-07-21 21:56 ` Andreas Rheinhardt @ 2022-10-21 7:42 ` Soft Works 0 siblings, 0 replies; 65+ messages in thread From: Soft Works @ 2022-10-21 7:42 UTC (permalink / raw) To: FFmpeg development discussions and patches, Xiang, Haihao > -----Original Message----- > From: ffmpeg-devel <ffmpeg-devel-bounces@ffmpeg.org> On Behalf Of > Andreas Rheinhardt > Sent: Thursday, July 21, 2022 11:56 PM > To: Xiang, Haihao <haihao.xiang@intel.com>; ffmpeg-devel@ffmpeg.org > Subject: Re: [FFmpeg-devel] [PATCH v5 0/6] Implement SEI parsing for > QSV decoders > > Xiang, Haihao: > > On Fri, 2022-07-01 at 20:48 +0000, ffmpegagent wrote: > >> Missing SEI information has always been a major drawback when > using the QSV > >> decoders. I used to think that there's no chance to get at the > data without > >> explicit implementation from the MSDK side (or doing something > weird like > >> parsing in parallel). It turned out that there's a hardly known > api method > >> that provides access to all SEI (h264/hevc) or user data > (mpeg2video). > >> > >> This allows to get things like closed captions, frame packing, > display > >> orientation, HDR data (mastering display, content light level, > etc.) without > >> having to rely on those data being provided by the MSDK as > extended buffers. > >> > >> The commit "Implement SEI parsing for QSV decoders" includes some > hard-coded > >> workarounds for MSDK bugs which I reported: > >> > > https://github.com/Intel-Media- > SDK/MediaSDK/issues/2597#issuecomment-1072795311 > >> > >> But that doesn't help. Those bugs exist and I'm sharing my > workarounds, > >> which are empirically determined by testing a range of files. If > someone is > >> interested, I can provide private access to a repository where we > have been > >> testing this. Alternatively, I could also leave those workarounds > out, and > >> just skip those SEI types. > >> > >> In a previous version of this patchset, there was a concern that > payload > >> data might need to be re-ordered. Meanwhile I have researched this > carefully > >> and the conclusion is that this is not required. 
> >> > >> My detailed analysis can be found here: > >> https://gist.github.com/softworkz/36c49586a8610813a32270ee3947a932 > >> > >> v4 > >> > >> * add new dependencies in makefile Now, build still works when > someone uses > >> configure --disable-decoder=h264 --disable-decoder=hevc > >> --disable-decoder=mpegvideo --disable-decoder=mpeg1video > >> --disable-decoder=mpeg2video --enable-libmfx > >> > >> v3 > >> > >> * frame.h: clarify doc text for av_frame_copy_side_data() > >> > >> v2 > >> > >> * qsvdec: make error handling consistent and clear > >> * qsvdec: remove AV_CODEC_ID_MPEG1VIDEO constants > >> * hevcdec: rename function to ff_hevc_set_side_data(), add doc > text > >> > >> v3 > >> > >> * qsvdec: fix c/p error > >> > >> softworkz (6): > >> avutil/frame: Add av_frame_copy_side_data() and > >> av_frame_remove_all_side_data() > >> avcodec/vpp_qsv: Copy side data from input to output frame > >> avcodec/mpeg12dec: make mpeg_decode_user_data() accessible > >> avcodec/hevcdec: make set_side_data() accessible > >> avcodec/h264dec: make h264_export_frame_props() accessible > >> avcodec/qsvdec: Implement SEI parsing for QSV decoders > >> > >> doc/APIchanges | 4 + > >> libavcodec/Makefile | 6 +- > >> libavcodec/h264_slice.c | 98 ++++++++------- > >> libavcodec/h264dec.h | 2 + > >> libavcodec/hevcdec.c | 117 +++++++++--------- > >> libavcodec/hevcdec.h | 9 ++ > >> libavcodec/hevcdsp.c | 4 + > >> libavcodec/mpeg12.h | 28 +++++ > >> libavcodec/mpeg12dec.c | 40 +----- > >> libavcodec/qsvdec.c | 234 > +++++++++++++++++++++++++++++++++++ > >> libavfilter/qsvvpp.c | 6 + > >> libavfilter/vf_overlay_qsv.c | 19 ++- > >> libavutil/frame.c | 67 ++++++---- > >> libavutil/frame.h | 32 +++++ > >> libavutil/version.h | 2 +- > >> 15 files changed, 494 insertions(+), 174 deletions(-) > >> > >> > >> base-commit: 6a82412bf33108111eb3f63076fd5a51349ae114 > >> Published-As: > >> https://github.com/ffstaging/FFmpeg/releases/tag/pr-ffstaging- > 31%2Fsoftworkz%2Fsubmit_qsv_sei-v5 > >> Fetch-It-Via: git fetch https://github.com/ffstaging/FFmpeg pr- > ffstaging- > >> 31/softworkz/submit_qsv_sei-v5 > >> Pull-Request: https://github.com/ffstaging/FFmpeg/pull/31 > >> > >> Range-diff vs v4: > >> > >> 1: 7656477360 = 1: 7656477360 avutil/frame: Add > av_frame_copy_side_data() > >> and av_frame_remove_all_side_data() > >> 2: 06976606c5 = 2: 06976606c5 avcodec/vpp_qsv: Copy side data > from input to > >> output frame > >> 3: 320a8a535c = 3: 320a8a535c avcodec/mpeg12dec: make > >> mpeg_decode_user_data() accessible > >> 4: e58ad6564f = 4: e58ad6564f avcodec/hevcdec: make > set_side_data() > >> accessible > >> 5: a57bfaebb9 = 5: 4c0b6eb4cb avcodec/h264dec: make > >> h264_export_frame_props() accessible > >> 6: 3f2588563e ! 
6: 19bc00be4d avcodec/qsvdec: Implement SEI > parsing for QSV > >> decoders > >> @@ Commit message > >> > >> Signed-off-by: softworkz <softworkz@hotmail.com> > >> > >> + ## libavcodec/Makefile ## > >> +@@ libavcodec/Makefile: OBJS-$(CONFIG_MSS34DSP) > += > >> mss34dsp.o > >> + OBJS-$(CONFIG_PIXBLOCKDSP) += pixblockdsp.o > >> + OBJS-$(CONFIG_QPELDSP) += qpeldsp.o > >> + OBJS-$(CONFIG_QSV) += qsv.o > >> +-OBJS-$(CONFIG_QSVDEC) += qsvdec.o > >> ++OBJS-$(CONFIG_QSVDEC) += qsvdec.o > h264_slice.o > >> h264_cabac.o h264_cavlc.o \ > >> ++ h264_direct.o > h264_mb.o > >> h264_picture.o h264_loopfilter.o \ > >> ++ h264dec.o > h264_refs.o cabac.o > >> hevcdec.o hevc_refs.o \ > >> ++ > > >> hevc_filter.o hevc_cabac.o hevc_mvs.o hevcpred.o hevcdsp.o \ > >> ++ > > >> h274.o dovi_rpu.o mpeg12dec.o > >> + OBJS-$(CONFIG_QSVENC) += qsvenc.o > >> + OBJS-$(CONFIG_RANGECODER) += rangecoder.o > >> + OBJS-$(CONFIG_RDFT) += rdft.o > >> + > >> + ## libavcodec/hevcdsp.c ## > >> +@@ > >> + * Foundation, Inc., 51 Franklin Street, Fifth Floor, > Boston, MA 02110- > >> 1301 USA > >> + */ > >> + > >> ++#include "config_components.h" > >> ++ > >> + #include "hevcdsp.h" > >> + > >> + static const int8_t transform[32][32] = { > >> +@@ libavcodec/hevcdsp.c: int i = 0; > >> + break; > >> + } > >> + > >> ++#if CONFIG_HEVC_DECODER > >> + #if ARCH_AARCH64 > >> + ff_hevc_dsp_init_aarch64(hevcdsp, bit_depth); > >> + #elif ARCH_ARM > >> +@@ libavcodec/hevcdsp.c: int i = 0; > >> + #elif ARCH_LOONGARCH > >> + ff_hevc_dsp_init_loongarch(hevcdsp, bit_depth); > >> + #endif > >> ++#endif > >> + } > >> + > >> ## libavcodec/qsvdec.c ## > >> @@ > >> #include "hwconfig.h" > > > > > > Is there any comment on this patchset ? If not, I'd like to merge > it to make QSV > > decoders works with SEI info. > > > > Thanks > > Haihao > > > > This patchset has several issues, namely: > 1. It tries to share the functions that are used for processing > user/SEI > data as they are, even the parts that are not intended to be used by > QSV > (like the picture structure stuff for H.264 or tmpgexs in case of > MPEG-1/2). > 2. It tries to keep the functions where they are, leading to the > insanely long Makefile line in patch 6/6 (which I believe to be still > incomplete: mpeg12dec.o pulls in mpegvideo.o mpegvideo_dec.o (which > in > turn pull in lots of dsp stuff) and where is h264dsp.o? (it seems > like > there is a reliance on the H.264 parser for this)). This is the > opposite > of modularity. > 3. It just puts a huge Mpeg1Context in the QSVContext, although only > a > miniscule part of it is actually used. One should use a small context > of > its own instead. > 4. It does not take into account that buffers need to be padded to be > usable by the GetBit-API. Hi Andreas, thanks for pointing out (4), in fact I wasn't aware of this. I agree to your other points (1-3). Not that I wouldn't have been aware of those implications, I've just been afraid that larger refactorings could have minimized acceptance. > (I have made an attempt to factor out the common parts of H.264 and > H.265 SEI handling, which should make this here much easier.) Your patchset would in fact be very helpful and allow me to provide a much better and focused revision. Though, it is still pending at this time - are you planning to push it? 
Thanks, softworkz
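For context on point (4) above: FFmpeg's GetBit reader may fetch bytes past the nominal end of a buffer, so any buffer handed to init_get_bits8() must be followed by AV_INPUT_BUFFER_PADDING_SIZE zeroed bytes. The sketch below shows one way to honour that when pulling SEI payloads out of the MSDK, along the lines of what the v6 revision does by subtracting the padding from mfxPayload.BufSize. It is only an illustration under stated assumptions, not the submitted code: the helper name read_one_payload(), the PAYLOAD_BUFFER_SIZE value and the error handling are made up for the example, while MFXVideoDECODE_GetPayload(), mfxPayload and init_get_bits8() are the real APIs involved.

    #include <string.h>
    #include <mfxvideo.h>                /* MFXVideoDECODE_GetPayload(), mfxPayload */
    #include "libavcodec/avcodec.h"      /* AV_INPUT_BUFFER_PADDING_SIZE */
    #include "libavcodec/get_bits.h"     /* GetBitContext, init_get_bits8() */

    #define PAYLOAD_BUFFER_SIZE 4096     /* illustrative; the real size is an implementation choice */

    /* Fetch one SEI payload from the decoder session and hand it to the bit
     * reader. 'buf' must hold PAYLOAD_BUFFER_SIZE bytes and outlive 'gb'
     * (in the actual patch it lives in the QSVContext).
     * Returns <0 on error, 0 if there was nothing to read, >0 otherwise. */
    static int read_one_payload(mfxSession session, mfxU8 *buf, GetBitContext *gb)
    {
        mfxPayload payload = {
            .Data    = buf,
            /* Offer the MSDK everything except the padding, so the last
             * AV_INPUT_BUFFER_PADDING_SIZE bytes stay zeroed and the GetBit
             * reader can safely over-read into them. */
            .BufSize = PAYLOAD_BUFFER_SIZE - AV_INPUT_BUFFER_PADDING_SIZE,
        };
        mfxU64 ts;
        mfxStatus sts;
        int ret;

        memset(buf, 0, PAYLOAD_BUFFER_SIZE);
        sts = MFXVideoDECODE_GetPayload(session, &ts, &payload);
        if (sts != MFX_ERR_NONE)
            return AVERROR_UNKNOWN;
        if (payload.NumBit == 0)
            return 0;                    /* no (more) payloads for this frame */

        ret = init_get_bits8(gb, payload.Data, (payload.NumBit + 7) / 8);
        return ret < 0 ? ret : 1;
    }

Reserving the padding inside one fixed buffer, rather than reallocating a padded copy per payload, keeps the per-frame loop allocation-free, which appears to be the design choice taken in the patch below.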
* [FFmpeg-devel] [PATCH v6 0/3] Implement SEI parsing for QSV decoders 2022-07-01 20:48 ` [FFmpeg-devel] [PATCH v5 " ffmpegagent ` (6 preceding siblings ...) 2022-07-19 6:55 ` [FFmpeg-devel] [PATCH v5 0/6] " Xiang, Haihao @ 2022-10-25 4:03 ` ffmpegagent 2022-10-25 4:03 ` [FFmpeg-devel] [PATCH v6 1/3] avcodec/hevcdec: factor out ff_hevc_set_set_to_frame softworkz ` (2 more replies) 7 siblings, 3 replies; 65+ messages in thread From: ffmpegagent @ 2022-10-25 4:03 UTC (permalink / raw) To: ffmpeg-devel; +Cc: Kieran Kunhya, softworkz, Xiang, Haihao, Andreas Rheinhardt Missing SEI information has always been a major drawback when using the QSV decoders. It turned out that there's a hardly known api method that provides access to all SEI (h264/hevc) or user data (mpeg2video). This allows to get things like closed captions, frame packing, display orientation, HDR data (mastering display, content light level, etc.) without having to rely on those data being provided by the MSDK as extended buffers. The commit "Implement SEI parsing for QSV decoders" includes some hard-coded workarounds for MSDK bugs which I reported: https://github.com/Intel-Media-SDK/MediaSDK/issues/2597#issuecomment-1072795311 If someone is interested in the details please contact me directly. v5 * Split out the first two commits as a separate patchset https://github.com/ffstaging/FFmpeg/pull/44 * For mpeg12, parse A53 data in qsvdec directly * For h264 and hevc, factor out ff_hxxx_set_sei_to_frame functions to avoid being dependent on the full decoder contexts * Ensure sufficient padding for get_bits API * Addresses all points (1, 2, 3, 4) made by Andreas https://patchwork.ffmpeg.org/project/ffmpeg/cover/pull.31.v5.ffstaging.FFmpeg.1656708534.ffmpegagent@gmail.com/ v4 * add new dependencies in makefile Now, build still works when someone uses configure --disable-decoder=h264 --disable-decoder=hevc --disable-decoder=mpegvideo --disable-decoder=mpeg1video --disable-decoder=mpeg2video --enable-libmfx v3 * frame.h: clarify doc text for av_frame_copy_side_data() v2 * qsvdec: make error handling consistent and clear * qsvdec: remove AV_CODEC_ID_MPEG1VIDEO constants * hevcdec: rename function to ff_hevc_set_side_data(), add doc text v3 * qsvdec: fix c/p error softworkz (3): avcodec/hevcdec: factor out ff_hevc_set_set_to_frame avcodec/h264dec: make h264_export_frame_props() accessible avcodec/qsvdec: Implement SEI parsing for QSV decoders libavcodec/Makefile | 2 +- libavcodec/h264_sei.c | 197 ++++++++++++++++++++++++ libavcodec/h264_sei.h | 2 + libavcodec/h264_slice.c | 190 +----------------------- libavcodec/hevc_sei.c | 252 +++++++++++++++++++++++++++++++ libavcodec/hevc_sei.h | 3 + libavcodec/hevcdec.c | 249 +------------------------------ libavcodec/qsvdec.c | 321 ++++++++++++++++++++++++++++++++++++++++ 8 files changed, 782 insertions(+), 434 deletions(-) base-commit: 882a17068fd8e62c7d38c14e6fb160d7c9fc446a Published-As: https://github.com/ffstaging/FFmpeg/releases/tag/pr-ffstaging-31%2Fsoftworkz%2Fsubmit_qsv_sei-v6 Fetch-It-Via: git fetch https://github.com/ffstaging/FFmpeg pr-ffstaging-31/softworkz/submit_qsv_sei-v6 Pull-Request: https://github.com/ffstaging/FFmpeg/pull/31 Range-diff vs v5: 1: 7656477360 < -: ---------- avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() 2: 06976606c5 < -: ---------- avcodec/vpp_qsv: Copy side data from input to output frame 3: 320a8a535c < -: ---------- avcodec/mpeg12dec: make mpeg_decode_user_data() accessible 4: e58ad6564f ! 
1: 4e9adcd90a avcodec/hevcdec: make set_side_data() accessible @@ Metadata Author: softworkz <softworkz@hotmail.com> ## Commit message ## - avcodec/hevcdec: make set_side_data() accessible + avcodec/hevcdec: factor out ff_hevc_set_set_to_frame Signed-off-by: softworkz <softworkz@hotmail.com> - ## libavcodec/hevcdec.c ## -@@ libavcodec/hevcdec.c: error: - return res; - } + ## libavcodec/hevc_sei.c ## +@@ + #include "hevc_ps.h" + #include "hevc_sei.h" --static int set_side_data(HEVCContext *s) -+int ff_hevc_set_side_data(AVCodecContext *logctx, HEVCSEI *sei, HEVCContext *s, AVFrame *out) ++#include "libavutil/display.h" ++#include "libavutil/film_grain_params.h" ++#include "libavutil/mastering_display_metadata.h" ++#include "libavutil/stereo3d.h" ++#include "libavutil/timecode.h" ++ + static int decode_nal_sei_decoded_picture_hash(HEVCSEIPictureHash *s, + GetByteContext *gb) { -- AVFrame *out = s->ref->frame; -- int ret; -+ int ret = 0; - -- if (s->sei.frame_packing.present && -- s->sei.frame_packing.arrangement_type >= 3 && -- s->sei.frame_packing.arrangement_type <= 5 && -- s->sei.frame_packing.content_interpretation_type > 0 && -- s->sei.frame_packing.content_interpretation_type < 3) { +@@ libavcodec/hevc_sei.c: void ff_hevc_reset_sei(HEVCSEI *s) + av_buffer_unref(&s->dynamic_hdr_plus.info); + av_buffer_unref(&s->dynamic_hdr_vivid.info); + } ++ ++int ff_hevc_set_sei_to_frame(AVCodecContext *logctx, HEVCSEI *sei, AVFrame *out, AVRational framerate, uint64_t seed, const VUI *vui, int bit_depth_luma, int bit_depth_chroma) ++{ + if (sei->frame_packing.present && + sei->frame_packing.arrangement_type >= 3 && + sei->frame_packing.arrangement_type <= 5 && + sei->frame_packing.content_interpretation_type > 0 && + sei->frame_packing.content_interpretation_type < 3) { - AVStereo3D *stereo = av_stereo3d_create_side_data(out); - if (!stereo) - return AVERROR(ENOMEM); - -- switch (s->sei.frame_packing.arrangement_type) { ++ AVStereo3D *stereo = av_stereo3d_create_side_data(out); ++ if (!stereo) ++ return AVERROR(ENOMEM); ++ + switch (sei->frame_packing.arrangement_type) { - case 3: -- if (s->sei.frame_packing.quincunx_subsampling) ++ case 3: + if (sei->frame_packing.quincunx_subsampling) - stereo->type = AV_STEREO3D_SIDEBYSIDE_QUINCUNX; - else - stereo->type = AV_STEREO3D_SIDEBYSIDE; -@@ libavcodec/hevcdec.c: static int set_side_data(HEVCContext *s) - break; - } - -- if (s->sei.frame_packing.content_interpretation_type == 2) ++ stereo->type = AV_STEREO3D_SIDEBYSIDE_QUINCUNX; ++ else ++ stereo->type = AV_STEREO3D_SIDEBYSIDE; ++ break; ++ case 4: ++ stereo->type = AV_STEREO3D_TOPBOTTOM; ++ break; ++ case 5: ++ stereo->type = AV_STEREO3D_FRAMESEQUENCE; ++ break; ++ } ++ + if (sei->frame_packing.content_interpretation_type == 2) - stereo->flags = AV_STEREO3D_FLAG_INVERT; - -- if (s->sei.frame_packing.arrangement_type == 5) { -- if (s->sei.frame_packing.current_frame_is_frame0_flag) ++ stereo->flags = AV_STEREO3D_FLAG_INVERT; ++ + if (sei->frame_packing.arrangement_type == 5) { + if (sei->frame_packing.current_frame_is_frame0_flag) - stereo->view = AV_STEREO3D_VIEW_LEFT; - else - stereo->view = AV_STEREO3D_VIEW_RIGHT; - } - } - -- if (s->sei.display_orientation.present && -- (s->sei.display_orientation.anticlockwise_rotation || -- s->sei.display_orientation.hflip || s->sei.display_orientation.vflip)) { -- double angle = s->sei.display_orientation.anticlockwise_rotation * 360 / (double) (1 << 16); ++ stereo->view = AV_STEREO3D_VIEW_LEFT; ++ else ++ stereo->view = AV_STEREO3D_VIEW_RIGHT; ++ } ++ } ++ + if 
(sei->display_orientation.present && + (sei->display_orientation.anticlockwise_rotation || + sei->display_orientation.hflip || sei->display_orientation.vflip)) { + double angle = sei->display_orientation.anticlockwise_rotation * 360 / (double) (1 << 16); - AVFrameSideData *rotation = av_frame_new_side_data(out, - AV_FRAME_DATA_DISPLAYMATRIX, - sizeof(int32_t) * 9); ++ AVFrameSideData *rotation = av_frame_new_side_data(out, ++ AV_FRAME_DATA_DISPLAYMATRIX, ++ sizeof(int32_t) * 9); ++ if (!rotation) ++ return AVERROR(ENOMEM); ++ ++ /* av_display_rotation_set() expects the angle in the clockwise ++ * direction, hence the first minus. ++ * The below code applies the flips after the rotation, yet ++ * the H.2645 specs require flipping to be applied first. ++ * Because of R O(phi) = O(-phi) R (where R is flipping around ++ * an arbitatry axis and O(phi) is the proper rotation by phi) ++ * we can create display matrices as desired by negating ++ * the degree once for every flip applied. */ ++ angle = -angle * (1 - 2 * !!sei->display_orientation.hflip) ++ * (1 - 2 * !!sei->display_orientation.vflip); ++ av_display_rotation_set((int32_t *)rotation->data, angle); ++ av_display_matrix_flip((int32_t *)rotation->data, ++ sei->display_orientation.hflip, ++ sei->display_orientation.vflip); ++ } ++ ++ if (sei->mastering_display.present) { ++ // HEVC uses a g,b,r ordering, which we convert to a more natural r,g,b ++ const int mapping[3] = {2, 0, 1}; ++ const int chroma_den = 50000; ++ const int luma_den = 10000; ++ int i; ++ AVMasteringDisplayMetadata *metadata = ++ av_mastering_display_metadata_create_side_data(out); ++ if (!metadata) ++ return AVERROR(ENOMEM); ++ ++ for (i = 0; i < 3; i++) { ++ const int j = mapping[i]; ++ metadata->display_primaries[i][0].num = sei->mastering_display.display_primaries[j][0]; ++ metadata->display_primaries[i][0].den = chroma_den; ++ metadata->display_primaries[i][1].num = sei->mastering_display.display_primaries[j][1]; ++ metadata->display_primaries[i][1].den = chroma_den; ++ } ++ metadata->white_point[0].num = sei->mastering_display.white_point[0]; ++ metadata->white_point[0].den = chroma_den; ++ metadata->white_point[1].num = sei->mastering_display.white_point[1]; ++ metadata->white_point[1].den = chroma_den; ++ ++ metadata->max_luminance.num = sei->mastering_display.max_luminance; ++ metadata->max_luminance.den = luma_den; ++ metadata->min_luminance.num = sei->mastering_display.min_luminance; ++ metadata->min_luminance.den = luma_den; ++ metadata->has_luminance = 1; ++ metadata->has_primaries = 1; ++ ++ av_log(logctx, AV_LOG_DEBUG, "Mastering Display Metadata:\n"); ++ av_log(logctx, AV_LOG_DEBUG, ++ "r(%5.4f,%5.4f) g(%5.4f,%5.4f) b(%5.4f %5.4f) wp(%5.4f, %5.4f)\n", ++ av_q2d(metadata->display_primaries[0][0]), ++ av_q2d(metadata->display_primaries[0][1]), ++ av_q2d(metadata->display_primaries[1][0]), ++ av_q2d(metadata->display_primaries[1][1]), ++ av_q2d(metadata->display_primaries[2][0]), ++ av_q2d(metadata->display_primaries[2][1]), ++ av_q2d(metadata->white_point[0]), av_q2d(metadata->white_point[1])); ++ av_log(logctx, AV_LOG_DEBUG, ++ "min_luminance=%f, max_luminance=%f\n", ++ av_q2d(metadata->min_luminance), av_q2d(metadata->max_luminance)); ++ } ++ if (sei->content_light.present) { ++ AVContentLightMetadata *metadata = ++ av_content_light_metadata_create_side_data(out); ++ if (!metadata) ++ return AVERROR(ENOMEM); ++ metadata->MaxCLL = sei->content_light.max_content_light_level; ++ metadata->MaxFALL = sei->content_light.max_pic_average_light_level; ++ ++ 
av_log(logctx, AV_LOG_DEBUG, "Content Light Level Metadata:\n"); ++ av_log(logctx, AV_LOG_DEBUG, "MaxCLL=%d, MaxFALL=%d\n", ++ metadata->MaxCLL, metadata->MaxFALL); ++ } ++ ++ if (sei->a53_caption.buf_ref) { ++ HEVCSEIA53Caption *a53 = &sei->a53_caption; ++ ++ AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_A53_CC, a53->buf_ref); ++ if (!sd) ++ av_buffer_unref(&a53->buf_ref); ++ a53->buf_ref = NULL; ++ } ++ ++ for (int i = 0; i < sei->unregistered.nb_buf_ref; i++) { ++ HEVCSEIUnregistered *unreg = &sei->unregistered; ++ ++ if (unreg->buf_ref[i]) { ++ AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, ++ AV_FRAME_DATA_SEI_UNREGISTERED, ++ unreg->buf_ref[i]); ++ if (!sd) ++ av_buffer_unref(&unreg->buf_ref[i]); ++ unreg->buf_ref[i] = NULL; ++ } ++ } ++ sei->unregistered.nb_buf_ref = 0; ++ ++ if (sei->timecode.present) { ++ uint32_t *tc_sd; ++ char tcbuf[AV_TIMECODE_STR_SIZE]; ++ AVFrameSideData *tcside = av_frame_new_side_data(out, AV_FRAME_DATA_S12M_TIMECODE, ++ sizeof(uint32_t) * 4); ++ if (!tcside) ++ return AVERROR(ENOMEM); ++ ++ tc_sd = (uint32_t*)tcside->data; ++ tc_sd[0] = sei->timecode.num_clock_ts; ++ ++ for (int i = 0; i < tc_sd[0]; i++) { ++ int drop = sei->timecode.cnt_dropped_flag[i]; ++ int hh = sei->timecode.hours_value[i]; ++ int mm = sei->timecode.minutes_value[i]; ++ int ss = sei->timecode.seconds_value[i]; ++ int ff = sei->timecode.n_frames[i]; ++ ++ tc_sd[i + 1] = av_timecode_get_smpte(framerate, drop, hh, mm, ss, ff); ++ av_timecode_make_smpte_tc_string2(tcbuf, framerate, tc_sd[i + 1], 0, 0); ++ av_dict_set(&out->metadata, "timecode", tcbuf, 0); ++ } ++ ++ sei->timecode.num_clock_ts = 0; ++ } ++ ++ if (sei->film_grain_characteristics.present) { ++ HEVCSEIFilmGrainCharacteristics *fgc = &sei->film_grain_characteristics; ++ AVFilmGrainParams *fgp = av_film_grain_params_create_side_data(out); ++ if (!fgp) ++ return AVERROR(ENOMEM); ++ ++ fgp->type = AV_FILM_GRAIN_PARAMS_H274; ++ fgp->seed = seed; /* no poc_offset in HEVC */ ++ fgp->codec.h274.model_id = fgc->model_id; ++ if (fgc->separate_colour_description_present_flag) { ++ fgp->codec.h274.bit_depth_luma = fgc->bit_depth_luma; ++ fgp->codec.h274.bit_depth_chroma = fgc->bit_depth_chroma; ++ fgp->codec.h274.color_range = fgc->full_range + 1; ++ fgp->codec.h274.color_primaries = fgc->color_primaries; ++ fgp->codec.h274.color_trc = fgc->transfer_characteristics; ++ fgp->codec.h274.color_space = fgc->matrix_coeffs; ++ } else { ++ fgp->codec.h274.bit_depth_luma = bit_depth_luma; ++ fgp->codec.h274.bit_depth_chroma = bit_depth_chroma; ++ if (vui->video_signal_type_present_flag) ++ fgp->codec.h274.color_range = vui->video_full_range_flag + 1; ++ else ++ fgp->codec.h274.color_range = AVCOL_RANGE_UNSPECIFIED; ++ if (vui->colour_description_present_flag) { ++ fgp->codec.h274.color_primaries = vui->colour_primaries; ++ fgp->codec.h274.color_trc = vui->transfer_characteristic; ++ fgp->codec.h274.color_space = vui->matrix_coeffs; ++ } else { ++ fgp->codec.h274.color_primaries = AVCOL_PRI_UNSPECIFIED; ++ fgp->codec.h274.color_trc = AVCOL_TRC_UNSPECIFIED; ++ fgp->codec.h274.color_space = AVCOL_SPC_UNSPECIFIED; ++ } ++ } ++ fgp->codec.h274.blending_mode_id = fgc->blending_mode_id; ++ fgp->codec.h274.log2_scale_factor = fgc->log2_scale_factor; ++ ++ memcpy(&fgp->codec.h274.component_model_present, &fgc->comp_model_present_flag, ++ sizeof(fgp->codec.h274.component_model_present)); ++ memcpy(&fgp->codec.h274.num_intensity_intervals, &fgc->num_intensity_intervals, ++ 
sizeof(fgp->codec.h274.num_intensity_intervals)); ++ memcpy(&fgp->codec.h274.num_model_values, &fgc->num_model_values, ++ sizeof(fgp->codec.h274.num_model_values)); ++ memcpy(&fgp->codec.h274.intensity_interval_lower_bound, &fgc->intensity_interval_lower_bound, ++ sizeof(fgp->codec.h274.intensity_interval_lower_bound)); ++ memcpy(&fgp->codec.h274.intensity_interval_upper_bound, &fgc->intensity_interval_upper_bound, ++ sizeof(fgp->codec.h274.intensity_interval_upper_bound)); ++ memcpy(&fgp->codec.h274.comp_model_value, &fgc->comp_model_value, ++ sizeof(fgp->codec.h274.comp_model_value)); ++ ++ fgc->present = fgc->persistence_flag; ++ } ++ ++ if (sei->dynamic_hdr_plus.info) { ++ AVBufferRef *info_ref = av_buffer_ref(sei->dynamic_hdr_plus.info); ++ if (!info_ref) ++ return AVERROR(ENOMEM); ++ ++ if (!av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_DYNAMIC_HDR_PLUS, info_ref)) { ++ av_buffer_unref(&info_ref); ++ return AVERROR(ENOMEM); ++ } ++ } ++ ++ if (sei->dynamic_hdr_vivid.info) { ++ AVBufferRef *info_ref = av_buffer_ref(sei->dynamic_hdr_vivid.info); ++ if (!info_ref) ++ return AVERROR(ENOMEM); ++ ++ if (!av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_DYNAMIC_HDR_VIVID, info_ref)) { ++ av_buffer_unref(&info_ref); ++ return AVERROR(ENOMEM); ++ } ++ } ++ ++ return 0; ++} + + ## libavcodec/hevc_sei.h ## +@@ + + #include "get_bits.h" + #include "hevc.h" ++#include "hevc_ps.h" + #include "sei.h" + + +@@ libavcodec/hevc_sei.h: int ff_hevc_decode_nal_sei(GetBitContext *gb, void *logctx, HEVCSEI *s, + */ + void ff_hevc_reset_sei(HEVCSEI *s); + ++int ff_hevc_set_sei_to_frame(AVCodecContext *logctx, HEVCSEI *sei, AVFrame *out, AVRational framerate, uint64_t seed, const VUI *vui, int bit_depth_luma, int bit_depth_chroma); ++ + #endif /* AVCODEC_HEVC_SEI_H */ + + ## libavcodec/hevcdec.c ## @@ libavcodec/hevcdec.c: static int set_side_data(HEVCContext *s) - * (1 - 2 * !!s->sei.display_orientation.vflip); - av_display_rotation_set((int32_t *)rotation->data, angle); - av_display_matrix_flip((int32_t *)rotation->data, + { + AVFrame *out = s->ref->frame; + int ret; +- +- if (s->sei.frame_packing.present && +- s->sei.frame_packing.arrangement_type >= 3 && +- s->sei.frame_packing.arrangement_type <= 5 && +- s->sei.frame_packing.content_interpretation_type > 0 && +- s->sei.frame_packing.content_interpretation_type < 3) { +- AVStereo3D *stereo = av_stereo3d_create_side_data(out); +- if (!stereo) +- return AVERROR(ENOMEM); +- +- switch (s->sei.frame_packing.arrangement_type) { +- case 3: +- if (s->sei.frame_packing.quincunx_subsampling) +- stereo->type = AV_STEREO3D_SIDEBYSIDE_QUINCUNX; +- else +- stereo->type = AV_STEREO3D_SIDEBYSIDE; +- break; +- case 4: +- stereo->type = AV_STEREO3D_TOPBOTTOM; +- break; +- case 5: +- stereo->type = AV_STEREO3D_FRAMESEQUENCE; +- break; +- } +- +- if (s->sei.frame_packing.content_interpretation_type == 2) +- stereo->flags = AV_STEREO3D_FLAG_INVERT; +- +- if (s->sei.frame_packing.arrangement_type == 5) { +- if (s->sei.frame_packing.current_frame_is_frame0_flag) +- stereo->view = AV_STEREO3D_VIEW_LEFT; +- else +- stereo->view = AV_STEREO3D_VIEW_RIGHT; +- } +- } +- +- if (s->sei.display_orientation.present && +- (s->sei.display_orientation.anticlockwise_rotation || +- s->sei.display_orientation.hflip || s->sei.display_orientation.vflip)) { +- double angle = s->sei.display_orientation.anticlockwise_rotation * 360 / (double) (1 << 16); +- AVFrameSideData *rotation = av_frame_new_side_data(out, +- AV_FRAME_DATA_DISPLAYMATRIX, +- sizeof(int32_t) * 9); +- if (!rotation) +- 
return AVERROR(ENOMEM); +- +- /* av_display_rotation_set() expects the angle in the clockwise +- * direction, hence the first minus. +- * The below code applies the flips after the rotation, yet +- * the H.2645 specs require flipping to be applied first. +- * Because of R O(phi) = O(-phi) R (where R is flipping around +- * an arbitatry axis and O(phi) is the proper rotation by phi) +- * we can create display matrices as desired by negating +- * the degree once for every flip applied. */ +- angle = -angle * (1 - 2 * !!s->sei.display_orientation.hflip) +- * (1 - 2 * !!s->sei.display_orientation.vflip); +- av_display_rotation_set((int32_t *)rotation->data, angle); +- av_display_matrix_flip((int32_t *)rotation->data, - s->sei.display_orientation.hflip, - s->sei.display_orientation.vflip); -+ sei->display_orientation.hflip, -+ sei->display_orientation.vflip); - } +- } ++ const HEVCSPS *sps = s->ps.sps; // Decrement the mastering display flag when IRAP frame has no_rasl_output_flag=1 // so the side data persists for the entire coded video sequence. -- if (s->sei.mastering_display.present > 0 && -+ if (s && sei->mastering_display.present > 0 && +@@ libavcodec/hevcdec.c: static int set_side_data(HEVCContext *s) IS_IRAP(s) && s->no_rasl_output_flag) { -- s->sei.mastering_display.present--; -+ sei->mastering_display.present--; + s->sei.mastering_display.present--; } - if (s->sei.mastering_display.present) { -+ if (sei->mastering_display.present) { - // HEVC uses a g,b,r ordering, which we convert to a more natural r,g,b - const int mapping[3] = {2, 0, 1}; - const int chroma_den = 50000; -@@ libavcodec/hevcdec.c: static int set_side_data(HEVCContext *s) - - for (i = 0; i < 3; i++) { - const int j = mapping[i]; +- // HEVC uses a g,b,r ordering, which we convert to a more natural r,g,b +- const int mapping[3] = {2, 0, 1}; +- const int chroma_den = 50000; +- const int luma_den = 10000; +- int i; +- AVMasteringDisplayMetadata *metadata = +- av_mastering_display_metadata_create_side_data(out); +- if (!metadata) +- return AVERROR(ENOMEM); +- +- for (i = 0; i < 3; i++) { +- const int j = mapping[i]; - metadata->display_primaries[i][0].num = s->sei.mastering_display.display_primaries[j][0]; -+ metadata->display_primaries[i][0].num = sei->mastering_display.display_primaries[j][0]; - metadata->display_primaries[i][0].den = chroma_den; +- metadata->display_primaries[i][0].den = chroma_den; - metadata->display_primaries[i][1].num = s->sei.mastering_display.display_primaries[j][1]; -+ metadata->display_primaries[i][1].num = sei->mastering_display.display_primaries[j][1]; - metadata->display_primaries[i][1].den = chroma_den; - } +- metadata->display_primaries[i][1].den = chroma_den; +- } - metadata->white_point[0].num = s->sei.mastering_display.white_point[0]; -+ metadata->white_point[0].num = sei->mastering_display.white_point[0]; - metadata->white_point[0].den = chroma_den; +- metadata->white_point[0].den = chroma_den; - metadata->white_point[1].num = s->sei.mastering_display.white_point[1]; -+ metadata->white_point[1].num = sei->mastering_display.white_point[1]; - metadata->white_point[1].den = chroma_den; - +- metadata->white_point[1].den = chroma_den; +- - metadata->max_luminance.num = s->sei.mastering_display.max_luminance; -+ metadata->max_luminance.num = sei->mastering_display.max_luminance; - metadata->max_luminance.den = luma_den; +- metadata->max_luminance.den = luma_den; - metadata->min_luminance.num = s->sei.mastering_display.min_luminance; -+ metadata->min_luminance.num = 
sei->mastering_display.min_luminance; - metadata->min_luminance.den = luma_den; - metadata->has_luminance = 1; - metadata->has_primaries = 1; - +- metadata->min_luminance.den = luma_den; +- metadata->has_luminance = 1; +- metadata->has_primaries = 1; +- - av_log(s->avctx, AV_LOG_DEBUG, "Mastering Display Metadata:\n"); - av_log(s->avctx, AV_LOG_DEBUG, -+ av_log(logctx, AV_LOG_DEBUG, "Mastering Display Metadata:\n"); -+ av_log(logctx, AV_LOG_DEBUG, - "r(%5.4f,%5.4f) g(%5.4f,%5.4f) b(%5.4f %5.4f) wp(%5.4f, %5.4f)\n", - av_q2d(metadata->display_primaries[0][0]), - av_q2d(metadata->display_primaries[0][1]), -@@ libavcodec/hevcdec.c: static int set_side_data(HEVCContext *s) - av_q2d(metadata->display_primaries[2][0]), - av_q2d(metadata->display_primaries[2][1]), - av_q2d(metadata->white_point[0]), av_q2d(metadata->white_point[1])); +- "r(%5.4f,%5.4f) g(%5.4f,%5.4f) b(%5.4f %5.4f) wp(%5.4f, %5.4f)\n", +- av_q2d(metadata->display_primaries[0][0]), +- av_q2d(metadata->display_primaries[0][1]), +- av_q2d(metadata->display_primaries[1][0]), +- av_q2d(metadata->display_primaries[1][1]), +- av_q2d(metadata->display_primaries[2][0]), +- av_q2d(metadata->display_primaries[2][1]), +- av_q2d(metadata->white_point[0]), av_q2d(metadata->white_point[1])); - av_log(s->avctx, AV_LOG_DEBUG, -+ av_log(logctx, AV_LOG_DEBUG, - "min_luminance=%f, max_luminance=%f\n", - av_q2d(metadata->min_luminance), av_q2d(metadata->max_luminance)); - } +- "min_luminance=%f, max_luminance=%f\n", +- av_q2d(metadata->min_luminance), av_q2d(metadata->max_luminance)); +- } // Decrement the mastering display flag when IRAP frame has no_rasl_output_flag=1 // so the side data persists for the entire coded video sequence. -- if (s->sei.content_light.present > 0 && -+ if (s && sei->content_light.present > 0 && + if (s->sei.content_light.present > 0 && IS_IRAP(s) && s->no_rasl_output_flag) { -- s->sei.content_light.present--; -+ sei->content_light.present--; + s->sei.content_light.present--; } - if (s->sei.content_light.present) { -+ if (sei->content_light.present) { - AVContentLightMetadata *metadata = - av_content_light_metadata_create_side_data(out); - if (!metadata) - return AVERROR(ENOMEM); +- AVContentLightMetadata *metadata = +- av_content_light_metadata_create_side_data(out); +- if (!metadata) +- return AVERROR(ENOMEM); - metadata->MaxCLL = s->sei.content_light.max_content_light_level; - metadata->MaxFALL = s->sei.content_light.max_pic_average_light_level; -+ metadata->MaxCLL = sei->content_light.max_content_light_level; -+ metadata->MaxFALL = sei->content_light.max_pic_average_light_level; - +- - av_log(s->avctx, AV_LOG_DEBUG, "Content Light Level Metadata:\n"); - av_log(s->avctx, AV_LOG_DEBUG, "MaxCLL=%d, MaxFALL=%d\n", -+ av_log(logctx, AV_LOG_DEBUG, "Content Light Level Metadata:\n"); -+ av_log(logctx, AV_LOG_DEBUG, "MaxCLL=%d, MaxFALL=%d\n", - metadata->MaxCLL, metadata->MaxFALL); - } - +- metadata->MaxCLL, metadata->MaxFALL); +- } +- - if (s->sei.a53_caption.buf_ref) { - HEVCSEIA53Caption *a53 = &s->sei.a53_caption; -+ if (sei->a53_caption.buf_ref) { -+ HEVCSEIA53Caption *a53 = &sei->a53_caption; - - AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_A53_CC, a53->buf_ref); - if (!sd) -@@ libavcodec/hevcdec.c: static int set_side_data(HEVCContext *s) - a53->buf_ref = NULL; - } - +- +- AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_A53_CC, a53->buf_ref); +- if (!sd) +- av_buffer_unref(&a53->buf_ref); +- a53->buf_ref = NULL; +- } +- - for (int i = 0; i < 
s->sei.unregistered.nb_buf_ref; i++) { - HEVCSEIUnregistered *unreg = &s->sei.unregistered; -+ for (int i = 0; i < sei->unregistered.nb_buf_ref; i++) { -+ HEVCSEIUnregistered *unreg = &sei->unregistered; - - if (unreg->buf_ref[i]) { - AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, -@@ libavcodec/hevcdec.c: static int set_side_data(HEVCContext *s) - unreg->buf_ref[i] = NULL; - } - } +- +- if (unreg->buf_ref[i]) { +- AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, +- AV_FRAME_DATA_SEI_UNREGISTERED, +- unreg->buf_ref[i]); +- if (!sd) +- av_buffer_unref(&unreg->buf_ref[i]); +- unreg->buf_ref[i] = NULL; +- } +- } - s->sei.unregistered.nb_buf_ref = 0; -+ sei->unregistered.nb_buf_ref = 0; - if (s->sei.timecode.present) { -+ if (s && sei->timecode.present) { - uint32_t *tc_sd; - char tcbuf[AV_TIMECODE_STR_SIZE]; - AVFrameSideData *tcside = av_frame_new_side_data(out, AV_FRAME_DATA_S12M_TIMECODE, -@@ libavcodec/hevcdec.c: static int set_side_data(HEVCContext *s) - return AVERROR(ENOMEM); - - tc_sd = (uint32_t*)tcside->data; +- uint32_t *tc_sd; +- char tcbuf[AV_TIMECODE_STR_SIZE]; +- AVFrameSideData *tcside = av_frame_new_side_data(out, AV_FRAME_DATA_S12M_TIMECODE, +- sizeof(uint32_t) * 4); +- if (!tcside) +- return AVERROR(ENOMEM); +- +- tc_sd = (uint32_t*)tcside->data; - tc_sd[0] = s->sei.timecode.num_clock_ts; -+ tc_sd[0] = sei->timecode.num_clock_ts; - - for (int i = 0; i < tc_sd[0]; i++) { +- +- for (int i = 0; i < tc_sd[0]; i++) { - int drop = s->sei.timecode.cnt_dropped_flag[i]; - int hh = s->sei.timecode.hours_value[i]; - int mm = s->sei.timecode.minutes_value[i]; - int ss = s->sei.timecode.seconds_value[i]; - int ff = s->sei.timecode.n_frames[i]; -+ int drop = sei->timecode.cnt_dropped_flag[i]; -+ int hh = sei->timecode.hours_value[i]; -+ int mm = sei->timecode.minutes_value[i]; -+ int ss = sei->timecode.seconds_value[i]; -+ int ff = sei->timecode.n_frames[i]; - - tc_sd[i + 1] = av_timecode_get_smpte(s->avctx->framerate, drop, hh, mm, ss, ff); - av_timecode_make_smpte_tc_string2(tcbuf, s->avctx->framerate, tc_sd[i + 1], 0, 0); - av_dict_set(&out->metadata, "timecode", tcbuf, 0); - } - +- +- tc_sd[i + 1] = av_timecode_get_smpte(s->avctx->framerate, drop, hh, mm, ss, ff); +- av_timecode_make_smpte_tc_string2(tcbuf, s->avctx->framerate, tc_sd[i + 1], 0, 0); +- av_dict_set(&out->metadata, "timecode", tcbuf, 0); +- } +- - s->sei.timecode.num_clock_ts = 0; -+ sei->timecode.num_clock_ts = 0; - } - +- } +- - if (s->sei.film_grain_characteristics.present) { - HEVCSEIFilmGrainCharacteristics *fgc = &s->sei.film_grain_characteristics; -+ if (s && sei->film_grain_characteristics.present) { -+ HEVCSEIFilmGrainCharacteristics *fgc = &sei->film_grain_characteristics; - AVFilmGrainParams *fgp = av_film_grain_params_create_side_data(out); - if (!fgp) - return AVERROR(ENOMEM); -@@ libavcodec/hevcdec.c: static int set_side_data(HEVCContext *s) - fgc->present = fgc->persistence_flag; - } - +- AVFilmGrainParams *fgp = av_film_grain_params_create_side_data(out); +- if (!fgp) +- return AVERROR(ENOMEM); +- +- fgp->type = AV_FILM_GRAIN_PARAMS_H274; +- fgp->seed = s->ref->poc; /* no poc_offset in HEVC */ +- +- fgp->codec.h274.model_id = fgc->model_id; +- if (fgc->separate_colour_description_present_flag) { +- fgp->codec.h274.bit_depth_luma = fgc->bit_depth_luma; +- fgp->codec.h274.bit_depth_chroma = fgc->bit_depth_chroma; +- fgp->codec.h274.color_range = fgc->full_range + 1; +- fgp->codec.h274.color_primaries = fgc->color_primaries; +- fgp->codec.h274.color_trc = fgc->transfer_characteristics; 
+- fgp->codec.h274.color_space = fgc->matrix_coeffs; +- } else { +- const HEVCSPS *sps = s->ps.sps; +- const VUI *vui = &sps->vui; +- fgp->codec.h274.bit_depth_luma = sps->bit_depth; +- fgp->codec.h274.bit_depth_chroma = sps->bit_depth_chroma; +- if (vui->video_signal_type_present_flag) +- fgp->codec.h274.color_range = vui->video_full_range_flag + 1; +- else +- fgp->codec.h274.color_range = AVCOL_RANGE_UNSPECIFIED; +- if (vui->colour_description_present_flag) { +- fgp->codec.h274.color_primaries = vui->colour_primaries; +- fgp->codec.h274.color_trc = vui->transfer_characteristic; +- fgp->codec.h274.color_space = vui->matrix_coeffs; +- } else { +- fgp->codec.h274.color_primaries = AVCOL_PRI_UNSPECIFIED; +- fgp->codec.h274.color_trc = AVCOL_TRC_UNSPECIFIED; +- fgp->codec.h274.color_space = AVCOL_SPC_UNSPECIFIED; +- } +- } +- fgp->codec.h274.blending_mode_id = fgc->blending_mode_id; +- fgp->codec.h274.log2_scale_factor = fgc->log2_scale_factor; +- +- memcpy(&fgp->codec.h274.component_model_present, &fgc->comp_model_present_flag, +- sizeof(fgp->codec.h274.component_model_present)); +- memcpy(&fgp->codec.h274.num_intensity_intervals, &fgc->num_intensity_intervals, +- sizeof(fgp->codec.h274.num_intensity_intervals)); +- memcpy(&fgp->codec.h274.num_model_values, &fgc->num_model_values, +- sizeof(fgp->codec.h274.num_model_values)); +- memcpy(&fgp->codec.h274.intensity_interval_lower_bound, &fgc->intensity_interval_lower_bound, +- sizeof(fgp->codec.h274.intensity_interval_lower_bound)); +- memcpy(&fgp->codec.h274.intensity_interval_upper_bound, &fgc->intensity_interval_upper_bound, +- sizeof(fgp->codec.h274.intensity_interval_upper_bound)); +- memcpy(&fgp->codec.h274.comp_model_value, &fgc->comp_model_value, +- sizeof(fgp->codec.h274.comp_model_value)); +- +- fgc->present = fgc->persistence_flag; +- } +- - if (s->sei.dynamic_hdr_plus.info) { - AVBufferRef *info_ref = av_buffer_ref(s->sei.dynamic_hdr_plus.info); -+ if (sei->dynamic_hdr_plus.info) { -+ AVBufferRef *info_ref = av_buffer_ref(sei->dynamic_hdr_plus.info); - if (!info_ref) - return AVERROR(ENOMEM); - -@@ libavcodec/hevcdec.c: static int set_side_data(HEVCContext *s) - } - } +- if (!info_ref) +- return AVERROR(ENOMEM); +- +- if (!av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_DYNAMIC_HDR_PLUS, info_ref)) { +- av_buffer_unref(&info_ref); +- return AVERROR(ENOMEM); +- } +- } ++ if ((ret = ff_hevc_set_sei_to_frame(s->avctx, &s->sei, out, s->avctx->framerate, s->ref->poc, &sps->vui, sps->bit_depth, sps->bit_depth_chroma) < 0)) ++ return ret; - if (s->rpu_buf) { + if (s && s->rpu_buf) { @@ libavcodec/hevcdec.c: static int set_side_data(HEVCContext *s) return ret; - if (s->sei.dynamic_hdr_vivid.info) { -+ if (s && s->sei.dynamic_hdr_vivid.info) { - AVBufferRef *info_ref = av_buffer_ref(s->sei.dynamic_hdr_vivid.info); - if (!info_ref) - return AVERROR(ENOMEM); -@@ libavcodec/hevcdec.c: static int hevc_frame_start(HEVCContext *s) - goto fail; - } - -- ret = set_side_data(s); -+ ret = ff_hevc_set_side_data(s->avctx, &s->sei, s, s->ref->frame); - if (ret < 0) - goto fail; - - - ## libavcodec/hevcdec.h ## -@@ libavcodec/hevcdec.h: void ff_hevc_hls_residual_coding(HEVCContext *s, int x0, int y0, - - void ff_hevc_hls_mvd_coding(HEVCContext *s, int x0, int y0, int log2_cb_size); +- AVBufferRef *info_ref = av_buffer_ref(s->sei.dynamic_hdr_vivid.info); +- if (!info_ref) +- return AVERROR(ENOMEM); +- +- if (!av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_DYNAMIC_HDR_VIVID, info_ref)) { +- av_buffer_unref(&info_ref); +- return AVERROR(ENOMEM); +- } 
+- } +- + return 0; + } -+/** -+ * Set the decodec side data to an AVFrame. -+ * @logctx context for logging. -+ * @sei HEVCSEI decoding context, must not be NULL. -+ * @s HEVCContext, can be NULL. -+ * @return < 0 on error, 0 otherwise. -+ */ -+int ff_hevc_set_side_data(AVCodecContext *logctx, HEVCSEI *sei, HEVCContext *s, AVFrame *out); -+ - extern const uint8_t ff_hevc_qpel_extra_before[4]; - extern const uint8_t ff_hevc_qpel_extra_after[4]; - extern const uint8_t ff_hevc_qpel_extra[4]; 5: 4c0b6eb4cb ! 2: 51b234c8d0 avcodec/h264dec: make h264_export_frame_props() accessible @@ Commit message Signed-off-by: softworkz <softworkz@hotmail.com> - ## libavcodec/h264_slice.c ## -@@ libavcodec/h264_slice.c: static int h264_init_ps(H264Context *h, const H264SliceContext *sl, int first_sl - return 0; - } + ## libavcodec/h264_sei.c ## +@@ + #include "h264_ps.h" + #include "h264_sei.h" + #include "sei.h" ++#include "libavutil/display.h" ++#include "libavutil/film_grain_params.h" ++#include "libavutil/stereo3d.h" ++#include "libavutil/timecode.h" --static int h264_export_frame_props(H264Context *h) -+int ff_h264_export_frame_props(AVCodecContext *logctx, H264SEIContext *sei, H264Context *h, AVFrame *out) - { -- const SPS *sps = h->ps.sps; -- H264Picture *cur = h->cur_pic_ptr; -- AVFrame *out = cur->f; -+ const SPS *sps = h ? h->ps.sps : NULL; -+ H264Picture *cur = h ? h->cur_pic_ptr : NULL; - - out->interlaced_frame = 0; - out->repeat_pict = 0; -@@ libavcodec/h264_slice.c: static int h264_export_frame_props(H264Context *h) - /* Signal interlacing information externally. */ - /* Prioritize picture timing SEI information over used - * decoding process if it exists. */ -- if (h->sei.picture_timing.present) { -- int ret = ff_h264_sei_process_picture_timing(&h->sei.picture_timing, sps, -- h->avctx); -+ if (sps && sei->picture_timing.present) { -+ int ret = ff_h264_sei_process_picture_timing(&sei->picture_timing, sps, -+ logctx); - if (ret < 0) { -- av_log(h->avctx, AV_LOG_ERROR, "Error processing a picture timing SEI\n"); -- if (h->avctx->err_recognition & AV_EF_EXPLODE) -+ av_log(logctx, AV_LOG_ERROR, "Error processing a picture timing SEI\n"); -+ if (logctx->err_recognition & AV_EF_EXPLODE) - return ret; -- h->sei.picture_timing.present = 0; -+ sei->picture_timing.present = 0; - } - } + #define AVERROR_PS_NOT_FOUND FFERRTAG(0xF8,'?','P','S') -- if (sps->pic_struct_present_flag && h->sei.picture_timing.present) { -- H264SEIPictureTiming *pt = &h->sei.picture_timing; -+ if (h && sps && sps->pic_struct_present_flag && sei->picture_timing.present) { -+ H264SEIPictureTiming *pt = &sei->picture_timing; - switch (pt->pic_struct) { - case H264_SEI_PIC_STRUCT_FRAME: - break; -@@ libavcodec/h264_slice.c: static int h264_export_frame_props(H264Context *h) - if ((pt->ct_type & 3) && - pt->pic_struct <= H264_SEI_PIC_STRUCT_BOTTOM_TOP) - out->interlaced_frame = (pt->ct_type & (1 << 1)) != 0; -- } else { -+ } else if (h) { - /* Derive interlacing flag from used decoding process. 
*/ - out->interlaced_frame = FIELD_OR_MBAFF_PICTURE(h); +@@ libavcodec/h264_sei.c: const char *ff_h264_sei_stereo_mode(const H264SEIFramePacking *h) + return NULL; } -- h->prev_interlaced_frame = out->interlaced_frame; + } ++ ++int ff_h264_set_sei_to_frame(AVCodecContext *avctx, H264SEIContext *sei, AVFrame *out, const SPS *sps, uint64_t seed) ++{ ++ if (sei->frame_packing.present && ++ sei->frame_packing.arrangement_type <= 6 && ++ sei->frame_packing.content_interpretation_type > 0 && ++ sei->frame_packing.content_interpretation_type < 3) { ++ H264SEIFramePacking *fp = &sei->frame_packing; ++ AVStereo3D *stereo = av_stereo3d_create_side_data(out); ++ if (stereo) { ++ switch (fp->arrangement_type) { ++ case H264_SEI_FPA_TYPE_CHECKERBOARD: ++ stereo->type = AV_STEREO3D_CHECKERBOARD; ++ break; ++ case H264_SEI_FPA_TYPE_INTERLEAVE_COLUMN: ++ stereo->type = AV_STEREO3D_COLUMNS; ++ break; ++ case H264_SEI_FPA_TYPE_INTERLEAVE_ROW: ++ stereo->type = AV_STEREO3D_LINES; ++ break; ++ case H264_SEI_FPA_TYPE_SIDE_BY_SIDE: ++ if (fp->quincunx_sampling_flag) ++ stereo->type = AV_STEREO3D_SIDEBYSIDE_QUINCUNX; ++ else ++ stereo->type = AV_STEREO3D_SIDEBYSIDE; ++ break; ++ case H264_SEI_FPA_TYPE_TOP_BOTTOM: ++ stereo->type = AV_STEREO3D_TOPBOTTOM; ++ break; ++ case H264_SEI_FPA_TYPE_INTERLEAVE_TEMPORAL: ++ stereo->type = AV_STEREO3D_FRAMESEQUENCE; ++ break; ++ case H264_SEI_FPA_TYPE_2D: ++ stereo->type = AV_STEREO3D_2D; ++ break; ++ } ++ ++ if (fp->content_interpretation_type == 2) ++ stereo->flags = AV_STEREO3D_FLAG_INVERT; ++ ++ if (fp->arrangement_type == H264_SEI_FPA_TYPE_INTERLEAVE_TEMPORAL) { ++ if (fp->current_frame_is_frame0_flag) ++ stereo->view = AV_STEREO3D_VIEW_LEFT; ++ else ++ stereo->view = AV_STEREO3D_VIEW_RIGHT; ++ } ++ } ++ } ++ ++ if (sei->display_orientation.present && ++ (sei->display_orientation.anticlockwise_rotation || ++ sei->display_orientation.hflip || ++ sei->display_orientation.vflip)) { ++ H264SEIDisplayOrientation *o = &sei->display_orientation; ++ double angle = o->anticlockwise_rotation * 360 / (double) (1 << 16); ++ AVFrameSideData *rotation = av_frame_new_side_data(out, ++ AV_FRAME_DATA_DISPLAYMATRIX, ++ sizeof(int32_t) * 9); ++ if (rotation) { ++ /* av_display_rotation_set() expects the angle in the clockwise ++ * direction, hence the first minus. ++ * The below code applies the flips after the rotation, yet ++ * the H.2645 specs require flipping to be applied first. ++ * Because of R O(phi) = O(-phi) R (where R is flipping around ++ * an arbitatry axis and O(phi) is the proper rotation by phi) ++ * we can create display matrices as desired by negating ++ * the degree once for every flip applied. 
*/ ++ angle = -angle * (1 - 2 * !!o->hflip) * (1 - 2 * !!o->vflip); ++ av_display_rotation_set((int32_t *)rotation->data, angle); ++ av_display_matrix_flip((int32_t *)rotation->data, ++ o->hflip, o->vflip); ++ } ++ } ++ ++ if (sei->afd.present) { ++ AVFrameSideData *sd = av_frame_new_side_data(out, AV_FRAME_DATA_AFD, ++ sizeof(uint8_t)); ++ ++ if (sd) { ++ *sd->data = sei->afd.active_format_description; ++ sei->afd.present = 0; ++ } ++ } ++ ++ if (sei->a53_caption.buf_ref) { ++ H264SEIA53Caption *a53 = &sei->a53_caption; ++ ++ AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_A53_CC, a53->buf_ref); ++ if (!sd) ++ av_buffer_unref(&a53->buf_ref); ++ a53->buf_ref = NULL; ++ ++ avctx->properties |= FF_CODEC_PROPERTY_CLOSED_CAPTIONS; ++ } ++ ++ for (int i = 0; i < sei->unregistered.nb_buf_ref; i++) { ++ H264SEIUnregistered *unreg = &sei->unregistered; ++ ++ if (unreg->buf_ref[i]) { ++ AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, ++ AV_FRAME_DATA_SEI_UNREGISTERED, ++ unreg->buf_ref[i]); ++ if (!sd) ++ av_buffer_unref(&unreg->buf_ref[i]); ++ unreg->buf_ref[i] = NULL; ++ } ++ } ++ sei->unregistered.nb_buf_ref = 0; ++ ++ if (sps && sei->film_grain_characteristics.present) { ++ H264SEIFilmGrainCharacteristics *fgc = &sei->film_grain_characteristics; ++ AVFilmGrainParams *fgp = av_film_grain_params_create_side_data(out); ++ if (!fgp) ++ return AVERROR(ENOMEM); ++ ++ fgp->type = AV_FILM_GRAIN_PARAMS_H274; ++ fgp->seed = seed; ++ ++ fgp->codec.h274.model_id = fgc->model_id; ++ if (fgc->separate_colour_description_present_flag) { ++ fgp->codec.h274.bit_depth_luma = fgc->bit_depth_luma; ++ fgp->codec.h274.bit_depth_chroma = fgc->bit_depth_chroma; ++ fgp->codec.h274.color_range = fgc->full_range + 1; ++ fgp->codec.h274.color_primaries = fgc->color_primaries; ++ fgp->codec.h274.color_trc = fgc->transfer_characteristics; ++ fgp->codec.h274.color_space = fgc->matrix_coeffs; ++ } else { ++ fgp->codec.h274.bit_depth_luma = sps->bit_depth_luma; ++ fgp->codec.h274.bit_depth_chroma = sps->bit_depth_chroma; ++ if (sps->video_signal_type_present_flag) ++ fgp->codec.h274.color_range = sps->full_range + 1; ++ else ++ fgp->codec.h274.color_range = AVCOL_RANGE_UNSPECIFIED; ++ if (sps->colour_description_present_flag) { ++ fgp->codec.h274.color_primaries = sps->color_primaries; ++ fgp->codec.h274.color_trc = sps->color_trc; ++ fgp->codec.h274.color_space = sps->colorspace; ++ } else { ++ fgp->codec.h274.color_primaries = AVCOL_PRI_UNSPECIFIED; ++ fgp->codec.h274.color_trc = AVCOL_TRC_UNSPECIFIED; ++ fgp->codec.h274.color_space = AVCOL_SPC_UNSPECIFIED; ++ } ++ } ++ fgp->codec.h274.blending_mode_id = fgc->blending_mode_id; ++ fgp->codec.h274.log2_scale_factor = fgc->log2_scale_factor; ++ ++ memcpy(&fgp->codec.h274.component_model_present, &fgc->comp_model_present_flag, ++ sizeof(fgp->codec.h274.component_model_present)); ++ memcpy(&fgp->codec.h274.num_intensity_intervals, &fgc->num_intensity_intervals, ++ sizeof(fgp->codec.h274.num_intensity_intervals)); ++ memcpy(&fgp->codec.h274.num_model_values, &fgc->num_model_values, ++ sizeof(fgp->codec.h274.num_model_values)); ++ memcpy(&fgp->codec.h274.intensity_interval_lower_bound, &fgc->intensity_interval_lower_bound, ++ sizeof(fgp->codec.h274.intensity_interval_lower_bound)); ++ memcpy(&fgp->codec.h274.intensity_interval_upper_bound, &fgc->intensity_interval_upper_bound, ++ sizeof(fgp->codec.h274.intensity_interval_upper_bound)); ++ memcpy(&fgp->codec.h274.comp_model_value, &fgc->comp_model_value, ++ 
sizeof(fgp->codec.h274.comp_model_value)); ++ ++ fgc->present = !!fgc->repetition_period; ++ ++ avctx->properties |= FF_CODEC_PROPERTY_FILM_GRAIN; ++ } ++ ++ if (sei->picture_timing.timecode_cnt > 0) { ++ uint32_t *tc_sd; ++ char tcbuf[AV_TIMECODE_STR_SIZE]; ++ ++ AVFrameSideData *tcside = av_frame_new_side_data(out, ++ AV_FRAME_DATA_S12M_TIMECODE, ++ sizeof(uint32_t)*4); ++ if (!tcside) ++ return AVERROR(ENOMEM); ++ ++ tc_sd = (uint32_t*)tcside->data; ++ tc_sd[0] = sei->picture_timing.timecode_cnt; ++ ++ for (int i = 0; i < tc_sd[0]; i++) { ++ int drop = sei->picture_timing.timecode[i].dropframe; ++ int hh = sei->picture_timing.timecode[i].hours; ++ int mm = sei->picture_timing.timecode[i].minutes; ++ int ss = sei->picture_timing.timecode[i].seconds; ++ int ff = sei->picture_timing.timecode[i].frame; ++ ++ tc_sd[i + 1] = av_timecode_get_smpte(avctx->framerate, drop, hh, mm, ss, ff); ++ av_timecode_make_smpte_tc_string2(tcbuf, avctx->framerate, tc_sd[i + 1], 0, 0); ++ av_dict_set(&out->metadata, "timecode", tcbuf, 0); ++ } ++ sei->picture_timing.timecode_cnt = 0; ++ } ++ ++ return 0; ++} + + ## libavcodec/h264_sei.h ## +@@ libavcodec/h264_sei.h: const char *ff_h264_sei_stereo_mode(const H264SEIFramePacking *h); + int ff_h264_sei_process_picture_timing(H264SEIPictureTiming *h, const SPS *sps, + void *logctx); -- if (cur->field_poc[0] != cur->field_poc[1]) { -+ if (h) -+ h->prev_interlaced_frame = out->interlaced_frame; -+ -+ if (sps && cur->field_poc[0] != cur->field_poc[1]) { - /* Derive top_field_first from field pocs. */ - out->top_field_first = cur->field_poc[0] < cur->field_poc[1]; -- } else { -- if (sps->pic_struct_present_flag && h->sei.picture_timing.present) { -+ } else if (sps) { -+ if (sps->pic_struct_present_flag && sei->picture_timing.present) { - /* Use picture timing SEI information. Even if it is a - * information of a past frame, better than nothing. 
*/ -- if (h->sei.picture_timing.pic_struct == H264_SEI_PIC_STRUCT_TOP_BOTTOM || -- h->sei.picture_timing.pic_struct == H264_SEI_PIC_STRUCT_TOP_BOTTOM_TOP) -+ if (sei->picture_timing.pic_struct == H264_SEI_PIC_STRUCT_TOP_BOTTOM || -+ sei->picture_timing.pic_struct == H264_SEI_PIC_STRUCT_TOP_BOTTOM_TOP) - out->top_field_first = 1; - else - out->top_field_first = 0; ++int ff_h264_set_sei_to_frame(AVCodecContext *avctx, H264SEIContext *sei, AVFrame *out, const SPS *sps, uint64_t seed); ++ + #endif /* AVCODEC_H264_SEI_H */ + + ## libavcodec/h264_slice.c ## @@ libavcodec/h264_slice.c: static int h264_export_frame_props(H264Context *h) } } @@ libavcodec/h264_slice.c: static int h264_export_frame_props(H264Context *h) - h->sei.frame_packing.content_interpretation_type > 0 && - h->sei.frame_packing.content_interpretation_type < 3) { - H264SEIFramePacking *fp = &h->sei.frame_packing; -+ if (sei->frame_packing.present && -+ sei->frame_packing.arrangement_type <= 6 && -+ sei->frame_packing.content_interpretation_type > 0 && -+ sei->frame_packing.content_interpretation_type < 3) { -+ H264SEIFramePacking *fp = &sei->frame_packing; - AVStereo3D *stereo = av_stereo3d_create_side_data(out); - if (stereo) { - switch (fp->arrangement_type) { -@@ libavcodec/h264_slice.c: static int h264_export_frame_props(H264Context *h) - } - } - +- AVStereo3D *stereo = av_stereo3d_create_side_data(out); +- if (stereo) { +- switch (fp->arrangement_type) { +- case H264_SEI_FPA_TYPE_CHECKERBOARD: +- stereo->type = AV_STEREO3D_CHECKERBOARD; +- break; +- case H264_SEI_FPA_TYPE_INTERLEAVE_COLUMN: +- stereo->type = AV_STEREO3D_COLUMNS; +- break; +- case H264_SEI_FPA_TYPE_INTERLEAVE_ROW: +- stereo->type = AV_STEREO3D_LINES; +- break; +- case H264_SEI_FPA_TYPE_SIDE_BY_SIDE: +- if (fp->quincunx_sampling_flag) +- stereo->type = AV_STEREO3D_SIDEBYSIDE_QUINCUNX; +- else +- stereo->type = AV_STEREO3D_SIDEBYSIDE; +- break; +- case H264_SEI_FPA_TYPE_TOP_BOTTOM: +- stereo->type = AV_STEREO3D_TOPBOTTOM; +- break; +- case H264_SEI_FPA_TYPE_INTERLEAVE_TEMPORAL: +- stereo->type = AV_STEREO3D_FRAMESEQUENCE; +- break; +- case H264_SEI_FPA_TYPE_2D: +- stereo->type = AV_STEREO3D_2D; +- break; +- } +- +- if (fp->content_interpretation_type == 2) +- stereo->flags = AV_STEREO3D_FLAG_INVERT; +- +- if (fp->arrangement_type == H264_SEI_FPA_TYPE_INTERLEAVE_TEMPORAL) { +- if (fp->current_frame_is_frame0_flag) +- stereo->view = AV_STEREO3D_VIEW_LEFT; +- else +- stereo->view = AV_STEREO3D_VIEW_RIGHT; +- } +- } +- } +- - if (h->sei.display_orientation.present && - (h->sei.display_orientation.anticlockwise_rotation || - h->sei.display_orientation.hflip || - h->sei.display_orientation.vflip)) { - H264SEIDisplayOrientation *o = &h->sei.display_orientation; -+ if (sei->display_orientation.present && -+ (sei->display_orientation.anticlockwise_rotation || -+ sei->display_orientation.hflip || -+ sei->display_orientation.vflip)) { -+ H264SEIDisplayOrientation *o = &sei->display_orientation; - double angle = o->anticlockwise_rotation * 360 / (double) (1 << 16); - AVFrameSideData *rotation = av_frame_new_side_data(out, - AV_FRAME_DATA_DISPLAYMATRIX, -@@ libavcodec/h264_slice.c: static int h264_export_frame_props(H264Context *h) - } - } - +- double angle = o->anticlockwise_rotation * 360 / (double) (1 << 16); +- AVFrameSideData *rotation = av_frame_new_side_data(out, +- AV_FRAME_DATA_DISPLAYMATRIX, +- sizeof(int32_t) * 9); +- if (rotation) { +- /* av_display_rotation_set() expects the angle in the clockwise +- * direction, hence the first minus. 
+- * The below code applies the flips after the rotation, yet +- * the H.2645 specs require flipping to be applied first. +- * Because of R O(phi) = O(-phi) R (where R is flipping around +- * an arbitatry axis and O(phi) is the proper rotation by phi) +- * we can create display matrices as desired by negating +- * the degree once for every flip applied. */ +- angle = -angle * (1 - 2 * !!o->hflip) * (1 - 2 * !!o->vflip); +- av_display_rotation_set((int32_t *)rotation->data, angle); +- av_display_matrix_flip((int32_t *)rotation->data, +- o->hflip, o->vflip); +- } +- } +- - if (h->sei.afd.present) { -+ if (sei->afd.present) { - AVFrameSideData *sd = av_frame_new_side_data(out, AV_FRAME_DATA_AFD, - sizeof(uint8_t)); - - if (sd) { +- AVFrameSideData *sd = av_frame_new_side_data(out, AV_FRAME_DATA_AFD, +- sizeof(uint8_t)); +- +- if (sd) { - *sd->data = h->sei.afd.active_format_description; - h->sei.afd.present = 0; -+ *sd->data = sei->afd.active_format_description; -+ sei->afd.present = 0; - } - } - +- } +- } +- - if (h->sei.a53_caption.buf_ref) { - H264SEIA53Caption *a53 = &h->sei.a53_caption; -+ if (sei->a53_caption.buf_ref) { -+ H264SEIA53Caption *a53 = &sei->a53_caption; - - AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_A53_CC, a53->buf_ref); - if (!sd) - av_buffer_unref(&a53->buf_ref); - a53->buf_ref = NULL; - +- +- AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_A53_CC, a53->buf_ref); +- if (!sd) +- av_buffer_unref(&a53->buf_ref); +- a53->buf_ref = NULL; +- - h->avctx->properties |= FF_CODEC_PROPERTY_CLOSED_CAPTIONS; -+ if (h) -+ h->avctx->properties |= FF_CODEC_PROPERTY_CLOSED_CAPTIONS; - } - +- } +- - for (int i = 0; i < h->sei.unregistered.nb_buf_ref; i++) { - H264SEIUnregistered *unreg = &h->sei.unregistered; -+ for (int i = 0; i < sei->unregistered.nb_buf_ref; i++) { -+ H264SEIUnregistered *unreg = &sei->unregistered; - - if (unreg->buf_ref[i]) { - AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, -@@ libavcodec/h264_slice.c: static int h264_export_frame_props(H264Context *h) - unreg->buf_ref[i] = NULL; - } - } +- +- if (unreg->buf_ref[i]) { +- AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, +- AV_FRAME_DATA_SEI_UNREGISTERED, +- unreg->buf_ref[i]); +- if (!sd) +- av_buffer_unref(&unreg->buf_ref[i]); +- unreg->buf_ref[i] = NULL; +- } +- } - h->sei.unregistered.nb_buf_ref = 0; -+ sei->unregistered.nb_buf_ref = 0; - +- - if (h->sei.film_grain_characteristics.present) { - H264SEIFilmGrainCharacteristics *fgc = &h->sei.film_grain_characteristics; -+ if (h && sps && sei->film_grain_characteristics.present) { -+ H264SEIFilmGrainCharacteristics *fgc = &sei->film_grain_characteristics; - AVFilmGrainParams *fgp = av_film_grain_params_create_side_data(out); - if (!fgp) - return AVERROR(ENOMEM); -@@ libavcodec/h264_slice.c: static int h264_export_frame_props(H264Context *h) - h->avctx->properties |= FF_CODEC_PROPERTY_FILM_GRAIN; - } - +- AVFilmGrainParams *fgp = av_film_grain_params_create_side_data(out); +- if (!fgp) +- return AVERROR(ENOMEM); +- +- fgp->type = AV_FILM_GRAIN_PARAMS_H274; +- fgp->seed = cur->poc + (h->poc_offset << 5); +- +- fgp->codec.h274.model_id = fgc->model_id; +- if (fgc->separate_colour_description_present_flag) { +- fgp->codec.h274.bit_depth_luma = fgc->bit_depth_luma; +- fgp->codec.h274.bit_depth_chroma = fgc->bit_depth_chroma; +- fgp->codec.h274.color_range = fgc->full_range + 1; +- fgp->codec.h274.color_primaries = fgc->color_primaries; +- fgp->codec.h274.color_trc = fgc->transfer_characteristics; 
+- fgp->codec.h274.color_space = fgc->matrix_coeffs; +- } else { +- fgp->codec.h274.bit_depth_luma = sps->bit_depth_luma; +- fgp->codec.h274.bit_depth_chroma = sps->bit_depth_chroma; +- if (sps->video_signal_type_present_flag) +- fgp->codec.h274.color_range = sps->full_range + 1; +- else +- fgp->codec.h274.color_range = AVCOL_RANGE_UNSPECIFIED; +- if (sps->colour_description_present_flag) { +- fgp->codec.h274.color_primaries = sps->color_primaries; +- fgp->codec.h274.color_trc = sps->color_trc; +- fgp->codec.h274.color_space = sps->colorspace; +- } else { +- fgp->codec.h274.color_primaries = AVCOL_PRI_UNSPECIFIED; +- fgp->codec.h274.color_trc = AVCOL_TRC_UNSPECIFIED; +- fgp->codec.h274.color_space = AVCOL_SPC_UNSPECIFIED; +- } +- } +- fgp->codec.h274.blending_mode_id = fgc->blending_mode_id; +- fgp->codec.h274.log2_scale_factor = fgc->log2_scale_factor; +- +- memcpy(&fgp->codec.h274.component_model_present, &fgc->comp_model_present_flag, +- sizeof(fgp->codec.h274.component_model_present)); +- memcpy(&fgp->codec.h274.num_intensity_intervals, &fgc->num_intensity_intervals, +- sizeof(fgp->codec.h274.num_intensity_intervals)); +- memcpy(&fgp->codec.h274.num_model_values, &fgc->num_model_values, +- sizeof(fgp->codec.h274.num_model_values)); +- memcpy(&fgp->codec.h274.intensity_interval_lower_bound, &fgc->intensity_interval_lower_bound, +- sizeof(fgp->codec.h274.intensity_interval_lower_bound)); +- memcpy(&fgp->codec.h274.intensity_interval_upper_bound, &fgc->intensity_interval_upper_bound, +- sizeof(fgp->codec.h274.intensity_interval_upper_bound)); +- memcpy(&fgp->codec.h274.comp_model_value, &fgc->comp_model_value, +- sizeof(fgp->codec.h274.comp_model_value)); +- +- fgc->present = !!fgc->repetition_period; +- +- h->avctx->properties |= FF_CODEC_PROPERTY_FILM_GRAIN; +- } +- - if (h->sei.picture_timing.timecode_cnt > 0) { -+ if (h && sei->picture_timing.timecode_cnt > 0) { - uint32_t *tc_sd; - char tcbuf[AV_TIMECODE_STR_SIZE]; - -@@ libavcodec/h264_slice.c: static int h264_export_frame_props(H264Context *h) - return AVERROR(ENOMEM); - - tc_sd = (uint32_t*)tcside->data; +- uint32_t *tc_sd; +- char tcbuf[AV_TIMECODE_STR_SIZE]; +- +- AVFrameSideData *tcside = av_frame_new_side_data(out, +- AV_FRAME_DATA_S12M_TIMECODE, +- sizeof(uint32_t)*4); +- if (!tcside) +- return AVERROR(ENOMEM); +- +- tc_sd = (uint32_t*)tcside->data; - tc_sd[0] = h->sei.picture_timing.timecode_cnt; -+ tc_sd[0] = sei->picture_timing.timecode_cnt; - - for (int i = 0; i < tc_sd[0]; i++) { +- +- for (int i = 0; i < tc_sd[0]; i++) { - int drop = h->sei.picture_timing.timecode[i].dropframe; - int hh = h->sei.picture_timing.timecode[i].hours; - int mm = h->sei.picture_timing.timecode[i].minutes; - int ss = h->sei.picture_timing.timecode[i].seconds; - int ff = h->sei.picture_timing.timecode[i].frame; -+ int drop = sei->picture_timing.timecode[i].dropframe; -+ int hh = sei->picture_timing.timecode[i].hours; -+ int mm = sei->picture_timing.timecode[i].minutes; -+ int ss = sei->picture_timing.timecode[i].seconds; -+ int ff = sei->picture_timing.timecode[i].frame; - - tc_sd[i + 1] = av_timecode_get_smpte(h->avctx->framerate, drop, hh, mm, ss, ff); - av_timecode_make_smpte_tc_string2(tcbuf, h->avctx->framerate, tc_sd[i + 1], 0, 0); -@@ libavcodec/h264_slice.c: static int h264_field_start(H264Context *h, const H264SliceContext *sl, - * field coded frames, since some SEI information is present for each field - * and is merged by the SEI parsing code. 
*/ - if (!FIELD_PICTURE(h) || !h->first_field || h->missing_fields > 1) { -- ret = h264_export_frame_props(h); -+ ret = ff_h264_export_frame_props(h->avctx, &h->sei, h, h->cur_pic_ptr->f); - if (ret < 0) - return ret; - - - ## libavcodec/h264dec.h ## -@@ libavcodec/h264dec.h: void ff_h264_free_tables(H264Context *h); - - void ff_h264_set_erpic(ERPicture *dst, H264Picture *src); +- +- tc_sd[i + 1] = av_timecode_get_smpte(h->avctx->framerate, drop, hh, mm, ss, ff); +- av_timecode_make_smpte_tc_string2(tcbuf, h->avctx->framerate, tc_sd[i + 1], 0, 0); +- av_dict_set(&out->metadata, "timecode", tcbuf, 0); +- } +- h->sei.picture_timing.timecode_cnt = 0; +- } +- +- return 0; ++ return ff_h264_set_sei_to_frame(h->avctx, &h->sei, out, sps, cur->poc + (h->poc_offset << 5)); + } -+int ff_h264_export_frame_props(AVCodecContext *logctx, H264SEIContext *sei, H264Context *h, AVFrame *out); -+ - #endif /* AVCODEC_H264DEC_H */ + static int h264_select_output_frame(H264Context *h) 6: 19bc00be4d ! 3: 61626ebb78 avcodec/qsvdec: Implement SEI parsing for QSV decoders @@ libavcodec/Makefile: OBJS-$(CONFIG_MSS34DSP) += mss34dsp.o OBJS-$(CONFIG_QPELDSP) += qpeldsp.o OBJS-$(CONFIG_QSV) += qsv.o -OBJS-$(CONFIG_QSVDEC) += qsvdec.o -+OBJS-$(CONFIG_QSVDEC) += qsvdec.o h264_slice.o h264_cabac.o h264_cavlc.o \ -+ h264_direct.o h264_mb.o h264_picture.o h264_loopfilter.o \ -+ h264dec.o h264_refs.o cabac.o hevcdec.o hevc_refs.o \ -+ hevc_filter.o hevc_cabac.o hevc_mvs.o hevcpred.o hevcdsp.o \ -+ h274.o dovi_rpu.o mpeg12dec.o ++OBJS-$(CONFIG_QSVDEC) += qsvdec.o h264_sei.o hevc_sei.o OBJS-$(CONFIG_QSVENC) += qsvenc.o OBJS-$(CONFIG_RANGECODER) += rangecoder.o OBJS-$(CONFIG_RDFT) += rdft.o - ## libavcodec/hevcdsp.c ## + ## libavcodec/qsvdec.c ## @@ - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -+#include "config_components.h" -+ - #include "hevcdsp.h" - - static const int8_t transform[32][32] = { -@@ libavcodec/hevcdsp.c: int i = 0; - break; - } + #include "libavutil/time.h" + #include "libavutil/imgutils.h" + #include "libavutil/film_grain_params.h" ++#include <libavutil/reverse.h> -+#if CONFIG_HEVC_DECODER - #if ARCH_AARCH64 - ff_hevc_dsp_init_aarch64(hevcdsp, bit_depth); - #elif ARCH_ARM -@@ libavcodec/hevcdsp.c: int i = 0; - #elif ARCH_LOONGARCH - ff_hevc_dsp_init_loongarch(hevcdsp, bit_depth); - #endif -+#endif - } - - ## libavcodec/qsvdec.c ## + #include "avcodec.h" + #include "codec_internal.h" @@ #include "hwconfig.h" #include "qsv.h" #include "qsv_internal.h" -+#include "h264dec.h" +#include "h264_sei.h" -+#include "hevcdec.h" +#include "hevc_ps.h" +#include "hevc_sei.h" -+#include "mpeg12.h" - - static const AVRational mfx_tb = { 1, 90000 }; + #if QSV_ONEVPL + #include <mfxdispatcher.h> @@ libavcodec/qsvdec.c: static const AVRational mfx_tb = { 1, 90000 }; AV_NOPTS_VALUE : pts_tb.num ? 
\ av_rescale_q(mfx_pts, mfx_tb, pts_tb) : mfx_pts) @@ libavcodec/qsvdec.c: typedef struct QSVContext { int nb_ext_buffers; + + mfxU8 payload_buffer[PAYLOAD_BUFFER_SIZE]; -+ Mpeg1Context mpeg_ctx; ++ AVBufferRef *a53_buf_ref; } QSVContext; static const AVCodecHWConfigInternal *const qsv_hw_configs[] = { @@ libavcodec/qsvdec.c: static int qsv_export_film_grain(AVCodecContext *avctx, mfxExtAV1FilmGrainParam - return 0; } #endif + +static int find_start_offset(mfxU8 data[4]) +{ + if (data[0] == 0 && data[1] == 0 && data[2] == 1) @@ libavcodec/qsvdec.c: static int qsv_export_film_grain(AVCodecContext *avctx, mfx +{ + H264SEIContext sei = { 0 }; + GetBitContext gb = { 0 }; -+ mfxPayload payload = { 0, .Data = &q->payload_buffer[0], .BufSize = sizeof(q->payload_buffer) }; ++ mfxPayload payload = { 0, .Data = &q->payload_buffer[0], .BufSize = sizeof(q->payload_buffer) - AV_INPUT_BUFFER_PADDING_SIZE }; + mfxU64 ts; + int ret; + @@ libavcodec/qsvdec.c: static int qsv_export_film_grain(AVCodecContext *avctx, mfx + } + + if (out) -+ return ff_h264_export_frame_props(avctx, &sei, NULL, out); ++ return ff_h264_set_sei_to_frame(avctx, &sei, out, NULL, 0); + + return 0; +} @@ libavcodec/qsvdec.c: static int qsv_export_film_grain(AVCodecContext *avctx, mfx + HEVCSEI sei = { 0 }; + HEVCParamSets ps = { 0 }; + GetBitContext gb = { 0 }; -+ mfxPayload payload = { 0, .Data = &q->payload_buffer[0], .BufSize = sizeof(q->payload_buffer) }; ++ mfxPayload payload = { 0, .Data = &q->payload_buffer[0], .BufSize = sizeof(q->payload_buffer) - AV_INPUT_BUFFER_PADDING_SIZE }; + mfxFrameSurface1 *surface = &out->surface; + mfxU64 ts; + int ret, has_logged = 0; @@ libavcodec/qsvdec.c: static int qsv_export_film_grain(AVCodecContext *avctx, mfx + } + + if (out && out->frame) -+ return ff_hevc_set_side_data(avctx, &sei, NULL, out->frame); ++ return ff_hevc_set_sei_to_frame(avctx, &sei, out->frame, avctx->framerate, 0, &ps.sps->vui, ps.sps->bit_depth, ps.sps->bit_depth_chroma); ++ ++ return 0; ++} ++ ++#define A53_MAX_CC_COUNT 2000 + ++static int mpeg_decode_a53_cc(AVCodecContext *avctx, QSVContext *s, ++ const uint8_t *p, int buf_size) ++{ ++ if (buf_size >= 6 && ++ p[0] == 'G' && p[1] == 'A' && p[2] == '9' && p[3] == '4' && ++ p[4] == 3 && (p[5] & 0x40)) { ++ /* extract A53 Part 4 CC data */ ++ unsigned cc_count = p[5] & 0x1f; ++ if (cc_count > 0 && buf_size >= 7 + cc_count * 3) { ++ const uint64_t old_size = s->a53_buf_ref ? s->a53_buf_ref->size : 0; ++ const uint64_t new_size = (old_size + cc_count ++ * UINT64_C(3)); ++ int ret; ++ ++ if (new_size > 3*A53_MAX_CC_COUNT) ++ return AVERROR(EINVAL); ++ ++ ret = av_buffer_realloc(&s->a53_buf_ref, new_size); ++ if (ret >= 0) ++ memcpy(s->a53_buf_ref->data + old_size, p + 7, cc_count * UINT64_C(3)); ++ ++ avctx->properties |= FF_CODEC_PROPERTY_CLOSED_CAPTIONS; ++ } ++ return 1; ++ } else if (buf_size >= 2 && p[0] == 0x03 && (p[1]&0x7f) == 0x01) { ++ /* extract SCTE-20 CC data */ ++ GetBitContext gb; ++ unsigned cc_count = 0; ++ int ret; ++ ++ init_get_bits8(&gb, p + 2, buf_size - 2); ++ cc_count = get_bits(&gb, 5); ++ if (cc_count > 0) { ++ uint64_t old_size = s->a53_buf_ref ? 
s->a53_buf_ref->size : 0; ++ uint64_t new_size = (old_size + cc_count * UINT64_C(3)); ++ if (new_size > 3 * A53_MAX_CC_COUNT) ++ return AVERROR(EINVAL); ++ ++ ret = av_buffer_realloc(&s->a53_buf_ref, new_size); ++ if (ret >= 0) { ++ uint8_t field, cc1, cc2; ++ uint8_t *cap = s->a53_buf_ref->data; ++ ++ memset(s->a53_buf_ref->data + old_size, 0, cc_count * 3); ++ for (unsigned i = 0; i < cc_count && get_bits_left(&gb) >= 26; i++) { ++ skip_bits(&gb, 2); // priority ++ field = get_bits(&gb, 2); ++ skip_bits(&gb, 5); // line_offset ++ cc1 = get_bits(&gb, 8); ++ cc2 = get_bits(&gb, 8); ++ skip_bits(&gb, 1); // marker ++ ++ if (!field) { // forbidden ++ cap[0] = cap[1] = cap[2] = 0x00; ++ } else { ++ field = (field == 2 ? 1 : 0); ++ ////if (!s1->mpeg_enc_ctx.top_field_first) field = !field; ++ cap[0] = 0x04 | field; ++ cap[1] = ff_reverse[cc1]; ++ cap[2] = ff_reverse[cc2]; ++ } ++ cap += 3; ++ } ++ } ++ avctx->properties |= FF_CODEC_PROPERTY_CLOSED_CAPTIONS; ++ } ++ return 1; ++ } else if (buf_size >= 11 && p[0] == 'C' && p[1] == 'C' && p[2] == 0x01 && p[3] == 0xf8) { ++ int cc_count = 0; ++ int i, ret; ++ // There is a caption count field in the data, but it is often ++ // incorrect. So count the number of captions present. ++ for (i = 5; i + 6 <= buf_size && ((p[i] & 0xfe) == 0xfe); i += 6) ++ cc_count++; ++ // Transform the DVD format into A53 Part 4 format ++ if (cc_count > 0) { ++ int old_size = s->a53_buf_ref ? s->a53_buf_ref->size : 0; ++ uint64_t new_size = (old_size + cc_count ++ * UINT64_C(6)); ++ if (new_size > 3*A53_MAX_CC_COUNT) ++ return AVERROR(EINVAL); ++ ++ ret = av_buffer_realloc(&s->a53_buf_ref, new_size); ++ if (ret >= 0) { ++ uint8_t field1 = !!(p[4] & 0x80); ++ uint8_t *cap = s->a53_buf_ref->data; ++ p += 5; ++ for (i = 0; i < cc_count; i++) { ++ cap[0] = (p[0] == 0xff && field1) ? 0xfc : 0xfd; ++ cap[1] = p[1]; ++ cap[2] = p[2]; ++ cap[3] = (p[3] == 0xff && !field1) ? 
0xfc : 0xfd; ++ cap[4] = p[4]; ++ cap[5] = p[5]; ++ cap += 6; ++ p += 6; ++ } ++ } ++ avctx->properties |= FF_CODEC_PROPERTY_CLOSED_CAPTIONS; ++ } ++ return 1; ++ } + return 0; +} + +static int parse_sei_mpeg12(AVCodecContext* avctx, QSVContext* q, AVFrame* out) +{ -+ Mpeg1Context *mpeg_ctx = &q->mpeg_ctx; -+ mfxPayload payload = { 0, .Data = &q->payload_buffer[0], .BufSize = sizeof(q->payload_buffer) }; ++ mfxPayload payload = { 0, .Data = &q->payload_buffer[0], .BufSize = sizeof(q->payload_buffer) - AV_INPUT_BUFFER_PADDING_SIZE }; + mfxU64 ts; + int ret; + @@ libavcodec/qsvdec.c: static int qsv_export_film_grain(AVCodecContext *avctx, mfx + + start++; + -+ ff_mpeg_decode_user_data(avctx, mpeg_ctx, &payload.Data[start], (int)((payload.NumBit + 7) / 8) - start); ++ mpeg_decode_a53_cc(avctx, q, &payload.Data[start], (int)((payload.NumBit + 7) / 8) - start); + + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d Numbits %d start %d -> %.s\n", payload.Type, payload.NumBit, start, (char *)(&payload.Data[start])); + } @@ libavcodec/qsvdec.c: static int qsv_export_film_grain(AVCodecContext *avctx, mfx + if (!out) + return 0; + -+ if (mpeg_ctx->a53_buf_ref) { -+ -+ AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_A53_CC, mpeg_ctx->a53_buf_ref); -+ if (!sd) -+ av_buffer_unref(&mpeg_ctx->a53_buf_ref); -+ mpeg_ctx->a53_buf_ref = NULL; -+ } -+ -+ if (mpeg_ctx->has_stereo3d) { -+ AVStereo3D *stereo = av_stereo3d_create_side_data(out); -+ if (!stereo) -+ return AVERROR(ENOMEM); -+ -+ *stereo = mpeg_ctx->stereo3d; -+ mpeg_ctx->has_stereo3d = 0; -+ } ++ if (q->a53_buf_ref) { + -+ if (mpeg_ctx->has_afd) { -+ AVFrameSideData *sd = av_frame_new_side_data(out, AV_FRAME_DATA_AFD, 1); ++ AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_A53_CC, q->a53_buf_ref); + if (!sd) -+ return AVERROR(ENOMEM); -+ -+ *sd->data = mpeg_ctx->afd; -+ mpeg_ctx->has_afd = 0; ++ av_buffer_unref(&q->a53_buf_ref); ++ q->a53_buf_ref = NULL; + } + + return 0; +} - ++ static int qsv_decode(AVCodecContext *avctx, QSVContext *q, AVFrame *frame, int *got_frame, + const AVPacket *avpkt) @@ libavcodec/qsvdec.c: static int qsv_decode(AVCodecContext *avctx, QSVContext *q, insurf, &outsurf, sync); if (ret == MFX_WRN_DEVICE_BUSY) -- ffmpeg-codebot _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
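For context on the find_start_offset() helper introduced in the diff above: SEI payloads returned by MFXVideoDECODE_GetPayload() may carry an Annex B start code prefix (00 00 01 or 00 00 00 01) that has to be skipped before handing the bytes to a bitstream reader. The following standalone sketch reproduces that detection outside the decoder so it can be compiled and tested in isolation; the test vectors in main() are illustrative only and not part of the patch.

#include <stdint.h>
#include <stdio.h>

/* Return the number of leading bytes occupied by an Annex B start code
 * (3 for 00 00 01, 4 for 00 00 00 01), or 0 if none is present. */
static int find_start_offset(const uint8_t data[4])
{
    if (data[0] == 0 && data[1] == 0 && data[2] == 1)
        return 3;
    if (data[0] == 0 && data[1] == 0 && data[2] == 0 && data[3] == 1)
        return 4;
    return 0;
}

int main(void)
{
    /* Illustrative inputs only */
    const uint8_t long_code[4]  = { 0, 0, 0, 1 };
    const uint8_t short_code[4] = { 0, 0, 1, 0x06 };
    const uint8_t no_code[4]    = { 0x06, 0x05, 0x10, 0x00 };

    printf("%d %d %d\n",
           find_start_offset(long_code),   /* 4 */
           find_start_offset(short_code),  /* 3 */
           find_start_offset(no_code));    /* 0 */
    return 0;
}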
* [FFmpeg-devel] [PATCH v6 1/3] avcodec/hevcdec: factor out ff_hevc_set_set_to_frame 2022-10-25 4:03 ` [FFmpeg-devel] [PATCH v6 0/3] " ffmpegagent @ 2022-10-25 4:03 ` softworkz 2022-10-25 4:03 ` [FFmpeg-devel] [PATCH v6 2/3] avcodec/h264dec: make h264_export_frame_props() accessible softworkz 2022-10-25 4:03 ` [FFmpeg-devel] [PATCH v6 3/3] avcodec/qsvdec: Implement SEI parsing for QSV decoders softworkz 2 siblings, 0 replies; 65+ messages in thread From: softworkz @ 2022-10-25 4:03 UTC (permalink / raw) To: ffmpeg-devel; +Cc: Kieran Kunhya, softworkz, Xiang, Haihao, Andreas Rheinhardt From: softworkz <softworkz@hotmail.com> Signed-off-by: softworkz <softworkz@hotmail.com> --- libavcodec/hevc_sei.c | 252 ++++++++++++++++++++++++++++++++++++++++++ libavcodec/hevc_sei.h | 3 + libavcodec/hevcdec.c | 249 +---------------------------------------- 3 files changed, 260 insertions(+), 244 deletions(-) diff --git a/libavcodec/hevc_sei.c b/libavcodec/hevc_sei.c index 631373e06f..6fd066e44f 100644 --- a/libavcodec/hevc_sei.c +++ b/libavcodec/hevc_sei.c @@ -30,6 +30,12 @@ #include "hevc_ps.h" #include "hevc_sei.h" +#include "libavutil/display.h" +#include "libavutil/film_grain_params.h" +#include "libavutil/mastering_display_metadata.h" +#include "libavutil/stereo3d.h" +#include "libavutil/timecode.h" + static int decode_nal_sei_decoded_picture_hash(HEVCSEIPictureHash *s, GetByteContext *gb) { @@ -578,3 +584,249 @@ void ff_hevc_reset_sei(HEVCSEI *s) av_buffer_unref(&s->dynamic_hdr_plus.info); av_buffer_unref(&s->dynamic_hdr_vivid.info); } + +int ff_hevc_set_sei_to_frame(AVCodecContext *logctx, HEVCSEI *sei, AVFrame *out, AVRational framerate, uint64_t seed, const VUI *vui, int bit_depth_luma, int bit_depth_chroma) +{ + if (sei->frame_packing.present && + sei->frame_packing.arrangement_type >= 3 && + sei->frame_packing.arrangement_type <= 5 && + sei->frame_packing.content_interpretation_type > 0 && + sei->frame_packing.content_interpretation_type < 3) { + AVStereo3D *stereo = av_stereo3d_create_side_data(out); + if (!stereo) + return AVERROR(ENOMEM); + + switch (sei->frame_packing.arrangement_type) { + case 3: + if (sei->frame_packing.quincunx_subsampling) + stereo->type = AV_STEREO3D_SIDEBYSIDE_QUINCUNX; + else + stereo->type = AV_STEREO3D_SIDEBYSIDE; + break; + case 4: + stereo->type = AV_STEREO3D_TOPBOTTOM; + break; + case 5: + stereo->type = AV_STEREO3D_FRAMESEQUENCE; + break; + } + + if (sei->frame_packing.content_interpretation_type == 2) + stereo->flags = AV_STEREO3D_FLAG_INVERT; + + if (sei->frame_packing.arrangement_type == 5) { + if (sei->frame_packing.current_frame_is_frame0_flag) + stereo->view = AV_STEREO3D_VIEW_LEFT; + else + stereo->view = AV_STEREO3D_VIEW_RIGHT; + } + } + + if (sei->display_orientation.present && + (sei->display_orientation.anticlockwise_rotation || + sei->display_orientation.hflip || sei->display_orientation.vflip)) { + double angle = sei->display_orientation.anticlockwise_rotation * 360 / (double) (1 << 16); + AVFrameSideData *rotation = av_frame_new_side_data(out, + AV_FRAME_DATA_DISPLAYMATRIX, + sizeof(int32_t) * 9); + if (!rotation) + return AVERROR(ENOMEM); + + /* av_display_rotation_set() expects the angle in the clockwise + * direction, hence the first minus. + * The below code applies the flips after the rotation, yet + * the H.2645 specs require flipping to be applied first. 
+ * Because of R O(phi) = O(-phi) R (where R is flipping around + * an arbitatry axis and O(phi) is the proper rotation by phi) + * we can create display matrices as desired by negating + * the degree once for every flip applied. */ + angle = -angle * (1 - 2 * !!sei->display_orientation.hflip) + * (1 - 2 * !!sei->display_orientation.vflip); + av_display_rotation_set((int32_t *)rotation->data, angle); + av_display_matrix_flip((int32_t *)rotation->data, + sei->display_orientation.hflip, + sei->display_orientation.vflip); + } + + if (sei->mastering_display.present) { + // HEVC uses a g,b,r ordering, which we convert to a more natural r,g,b + const int mapping[3] = {2, 0, 1}; + const int chroma_den = 50000; + const int luma_den = 10000; + int i; + AVMasteringDisplayMetadata *metadata = + av_mastering_display_metadata_create_side_data(out); + if (!metadata) + return AVERROR(ENOMEM); + + for (i = 0; i < 3; i++) { + const int j = mapping[i]; + metadata->display_primaries[i][0].num = sei->mastering_display.display_primaries[j][0]; + metadata->display_primaries[i][0].den = chroma_den; + metadata->display_primaries[i][1].num = sei->mastering_display.display_primaries[j][1]; + metadata->display_primaries[i][1].den = chroma_den; + } + metadata->white_point[0].num = sei->mastering_display.white_point[0]; + metadata->white_point[0].den = chroma_den; + metadata->white_point[1].num = sei->mastering_display.white_point[1]; + metadata->white_point[1].den = chroma_den; + + metadata->max_luminance.num = sei->mastering_display.max_luminance; + metadata->max_luminance.den = luma_den; + metadata->min_luminance.num = sei->mastering_display.min_luminance; + metadata->min_luminance.den = luma_den; + metadata->has_luminance = 1; + metadata->has_primaries = 1; + + av_log(logctx, AV_LOG_DEBUG, "Mastering Display Metadata:\n"); + av_log(logctx, AV_LOG_DEBUG, + "r(%5.4f,%5.4f) g(%5.4f,%5.4f) b(%5.4f %5.4f) wp(%5.4f, %5.4f)\n", + av_q2d(metadata->display_primaries[0][0]), + av_q2d(metadata->display_primaries[0][1]), + av_q2d(metadata->display_primaries[1][0]), + av_q2d(metadata->display_primaries[1][1]), + av_q2d(metadata->display_primaries[2][0]), + av_q2d(metadata->display_primaries[2][1]), + av_q2d(metadata->white_point[0]), av_q2d(metadata->white_point[1])); + av_log(logctx, AV_LOG_DEBUG, + "min_luminance=%f, max_luminance=%f\n", + av_q2d(metadata->min_luminance), av_q2d(metadata->max_luminance)); + } + if (sei->content_light.present) { + AVContentLightMetadata *metadata = + av_content_light_metadata_create_side_data(out); + if (!metadata) + return AVERROR(ENOMEM); + metadata->MaxCLL = sei->content_light.max_content_light_level; + metadata->MaxFALL = sei->content_light.max_pic_average_light_level; + + av_log(logctx, AV_LOG_DEBUG, "Content Light Level Metadata:\n"); + av_log(logctx, AV_LOG_DEBUG, "MaxCLL=%d, MaxFALL=%d\n", + metadata->MaxCLL, metadata->MaxFALL); + } + + if (sei->a53_caption.buf_ref) { + HEVCSEIA53Caption *a53 = &sei->a53_caption; + + AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_A53_CC, a53->buf_ref); + if (!sd) + av_buffer_unref(&a53->buf_ref); + a53->buf_ref = NULL; + } + + for (int i = 0; i < sei->unregistered.nb_buf_ref; i++) { + HEVCSEIUnregistered *unreg = &sei->unregistered; + + if (unreg->buf_ref[i]) { + AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, + AV_FRAME_DATA_SEI_UNREGISTERED, + unreg->buf_ref[i]); + if (!sd) + av_buffer_unref(&unreg->buf_ref[i]); + unreg->buf_ref[i] = NULL; + } + } + sei->unregistered.nb_buf_ref = 0; + + if (sei->timecode.present) 
{ + uint32_t *tc_sd; + char tcbuf[AV_TIMECODE_STR_SIZE]; + AVFrameSideData *tcside = av_frame_new_side_data(out, AV_FRAME_DATA_S12M_TIMECODE, + sizeof(uint32_t) * 4); + if (!tcside) + return AVERROR(ENOMEM); + + tc_sd = (uint32_t*)tcside->data; + tc_sd[0] = sei->timecode.num_clock_ts; + + for (int i = 0; i < tc_sd[0]; i++) { + int drop = sei->timecode.cnt_dropped_flag[i]; + int hh = sei->timecode.hours_value[i]; + int mm = sei->timecode.minutes_value[i]; + int ss = sei->timecode.seconds_value[i]; + int ff = sei->timecode.n_frames[i]; + + tc_sd[i + 1] = av_timecode_get_smpte(framerate, drop, hh, mm, ss, ff); + av_timecode_make_smpte_tc_string2(tcbuf, framerate, tc_sd[i + 1], 0, 0); + av_dict_set(&out->metadata, "timecode", tcbuf, 0); + } + + sei->timecode.num_clock_ts = 0; + } + + if (sei->film_grain_characteristics.present) { + HEVCSEIFilmGrainCharacteristics *fgc = &sei->film_grain_characteristics; + AVFilmGrainParams *fgp = av_film_grain_params_create_side_data(out); + if (!fgp) + return AVERROR(ENOMEM); + + fgp->type = AV_FILM_GRAIN_PARAMS_H274; + fgp->seed = seed; /* no poc_offset in HEVC */ + fgp->codec.h274.model_id = fgc->model_id; + if (fgc->separate_colour_description_present_flag) { + fgp->codec.h274.bit_depth_luma = fgc->bit_depth_luma; + fgp->codec.h274.bit_depth_chroma = fgc->bit_depth_chroma; + fgp->codec.h274.color_range = fgc->full_range + 1; + fgp->codec.h274.color_primaries = fgc->color_primaries; + fgp->codec.h274.color_trc = fgc->transfer_characteristics; + fgp->codec.h274.color_space = fgc->matrix_coeffs; + } else { + fgp->codec.h274.bit_depth_luma = bit_depth_luma; + fgp->codec.h274.bit_depth_chroma = bit_depth_chroma; + if (vui->video_signal_type_present_flag) + fgp->codec.h274.color_range = vui->video_full_range_flag + 1; + else + fgp->codec.h274.color_range = AVCOL_RANGE_UNSPECIFIED; + if (vui->colour_description_present_flag) { + fgp->codec.h274.color_primaries = vui->colour_primaries; + fgp->codec.h274.color_trc = vui->transfer_characteristic; + fgp->codec.h274.color_space = vui->matrix_coeffs; + } else { + fgp->codec.h274.color_primaries = AVCOL_PRI_UNSPECIFIED; + fgp->codec.h274.color_trc = AVCOL_TRC_UNSPECIFIED; + fgp->codec.h274.color_space = AVCOL_SPC_UNSPECIFIED; + } + } + fgp->codec.h274.blending_mode_id = fgc->blending_mode_id; + fgp->codec.h274.log2_scale_factor = fgc->log2_scale_factor; + + memcpy(&fgp->codec.h274.component_model_present, &fgc->comp_model_present_flag, + sizeof(fgp->codec.h274.component_model_present)); + memcpy(&fgp->codec.h274.num_intensity_intervals, &fgc->num_intensity_intervals, + sizeof(fgp->codec.h274.num_intensity_intervals)); + memcpy(&fgp->codec.h274.num_model_values, &fgc->num_model_values, + sizeof(fgp->codec.h274.num_model_values)); + memcpy(&fgp->codec.h274.intensity_interval_lower_bound, &fgc->intensity_interval_lower_bound, + sizeof(fgp->codec.h274.intensity_interval_lower_bound)); + memcpy(&fgp->codec.h274.intensity_interval_upper_bound, &fgc->intensity_interval_upper_bound, + sizeof(fgp->codec.h274.intensity_interval_upper_bound)); + memcpy(&fgp->codec.h274.comp_model_value, &fgc->comp_model_value, + sizeof(fgp->codec.h274.comp_model_value)); + + fgc->present = fgc->persistence_flag; + } + + if (sei->dynamic_hdr_plus.info) { + AVBufferRef *info_ref = av_buffer_ref(sei->dynamic_hdr_plus.info); + if (!info_ref) + return AVERROR(ENOMEM); + + if (!av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_DYNAMIC_HDR_PLUS, info_ref)) { + av_buffer_unref(&info_ref); + return AVERROR(ENOMEM); + } + } + + if 
(sei->dynamic_hdr_vivid.info) { + AVBufferRef *info_ref = av_buffer_ref(sei->dynamic_hdr_vivid.info); + if (!info_ref) + return AVERROR(ENOMEM); + + if (!av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_DYNAMIC_HDR_VIVID, info_ref)) { + av_buffer_unref(&info_ref); + return AVERROR(ENOMEM); + } + } + + return 0; +} diff --git a/libavcodec/hevc_sei.h b/libavcodec/hevc_sei.h index ef987f6781..9bb5c9e62f 100644 --- a/libavcodec/hevc_sei.h +++ b/libavcodec/hevc_sei.h @@ -27,6 +27,7 @@ #include "get_bits.h" #include "hevc.h" +#include "hevc_ps.h" #include "sei.h" @@ -166,4 +167,6 @@ int ff_hevc_decode_nal_sei(GetBitContext *gb, void *logctx, HEVCSEI *s, */ void ff_hevc_reset_sei(HEVCSEI *s); +int ff_hevc_set_sei_to_frame(AVCodecContext *logctx, HEVCSEI *sei, AVFrame *out, AVRational framerate, uint64_t seed, const VUI *vui, int bit_depth_luma, int bit_depth_chroma); + #endif /* AVCODEC_HEVC_SEI_H */ diff --git a/libavcodec/hevcdec.c b/libavcodec/hevcdec.c index fb44d8d3f2..6e24df37ab 100644 --- a/libavcodec/hevcdec.c +++ b/libavcodec/hevcdec.c @@ -2725,67 +2725,7 @@ static int set_side_data(HEVCContext *s) { AVFrame *out = s->ref->frame; int ret; - - if (s->sei.frame_packing.present && - s->sei.frame_packing.arrangement_type >= 3 && - s->sei.frame_packing.arrangement_type <= 5 && - s->sei.frame_packing.content_interpretation_type > 0 && - s->sei.frame_packing.content_interpretation_type < 3) { - AVStereo3D *stereo = av_stereo3d_create_side_data(out); - if (!stereo) - return AVERROR(ENOMEM); - - switch (s->sei.frame_packing.arrangement_type) { - case 3: - if (s->sei.frame_packing.quincunx_subsampling) - stereo->type = AV_STEREO3D_SIDEBYSIDE_QUINCUNX; - else - stereo->type = AV_STEREO3D_SIDEBYSIDE; - break; - case 4: - stereo->type = AV_STEREO3D_TOPBOTTOM; - break; - case 5: - stereo->type = AV_STEREO3D_FRAMESEQUENCE; - break; - } - - if (s->sei.frame_packing.content_interpretation_type == 2) - stereo->flags = AV_STEREO3D_FLAG_INVERT; - - if (s->sei.frame_packing.arrangement_type == 5) { - if (s->sei.frame_packing.current_frame_is_frame0_flag) - stereo->view = AV_STEREO3D_VIEW_LEFT; - else - stereo->view = AV_STEREO3D_VIEW_RIGHT; - } - } - - if (s->sei.display_orientation.present && - (s->sei.display_orientation.anticlockwise_rotation || - s->sei.display_orientation.hflip || s->sei.display_orientation.vflip)) { - double angle = s->sei.display_orientation.anticlockwise_rotation * 360 / (double) (1 << 16); - AVFrameSideData *rotation = av_frame_new_side_data(out, - AV_FRAME_DATA_DISPLAYMATRIX, - sizeof(int32_t) * 9); - if (!rotation) - return AVERROR(ENOMEM); - - /* av_display_rotation_set() expects the angle in the clockwise - * direction, hence the first minus. - * The below code applies the flips after the rotation, yet - * the H.2645 specs require flipping to be applied first. - * Because of R O(phi) = O(-phi) R (where R is flipping around - * an arbitatry axis and O(phi) is the proper rotation by phi) - * we can create display matrices as desired by negating - * the degree once for every flip applied. */ - angle = -angle * (1 - 2 * !!s->sei.display_orientation.hflip) - * (1 - 2 * !!s->sei.display_orientation.vflip); - av_display_rotation_set((int32_t *)rotation->data, angle); - av_display_matrix_flip((int32_t *)rotation->data, - s->sei.display_orientation.hflip, - s->sei.display_orientation.vflip); - } + const HEVCSPS *sps = s->ps.sps; // Decrement the mastering display flag when IRAP frame has no_rasl_output_flag=1 // so the side data persists for the entire coded video sequence. 
@@ -2793,185 +2733,17 @@ static int set_side_data(HEVCContext *s) IS_IRAP(s) && s->no_rasl_output_flag) { s->sei.mastering_display.present--; } - if (s->sei.mastering_display.present) { - // HEVC uses a g,b,r ordering, which we convert to a more natural r,g,b - const int mapping[3] = {2, 0, 1}; - const int chroma_den = 50000; - const int luma_den = 10000; - int i; - AVMasteringDisplayMetadata *metadata = - av_mastering_display_metadata_create_side_data(out); - if (!metadata) - return AVERROR(ENOMEM); - - for (i = 0; i < 3; i++) { - const int j = mapping[i]; - metadata->display_primaries[i][0].num = s->sei.mastering_display.display_primaries[j][0]; - metadata->display_primaries[i][0].den = chroma_den; - metadata->display_primaries[i][1].num = s->sei.mastering_display.display_primaries[j][1]; - metadata->display_primaries[i][1].den = chroma_den; - } - metadata->white_point[0].num = s->sei.mastering_display.white_point[0]; - metadata->white_point[0].den = chroma_den; - metadata->white_point[1].num = s->sei.mastering_display.white_point[1]; - metadata->white_point[1].den = chroma_den; - - metadata->max_luminance.num = s->sei.mastering_display.max_luminance; - metadata->max_luminance.den = luma_den; - metadata->min_luminance.num = s->sei.mastering_display.min_luminance; - metadata->min_luminance.den = luma_den; - metadata->has_luminance = 1; - metadata->has_primaries = 1; - - av_log(s->avctx, AV_LOG_DEBUG, "Mastering Display Metadata:\n"); - av_log(s->avctx, AV_LOG_DEBUG, - "r(%5.4f,%5.4f) g(%5.4f,%5.4f) b(%5.4f %5.4f) wp(%5.4f, %5.4f)\n", - av_q2d(metadata->display_primaries[0][0]), - av_q2d(metadata->display_primaries[0][1]), - av_q2d(metadata->display_primaries[1][0]), - av_q2d(metadata->display_primaries[1][1]), - av_q2d(metadata->display_primaries[2][0]), - av_q2d(metadata->display_primaries[2][1]), - av_q2d(metadata->white_point[0]), av_q2d(metadata->white_point[1])); - av_log(s->avctx, AV_LOG_DEBUG, - "min_luminance=%f, max_luminance=%f\n", - av_q2d(metadata->min_luminance), av_q2d(metadata->max_luminance)); - } // Decrement the mastering display flag when IRAP frame has no_rasl_output_flag=1 // so the side data persists for the entire coded video sequence. 
if (s->sei.content_light.present > 0 && IS_IRAP(s) && s->no_rasl_output_flag) { s->sei.content_light.present--; } - if (s->sei.content_light.present) { - AVContentLightMetadata *metadata = - av_content_light_metadata_create_side_data(out); - if (!metadata) - return AVERROR(ENOMEM); - metadata->MaxCLL = s->sei.content_light.max_content_light_level; - metadata->MaxFALL = s->sei.content_light.max_pic_average_light_level; - - av_log(s->avctx, AV_LOG_DEBUG, "Content Light Level Metadata:\n"); - av_log(s->avctx, AV_LOG_DEBUG, "MaxCLL=%d, MaxFALL=%d\n", - metadata->MaxCLL, metadata->MaxFALL); - } - - if (s->sei.a53_caption.buf_ref) { - HEVCSEIA53Caption *a53 = &s->sei.a53_caption; - - AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_A53_CC, a53->buf_ref); - if (!sd) - av_buffer_unref(&a53->buf_ref); - a53->buf_ref = NULL; - } - - for (int i = 0; i < s->sei.unregistered.nb_buf_ref; i++) { - HEVCSEIUnregistered *unreg = &s->sei.unregistered; - - if (unreg->buf_ref[i]) { - AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, - AV_FRAME_DATA_SEI_UNREGISTERED, - unreg->buf_ref[i]); - if (!sd) - av_buffer_unref(&unreg->buf_ref[i]); - unreg->buf_ref[i] = NULL; - } - } - s->sei.unregistered.nb_buf_ref = 0; - if (s->sei.timecode.present) { - uint32_t *tc_sd; - char tcbuf[AV_TIMECODE_STR_SIZE]; - AVFrameSideData *tcside = av_frame_new_side_data(out, AV_FRAME_DATA_S12M_TIMECODE, - sizeof(uint32_t) * 4); - if (!tcside) - return AVERROR(ENOMEM); - - tc_sd = (uint32_t*)tcside->data; - tc_sd[0] = s->sei.timecode.num_clock_ts; - - for (int i = 0; i < tc_sd[0]; i++) { - int drop = s->sei.timecode.cnt_dropped_flag[i]; - int hh = s->sei.timecode.hours_value[i]; - int mm = s->sei.timecode.minutes_value[i]; - int ss = s->sei.timecode.seconds_value[i]; - int ff = s->sei.timecode.n_frames[i]; - - tc_sd[i + 1] = av_timecode_get_smpte(s->avctx->framerate, drop, hh, mm, ss, ff); - av_timecode_make_smpte_tc_string2(tcbuf, s->avctx->framerate, tc_sd[i + 1], 0, 0); - av_dict_set(&out->metadata, "timecode", tcbuf, 0); - } - - s->sei.timecode.num_clock_ts = 0; - } - - if (s->sei.film_grain_characteristics.present) { - HEVCSEIFilmGrainCharacteristics *fgc = &s->sei.film_grain_characteristics; - AVFilmGrainParams *fgp = av_film_grain_params_create_side_data(out); - if (!fgp) - return AVERROR(ENOMEM); - - fgp->type = AV_FILM_GRAIN_PARAMS_H274; - fgp->seed = s->ref->poc; /* no poc_offset in HEVC */ - - fgp->codec.h274.model_id = fgc->model_id; - if (fgc->separate_colour_description_present_flag) { - fgp->codec.h274.bit_depth_luma = fgc->bit_depth_luma; - fgp->codec.h274.bit_depth_chroma = fgc->bit_depth_chroma; - fgp->codec.h274.color_range = fgc->full_range + 1; - fgp->codec.h274.color_primaries = fgc->color_primaries; - fgp->codec.h274.color_trc = fgc->transfer_characteristics; - fgp->codec.h274.color_space = fgc->matrix_coeffs; - } else { - const HEVCSPS *sps = s->ps.sps; - const VUI *vui = &sps->vui; - fgp->codec.h274.bit_depth_luma = sps->bit_depth; - fgp->codec.h274.bit_depth_chroma = sps->bit_depth_chroma; - if (vui->video_signal_type_present_flag) - fgp->codec.h274.color_range = vui->video_full_range_flag + 1; - else - fgp->codec.h274.color_range = AVCOL_RANGE_UNSPECIFIED; - if (vui->colour_description_present_flag) { - fgp->codec.h274.color_primaries = vui->colour_primaries; - fgp->codec.h274.color_trc = vui->transfer_characteristic; - fgp->codec.h274.color_space = vui->matrix_coeffs; - } else { - fgp->codec.h274.color_primaries = AVCOL_PRI_UNSPECIFIED; - fgp->codec.h274.color_trc = 
AVCOL_TRC_UNSPECIFIED; - fgp->codec.h274.color_space = AVCOL_SPC_UNSPECIFIED; - } - } - fgp->codec.h274.blending_mode_id = fgc->blending_mode_id; - fgp->codec.h274.log2_scale_factor = fgc->log2_scale_factor; - - memcpy(&fgp->codec.h274.component_model_present, &fgc->comp_model_present_flag, - sizeof(fgp->codec.h274.component_model_present)); - memcpy(&fgp->codec.h274.num_intensity_intervals, &fgc->num_intensity_intervals, - sizeof(fgp->codec.h274.num_intensity_intervals)); - memcpy(&fgp->codec.h274.num_model_values, &fgc->num_model_values, - sizeof(fgp->codec.h274.num_model_values)); - memcpy(&fgp->codec.h274.intensity_interval_lower_bound, &fgc->intensity_interval_lower_bound, - sizeof(fgp->codec.h274.intensity_interval_lower_bound)); - memcpy(&fgp->codec.h274.intensity_interval_upper_bound, &fgc->intensity_interval_upper_bound, - sizeof(fgp->codec.h274.intensity_interval_upper_bound)); - memcpy(&fgp->codec.h274.comp_model_value, &fgc->comp_model_value, - sizeof(fgp->codec.h274.comp_model_value)); - - fgc->present = fgc->persistence_flag; - } - - if (s->sei.dynamic_hdr_plus.info) { - AVBufferRef *info_ref = av_buffer_ref(s->sei.dynamic_hdr_plus.info); - if (!info_ref) - return AVERROR(ENOMEM); - - if (!av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_DYNAMIC_HDR_PLUS, info_ref)) { - av_buffer_unref(&info_ref); - return AVERROR(ENOMEM); - } - } + if ((ret = ff_hevc_set_sei_to_frame(s->avctx, &s->sei, out, s->avctx->framerate, s->ref->poc, &sps->vui, sps->bit_depth, sps->bit_depth_chroma) < 0)) + return ret; - if (s->rpu_buf) { + if (s && s->rpu_buf) { AVFrameSideData *rpu = av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_DOVI_RPU_BUFFER, s->rpu_buf); if (!rpu) return AVERROR(ENOMEM); @@ -2979,20 +2751,9 @@ static int set_side_data(HEVCContext *s) s->rpu_buf = NULL; } - if ((ret = ff_dovi_attach_side_data(&s->dovi_ctx, out)) < 0) + if (s && (ret = ff_dovi_attach_side_data(&s->dovi_ctx, out)) < 0) return ret; - if (s->sei.dynamic_hdr_vivid.info) { - AVBufferRef *info_ref = av_buffer_ref(s->sei.dynamic_hdr_vivid.info); - if (!info_ref) - return AVERROR(ENOMEM); - - if (!av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_DYNAMIC_HDR_VIVID, info_ref)) { - av_buffer_unref(&info_ref); - return AVERROR(ENOMEM); - } - } - return 0; } -- ffmpeg-codebot _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
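To illustrate how the factored-out helper is meant to be consumed, below is a minimal sketch of a caller outside hevcdec.c. It assumes it is compiled within libavcodec with this patch applied; the wrapper name attach_hevc_sei and the NULL-SPS guard are illustrative additions, not part of the patch (patch 3/3 performs the equivalent call from qsvdec.c).

#include "avcodec.h"
#include "hevc_ps.h"
#include "hevc_sei.h"

/* Illustrative wrapper: attach all SEI-derived side data parsed into
 * 'sei' to 'frame', using the active SPS for colour/bit-depth info. */
static int attach_hevc_sei(AVCodecContext *avctx, HEVCSEI *sei,
                           HEVCParamSets *ps, AVFrame *frame)
{
    /* Illustrative guard, not in the patch: without a parsed SPS there
     * is no VUI or bit depth to pass to the helper, so skip. */
    if (!ps->sps)
        return 0;

    /* seed is only used for film-grain side data; 0 as in patch 3/3 */
    return ff_hevc_set_sei_to_frame(avctx, sei, frame, avctx->framerate, 0,
                                    &ps->sps->vui,
                                    ps->sps->bit_depth,
                                    ps->sps->bit_depth_chroma);
}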
* [FFmpeg-devel] [PATCH v6 2/3] avcodec/h264dec: make h264_export_frame_props() accessible 2022-10-25 4:03 ` [FFmpeg-devel] [PATCH v6 0/3] " ffmpegagent 2022-10-25 4:03 ` [FFmpeg-devel] [PATCH v6 1/3] avcodec/hevcdec: factor out ff_hevc_set_set_to_frame softworkz @ 2022-10-25 4:03 ` softworkz 2022-10-25 4:03 ` [FFmpeg-devel] [PATCH v6 3/3] avcodec/qsvdec: Implement SEI parsing for QSV decoders softworkz 2 siblings, 0 replies; 65+ messages in thread From: softworkz @ 2022-10-25 4:03 UTC (permalink / raw) To: ffmpeg-devel; +Cc: Kieran Kunhya, softworkz, Xiang, Haihao, Andreas Rheinhardt From: softworkz <softworkz@hotmail.com> Signed-off-by: softworkz <softworkz@hotmail.com> --- libavcodec/h264_sei.c | 197 ++++++++++++++++++++++++++++++++++++++++ libavcodec/h264_sei.h | 2 + libavcodec/h264_slice.c | 190 +------------------------------------- 3 files changed, 200 insertions(+), 189 deletions(-) diff --git a/libavcodec/h264_sei.c b/libavcodec/h264_sei.c index 034ddb8f1c..d3612fdbc0 100644 --- a/libavcodec/h264_sei.c +++ b/libavcodec/h264_sei.c @@ -38,6 +38,10 @@ #include "h264_ps.h" #include "h264_sei.h" #include "sei.h" +#include "libavutil/display.h" +#include "libavutil/film_grain_params.h" +#include "libavutil/stereo3d.h" +#include "libavutil/timecode.h" #define AVERROR_PS_NOT_FOUND FFERRTAG(0xF8,'?','P','S') @@ -587,3 +591,196 @@ const char *ff_h264_sei_stereo_mode(const H264SEIFramePacking *h) return NULL; } } + +int ff_h264_set_sei_to_frame(AVCodecContext *avctx, H264SEIContext *sei, AVFrame *out, const SPS *sps, uint64_t seed) +{ + if (sei->frame_packing.present && + sei->frame_packing.arrangement_type <= 6 && + sei->frame_packing.content_interpretation_type > 0 && + sei->frame_packing.content_interpretation_type < 3) { + H264SEIFramePacking *fp = &sei->frame_packing; + AVStereo3D *stereo = av_stereo3d_create_side_data(out); + if (stereo) { + switch (fp->arrangement_type) { + case H264_SEI_FPA_TYPE_CHECKERBOARD: + stereo->type = AV_STEREO3D_CHECKERBOARD; + break; + case H264_SEI_FPA_TYPE_INTERLEAVE_COLUMN: + stereo->type = AV_STEREO3D_COLUMNS; + break; + case H264_SEI_FPA_TYPE_INTERLEAVE_ROW: + stereo->type = AV_STEREO3D_LINES; + break; + case H264_SEI_FPA_TYPE_SIDE_BY_SIDE: + if (fp->quincunx_sampling_flag) + stereo->type = AV_STEREO3D_SIDEBYSIDE_QUINCUNX; + else + stereo->type = AV_STEREO3D_SIDEBYSIDE; + break; + case H264_SEI_FPA_TYPE_TOP_BOTTOM: + stereo->type = AV_STEREO3D_TOPBOTTOM; + break; + case H264_SEI_FPA_TYPE_INTERLEAVE_TEMPORAL: + stereo->type = AV_STEREO3D_FRAMESEQUENCE; + break; + case H264_SEI_FPA_TYPE_2D: + stereo->type = AV_STEREO3D_2D; + break; + } + + if (fp->content_interpretation_type == 2) + stereo->flags = AV_STEREO3D_FLAG_INVERT; + + if (fp->arrangement_type == H264_SEI_FPA_TYPE_INTERLEAVE_TEMPORAL) { + if (fp->current_frame_is_frame0_flag) + stereo->view = AV_STEREO3D_VIEW_LEFT; + else + stereo->view = AV_STEREO3D_VIEW_RIGHT; + } + } + } + + if (sei->display_orientation.present && + (sei->display_orientation.anticlockwise_rotation || + sei->display_orientation.hflip || + sei->display_orientation.vflip)) { + H264SEIDisplayOrientation *o = &sei->display_orientation; + double angle = o->anticlockwise_rotation * 360 / (double) (1 << 16); + AVFrameSideData *rotation = av_frame_new_side_data(out, + AV_FRAME_DATA_DISPLAYMATRIX, + sizeof(int32_t) * 9); + if (rotation) { + /* av_display_rotation_set() expects the angle in the clockwise + * direction, hence the first minus. 
+ * The below code applies the flips after the rotation, yet + * the H.2645 specs require flipping to be applied first. + * Because of R O(phi) = O(-phi) R (where R is flipping around + * an arbitatry axis and O(phi) is the proper rotation by phi) + * we can create display matrices as desired by negating + * the degree once for every flip applied. */ + angle = -angle * (1 - 2 * !!o->hflip) * (1 - 2 * !!o->vflip); + av_display_rotation_set((int32_t *)rotation->data, angle); + av_display_matrix_flip((int32_t *)rotation->data, + o->hflip, o->vflip); + } + } + + if (sei->afd.present) { + AVFrameSideData *sd = av_frame_new_side_data(out, AV_FRAME_DATA_AFD, + sizeof(uint8_t)); + + if (sd) { + *sd->data = sei->afd.active_format_description; + sei->afd.present = 0; + } + } + + if (sei->a53_caption.buf_ref) { + H264SEIA53Caption *a53 = &sei->a53_caption; + + AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_A53_CC, a53->buf_ref); + if (!sd) + av_buffer_unref(&a53->buf_ref); + a53->buf_ref = NULL; + + avctx->properties |= FF_CODEC_PROPERTY_CLOSED_CAPTIONS; + } + + for (int i = 0; i < sei->unregistered.nb_buf_ref; i++) { + H264SEIUnregistered *unreg = &sei->unregistered; + + if (unreg->buf_ref[i]) { + AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, + AV_FRAME_DATA_SEI_UNREGISTERED, + unreg->buf_ref[i]); + if (!sd) + av_buffer_unref(&unreg->buf_ref[i]); + unreg->buf_ref[i] = NULL; + } + } + sei->unregistered.nb_buf_ref = 0; + + if (sps && sei->film_grain_characteristics.present) { + H264SEIFilmGrainCharacteristics *fgc = &sei->film_grain_characteristics; + AVFilmGrainParams *fgp = av_film_grain_params_create_side_data(out); + if (!fgp) + return AVERROR(ENOMEM); + + fgp->type = AV_FILM_GRAIN_PARAMS_H274; + fgp->seed = seed; + + fgp->codec.h274.model_id = fgc->model_id; + if (fgc->separate_colour_description_present_flag) { + fgp->codec.h274.bit_depth_luma = fgc->bit_depth_luma; + fgp->codec.h274.bit_depth_chroma = fgc->bit_depth_chroma; + fgp->codec.h274.color_range = fgc->full_range + 1; + fgp->codec.h274.color_primaries = fgc->color_primaries; + fgp->codec.h274.color_trc = fgc->transfer_characteristics; + fgp->codec.h274.color_space = fgc->matrix_coeffs; + } else { + fgp->codec.h274.bit_depth_luma = sps->bit_depth_luma; + fgp->codec.h274.bit_depth_chroma = sps->bit_depth_chroma; + if (sps->video_signal_type_present_flag) + fgp->codec.h274.color_range = sps->full_range + 1; + else + fgp->codec.h274.color_range = AVCOL_RANGE_UNSPECIFIED; + if (sps->colour_description_present_flag) { + fgp->codec.h274.color_primaries = sps->color_primaries; + fgp->codec.h274.color_trc = sps->color_trc; + fgp->codec.h274.color_space = sps->colorspace; + } else { + fgp->codec.h274.color_primaries = AVCOL_PRI_UNSPECIFIED; + fgp->codec.h274.color_trc = AVCOL_TRC_UNSPECIFIED; + fgp->codec.h274.color_space = AVCOL_SPC_UNSPECIFIED; + } + } + fgp->codec.h274.blending_mode_id = fgc->blending_mode_id; + fgp->codec.h274.log2_scale_factor = fgc->log2_scale_factor; + + memcpy(&fgp->codec.h274.component_model_present, &fgc->comp_model_present_flag, + sizeof(fgp->codec.h274.component_model_present)); + memcpy(&fgp->codec.h274.num_intensity_intervals, &fgc->num_intensity_intervals, + sizeof(fgp->codec.h274.num_intensity_intervals)); + memcpy(&fgp->codec.h274.num_model_values, &fgc->num_model_values, + sizeof(fgp->codec.h274.num_model_values)); + memcpy(&fgp->codec.h274.intensity_interval_lower_bound, &fgc->intensity_interval_lower_bound, + sizeof(fgp->codec.h274.intensity_interval_lower_bound)); + 
memcpy(&fgp->codec.h274.intensity_interval_upper_bound, &fgc->intensity_interval_upper_bound, + sizeof(fgp->codec.h274.intensity_interval_upper_bound)); + memcpy(&fgp->codec.h274.comp_model_value, &fgc->comp_model_value, + sizeof(fgp->codec.h274.comp_model_value)); + + fgc->present = !!fgc->repetition_period; + + avctx->properties |= FF_CODEC_PROPERTY_FILM_GRAIN; + } + + if (sei->picture_timing.timecode_cnt > 0) { + uint32_t *tc_sd; + char tcbuf[AV_TIMECODE_STR_SIZE]; + + AVFrameSideData *tcside = av_frame_new_side_data(out, + AV_FRAME_DATA_S12M_TIMECODE, + sizeof(uint32_t)*4); + if (!tcside) + return AVERROR(ENOMEM); + + tc_sd = (uint32_t*)tcside->data; + tc_sd[0] = sei->picture_timing.timecode_cnt; + + for (int i = 0; i < tc_sd[0]; i++) { + int drop = sei->picture_timing.timecode[i].dropframe; + int hh = sei->picture_timing.timecode[i].hours; + int mm = sei->picture_timing.timecode[i].minutes; + int ss = sei->picture_timing.timecode[i].seconds; + int ff = sei->picture_timing.timecode[i].frame; + + tc_sd[i + 1] = av_timecode_get_smpte(avctx->framerate, drop, hh, mm, ss, ff); + av_timecode_make_smpte_tc_string2(tcbuf, avctx->framerate, tc_sd[i + 1], 0, 0); + av_dict_set(&out->metadata, "timecode", tcbuf, 0); + } + sei->picture_timing.timecode_cnt = 0; + } + + return 0; +} diff --git a/libavcodec/h264_sei.h b/libavcodec/h264_sei.h index f9166b45df..05945523da 100644 --- a/libavcodec/h264_sei.h +++ b/libavcodec/h264_sei.h @@ -221,4 +221,6 @@ const char *ff_h264_sei_stereo_mode(const H264SEIFramePacking *h); int ff_h264_sei_process_picture_timing(H264SEIPictureTiming *h, const SPS *sps, void *logctx); +int ff_h264_set_sei_to_frame(AVCodecContext *avctx, H264SEIContext *sei, AVFrame *out, const SPS *sps, uint64_t seed); + #endif /* AVCODEC_H264_SEI_H */ diff --git a/libavcodec/h264_slice.c b/libavcodec/h264_slice.c index 6f0a7c1fb7..c66329f064 100644 --- a/libavcodec/h264_slice.c +++ b/libavcodec/h264_slice.c @@ -1244,195 +1244,7 @@ static int h264_export_frame_props(H264Context *h) } } - if (h->sei.frame_packing.present && - h->sei.frame_packing.arrangement_type <= 6 && - h->sei.frame_packing.content_interpretation_type > 0 && - h->sei.frame_packing.content_interpretation_type < 3) { - H264SEIFramePacking *fp = &h->sei.frame_packing; - AVStereo3D *stereo = av_stereo3d_create_side_data(out); - if (stereo) { - switch (fp->arrangement_type) { - case H264_SEI_FPA_TYPE_CHECKERBOARD: - stereo->type = AV_STEREO3D_CHECKERBOARD; - break; - case H264_SEI_FPA_TYPE_INTERLEAVE_COLUMN: - stereo->type = AV_STEREO3D_COLUMNS; - break; - case H264_SEI_FPA_TYPE_INTERLEAVE_ROW: - stereo->type = AV_STEREO3D_LINES; - break; - case H264_SEI_FPA_TYPE_SIDE_BY_SIDE: - if (fp->quincunx_sampling_flag) - stereo->type = AV_STEREO3D_SIDEBYSIDE_QUINCUNX; - else - stereo->type = AV_STEREO3D_SIDEBYSIDE; - break; - case H264_SEI_FPA_TYPE_TOP_BOTTOM: - stereo->type = AV_STEREO3D_TOPBOTTOM; - break; - case H264_SEI_FPA_TYPE_INTERLEAVE_TEMPORAL: - stereo->type = AV_STEREO3D_FRAMESEQUENCE; - break; - case H264_SEI_FPA_TYPE_2D: - stereo->type = AV_STEREO3D_2D; - break; - } - - if (fp->content_interpretation_type == 2) - stereo->flags = AV_STEREO3D_FLAG_INVERT; - - if (fp->arrangement_type == H264_SEI_FPA_TYPE_INTERLEAVE_TEMPORAL) { - if (fp->current_frame_is_frame0_flag) - stereo->view = AV_STEREO3D_VIEW_LEFT; - else - stereo->view = AV_STEREO3D_VIEW_RIGHT; - } - } - } - - if (h->sei.display_orientation.present && - (h->sei.display_orientation.anticlockwise_rotation || - h->sei.display_orientation.hflip || - 
h->sei.display_orientation.vflip)) { - H264SEIDisplayOrientation *o = &h->sei.display_orientation; - double angle = o->anticlockwise_rotation * 360 / (double) (1 << 16); - AVFrameSideData *rotation = av_frame_new_side_data(out, - AV_FRAME_DATA_DISPLAYMATRIX, - sizeof(int32_t) * 9); - if (rotation) { - /* av_display_rotation_set() expects the angle in the clockwise - * direction, hence the first minus. - * The below code applies the flips after the rotation, yet - * the H.2645 specs require flipping to be applied first. - * Because of R O(phi) = O(-phi) R (where R is flipping around - * an arbitatry axis and O(phi) is the proper rotation by phi) - * we can create display matrices as desired by negating - * the degree once for every flip applied. */ - angle = -angle * (1 - 2 * !!o->hflip) * (1 - 2 * !!o->vflip); - av_display_rotation_set((int32_t *)rotation->data, angle); - av_display_matrix_flip((int32_t *)rotation->data, - o->hflip, o->vflip); - } - } - - if (h->sei.afd.present) { - AVFrameSideData *sd = av_frame_new_side_data(out, AV_FRAME_DATA_AFD, - sizeof(uint8_t)); - - if (sd) { - *sd->data = h->sei.afd.active_format_description; - h->sei.afd.present = 0; - } - } - - if (h->sei.a53_caption.buf_ref) { - H264SEIA53Caption *a53 = &h->sei.a53_caption; - - AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_A53_CC, a53->buf_ref); - if (!sd) - av_buffer_unref(&a53->buf_ref); - a53->buf_ref = NULL; - - h->avctx->properties |= FF_CODEC_PROPERTY_CLOSED_CAPTIONS; - } - - for (int i = 0; i < h->sei.unregistered.nb_buf_ref; i++) { - H264SEIUnregistered *unreg = &h->sei.unregistered; - - if (unreg->buf_ref[i]) { - AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, - AV_FRAME_DATA_SEI_UNREGISTERED, - unreg->buf_ref[i]); - if (!sd) - av_buffer_unref(&unreg->buf_ref[i]); - unreg->buf_ref[i] = NULL; - } - } - h->sei.unregistered.nb_buf_ref = 0; - - if (h->sei.film_grain_characteristics.present) { - H264SEIFilmGrainCharacteristics *fgc = &h->sei.film_grain_characteristics; - AVFilmGrainParams *fgp = av_film_grain_params_create_side_data(out); - if (!fgp) - return AVERROR(ENOMEM); - - fgp->type = AV_FILM_GRAIN_PARAMS_H274; - fgp->seed = cur->poc + (h->poc_offset << 5); - - fgp->codec.h274.model_id = fgc->model_id; - if (fgc->separate_colour_description_present_flag) { - fgp->codec.h274.bit_depth_luma = fgc->bit_depth_luma; - fgp->codec.h274.bit_depth_chroma = fgc->bit_depth_chroma; - fgp->codec.h274.color_range = fgc->full_range + 1; - fgp->codec.h274.color_primaries = fgc->color_primaries; - fgp->codec.h274.color_trc = fgc->transfer_characteristics; - fgp->codec.h274.color_space = fgc->matrix_coeffs; - } else { - fgp->codec.h274.bit_depth_luma = sps->bit_depth_luma; - fgp->codec.h274.bit_depth_chroma = sps->bit_depth_chroma; - if (sps->video_signal_type_present_flag) - fgp->codec.h274.color_range = sps->full_range + 1; - else - fgp->codec.h274.color_range = AVCOL_RANGE_UNSPECIFIED; - if (sps->colour_description_present_flag) { - fgp->codec.h274.color_primaries = sps->color_primaries; - fgp->codec.h274.color_trc = sps->color_trc; - fgp->codec.h274.color_space = sps->colorspace; - } else { - fgp->codec.h274.color_primaries = AVCOL_PRI_UNSPECIFIED; - fgp->codec.h274.color_trc = AVCOL_TRC_UNSPECIFIED; - fgp->codec.h274.color_space = AVCOL_SPC_UNSPECIFIED; - } - } - fgp->codec.h274.blending_mode_id = fgc->blending_mode_id; - fgp->codec.h274.log2_scale_factor = fgc->log2_scale_factor; - - memcpy(&fgp->codec.h274.component_model_present, &fgc->comp_model_present_flag, - 
sizeof(fgp->codec.h274.component_model_present)); - memcpy(&fgp->codec.h274.num_intensity_intervals, &fgc->num_intensity_intervals, - sizeof(fgp->codec.h274.num_intensity_intervals)); - memcpy(&fgp->codec.h274.num_model_values, &fgc->num_model_values, - sizeof(fgp->codec.h274.num_model_values)); - memcpy(&fgp->codec.h274.intensity_interval_lower_bound, &fgc->intensity_interval_lower_bound, - sizeof(fgp->codec.h274.intensity_interval_lower_bound)); - memcpy(&fgp->codec.h274.intensity_interval_upper_bound, &fgc->intensity_interval_upper_bound, - sizeof(fgp->codec.h274.intensity_interval_upper_bound)); - memcpy(&fgp->codec.h274.comp_model_value, &fgc->comp_model_value, - sizeof(fgp->codec.h274.comp_model_value)); - - fgc->present = !!fgc->repetition_period; - - h->avctx->properties |= FF_CODEC_PROPERTY_FILM_GRAIN; - } - - if (h->sei.picture_timing.timecode_cnt > 0) { - uint32_t *tc_sd; - char tcbuf[AV_TIMECODE_STR_SIZE]; - - AVFrameSideData *tcside = av_frame_new_side_data(out, - AV_FRAME_DATA_S12M_TIMECODE, - sizeof(uint32_t)*4); - if (!tcside) - return AVERROR(ENOMEM); - - tc_sd = (uint32_t*)tcside->data; - tc_sd[0] = h->sei.picture_timing.timecode_cnt; - - for (int i = 0; i < tc_sd[0]; i++) { - int drop = h->sei.picture_timing.timecode[i].dropframe; - int hh = h->sei.picture_timing.timecode[i].hours; - int mm = h->sei.picture_timing.timecode[i].minutes; - int ss = h->sei.picture_timing.timecode[i].seconds; - int ff = h->sei.picture_timing.timecode[i].frame; - - tc_sd[i + 1] = av_timecode_get_smpte(h->avctx->framerate, drop, hh, mm, ss, ff); - av_timecode_make_smpte_tc_string2(tcbuf, h->avctx->framerate, tc_sd[i + 1], 0, 0); - av_dict_set(&out->metadata, "timecode", tcbuf, 0); - } - h->sei.picture_timing.timecode_cnt = 0; - } - - return 0; + return ff_h264_set_sei_to_frame(h->avctx, &h->sei, out, sps, cur->poc + (h->poc_offset << 5)); } static int h264_select_output_frame(H264Context *h) -- ffmpeg-codebot _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
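Analogously to the HEVC helper, a caller outside h264dec can attach all SEI-derived side data with a single call. The sketch below is illustrative (the wrapper name attach_h264_sei is not part of the patch); passing sps == NULL is permitted and merely skips the film-grain export, which matches how patch 3/3 invokes the helper from qsvdec.c.

#include "avcodec.h"
#include "h264_sei.h"

/* Illustrative wrapper: exports stereo 3D, display orientation, AFD,
 * A53 closed captions, unregistered SEI and timecodes to 'frame'.
 * sps == NULL skips film-grain parameters (guarded in the helper);
 * 'seed' only matters for film grain. */
static int attach_h264_sei(AVCodecContext *avctx, H264SEIContext *sei,
                           AVFrame *frame)
{
    return ff_h264_set_sei_to_frame(avctx, sei, frame,
                                    /*sps=*/NULL, /*seed=*/0);
}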
* [FFmpeg-devel] [PATCH v6 3/3] avcodec/qsvdec: Implement SEI parsing for QSV decoders 2022-10-25 4:03 ` [FFmpeg-devel] [PATCH v6 0/3] " ffmpegagent 2022-10-25 4:03 ` [FFmpeg-devel] [PATCH v6 1/3] avcodec/hevcdec: factor out ff_hevc_set_set_to_frame softworkz 2022-10-25 4:03 ` [FFmpeg-devel] [PATCH v6 2/3] avcodec/h264dec: make h264_export_frame_props() accessible softworkz @ 2022-10-25 4:03 ` softworkz 2022-11-21 2:44 ` Xiang, Haihao 2 siblings, 1 reply; 65+ messages in thread From: softworkz @ 2022-10-25 4:03 UTC (permalink / raw) To: ffmpeg-devel; +Cc: Kieran Kunhya, softworkz, Xiang, Haihao, Andreas Rheinhardt From: softworkz <softworkz@hotmail.com> Signed-off-by: softworkz <softworkz@hotmail.com> --- libavcodec/Makefile | 2 +- libavcodec/qsvdec.c | 321 ++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 322 insertions(+), 1 deletion(-) diff --git a/libavcodec/Makefile b/libavcodec/Makefile index 90c7f113a3..cbddbb0ace 100644 --- a/libavcodec/Makefile +++ b/libavcodec/Makefile @@ -146,7 +146,7 @@ OBJS-$(CONFIG_MSS34DSP) += mss34dsp.o OBJS-$(CONFIG_PIXBLOCKDSP) += pixblockdsp.o OBJS-$(CONFIG_QPELDSP) += qpeldsp.o OBJS-$(CONFIG_QSV) += qsv.o -OBJS-$(CONFIG_QSVDEC) += qsvdec.o +OBJS-$(CONFIG_QSVDEC) += qsvdec.o h264_sei.o hevc_sei.o OBJS-$(CONFIG_QSVENC) += qsvenc.o OBJS-$(CONFIG_RANGECODER) += rangecoder.o OBJS-$(CONFIG_RDFT) += rdft.o diff --git a/libavcodec/qsvdec.c b/libavcodec/qsvdec.c index 73405b5747..467a248224 100644 --- a/libavcodec/qsvdec.c +++ b/libavcodec/qsvdec.c @@ -41,6 +41,7 @@ #include "libavutil/time.h" #include "libavutil/imgutils.h" #include "libavutil/film_grain_params.h" +#include <libavutil/reverse.h> #include "avcodec.h" #include "codec_internal.h" @@ -49,6 +50,9 @@ #include "hwconfig.h" #include "qsv.h" #include "qsv_internal.h" +#include "h264_sei.h" +#include "hevc_ps.h" +#include "hevc_sei.h" #if QSV_ONEVPL #include <mfxdispatcher.h> @@ -66,6 +70,8 @@ static const AVRational mfx_tb = { 1, 90000 }; AV_NOPTS_VALUE : pts_tb.num ? \ av_rescale_q(mfx_pts, mfx_tb, pts_tb) : mfx_pts) +#define PAYLOAD_BUFFER_SIZE 65535 + typedef struct QSVAsyncFrame { mfxSyncPoint *sync; QSVFrame *frame; @@ -107,6 +113,9 @@ typedef struct QSVContext { mfxExtBuffer **ext_buffers; int nb_ext_buffers; + + mfxU8 payload_buffer[PAYLOAD_BUFFER_SIZE]; + AVBufferRef *a53_buf_ref; } QSVContext; static const AVCodecHWConfigInternal *const qsv_hw_configs[] = { @@ -628,6 +637,299 @@ static int qsv_export_film_grain(AVCodecContext *avctx, mfxExtAV1FilmGrainParam } #endif +static int find_start_offset(mfxU8 data[4]) +{ + if (data[0] == 0 && data[1] == 0 && data[2] == 1) + return 3; + + if (data[0] == 0 && data[1] == 0 && data[2] == 0 && data[3] == 1) + return 4; + + return 0; +} + +static int parse_sei_h264(AVCodecContext* avctx, QSVContext* q, AVFrame* out) +{ + H264SEIContext sei = { 0 }; + GetBitContext gb = { 0 }; + mfxPayload payload = { 0, .Data = &q->payload_buffer[0], .BufSize = sizeof(q->payload_buffer) - AV_INPUT_BUFFER_PADDING_SIZE }; + mfxU64 ts; + int ret; + + while (1) { + int start; + memset(payload.Data, 0, payload.BufSize); + + ret = MFXVideoDECODE_GetPayload(q->session, &ts, &payload); + if (ret == MFX_ERR_NOT_ENOUGH_BUFFER) { + av_log(avctx, AV_LOG_WARNING, "Warning: Insufficient buffer on GetPayload(). 
Size: %"PRIu64" Needed: %d\n", sizeof(q->payload_buffer), payload.BufSize); + return 0; + } + if (ret != MFX_ERR_NONE) + return ret; + + if (payload.NumBit == 0 || payload.NumBit >= payload.BufSize * 8) + break; + + start = find_start_offset(payload.Data); + + switch (payload.Type) { + case SEI_TYPE_BUFFERING_PERIOD: + case SEI_TYPE_PIC_TIMING: + continue; + } + + if (init_get_bits(&gb, &payload.Data[start], payload.NumBit - start * 8) < 0) + av_log(avctx, AV_LOG_ERROR, "Error initializing bitstream reader SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); + else { + ret = ff_h264_sei_decode(&sei, &gb, NULL, avctx); + + if (ret < 0) + av_log(avctx, AV_LOG_WARNING, "Failed to parse SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); + else + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d Numbits %d\n", payload.Type, payload.NumBit); + } + } + + if (out) + return ff_h264_set_sei_to_frame(avctx, &sei, out, NULL, 0); + + return 0; +} + +static int parse_sei_hevc(AVCodecContext* avctx, QSVContext* q, QSVFrame* out) +{ + HEVCSEI sei = { 0 }; + HEVCParamSets ps = { 0 }; + GetBitContext gb = { 0 }; + mfxPayload payload = { 0, .Data = &q->payload_buffer[0], .BufSize = sizeof(q->payload_buffer) - AV_INPUT_BUFFER_PADDING_SIZE }; + mfxFrameSurface1 *surface = &out->surface; + mfxU64 ts; + int ret, has_logged = 0; + + while (1) { + int start; + memset(payload.Data, 0, payload.BufSize); + + ret = MFXVideoDECODE_GetPayload(q->session, &ts, &payload); + if (ret == MFX_ERR_NOT_ENOUGH_BUFFER) { + av_log(avctx, AV_LOG_WARNING, "Warning: Insufficient buffer on GetPayload(). Size: %"PRIu64" Needed: %d\n", sizeof(q->payload_buffer), payload.BufSize); + return 0; + } + if (ret != MFX_ERR_NONE) + return ret; + + if (payload.NumBit == 0 || payload.NumBit >= payload.BufSize * 8) + break; + + if (!has_logged) { + has_logged = 1; + av_log(avctx, AV_LOG_VERBOSE, "-----------------------------------------\n"); + av_log(avctx, AV_LOG_VERBOSE, "Start reading SEI - payload timestamp: %llu - surface timestamp: %llu\n", ts, surface->Data.TimeStamp); + } + + if (ts != surface->Data.TimeStamp) { + av_log(avctx, AV_LOG_WARNING, "GetPayload timestamp (%llu) does not match surface timestamp: (%llu)\n", ts, surface->Data.TimeStamp); + } + + start = find_start_offset(payload.Data); + + av_log(avctx, AV_LOG_VERBOSE, "parsing SEI type: %3d Numbits %3d Start: %d\n", payload.Type, payload.NumBit, start); + + switch (payload.Type) { + case SEI_TYPE_BUFFERING_PERIOD: + case SEI_TYPE_PIC_TIMING: + continue; + case SEI_TYPE_MASTERING_DISPLAY_COLOUR_VOLUME: + // There seems to be a bug in MSDK + payload.NumBit -= 8; + + break; + case SEI_TYPE_CONTENT_LIGHT_LEVEL_INFO: + // There seems to be a bug in MSDK + payload.NumBit = 48; + + break; + case SEI_TYPE_USER_DATA_REGISTERED_ITU_T_T35: + // There seems to be a bug in MSDK + if (payload.NumBit == 552) + payload.NumBit = 528; + break; + } + + if (init_get_bits(&gb, &payload.Data[start], payload.NumBit - start * 8) < 0) + av_log(avctx, AV_LOG_ERROR, "Error initializing bitstream reader SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); + else { + ret = ff_hevc_decode_nal_sei(&gb, avctx, &sei, &ps, HEVC_NAL_SEI_PREFIX); + + if (ret < 0) + av_log(avctx, AV_LOG_WARNING, "Failed to parse SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); + else + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d Numbits %d\n", payload.Type, payload.NumBit); + } + } + + if (has_logged) { + av_log(avctx, AV_LOG_VERBOSE, "End 
reading SEI\n"); + } + + if (out && out->frame) + return ff_hevc_set_sei_to_frame(avctx, &sei, out->frame, avctx->framerate, 0, &ps.sps->vui, ps.sps->bit_depth, ps.sps->bit_depth_chroma); + + return 0; +} + +#define A53_MAX_CC_COUNT 2000 + +static int mpeg_decode_a53_cc(AVCodecContext *avctx, QSVContext *s, + const uint8_t *p, int buf_size) +{ + if (buf_size >= 6 && + p[0] == 'G' && p[1] == 'A' && p[2] == '9' && p[3] == '4' && + p[4] == 3 && (p[5] & 0x40)) { + /* extract A53 Part 4 CC data */ + unsigned cc_count = p[5] & 0x1f; + if (cc_count > 0 && buf_size >= 7 + cc_count * 3) { + const uint64_t old_size = s->a53_buf_ref ? s->a53_buf_ref->size : 0; + const uint64_t new_size = (old_size + cc_count + * UINT64_C(3)); + int ret; + + if (new_size > 3*A53_MAX_CC_COUNT) + return AVERROR(EINVAL); + + ret = av_buffer_realloc(&s->a53_buf_ref, new_size); + if (ret >= 0) + memcpy(s->a53_buf_ref->data + old_size, p + 7, cc_count * UINT64_C(3)); + + avctx->properties |= FF_CODEC_PROPERTY_CLOSED_CAPTIONS; + } + return 1; + } else if (buf_size >= 2 && p[0] == 0x03 && (p[1]&0x7f) == 0x01) { + /* extract SCTE-20 CC data */ + GetBitContext gb; + unsigned cc_count = 0; + int ret; + + init_get_bits8(&gb, p + 2, buf_size - 2); + cc_count = get_bits(&gb, 5); + if (cc_count > 0) { + uint64_t old_size = s->a53_buf_ref ? s->a53_buf_ref->size : 0; + uint64_t new_size = (old_size + cc_count * UINT64_C(3)); + if (new_size > 3 * A53_MAX_CC_COUNT) + return AVERROR(EINVAL); + + ret = av_buffer_realloc(&s->a53_buf_ref, new_size); + if (ret >= 0) { + uint8_t field, cc1, cc2; + uint8_t *cap = s->a53_buf_ref->data; + + memset(s->a53_buf_ref->data + old_size, 0, cc_count * 3); + for (unsigned i = 0; i < cc_count && get_bits_left(&gb) >= 26; i++) { + skip_bits(&gb, 2); // priority + field = get_bits(&gb, 2); + skip_bits(&gb, 5); // line_offset + cc1 = get_bits(&gb, 8); + cc2 = get_bits(&gb, 8); + skip_bits(&gb, 1); // marker + + if (!field) { // forbidden + cap[0] = cap[1] = cap[2] = 0x00; + } else { + field = (field == 2 ? 1 : 0); + ////if (!s1->mpeg_enc_ctx.top_field_first) field = !field; + cap[0] = 0x04 | field; + cap[1] = ff_reverse[cc1]; + cap[2] = ff_reverse[cc2]; + } + cap += 3; + } + } + avctx->properties |= FF_CODEC_PROPERTY_CLOSED_CAPTIONS; + } + return 1; + } else if (buf_size >= 11 && p[0] == 'C' && p[1] == 'C' && p[2] == 0x01 && p[3] == 0xf8) { + int cc_count = 0; + int i, ret; + // There is a caption count field in the data, but it is often + // incorrect. So count the number of captions present. + for (i = 5; i + 6 <= buf_size && ((p[i] & 0xfe) == 0xfe); i += 6) + cc_count++; + // Transform the DVD format into A53 Part 4 format + if (cc_count > 0) { + int old_size = s->a53_buf_ref ? s->a53_buf_ref->size : 0; + uint64_t new_size = (old_size + cc_count + * UINT64_C(6)); + if (new_size > 3*A53_MAX_CC_COUNT) + return AVERROR(EINVAL); + + ret = av_buffer_realloc(&s->a53_buf_ref, new_size); + if (ret >= 0) { + uint8_t field1 = !!(p[4] & 0x80); + uint8_t *cap = s->a53_buf_ref->data; + p += 5; + for (i = 0; i < cc_count; i++) { + cap[0] = (p[0] == 0xff && field1) ? 0xfc : 0xfd; + cap[1] = p[1]; + cap[2] = p[2]; + cap[3] = (p[3] == 0xff && !field1) ? 
0xfc : 0xfd; + cap[4] = p[4]; + cap[5] = p[5]; + cap += 6; + p += 6; + } + } + avctx->properties |= FF_CODEC_PROPERTY_CLOSED_CAPTIONS; + } + return 1; + } + return 0; +} + +static int parse_sei_mpeg12(AVCodecContext* avctx, QSVContext* q, AVFrame* out) +{ + mfxPayload payload = { 0, .Data = &q->payload_buffer[0], .BufSize = sizeof(q->payload_buffer) - AV_INPUT_BUFFER_PADDING_SIZE }; + mfxU64 ts; + int ret; + + while (1) { + int start; + + memset(payload.Data, 0, payload.BufSize); + ret = MFXVideoDECODE_GetPayload(q->session, &ts, &payload); + if (ret == MFX_ERR_NOT_ENOUGH_BUFFER) { + av_log(avctx, AV_LOG_WARNING, "Warning: Insufficient buffer on GetPayload(). Size: %"PRIu64" Needed: %d\n", sizeof(q->payload_buffer), payload.BufSize); + return 0; + } + if (ret != MFX_ERR_NONE) + return ret; + + if (payload.NumBit == 0 || payload.NumBit >= payload.BufSize * 8) + break; + + start = find_start_offset(payload.Data); + + start++; + + mpeg_decode_a53_cc(avctx, q, &payload.Data[start], (int)((payload.NumBit + 7) / 8) - start); + + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d Numbits %d start %d -> %.s\n", payload.Type, payload.NumBit, start, (char *)(&payload.Data[start])); + } + + if (!out) + return 0; + + if (q->a53_buf_ref) { + + AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, AV_FRAME_DATA_A53_CC, q->a53_buf_ref); + if (!sd) + av_buffer_unref(&q->a53_buf_ref); + q->a53_buf_ref = NULL; + } + + return 0; +} + static int qsv_decode(AVCodecContext *avctx, QSVContext *q, AVFrame *frame, int *got_frame, const AVPacket *avpkt) @@ -664,6 +966,8 @@ static int qsv_decode(AVCodecContext *avctx, QSVContext *q, insurf, &outsurf, sync); if (ret == MFX_WRN_DEVICE_BUSY) av_usleep(500); + else if (avctx->codec_id == AV_CODEC_ID_MPEG2VIDEO) + parse_sei_mpeg12(avctx, q, NULL); } while (ret == MFX_WRN_DEVICE_BUSY || ret == MFX_ERR_MORE_SURFACE); @@ -705,6 +1009,23 @@ static int qsv_decode(AVCodecContext *avctx, QSVContext *q, return AVERROR_BUG; } + switch (avctx->codec_id) { + case AV_CODEC_ID_MPEG2VIDEO: + ret = parse_sei_mpeg12(avctx, q, out_frame->frame); + break; + case AV_CODEC_ID_H264: + ret = parse_sei_h264(avctx, q, out_frame->frame); + break; + case AV_CODEC_ID_HEVC: + ret = parse_sei_hevc(avctx, q, out_frame); + break; + default: + ret = 0; + } + + if (ret < 0) + av_log(avctx, AV_LOG_ERROR, "Error parsing SEI data: %d\n", ret); + out_frame->queued += 1; aframe = (QSVAsyncFrame){ sync, out_frame }; -- ffmpeg-codebot _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
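A note on consuming what the patch attaches: once one of the parse_sei_* functions above has stored captions on the output frame as AV_FRAME_DATA_A53_CC side data, any downstream user can read them through the regular libavutil API; nothing QSV-specific is needed anymore. A minimal sketch (not part of the patch, error handling omitted):

    #include <stdio.h>
    #include <libavutil/frame.h>

    /* Print the A53 Part 4 caption triplets attached to a decoded frame. */
    static void dump_a53_cc(const AVFrame *frame)
    {
        const AVFrameSideData *sd = av_frame_get_side_data(frame, AV_FRAME_DATA_A53_CC);
        if (!sd)
            return; /* this frame carries no caption data */

        /* The payload is a sequence of 3-byte cc_data packets: byte 0 holds
         * the cc_valid flag and cc_type, bytes 1-2 are the caption bytes. */
        for (size_t i = 0; i + 3 <= sd->size; i += 3)
            printf("cc[%zu]: 0x%02x 0x%02x 0x%02x\n",
                   i / 3, sd->data[i], sd->data[i + 1], sd->data[i + 2]);
    }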
* Re: [FFmpeg-devel] [PATCH v6 3/3] avcodec/qsvdec: Implement SEI parsing for QSV decoders 2022-10-25 4:03 ` [FFmpeg-devel] [PATCH v6 3/3] avcodec/qsvdec: Implement SEI parsing for QSV decoders softworkz @ 2022-11-21 2:44 ` Xiang, Haihao 2022-11-21 15:58 ` Soft Works 0 siblings, 1 reply; 65+ messages in thread From: Xiang, Haihao @ 2022-11-21 2:44 UTC (permalink / raw) To: ffmpeg-devel Cc: kierank, softworkz, haihao.xiang-at-intel.com, andreas.rheinhardt On Tue, 2022-10-25 at 04:03 +0000, softworkz wrote: > From: softworkz <softworkz@hotmail.com> > > Signed-off-by: softworkz <softworkz@hotmail.com> > --- > libavcodec/Makefile | 2 +- > libavcodec/qsvdec.c | 321 ++++++++++++++++++++++++++++++++++++++++++++ > 2 files changed, 322 insertions(+), 1 deletion(-) > > diff --git a/libavcodec/Makefile b/libavcodec/Makefile > index 90c7f113a3..cbddbb0ace 100644 > --- a/libavcodec/Makefile > +++ b/libavcodec/Makefile > @@ -146,7 +146,7 @@ OBJS-$(CONFIG_MSS34DSP) += mss34dsp.o > OBJS-$(CONFIG_PIXBLOCKDSP) += pixblockdsp.o > OBJS-$(CONFIG_QPELDSP) += qpeldsp.o > OBJS-$(CONFIG_QSV) += qsv.o > -OBJS-$(CONFIG_QSVDEC) += qsvdec.o > +OBJS-$(CONFIG_QSVDEC) += qsvdec.o h264_sei.o hevc_sei.o > OBJS-$(CONFIG_QSVENC) += qsvenc.o > OBJS-$(CONFIG_RANGECODER) += rangecoder.o > OBJS-$(CONFIG_RDFT) += rdft.o > diff --git a/libavcodec/qsvdec.c b/libavcodec/qsvdec.c > index 73405b5747..467a248224 100644 > --- a/libavcodec/qsvdec.c > +++ b/libavcodec/qsvdec.c > @@ -41,6 +41,7 @@ > #include "libavutil/time.h" > #include "libavutil/imgutils.h" > #include "libavutil/film_grain_params.h" > +#include <libavutil/reverse.h> > > #include "avcodec.h" > #include "codec_internal.h" > @@ -49,6 +50,9 @@ > #include "hwconfig.h" > #include "qsv.h" > #include "qsv_internal.h" > +#include "h264_sei.h" > +#include "hevc_ps.h" > +#include "hevc_sei.h" > > #if QSV_ONEVPL > #include <mfxdispatcher.h> > @@ -66,6 +70,8 @@ static const AVRational mfx_tb = { 1, 90000 }; > AV_NOPTS_VALUE : pts_tb.num ? \ > av_rescale_q(mfx_pts, mfx_tb, pts_tb) : mfx_pts) > > +#define PAYLOAD_BUFFER_SIZE 65535 > + > typedef struct QSVAsyncFrame { > mfxSyncPoint *sync; > QSVFrame *frame; > @@ -107,6 +113,9 @@ typedef struct QSVContext { > > mfxExtBuffer **ext_buffers; > int nb_ext_buffers; > + > + mfxU8 payload_buffer[PAYLOAD_BUFFER_SIZE]; > + AVBufferRef *a53_buf_ref; > } QSVContext; > > static const AVCodecHWConfigInternal *const qsv_hw_configs[] = { > @@ -628,6 +637,299 @@ static int qsv_export_film_grain(AVCodecContext *avctx, > mfxExtAV1FilmGrainParam > } > #endif > > +static int find_start_offset(mfxU8 data[4]) > +{ > + if (data[0] == 0 && data[1] == 0 && data[2] == 1) > + return 3; > + > + if (data[0] == 0 && data[1] == 0 && data[2] == 0 && data[3] == 1) > + return 4; > + > + return 0; > +} > + > +static int parse_sei_h264(AVCodecContext* avctx, QSVContext* q, AVFrame* out) > +{ > + H264SEIContext sei = { 0 }; > + GetBitContext gb = { 0 }; > + mfxPayload payload = { 0, .Data = &q->payload_buffer[0], .BufSize = > sizeof(q->payload_buffer) - AV_INPUT_BUFFER_PADDING_SIZE }; > + mfxU64 ts; > + int ret; > + > + while (1) { > + int start; > + memset(payload.Data, 0, payload.BufSize); > + > + ret = MFXVideoDECODE_GetPayload(q->session, &ts, &payload); > + if (ret == MFX_ERR_NOT_ENOUGH_BUFFER) { > + av_log(avctx, AV_LOG_WARNING, "Warning: Insufficient buffer on > GetPayload(). 
Size: %"PRIu64" Needed: %d\n", sizeof(q->payload_buffer), > payload.BufSize); > + return 0; > + } > + if (ret != MFX_ERR_NONE) > + return ret; > + > + if (payload.NumBit == 0 || payload.NumBit >= payload.BufSize * 8) > + break; > + > + start = find_start_offset(payload.Data); > + > + switch (payload.Type) { > + case SEI_TYPE_BUFFERING_PERIOD: > + case SEI_TYPE_PIC_TIMING: > + continue; > + } > + > + if (init_get_bits(&gb, &payload.Data[start], payload.NumBit - start * > 8) < 0) > + av_log(avctx, AV_LOG_ERROR, "Error initializing bitstream reader > SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); > + else { > + ret = ff_h264_sei_decode(&sei, &gb, NULL, avctx); > + > + if (ret < 0) > + av_log(avctx, AV_LOG_WARNING, "Failed to parse SEI type: > %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); > + else > + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d Numbits > %d\n", payload.Type, payload.NumBit); > + } > + } > + > + if (out) > + return ff_h264_set_sei_to_frame(avctx, &sei, out, NULL, 0); > + > + return 0; > +} > + > +static int parse_sei_hevc(AVCodecContext* avctx, QSVContext* q, QSVFrame* > out) > +{ > + HEVCSEI sei = { 0 }; > + HEVCParamSets ps = { 0 }; > + GetBitContext gb = { 0 }; > + mfxPayload payload = { 0, .Data = &q->payload_buffer[0], .BufSize = > sizeof(q->payload_buffer) - AV_INPUT_BUFFER_PADDING_SIZE }; > + mfxFrameSurface1 *surface = &out->surface; > + mfxU64 ts; > + int ret, has_logged = 0; > + > + while (1) { > + int start; > + memset(payload.Data, 0, payload.BufSize); > + > + ret = MFXVideoDECODE_GetPayload(q->session, &ts, &payload); > + if (ret == MFX_ERR_NOT_ENOUGH_BUFFER) { > + av_log(avctx, AV_LOG_WARNING, "Warning: Insufficient buffer on > GetPayload(). Size: %"PRIu64" Needed: %d\n", sizeof(q->payload_buffer), > payload.BufSize); > + return 0; > + } > + if (ret != MFX_ERR_NONE) > + return ret; > + > + if (payload.NumBit == 0 || payload.NumBit >= payload.BufSize * 8) > + break; > + > + if (!has_logged) { > + has_logged = 1; > + av_log(avctx, AV_LOG_VERBOSE, "-------------------------------- > ---------\n"); > + av_log(avctx, AV_LOG_VERBOSE, "Start reading SEI - payload > timestamp: %llu - surface timestamp: %llu\n", ts, surface->Data.TimeStamp); > + } > + > + if (ts != surface->Data.TimeStamp) { > + av_log(avctx, AV_LOG_WARNING, "GetPayload timestamp (%llu) does > not match surface timestamp: (%llu)\n", ts, surface->Data.TimeStamp); > + } > + > + start = find_start_offset(payload.Data); > + > + av_log(avctx, AV_LOG_VERBOSE, "parsing SEI type: %3d Numbits > %3d Start: %d\n", payload.Type, payload.NumBit, start); > + > + switch (payload.Type) { > + case SEI_TYPE_BUFFERING_PERIOD: > + case SEI_TYPE_PIC_TIMING: > + continue; > + case SEI_TYPE_MASTERING_DISPLAY_COLOUR_VOLUME: > + // There seems to be a bug in MSDK > + payload.NumBit -= 8; > + > + break; > + case SEI_TYPE_CONTENT_LIGHT_LEVEL_INFO: > + // There seems to be a bug in MSDK > + payload.NumBit = 48; > + > + break; > + case SEI_TYPE_USER_DATA_REGISTERED_ITU_T_T35: > + // There seems to be a bug in MSDK > + if (payload.NumBit == 552) > + payload.NumBit = 528; > + break; > + } > + > + if (init_get_bits(&gb, &payload.Data[start], payload.NumBit - start * > 8) < 0) > + av_log(avctx, AV_LOG_ERROR, "Error initializing bitstream reader > SEI type: %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); > + else { > + ret = ff_hevc_decode_nal_sei(&gb, avctx, &sei, &ps, > HEVC_NAL_SEI_PREFIX); > + > + if (ret < 0) > + av_log(avctx, AV_LOG_WARNING, "Failed to parse SEI type: 
> %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); > + else > + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d Numbits > %d\n", payload.Type, payload.NumBit); > + } > + } > + > + if (has_logged) { > + av_log(avctx, AV_LOG_VERBOSE, "End reading SEI\n"); > + } > + > + if (out && out->frame) > + return ff_hevc_set_sei_to_frame(avctx, &sei, out->frame, avctx- > >framerate, 0, &ps.sps->vui, ps.sps->bit_depth, ps.sps->bit_depth_chroma); I got segfault when trying your patchset, Thread 1 "ffmpeg" received signal SIGSEGV, Segmentation fault. 0x00007ffff67c0497 in parse_sei_hevc (avctx=avctx@entry=0x5555555e4280, q=q@entry=0x555555625288, out=out@entry=0x5555559b6f80) at libavcodec/qsvdec.c:777 777 return ff_hevc_set_sei_to_frame(avctx, &sei, out->frame, avctx->framerate, 0, &ps.sps->vui, ps.sps->bit_depth, ps.sps->bit_depth_chroma); (gdb) bt #0 0x00007ffff67c0497 in parse_sei_hevc (avctx=avctx@entry=0x5555555e4280, q=q@entry=0x555555625288, out=out@entry=0x5555559b6f80) at libavcodec/qsvdec.c:777 #1 0x00007ffff67c1afe in qsv_decode (avctx=avctx@entry=0x5555555e4280, q=q@entry=0x555555625288, frame=frame@entry=0x5555556df740, got_frame=got_frame@entry=0x7fffffffd6bc, avpkt=avpkt@entry=0x555555635398) at libavcodec/qsvdec.c:1020 BTW the SDK provides support for hevc HDR metadata, we needn't parse SEI payload in qsvdec and may get the corresponding info from the SDK, see https://ffmpeg.org/pipermail/ffmpeg-devel/2022-November/304142.html Thanks Haihao > + > + return 0; > +} > + > +#define A53_MAX_CC_COUNT 2000 > + > +static int mpeg_decode_a53_cc(AVCodecContext *avctx, QSVContext *s, > + const uint8_t *p, int buf_size) > +{ > + if (buf_size >= 6 && > + p[0] == 'G' && p[1] == 'A' && p[2] == '9' && p[3] == '4' && > + p[4] == 3 && (p[5] & 0x40)) { > + /* extract A53 Part 4 CC data */ > + unsigned cc_count = p[5] & 0x1f; > + if (cc_count > 0 && buf_size >= 7 + cc_count * 3) { > + const uint64_t old_size = s->a53_buf_ref ? s->a53_buf_ref->size : > 0; > + const uint64_t new_size = (old_size + cc_count > + * UINT64_C(3)); > + int ret; > + > + if (new_size > 3*A53_MAX_CC_COUNT) > + return AVERROR(EINVAL); > + > + ret = av_buffer_realloc(&s->a53_buf_ref, new_size); > + if (ret >= 0) > + memcpy(s->a53_buf_ref->data + old_size, p + 7, cc_count * > UINT64_C(3)); > + > + avctx->properties |= FF_CODEC_PROPERTY_CLOSED_CAPTIONS; > + } > + return 1; > + } else if (buf_size >= 2 && p[0] == 0x03 && (p[1]&0x7f) == 0x01) { > + /* extract SCTE-20 CC data */ > + GetBitContext gb; > + unsigned cc_count = 0; > + int ret; > + > + init_get_bits8(&gb, p + 2, buf_size - 2); > + cc_count = get_bits(&gb, 5); > + if (cc_count > 0) { > + uint64_t old_size = s->a53_buf_ref ? s->a53_buf_ref->size : 0; > + uint64_t new_size = (old_size + cc_count * UINT64_C(3)); > + if (new_size > 3 * A53_MAX_CC_COUNT) > + return AVERROR(EINVAL); > + > + ret = av_buffer_realloc(&s->a53_buf_ref, new_size); > + if (ret >= 0) { > + uint8_t field, cc1, cc2; > + uint8_t *cap = s->a53_buf_ref->data; > + > + memset(s->a53_buf_ref->data + old_size, 0, cc_count * 3); > + for (unsigned i = 0; i < cc_count && get_bits_left(&gb) >= > 26; i++) { > + skip_bits(&gb, 2); // priority > + field = get_bits(&gb, 2); > + skip_bits(&gb, 5); // line_offset > + cc1 = get_bits(&gb, 8); > + cc2 = get_bits(&gb, 8); > + skip_bits(&gb, 1); // marker > + > + if (!field) { // forbidden > + cap[0] = cap[1] = cap[2] = 0x00; > + } else { > + field = (field == 2 ? 
1 : 0); > + ////if (!s1->mpeg_enc_ctx.top_field_first) field = > !field; > + cap[0] = 0x04 | field; > + cap[1] = ff_reverse[cc1]; > + cap[2] = ff_reverse[cc2]; > + } > + cap += 3; > + } > + } > + avctx->properties |= FF_CODEC_PROPERTY_CLOSED_CAPTIONS; > + } > + return 1; > + } else if (buf_size >= 11 && p[0] == 'C' && p[1] == 'C' && p[2] == 0x01 > && p[3] == 0xf8) { > + int cc_count = 0; > + int i, ret; > + // There is a caption count field in the data, but it is often > + // incorrect. So count the number of captions present. > + for (i = 5; i + 6 <= buf_size && ((p[i] & 0xfe) == 0xfe); i += 6) > + cc_count++; > + // Transform the DVD format into A53 Part 4 format > + if (cc_count > 0) { > + int old_size = s->a53_buf_ref ? s->a53_buf_ref->size : 0; > + uint64_t new_size = (old_size + cc_count > + * UINT64_C(6)); > + if (new_size > 3*A53_MAX_CC_COUNT) > + return AVERROR(EINVAL); > + > + ret = av_buffer_realloc(&s->a53_buf_ref, new_size); > + if (ret >= 0) { > + uint8_t field1 = !!(p[4] & 0x80); > + uint8_t *cap = s->a53_buf_ref->data; > + p += 5; > + for (i = 0; i < cc_count; i++) { > + cap[0] = (p[0] == 0xff && field1) ? 0xfc : 0xfd; > + cap[1] = p[1]; > + cap[2] = p[2]; > + cap[3] = (p[3] == 0xff && !field1) ? 0xfc : 0xfd; > + cap[4] = p[4]; > + cap[5] = p[5]; > + cap += 6; > + p += 6; > + } > + } > + avctx->properties |= FF_CODEC_PROPERTY_CLOSED_CAPTIONS; > + } > + return 1; > + } > + return 0; > +} > + > +static int parse_sei_mpeg12(AVCodecContext* avctx, QSVContext* q, AVFrame* > out) > +{ > + mfxPayload payload = { 0, .Data = &q->payload_buffer[0], .BufSize = > sizeof(q->payload_buffer) - AV_INPUT_BUFFER_PADDING_SIZE }; > + mfxU64 ts; > + int ret; > + > + while (1) { > + int start; > + > + memset(payload.Data, 0, payload.BufSize); > + ret = MFXVideoDECODE_GetPayload(q->session, &ts, &payload); > + if (ret == MFX_ERR_NOT_ENOUGH_BUFFER) { > + av_log(avctx, AV_LOG_WARNING, "Warning: Insufficient buffer on > GetPayload(). 
Size: %"PRIu64" Needed: %d\n", sizeof(q->payload_buffer), > payload.BufSize); > + return 0; > + } > + if (ret != MFX_ERR_NONE) > + return ret; > + > + if (payload.NumBit == 0 || payload.NumBit >= payload.BufSize * 8) > + break; > + > + start = find_start_offset(payload.Data); > + > + start++; > + > + mpeg_decode_a53_cc(avctx, q, &payload.Data[start], > (int)((payload.NumBit + 7) / 8) - start); > + > + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d Numbits %d start %d > -> %.s\n", payload.Type, payload.NumBit, start, (char > *)(&payload.Data[start])); > + } > + > + if (!out) > + return 0; > + > + if (q->a53_buf_ref) { > + > + AVFrameSideData *sd = av_frame_new_side_data_from_buf(out, > AV_FRAME_DATA_A53_CC, q->a53_buf_ref); > + if (!sd) > + av_buffer_unref(&q->a53_buf_ref); > + q->a53_buf_ref = NULL; > + } > + > + return 0; > +} > + > static int qsv_decode(AVCodecContext *avctx, QSVContext *q, > AVFrame *frame, int *got_frame, > const AVPacket *avpkt) > @@ -664,6 +966,8 @@ static int qsv_decode(AVCodecContext *avctx, QSVContext > *q, > insurf, &outsurf, sync); > if (ret == MFX_WRN_DEVICE_BUSY) > av_usleep(500); > + else if (avctx->codec_id == AV_CODEC_ID_MPEG2VIDEO) > + parse_sei_mpeg12(avctx, q, NULL); > > } while (ret == MFX_WRN_DEVICE_BUSY || ret == MFX_ERR_MORE_SURFACE); > > @@ -705,6 +1009,23 @@ static int qsv_decode(AVCodecContext *avctx, QSVContext > *q, > return AVERROR_BUG; > } > > + switch (avctx->codec_id) { > + case AV_CODEC_ID_MPEG2VIDEO: > + ret = parse_sei_mpeg12(avctx, q, out_frame->frame); > + break; > + case AV_CODEC_ID_H264: > + ret = parse_sei_h264(avctx, q, out_frame->frame); > + break; > + case AV_CODEC_ID_HEVC: > + ret = parse_sei_hevc(avctx, q, out_frame); > + break; > + default: > + ret = 0; > + } > + > + if (ret < 0) > + av_log(avctx, AV_LOG_ERROR, "Error parsing SEI data: %d\n", ret); > + > out_frame->queued += 1; > > aframe = (QSVAsyncFrame){ sync, out_frame }; _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
* Re: [FFmpeg-devel] [PATCH v6 3/3] avcodec/qsvdec: Implement SEI parsing for QSV decoders 2022-11-21 2:44 ` Xiang, Haihao @ 2022-11-21 15:58 ` Soft Works 2022-11-22 5:41 ` Xiang, Haihao 0 siblings, 1 reply; 65+ messages in thread From: Soft Works @ 2022-11-21 15:58 UTC (permalink / raw) To: Xiang, Haihao, ffmpeg-devel Cc: kierank, haihao.xiang-at-intel.com, andreas.rheinhardt > -----Original Message----- > From: Xiang, Haihao <haihao.xiang@intel.com> > Sent: Monday, November 21, 2022 3:45 AM > To: ffmpeg-devel@ffmpeg.org > Cc: softworkz@hotmail.com; kierank@obe.tv; haihao.xiang-at- > intel.com@ffmpeg.org; andreas.rheinhardt@outlook.com > Subject: Re: [FFmpeg-devel] [PATCH v6 3/3] avcodec/qsvdec: Implement > SEI parsing for QSV decoders > > On Tue, 2022-10-25 at 04:03 +0000, softworkz wrote: > > From: softworkz <softworkz@hotmail.com> > > > > Signed-off-by: softworkz <softworkz@hotmail.com> > > --- > > libavcodec/Makefile | 2 +- > > libavcodec/qsvdec.c | 321 > ++++++++++++++++++++++++++++++++++++++++++++ > > 2 files changed, 322 insertions(+), 1 deletion(-) > > > > diff --git a/libavcodec/Makefile b/libavcodec/Makefile > > index 90c7f113a3..cbddbb0ace 100644 > > --- a/libavcodec/Makefile > > +++ b/libavcodec/Makefile > > @@ -146,7 +146,7 @@ OBJS-$(CONFIG_MSS34DSP) += > mss34dsp.o > > OBJS-$(CONFIG_PIXBLOCKDSP) += pixblockdsp.o > > OBJS-$(CONFIG_QPELDSP) += qpeldsp.o > > OBJS-$(CONFIG_QSV) += qsv.o > > -OBJS-$(CONFIG_QSVDEC) += qsvdec.o > > +OBJS-$(CONFIG_QSVDEC) += qsvdec.o h264_sei.o > hevc_sei.o > > OBJS-$(CONFIG_QSVENC) += qsvenc.o > > OBJS-$(CONFIG_RANGECODER) += rangecoder.o > > OBJS-$(CONFIG_RDFT) += rdft.o > > diff --git a/libavcodec/qsvdec.c b/libavcodec/qsvdec.c > > index 73405b5747..467a248224 100644 > > --- a/libavcodec/qsvdec.c > > +++ b/libavcodec/qsvdec.c > > @@ -41,6 +41,7 @@ > > #include "libavutil/time.h" > > #include "libavutil/imgutils.h" > > #include "libavutil/film_grain_params.h" > > +#include <libavutil/reverse.h> > > > > #include "avcodec.h" > > #include "codec_internal.h" > > @@ -49,6 +50,9 @@ > > #include "hwconfig.h" > > #include "qsv.h" > > #include "qsv_internal.h" > > +#include "h264_sei.h" > > +#include "hevc_ps.h" > > +#include "hevc_sei.h" > > > > #if QSV_ONEVPL > > #include <mfxdispatcher.h> > > @@ -66,6 +70,8 @@ static const AVRational mfx_tb = { 1, 90000 }; > > AV_NOPTS_VALUE : pts_tb.num ? 
\ > > av_rescale_q(mfx_pts, mfx_tb, pts_tb) : mfx_pts) > > > > +#define PAYLOAD_BUFFER_SIZE 65535 > > + > > typedef struct QSVAsyncFrame { > > mfxSyncPoint *sync; > > QSVFrame *frame; > > @@ -107,6 +113,9 @@ typedef struct QSVContext { > > > > mfxExtBuffer **ext_buffers; > > int nb_ext_buffers; > > + > > + mfxU8 payload_buffer[PAYLOAD_BUFFER_SIZE]; > > + AVBufferRef *a53_buf_ref; > > } QSVContext; > > > > static const AVCodecHWConfigInternal *const qsv_hw_configs[] = { > > @@ -628,6 +637,299 @@ static int > qsv_export_film_grain(AVCodecContext *avctx, > > mfxExtAV1FilmGrainParam > > } > > #endif > > > > +static int find_start_offset(mfxU8 data[4]) > > +{ > > + if (data[0] == 0 && data[1] == 0 && data[2] == 1) > > + return 3; > > + > > + if (data[0] == 0 && data[1] == 0 && data[2] == 0 && data[3] == > 1) > > + return 4; > > + > > + return 0; > > +} > > + > > +static int parse_sei_h264(AVCodecContext* avctx, QSVContext* q, > AVFrame* out) > > +{ > > + H264SEIContext sei = { 0 }; > > + GetBitContext gb = { 0 }; > > + mfxPayload payload = { 0, .Data = &q->payload_buffer[0], > .BufSize = > > sizeof(q->payload_buffer) - AV_INPUT_BUFFER_PADDING_SIZE }; > > + mfxU64 ts; > > + int ret; > > + > > + while (1) { > > + int start; > > + memset(payload.Data, 0, payload.BufSize); > > + > > + ret = MFXVideoDECODE_GetPayload(q->session, &ts, > &payload); > > + if (ret == MFX_ERR_NOT_ENOUGH_BUFFER) { > > + av_log(avctx, AV_LOG_WARNING, "Warning: Insufficient > buffer on > > GetPayload(). Size: %"PRIu64" Needed: %d\n", sizeof(q- > >payload_buffer), > > payload.BufSize); > > + return 0; > > + } > > + if (ret != MFX_ERR_NONE) > > + return ret; > > + > > + if (payload.NumBit == 0 || payload.NumBit >= > payload.BufSize * 8) > > + break; > > + > > + start = find_start_offset(payload.Data); > > + > > + switch (payload.Type) { > > + case SEI_TYPE_BUFFERING_PERIOD: > > + case SEI_TYPE_PIC_TIMING: > > + continue; > > + } > > + > > + if (init_get_bits(&gb, &payload.Data[start], > payload.NumBit - start * > > 8) < 0) > > + av_log(avctx, AV_LOG_ERROR, "Error initializing > bitstream reader > > SEI type: %d Numbits %d error: %d\n", payload.Type, > payload.NumBit, ret); > > + else { > > + ret = ff_h264_sei_decode(&sei, &gb, NULL, avctx); > > + > > + if (ret < 0) > > + av_log(avctx, AV_LOG_WARNING, "Failed to parse SEI > type: > > %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); > > + else > > + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d > Numbits > > %d\n", payload.Type, payload.NumBit); > > + } > > + } > > + > > + if (out) > > + return ff_h264_set_sei_to_frame(avctx, &sei, out, NULL, > 0); > > + > > + return 0; > > +} > > + > > +static int parse_sei_hevc(AVCodecContext* avctx, QSVContext* q, > QSVFrame* > > out) > > +{ > > + HEVCSEI sei = { 0 }; > > + HEVCParamSets ps = { 0 }; > > + GetBitContext gb = { 0 }; > > + mfxPayload payload = { 0, .Data = &q->payload_buffer[0], > .BufSize = > > sizeof(q->payload_buffer) - AV_INPUT_BUFFER_PADDING_SIZE }; > > + mfxFrameSurface1 *surface = &out->surface; > > + mfxU64 ts; > > + int ret, has_logged = 0; > > + > > + while (1) { > > + int start; > > + memset(payload.Data, 0, payload.BufSize); > > + > > + ret = MFXVideoDECODE_GetPayload(q->session, &ts, > &payload); > > + if (ret == MFX_ERR_NOT_ENOUGH_BUFFER) { > > + av_log(avctx, AV_LOG_WARNING, "Warning: Insufficient > buffer on > > GetPayload(). 
Size: %"PRIu64" Needed: %d\n", sizeof(q- > >payload_buffer), > > payload.BufSize); > > + return 0; > > + } > > + if (ret != MFX_ERR_NONE) > > + return ret; > > + > > + if (payload.NumBit == 0 || payload.NumBit >= > payload.BufSize * 8) > > + break; > > + > > + if (!has_logged) { > > + has_logged = 1; > > + av_log(avctx, AV_LOG_VERBOSE, "----------------------- > --------- > > ---------\n"); > > + av_log(avctx, AV_LOG_VERBOSE, "Start reading SEI - > payload > > timestamp: %llu - surface timestamp: %llu\n", ts, surface- > >Data.TimeStamp); > > + } > > + > > + if (ts != surface->Data.TimeStamp) { > > + av_log(avctx, AV_LOG_WARNING, "GetPayload timestamp > (%llu) does > > not match surface timestamp: (%llu)\n", ts, surface- > >Data.TimeStamp); > > + } > > + > > + start = find_start_offset(payload.Data); > > + > > + av_log(avctx, AV_LOG_VERBOSE, "parsing SEI type: %3d > Numbits > > %3d Start: %d\n", payload.Type, payload.NumBit, start); > > + > > + switch (payload.Type) { > > + case SEI_TYPE_BUFFERING_PERIOD: > > + case SEI_TYPE_PIC_TIMING: > > + continue; > > + case SEI_TYPE_MASTERING_DISPLAY_COLOUR_VOLUME: > > + // There seems to be a bug in MSDK > > + payload.NumBit -= 8; > > + > > + break; > > + case SEI_TYPE_CONTENT_LIGHT_LEVEL_INFO: > > + // There seems to be a bug in MSDK > > + payload.NumBit = 48; > > + > > + break; > > + case SEI_TYPE_USER_DATA_REGISTERED_ITU_T_T35: > > + // There seems to be a bug in MSDK > > + if (payload.NumBit == 552) > > + payload.NumBit = 528; > > + break; > > + } > > + > > + if (init_get_bits(&gb, &payload.Data[start], > payload.NumBit - start * > > 8) < 0) > > + av_log(avctx, AV_LOG_ERROR, "Error initializing > bitstream reader > > SEI type: %d Numbits %d error: %d\n", payload.Type, > payload.NumBit, ret); > > + else { > > + ret = ff_hevc_decode_nal_sei(&gb, avctx, &sei, &ps, > > HEVC_NAL_SEI_PREFIX); > > + > > + if (ret < 0) > > + av_log(avctx, AV_LOG_WARNING, "Failed to parse SEI > type: > > %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); > > + else > > + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d > Numbits > > %d\n", payload.Type, payload.NumBit); > > + } > > + } > > + > > + if (has_logged) { > > + av_log(avctx, AV_LOG_VERBOSE, "End reading SEI\n"); > > + } > > + > > + if (out && out->frame) > > + return ff_hevc_set_sei_to_frame(avctx, &sei, out->frame, > avctx- > > >framerate, 0, &ps.sps->vui, ps.sps->bit_depth, ps.sps- > >bit_depth_chroma); > > I got segfault when trying your patchset, > > Thread 1 "ffmpeg" received signal SIGSEGV, Segmentation fault. > 0x00007ffff67c0497 in parse_sei_hevc > (avctx=avctx@entry=0x5555555e4280, q=q@entry=0x555555625288, > out=out@entry=0x5555559b6f80) at libavcodec/qsvdec.c:777 > 777 return ff_hevc_set_sei_to_frame(avctx, &sei, out- > >frame, avctx->framerate, 0, &ps.sps->vui, ps.sps->bit_depth, ps.sps- > >bit_depth_chroma); > (gdb) bt > #0 0x00007ffff67c0497 in parse_sei_hevc > (avctx=avctx@entry=0x5555555e4280, q=q@entry=0x555555625288, > out=out@entry=0x5555559b6f80) at libavcodec/qsvdec.c:777 > #1 0x00007ffff67c1afe in qsv_decode > (avctx=avctx@entry=0x5555555e4280, q=q@entry=0x555555625288, > frame=frame@entry=0x5555556df740, > got_frame=got_frame@entry=0x7fffffffd6bc, > avpkt=avpkt@entry=0x555555635398) at libavcodec/qsvdec.c:1020 > BTW the SDK provides support for hevc HDR metadata, we needn't parse > SEI payload > in qsvdec and may get the corresponding info from the SDK, see > https://ffmpeg.org/pipermail/ffmpeg-devel/2022-November/304142.html I know. 
I was the one who had requested this to be added to MSDK :-) But it's just one small part of SEI information, it's limited to the latest MSDK versions and I'm not sure whether it's working as reliably as this implementation. This would need to be tested. You should still have access to the repo with the test files for demoing the offset problems (which my patchset is working around). But it's also about dynamic HDR data, dovi data, etc. - which MSDK doesn't provide, so this single bit of SEI data doesn't help much. Best regards, softworkz _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
* Re: [FFmpeg-devel] [PATCH v6 3/3] avcodec/qsvdec: Implement SEI parsing for QSV decoders 2022-11-21 15:58 ` Soft Works @ 2022-11-22 5:41 ` Xiang, Haihao 0 siblings, 0 replies; 65+ messages in thread From: Xiang, Haihao @ 2022-11-22 5:41 UTC (permalink / raw) To: ffmpeg-devel, softworkz Cc: kierank, haihao.xiang-at-intel.com, andreas.rheinhardt On Mon, 2022-11-21 at 15:58 +0000, Soft Works wrote: > > -----Original Message----- > > From: Xiang, Haihao <haihao.xiang@intel.com> > > Sent: Monday, November 21, 2022 3:45 AM > > To: ffmpeg-devel@ffmpeg.org > > Cc: softworkz@hotmail.com; kierank@obe.tv; haihao.xiang-at- > > intel.com@ffmpeg.org; andreas.rheinhardt@outlook.com > > Subject: Re: [FFmpeg-devel] [PATCH v6 3/3] avcodec/qsvdec: Implement > > SEI parsing for QSV decoders > > > > On Tue, 2022-10-25 at 04:03 +0000, softworkz wrote: > > > From: softworkz <softworkz@hotmail.com> > > > > > > Signed-off-by: softworkz <softworkz@hotmail.com> > > > --- > > > libavcodec/Makefile | 2 +- > > > libavcodec/qsvdec.c | 321 > > ++++++++++++++++++++++++++++++++++++++++++++ > > > 2 files changed, 322 insertions(+), 1 deletion(-) > > > > > > diff --git a/libavcodec/Makefile b/libavcodec/Makefile > > > index 90c7f113a3..cbddbb0ace 100644 > > > --- a/libavcodec/Makefile > > > +++ b/libavcodec/Makefile > > > @@ -146,7 +146,7 @@ OBJS-$(CONFIG_MSS34DSP) += > > mss34dsp.o > > > OBJS-$(CONFIG_PIXBLOCKDSP) += pixblockdsp.o > > > OBJS-$(CONFIG_QPELDSP) += qpeldsp.o > > > OBJS-$(CONFIG_QSV) += qsv.o > > > -OBJS-$(CONFIG_QSVDEC) += qsvdec.o > > > +OBJS-$(CONFIG_QSVDEC) += qsvdec.o h264_sei.o > > hevc_sei.o > > > OBJS-$(CONFIG_QSVENC) += qsvenc.o > > > OBJS-$(CONFIG_RANGECODER) += rangecoder.o > > > OBJS-$(CONFIG_RDFT) += rdft.o > > > diff --git a/libavcodec/qsvdec.c b/libavcodec/qsvdec.c > > > index 73405b5747..467a248224 100644 > > > --- a/libavcodec/qsvdec.c > > > +++ b/libavcodec/qsvdec.c > > > @@ -41,6 +41,7 @@ > > > #include "libavutil/time.h" > > > #include "libavutil/imgutils.h" > > > #include "libavutil/film_grain_params.h" > > > +#include <libavutil/reverse.h> > > > > > > #include "avcodec.h" > > > #include "codec_internal.h" > > > @@ -49,6 +50,9 @@ > > > #include "hwconfig.h" > > > #include "qsv.h" > > > #include "qsv_internal.h" > > > +#include "h264_sei.h" > > > +#include "hevc_ps.h" > > > +#include "hevc_sei.h" > > > > > > #if QSV_ONEVPL > > > #include <mfxdispatcher.h> > > > @@ -66,6 +70,8 @@ static const AVRational mfx_tb = { 1, 90000 }; > > > AV_NOPTS_VALUE : pts_tb.num ? 
\ > > > av_rescale_q(mfx_pts, mfx_tb, pts_tb) : mfx_pts) > > > > > > +#define PAYLOAD_BUFFER_SIZE 65535 > > > + > > > typedef struct QSVAsyncFrame { > > > mfxSyncPoint *sync; > > > QSVFrame *frame; > > > @@ -107,6 +113,9 @@ typedef struct QSVContext { > > > > > > mfxExtBuffer **ext_buffers; > > > int nb_ext_buffers; > > > + > > > + mfxU8 payload_buffer[PAYLOAD_BUFFER_SIZE]; > > > + AVBufferRef *a53_buf_ref; > > > } QSVContext; > > > > > > static const AVCodecHWConfigInternal *const qsv_hw_configs[] = { > > > @@ -628,6 +637,299 @@ static int > > qsv_export_film_grain(AVCodecContext *avctx, > > > mfxExtAV1FilmGrainParam > > > } > > > #endif > > > > > > +static int find_start_offset(mfxU8 data[4]) > > > +{ > > > + if (data[0] == 0 && data[1] == 0 && data[2] == 1) > > > + return 3; > > > + > > > + if (data[0] == 0 && data[1] == 0 && data[2] == 0 && data[3] == > > 1) > > > + return 4; > > > + > > > + return 0; > > > +} > > > + > > > +static int parse_sei_h264(AVCodecContext* avctx, QSVContext* q, > > AVFrame* out) > > > +{ > > > + H264SEIContext sei = { 0 }; > > > + GetBitContext gb = { 0 }; > > > + mfxPayload payload = { 0, .Data = &q->payload_buffer[0], > > .BufSize = > > > sizeof(q->payload_buffer) - AV_INPUT_BUFFER_PADDING_SIZE }; > > > + mfxU64 ts; > > > + int ret; > > > + > > > + while (1) { > > > + int start; > > > + memset(payload.Data, 0, payload.BufSize); > > > + > > > + ret = MFXVideoDECODE_GetPayload(q->session, &ts, > > &payload); > > > + if (ret == MFX_ERR_NOT_ENOUGH_BUFFER) { > > > + av_log(avctx, AV_LOG_WARNING, "Warning: Insufficient > > buffer on > > > GetPayload(). Size: %"PRIu64" Needed: %d\n", sizeof(q- > > > payload_buffer), > > > payload.BufSize); > > > + return 0; > > > + } > > > + if (ret != MFX_ERR_NONE) > > > + return ret; > > > + > > > + if (payload.NumBit == 0 || payload.NumBit >= > > payload.BufSize * 8) > > > + break; > > > + > > > + start = find_start_offset(payload.Data); > > > + > > > + switch (payload.Type) { > > > + case SEI_TYPE_BUFFERING_PERIOD: > > > + case SEI_TYPE_PIC_TIMING: > > > + continue; > > > + } > > > + > > > + if (init_get_bits(&gb, &payload.Data[start], > > payload.NumBit - start * > > > 8) < 0) > > > + av_log(avctx, AV_LOG_ERROR, "Error initializing > > bitstream reader > > > SEI type: %d Numbits %d error: %d\n", payload.Type, > > payload.NumBit, ret); > > > + else { > > > + ret = ff_h264_sei_decode(&sei, &gb, NULL, avctx); > > > + > > > + if (ret < 0) > > > + av_log(avctx, AV_LOG_WARNING, "Failed to parse SEI > > type: > > > %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); > > > + else > > > + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d > > Numbits > > > %d\n", payload.Type, payload.NumBit); > > > + } > > > + } > > > + > > > + if (out) > > > + return ff_h264_set_sei_to_frame(avctx, &sei, out, NULL, > > 0); > > > + > > > + return 0; > > > +} > > > + > > > +static int parse_sei_hevc(AVCodecContext* avctx, QSVContext* q, > > QSVFrame* > > > out) > > > +{ > > > + HEVCSEI sei = { 0 }; > > > + HEVCParamSets ps = { 0 }; > > > + GetBitContext gb = { 0 }; > > > + mfxPayload payload = { 0, .Data = &q->payload_buffer[0], > > .BufSize = > > > sizeof(q->payload_buffer) - AV_INPUT_BUFFER_PADDING_SIZE }; > > > + mfxFrameSurface1 *surface = &out->surface; > > > + mfxU64 ts; > > > + int ret, has_logged = 0; > > > + > > > + while (1) { > > > + int start; > > > + memset(payload.Data, 0, payload.BufSize); > > > + > > > + ret = MFXVideoDECODE_GetPayload(q->session, &ts, > > &payload); > > > + if (ret == MFX_ERR_NOT_ENOUGH_BUFFER) { > > > + 
av_log(avctx, AV_LOG_WARNING, "Warning: Insufficient > > buffer on > > > GetPayload(). Size: %"PRIu64" Needed: %d\n", sizeof(q- > > > payload_buffer), > > > payload.BufSize); > > > + return 0; > > > + } > > > + if (ret != MFX_ERR_NONE) > > > + return ret; > > > + > > > + if (payload.NumBit == 0 || payload.NumBit >= > > payload.BufSize * 8) > > > + break; > > > + > > > + if (!has_logged) { > > > + has_logged = 1; > > > + av_log(avctx, AV_LOG_VERBOSE, "----------------------- > > --------- > > > ---------\n"); > > > + av_log(avctx, AV_LOG_VERBOSE, "Start reading SEI - > > payload > > > timestamp: %llu - surface timestamp: %llu\n", ts, surface- > > > Data.TimeStamp); > > > + } > > > + > > > + if (ts != surface->Data.TimeStamp) { > > > + av_log(avctx, AV_LOG_WARNING, "GetPayload timestamp > > (%llu) does > > > not match surface timestamp: (%llu)\n", ts, surface- > > > Data.TimeStamp); > > > + } > > > + > > > + start = find_start_offset(payload.Data); > > > + > > > + av_log(avctx, AV_LOG_VERBOSE, "parsing SEI type: %3d > > Numbits > > > %3d Start: %d\n", payload.Type, payload.NumBit, start); > > > + > > > + switch (payload.Type) { > > > + case SEI_TYPE_BUFFERING_PERIOD: > > > + case SEI_TYPE_PIC_TIMING: > > > + continue; > > > + case SEI_TYPE_MASTERING_DISPLAY_COLOUR_VOLUME: > > > + // There seems to be a bug in MSDK > > > + payload.NumBit -= 8; > > > + > > > + break; > > > + case SEI_TYPE_CONTENT_LIGHT_LEVEL_INFO: > > > + // There seems to be a bug in MSDK > > > + payload.NumBit = 48; > > > + > > > + break; > > > + case SEI_TYPE_USER_DATA_REGISTERED_ITU_T_T35: > > > + // There seems to be a bug in MSDK > > > + if (payload.NumBit == 552) > > > + payload.NumBit = 528; > > > + break; > > > + } > > > + > > > + if (init_get_bits(&gb, &payload.Data[start], > > payload.NumBit - start * > > > 8) < 0) > > > + av_log(avctx, AV_LOG_ERROR, "Error initializing > > bitstream reader > > > SEI type: %d Numbits %d error: %d\n", payload.Type, > > payload.NumBit, ret); > > > + else { > > > + ret = ff_hevc_decode_nal_sei(&gb, avctx, &sei, &ps, > > > HEVC_NAL_SEI_PREFIX); > > > + > > > + if (ret < 0) > > > + av_log(avctx, AV_LOG_WARNING, "Failed to parse SEI > > type: > > > %d Numbits %d error: %d\n", payload.Type, payload.NumBit, ret); > > > + else > > > + av_log(avctx, AV_LOG_DEBUG, "mfxPayload Type: %d > > Numbits > > > %d\n", payload.Type, payload.NumBit); > > > + } > > > + } > > > + > > > + if (has_logged) { > > > + av_log(avctx, AV_LOG_VERBOSE, "End reading SEI\n"); > > > + } > > > + > > > + if (out && out->frame) > > > + return ff_hevc_set_sei_to_frame(avctx, &sei, out->frame, > > avctx- > > > > framerate, 0, &ps.sps->vui, ps.sps->bit_depth, ps.sps- > > > bit_depth_chroma); > > > > I got segfault when trying your patchset, > > > > Thread 1 "ffmpeg" received signal SIGSEGV, Segmentation fault. 
> > 0x00007ffff67c0497 in parse_sei_hevc > > (avctx=avctx@entry=0x5555555e4280, q=q@entry=0x555555625288, > > out=out@entry=0x5555559b6f80) at libavcodec/qsvdec.c:777 > > 777 return ff_hevc_set_sei_to_frame(avctx, &sei, out- > > > frame, avctx->framerate, 0, &ps.sps->vui, ps.sps->bit_depth, ps.sps- > > > bit_depth_chroma); > > (gdb) bt > > #0 0x00007ffff67c0497 in parse_sei_hevc > > (avctx=avctx@entry=0x5555555e4280, q=q@entry=0x555555625288, > > out=out@entry=0x5555559b6f80) at libavcodec/qsvdec.c:777 > > #1 0x00007ffff67c1afe in qsv_decode > > (avctx=avctx@entry=0x5555555e4280, q=q@entry=0x555555625288, > > frame=frame@entry=0x5555556df740, > > got_frame=got_frame@entry=0x7fffffffd6bc, > > avpkt=avpkt@entry=0x555555635398) at libavcodec/qsvdec.c:1020 > > BTW the SDK provides support for hevc HDR metadata, we needn't parse > > SEI payload > > in qsvdec and may get the corresponding info from the SDK, see > > https://ffmpeg.org/pipermail/ffmpeg-devel/2022-November/304142.html > > I know. I was the one who had requested this to be added to MSDK :-) > > But it's just one small part of SEI information, it's limited to > the latest MSDK versions and I'm not sure whether it's working > as reliably as this implementation. This would need to be tested. > > You should still have access to the repo with the test files for > demoing the offset problems (which my patchset is working around). I worked out a patchset (https://github.com/intel-media-ci/ffmpeg/pull/518), including https://ffmpeg.org/pipermail/ffmpeg-devel/2022-November/304142.html and others. I can use your command (with some changes) to convert all the HDR clips in your repo to SDR, except for one clip (with or without my changes, there is a GPU hang issue with that clip). We would only need a small update to support AV1 HDR in the future if https://ffmpeg.org/pipermail/ffmpeg-devel/2022-November/304142.html is applied. Note the command below doesn't work with all test files in your repo with your patchset v6. $ ffmpeg -hwaccel qsv -c:v hevc_qsv -i input.mp4 -f null - Thanks Haihao > > But it's also about dynamic HDR data, dovi data, etc. - which > MSDK doesn't provide, so this single bit of SEI data doesn't > help much. > > Best regards, > softworkz > > > _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
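For comparison, the SDK route referred to above would have qsvdec translate HDR values reported by the runtime into the usual AVFrame side data instead of re-parsing SEI. A rough sketch of the FFmpeg-side half of that translation; the mfx_* parameters are placeholders for fields read from the SDK's extension buffers, and their names, widths and scaling are assumptions rather than anything taken from either patchset:

    #include <stdint.h>
    #include <libavutil/error.h>
    #include <libavutil/frame.h>
    #include <libavutil/mastering_display_metadata.h>

    /* Attach mastering-display and content-light metadata to a frame.
     * The mfx_* inputs stand in for whatever the decoder runtime reports;
     * the 1/50000 and 1/10000 scales follow the HEVC SEI semantics. */
    static int attach_hdr_side_data(AVFrame *frame,
                                    const uint16_t mfx_primaries_x[3],
                                    const uint16_t mfx_primaries_y[3],
                                    uint16_t mfx_white_x, uint16_t mfx_white_y,
                                    uint32_t mfx_max_lum, uint32_t mfx_min_lum,
                                    uint16_t mfx_max_cll, uint16_t mfx_max_fall)
    {
        AVMasteringDisplayMetadata *mdm = av_mastering_display_metadata_create_side_data(frame);
        AVContentLightMetadata     *clm = av_content_light_metadata_create_side_data(frame);
        if (!mdm || !clm)
            return AVERROR(ENOMEM);

        for (int i = 0; i < 3; i++) {
            mdm->display_primaries[i][0] = av_make_q(mfx_primaries_x[i], 50000);
            mdm->display_primaries[i][1] = av_make_q(mfx_primaries_y[i], 50000);
        }
        mdm->white_point[0] = av_make_q(mfx_white_x, 50000);
        mdm->white_point[1] = av_make_q(mfx_white_y, 50000);
        mdm->max_luminance  = av_make_q(mfx_max_lum, 10000);
        mdm->min_luminance  = av_make_q(mfx_min_lum, 10000);
        mdm->has_primaries  = 1;
        mdm->has_luminance  = 1;

        clm->MaxCLL  = mfx_max_cll;
        clm->MaxFALL = mfx_max_fall;
        return 0;
    }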
* Re: [FFmpeg-devel] [PATCH 0/6] Implement SEI parsing for QSV decoders 2022-05-26 8:08 [FFmpeg-devel] [PATCH 0/6] Implement SEI parsing for QSV decoders ffmpegagent ` (6 preceding siblings ...) 2022-06-01 9:06 ` [FFmpeg-devel] [PATCH v2 0/6] " ffmpegagent @ 2022-06-01 19:15 ` Kieran Kunhya 2022-06-01 19:46 ` Soft Works 7 siblings, 1 reply; 65+ messages in thread From: Kieran Kunhya @ 2022-06-01 19:15 UTC (permalink / raw) To: FFmpeg development discussions and patches; +Cc: softworkz On Thu, 26 May 2022 at 09:09, ffmpegagent <ffmpegagent@gmail.com> wrote: > But that doesn't help. Those bugs exist and I'm sharing my workarounds, > which are empirically determined by testing a range of files. If someone is > interested, I can provide private access to a repository where we have been > testing this. Alternatively, I could also leave those workarounds out, and > just skip those SEI types. > I don't care much for QSV but I would say b-frame reordering heuristics like the one you are using may not necessarily catch all the possible structures in the wild from third-party encoders. VLC had (has?) heuristics for this which would cause captions to not be frame accurate. Kieran _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
* Re: [FFmpeg-devel] [PATCH 0/6] Implement SEI parsing for QSV decoders 2022-06-01 19:15 ` [FFmpeg-devel] [PATCH 0/6] " Kieran Kunhya @ 2022-06-01 19:46 ` Soft Works 2022-06-01 20:25 ` Kieran Kunhya 0 siblings, 1 reply; 65+ messages in thread From: Soft Works @ 2022-06-01 19:46 UTC (permalink / raw) To: Kieran Kunhya, FFmpeg development discussions and patches From: Kieran Kunhya <kierank@obe.tv> Sent: Wednesday, June 1, 2022 9:16 PM To: FFmpeg development discussions and patches <ffmpeg-devel@ffmpeg.org> Cc: softworkz <softworkz@hotmail.com> Subject: Re: [FFmpeg-devel] [PATCH 0/6] Implement SEI parsing for QSV decoders On Thu, 26 May 2022 at 09:09, ffmpegagent <ffmpegagent@gmail.com> wrote: But that doesn't help. Those bugs exist and I'm sharing my workarounds, which are empirically determined by testing a range of files. If someone is interested, I can provide private access to a repository where we have been testing this. Alternatively, I could also leave those workarounds out, and just skip those SEI types. I don't care much for QSV but I would say b-frame reordering heuristics like the one you are using may not necessarily catch all the possible structures in the wild from third-party encoders. I am not using any b-frame reordering heuristics; I just take the payloads in the order in which MSDK (QSV) provides them, and it has turned out that they are already reordered according to display order. I did some detailed analysis of files with out-of-order B-frames: https://gist.github.com/softworkz/36c49586a8610813a32270ee3947a932 Did you take a look? VLC had (has?) heuristics for this which would cause captions to not be frame accurate. Captions aren’t exactly “frame accurate” anyway as each frame has just a very small piece of information and only when a certain sequence is complete, it leads to some new letters or line being ready for display. Out-of-order delivery would definitely break this, but I haven’t seen any such case. The code I’m submitting has been in testing for quite a while with a bunch of users; many files and TV streams with MPEG-2 Video, H264 and HEVC were tested. You could still be right that there is such a case. A while ago I dug through https://streams.videolan.org/ and downloaded all samples that seemed to have CC, but maybe I missed one for the case you’re talking about. Do you have an idea where/how I could find such stream? Thanks, softworkz _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
* Re: [FFmpeg-devel] [PATCH 0/6] Implement SEI parsing for QSV decoders 2022-06-01 19:46 ` Soft Works @ 2022-06-01 20:25 ` Kieran Kunhya 2022-06-01 21:24 ` Soft Works 0 siblings, 1 reply; 65+ messages in thread From: Kieran Kunhya @ 2022-06-01 20:25 UTC (permalink / raw) To: Soft Works; +Cc: Kieran Kunhya, FFmpeg development discussions and patches > > Captions aren’t exactly “frame accurate” anyway as each frame has just a > very small piece > > of information and only when a certain sequence is complete, it leads to > some new letters > > or line being ready for display. > In many use-cases, you want them to be frame-accurate. Final rendition to the viewer is only one use-case of FFmpeg (to be fair, the likely use-case of QSV). And you don't want errors accumulating across encode cycles. > Do you have an idea where/how I could find such stream? > You would need to record them off various television services as they use a wide range of encoder manufacturers. Ideally, you could also inject them into various encoders and force them to build complex reordering patterns and check they are frame accurate. Regards, Kieran > _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
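One way to run the kind of check Kieran suggests, without relying on visual inspection: decode the same stream once with the software decoder and once with the QSV decoder and compare, frame by frame, which pts carries how many caption bytes. A minimal sketch of the per-packet logging, assuming an already opened decoder context (the surrounding demux loop is left out):

    #include <inttypes.h>
    #include <stdio.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/frame.h>

    /* Feed one packet and log pts + attached caption size for each frame.
     * Diffing this output between e.g. h264 and h264_qsv shows whether
     * captions stay attached to the same display-ordered frames. */
    static int log_cc_for_packet(AVCodecContext *dec, const AVPacket *pkt, AVFrame *frame)
    {
        int ret = avcodec_send_packet(dec, pkt);
        if (ret < 0)
            return ret;

        while ((ret = avcodec_receive_frame(dec, frame)) >= 0) {
            const AVFrameSideData *sd = av_frame_get_side_data(frame, AV_FRAME_DATA_A53_CC);
            printf("pts=%" PRId64 " cc_bytes=%zu\n",
                   frame->pts, sd ? (size_t)sd->size : (size_t)0);
            av_frame_unref(frame);
        }
        return (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) ? 0 : ret;
    }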
* Re: [FFmpeg-devel] [PATCH 0/6] Implement SEI parsing for QSV decoders 2022-06-01 20:25 ` Kieran Kunhya @ 2022-06-01 21:24 ` Soft Works 0 siblings, 0 replies; 65+ messages in thread From: Soft Works @ 2022-06-01 21:24 UTC (permalink / raw) To: Kieran Kunhya; +Cc: FFmpeg development discussions and patches From: Kieran Kunhya <kierank@obe.tv> Sent: Wednesday, June 1, 2022 10:26 PM To: Soft Works <softworkz@hotmail.com> Cc: Kieran Kunhya <kierank@obe.tv>; FFmpeg development discussions and patches <ffmpeg-devel@ffmpeg.org> Subject: Re: [FFmpeg-devel] [PATCH 0/6] Implement SEI parsing for QSV decoders Captions aren’t exactly “frame accurate” anyway as each frame has just a very small piece of information and only when a certain sequence is complete, it leads to some new letters or line being ready for display. And you don't want errors accumulating across encode cycles. No such errors have been seen at any time; CC extraction behavior doesn’t appear to be any different than for other ffmpeg decoders (mpeg2, h264, hevc in software, and the same three with NVDEC, VAAPI and D3D11VA). In many use-cases, you want them to be frame-accurate. Final rendition to the viewer is only one use-case of FFmpeg (to be fair, the likely use-case of QSV) What this patchset provides for CC is: * QSV decoders create and attach CC side data to AVFrames * QSV filters preserve CC side data from input to output * in an earlier patch we already added the ability to attach CC data when using QSV encoders This allows, for example, QSV hw decoding, filtering and encoding where CC data is preserved in the output video. In combination with my Subtitle Filtering patchset, you can do almost anything you like with closed captions. You can work with them like any other subtitle data and process them with the new filters, e.g. manipulate the text content, change styles and appearance, like font sizes, colors, outlines, background etc. Then you can burn these into the video, or encode them in an arbitrary text subtitle format. There are many possibilities… Just one example: [screenshot attachment: image001.png] Do you have an idea where/how I could find such stream? You would need to record them off various television services as they use a wide range of encoder manufacturers. Ideally, you could also inject them into various encoders and force them to build complex reordering patterns and check they are frame accurate. We have users from all over the US with different tuners and on different networks; there has never been an issue. What I meant is the “VLC heuristics” subject you mentioned: do you have a pointer to a commit, issue, bug report, or simply a name for what they call this? Thanks again, sw _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe". ^ permalink raw reply [flat|nested] 65+ messages in thread
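On the encoder side of the workflow described above, captions travel the same way, as AV_FRAME_DATA_A53_CC side data on the frames handed to the encoder. A minimal sketch of attaching caption bytes from application code; illustrative only, the decoders and filters in this patchset do the equivalent internally:

    #include <string.h>
    #include <libavutil/error.h>
    #include <libavutil/frame.h>

    /* Attach A53 Part 4 caption bytes to a frame before encoding.
     * cc_data/cc_size are whatever the application has collected. */
    static int attach_cc(AVFrame *frame, const uint8_t *cc_data, size_t cc_size)
    {
        AVFrameSideData *sd = av_frame_new_side_data(frame, AV_FRAME_DATA_A53_CC, cc_size);
        if (!sd)
            return AVERROR(ENOMEM);
        memcpy(sd->data, cc_data, cc_size);
        return 0;
    }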
end of thread, other threads:[~2022-11-22 5:42 UTC | newest] Thread overview: 65+ messages (download: mbox.gz / follow: Atom feed) -- links below jump to the message on this page -- 2022-05-26 8:08 [FFmpeg-devel] [PATCH 0/6] Implement SEI parsing for QSV decoders ffmpegagent 2022-05-26 8:08 ` [FFmpeg-devel] [PATCH 1/6] avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() softworkz 2022-05-27 14:35 ` Soft Works 2022-05-26 8:08 ` [FFmpeg-devel] [PATCH 2/6] avcodec/vpp_qsv: Copy side data from input to output frame softworkz 2022-05-31 9:19 ` Xiang, Haihao 2022-05-26 8:08 ` [FFmpeg-devel] [PATCH 3/6] avcodec/mpeg12dec: make mpeg_decode_user_data() accessible softworkz 2022-05-31 9:24 ` Xiang, Haihao 2022-05-26 8:08 ` [FFmpeg-devel] [PATCH 4/6] avcodec/hevcdec: make set_side_data() accessible softworkz 2022-05-31 9:38 ` Xiang, Haihao 2022-05-31 16:03 ` Soft Works 2022-05-31 9:40 ` Xiang, Haihao 2022-05-26 8:08 ` [FFmpeg-devel] [PATCH 5/6] avcodec/h264dec: make h264_export_frame_props() accessible softworkz 2022-05-26 8:08 ` [FFmpeg-devel] [PATCH 6/6] avcodec/qsvdec: Implement SEI parsing for QSV decoders softworkz 2022-06-01 5:15 ` Xiang, Haihao 2022-06-01 8:51 ` Soft Works 2022-06-01 9:06 ` [FFmpeg-devel] [PATCH v2 0/6] " ffmpegagent 2022-06-01 9:06 ` [FFmpeg-devel] [PATCH v2 1/6] avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() softworkz 2022-06-01 9:06 ` [FFmpeg-devel] [PATCH v2 2/6] avcodec/vpp_qsv: Copy side data from input to output frame softworkz 2022-06-01 9:06 ` [FFmpeg-devel] [PATCH v2 3/6] avcodec/mpeg12dec: make mpeg_decode_user_data() accessible softworkz 2022-06-01 9:06 ` [FFmpeg-devel] [PATCH v2 4/6] avcodec/hevcdec: make set_side_data() accessible softworkz 2022-06-01 9:06 ` [FFmpeg-devel] [PATCH v2 5/6] avcodec/h264dec: make h264_export_frame_props() accessible softworkz 2022-06-01 9:06 ` [FFmpeg-devel] [PATCH v2 6/6] avcodec/qsvdec: Implement SEI parsing for QSV decoders softworkz 2022-06-01 17:20 ` Xiang, Haihao 2022-06-01 17:50 ` Soft Works 2022-06-01 18:01 ` [FFmpeg-devel] [PATCH v3 0/6] " ffmpegagent 2022-06-01 18:01 ` [FFmpeg-devel] [PATCH v3 1/6] avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() softworkz 2022-06-24 7:01 ` Xiang, Haihao 2022-06-26 23:35 ` Soft Works 2022-06-01 18:01 ` [FFmpeg-devel] [PATCH v3 2/6] avcodec/vpp_qsv: Copy side data from input to output frame softworkz 2022-06-01 18:01 ` [FFmpeg-devel] [PATCH v3 3/6] avcodec/mpeg12dec: make mpeg_decode_user_data() accessible softworkz 2022-06-01 18:01 ` [FFmpeg-devel] [PATCH v3 4/6] avcodec/hevcdec: make set_side_data() accessible softworkz 2022-06-01 18:01 ` [FFmpeg-devel] [PATCH v3 5/6] avcodec/h264dec: make h264_export_frame_props() accessible softworkz 2022-06-01 18:01 ` [FFmpeg-devel] [PATCH v3 6/6] avcodec/qsvdec: Implement SEI parsing for QSV decoders softworkz 2022-06-26 23:41 ` [FFmpeg-devel] [PATCH v4 0/6] " ffmpegagent 2022-06-26 23:41 ` [FFmpeg-devel] [PATCH v4 1/6] avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() softworkz 2022-06-26 23:41 ` [FFmpeg-devel] [PATCH v4 2/6] avcodec/vpp_qsv: Copy side data from input to output frame softworkz 2022-06-26 23:41 ` [FFmpeg-devel] [PATCH v4 3/6] avcodec/mpeg12dec: make mpeg_decode_user_data() accessible softworkz 2022-06-26 23:41 ` [FFmpeg-devel] [PATCH v4 4/6] avcodec/hevcdec: make set_side_data() accessible softworkz 2022-06-26 23:41 ` [FFmpeg-devel] [PATCH v4 5/6] avcodec/h264dec: make h264_export_frame_props() accessible softworkz 
2022-06-26 23:41 ` [FFmpeg-devel] [PATCH v4 6/6] avcodec/qsvdec: Implement SEI parsing for QSV decoders softworkz 2022-06-28 4:16 ` Andreas Rheinhardt 2022-06-28 5:25 ` Soft Works 2022-06-27 4:18 ` [FFmpeg-devel] [PATCH v4 0/6] " Xiang, Haihao 2022-07-01 20:48 ` [FFmpeg-devel] [PATCH v5 " ffmpegagent 2022-07-01 20:48 ` [FFmpeg-devel] [PATCH v5 1/6] avutil/frame: Add av_frame_copy_side_data() and av_frame_remove_all_side_data() softworkz 2022-07-01 20:48 ` [FFmpeg-devel] [PATCH v5 2/6] avcodec/vpp_qsv: Copy side data from input to output frame softworkz 2022-07-01 20:48 ` [FFmpeg-devel] [PATCH v5 3/6] avcodec/mpeg12dec: make mpeg_decode_user_data() accessible softworkz 2022-07-01 20:48 ` [FFmpeg-devel] [PATCH v5 4/6] avcodec/hevcdec: make set_side_data() accessible softworkz 2022-07-01 20:48 ` [FFmpeg-devel] [PATCH v5 5/6] avcodec/h264dec: make h264_export_frame_props() accessible softworkz 2022-07-01 20:48 ` [FFmpeg-devel] [PATCH v5 6/6] avcodec/qsvdec: Implement SEI parsing for QSV decoders softworkz 2022-07-19 6:55 ` [FFmpeg-devel] [PATCH v5 0/6] " Xiang, Haihao 2022-07-21 21:06 ` Soft Works 2022-07-21 21:56 ` Andreas Rheinhardt 2022-10-21 7:42 ` Soft Works 2022-10-25 4:03 ` [FFmpeg-devel] [PATCH v6 0/3] " ffmpegagent 2022-10-25 4:03 ` [FFmpeg-devel] [PATCH v6 1/3] avcodec/hevcdec: factor out ff_hevc_set_set_to_frame softworkz 2022-10-25 4:03 ` [FFmpeg-devel] [PATCH v6 2/3] avcodec/h264dec: make h264_export_frame_props() accessible softworkz 2022-10-25 4:03 ` [FFmpeg-devel] [PATCH v6 3/3] avcodec/qsvdec: Implement SEI parsing for QSV decoders softworkz 2022-11-21 2:44 ` Xiang, Haihao 2022-11-21 15:58 ` Soft Works 2022-11-22 5:41 ` Xiang, Haihao 2022-06-01 19:15 ` [FFmpeg-devel] [PATCH 0/6] " Kieran Kunhya 2022-06-01 19:46 ` Soft Works 2022-06-01 20:25 ` Kieran Kunhya 2022-06-01 21:24 ` Soft Works