Git Inbox Mirror of the ffmpeg-devel mailing list - see https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
* [FFmpeg-devel] [PATCH v7 01/12] avcodec/vaapi_encode: move pic->input_surface initialization to encode_alloc
@ 2024-03-14  8:14 tong1.wu-at-intel.com
  2024-03-14  8:14 ` [FFmpeg-devel] [PATCH v7 02/12] avcodec/vaapi_encode: introduce a base layer for vaapi encode tong1.wu-at-intel.com
                   ` (10 more replies)
  0 siblings, 11 replies; 15+ messages in thread
From: tong1.wu-at-intel.com @ 2024-03-14  8:14 UTC (permalink / raw)
  To: ffmpeg-devel; +Cc: Tong Wu

From: Tong Wu <tong1.wu@intel.com>

When allocating the VAAPIEncodePicture, pic->input_surface can be
initialized right away. This simplifies the send_frame logic and
prepares for moving vaapi_encode_send_frame to the base layer.
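
As a rough illustration (hypothetical, trimmed-down stand-ins for the real
VAAPI/FFmpeg types): for hardware frames the surface handle rides in
frame->data[3], so the allocator can derive input_surface directly instead
of leaving VA_INVALID_ID for send_frame to fill in later:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical, simplified stand-ins for VASurfaceID / AVFrame /
 * VAAPIEncodePicture -- the real types carry many more fields. */
typedef uint32_t VASurfaceID;

typedef struct Frame {
    uint8_t *data[4]; /* data[3] carries the surface handle for hw frames */
} Frame;

typedef struct Picture {
    VASurfaceID input_surface;
} Picture;

/* After this patch, alloc receives the frame and fills input_surface
 * immediately, rather than initializing it to an invalid ID and having
 * send_frame patch it up afterwards. */
static Picture *encode_alloc(const Frame *frame)
{
    Picture *pic = calloc(1, sizeof(*pic));
    if (!pic)
        return NULL;
    pic->input_surface = (VASurfaceID)(uintptr_t)frame->data[3];
    return pic;
}
```

The intermediate (uintptr_t) cast is how an integer surface handle is
round-tripped through the uint8_t *data[] pointer array without
implementation-defined pointer truncation warnings.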

Signed-off-by: Tong Wu <tong1.wu@intel.com>
---
 libavcodec/vaapi_encode.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c
index 808b79c0c7..bd29dbf0b4 100644
--- a/libavcodec/vaapi_encode.c
+++ b/libavcodec/vaapi_encode.c
@@ -878,7 +878,8 @@ static int vaapi_encode_discard(AVCodecContext *avctx,
     return 0;
 }
 
-static VAAPIEncodePicture *vaapi_encode_alloc(AVCodecContext *avctx)
+static VAAPIEncodePicture *vaapi_encode_alloc(AVCodecContext *avctx,
+                                              const AVFrame *frame)
 {
     VAAPIEncodeContext *ctx = avctx->priv_data;
     VAAPIEncodePicture *pic;
@@ -895,7 +896,7 @@ static VAAPIEncodePicture *vaapi_encode_alloc(AVCodecContext *avctx)
         }
     }
 
-    pic->input_surface = VA_INVALID_ID;
+    pic->input_surface = (VASurfaceID)(uintptr_t)frame->data[3];
     pic->recon_surface = VA_INVALID_ID;
     pic->output_buffer = VA_INVALID_ID;
 
@@ -1331,7 +1332,7 @@ static int vaapi_encode_send_frame(AVCodecContext *avctx, AVFrame *frame)
         if (err < 0)
             return err;
 
-        pic = vaapi_encode_alloc(avctx);
+        pic = vaapi_encode_alloc(avctx, frame);
         if (!pic)
             return AVERROR(ENOMEM);
 
@@ -1344,7 +1345,6 @@ static int vaapi_encode_send_frame(AVCodecContext *avctx, AVFrame *frame)
         if (ctx->input_order == 0 || frame->pict_type == AV_PICTURE_TYPE_I)
             pic->force_idr = 1;
 
-        pic->input_surface = (VASurfaceID)(uintptr_t)frame->data[3];
         pic->pts = frame->pts;
         pic->duration = frame->duration;
 
-- 
2.41.0.windows.1

_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".


* [FFmpeg-devel] [PATCH v7 02/12] avcodec/vaapi_encode: introduce a base layer for vaapi encode
  2024-03-14  8:14 [FFmpeg-devel] [PATCH v7 01/12] avcodec/vaapi_encode: move pic->input_surface initialization to encode_alloc tong1.wu-at-intel.com
@ 2024-03-14  8:14 ` tong1.wu-at-intel.com
  2024-04-15  7:29   ` Xiang, Haihao
  2024-03-14  8:14 ` [FFmpeg-devel] [PATCH v7 03/12] avcodec/vaapi_encode: move the dpb logic from VAAPI to base layer tong1.wu-at-intel.com
                   ` (9 subsequent siblings)
  10 siblings, 1 reply; 15+ messages in thread
From: tong1.wu-at-intel.com @ 2024-03-14  8:14 UTC (permalink / raw)
  To: ffmpeg-devel; +Cc: Tong Wu

From: Tong Wu <tong1.wu@intel.com>

Since the VAAPI encoder and the future D3D12VA implementation share some
common parameters, introduce a base-layer encode context which the VAAPI
encode context embeds as its base.
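
A minimal sketch of the layering (hypothetical field names; the real
structs carry far more state): HWBaseEncodeContext is embedded as the
first member of VAAPIEncodeContext, so the same avctx->priv_data pointer
can be viewed through either type, as the diff below does with its paired
base_ctx/ctx locals:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical, trimmed-down versions of the contexts in this patch. */
typedef struct HWBaseEncodeContext {
    int64_t input_order;  /* API-agnostic state, shareable with D3D12VA */
    int     async_depth;
} HWBaseEncodeContext;

typedef struct VAAPIEncodeContext {
    HWBaseEncodeContext base; /* must be the first member, so that a
                               * pointer to the struct is also a valid
                               * pointer to its base (C11 6.7.2.1) */
    unsigned int va_profile;  /* VAAPI-specific state stays here */
} VAAPIEncodeContext;

/* Mirrors the pattern used throughout the diff:
 *   HWBaseEncodeContext *base_ctx = avctx->priv_data;
 *   VAAPIEncodeContext       *ctx = avctx->priv_data;           */
static HWBaseEncodeContext *as_base(void *priv_data)
{
    return (HWBaseEncodeContext *)priv_data;
}
```

Because the base is the first member, both casts address the same
allocation and no offset arithmetic or container_of macro is needed.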

Signed-off-by: Tong Wu <tong1.wu@intel.com>
---
 libavcodec/hw_base_encode.h     | 241 ++++++++++++++++++++
 libavcodec/vaapi_encode.c       | 392 +++++++++++++++++---------------
 libavcodec/vaapi_encode.h       | 198 +---------------
 libavcodec/vaapi_encode_av1.c   |  69 +++---
 libavcodec/vaapi_encode_h264.c  | 197 ++++++++--------
 libavcodec/vaapi_encode_h265.c  | 159 ++++++-------
 libavcodec/vaapi_encode_mjpeg.c |  20 +-
 libavcodec/vaapi_encode_mpeg2.c |  49 ++--
 libavcodec/vaapi_encode_vp8.c   |  24 +-
 libavcodec/vaapi_encode_vp9.c   |  66 +++---
 10 files changed, 764 insertions(+), 651 deletions(-)
 create mode 100644 libavcodec/hw_base_encode.h

diff --git a/libavcodec/hw_base_encode.h b/libavcodec/hw_base_encode.h
new file mode 100644
index 0000000000..41b68aa073
--- /dev/null
+++ b/libavcodec/hw_base_encode.h
@@ -0,0 +1,241 @@
+/*
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef AVCODEC_HW_BASE_ENCODE_H
+#define AVCODEC_HW_BASE_ENCODE_H
+
+#include "libavutil/hwcontext.h"
+#include "libavutil/fifo.h"
+
+#include "avcodec.h"
+
+#define MAX_DPB_SIZE 16
+#define MAX_PICTURE_REFERENCES 2
+#define MAX_REORDER_DELAY 16
+#define MAX_ASYNC_DEPTH 64
+#define MAX_REFERENCE_LIST_NUM 2
+
+static inline const char *ff_hw_base_encode_get_pictype_name(const int type) {
+    const char * const picture_type_name[] = { "IDR", "I", "P", "B" };
+    return picture_type_name[type];
+}
+
+enum {
+    PICTURE_TYPE_IDR = 0,
+    PICTURE_TYPE_I   = 1,
+    PICTURE_TYPE_P   = 2,
+    PICTURE_TYPE_B   = 3,
+};
+
+enum {
+    // Codec supports controlling the subdivision of pictures into slices.
+    FLAG_SLICE_CONTROL         = 1 << 0,
+    // Codec only supports constant quality (no rate control).
+    FLAG_CONSTANT_QUALITY_ONLY = 1 << 1,
+    // Codec is intra-only.
+    FLAG_INTRA_ONLY            = 1 << 2,
+    // Codec supports B-pictures.
+    FLAG_B_PICTURES            = 1 << 3,
+    // Codec supports referencing B-pictures.
+    FLAG_B_PICTURE_REFERENCES  = 1 << 4,
+    // Codec supports non-IDR key pictures (that is, key pictures do
+    // not necessarily empty the DPB).
+    FLAG_NON_IDR_KEY_PICTURES  = 1 << 5,
+};
+
+typedef struct HWBaseEncodePicture {
+    struct HWBaseEncodePicture *next;
+
+    int64_t         display_order;
+    int64_t         encode_order;
+    int64_t         pts;
+    int64_t         duration;
+    int             force_idr;
+
+    void           *opaque;
+    AVBufferRef    *opaque_ref;
+
+    int             type;
+    int             b_depth;
+    int             encode_issued;
+    int             encode_complete;
+
+    AVFrame        *input_image;
+    AVFrame        *recon_image;
+
+    void           *priv_data;
+
+    // Whether this picture is a reference picture.
+    int             is_reference;
+
+    // The contents of the DPB after this picture has been decoded.
+    // This will contain the picture itself if it is a reference picture,
+    // but not if it isn't.
+    int                     nb_dpb_pics;
+    struct HWBaseEncodePicture *dpb[MAX_DPB_SIZE];
+    // The reference pictures used in decoding this picture. If they are
+    // used by later pictures they will also appear in the DPB. ref[0][] for
+    // previous reference frames. ref[1][] for future reference frames.
+    int                     nb_refs[MAX_REFERENCE_LIST_NUM];
+    struct HWBaseEncodePicture *refs[MAX_REFERENCE_LIST_NUM][MAX_PICTURE_REFERENCES];
+    // The previous reference picture in encode order.  Must be in at least
+    // one of the reference lists or the DPB.
+    struct HWBaseEncodePicture *prev;
+    // Reference count for other pictures referring to this one through
+    // the above pointers, directly from incomplete pictures and indirectly
+    // through completed pictures.
+    int             ref_count[2];
+    int             ref_removed[2];
+} HWBaseEncodePicture;
+
+typedef struct HWEncodePictureOperation {
+    // Alloc memory for the picture structure.
+    HWBaseEncodePicture * (*alloc)(AVCodecContext *avctx, const AVFrame *frame);
+    // Issue the picture structure, which sends the frame surface to the HW encode API.
+    int (*issue)(AVCodecContext *avctx, const HWBaseEncodePicture *base_pic);
+    // Get the output AVPacket.
+    int (*output)(AVCodecContext *avctx, const HWBaseEncodePicture *base_pic, AVPacket *pkt);
+    // Free the picture structure.
+    int (*free)(AVCodecContext *avctx, HWBaseEncodePicture *base_pic);
+}  HWEncodePictureOperation;
+
+typedef struct HWBaseEncodeContext {
+    const AVClass *class;
+
+    // Hardware-specific hooks.
+    const struct HWEncodePictureOperation *op;
+
+    // Global options.
+
+    // Number of I frames between IDR frames.
+    int             idr_interval;
+
+    // Desired B frame reference depth.
+    int             desired_b_depth;
+
+    // Explicitly set RC mode (otherwise attempt to pick from
+    // available modes).
+    int             explicit_rc_mode;
+
+    // Explicitly-set QP, for use with the "qp" options.
+    // (Forces CQP mode when set, overriding everything else.)
+    int             explicit_qp;
+
+    // The required size of surfaces.  This is probably the input
+    // size (AVCodecContext.width|height) aligned up to whatever
+    // block size is required by the codec.
+    int             surface_width;
+    int             surface_height;
+
+    // The block size for slice calculations.
+    int             slice_block_width;
+    int             slice_block_height;
+
+    // RC quality level - meaning depends on codec and RC mode.
+    // In CQP mode this sets the fixed quantiser value.
+    int             rc_quality;
+
+    AVBufferRef    *device_ref;
+    AVHWDeviceContext *device;
+
+    // The hardware frame context containing the input frames.
+    AVBufferRef    *input_frames_ref;
+    AVHWFramesContext *input_frames;
+
+    // The hardware frame context containing the reconstructed frames.
+    AVBufferRef    *recon_frames_ref;
+    AVHWFramesContext *recon_frames;
+
+    // Current encoding window, in display (input) order.
+    HWBaseEncodePicture *pic_start, *pic_end;
+    // The next pictures to use as the previous reference picture, in
+    // encoding order, ordered from oldest to newest.
+    HWBaseEncodePicture *next_prev[MAX_PICTURE_REFERENCES];
+    int                  nb_next_prev;
+
+    // Next input order index (display order).
+    int64_t         input_order;
+    // Number of frames that output is behind input.
+    int64_t         output_delay;
+    // Next encode order index.
+    int64_t         encode_order;
+    // Number of frames decode output will need to be delayed.
+    int64_t         decode_delay;
+    // Next output order index (in encode order).
+    int64_t         output_order;
+
+    // Timestamp handling.
+    int64_t         first_pts;
+    int64_t         dts_pts_diff;
+    int64_t         ts_ring[MAX_REORDER_DELAY * 3 +
+                            MAX_ASYNC_DEPTH];
+
+    // Frame type decision.
+    int gop_size;
+    int closed_gop;
+    int gop_per_idr;
+    int p_per_i;
+    int max_b_depth;
+    int b_per_p;
+    int force_idr;
+    int idr_counter;
+    int gop_counter;
+    int end_of_stream;
+    int p_to_gpb;
+
+    // Whether the driver supports ROI at all.
+    int             roi_allowed;
+
+    // The encoder does not support cropping information, so warn about
+    // it the first time we encounter any nonzero crop fields.
+    int             crop_warned;
+    // If the driver does not support ROI then warn the first time we
+    // encounter a frame with ROI side data.
+    int             roi_warned;
+
+    AVFrame         *frame;
+
+    // Whether the HW supports sync buffer function.
+    // If supported, encode_fifo/async_depth will be used together.
+    // Used for output buffer synchronization.
+    int             async_encode;
+
+    // Stores buffered pictures awaiting output.
+    AVFifo          *encode_fifo;
+    // Max number of frames buffered in the encoder.
+    int             async_depth;
+
+    /** Tail data of a pic, now only used for av1 repeat frame header. */
+    AVPacket        *tail_pkt;
+} HWBaseEncodeContext;
+
+#define HW_BASE_ENCODE_COMMON_OPTIONS \
+    { "idr_interval", \
+      "Distance (in I-frames) between key frames", \
+      OFFSET(common.base.idr_interval), AV_OPT_TYPE_INT, \
+      { .i64 = 0 }, 0, INT_MAX, FLAGS }, \
+    { "b_depth", \
+      "Maximum B-frame reference depth", \
+      OFFSET(common.base.desired_b_depth), AV_OPT_TYPE_INT, \
+      { .i64 = 1 }, 1, INT_MAX, FLAGS }, \
+    { "async_depth", "Maximum processing parallelism. " \
+      "Increase this to improve single channel performance.", \
+      OFFSET(common.base.async_depth), AV_OPT_TYPE_INT, \
+      { .i64 = 2 }, 1, MAX_ASYNC_DEPTH, FLAGS }
+
+#endif /* AVCODEC_HW_BASE_ENCODE_H */
diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c
index bd29dbf0b4..4350960248 100644
--- a/libavcodec/vaapi_encode.c
+++ b/libavcodec/vaapi_encode.c
@@ -37,8 +37,6 @@ const AVCodecHWConfigInternal *const ff_vaapi_encode_hw_configs[] = {
     NULL,
 };
 
-static const char * const picture_type_name[] = { "IDR", "I", "P", "B" };
-
 static int vaapi_encode_make_packed_header(AVCodecContext *avctx,
                                            VAAPIEncodePicture *pic,
                                            int type, char *data, size_t bit_len)
@@ -139,22 +137,24 @@ static int vaapi_encode_make_misc_param_buffer(AVCodecContext *avctx,
 static int vaapi_encode_wait(AVCodecContext *avctx,
                              VAAPIEncodePicture *pic)
 {
+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
     VAAPIEncodeContext *ctx = avctx->priv_data;
+    HWBaseEncodePicture *base_pic = (HWBaseEncodePicture*)pic;
     VAStatus vas;
 
-    av_assert0(pic->encode_issued);
+    av_assert0(base_pic->encode_issued);
 
-    if (pic->encode_complete) {
+    if (base_pic->encode_complete) {
         // Already waited for this picture.
         return 0;
     }
 
     av_log(avctx, AV_LOG_DEBUG, "Sync to pic %"PRId64"/%"PRId64" "
-           "(input surface %#x).\n", pic->display_order,
-           pic->encode_order, pic->input_surface);
+           "(input surface %#x).\n", base_pic->display_order,
+           base_pic->encode_order, pic->input_surface);
 
 #if VA_CHECK_VERSION(1, 9, 0)
-    if (ctx->has_sync_buffer_func) {
+    if (base_ctx->async_encode) {
         vas = vaSyncBuffer(ctx->hwctx->display,
                            pic->output_buffer,
                            VA_TIMEOUT_INFINITE);
@@ -175,9 +175,9 @@ static int vaapi_encode_wait(AVCodecContext *avctx,
     }
 
     // Input is definitely finished with now.
-    av_frame_free(&pic->input_image);
+    av_frame_free(&base_pic->input_image);
 
-    pic->encode_complete = 1;
+    base_pic->encode_complete = 1;
     return 0;
 }
 
@@ -264,9 +264,11 @@ static int vaapi_encode_make_tile_slice(AVCodecContext *avctx,
 }
 
 static int vaapi_encode_issue(AVCodecContext *avctx,
-                              VAAPIEncodePicture *pic)
+                              HWBaseEncodePicture *base_pic)
 {
-    VAAPIEncodeContext *ctx = avctx->priv_data;
+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
+    VAAPIEncodeContext       *ctx = avctx->priv_data;
+    VAAPIEncodePicture *pic = (VAAPIEncodePicture*)base_pic;
     VAAPIEncodeSlice *slice;
     VAStatus vas;
     int err, i;
@@ -275,52 +277,52 @@ static int vaapi_encode_issue(AVCodecContext *avctx,
     av_unused AVFrameSideData *sd;
 
     av_log(avctx, AV_LOG_DEBUG, "Issuing encode for pic %"PRId64"/%"PRId64" "
-           "as type %s.\n", pic->display_order, pic->encode_order,
-           picture_type_name[pic->type]);
-    if (pic->nb_refs[0] == 0 && pic->nb_refs[1] == 0) {
+           "as type %s.\n", base_pic->display_order, base_pic->encode_order,
+           ff_hw_base_encode_get_pictype_name(base_pic->type));
+    if (base_pic->nb_refs[0] == 0 && base_pic->nb_refs[1] == 0) {
         av_log(avctx, AV_LOG_DEBUG, "No reference pictures.\n");
     } else {
         av_log(avctx, AV_LOG_DEBUG, "L0 refers to");
-        for (i = 0; i < pic->nb_refs[0]; i++) {
+        for (i = 0; i < base_pic->nb_refs[0]; i++) {
             av_log(avctx, AV_LOG_DEBUG, " %"PRId64"/%"PRId64,
-                   pic->refs[0][i]->display_order, pic->refs[0][i]->encode_order);
+                   base_pic->refs[0][i]->display_order, base_pic->refs[0][i]->encode_order);
         }
         av_log(avctx, AV_LOG_DEBUG, ".\n");
 
-        if (pic->nb_refs[1]) {
+        if (base_pic->nb_refs[1]) {
             av_log(avctx, AV_LOG_DEBUG, "L1 refers to");
-            for (i = 0; i < pic->nb_refs[1]; i++) {
+            for (i = 0; i < base_pic->nb_refs[1]; i++) {
                 av_log(avctx, AV_LOG_DEBUG, " %"PRId64"/%"PRId64,
-                       pic->refs[1][i]->display_order, pic->refs[1][i]->encode_order);
+                       base_pic->refs[1][i]->display_order, base_pic->refs[1][i]->encode_order);
             }
             av_log(avctx, AV_LOG_DEBUG, ".\n");
         }
     }
 
-    av_assert0(!pic->encode_issued);
-    for (i = 0; i < pic->nb_refs[0]; i++) {
-        av_assert0(pic->refs[0][i]);
-        av_assert0(pic->refs[0][i]->encode_issued);
+    av_assert0(!base_pic->encode_issued);
+    for (i = 0; i < base_pic->nb_refs[0]; i++) {
+        av_assert0(base_pic->refs[0][i]);
+        av_assert0(base_pic->refs[0][i]->encode_issued);
     }
-    for (i = 0; i < pic->nb_refs[1]; i++) {
-        av_assert0(pic->refs[1][i]);
-        av_assert0(pic->refs[1][i]->encode_issued);
+    for (i = 0; i < base_pic->nb_refs[1]; i++) {
+        av_assert0(base_pic->refs[1][i]);
+        av_assert0(base_pic->refs[1][i]->encode_issued);
     }
 
     av_log(avctx, AV_LOG_DEBUG, "Input surface is %#x.\n", pic->input_surface);
 
-    pic->recon_image = av_frame_alloc();
-    if (!pic->recon_image) {
+    base_pic->recon_image = av_frame_alloc();
+    if (!base_pic->recon_image) {
         err = AVERROR(ENOMEM);
         goto fail;
     }
 
-    err = av_hwframe_get_buffer(ctx->recon_frames_ref, pic->recon_image, 0);
+    err = av_hwframe_get_buffer(base_ctx->recon_frames_ref, base_pic->recon_image, 0);
     if (err < 0) {
         err = AVERROR(ENOMEM);
         goto fail;
     }
-    pic->recon_surface = (VASurfaceID)(uintptr_t)pic->recon_image->data[3];
+    pic->recon_surface = (VASurfaceID)(uintptr_t)base_pic->recon_image->data[3];
     av_log(avctx, AV_LOG_DEBUG, "Recon surface is %#x.\n", pic->recon_surface);
 
     pic->output_buffer_ref = ff_refstruct_pool_get(ctx->output_buffer_pool);
@@ -344,7 +346,7 @@ static int vaapi_encode_issue(AVCodecContext *avctx,
 
     pic->nb_param_buffers = 0;
 
-    if (pic->type == PICTURE_TYPE_IDR && ctx->codec->init_sequence_params) {
+    if (base_pic->type == PICTURE_TYPE_IDR && ctx->codec->init_sequence_params) {
         err = vaapi_encode_make_param_buffer(avctx, pic,
                                              VAEncSequenceParameterBufferType,
                                              ctx->codec_sequence_params,
@@ -353,7 +355,7 @@ static int vaapi_encode_issue(AVCodecContext *avctx,
             goto fail;
     }
 
-    if (pic->type == PICTURE_TYPE_IDR) {
+    if (base_pic->type == PICTURE_TYPE_IDR) {
         for (i = 0; i < ctx->nb_global_params; i++) {
             err = vaapi_encode_make_misc_param_buffer(avctx, pic,
                                                       ctx->global_params_type[i],
@@ -390,7 +392,7 @@ static int vaapi_encode_issue(AVCodecContext *avctx,
     }
 #endif
 
-    if (pic->type == PICTURE_TYPE_IDR) {
+    if (base_pic->type == PICTURE_TYPE_IDR) {
         if (ctx->va_packed_headers & VA_ENC_PACKED_HEADER_SEQUENCE &&
             ctx->codec->write_sequence_header) {
             bit_len = 8 * sizeof(data);
@@ -530,9 +532,9 @@ static int vaapi_encode_issue(AVCodecContext *avctx,
     }
 
 #if VA_CHECK_VERSION(1, 0, 0)
-    sd = av_frame_get_side_data(pic->input_image,
+    sd = av_frame_get_side_data(base_pic->input_image,
                                 AV_FRAME_DATA_REGIONS_OF_INTEREST);
-    if (sd && ctx->roi_allowed) {
+    if (sd && base_ctx->roi_allowed) {
         const AVRegionOfInterest *roi;
         uint32_t roi_size;
         VAEncMiscParameterBufferROI param_roi;
@@ -543,11 +545,11 @@ static int vaapi_encode_issue(AVCodecContext *avctx,
         av_assert0(roi_size && sd->size % roi_size == 0);
         nb_roi = sd->size / roi_size;
         if (nb_roi > ctx->roi_max_regions) {
-            if (!ctx->roi_warned) {
+            if (!base_ctx->roi_warned) {
                 av_log(avctx, AV_LOG_WARNING, "More ROIs set than "
                        "supported by driver (%d > %d).\n",
                        nb_roi, ctx->roi_max_regions);
-                ctx->roi_warned = 1;
+                base_ctx->roi_warned = 1;
             }
             nb_roi = ctx->roi_max_regions;
         }
@@ -640,7 +642,7 @@ static int vaapi_encode_issue(AVCodecContext *avctx,
         }
     }
 
-    pic->encode_issued = 1;
+    base_pic->encode_issued = 1;
 
     return 0;
 
@@ -658,17 +660,18 @@ fail_at_end:
     av_freep(&pic->param_buffers);
     av_freep(&pic->slices);
     av_freep(&pic->roi);
-    av_frame_free(&pic->recon_image);
+    av_frame_free(&base_pic->recon_image);
     ff_refstruct_unref(&pic->output_buffer_ref);
     pic->output_buffer = VA_INVALID_ID;
     return err;
 }
 
 static int vaapi_encode_set_output_property(AVCodecContext *avctx,
-                                            VAAPIEncodePicture *pic,
+                                            HWBaseEncodePicture *pic,
                                             AVPacket *pkt)
 {
-    VAAPIEncodeContext *ctx = avctx->priv_data;
+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
+    VAAPIEncodeContext       *ctx = avctx->priv_data;
 
     if (pic->type == PICTURE_TYPE_IDR)
         pkt->flags |= AV_PKT_FLAG_KEY;
@@ -689,16 +692,16 @@ static int vaapi_encode_set_output_property(AVCodecContext *avctx,
         return 0;
     }
 
-    if (ctx->output_delay == 0) {
+    if (base_ctx->output_delay == 0) {
         pkt->dts = pkt->pts;
-    } else if (pic->encode_order < ctx->decode_delay) {
-        if (ctx->ts_ring[pic->encode_order] < INT64_MIN + ctx->dts_pts_diff)
+    } else if (pic->encode_order < base_ctx->decode_delay) {
+        if (base_ctx->ts_ring[pic->encode_order] < INT64_MIN + base_ctx->dts_pts_diff)
             pkt->dts = INT64_MIN;
         else
-            pkt->dts = ctx->ts_ring[pic->encode_order] - ctx->dts_pts_diff;
+            pkt->dts = base_ctx->ts_ring[pic->encode_order] - base_ctx->dts_pts_diff;
     } else {
-        pkt->dts = ctx->ts_ring[(pic->encode_order - ctx->decode_delay) %
-                                (3 * ctx->output_delay + ctx->async_depth)];
+        pkt->dts = base_ctx->ts_ring[(pic->encode_order - base_ctx->decode_delay) %
+                                     (3 * base_ctx->output_delay + base_ctx->async_depth)];
     }
 
     return 0;
@@ -817,9 +820,11 @@ end:
 }
 
 static int vaapi_encode_output(AVCodecContext *avctx,
-                               VAAPIEncodePicture *pic, AVPacket *pkt)
+                               HWBaseEncodePicture *base_pic, AVPacket *pkt)
 {
-    VAAPIEncodeContext *ctx = avctx->priv_data;
+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
+    VAAPIEncodeContext       *ctx = avctx->priv_data;
+    VAAPIEncodePicture       *pic = (VAAPIEncodePicture*)base_pic;
     AVPacket *pkt_ptr = pkt;
     int err;
 
@@ -832,17 +837,17 @@ static int vaapi_encode_output(AVCodecContext *avctx,
         ctx->coded_buffer_ref = ff_refstruct_ref(pic->output_buffer_ref);
 
         if (pic->tail_size) {
-            if (ctx->tail_pkt->size) {
+            if (base_ctx->tail_pkt->size) {
                 err = AVERROR_BUG;
                 goto end;
             }
 
-            err = ff_get_encode_buffer(avctx, ctx->tail_pkt, pic->tail_size, 0);
+            err = ff_get_encode_buffer(avctx, base_ctx->tail_pkt, pic->tail_size, 0);
             if (err < 0)
                 goto end;
 
-            memcpy(ctx->tail_pkt->data, pic->tail_data, pic->tail_size);
-            pkt_ptr = ctx->tail_pkt;
+            memcpy(base_ctx->tail_pkt->data, pic->tail_data, pic->tail_size);
+            pkt_ptr = base_ctx->tail_pkt;
         }
     } else {
         err = vaapi_encode_get_coded_data(avctx, pic, pkt);
@@ -851,9 +856,9 @@ static int vaapi_encode_output(AVCodecContext *avctx,
     }
 
     av_log(avctx, AV_LOG_DEBUG, "Output read for pic %"PRId64"/%"PRId64".\n",
-           pic->display_order, pic->encode_order);
+           base_pic->display_order, base_pic->encode_order);
 
-    vaapi_encode_set_output_property(avctx, pic, pkt_ptr);
+    vaapi_encode_set_output_property(avctx, base_pic, pkt_ptr);
 
 end:
     ff_refstruct_unref(&pic->output_buffer_ref);
@@ -864,12 +869,13 @@ end:
 static int vaapi_encode_discard(AVCodecContext *avctx,
                                 VAAPIEncodePicture *pic)
 {
+    HWBaseEncodePicture *base_pic = (HWBaseEncodePicture*)pic;
     vaapi_encode_wait(avctx, pic);
 
     if (pic->output_buffer_ref) {
         av_log(avctx, AV_LOG_DEBUG, "Discard output for pic "
                "%"PRId64"/%"PRId64".\n",
-               pic->display_order, pic->encode_order);
+               base_pic->display_order, base_pic->encode_order);
 
         ff_refstruct_unref(&pic->output_buffer_ref);
         pic->output_buffer = VA_INVALID_ID;
@@ -878,8 +884,8 @@ static int vaapi_encode_discard(AVCodecContext *avctx,
     return 0;
 }
 
-static VAAPIEncodePicture *vaapi_encode_alloc(AVCodecContext *avctx,
-                                              const AVFrame *frame)
+static HWBaseEncodePicture *vaapi_encode_alloc(AVCodecContext *avctx,
+                                               const AVFrame *frame)
 {
     VAAPIEncodeContext *ctx = avctx->priv_data;
     VAAPIEncodePicture *pic;
@@ -889,8 +895,8 @@ static VAAPIEncodePicture *vaapi_encode_alloc(AVCodecContext *avctx,
         return NULL;
 
     if (ctx->codec->picture_priv_data_size > 0) {
-        pic->priv_data = av_mallocz(ctx->codec->picture_priv_data_size);
-        if (!pic->priv_data) {
+        pic->base.priv_data = av_mallocz(ctx->codec->picture_priv_data_size);
+        if (!pic->base.priv_data) {
             av_freep(&pic);
             return NULL;
         }
@@ -900,15 +906,16 @@ static VAAPIEncodePicture *vaapi_encode_alloc(AVCodecContext *avctx,
     pic->recon_surface = VA_INVALID_ID;
     pic->output_buffer = VA_INVALID_ID;
 
-    return pic;
+    return (HWBaseEncodePicture*)pic;
 }
 
 static int vaapi_encode_free(AVCodecContext *avctx,
-                             VAAPIEncodePicture *pic)
+                             HWBaseEncodePicture *base_pic)
 {
+    VAAPIEncodePicture *pic = (VAAPIEncodePicture*)base_pic;
     int i;
 
-    if (pic->encode_issued)
+    if (base_pic->encode_issued)
         vaapi_encode_discard(avctx, pic);
 
     if (pic->slices) {
@@ -916,17 +923,17 @@ static int vaapi_encode_free(AVCodecContext *avctx,
             av_freep(&pic->slices[i].codec_slice_params);
     }
 
-    av_frame_free(&pic->input_image);
-    av_frame_free(&pic->recon_image);
+    av_frame_free(&base_pic->input_image);
+    av_frame_free(&base_pic->recon_image);
 
-    av_buffer_unref(&pic->opaque_ref);
+    av_buffer_unref(&base_pic->opaque_ref);
 
     av_freep(&pic->param_buffers);
     av_freep(&pic->slices);
     // Output buffer should already be destroyed.
     av_assert0(pic->output_buffer == VA_INVALID_ID);
 
-    av_freep(&pic->priv_data);
+    av_freep(&base_pic->priv_data);
     av_freep(&pic->codec_picture_params);
     av_freep(&pic->roi);
 
@@ -936,8 +943,8 @@ static int vaapi_encode_free(AVCodecContext *avctx,
 }
 
 static void vaapi_encode_add_ref(AVCodecContext *avctx,
-                                 VAAPIEncodePicture *pic,
-                                 VAAPIEncodePicture *target,
+                                 HWBaseEncodePicture *pic,
+                                 HWBaseEncodePicture *target,
                                  int is_ref, int in_dpb, int prev)
 {
     int refs = 0;
@@ -970,7 +977,7 @@ static void vaapi_encode_add_ref(AVCodecContext *avctx,
 }
 
 static void vaapi_encode_remove_refs(AVCodecContext *avctx,
-                                     VAAPIEncodePicture *pic,
+                                     HWBaseEncodePicture *pic,
                                      int level)
 {
     int i;
@@ -1006,14 +1013,14 @@ static void vaapi_encode_remove_refs(AVCodecContext *avctx,
 }
 
 static void vaapi_encode_set_b_pictures(AVCodecContext *avctx,
-                                        VAAPIEncodePicture *start,
-                                        VAAPIEncodePicture *end,
-                                        VAAPIEncodePicture *prev,
+                                        HWBaseEncodePicture *start,
+                                        HWBaseEncodePicture *end,
+                                        HWBaseEncodePicture *prev,
                                         int current_depth,
-                                        VAAPIEncodePicture **last)
+                                        HWBaseEncodePicture **last)
 {
-    VAAPIEncodeContext *ctx = avctx->priv_data;
-    VAAPIEncodePicture *pic, *next, *ref;
+    HWBaseEncodeContext *ctx = avctx->priv_data;
+    HWBaseEncodePicture *pic, *next, *ref;
     int i, len;
 
     av_assert0(start && end && start != end && start->next != end);
@@ -1070,9 +1077,9 @@ static void vaapi_encode_set_b_pictures(AVCodecContext *avctx,
 }
 
 static void vaapi_encode_add_next_prev(AVCodecContext *avctx,
-                                       VAAPIEncodePicture *pic)
+                                       HWBaseEncodePicture *pic)
 {
-    VAAPIEncodeContext *ctx = avctx->priv_data;
+    HWBaseEncodeContext *ctx = avctx->priv_data;
     int i;
 
     if (!pic)
@@ -1103,10 +1110,10 @@ static void vaapi_encode_add_next_prev(AVCodecContext *avctx,
 }
 
 static int vaapi_encode_pick_next(AVCodecContext *avctx,
-                                  VAAPIEncodePicture **pic_out)
+                                  HWBaseEncodePicture **pic_out)
 {
-    VAAPIEncodeContext *ctx = avctx->priv_data;
-    VAAPIEncodePicture *pic = NULL, *prev = NULL, *next, *start;
+    HWBaseEncodeContext *ctx = avctx->priv_data;
+    HWBaseEncodePicture *pic = NULL, *prev = NULL, *next, *start;
     int i, b_counter, closed_gop_end;
 
     // If there are any B-frames already queued, the next one to encode
@@ -1256,8 +1263,8 @@ static int vaapi_encode_pick_next(AVCodecContext *avctx,
 
 static int vaapi_encode_clear_old(AVCodecContext *avctx)
 {
-    VAAPIEncodeContext *ctx = avctx->priv_data;
-    VAAPIEncodePicture *pic, *prev, *next;
+    HWBaseEncodeContext *ctx = avctx->priv_data;
+    HWBaseEncodePicture *pic, *prev, *next;
 
     av_assert0(ctx->pic_start);
 
@@ -1295,7 +1302,7 @@ static int vaapi_encode_clear_old(AVCodecContext *avctx)
 static int vaapi_encode_check_frame(AVCodecContext *avctx,
                                     const AVFrame *frame)
 {
-    VAAPIEncodeContext *ctx = avctx->priv_data;
+    HWBaseEncodeContext *ctx = avctx->priv_data;
 
     if ((frame->crop_top  || frame->crop_bottom ||
          frame->crop_left || frame->crop_right) && !ctx->crop_warned) {
@@ -1320,8 +1327,8 @@ static int vaapi_encode_check_frame(AVCodecContext *avctx,
 
 static int vaapi_encode_send_frame(AVCodecContext *avctx, AVFrame *frame)
 {
-    VAAPIEncodeContext *ctx = avctx->priv_data;
-    VAAPIEncodePicture *pic;
+    HWBaseEncodeContext *ctx = avctx->priv_data;
+    HWBaseEncodePicture *pic;
     int err;
 
     if (frame) {
@@ -1395,15 +1402,15 @@ fail:
 
 int ff_vaapi_encode_receive_packet(AVCodecContext *avctx, AVPacket *pkt)
 {
-    VAAPIEncodeContext *ctx = avctx->priv_data;
-    VAAPIEncodePicture *pic = NULL;
+    HWBaseEncodeContext *ctx = avctx->priv_data;
+    HWBaseEncodePicture *pic = NULL;
     AVFrame *frame = ctx->frame;
     int err;
 
 start:
     /** if no B frame before repeat P frame, sent repeat P frame out. */
     if (ctx->tail_pkt->size) {
-        for (VAAPIEncodePicture *tmp = ctx->pic_start; tmp; tmp = tmp->next) {
+        for (HWBaseEncodePicture *tmp = ctx->pic_start; tmp; tmp = tmp->next) {
             if (tmp->type == PICTURE_TYPE_B && tmp->pts < ctx->tail_pkt->pts)
                 break;
             else if (!tmp->next) {
@@ -1431,7 +1438,7 @@ start:
             return AVERROR(EAGAIN);
     }
 
-    if (ctx->has_sync_buffer_func) {
+    if (ctx->async_encode) {
         if (av_fifo_can_write(ctx->encode_fifo)) {
             err = vaapi_encode_pick_next(avctx, &pic);
             if (!err) {
@@ -1551,9 +1558,10 @@ static const VAEntrypoint vaapi_encode_entrypoints_low_power[] = {
 
 static av_cold int vaapi_encode_profile_entrypoint(AVCodecContext *avctx)
 {
-    VAAPIEncodeContext      *ctx = avctx->priv_data;
-    VAProfile    *va_profiles    = NULL;
-    VAEntrypoint *va_entrypoints = NULL;
+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
+    VAAPIEncodeContext       *ctx = avctx->priv_data;
+    VAProfile     *va_profiles    = NULL;
+    VAEntrypoint  *va_entrypoints = NULL;
     VAStatus vas;
     const VAEntrypoint *usable_entrypoints;
     const VAAPIEncodeProfile *profile;
@@ -1576,10 +1584,10 @@ static av_cold int vaapi_encode_profile_entrypoint(AVCodecContext *avctx)
         usable_entrypoints = vaapi_encode_entrypoints_normal;
     }
 
-    desc = av_pix_fmt_desc_get(ctx->input_frames->sw_format);
+    desc = av_pix_fmt_desc_get(base_ctx->input_frames->sw_format);
     if (!desc) {
         av_log(avctx, AV_LOG_ERROR, "Invalid input pixfmt (%d).\n",
-               ctx->input_frames->sw_format);
+               base_ctx->input_frames->sw_format);
         return AVERROR(EINVAL);
     }
     depth = desc->comp[0].depth;
@@ -1772,7 +1780,8 @@ static const VAAPIEncodeRCMode vaapi_encode_rc_modes[] = {
 
 static av_cold int vaapi_encode_init_rate_control(AVCodecContext *avctx)
 {
-    VAAPIEncodeContext *ctx = avctx->priv_data;
+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
+    VAAPIEncodeContext       *ctx = avctx->priv_data;
     uint32_t supported_va_rc_modes;
     const VAAPIEncodeRCMode *rc_mode;
     int64_t rc_bits_per_second;
@@ -1855,10 +1864,10 @@ static av_cold int vaapi_encode_init_rate_control(AVCodecContext *avctx)
         } \
     } while (0)
 
-    if (ctx->explicit_rc_mode)
-        TRY_RC_MODE(ctx->explicit_rc_mode, 1);
+    if (base_ctx->explicit_rc_mode)
+        TRY_RC_MODE(base_ctx->explicit_rc_mode, 1);
 
-    if (ctx->explicit_qp)
+    if (base_ctx->explicit_qp)
         TRY_RC_MODE(RC_MODE_CQP, 1);
 
     if (ctx->codec->flags & FLAG_CONSTANT_QUALITY_ONLY)
@@ -1953,8 +1962,8 @@ rc_mode_found:
     }
 
     if (rc_mode->quality) {
-        if (ctx->explicit_qp) {
-            rc_quality = ctx->explicit_qp;
+        if (base_ctx->explicit_qp) {
+            rc_quality = base_ctx->explicit_qp;
         } else if (avctx->global_quality > 0) {
             rc_quality = avctx->global_quality;
         } else {
@@ -2010,10 +2019,10 @@ rc_mode_found:
         return AVERROR(EINVAL);
     }
 
-    ctx->rc_mode     = rc_mode;
-    ctx->rc_quality  = rc_quality;
-    ctx->va_rc_mode  = rc_mode->va_mode;
-    ctx->va_bit_rate = rc_bits_per_second;
+    ctx->rc_mode          = rc_mode;
+    base_ctx->rc_quality  = rc_quality;
+    ctx->va_rc_mode       = rc_mode->va_mode;
+    ctx->va_bit_rate      = rc_bits_per_second;
 
     av_log(avctx, AV_LOG_VERBOSE, "RC mode: %s.\n", rc_mode->name);
     if (rc_attr.value == VA_ATTRIB_NOT_SUPPORTED) {
@@ -2159,7 +2168,8 @@ static av_cold int vaapi_encode_init_max_frame_size(AVCodecContext *avctx)
 
 static av_cold int vaapi_encode_init_gop_structure(AVCodecContext *avctx)
 {
-    VAAPIEncodeContext *ctx = avctx->priv_data;
+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
+    VAAPIEncodeContext       *ctx = avctx->priv_data;
     VAStatus vas;
     VAConfigAttrib attr = { VAConfigAttribEncMaxRefFrames };
     uint32_t ref_l0, ref_l1;
@@ -2182,7 +2192,7 @@ static av_cold int vaapi_encode_init_gop_structure(AVCodecContext *avctx)
         ref_l1 = attr.value >> 16 & 0xffff;
     }
 
-    ctx->p_to_gpb = 0;
+    base_ctx->p_to_gpb = 0;
     prediction_pre_only = 0;
 
 #if VA_CHECK_VERSION(1, 9, 0)
@@ -2218,7 +2228,7 @@ static av_cold int vaapi_encode_init_gop_structure(AVCodecContext *avctx)
 
             if (attr.value & VA_PREDICTION_DIRECTION_BI_NOT_EMPTY) {
                 if (ref_l0 > 0 && ref_l1 > 0) {
-                    ctx->p_to_gpb = 1;
+                    base_ctx->p_to_gpb = 1;
                     av_log(avctx, AV_LOG_VERBOSE, "Driver does not support P-frames, "
                            "replacing them with B-frames.\n");
                 }
@@ -2230,7 +2240,7 @@ static av_cold int vaapi_encode_init_gop_structure(AVCodecContext *avctx)
     if (ctx->codec->flags & FLAG_INTRA_ONLY ||
         avctx->gop_size <= 1) {
         av_log(avctx, AV_LOG_VERBOSE, "Using intra frames only.\n");
-        ctx->gop_size = 1;
+        base_ctx->gop_size = 1;
     } else if (ref_l0 < 1) {
         av_log(avctx, AV_LOG_ERROR, "Driver does not support any "
                "reference frames.\n");
@@ -2238,41 +2248,41 @@ static av_cold int vaapi_encode_init_gop_structure(AVCodecContext *avctx)
     } else if (!(ctx->codec->flags & FLAG_B_PICTURES) ||
                ref_l1 < 1 || avctx->max_b_frames < 1 ||
                prediction_pre_only) {
-        if (ctx->p_to_gpb)
+        if (base_ctx->p_to_gpb)
            av_log(avctx, AV_LOG_VERBOSE, "Using intra and B-frames "
                   "(supported references: %d / %d).\n",
                   ref_l0, ref_l1);
         else
             av_log(avctx, AV_LOG_VERBOSE, "Using intra and P-frames "
                    "(supported references: %d / %d).\n", ref_l0, ref_l1);
-        ctx->gop_size = avctx->gop_size;
-        ctx->p_per_i  = INT_MAX;
-        ctx->b_per_p  = 0;
+        base_ctx->gop_size = avctx->gop_size;
+        base_ctx->p_per_i  = INT_MAX;
+        base_ctx->b_per_p  = 0;
     } else {
-       if (ctx->p_to_gpb)
+       if (base_ctx->p_to_gpb)
            av_log(avctx, AV_LOG_VERBOSE, "Using intra and B-frames "
                   "(supported references: %d / %d).\n",
                   ref_l0, ref_l1);
        else
            av_log(avctx, AV_LOG_VERBOSE, "Using intra, P- and B-frames "
                   "(supported references: %d / %d).\n", ref_l0, ref_l1);
-        ctx->gop_size = avctx->gop_size;
-        ctx->p_per_i  = INT_MAX;
-        ctx->b_per_p  = avctx->max_b_frames;
+        base_ctx->gop_size = avctx->gop_size;
+        base_ctx->p_per_i  = INT_MAX;
+        base_ctx->b_per_p  = avctx->max_b_frames;
         if (ctx->codec->flags & FLAG_B_PICTURE_REFERENCES) {
-            ctx->max_b_depth = FFMIN(ctx->desired_b_depth,
-                                     av_log2(ctx->b_per_p) + 1);
+            base_ctx->max_b_depth = FFMIN(base_ctx->desired_b_depth,
+                                          av_log2(base_ctx->b_per_p) + 1);
         } else {
-            ctx->max_b_depth = 1;
+            base_ctx->max_b_depth = 1;
         }
     }
 
     if (ctx->codec->flags & FLAG_NON_IDR_KEY_PICTURES) {
-        ctx->closed_gop  = !!(avctx->flags & AV_CODEC_FLAG_CLOSED_GOP);
-        ctx->gop_per_idr = ctx->idr_interval + 1;
+        base_ctx->closed_gop  = !!(avctx->flags & AV_CODEC_FLAG_CLOSED_GOP);
+        base_ctx->gop_per_idr = base_ctx->idr_interval + 1;
     } else {
-        ctx->closed_gop  = 1;
-        ctx->gop_per_idr = 1;
+        base_ctx->closed_gop  = 1;
+        base_ctx->gop_per_idr = 1;
     }
 
     return 0;
@@ -2386,6 +2396,7 @@ static av_cold int vaapi_encode_init_tile_slice_structure(AVCodecContext *avctx,
 
 static av_cold int vaapi_encode_init_slice_structure(AVCodecContext *avctx)
 {
+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
     VAAPIEncodeContext *ctx = avctx->priv_data;
     VAConfigAttrib attr[3] = { { VAConfigAttribEncMaxSlices },
                                { VAConfigAttribEncSliceStructure },
@@ -2405,12 +2416,12 @@ static av_cold int vaapi_encode_init_slice_structure(AVCodecContext *avctx)
         return 0;
     }
 
-    av_assert0(ctx->slice_block_height > 0 && ctx->slice_block_width > 0);
+    av_assert0(base_ctx->slice_block_height > 0 && base_ctx->slice_block_width > 0);
 
-    ctx->slice_block_rows = (avctx->height + ctx->slice_block_height - 1) /
-                             ctx->slice_block_height;
-    ctx->slice_block_cols = (avctx->width  + ctx->slice_block_width  - 1) /
-                             ctx->slice_block_width;
+    ctx->slice_block_rows = (avctx->height + base_ctx->slice_block_height - 1) /
+                             base_ctx->slice_block_height;
+    ctx->slice_block_cols = (avctx->width  + base_ctx->slice_block_width  - 1) /
+                             base_ctx->slice_block_width;
 
     if (avctx->slices <= 1 && !ctx->tile_rows && !ctx->tile_cols) {
         ctx->nb_slices  = 1;
@@ -2585,7 +2596,8 @@ static av_cold int vaapi_encode_init_quality(AVCodecContext *avctx)
 static av_cold int vaapi_encode_init_roi(AVCodecContext *avctx)
 {
 #if VA_CHECK_VERSION(1, 0, 0)
-    VAAPIEncodeContext *ctx = avctx->priv_data;
+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
+    VAAPIEncodeContext       *ctx = avctx->priv_data;
     VAStatus vas;
     VAConfigAttrib attr = { VAConfigAttribEncROI };
 
@@ -2600,14 +2612,14 @@ static av_cold int vaapi_encode_init_roi(AVCodecContext *avctx)
     }
 
     if (attr.value == VA_ATTRIB_NOT_SUPPORTED) {
-        ctx->roi_allowed = 0;
+        base_ctx->roi_allowed = 0;
     } else {
         VAConfigAttribValEncROI roi = {
             .value = attr.value,
         };
 
         ctx->roi_max_regions = roi.bits.num_roi_regions;
-        ctx->roi_allowed = ctx->roi_max_regions > 0 &&
+        base_ctx->roi_allowed = ctx->roi_max_regions > 0 &&
             (ctx->va_rc_mode == VA_RC_CQP ||
              roi.bits.roi_rc_qp_delta_support);
     }
@@ -2631,7 +2643,8 @@ static void vaapi_encode_free_output_buffer(FFRefStructOpaque opaque,
 static int vaapi_encode_alloc_output_buffer(FFRefStructOpaque opaque, void *obj)
 {
     AVCodecContext   *avctx = opaque.nc;
-    VAAPIEncodeContext *ctx = avctx->priv_data;
+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
+    VAAPIEncodeContext       *ctx = avctx->priv_data;
     VABufferID *buffer_id = obj;
     VAStatus vas;
 
@@ -2641,7 +2654,7 @@ static int vaapi_encode_alloc_output_buffer(FFRefStructOpaque opaque, void *obj)
     // bound on that.
     vas = vaCreateBuffer(ctx->hwctx->display, ctx->va_context,
                          VAEncCodedBufferType,
-                         3 * ctx->surface_width * ctx->surface_height +
+                         3 * base_ctx->surface_width * base_ctx->surface_height +
                          (1 << 16), 1, 0, buffer_id);
     if (vas != VA_STATUS_SUCCESS) {
         av_log(avctx, AV_LOG_ERROR, "Failed to create bitstream "
@@ -2656,20 +2669,21 @@ static int vaapi_encode_alloc_output_buffer(FFRefStructOpaque opaque, void *obj)
 
 static av_cold int vaapi_encode_create_recon_frames(AVCodecContext *avctx)
 {
-    VAAPIEncodeContext *ctx = avctx->priv_data;
+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
+    VAAPIEncodeContext       *ctx = avctx->priv_data;
     AVVAAPIHWConfig *hwconfig = NULL;
     AVHWFramesConstraints *constraints = NULL;
     enum AVPixelFormat recon_format;
     int err, i;
 
-    hwconfig = av_hwdevice_hwconfig_alloc(ctx->device_ref);
+    hwconfig = av_hwdevice_hwconfig_alloc(base_ctx->device_ref);
     if (!hwconfig) {
         err = AVERROR(ENOMEM);
         goto fail;
     }
     hwconfig->config_id = ctx->va_config;
 
-    constraints = av_hwdevice_get_hwframe_constraints(ctx->device_ref,
+    constraints = av_hwdevice_get_hwframe_constraints(base_ctx->device_ref,
                                                       hwconfig);
     if (!constraints) {
         err = AVERROR(ENOMEM);
@@ -2682,9 +2696,9 @@ static av_cold int vaapi_encode_create_recon_frames(AVCodecContext *avctx)
     recon_format = AV_PIX_FMT_NONE;
     if (constraints->valid_sw_formats) {
         for (i = 0; constraints->valid_sw_formats[i] != AV_PIX_FMT_NONE; i++) {
-            if (ctx->input_frames->sw_format ==
+            if (base_ctx->input_frames->sw_format ==
                 constraints->valid_sw_formats[i]) {
-                recon_format = ctx->input_frames->sw_format;
+                recon_format = base_ctx->input_frames->sw_format;
                 break;
             }
         }
@@ -2695,18 +2709,18 @@ static av_cold int vaapi_encode_create_recon_frames(AVCodecContext *avctx)
         }
     } else {
         // No idea what to use; copy input format.
-        recon_format = ctx->input_frames->sw_format;
+        recon_format = base_ctx->input_frames->sw_format;
     }
     av_log(avctx, AV_LOG_DEBUG, "Using %s as format of "
            "reconstructed frames.\n", av_get_pix_fmt_name(recon_format));
 
-    if (ctx->surface_width  < constraints->min_width  ||
-        ctx->surface_height < constraints->min_height ||
-        ctx->surface_width  > constraints->max_width ||
-        ctx->surface_height > constraints->max_height) {
+    if (base_ctx->surface_width  < constraints->min_width  ||
+        base_ctx->surface_height < constraints->min_height ||
+        base_ctx->surface_width  > constraints->max_width ||
+        base_ctx->surface_height > constraints->max_height) {
         av_log(avctx, AV_LOG_ERROR, "Hardware does not support encoding at "
                "size %dx%d (constraints: width %d-%d height %d-%d).\n",
-               ctx->surface_width, ctx->surface_height,
+               base_ctx->surface_width, base_ctx->surface_height,
                constraints->min_width,  constraints->max_width,
                constraints->min_height, constraints->max_height);
         err = AVERROR(EINVAL);
@@ -2716,19 +2730,19 @@ static av_cold int vaapi_encode_create_recon_frames(AVCodecContext *avctx)
     av_freep(&hwconfig);
     av_hwframe_constraints_free(&constraints);
 
-    ctx->recon_frames_ref = av_hwframe_ctx_alloc(ctx->device_ref);
-    if (!ctx->recon_frames_ref) {
+    base_ctx->recon_frames_ref = av_hwframe_ctx_alloc(base_ctx->device_ref);
+    if (!base_ctx->recon_frames_ref) {
         err = AVERROR(ENOMEM);
         goto fail;
     }
-    ctx->recon_frames = (AVHWFramesContext*)ctx->recon_frames_ref->data;
+    base_ctx->recon_frames = (AVHWFramesContext*)base_ctx->recon_frames_ref->data;
 
-    ctx->recon_frames->format    = AV_PIX_FMT_VAAPI;
-    ctx->recon_frames->sw_format = recon_format;
-    ctx->recon_frames->width     = ctx->surface_width;
-    ctx->recon_frames->height    = ctx->surface_height;
+    base_ctx->recon_frames->format    = AV_PIX_FMT_VAAPI;
+    base_ctx->recon_frames->sw_format = recon_format;
+    base_ctx->recon_frames->width     = base_ctx->surface_width;
+    base_ctx->recon_frames->height    = base_ctx->surface_height;
 
-    err = av_hwframe_ctx_init(ctx->recon_frames_ref);
+    err = av_hwframe_ctx_init(base_ctx->recon_frames_ref);
     if (err < 0) {
         av_log(avctx, AV_LOG_ERROR, "Failed to initialise reconstructed "
                "frame context: %d.\n", err);
@@ -2744,7 +2758,8 @@ static av_cold int vaapi_encode_create_recon_frames(AVCodecContext *avctx)
 
 av_cold int ff_vaapi_encode_init(AVCodecContext *avctx)
 {
-    VAAPIEncodeContext *ctx = avctx->priv_data;
+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
+    VAAPIEncodeContext       *ctx = avctx->priv_data;
     AVVAAPIFramesContext *recon_hwctx = NULL;
     VAStatus vas;
     int err;
@@ -2754,8 +2769,8 @@ av_cold int ff_vaapi_encode_init(AVCodecContext *avctx)
 
     /* If you add something that can fail above this av_frame_alloc(),
      * modify ff_vaapi_encode_close() accordingly. */
-    ctx->frame = av_frame_alloc();
-    if (!ctx->frame) {
+    base_ctx->frame = av_frame_alloc();
+    if (!base_ctx->frame) {
         return AVERROR(ENOMEM);
     }
 
@@ -2765,23 +2780,23 @@ av_cold int ff_vaapi_encode_init(AVCodecContext *avctx)
         return AVERROR(EINVAL);
     }
 
-    ctx->input_frames_ref = av_buffer_ref(avctx->hw_frames_ctx);
-    if (!ctx->input_frames_ref) {
+    base_ctx->input_frames_ref = av_buffer_ref(avctx->hw_frames_ctx);
+    if (!base_ctx->input_frames_ref) {
         err = AVERROR(ENOMEM);
         goto fail;
     }
-    ctx->input_frames = (AVHWFramesContext*)ctx->input_frames_ref->data;
+    base_ctx->input_frames = (AVHWFramesContext*)base_ctx->input_frames_ref->data;
 
-    ctx->device_ref = av_buffer_ref(ctx->input_frames->device_ref);
-    if (!ctx->device_ref) {
+    base_ctx->device_ref = av_buffer_ref(base_ctx->input_frames->device_ref);
+    if (!base_ctx->device_ref) {
         err = AVERROR(ENOMEM);
         goto fail;
     }
-    ctx->device = (AVHWDeviceContext*)ctx->device_ref->data;
-    ctx->hwctx = ctx->device->hwctx;
+    base_ctx->device = (AVHWDeviceContext*)base_ctx->device_ref->data;
+    ctx->hwctx = base_ctx->device->hwctx;
 
-    ctx->tail_pkt = av_packet_alloc();
-    if (!ctx->tail_pkt) {
+    base_ctx->tail_pkt = av_packet_alloc();
+    if (!base_ctx->tail_pkt) {
         err = AVERROR(ENOMEM);
         goto fail;
     }
@@ -2796,11 +2811,11 @@ av_cold int ff_vaapi_encode_init(AVCodecContext *avctx)
             goto fail;
     } else {
         // Assume 16x16 blocks.
-        ctx->surface_width  = FFALIGN(avctx->width,  16);
-        ctx->surface_height = FFALIGN(avctx->height, 16);
+        base_ctx->surface_width  = FFALIGN(avctx->width,  16);
+        base_ctx->surface_height = FFALIGN(avctx->height, 16);
         if (ctx->codec->flags & FLAG_SLICE_CONTROL) {
-            ctx->slice_block_width  = 16;
-            ctx->slice_block_height = 16;
+            base_ctx->slice_block_width  = 16;
+            base_ctx->slice_block_height = 16;
         }
     }
 
@@ -2851,9 +2866,9 @@ av_cold int ff_vaapi_encode_init(AVCodecContext *avctx)
     if (err < 0)
         goto fail;
 
-    recon_hwctx = ctx->recon_frames->hwctx;
+    recon_hwctx = base_ctx->recon_frames->hwctx;
     vas = vaCreateContext(ctx->hwctx->display, ctx->va_config,
-                          ctx->surface_width, ctx->surface_height,
+                          base_ctx->surface_width, base_ctx->surface_height,
                           VA_PROGRESSIVE,
                           recon_hwctx->surface_ids,
                           recon_hwctx->nb_surfaces,
@@ -2880,8 +2895,8 @@ av_cold int ff_vaapi_encode_init(AVCodecContext *avctx)
             goto fail;
     }
 
-    ctx->output_delay = ctx->b_per_p;
-    ctx->decode_delay = ctx->max_b_depth;
+    base_ctx->output_delay = base_ctx->b_per_p;
+    base_ctx->decode_delay = base_ctx->max_b_depth;
 
     if (ctx->codec->sequence_params_size > 0) {
         ctx->codec_sequence_params =
@@ -2936,11 +2951,11 @@ av_cold int ff_vaapi_encode_init(AVCodecContext *avctx)
     // check vaSyncBuffer function
     vas = vaSyncBuffer(ctx->hwctx->display, VA_INVALID_ID, 0);
     if (vas != VA_STATUS_ERROR_UNIMPLEMENTED) {
-        ctx->has_sync_buffer_func = 1;
-        ctx->encode_fifo = av_fifo_alloc2(ctx->async_depth,
-                                          sizeof(VAAPIEncodePicture *),
-                                          0);
-        if (!ctx->encode_fifo)
+        base_ctx->async_encode = 1;
+        base_ctx->encode_fifo = av_fifo_alloc2(base_ctx->async_depth,
+                                               sizeof(VAAPIEncodePicture*),
+                                               0);
+        if (!base_ctx->encode_fifo)
             return AVERROR(ENOMEM);
     }
 #endif
@@ -2953,15 +2968,16 @@ fail:
 
 av_cold int ff_vaapi_encode_close(AVCodecContext *avctx)
 {
-    VAAPIEncodeContext *ctx = avctx->priv_data;
-    VAAPIEncodePicture *pic, *next;
+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
+    VAAPIEncodeContext       *ctx = avctx->priv_data;
+    HWBaseEncodePicture *pic, *next;
 
     /* We check ctx->frame to know whether ff_vaapi_encode_init()
      * has been called and va_config/va_context initialized. */
-    if (!ctx->frame)
+    if (!base_ctx->frame)
         return 0;
 
-    for (pic = ctx->pic_start; pic; pic = next) {
+    for (pic = base_ctx->pic_start; pic; pic = next) {
         next = pic->next;
         vaapi_encode_free(avctx, pic);
     }
@@ -2978,16 +2994,16 @@ av_cold int ff_vaapi_encode_close(AVCodecContext *avctx)
         ctx->va_config = VA_INVALID_ID;
     }
 
-    av_frame_free(&ctx->frame);
-    av_packet_free(&ctx->tail_pkt);
+    av_frame_free(&base_ctx->frame);
+    av_packet_free(&base_ctx->tail_pkt);
 
     av_freep(&ctx->codec_sequence_params);
     av_freep(&ctx->codec_picture_params);
-    av_fifo_freep2(&ctx->encode_fifo);
+    av_fifo_freep2(&base_ctx->encode_fifo);
 
-    av_buffer_unref(&ctx->recon_frames_ref);
-    av_buffer_unref(&ctx->input_frames_ref);
-    av_buffer_unref(&ctx->device_ref);
+    av_buffer_unref(&base_ctx->recon_frames_ref);
+    av_buffer_unref(&base_ctx->input_frames_ref);
+    av_buffer_unref(&base_ctx->device_ref);
 
     return 0;
 }
diff --git a/libavcodec/vaapi_encode.h b/libavcodec/vaapi_encode.h
index 6964055b93..8eee455881 100644
--- a/libavcodec/vaapi_encode.h
+++ b/libavcodec/vaapi_encode.h
@@ -29,38 +29,30 @@
 
 #include "libavutil/hwcontext.h"
 #include "libavutil/hwcontext_vaapi.h"
-#include "libavutil/fifo.h"
 
 #include "avcodec.h"
 #include "hwconfig.h"
+#include "hw_base_encode.h"
 
 struct VAAPIEncodeType;
 struct VAAPIEncodePicture;
 
+// Codec outputs packets without timestamp delay, which means the
+// output packet has the same PTS and DTS.
+#define FLAG_TIMESTAMP_NO_DELAY (1 << 6)
+
 enum {
     MAX_CONFIG_ATTRIBUTES  = 4,
     MAX_GLOBAL_PARAMS      = 4,
-    MAX_DPB_SIZE           = 16,
-    MAX_PICTURE_REFERENCES = 2,
-    MAX_REORDER_DELAY      = 16,
     MAX_PARAM_BUFFER_SIZE  = 1024,
     // A.4.1: table A.6 allows at most 22 tile rows for any level.
     MAX_TILE_ROWS          = 22,
     // A.4.1: table A.6 allows at most 20 tile columns for any level.
     MAX_TILE_COLS          = 20,
-    MAX_ASYNC_DEPTH        = 64,
-    MAX_REFERENCE_LIST_NUM = 2,
 };
 
 extern const AVCodecHWConfigInternal *const ff_vaapi_encode_hw_configs[];
 
-enum {
-    PICTURE_TYPE_IDR = 0,
-    PICTURE_TYPE_I   = 1,
-    PICTURE_TYPE_P   = 2,
-    PICTURE_TYPE_B   = 3,
-};
-
 typedef struct VAAPIEncodeSlice {
     int             index;
     int             row_start;
@@ -71,16 +63,7 @@ typedef struct VAAPIEncodeSlice {
 } VAAPIEncodeSlice;
 
 typedef struct VAAPIEncodePicture {
-    struct VAAPIEncodePicture *next;
-
-    int64_t         display_order;
-    int64_t         encode_order;
-    int64_t         pts;
-    int64_t         duration;
-    int             force_idr;
-
-    void           *opaque;
-    AVBufferRef    *opaque_ref;
+    HWBaseEncodePicture base;
 
 #if VA_CHECK_VERSION(1, 0, 0)
     // ROI regions.
@@ -89,15 +72,7 @@ typedef struct VAAPIEncodePicture {
     void           *roi;
 #endif
 
-    int             type;
-    int             b_depth;
-    int             encode_issued;
-    int             encode_complete;
-
-    AVFrame        *input_image;
     VASurfaceID     input_surface;
-
-    AVFrame        *recon_image;
     VASurfaceID     recon_surface;
 
     int          nb_param_buffers;
@@ -107,34 +82,10 @@ typedef struct VAAPIEncodePicture {
     VABufferID     *output_buffer_ref;
     VABufferID      output_buffer;
 
-    void           *priv_data;
     void           *codec_picture_params;
 
-    // Whether this picture is a reference picture.
-    int             is_reference;
-
-    // The contents of the DPB after this picture has been decoded.
-    // This will contain the picture itself if it is a reference picture,
-    // but not if it isn't.
-    int                     nb_dpb_pics;
-    struct VAAPIEncodePicture *dpb[MAX_DPB_SIZE];
-    // The reference pictures used in decoding this picture. If they are
-    // used by later pictures they will also appear in the DPB. ref[0][] for
-    // previous reference frames. ref[1][] for future reference frames.
-    int                     nb_refs[MAX_REFERENCE_LIST_NUM];
-    struct VAAPIEncodePicture *refs[MAX_REFERENCE_LIST_NUM][MAX_PICTURE_REFERENCES];
-    // The previous reference picture in encode order.  Must be in at least
-    // one of the reference list and DPB list.
-    struct VAAPIEncodePicture *prev;
-    // Reference count for other pictures referring to this one through
-    // the above pointers, directly from incomplete pictures and indirectly
-    // through completed pictures.
-    int             ref_count[2];
-    int             ref_removed[2];
-
     int          nb_slices;
     VAAPIEncodeSlice *slices;
-
     /**
      * indicate if current frame is an independent frame that the coded data
      * can be pushed to downstream directly. Coded of non-independent frame
@@ -193,57 +144,26 @@ typedef struct VAAPIEncodeRCMode {
 } VAAPIEncodeRCMode;
 
 typedef struct VAAPIEncodeContext {
-    const AVClass *class;
+    // Base.
+    HWBaseEncodeContext base;
 
     // Codec-specific hooks.
     const struct VAAPIEncodeType *codec;
 
-    // Global options.
-
     // Use low power encoding mode.
     int             low_power;
 
-    // Number of I frames between IDR frames.
-    int             idr_interval;
-
-    // Desired B frame reference depth.
-    int             desired_b_depth;
-
     // Max Frame Size
     int             max_frame_size;
 
-    // Explicitly set RC mode (otherwise attempt to pick from
-    // available modes).
-    int             explicit_rc_mode;
-
-    // Explicitly-set QP, for use with the "qp" options.
-    // (Forces CQP mode when set, overriding everything else.)
-    int             explicit_qp;
-
     // Desired packed headers.
     unsigned int    desired_packed_headers;
 
-    // The required size of surfaces.  This is probably the input
-    // size (AVCodecContext.width|height) aligned up to whatever
-    // block size is required by the codec.
-    int             surface_width;
-    int             surface_height;
-
-    // The block size for slice calculations.
-    int             slice_block_width;
-    int             slice_block_height;
-
-    // Everything above this point must be set before calling
-    // ff_vaapi_encode_init().
-
     // Chosen encoding profile details.
     const VAAPIEncodeProfile *profile;
 
     // Chosen rate control mode details.
     const VAAPIEncodeRCMode *rc_mode;
-    // RC quality level - meaning depends on codec and RC mode.
-    // In CQP mode this sets the fixed quantiser value.
-    int             rc_quality;
 
     // Encoding profile (VAProfile*).
     VAProfile       va_profile;
@@ -263,18 +183,8 @@ typedef struct VAAPIEncodeContext {
     VAConfigID      va_config;
     VAContextID     va_context;
 
-    AVBufferRef    *device_ref;
-    AVHWDeviceContext *device;
     AVVAAPIDeviceContext *hwctx;
 
-    // The hardware frame context containing the input frames.
-    AVBufferRef    *input_frames_ref;
-    AVHWFramesContext *input_frames;
-
-    // The hardware frame context containing the reconstructed frames.
-    AVBufferRef    *recon_frames_ref;
-    AVHWFramesContext *recon_frames;
-
     // Pool of (reusable) bitstream output buffers.
     struct FFRefStructPool *output_buffer_pool;
 
@@ -301,30 +211,6 @@ typedef struct VAAPIEncodeContext {
     // structure (VAEncPictureParameterBuffer*).
     void           *codec_picture_params;
 
-    // Current encoding window, in display (input) order.
-    VAAPIEncodePicture *pic_start, *pic_end;
-    // The next picture to use as the previous reference picture in
-    // encoding order. Order from small to large in encoding order.
-    VAAPIEncodePicture *next_prev[MAX_PICTURE_REFERENCES];
-    int                 nb_next_prev;
-
-    // Next input order index (display order).
-    int64_t         input_order;
-    // Number of frames that output is behind input.
-    int64_t         output_delay;
-    // Next encode order index.
-    int64_t         encode_order;
-    // Number of frames decode output will need to be delayed.
-    int64_t         decode_delay;
-    // Next output order index (in encode order).
-    int64_t         output_order;
-
-    // Timestamp handling.
-    int64_t         first_pts;
-    int64_t         dts_pts_diff;
-    int64_t         ts_ring[MAX_REORDER_DELAY * 3 +
-                            MAX_ASYNC_DEPTH];
-
     // Slice structure.
     int slice_block_rows;
     int slice_block_cols;
@@ -343,43 +229,12 @@ typedef struct VAAPIEncodeContext {
     // Location of the i-th tile row boundary.
     int row_bd[MAX_TILE_ROWS + 1];
 
-    // Frame type decision.
-    int gop_size;
-    int closed_gop;
-    int gop_per_idr;
-    int p_per_i;
-    int max_b_depth;
-    int b_per_p;
-    int force_idr;
-    int idr_counter;
-    int gop_counter;
-    int end_of_stream;
-    int p_to_gpb;
-
-    // Whether the driver supports ROI at all.
-    int             roi_allowed;
     // Maximum number of regions supported by the driver.
     int             roi_max_regions;
     // Quantisation range for offset calculations.  Set by codec-specific
     // code, as it may change based on parameters.
     int             roi_quant_range;
 
-    // The encoder does not support cropping information, so warn about
-    // it the first time we encounter any nonzero crop fields.
-    int             crop_warned;
-    // If the driver does not support ROI then warn the first time we
-    // encounter a frame with ROI side data.
-    int             roi_warned;
-
-    AVFrame         *frame;
-
-    // Whether the driver support vaSyncBuffer
-    int             has_sync_buffer_func;
-    // Store buffered pic
-    AVFifo          *encode_fifo;
-    // Max number of frame buffered in encoder.
-    int             async_depth;
-
     /** Head data for current output pkt, used only for AV1. */
     //void  *header_data;
     //size_t header_data_size;
@@ -389,30 +244,8 @@ typedef struct VAAPIEncodeContext {
      * This is a RefStruct reference.
      */
     VABufferID     *coded_buffer_ref;
-
-    /** Tail data of a pic, now only used for av1 repeat frame header. */
-    AVPacket        *tail_pkt;
 } VAAPIEncodeContext;
 
-enum {
-    // Codec supports controlling the subdivision of pictures into slices.
-    FLAG_SLICE_CONTROL         = 1 << 0,
-    // Codec only supports constant quality (no rate control).
-    FLAG_CONSTANT_QUALITY_ONLY = 1 << 1,
-    // Codec is intra-only.
-    FLAG_INTRA_ONLY            = 1 << 2,
-    // Codec supports B-pictures.
-    FLAG_B_PICTURES            = 1 << 3,
-    // Codec supports referencing B-pictures.
-    FLAG_B_PICTURE_REFERENCES  = 1 << 4,
-    // Codec supports non-IDR key pictures (that is, key pictures do
-    // not necessarily empty the DPB).
-    FLAG_NON_IDR_KEY_PICTURES  = 1 << 5,
-    // Codec output packet without timestamp delay, which means the
-    // output packet has same PTS and DTS.
-    FLAG_TIMESTAMP_NO_DELAY    = 1 << 6,
-};
-
 typedef struct VAAPIEncodeType {
     // List of supported profiles and corresponding VAAPI profiles.
     // (Must end with AV_PROFILE_UNKNOWN.)
@@ -505,19 +338,6 @@ int ff_vaapi_encode_close(AVCodecContext *avctx);
       "may not support all encoding features)", \
       OFFSET(common.low_power), AV_OPT_TYPE_BOOL, \
       { .i64 = 0 }, 0, 1, FLAGS }, \
-    { "idr_interval", \
-      "Distance (in I-frames) between IDR frames", \
-      OFFSET(common.idr_interval), AV_OPT_TYPE_INT, \
-      { .i64 = 0 }, 0, INT_MAX, FLAGS }, \
-    { "b_depth", \
-      "Maximum B-frame reference depth", \
-      OFFSET(common.desired_b_depth), AV_OPT_TYPE_INT, \
-      { .i64 = 1 }, 1, INT_MAX, FLAGS }, \
-    { "async_depth", "Maximum processing parallelism. " \
-      "Increase this to improve single channel performance. This option " \
-      "doesn't work if driver doesn't implement vaSyncBuffer function.", \
-      OFFSET(common.async_depth), AV_OPT_TYPE_INT, \
-      { .i64 = 2 }, 1, MAX_ASYNC_DEPTH, FLAGS }, \
     { "max_frame_size", \
       "Maximum frame size (in bytes)",\
       OFFSET(common.max_frame_size), AV_OPT_TYPE_INT, \
@@ -529,7 +349,7 @@ int ff_vaapi_encode_close(AVCodecContext *avctx);
 #define VAAPI_ENCODE_RC_OPTIONS \
     { "rc_mode",\
       "Set rate control mode", \
-      OFFSET(common.explicit_rc_mode), AV_OPT_TYPE_INT, \
+      OFFSET(common.base.explicit_rc_mode), AV_OPT_TYPE_INT, \
       { .i64 = RC_MODE_AUTO }, RC_MODE_AUTO, RC_MODE_MAX, FLAGS, .unit = "rc_mode" }, \
     { "auto", "Choose mode automatically based on other parameters", \
       0, AV_OPT_TYPE_CONST, { .i64 = RC_MODE_AUTO }, 0, 0, FLAGS, .unit = "rc_mode" }, \
diff --git a/libavcodec/vaapi_encode_av1.c b/libavcodec/vaapi_encode_av1.c
index a46b882ab9..512b4e3733 100644
--- a/libavcodec/vaapi_encode_av1.c
+++ b/libavcodec/vaapi_encode_av1.c
@@ -109,20 +109,21 @@ static void vaapi_encode_av1_trace_write_log(void *ctx,
 
 static av_cold int vaapi_encode_av1_get_encoder_caps(AVCodecContext *avctx)
 {
-    VAAPIEncodeContext *ctx = avctx->priv_data;
-    VAAPIEncodeAV1Context *priv = avctx->priv_data;
+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
+    VAAPIEncodeAV1Context   *priv = avctx->priv_data;
 
     // Surfaces must be aligned to superblock boundaries.
-    ctx->surface_width  = FFALIGN(avctx->width,  priv->use_128x128_superblock ? 128 : 64);
-    ctx->surface_height = FFALIGN(avctx->height, priv->use_128x128_superblock ? 128 : 64);
+    base_ctx->surface_width  = FFALIGN(avctx->width,  priv->use_128x128_superblock ? 128 : 64);
+    base_ctx->surface_height = FFALIGN(avctx->height, priv->use_128x128_superblock ? 128 : 64);
 
     return 0;
 }
 
 static av_cold int vaapi_encode_av1_configure(AVCodecContext *avctx)
 {
-    VAAPIEncodeContext     *ctx = avctx->priv_data;
-    VAAPIEncodeAV1Context *priv = avctx->priv_data;
+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
+    VAAPIEncodeContext       *ctx = avctx->priv_data;
+    VAAPIEncodeAV1Context   *priv = avctx->priv_data;
     int ret;
 
     ret = ff_cbs_init(&priv->cbc, AV_CODEC_ID_AV1, avctx);
@@ -134,7 +135,7 @@ static av_cold int vaapi_encode_av1_configure(AVCodecContext *avctx)
     priv->cbc->trace_write_callback = vaapi_encode_av1_trace_write_log;
 
     if (ctx->rc_mode->quality) {
-        priv->q_idx_p = av_clip(ctx->rc_quality, 0, AV1_MAX_QUANT);
+        priv->q_idx_p = av_clip(base_ctx->rc_quality, 0, AV1_MAX_QUANT);
         if (fabs(avctx->i_quant_factor) > 0.0)
             priv->q_idx_idr =
                 av_clip((fabs(avctx->i_quant_factor) * priv->q_idx_p  +
@@ -355,6 +356,7 @@ static int vaapi_encode_av1_write_sequence_header(AVCodecContext *avctx,
 
 static int vaapi_encode_av1_init_sequence_params(AVCodecContext *avctx)
 {
+    HWBaseEncodeContext         *base_ctx = avctx->priv_data;
     VAAPIEncodeContext               *ctx = avctx->priv_data;
     VAAPIEncodeAV1Context           *priv = avctx->priv_data;
     AV1RawOBU                     *sh_obu = &priv->sh;
@@ -367,7 +369,7 @@ static int vaapi_encode_av1_init_sequence_params(AVCodecContext *avctx)
     memset(sh_obu, 0, sizeof(*sh_obu));
     sh_obu->header.obu_type = AV1_OBU_SEQUENCE_HEADER;
 
-    desc = av_pix_fmt_desc_get(priv->common.input_frames->sw_format);
+    desc = av_pix_fmt_desc_get(base_ctx->input_frames->sw_format);
     av_assert0(desc);
 
     sh->seq_profile  = avctx->profile;
@@ -419,7 +421,7 @@ static int vaapi_encode_av1_init_sequence_params(AVCodecContext *avctx)
             framerate = 0;
 
         level = ff_av1_guess_level(avctx->bit_rate, priv->tier,
-                                   ctx->surface_width, ctx->surface_height,
+                                   base_ctx->surface_width, base_ctx->surface_height,
                                    priv->tile_rows * priv->tile_cols,
                                    priv->tile_cols, framerate);
         if (level) {
@@ -436,8 +438,8 @@ static int vaapi_encode_av1_init_sequence_params(AVCodecContext *avctx)
     vseq->seq_level_idx           = sh->seq_level_idx[0];
     vseq->seq_tier                = sh->seq_tier[0];
     vseq->order_hint_bits_minus_1 = sh->order_hint_bits_minus_1;
-    vseq->intra_period            = ctx->gop_size;
-    vseq->ip_period               = ctx->b_per_p + 1;
+    vseq->intra_period            = base_ctx->gop_size;
+    vseq->ip_period               = base_ctx->b_per_p + 1;
 
     vseq->seq_fields.bits.enable_order_hint = sh->enable_order_hint;
 
@@ -464,12 +466,13 @@ static int vaapi_encode_av1_init_picture_params(AVCodecContext *avctx,
 {
     VAAPIEncodeContext              *ctx = avctx->priv_data;
     VAAPIEncodeAV1Context          *priv = avctx->priv_data;
-    VAAPIEncodeAV1Picture          *hpic = pic->priv_data;
+    HWBaseEncodePicture        *base_pic = (HWBaseEncodePicture *)pic;
+    VAAPIEncodeAV1Picture          *hpic = base_pic->priv_data;
     AV1RawOBU                    *fh_obu = &priv->fh;
     AV1RawFrameHeader                *fh = &fh_obu->obu.frame.header;
     VAEncPictureParameterBufferAV1 *vpic = pic->codec_picture_params;
     CodedBitstreamFragment          *obu = &priv->current_obu;
-    VAAPIEncodePicture    *ref;
+    HWBaseEncodePicture    *ref;
     VAAPIEncodeAV1Picture *href;
     int slot, i;
     int ret;
@@ -478,24 +481,24 @@ static int vaapi_encode_av1_init_picture_params(AVCodecContext *avctx,
 
     memset(fh_obu, 0, sizeof(*fh_obu));
     pic->nb_slices = priv->tile_groups;
-    pic->non_independent_frame = pic->encode_order < pic->display_order;
+    pic->non_independent_frame = base_pic->encode_order < base_pic->display_order;
     fh_obu->header.obu_type = AV1_OBU_FRAME_HEADER;
     fh_obu->header.obu_has_size_field = 1;
 
-    switch (pic->type) {
+    switch (base_pic->type) {
     case PICTURE_TYPE_IDR:
-        av_assert0(pic->nb_refs[0] == 0 || pic->nb_refs[1]);
+        av_assert0(base_pic->nb_refs[0] == 0 || base_pic->nb_refs[1]);
         fh->frame_type = AV1_FRAME_KEY;
         fh->refresh_frame_flags = 0xFF;
         fh->base_q_idx = priv->q_idx_idr;
         hpic->slot = 0;
-        hpic->last_idr_frame = pic->display_order;
+        hpic->last_idr_frame = base_pic->display_order;
         break;
     case PICTURE_TYPE_P:
-        av_assert0(pic->nb_refs[0]);
+        av_assert0(base_pic->nb_refs[0]);
         fh->frame_type = AV1_FRAME_INTER;
         fh->base_q_idx = priv->q_idx_p;
-        ref = pic->refs[0][pic->nb_refs[0] - 1];
+        ref = base_pic->refs[0][base_pic->nb_refs[0] - 1];
         href = ref->priv_data;
         hpic->slot = !href->slot;
         hpic->last_idr_frame = href->last_idr_frame;
@@ -510,8 +513,8 @@ static int vaapi_encode_av1_init_picture_params(AVCodecContext *avctx,
         vpic->ref_frame_ctrl_l0.fields.search_idx0 = AV1_REF_FRAME_LAST;
 
         /** set the 2nd nearest frame in L0 as Golden frame. */
-        if (pic->nb_refs[0] > 1) {
-            ref = pic->refs[0][pic->nb_refs[0] - 2];
+        if (base_pic->nb_refs[0] > 1) {
+            ref = base_pic->refs[0][base_pic->nb_refs[0] - 2];
             href = ref->priv_data;
             fh->ref_frame_idx[3] = href->slot;
             fh->ref_order_hint[href->slot] = ref->display_order - href->last_idr_frame;
@@ -519,7 +522,7 @@ static int vaapi_encode_av1_init_picture_params(AVCodecContext *avctx,
         }
         break;
     case PICTURE_TYPE_B:
-        av_assert0(pic->nb_refs[0] && pic->nb_refs[1]);
+        av_assert0(base_pic->nb_refs[0] && base_pic->nb_refs[1]);
         fh->frame_type = AV1_FRAME_INTER;
         fh->base_q_idx = priv->q_idx_b;
         fh->refresh_frame_flags = 0x0;
@@ -532,7 +535,7 @@ static int vaapi_encode_av1_init_picture_params(AVCodecContext *avctx,
         vpic->ref_frame_ctrl_l0.fields.search_idx0 = AV1_REF_FRAME_LAST;
         vpic->ref_frame_ctrl_l1.fields.search_idx0 = AV1_REF_FRAME_BWDREF;
 
-        ref                            = pic->refs[0][pic->nb_refs[0] - 1];
+        ref                            = base_pic->refs[0][base_pic->nb_refs[0] - 1];
         href                           = ref->priv_data;
         hpic->last_idr_frame           = href->last_idr_frame;
         fh->primary_ref_frame          = href->slot;
@@ -541,7 +544,7 @@ static int vaapi_encode_av1_init_picture_params(AVCodecContext *avctx,
             fh->ref_frame_idx[i] = href->slot;
         }
 
-        ref                            = pic->refs[1][pic->nb_refs[1] - 1];
+        ref                            = base_pic->refs[1][base_pic->nb_refs[1] - 1];
         href                           = ref->priv_data;
         fh->ref_order_hint[href->slot] = ref->display_order - href->last_idr_frame;
         for (i = AV1_REF_FRAME_GOLDEN; i < AV1_REFS_PER_FRAME; i++) {
@@ -552,13 +555,13 @@ static int vaapi_encode_av1_init_picture_params(AVCodecContext *avctx,
         av_assert0(0 && "invalid picture type");
     }
 
-    fh->show_frame                = pic->display_order <= pic->encode_order;
+    fh->show_frame                = base_pic->display_order <= base_pic->encode_order;
     fh->showable_frame            = fh->frame_type != AV1_FRAME_KEY;
     fh->frame_width_minus_1       = avctx->width - 1;
     fh->frame_height_minus_1      = avctx->height - 1;
     fh->render_width_minus_1      = fh->frame_width_minus_1;
     fh->render_height_minus_1     = fh->frame_height_minus_1;
-    fh->order_hint                = pic->display_order - hpic->last_idr_frame;
+    fh->order_hint                = base_pic->display_order - hpic->last_idr_frame;
     fh->tile_cols                 = priv->tile_cols;
     fh->tile_rows                 = priv->tile_rows;
     fh->tile_cols_log2            = priv->tile_cols_log2;
@@ -624,13 +627,13 @@ static int vaapi_encode_av1_init_picture_params(AVCodecContext *avctx,
         vpic->reference_frames[i] = VA_INVALID_SURFACE;
 
     for (i = 0; i < MAX_REFERENCE_LIST_NUM; i++) {
-        for (int j = 0; j < pic->nb_refs[i]; j++) {
-            VAAPIEncodePicture *ref_pic = pic->refs[i][j];
+        for (int j = 0; j < base_pic->nb_refs[i]; j++) {
+            HWBaseEncodePicture *ref_pic = base_pic->refs[i][j];
 
             slot = ((VAAPIEncodeAV1Picture*)ref_pic->priv_data)->slot;
             av_assert0(vpic->reference_frames[slot] == VA_INVALID_SURFACE);
 
-            vpic->reference_frames[slot] = ref_pic->recon_surface;
+            vpic->reference_frames[slot] = ((VAAPIEncodePicture *)ref_pic)->recon_surface;
         }
     }
 
@@ -651,7 +654,7 @@ static int vaapi_encode_av1_init_picture_params(AVCodecContext *avctx,
         vpic->bit_offset_cdef_params         = priv->cdef_start_offset;
         vpic->size_in_bits_cdef_params       = priv->cdef_param_size;
         vpic->size_in_bits_frame_hdr_obu     = priv->fh_data_len;
-        vpic->byte_offset_frame_hdr_obu_size = (((pic->type == PICTURE_TYPE_IDR) ?
+        vpic->byte_offset_frame_hdr_obu_size = (((base_pic->type == PICTURE_TYPE_IDR) ?
                                                priv->sh_data_len / 8 : 0) +
                                                (fh_obu->header.obu_extension_flag ?
                                                2 : 1));
@@ -693,14 +696,15 @@ static int vaapi_encode_av1_write_picture_header(AVCodecContext *avctx,
     CodedBitstreamAV1Context *cbctx = priv->cbc->priv_data;
     AV1RawOBU               *fh_obu = &priv->fh;
     AV1RawFrameHeader       *rep_fh = &fh_obu->obu.frame_header;
+    HWBaseEncodePicture *base_pic   = (HWBaseEncodePicture *)pic;
     VAAPIEncodeAV1Picture *href;
     int ret = 0;
 
     pic->tail_size = 0;
     /** Pack repeat frame header. */
-    if (pic->display_order > pic->encode_order) {
+    if (base_pic->display_order > base_pic->encode_order) {
         memset(fh_obu, 0, sizeof(*fh_obu));
-        href = pic->refs[0][pic->nb_refs[0] - 1]->priv_data;
+        href = base_pic->refs[0][base_pic->nb_refs[0] - 1]->priv_data;
         fh_obu->header.obu_type = AV1_OBU_FRAME_HEADER;
         fh_obu->header.obu_has_size_field = 1;
 
@@ -862,6 +866,7 @@ static av_cold int vaapi_encode_av1_close(AVCodecContext *avctx)
 #define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM)
 
 static const AVOption vaapi_encode_av1_options[] = {
+    HW_BASE_ENCODE_COMMON_OPTIONS,
     VAAPI_ENCODE_COMMON_OPTIONS,
     VAAPI_ENCODE_RC_OPTIONS,
     { "profile", "Set profile (seq_profile)",
diff --git a/libavcodec/vaapi_encode_h264.c b/libavcodec/vaapi_encode_h264.c
index 37df9103ae..aa011ba307 100644
--- a/libavcodec/vaapi_encode_h264.c
+++ b/libavcodec/vaapi_encode_h264.c
@@ -234,7 +234,7 @@ static int vaapi_encode_h264_write_extra_header(AVCodecContext *avctx,
                 goto fail;
         }
         if (priv->sei_needed & SEI_TIMING) {
-            if (pic->type == PICTURE_TYPE_IDR) {
+            if (pic->base.type == PICTURE_TYPE_IDR) {
                 err = ff_cbs_sei_add_message(priv->cbc, au, 1,
                                              SEI_TYPE_BUFFERING_PERIOD,
                                              &priv->sei_buffering_period, NULL);
@@ -296,6 +296,7 @@ fail:
 
 static int vaapi_encode_h264_init_sequence_params(AVCodecContext *avctx)
 {
+    HWBaseEncodeContext          *base_ctx = avctx->priv_data;
     VAAPIEncodeContext                *ctx = avctx->priv_data;
     VAAPIEncodeH264Context           *priv = avctx->priv_data;
     H264RawSPS                        *sps = &priv->raw_sps;
@@ -308,7 +309,7 @@ static int vaapi_encode_h264_init_sequence_params(AVCodecContext *avctx)
     memset(sps, 0, sizeof(*sps));
     memset(pps, 0, sizeof(*pps));
 
-    desc = av_pix_fmt_desc_get(priv->common.input_frames->sw_format);
+    desc = av_pix_fmt_desc_get(base_ctx->input_frames->sw_format);
     av_assert0(desc);
     if (desc->nb_components == 1 || desc->log2_chroma_w != 1 || desc->log2_chroma_h != 1) {
         av_log(avctx, AV_LOG_ERROR, "Chroma format of input pixel format "
@@ -327,18 +328,18 @@ static int vaapi_encode_h264_init_sequence_params(AVCodecContext *avctx)
         sps->constraint_set1_flag = 1;
 
     if (avctx->profile == AV_PROFILE_H264_HIGH || avctx->profile == AV_PROFILE_H264_HIGH_10)
-        sps->constraint_set3_flag = ctx->gop_size == 1;
+        sps->constraint_set3_flag = base_ctx->gop_size == 1;
 
     if (avctx->profile == AV_PROFILE_H264_MAIN ||
         avctx->profile == AV_PROFILE_H264_HIGH || avctx->profile == AV_PROFILE_H264_HIGH_10) {
         sps->constraint_set4_flag = 1;
-        sps->constraint_set5_flag = ctx->b_per_p == 0;
+        sps->constraint_set5_flag = base_ctx->b_per_p == 0;
     }
 
-    if (ctx->gop_size == 1)
+    if (base_ctx->gop_size == 1)
         priv->dpb_frames = 0;
     else
-        priv->dpb_frames = 1 + ctx->max_b_depth;
+        priv->dpb_frames = 1 + base_ctx->max_b_depth;
 
     if (avctx->level != AV_LEVEL_UNKNOWN) {
         sps->level_idc = avctx->level;
@@ -375,7 +376,7 @@ static int vaapi_encode_h264_init_sequence_params(AVCodecContext *avctx)
     sps->bit_depth_chroma_minus8 = bit_depth - 8;
 
     sps->log2_max_frame_num_minus4 = 4;
-    sps->pic_order_cnt_type        = ctx->max_b_depth ? 0 : 2;
+    sps->pic_order_cnt_type        = base_ctx->max_b_depth ? 0 : 2;
     if (sps->pic_order_cnt_type == 0) {
         sps->log2_max_pic_order_cnt_lsb_minus4 = 4;
     }
@@ -502,8 +503,8 @@ static int vaapi_encode_h264_init_sequence_params(AVCodecContext *avctx)
     sps->vui.motion_vectors_over_pic_boundaries_flag = 1;
     sps->vui.log2_max_mv_length_horizontal = 15;
     sps->vui.log2_max_mv_length_vertical   = 15;
-    sps->vui.max_num_reorder_frames        = ctx->max_b_depth;
-    sps->vui.max_dec_frame_buffering       = ctx->max_b_depth + 1;
+    sps->vui.max_num_reorder_frames        = base_ctx->max_b_depth;
+    sps->vui.max_dec_frame_buffering       = base_ctx->max_b_depth + 1;
 
     pps->nal_unit_header.nal_ref_idc = 3;
     pps->nal_unit_header.nal_unit_type = H264_NAL_PPS;
@@ -536,9 +537,9 @@ static int vaapi_encode_h264_init_sequence_params(AVCodecContext *avctx)
     *vseq = (VAEncSequenceParameterBufferH264) {
         .seq_parameter_set_id = sps->seq_parameter_set_id,
         .level_idc        = sps->level_idc,
-        .intra_period     = ctx->gop_size,
-        .intra_idr_period = ctx->gop_size,
-        .ip_period        = ctx->b_per_p + 1,
+        .intra_period     = base_ctx->gop_size,
+        .intra_idr_period = base_ctx->gop_size,
+        .ip_period        = base_ctx->b_per_p + 1,
 
         .bits_per_second       = ctx->va_bit_rate,
         .max_num_ref_frames    = sps->max_num_ref_frames,
@@ -622,19 +623,20 @@ static int vaapi_encode_h264_init_sequence_params(AVCodecContext *avctx)
 static int vaapi_encode_h264_init_picture_params(AVCodecContext *avctx,
                                                  VAAPIEncodePicture *pic)
 {
-    VAAPIEncodeContext               *ctx = avctx->priv_data;
+    HWBaseEncodeContext         *base_ctx = avctx->priv_data;
     VAAPIEncodeH264Context          *priv = avctx->priv_data;
-    VAAPIEncodeH264Picture          *hpic = pic->priv_data;
-    VAAPIEncodePicture              *prev = pic->prev;
+    HWBaseEncodePicture         *base_pic = (HWBaseEncodePicture *)pic;
+    VAAPIEncodeH264Picture          *hpic = base_pic->priv_data;
+    HWBaseEncodePicture             *prev = base_pic->prev;
     VAAPIEncodeH264Picture         *hprev = prev ? prev->priv_data : NULL;
     VAEncPictureParameterBufferH264 *vpic = pic->codec_picture_params;
     int i, j = 0;
 
-    if (pic->type == PICTURE_TYPE_IDR) {
-        av_assert0(pic->display_order == pic->encode_order);
+    if (base_pic->type == PICTURE_TYPE_IDR) {
+        av_assert0(base_pic->display_order == base_pic->encode_order);
 
         hpic->frame_num      = 0;
-        hpic->last_idr_frame = pic->display_order;
+        hpic->last_idr_frame = base_pic->display_order;
         hpic->idr_pic_id     = hprev ? hprev->idr_pic_id + 1 : 0;
 
         hpic->primary_pic_type = 0;
@@ -647,10 +649,10 @@ static int vaapi_encode_h264_init_picture_params(AVCodecContext *avctx,
         hpic->last_idr_frame = hprev->last_idr_frame;
         hpic->idr_pic_id     = hprev->idr_pic_id;
 
-        if (pic->type == PICTURE_TYPE_I) {
+        if (base_pic->type == PICTURE_TYPE_I) {
             hpic->slice_type       = 7;
             hpic->primary_pic_type = 0;
-        } else if (pic->type == PICTURE_TYPE_P) {
+        } else if (base_pic->type == PICTURE_TYPE_P) {
             hpic->slice_type       = 5;
             hpic->primary_pic_type = 1;
         } else {
@@ -658,13 +660,13 @@ static int vaapi_encode_h264_init_picture_params(AVCodecContext *avctx,
             hpic->primary_pic_type = 2;
         }
     }
-    hpic->pic_order_cnt = pic->display_order - hpic->last_idr_frame;
+    hpic->pic_order_cnt = base_pic->display_order - hpic->last_idr_frame;
     if (priv->raw_sps.pic_order_cnt_type == 2) {
         hpic->pic_order_cnt *= 2;
     }
 
-    hpic->dpb_delay     = pic->display_order - pic->encode_order + ctx->max_b_depth;
-    hpic->cpb_delay     = pic->encode_order - hpic->last_idr_frame;
+    hpic->dpb_delay     = base_pic->display_order - base_pic->encode_order + base_ctx->max_b_depth;
+    hpic->cpb_delay     = base_pic->encode_order - hpic->last_idr_frame;
 
     if (priv->aud) {
         priv->aud_needed = 1;
@@ -680,7 +682,7 @@ static int vaapi_encode_h264_init_picture_params(AVCodecContext *avctx,
 
     priv->sei_needed = 0;
 
-    if (priv->sei & SEI_IDENTIFIER && pic->encode_order == 0)
+    if (priv->sei & SEI_IDENTIFIER && base_pic->encode_order == 0)
         priv->sei_needed |= SEI_IDENTIFIER;
 #if !CONFIG_VAAPI_1
     if (ctx->va_rc_mode == VA_RC_CBR)
@@ -696,11 +698,11 @@ static int vaapi_encode_h264_init_picture_params(AVCodecContext *avctx,
         priv->sei_needed |= SEI_TIMING;
     }
 
-    if (priv->sei & SEI_RECOVERY_POINT && pic->type == PICTURE_TYPE_I) {
+    if (priv->sei & SEI_RECOVERY_POINT && base_pic->type == PICTURE_TYPE_I) {
         priv->sei_recovery_point = (H264RawSEIRecoveryPoint) {
             .recovery_frame_cnt = 0,
             .exact_match_flag   = 1,
-            .broken_link_flag   = ctx->b_per_p > 0,
+            .broken_link_flag   = base_ctx->b_per_p > 0,
         };
 
         priv->sei_needed |= SEI_RECOVERY_POINT;
@@ -710,7 +712,7 @@ static int vaapi_encode_h264_init_picture_params(AVCodecContext *avctx,
         int err;
         size_t sei_a53cc_len;
         av_freep(&priv->sei_a53cc_data);
-        err = ff_alloc_a53_sei(pic->input_image, 0, &priv->sei_a53cc_data, &sei_a53cc_len);
+        err = ff_alloc_a53_sei(base_pic->input_image, 0, &priv->sei_a53cc_data, &sei_a53cc_len);
         if (err < 0)
             return err;
         if (priv->sei_a53cc_data != NULL) {
@@ -730,15 +732,15 @@ static int vaapi_encode_h264_init_picture_params(AVCodecContext *avctx,
         .BottomFieldOrderCnt = hpic->pic_order_cnt,
     };
     for (int k = 0; k < MAX_REFERENCE_LIST_NUM; k++) {
-        for (i = 0; i < pic->nb_refs[k]; i++) {
-            VAAPIEncodePicture      *ref = pic->refs[k][i];
+        for (i = 0; i < base_pic->nb_refs[k]; i++) {
+            HWBaseEncodePicture    *ref = base_pic->refs[k][i];
             VAAPIEncodeH264Picture *href;
 
-            av_assert0(ref && ref->encode_order < pic->encode_order);
+            av_assert0(ref && ref->encode_order < base_pic->encode_order);
             href = ref->priv_data;
 
             vpic->ReferenceFrames[j++] = (VAPictureH264) {
-                .picture_id          = ref->recon_surface,
+                .picture_id          = ((VAAPIEncodePicture *)ref)->recon_surface,
                 .frame_idx           = href->frame_num,
                 .flags               = VA_PICTURE_H264_SHORT_TERM_REFERENCE,
                 .TopFieldOrderCnt    = href->pic_order_cnt,
@@ -758,8 +760,8 @@ static int vaapi_encode_h264_init_picture_params(AVCodecContext *avctx,
 
     vpic->frame_num = hpic->frame_num;
 
-    vpic->pic_fields.bits.idr_pic_flag       = (pic->type == PICTURE_TYPE_IDR);
-    vpic->pic_fields.bits.reference_pic_flag = (pic->type != PICTURE_TYPE_B);
+    vpic->pic_fields.bits.idr_pic_flag       = (base_pic->type == PICTURE_TYPE_IDR);
+    vpic->pic_fields.bits.reference_pic_flag = (base_pic->type != PICTURE_TYPE_B);
 
     return 0;
 }
@@ -770,31 +772,32 @@ static void vaapi_encode_h264_default_ref_pic_list(AVCodecContext *avctx,
                                                    VAAPIEncodePicture **rpl1,
                                                    int *rpl_size)
 {
-    VAAPIEncodePicture *prev;
+    HWBaseEncodePicture *base_pic = (HWBaseEncodePicture *)pic;
+    HWBaseEncodePicture *prev;
     VAAPIEncodeH264Picture *hp, *hn, *hc;
     int i, j, n = 0;
 
-    prev = pic->prev;
+    prev = base_pic->prev;
     av_assert0(prev);
-    hp = pic->priv_data;
+    hp = base_pic->priv_data;
 
-    for (i = 0; i < pic->prev->nb_dpb_pics; i++) {
+    for (i = 0; i < base_pic->prev->nb_dpb_pics; i++) {
         hn = prev->dpb[i]->priv_data;
         av_assert0(hn->frame_num < hp->frame_num);
 
-        if (pic->type == PICTURE_TYPE_P) {
+        if (base_pic->type == PICTURE_TYPE_P) {
             for (j = n; j > 0; j--) {
-                hc = rpl0[j - 1]->priv_data;
+                hc = rpl0[j - 1]->base.priv_data;
                 av_assert0(hc->frame_num != hn->frame_num);
                 if (hc->frame_num > hn->frame_num)
                     break;
                 rpl0[j] = rpl0[j - 1];
             }
-            rpl0[j] = prev->dpb[i];
+            rpl0[j] = (VAAPIEncodePicture *)prev->dpb[i];
 
-        } else if (pic->type == PICTURE_TYPE_B) {
+        } else if (base_pic->type == PICTURE_TYPE_B) {
             for (j = n; j > 0; j--) {
-                hc = rpl0[j - 1]->priv_data;
+                hc = rpl0[j - 1]->base.priv_data;
                 av_assert0(hc->pic_order_cnt != hp->pic_order_cnt);
                 if (hc->pic_order_cnt < hp->pic_order_cnt) {
                     if (hn->pic_order_cnt > hp->pic_order_cnt ||
@@ -806,10 +809,10 @@ static void vaapi_encode_h264_default_ref_pic_list(AVCodecContext *avctx,
                 }
                 rpl0[j] = rpl0[j - 1];
             }
-            rpl0[j] = prev->dpb[i];
+            rpl0[j] = (VAAPIEncodePicture *)prev->dpb[i];
 
             for (j = n; j > 0; j--) {
-                hc = rpl1[j - 1]->priv_data;
+                hc = rpl1[j - 1]->base.priv_data;
                 av_assert0(hc->pic_order_cnt != hp->pic_order_cnt);
                 if (hc->pic_order_cnt > hp->pic_order_cnt) {
                     if (hn->pic_order_cnt < hp->pic_order_cnt ||
@@ -821,13 +824,13 @@ static void vaapi_encode_h264_default_ref_pic_list(AVCodecContext *avctx,
                 }
                 rpl1[j] = rpl1[j - 1];
             }
-            rpl1[j] = prev->dpb[i];
+            rpl1[j] = (VAAPIEncodePicture *)prev->dpb[i];
         }
 
         ++n;
     }
 
-    if (pic->type == PICTURE_TYPE_B) {
+    if (base_pic->type == PICTURE_TYPE_B) {
         for (i = 0; i < n; i++) {
             if (rpl0[i] != rpl1[i])
                 break;
@@ -836,22 +839,22 @@ static void vaapi_encode_h264_default_ref_pic_list(AVCodecContext *avctx,
             FFSWAP(VAAPIEncodePicture*, rpl1[0], rpl1[1]);
     }
 
-    if (pic->type == PICTURE_TYPE_P ||
-        pic->type == PICTURE_TYPE_B) {
+    if (base_pic->type == PICTURE_TYPE_P ||
+        base_pic->type == PICTURE_TYPE_B) {
         av_log(avctx, AV_LOG_DEBUG, "Default RefPicList0 for fn=%d/poc=%d:",
                hp->frame_num, hp->pic_order_cnt);
         for (i = 0; i < n; i++) {
-            hn = rpl0[i]->priv_data;
+            hn = rpl0[i]->base.priv_data;
             av_log(avctx, AV_LOG_DEBUG, "  fn=%d/poc=%d",
                    hn->frame_num, hn->pic_order_cnt);
         }
         av_log(avctx, AV_LOG_DEBUG, "\n");
     }
-    if (pic->type == PICTURE_TYPE_B) {
+    if (base_pic->type == PICTURE_TYPE_B) {
         av_log(avctx, AV_LOG_DEBUG, "Default RefPicList1 for fn=%d/poc=%d:",
                hp->frame_num, hp->pic_order_cnt);
         for (i = 0; i < n; i++) {
-            hn = rpl1[i]->priv_data;
+            hn = rpl1[i]->base.priv_data;
             av_log(avctx, AV_LOG_DEBUG, "  fn=%d/poc=%d",
                    hn->frame_num, hn->pic_order_cnt);
         }
@@ -866,8 +869,9 @@ static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx,
                                                VAAPIEncodeSlice *slice)
 {
     VAAPIEncodeH264Context          *priv = avctx->priv_data;
-    VAAPIEncodeH264Picture          *hpic = pic->priv_data;
-    VAAPIEncodePicture              *prev = pic->prev;
+    HWBaseEncodePicture          *base_pic = (HWBaseEncodePicture *)pic;
+    VAAPIEncodeH264Picture          *hpic = base_pic->priv_data;
+    HWBaseEncodePicture             *prev = base_pic->prev;
     H264RawSPS                       *sps = &priv->raw_sps;
     H264RawPPS                       *pps = &priv->raw_pps;
     H264RawSliceHeader                *sh = &priv->raw_slice.header;
@@ -875,12 +879,12 @@ static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx,
     VAEncSliceParameterBufferH264 *vslice = slice->codec_slice_params;
     int i, j;
 
-    if (pic->type == PICTURE_TYPE_IDR) {
+    if (base_pic->type == PICTURE_TYPE_IDR) {
         sh->nal_unit_header.nal_unit_type = H264_NAL_IDR_SLICE;
         sh->nal_unit_header.nal_ref_idc   = 3;
     } else {
         sh->nal_unit_header.nal_unit_type = H264_NAL_SLICE;
-        sh->nal_unit_header.nal_ref_idc   = pic->is_reference;
+        sh->nal_unit_header.nal_ref_idc   = base_pic->is_reference;
     }
 
     sh->first_mb_in_slice = slice->block_start;
@@ -896,25 +900,25 @@ static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx,
 
     sh->direct_spatial_mv_pred_flag = 1;
 
-    if (pic->type == PICTURE_TYPE_B)
+    if (base_pic->type == PICTURE_TYPE_B)
         sh->slice_qp_delta = priv->fixed_qp_b - (pps->pic_init_qp_minus26 + 26);
-    else if (pic->type == PICTURE_TYPE_P)
+    else if (base_pic->type == PICTURE_TYPE_P)
         sh->slice_qp_delta = priv->fixed_qp_p - (pps->pic_init_qp_minus26 + 26);
     else
         sh->slice_qp_delta = priv->fixed_qp_idr - (pps->pic_init_qp_minus26 + 26);
 
-    if (pic->is_reference && pic->type != PICTURE_TYPE_IDR) {
-        VAAPIEncodePicture *discard_list[MAX_DPB_SIZE];
+    if (base_pic->is_reference && base_pic->type != PICTURE_TYPE_IDR) {
+        HWBaseEncodePicture *discard_list[MAX_DPB_SIZE];
         int discard = 0, keep = 0;
 
         // Discard everything which is in the DPB of the previous frame but
         // not in the DPB of this one.
         for (i = 0; i < prev->nb_dpb_pics; i++) {
-            for (j = 0; j < pic->nb_dpb_pics; j++) {
-                if (prev->dpb[i] == pic->dpb[j])
+            for (j = 0; j < base_pic->nb_dpb_pics; j++) {
+                if (prev->dpb[i] == base_pic->dpb[j])
                     break;
             }
-            if (j == pic->nb_dpb_pics) {
+            if (j == base_pic->nb_dpb_pics) {
                 discard_list[discard] = prev->dpb[i];
                 ++discard;
             } else {
@@ -940,7 +944,7 @@ static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx,
 
     // If the intended references are not the first entries of RefPicListN
     // by default, use ref-pic-list-modification to move them there.
-    if (pic->type == PICTURE_TYPE_P || pic->type == PICTURE_TYPE_B) {
+    if (base_pic->type == PICTURE_TYPE_P || base_pic->type == PICTURE_TYPE_B) {
         VAAPIEncodePicture *def_l0[MAX_DPB_SIZE], *def_l1[MAX_DPB_SIZE];
         VAAPIEncodeH264Picture *href;
         int n;
@@ -948,19 +952,19 @@ static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx,
         vaapi_encode_h264_default_ref_pic_list(avctx, pic,
                                                def_l0, def_l1, &n);
 
-        if (pic->type == PICTURE_TYPE_P) {
+        if (base_pic->type == PICTURE_TYPE_P) {
             int need_rplm = 0;
-            for (i = 0; i < pic->nb_refs[0]; i++) {
-                av_assert0(pic->refs[0][i]);
-                if (pic->refs[0][i] != def_l0[i])
+            for (i = 0; i < base_pic->nb_refs[0]; i++) {
+                av_assert0(base_pic->refs[0][i]);
+                if (base_pic->refs[0][i] != (HWBaseEncodePicture *)def_l0[i])
                     need_rplm = 1;
             }
 
             sh->ref_pic_list_modification_flag_l0 = need_rplm;
             if (need_rplm) {
                 int pic_num = hpic->frame_num;
-                for (i = 0; i < pic->nb_refs[0]; i++) {
-                    href = pic->refs[0][i]->priv_data;
+                for (i = 0; i < base_pic->nb_refs[0]; i++) {
+                    href = base_pic->refs[0][i]->priv_data;
                     av_assert0(href->frame_num != pic_num);
                     if (href->frame_num < pic_num) {
                         sh->rplm_l0[i].modification_of_pic_nums_idc = 0;
@@ -979,20 +983,20 @@ static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx,
         } else {
             int need_rplm_l0 = 0, need_rplm_l1 = 0;
             int n0 = 0, n1 = 0;
-            for (i = 0; i < pic->nb_refs[0]; i++) {
-                av_assert0(pic->refs[0][i]);
-                href = pic->refs[0][i]->priv_data;
+            for (i = 0; i < base_pic->nb_refs[0]; i++) {
+                av_assert0(base_pic->refs[0][i]);
+                href = base_pic->refs[0][i]->priv_data;
                 av_assert0(href->pic_order_cnt < hpic->pic_order_cnt);
-                if (pic->refs[0][i] != def_l0[n0])
+                if (base_pic->refs[0][i] != (HWBaseEncodePicture *)def_l0[n0])
                     need_rplm_l0 = 1;
                 ++n0;
             }
 
-            for (i = 0; i < pic->nb_refs[1]; i++) {
-                av_assert0(pic->refs[1][i]);
-                href = pic->refs[1][i]->priv_data;
+            for (i = 0; i < base_pic->nb_refs[1]; i++) {
+                av_assert0(base_pic->refs[1][i]);
+                href = base_pic->refs[1][i]->priv_data;
                 av_assert0(href->pic_order_cnt > hpic->pic_order_cnt);
-                if (pic->refs[1][i] != def_l1[n1])
+                if (base_pic->refs[1][i] != (HWBaseEncodePicture *)def_l1[n1])
                     need_rplm_l1 = 1;
                 ++n1;
             }
@@ -1000,8 +1004,8 @@ static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx,
             sh->ref_pic_list_modification_flag_l0 = need_rplm_l0;
             if (need_rplm_l0) {
                 int pic_num = hpic->frame_num;
-                for (i = j = 0; i < pic->nb_refs[0]; i++) {
-                    href = pic->refs[0][i]->priv_data;
+                for (i = j = 0; i < base_pic->nb_refs[0]; i++) {
+                    href = base_pic->refs[0][i]->priv_data;
                     av_assert0(href->frame_num != pic_num);
                     if (href->frame_num < pic_num) {
                         sh->rplm_l0[j].modification_of_pic_nums_idc = 0;
@@ -1022,8 +1026,8 @@ static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx,
             sh->ref_pic_list_modification_flag_l1 = need_rplm_l1;
             if (need_rplm_l1) {
                 int pic_num = hpic->frame_num;
-                for (i = j = 0; i < pic->nb_refs[1]; i++) {
-                    href = pic->refs[1][i]->priv_data;
+                for (i = j = 0; i < base_pic->nb_refs[1]; i++) {
+                    href = base_pic->refs[1][i]->priv_data;
                     av_assert0(href->frame_num != pic_num);
                     if (href->frame_num < pic_num) {
                         sh->rplm_l1[j].modification_of_pic_nums_idc = 0;
@@ -1063,15 +1067,15 @@ static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx,
         vslice->RefPicList1[i].flags      = VA_PICTURE_H264_INVALID;
     }
 
-    if (pic->nb_refs[0]) {
+    if (base_pic->nb_refs[0]) {
         // Backward reference for P- or B-frame.
-        av_assert0(pic->type == PICTURE_TYPE_P ||
-                   pic->type == PICTURE_TYPE_B);
+        av_assert0(base_pic->type == PICTURE_TYPE_P ||
+                   base_pic->type == PICTURE_TYPE_B);
         vslice->RefPicList0[0] = vpic->ReferenceFrames[0];
     }
-    if (pic->nb_refs[1]) {
+    if (base_pic->nb_refs[1]) {
         // Forward reference for B-frame.
-        av_assert0(pic->type == PICTURE_TYPE_B);
+        av_assert0(base_pic->type == PICTURE_TYPE_B);
         vslice->RefPicList1[0] = vpic->ReferenceFrames[1];
     }
 
@@ -1082,8 +1086,9 @@ static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx,
 
 static av_cold int vaapi_encode_h264_configure(AVCodecContext *avctx)
 {
-    VAAPIEncodeContext      *ctx = avctx->priv_data;
-    VAAPIEncodeH264Context *priv = avctx->priv_data;
+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
+    VAAPIEncodeContext       *ctx = avctx->priv_data;
+    VAAPIEncodeH264Context  *priv = avctx->priv_data;
     int err;
 
     err = ff_cbs_init(&priv->cbc, AV_CODEC_ID_H264, avctx);
@@ -1094,7 +1099,7 @@ static av_cold int vaapi_encode_h264_configure(AVCodecContext *avctx)
     priv->mb_height = FFALIGN(avctx->height, 16) / 16;
 
     if (ctx->va_rc_mode == VA_RC_CQP) {
-        priv->fixed_qp_p = av_clip(ctx->rc_quality, 1, 51);
+        priv->fixed_qp_p = av_clip(base_ctx->rc_quality, 1, 51);
         if (avctx->i_quant_factor > 0.0)
             priv->fixed_qp_idr =
                 av_clip((avctx->i_quant_factor * priv->fixed_qp_p +
@@ -1202,8 +1207,9 @@ static const VAAPIEncodeType vaapi_encode_type_h264 = {
 
 static av_cold int vaapi_encode_h264_init(AVCodecContext *avctx)
 {
-    VAAPIEncodeContext      *ctx = avctx->priv_data;
-    VAAPIEncodeH264Context *priv = avctx->priv_data;
+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
+    VAAPIEncodeContext       *ctx = avctx->priv_data;
+    VAAPIEncodeH264Context  *priv = avctx->priv_data;
 
     ctx->codec = &vaapi_encode_type_h264;
 
@@ -1251,13 +1257,13 @@ static av_cold int vaapi_encode_h264_init(AVCodecContext *avctx)
         VA_ENC_PACKED_HEADER_SLICE    | // Slice headers.
         VA_ENC_PACKED_HEADER_MISC;      // SEI.
 
-    ctx->surface_width  = FFALIGN(avctx->width,  16);
-    ctx->surface_height = FFALIGN(avctx->height, 16);
+    base_ctx->surface_width  = FFALIGN(avctx->width,  16);
+    base_ctx->surface_height = FFALIGN(avctx->height, 16);
 
-    ctx->slice_block_height = ctx->slice_block_width = 16;
+    base_ctx->slice_block_height = base_ctx->slice_block_width = 16;
 
     if (priv->qp > 0)
-        ctx->explicit_qp = priv->qp;
+        base_ctx->explicit_qp = priv->qp;
 
     return ff_vaapi_encode_init(avctx);
 }
@@ -1277,6 +1283,7 @@ static av_cold int vaapi_encode_h264_close(AVCodecContext *avctx)
 #define OFFSET(x) offsetof(VAAPIEncodeH264Context, x)
 #define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM)
 static const AVOption vaapi_encode_h264_options[] = {
+    HW_BASE_ENCODE_COMMON_OPTIONS,
     VAAPI_ENCODE_COMMON_OPTIONS,
     VAAPI_ENCODE_RC_OPTIONS,
 
diff --git a/libavcodec/vaapi_encode_h265.c b/libavcodec/vaapi_encode_h265.c
index c4aabbf5ed..4f5d8fc76f 100644
--- a/libavcodec/vaapi_encode_h265.c
+++ b/libavcodec/vaapi_encode_h265.c
@@ -260,6 +260,7 @@ fail:
 
 static int vaapi_encode_h265_init_sequence_params(AVCodecContext *avctx)
 {
+    HWBaseEncodeContext          *base_ctx = avctx->priv_data;
     VAAPIEncodeContext                *ctx = avctx->priv_data;
     VAAPIEncodeH265Context           *priv = avctx->priv_data;
     H265RawVPS                        *vps = &priv->raw_vps;
@@ -278,7 +279,7 @@ static int vaapi_encode_h265_init_sequence_params(AVCodecContext *avctx)
     memset(pps, 0, sizeof(*pps));
 
 
-    desc = av_pix_fmt_desc_get(priv->common.input_frames->sw_format);
+    desc = av_pix_fmt_desc_get(base_ctx->input_frames->sw_format);
     av_assert0(desc);
     if (desc->nb_components == 1) {
         chroma_format = 0;
@@ -341,7 +342,7 @@ static int vaapi_encode_h265_init_sequence_params(AVCodecContext *avctx)
     ptl->general_max_420chroma_constraint_flag  = chroma_format <= 1;
     ptl->general_max_monochrome_constraint_flag = chroma_format == 0;
 
-    ptl->general_intra_constraint_flag = ctx->gop_size == 1;
+    ptl->general_intra_constraint_flag = base_ctx->gop_size == 1;
     ptl->general_one_picture_only_constraint_flag = 0;
 
     ptl->general_lower_bit_rate_constraint_flag = 1;
@@ -352,9 +353,9 @@ static int vaapi_encode_h265_init_sequence_params(AVCodecContext *avctx)
         const H265LevelDescriptor *level;
 
         level = ff_h265_guess_level(ptl, avctx->bit_rate,
-                                    ctx->surface_width, ctx->surface_height,
+                                    base_ctx->surface_width, base_ctx->surface_height,
                                     ctx->nb_slices, ctx->tile_rows, ctx->tile_cols,
-                                    (ctx->b_per_p > 0) + 1);
+                                    (base_ctx->b_per_p > 0) + 1);
         if (level) {
             av_log(avctx, AV_LOG_VERBOSE, "Using level %s.\n", level->name);
             ptl->general_level_idc = level->level_idc;
@@ -368,8 +369,8 @@ static int vaapi_encode_h265_init_sequence_params(AVCodecContext *avctx)
     }
 
     vps->vps_sub_layer_ordering_info_present_flag = 0;
-    vps->vps_max_dec_pic_buffering_minus1[0]      = ctx->max_b_depth + 1;
-    vps->vps_max_num_reorder_pics[0]              = ctx->max_b_depth;
+    vps->vps_max_dec_pic_buffering_minus1[0]      = base_ctx->max_b_depth + 1;
+    vps->vps_max_num_reorder_pics[0]              = base_ctx->max_b_depth;
     vps->vps_max_latency_increase_plus1[0]        = 0;
 
     vps->vps_max_layer_id             = 0;
@@ -410,18 +411,18 @@ static int vaapi_encode_h265_init_sequence_params(AVCodecContext *avctx)
     sps->chroma_format_idc          = chroma_format;
     sps->separate_colour_plane_flag = 0;
 
-    sps->pic_width_in_luma_samples  = ctx->surface_width;
-    sps->pic_height_in_luma_samples = ctx->surface_height;
+    sps->pic_width_in_luma_samples  = base_ctx->surface_width;
+    sps->pic_height_in_luma_samples = base_ctx->surface_height;
 
-    if (avctx->width  != ctx->surface_width ||
-        avctx->height != ctx->surface_height) {
+    if (avctx->width  != base_ctx->surface_width ||
+        avctx->height != base_ctx->surface_height) {
         sps->conformance_window_flag = 1;
         sps->conf_win_left_offset   = 0;
         sps->conf_win_right_offset  =
-            (ctx->surface_width - avctx->width) >> desc->log2_chroma_w;
+            (base_ctx->surface_width - avctx->width) >> desc->log2_chroma_w;
         sps->conf_win_top_offset    = 0;
         sps->conf_win_bottom_offset =
-            (ctx->surface_height - avctx->height) >> desc->log2_chroma_h;
+            (base_ctx->surface_height - avctx->height) >> desc->log2_chroma_h;
     } else {
         sps->conformance_window_flag = 0;
     }
@@ -643,9 +644,9 @@ static int vaapi_encode_h265_init_sequence_params(AVCodecContext *avctx)
         .general_level_idc   = vps->profile_tier_level.general_level_idc,
         .general_tier_flag   = vps->profile_tier_level.general_tier_flag,
 
-        .intra_period     = ctx->gop_size,
-        .intra_idr_period = ctx->gop_size,
-        .ip_period        = ctx->b_per_p + 1,
+        .intra_period     = base_ctx->gop_size,
+        .intra_idr_period = base_ctx->gop_size,
+        .ip_period        = base_ctx->b_per_p + 1,
         .bits_per_second  = ctx->va_bit_rate,
 
         .pic_width_in_luma_samples  = sps->pic_width_in_luma_samples,
@@ -758,18 +759,19 @@ static int vaapi_encode_h265_init_sequence_params(AVCodecContext *avctx)
 static int vaapi_encode_h265_init_picture_params(AVCodecContext *avctx,
                                                  VAAPIEncodePicture *pic)
 {
-    VAAPIEncodeContext               *ctx = avctx->priv_data;
+    HWBaseEncodeContext         *base_ctx = avctx->priv_data;
     VAAPIEncodeH265Context          *priv = avctx->priv_data;
-    VAAPIEncodeH265Picture          *hpic = pic->priv_data;
-    VAAPIEncodePicture              *prev = pic->prev;
+    HWBaseEncodePicture         *base_pic = (HWBaseEncodePicture *)pic;
+    VAAPIEncodeH265Picture          *hpic = base_pic->priv_data;
+    HWBaseEncodePicture             *prev = base_pic->prev;
     VAAPIEncodeH265Picture         *hprev = prev ? prev->priv_data : NULL;
     VAEncPictureParameterBufferHEVC *vpic = pic->codec_picture_params;
     int i, j = 0;
 
-    if (pic->type == PICTURE_TYPE_IDR) {
-        av_assert0(pic->display_order == pic->encode_order);
+    if (base_pic->type == PICTURE_TYPE_IDR) {
+        av_assert0(base_pic->display_order == base_pic->encode_order);
 
-        hpic->last_idr_frame = pic->display_order;
+        hpic->last_idr_frame = base_pic->display_order;
 
         hpic->slice_nal_unit = HEVC_NAL_IDR_W_RADL;
         hpic->slice_type     = HEVC_SLICE_I;
@@ -778,23 +780,23 @@ static int vaapi_encode_h265_init_picture_params(AVCodecContext *avctx,
         av_assert0(prev);
         hpic->last_idr_frame = hprev->last_idr_frame;
 
-        if (pic->type == PICTURE_TYPE_I) {
+        if (base_pic->type == PICTURE_TYPE_I) {
             hpic->slice_nal_unit = HEVC_NAL_CRA_NUT;
             hpic->slice_type     = HEVC_SLICE_I;
             hpic->pic_type       = 0;
-        } else if (pic->type == PICTURE_TYPE_P) {
-            av_assert0(pic->refs[0]);
+        } else if (base_pic->type == PICTURE_TYPE_P) {
+            av_assert0(base_pic->refs[0]);
             hpic->slice_nal_unit = HEVC_NAL_TRAIL_R;
             hpic->slice_type     = HEVC_SLICE_P;
             hpic->pic_type       = 1;
         } else {
-            VAAPIEncodePicture *irap_ref;
-            av_assert0(pic->refs[0][0] && pic->refs[1][0]);
-            for (irap_ref = pic; irap_ref; irap_ref = irap_ref->refs[1][0]) {
+            HWBaseEncodePicture *irap_ref;
+            av_assert0(base_pic->refs[0][0] && base_pic->refs[1][0]);
+            for (irap_ref = base_pic; irap_ref; irap_ref = irap_ref->refs[1][0]) {
                 if (irap_ref->type == PICTURE_TYPE_I)
                     break;
             }
-            if (pic->b_depth == ctx->max_b_depth) {
+            if (base_pic->b_depth == base_ctx->max_b_depth) {
                 hpic->slice_nal_unit = irap_ref ? HEVC_NAL_RASL_N
                                                 : HEVC_NAL_TRAIL_N;
             } else {
@@ -805,7 +807,7 @@ static int vaapi_encode_h265_init_picture_params(AVCodecContext *avctx,
             hpic->pic_type   = 2;
         }
     }
-    hpic->pic_order_cnt = pic->display_order - hpic->last_idr_frame;
+    hpic->pic_order_cnt = base_pic->display_order - hpic->last_idr_frame;
 
     if (priv->aud) {
         priv->aud_needed = 1;
@@ -827,9 +829,9 @@ static int vaapi_encode_h265_init_picture_params(AVCodecContext *avctx,
     // may force an IDR frame on the output where the medadata gets
     // changed on the input frame.
     if ((priv->sei & SEI_MASTERING_DISPLAY) &&
-        (pic->type == PICTURE_TYPE_I || pic->type == PICTURE_TYPE_IDR)) {
+        (base_pic->type == PICTURE_TYPE_I || base_pic->type == PICTURE_TYPE_IDR)) {
         AVFrameSideData *sd =
-            av_frame_get_side_data(pic->input_image,
+            av_frame_get_side_data(base_pic->input_image,
                                    AV_FRAME_DATA_MASTERING_DISPLAY_METADATA);
 
         if (sd) {
@@ -875,9 +877,9 @@ static int vaapi_encode_h265_init_picture_params(AVCodecContext *avctx,
     }
 
     if ((priv->sei & SEI_CONTENT_LIGHT_LEVEL) &&
-        (pic->type == PICTURE_TYPE_I || pic->type == PICTURE_TYPE_IDR)) {
+        (base_pic->type == PICTURE_TYPE_I || base_pic->type == PICTURE_TYPE_IDR)) {
         AVFrameSideData *sd =
-            av_frame_get_side_data(pic->input_image,
+            av_frame_get_side_data(base_pic->input_image,
                                    AV_FRAME_DATA_CONTENT_LIGHT_LEVEL);
 
         if (sd) {
@@ -897,7 +899,7 @@ static int vaapi_encode_h265_init_picture_params(AVCodecContext *avctx,
         int err;
         size_t sei_a53cc_len;
         av_freep(&priv->sei_a53cc_data);
-        err = ff_alloc_a53_sei(pic->input_image, 0, &priv->sei_a53cc_data, &sei_a53cc_len);
+        err = ff_alloc_a53_sei(base_pic->input_image, 0, &priv->sei_a53cc_data, &sei_a53cc_len);
         if (err < 0)
             return err;
         if (priv->sei_a53cc_data != NULL) {
@@ -916,19 +918,19 @@ static int vaapi_encode_h265_init_picture_params(AVCodecContext *avctx,
     };
 
     for (int k = 0; k < MAX_REFERENCE_LIST_NUM; k++) {
-        for (i = 0; i < pic->nb_refs[k]; i++) {
-            VAAPIEncodePicture      *ref = pic->refs[k][i];
+        for (i = 0; i < base_pic->nb_refs[k]; i++) {
+            HWBaseEncodePicture    *ref = base_pic->refs[k][i];
             VAAPIEncodeH265Picture *href;
 
-            av_assert0(ref && ref->encode_order < pic->encode_order);
+            av_assert0(ref && ref->encode_order < base_pic->encode_order);
             href = ref->priv_data;
 
             vpic->reference_frames[j++] = (VAPictureHEVC) {
-                .picture_id    = ref->recon_surface,
+                .picture_id    = ((VAAPIEncodePicture *)ref)->recon_surface,
                 .pic_order_cnt = href->pic_order_cnt,
-                .flags = (ref->display_order < pic->display_order ?
+                .flags = (ref->display_order < base_pic->display_order ?
                           VA_PICTURE_HEVC_RPS_ST_CURR_BEFORE : 0) |
-                          (ref->display_order > pic->display_order ?
+                          (ref->display_order > base_pic->display_order ?
                           VA_PICTURE_HEVC_RPS_ST_CURR_AFTER  : 0),
             };
         }
@@ -945,7 +947,7 @@ static int vaapi_encode_h265_init_picture_params(AVCodecContext *avctx,
 
     vpic->nal_unit_type = hpic->slice_nal_unit;
 
-    switch (pic->type) {
+    switch (base_pic->type) {
     case PICTURE_TYPE_IDR:
         vpic->pic_fields.bits.idr_pic_flag       = 1;
         vpic->pic_fields.bits.coding_type        = 1;
@@ -977,9 +979,10 @@ static int vaapi_encode_h265_init_slice_params(AVCodecContext *avctx,
                                                VAAPIEncodePicture *pic,
                                                VAAPIEncodeSlice *slice)
 {
-    VAAPIEncodeContext                *ctx = avctx->priv_data;
+    HWBaseEncodeContext          *base_ctx = avctx->priv_data;
     VAAPIEncodeH265Context           *priv = avctx->priv_data;
-    VAAPIEncodeH265Picture           *hpic = pic->priv_data;
+    HWBaseEncodePicture          *base_pic = (HWBaseEncodePicture *)pic;
+    VAAPIEncodeH265Picture           *hpic = base_pic->priv_data;
     const H265RawSPS                  *sps = &priv->raw_sps;
     const H265RawPPS                  *pps = &priv->raw_pps;
     H265RawSliceHeader                 *sh = &priv->raw_slice.header;
@@ -1000,13 +1003,13 @@ static int vaapi_encode_h265_init_slice_params(AVCodecContext *avctx,
 
     sh->slice_type = hpic->slice_type;
 
-    if (sh->slice_type == HEVC_SLICE_P && ctx->p_to_gpb)
+    if (sh->slice_type == HEVC_SLICE_P && base_ctx->p_to_gpb)
         sh->slice_type = HEVC_SLICE_B;
 
     sh->slice_pic_order_cnt_lsb = hpic->pic_order_cnt &
         (1 << (sps->log2_max_pic_order_cnt_lsb_minus4 + 4)) - 1;
 
-    if (pic->type != PICTURE_TYPE_IDR) {
+    if (base_pic->type != PICTURE_TYPE_IDR) {
         H265RawSTRefPicSet *rps;
         const VAAPIEncodeH265Picture *strp;
         int rps_poc[MAX_DPB_SIZE];
@@ -1020,33 +1023,33 @@ static int vaapi_encode_h265_init_slice_params(AVCodecContext *avctx,
 
         rps_pics = 0;
         for (i = 0; i < MAX_REFERENCE_LIST_NUM; i++) {
-            for (j = 0; j < pic->nb_refs[i]; j++) {
-                strp = pic->refs[i][j]->priv_data;
+            for (j = 0; j < base_pic->nb_refs[i]; j++) {
+                strp = base_pic->refs[i][j]->priv_data;
                 rps_poc[rps_pics]  = strp->pic_order_cnt;
                 rps_used[rps_pics] = 1;
                 ++rps_pics;
             }
         }
 
-        for (i = 0; i < pic->nb_dpb_pics; i++) {
-            if (pic->dpb[i] == pic)
+        for (i = 0; i < base_pic->nb_dpb_pics; i++) {
+            if (base_pic->dpb[i] == base_pic)
                 continue;
 
-            for (j = 0; j < pic->nb_refs[0]; j++) {
-                if (pic->dpb[i] == pic->refs[0][j])
+            for (j = 0; j < base_pic->nb_refs[0]; j++) {
+                if (base_pic->dpb[i] == base_pic->refs[0][j])
                     break;
             }
-            if (j < pic->nb_refs[0])
+            if (j < base_pic->nb_refs[0])
                 continue;
 
-            for (j = 0; j < pic->nb_refs[1]; j++) {
-                if (pic->dpb[i] == pic->refs[1][j])
+            for (j = 0; j < base_pic->nb_refs[1]; j++) {
+                if (base_pic->dpb[i] == base_pic->refs[1][j])
                     break;
             }
-            if (j < pic->nb_refs[1])
+            if (j < base_pic->nb_refs[1])
                 continue;
 
-            strp = pic->dpb[i]->priv_data;
+            strp = base_pic->dpb[i]->priv_data;
             rps_poc[rps_pics]  = strp->pic_order_cnt;
             rps_used[rps_pics] = 0;
             ++rps_pics;
@@ -1113,9 +1116,9 @@ static int vaapi_encode_h265_init_slice_params(AVCodecContext *avctx,
     sh->slice_sao_luma_flag = sh->slice_sao_chroma_flag =
         sps->sample_adaptive_offset_enabled_flag;
 
-    if (pic->type == PICTURE_TYPE_B)
+    if (base_pic->type == PICTURE_TYPE_B)
         sh->slice_qp_delta = priv->fixed_qp_b - (pps->init_qp_minus26 + 26);
-    else if (pic->type == PICTURE_TYPE_P)
+    else if (base_pic->type == PICTURE_TYPE_P)
         sh->slice_qp_delta = priv->fixed_qp_p - (pps->init_qp_minus26 + 26);
     else
         sh->slice_qp_delta = priv->fixed_qp_idr - (pps->init_qp_minus26 + 26);
@@ -1170,22 +1173,22 @@ static int vaapi_encode_h265_init_slice_params(AVCodecContext *avctx,
         vslice->ref_pic_list1[i].flags      = VA_PICTURE_HEVC_INVALID;
     }
 
-    if (pic->nb_refs[0]) {
+    if (base_pic->nb_refs[0]) {
         // Backward reference for P- or B-frame.
-        av_assert0(pic->type == PICTURE_TYPE_P ||
-                   pic->type == PICTURE_TYPE_B);
+        av_assert0(base_pic->type == PICTURE_TYPE_P ||
+                   base_pic->type == PICTURE_TYPE_B);
         vslice->ref_pic_list0[0] = vpic->reference_frames[0];
-        if (ctx->p_to_gpb && pic->type == PICTURE_TYPE_P)
+        if (base_ctx->p_to_gpb && base_pic->type == PICTURE_TYPE_P)
             // Reference for GPB B-frame, L0 == L1
             vslice->ref_pic_list1[0] = vpic->reference_frames[0];
     }
-    if (pic->nb_refs[1]) {
+    if (base_pic->nb_refs[1]) {
         // Forward reference for B-frame.
-        av_assert0(pic->type == PICTURE_TYPE_B);
+        av_assert0(base_pic->type == PICTURE_TYPE_B);
         vslice->ref_pic_list1[0] = vpic->reference_frames[1];
     }
 
-    if (pic->type == PICTURE_TYPE_P && ctx->p_to_gpb) {
+    if (base_pic->type == PICTURE_TYPE_P && base_ctx->p_to_gpb) {
         vslice->slice_type = HEVC_SLICE_B;
         for (i = 0; i < FF_ARRAY_ELEMS(vslice->ref_pic_list0); i++) {
             vslice->ref_pic_list1[i].picture_id = vslice->ref_pic_list0[i].picture_id;
@@ -1198,8 +1201,9 @@ static int vaapi_encode_h265_init_slice_params(AVCodecContext *avctx,
 
 static av_cold int vaapi_encode_h265_get_encoder_caps(AVCodecContext *avctx)
 {
-    VAAPIEncodeContext      *ctx = avctx->priv_data;
-    VAAPIEncodeH265Context *priv = avctx->priv_data;
+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
+    VAAPIEncodeContext       *ctx = avctx->priv_data;
+    VAAPIEncodeH265Context  *priv = avctx->priv_data;
 
 #if VA_CHECK_VERSION(1, 13, 0)
     {
@@ -1250,18 +1254,19 @@ static av_cold int vaapi_encode_h265_get_encoder_caps(AVCodecContext *avctx)
            "min CB size %dx%d.\n", priv->ctu_size, priv->ctu_size,
            priv->min_cb_size, priv->min_cb_size);
 
-    ctx->surface_width  = FFALIGN(avctx->width,  priv->min_cb_size);
-    ctx->surface_height = FFALIGN(avctx->height, priv->min_cb_size);
+    base_ctx->surface_width  = FFALIGN(avctx->width,  priv->min_cb_size);
+    base_ctx->surface_height = FFALIGN(avctx->height, priv->min_cb_size);
 
-    ctx->slice_block_width = ctx->slice_block_height = priv->ctu_size;
+    base_ctx->slice_block_width = base_ctx->slice_block_height = priv->ctu_size;
 
     return 0;
 }
 
 static av_cold int vaapi_encode_h265_configure(AVCodecContext *avctx)
 {
-    VAAPIEncodeContext      *ctx = avctx->priv_data;
-    VAAPIEncodeH265Context *priv = avctx->priv_data;
+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
+    VAAPIEncodeContext       *ctx = avctx->priv_data;
+    VAAPIEncodeH265Context  *priv = avctx->priv_data;
     int err;
 
     err = ff_cbs_init(&priv->cbc, AV_CODEC_ID_HEVC, avctx);
@@ -1273,7 +1278,7 @@ static av_cold int vaapi_encode_h265_configure(AVCodecContext *avctx)
         // therefore always bounded below by 1, even in 10-bit mode where
         // it should go down to -12.
 
-        priv->fixed_qp_p = av_clip(ctx->rc_quality, 1, 51);
+        priv->fixed_qp_p = av_clip(base_ctx->rc_quality, 1, 51);
         if (avctx->i_quant_factor > 0.0)
             priv->fixed_qp_idr =
                 av_clip((avctx->i_quant_factor * priv->fixed_qp_p +
@@ -1357,8 +1362,9 @@ static const VAAPIEncodeType vaapi_encode_type_h265 = {
 
 static av_cold int vaapi_encode_h265_init(AVCodecContext *avctx)
 {
-    VAAPIEncodeContext      *ctx = avctx->priv_data;
-    VAAPIEncodeH265Context *priv = avctx->priv_data;
+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
+    VAAPIEncodeContext       *ctx = avctx->priv_data;
+    VAAPIEncodeH265Context  *priv = avctx->priv_data;
 
     ctx->codec = &vaapi_encode_type_h265;
 
@@ -1379,7 +1385,7 @@ static av_cold int vaapi_encode_h265_init(AVCodecContext *avctx)
         VA_ENC_PACKED_HEADER_MISC;      // SEI
 
     if (priv->qp > 0)
-        ctx->explicit_qp = priv->qp;
+        base_ctx->explicit_qp = priv->qp;
 
     return ff_vaapi_encode_init(avctx);
 }
@@ -1398,6 +1404,7 @@ static av_cold int vaapi_encode_h265_close(AVCodecContext *avctx)
 #define OFFSET(x) offsetof(VAAPIEncodeH265Context, x)
 #define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM)
 static const AVOption vaapi_encode_h265_options[] = {
+    HW_BASE_ENCODE_COMMON_OPTIONS,
     VAAPI_ENCODE_COMMON_OPTIONS,
     VAAPI_ENCODE_RC_OPTIONS,
 
diff --git a/libavcodec/vaapi_encode_mjpeg.c b/libavcodec/vaapi_encode_mjpeg.c
index c17747e3a9..91829b1e0e 100644
--- a/libavcodec/vaapi_encode_mjpeg.c
+++ b/libavcodec/vaapi_encode_mjpeg.c
@@ -222,7 +222,9 @@ static int vaapi_encode_mjpeg_write_extra_buffer(AVCodecContext *avctx,
 static int vaapi_encode_mjpeg_init_picture_params(AVCodecContext *avctx,
                                                   VAAPIEncodePicture *pic)
 {
+    HWBaseEncodeContext         *base_ctx = avctx->priv_data;
     VAAPIEncodeMJPEGContext         *priv = avctx->priv_data;
+    HWBaseEncodePicture         *base_pic = (HWBaseEncodePicture *)pic;
     JPEGRawFrameHeader                *fh = &priv->frame_header;
     JPEGRawScanHeader                 *sh = &priv->scan.header;
     VAEncPictureParameterBufferJPEG *vpic = pic->codec_picture_params;
@@ -232,9 +234,9 @@ static int vaapi_encode_mjpeg_init_picture_params(AVCodecContext *avctx,
     const uint8_t *components;
     int t, i, quant_scale, len;
 
-    av_assert0(pic->type == PICTURE_TYPE_IDR);
+    av_assert0(base_pic->type == PICTURE_TYPE_IDR);
 
-    desc = av_pix_fmt_desc_get(priv->common.input_frames->sw_format);
+    desc = av_pix_fmt_desc_get(base_ctx->input_frames->sw_format);
     av_assert0(desc);
     if (desc->flags & AV_PIX_FMT_FLAG_RGB)
         components = components_rgb;
@@ -261,7 +263,7 @@ static int vaapi_encode_mjpeg_init_picture_params(AVCodecContext *avctx,
     // JFIF header.
     if (priv->jfif) {
         JPEGRawApplicationData *app = &priv->jfif_header;
-        AVRational sar = pic->input_image->sample_aspect_ratio;
+        AVRational sar = base_pic->input_image->sample_aspect_ratio;
         int sar_w, sar_h;
         PutByteContext pbc;
 
@@ -436,25 +438,26 @@ static int vaapi_encode_mjpeg_init_slice_params(AVCodecContext *avctx,
 
 static av_cold int vaapi_encode_mjpeg_get_encoder_caps(AVCodecContext *avctx)
 {
-    VAAPIEncodeContext *ctx = avctx->priv_data;
+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
     const AVPixFmtDescriptor *desc;
 
-    desc = av_pix_fmt_desc_get(ctx->input_frames->sw_format);
+    desc = av_pix_fmt_desc_get(base_ctx->input_frames->sw_format);
     av_assert0(desc);
 
-    ctx->surface_width  = FFALIGN(avctx->width,  8 << desc->log2_chroma_w);
-    ctx->surface_height = FFALIGN(avctx->height, 8 << desc->log2_chroma_h);
+    base_ctx->surface_width  = FFALIGN(avctx->width,  8 << desc->log2_chroma_w);
+    base_ctx->surface_height = FFALIGN(avctx->height, 8 << desc->log2_chroma_h);
 
     return 0;
 }
 
 static av_cold int vaapi_encode_mjpeg_configure(AVCodecContext *avctx)
 {
+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
     VAAPIEncodeContext       *ctx = avctx->priv_data;
     VAAPIEncodeMJPEGContext *priv = avctx->priv_data;
     int err;
 
-    priv->quality = ctx->rc_quality;
+    priv->quality = base_ctx->rc_quality;
     if (priv->quality < 1 || priv->quality > 100) {
         av_log(avctx, AV_LOG_ERROR, "Invalid quality value %d "
                "(must be 1-100).\n", priv->quality);
@@ -540,6 +543,7 @@ static av_cold int vaapi_encode_mjpeg_close(AVCodecContext *avctx)
 #define OFFSET(x) offsetof(VAAPIEncodeMJPEGContext, x)
 #define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM)
 static const AVOption vaapi_encode_mjpeg_options[] = {
+    HW_BASE_ENCODE_COMMON_OPTIONS,
     VAAPI_ENCODE_COMMON_OPTIONS,
 
     { "jfif", "Include JFIF header",
diff --git a/libavcodec/vaapi_encode_mpeg2.c b/libavcodec/vaapi_encode_mpeg2.c
index c9b16fbcfc..aa8e6d6bdf 100644
--- a/libavcodec/vaapi_encode_mpeg2.c
+++ b/libavcodec/vaapi_encode_mpeg2.c
@@ -166,6 +166,7 @@ fail:
 
 static int vaapi_encode_mpeg2_init_sequence_params(AVCodecContext *avctx)
 {
+    HWBaseEncodeContext           *base_ctx = avctx->priv_data;
     VAAPIEncodeContext                 *ctx = avctx->priv_data;
     VAAPIEncodeMPEG2Context           *priv = avctx->priv_data;
     MPEG2RawSequenceHeader              *sh = &priv->sequence_header;
@@ -281,7 +282,7 @@ static int vaapi_encode_mpeg2_init_sequence_params(AVCodecContext *avctx)
 
     se->bit_rate_extension        = priv->bit_rate >> 18;
     se->vbv_buffer_size_extension = priv->vbv_buffer_size >> 10;
-    se->low_delay                 = ctx->b_per_p == 0;
+    se->low_delay                 = base_ctx->b_per_p == 0;
 
     se->frame_rate_extension_n = ext_n;
     se->frame_rate_extension_d = ext_d;
@@ -353,8 +354,8 @@ static int vaapi_encode_mpeg2_init_sequence_params(AVCodecContext *avctx)
 
 
     *vseq = (VAEncSequenceParameterBufferMPEG2) {
-        .intra_period = ctx->gop_size,
-        .ip_period    = ctx->b_per_p + 1,
+        .intra_period = base_ctx->gop_size,
+        .ip_period    = base_ctx->b_per_p + 1,
 
         .picture_width  = avctx->width,
         .picture_height = avctx->height,
@@ -417,30 +418,31 @@ static int vaapi_encode_mpeg2_init_sequence_params(AVCodecContext *avctx)
 }
 
 static int vaapi_encode_mpeg2_init_picture_params(AVCodecContext *avctx,
-                                                 VAAPIEncodePicture *pic)
+                                                  VAAPIEncodePicture *pic)
 {
     VAAPIEncodeMPEG2Context          *priv = avctx->priv_data;
+    HWBaseEncodePicture          *base_pic = (HWBaseEncodePicture *)pic;
     MPEG2RawPictureHeader              *ph = &priv->picture_header;
     MPEG2RawPictureCodingExtension    *pce = &priv->picture_coding_extension.data.picture_coding;
     VAEncPictureParameterBufferMPEG2 *vpic = pic->codec_picture_params;
 
-    if (pic->type == PICTURE_TYPE_IDR || pic->type == PICTURE_TYPE_I) {
+    if (base_pic->type == PICTURE_TYPE_IDR || base_pic->type == PICTURE_TYPE_I) {
         ph->temporal_reference  = 0;
         ph->picture_coding_type = 1;
-        priv->last_i_frame = pic->display_order;
+        priv->last_i_frame = base_pic->display_order;
     } else {
-        ph->temporal_reference = pic->display_order - priv->last_i_frame;
-        ph->picture_coding_type = pic->type == PICTURE_TYPE_B ? 3 : 2;
+        ph->temporal_reference = base_pic->display_order - priv->last_i_frame;
+        ph->picture_coding_type = base_pic->type == PICTURE_TYPE_B ? 3 : 2;
     }
 
-    if (pic->type == PICTURE_TYPE_P || pic->type == PICTURE_TYPE_B) {
+    if (base_pic->type == PICTURE_TYPE_P || base_pic->type == PICTURE_TYPE_B) {
         pce->f_code[0][0] = priv->f_code_horizontal;
         pce->f_code[0][1] = priv->f_code_vertical;
     } else {
         pce->f_code[0][0] = 15;
         pce->f_code[0][1] = 15;
     }
-    if (pic->type == PICTURE_TYPE_B) {
+    if (base_pic->type == PICTURE_TYPE_B) {
         pce->f_code[1][0] = priv->f_code_horizontal;
         pce->f_code[1][1] = priv->f_code_vertical;
     } else {
@@ -451,19 +453,19 @@ static int vaapi_encode_mpeg2_init_picture_params(AVCodecContext *avctx,
     vpic->reconstructed_picture = pic->recon_surface;
     vpic->coded_buf             = pic->output_buffer;
 
-    switch (pic->type) {
+    switch (base_pic->type) {
     case PICTURE_TYPE_IDR:
     case PICTURE_TYPE_I:
         vpic->picture_type = VAEncPictureTypeIntra;
         break;
     case PICTURE_TYPE_P:
         vpic->picture_type = VAEncPictureTypePredictive;
-        vpic->forward_reference_picture = pic->refs[0][0]->recon_surface;
+        vpic->forward_reference_picture = ((VAAPIEncodePicture *)base_pic->refs[0][0])->recon_surface;
         break;
     case PICTURE_TYPE_B:
         vpic->picture_type = VAEncPictureTypeBidirectional;
-        vpic->forward_reference_picture  = pic->refs[0][0]->recon_surface;
-        vpic->backward_reference_picture = pic->refs[1][0]->recon_surface;
+        vpic->forward_reference_picture  = ((VAAPIEncodePicture *)base_pic->refs[0][0])->recon_surface;
+        vpic->backward_reference_picture = ((VAAPIEncodePicture *)base_pic->refs[1][0])->recon_surface;
         break;
     default:
         av_assert0(0 && "invalid picture type");
@@ -479,17 +481,18 @@ static int vaapi_encode_mpeg2_init_picture_params(AVCodecContext *avctx,
 }
 
 static int vaapi_encode_mpeg2_init_slice_params(AVCodecContext *avctx,
-                                               VAAPIEncodePicture *pic,
-                                               VAAPIEncodeSlice *slice)
+                                                VAAPIEncodePicture *pic,
+                                                VAAPIEncodeSlice *slice)
 {
-    VAAPIEncodeMPEG2Context            *priv = avctx->priv_data;
-    VAEncSliceParameterBufferMPEG2   *vslice = slice->codec_slice_params;
+    HWBaseEncodePicture          *base_pic = (HWBaseEncodePicture *)pic;
+    VAAPIEncodeMPEG2Context          *priv = avctx->priv_data;
+    VAEncSliceParameterBufferMPEG2 *vslice = slice->codec_slice_params;
     int qp;
 
     vslice->macroblock_address = slice->block_start;
     vslice->num_macroblocks    = slice->block_size;
 
-    switch (pic->type) {
+    switch (base_pic->type) {
     case PICTURE_TYPE_IDR:
     case PICTURE_TYPE_I:
         qp = priv->quant_i;
@@ -505,14 +508,15 @@ static int vaapi_encode_mpeg2_init_slice_params(AVCodecContext *avctx,
     }
 
     vslice->quantiser_scale_code = qp;
-    vslice->is_intra_slice = (pic->type == PICTURE_TYPE_IDR ||
-                              pic->type == PICTURE_TYPE_I);
+    vslice->is_intra_slice = (base_pic->type == PICTURE_TYPE_IDR ||
+                              base_pic->type == PICTURE_TYPE_I);
 
     return 0;
 }
 
 static av_cold int vaapi_encode_mpeg2_configure(AVCodecContext *avctx)
 {
+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
     VAAPIEncodeContext       *ctx = avctx->priv_data;
     VAAPIEncodeMPEG2Context *priv = avctx->priv_data;
     int err;
@@ -522,7 +526,7 @@ static av_cold int vaapi_encode_mpeg2_configure(AVCodecContext *avctx)
         return err;
 
     if (ctx->va_rc_mode == VA_RC_CQP) {
-        priv->quant_p = av_clip(ctx->rc_quality, 1, 31);
+        priv->quant_p = av_clip(base_ctx->rc_quality, 1, 31);
         if (avctx->i_quant_factor > 0.0)
             priv->quant_i =
                 av_clip((avctx->i_quant_factor * priv->quant_p +
@@ -639,6 +643,7 @@ static av_cold int vaapi_encode_mpeg2_close(AVCodecContext *avctx)
 #define OFFSET(x) offsetof(VAAPIEncodeMPEG2Context, x)
 #define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM)
 static const AVOption vaapi_encode_mpeg2_options[] = {
+    HW_BASE_ENCODE_COMMON_OPTIONS,
     VAAPI_ENCODE_COMMON_OPTIONS,
     VAAPI_ENCODE_RC_OPTIONS,
 
diff --git a/libavcodec/vaapi_encode_vp8.c b/libavcodec/vaapi_encode_vp8.c
index 8a557b967e..c8203dcbc9 100644
--- a/libavcodec/vaapi_encode_vp8.c
+++ b/libavcodec/vaapi_encode_vp8.c
@@ -52,6 +52,7 @@ typedef struct VAAPIEncodeVP8Context {
 
 static int vaapi_encode_vp8_init_sequence_params(AVCodecContext *avctx)
 {
+    HWBaseEncodeContext         *base_ctx = avctx->priv_data;
     VAAPIEncodeContext               *ctx = avctx->priv_data;
     VAEncSequenceParameterBufferVP8 *vseq = ctx->codec_sequence_params;
 
@@ -66,7 +67,7 @@ static int vaapi_encode_vp8_init_sequence_params(AVCodecContext *avctx)
 
     if (!(ctx->va_rc_mode & VA_RC_CQP)) {
         vseq->bits_per_second = ctx->va_bit_rate;
-        vseq->intra_period    = ctx->gop_size;
+        vseq->intra_period    = base_ctx->gop_size;
     }
 
     return 0;
@@ -75,6 +76,7 @@ static int vaapi_encode_vp8_init_sequence_params(AVCodecContext *avctx)
 static int vaapi_encode_vp8_init_picture_params(AVCodecContext *avctx,
                                                 VAAPIEncodePicture *pic)
 {
+    HWBaseEncodePicture        *base_pic = (HWBaseEncodePicture *)pic;
     VAAPIEncodeVP8Context          *priv = avctx->priv_data;
     VAEncPictureParameterBufferVP8 *vpic = pic->codec_picture_params;
     int i;
@@ -83,10 +85,10 @@ static int vaapi_encode_vp8_init_picture_params(AVCodecContext *avctx,
 
     vpic->coded_buf = pic->output_buffer;
 
-    switch (pic->type) {
+    switch (base_pic->type) {
     case PICTURE_TYPE_IDR:
     case PICTURE_TYPE_I:
-        av_assert0(pic->nb_refs[0] == 0 && pic->nb_refs[1] == 0);
+        av_assert0(base_pic->nb_refs[0] == 0 && base_pic->nb_refs[1] == 0);
         vpic->ref_flags.bits.force_kf = 1;
         vpic->ref_last_frame =
         vpic->ref_gf_frame   =
@@ -94,20 +96,20 @@ static int vaapi_encode_vp8_init_picture_params(AVCodecContext *avctx,
             VA_INVALID_SURFACE;
         break;
     case PICTURE_TYPE_P:
-        av_assert0(!pic->nb_refs[1]);
+        av_assert0(!base_pic->nb_refs[1]);
         vpic->ref_flags.bits.no_ref_last = 0;
         vpic->ref_flags.bits.no_ref_gf   = 1;
         vpic->ref_flags.bits.no_ref_arf  = 1;
         vpic->ref_last_frame =
         vpic->ref_gf_frame   =
         vpic->ref_arf_frame  =
-            pic->refs[0][0]->recon_surface;
+            ((VAAPIEncodePicture *)base_pic->refs[0][0])->recon_surface;
         break;
     default:
         av_assert0(0 && "invalid picture type");
     }
 
-    vpic->pic_flags.bits.frame_type = (pic->type != PICTURE_TYPE_IDR);
+    vpic->pic_flags.bits.frame_type = (base_pic->type != PICTURE_TYPE_IDR);
     vpic->pic_flags.bits.show_frame = 1;
 
     vpic->pic_flags.bits.refresh_last            = 1;
@@ -145,7 +147,7 @@ static int vaapi_encode_vp8_write_quant_table(AVCodecContext *avctx,
 
     memset(&quant, 0, sizeof(quant));
 
-    if (pic->type == PICTURE_TYPE_P)
+    if (pic->base.type == PICTURE_TYPE_P)
         q = priv->q_index_p;
     else
         q = priv->q_index_i;
@@ -161,10 +163,11 @@ static int vaapi_encode_vp8_write_quant_table(AVCodecContext *avctx,
 
 static av_cold int vaapi_encode_vp8_configure(AVCodecContext *avctx)
 {
-    VAAPIEncodeContext     *ctx = avctx->priv_data;
-    VAAPIEncodeVP8Context *priv = avctx->priv_data;
+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
+    VAAPIEncodeContext      *ctx = avctx->priv_data;
+    VAAPIEncodeVP8Context  *priv = avctx->priv_data;
 
-    priv->q_index_p = av_clip(ctx->rc_quality, 0, VP8_MAX_QUANT);
+    priv->q_index_p = av_clip(base_ctx->rc_quality, 0, VP8_MAX_QUANT);
     if (avctx->i_quant_factor > 0.0)
         priv->q_index_i =
             av_clip((avctx->i_quant_factor * priv->q_index_p  +
@@ -216,6 +219,7 @@ static av_cold int vaapi_encode_vp8_init(AVCodecContext *avctx)
 #define OFFSET(x) offsetof(VAAPIEncodeVP8Context, x)
 #define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM)
 static const AVOption vaapi_encode_vp8_options[] = {
+    HW_BASE_ENCODE_COMMON_OPTIONS,
     VAAPI_ENCODE_COMMON_OPTIONS,
     VAAPI_ENCODE_RC_OPTIONS,
 
diff --git a/libavcodec/vaapi_encode_vp9.c b/libavcodec/vaapi_encode_vp9.c
index c2a8dec71b..7a0cb0c7fc 100644
--- a/libavcodec/vaapi_encode_vp9.c
+++ b/libavcodec/vaapi_encode_vp9.c
@@ -53,6 +53,7 @@ typedef struct VAAPIEncodeVP9Context {
 
 static int vaapi_encode_vp9_init_sequence_params(AVCodecContext *avctx)
 {
+    HWBaseEncodeContext         *base_ctx = avctx->priv_data;
     VAAPIEncodeContext               *ctx = avctx->priv_data;
     VAEncSequenceParameterBufferVP9 *vseq = ctx->codec_sequence_params;
     VAEncPictureParameterBufferVP9  *vpic = ctx->codec_picture_params;
@@ -64,7 +65,7 @@ static int vaapi_encode_vp9_init_sequence_params(AVCodecContext *avctx)
 
     if (!(ctx->va_rc_mode & VA_RC_CQP)) {
         vseq->bits_per_second = ctx->va_bit_rate;
-        vseq->intra_period    = ctx->gop_size;
+        vseq->intra_period    = base_ctx->gop_size;
     }
 
     vpic->frame_width_src  = avctx->width;
@@ -78,9 +79,10 @@ static int vaapi_encode_vp9_init_sequence_params(AVCodecContext *avctx)
 static int vaapi_encode_vp9_init_picture_params(AVCodecContext *avctx,
                                                 VAAPIEncodePicture *pic)
 {
-    VAAPIEncodeContext              *ctx = avctx->priv_data;
+    HWBaseEncodeContext        *base_ctx = avctx->priv_data;
     VAAPIEncodeVP9Context          *priv = avctx->priv_data;
-    VAAPIEncodeVP9Picture          *hpic = pic->priv_data;
+    HWBaseEncodePicture        *base_pic = (HWBaseEncodePicture *)pic;
+    VAAPIEncodeVP9Picture          *hpic = base_pic->priv_data;
     VAEncPictureParameterBufferVP9 *vpic = pic->codec_picture_params;
     int i;
     int num_tile_columns;
@@ -94,20 +96,20 @@ static int vaapi_encode_vp9_init_picture_params(AVCodecContext *avctx,
     num_tile_columns = (vpic->frame_width_src + VP9_MAX_TILE_WIDTH - 1) / VP9_MAX_TILE_WIDTH;
     vpic->log2_tile_columns = num_tile_columns == 1 ? 0 : av_log2(num_tile_columns - 1) + 1;
 
-    switch (pic->type) {
+    switch (base_pic->type) {
     case PICTURE_TYPE_IDR:
-        av_assert0(pic->nb_refs[0] == 0 && pic->nb_refs[1] == 0);
+        av_assert0(base_pic->nb_refs[0] == 0 && base_pic->nb_refs[1] == 0);
         vpic->ref_flags.bits.force_kf = 1;
         vpic->refresh_frame_flags = 0xff;
         hpic->slot = 0;
         break;
     case PICTURE_TYPE_P:
-        av_assert0(!pic->nb_refs[1]);
+        av_assert0(!base_pic->nb_refs[1]);
         {
-            VAAPIEncodeVP9Picture *href = pic->refs[0][0]->priv_data;
+            VAAPIEncodeVP9Picture *href = base_pic->refs[0][0]->priv_data;
             av_assert0(href->slot == 0 || href->slot == 1);
 
-            if (ctx->max_b_depth > 0) {
+            if (base_ctx->max_b_depth > 0) {
                 hpic->slot = !href->slot;
                 vpic->refresh_frame_flags = 1 << hpic->slot | 0xfc;
             } else {
@@ -120,20 +122,20 @@ static int vaapi_encode_vp9_init_picture_params(AVCodecContext *avctx,
         }
         break;
     case PICTURE_TYPE_B:
-        av_assert0(pic->nb_refs[0] && pic->nb_refs[1]);
+        av_assert0(base_pic->nb_refs[0] && base_pic->nb_refs[1]);
         {
-            VAAPIEncodeVP9Picture *href0 = pic->refs[0][0]->priv_data,
-                                  *href1 = pic->refs[1][0]->priv_data;
-            av_assert0(href0->slot < pic->b_depth + 1 &&
-                       href1->slot < pic->b_depth + 1);
+            VAAPIEncodeVP9Picture *href0 = base_pic->refs[0][0]->priv_data,
+                                  *href1 = base_pic->refs[1][0]->priv_data;
+            av_assert0(href0->slot < base_pic->b_depth + 1 &&
+                       href1->slot < base_pic->b_depth + 1);
 
-            if (pic->b_depth == ctx->max_b_depth) {
+            if (base_pic->b_depth == base_ctx->max_b_depth) {
                 // Unreferenced frame.
                 vpic->refresh_frame_flags = 0x00;
                 hpic->slot = 8;
             } else {
-                vpic->refresh_frame_flags = 0xfe << pic->b_depth & 0xff;
-                hpic->slot = 1 + pic->b_depth;
+                vpic->refresh_frame_flags = 0xfe << base_pic->b_depth & 0xff;
+                hpic->slot = 1 + base_pic->b_depth;
             }
             vpic->ref_flags.bits.ref_frame_ctrl_l0  = 1;
             vpic->ref_flags.bits.ref_frame_ctrl_l1  = 2;
@@ -148,31 +150,31 @@ static int vaapi_encode_vp9_init_picture_params(AVCodecContext *avctx,
     }
     if (vpic->refresh_frame_flags == 0x00) {
         av_log(avctx, AV_LOG_DEBUG, "Pic %"PRId64" not stored.\n",
-               pic->display_order);
+               base_pic->display_order);
     } else {
         av_log(avctx, AV_LOG_DEBUG, "Pic %"PRId64" stored in slot %d.\n",
-               pic->display_order, hpic->slot);
+               base_pic->display_order, hpic->slot);
     }
 
     for (i = 0; i < FF_ARRAY_ELEMS(vpic->reference_frames); i++)
         vpic->reference_frames[i] = VA_INVALID_SURFACE;
 
     for (i = 0; i < MAX_REFERENCE_LIST_NUM; i++) {
-        for (int j = 0; j < pic->nb_refs[i]; j++) {
-            VAAPIEncodePicture *ref_pic = pic->refs[i][j];
+        for (int j = 0; j < base_pic->nb_refs[i]; j++) {
+            HWBaseEncodePicture *ref_pic = base_pic->refs[i][j];
             int slot;
             slot = ((VAAPIEncodeVP9Picture*)ref_pic->priv_data)->slot;
             av_assert0(vpic->reference_frames[slot] == VA_INVALID_SURFACE);
-            vpic->reference_frames[slot] = ref_pic->recon_surface;
+            vpic->reference_frames[slot] = ((VAAPIEncodePicture *)ref_pic)->recon_surface;
         }
     }
 
-    vpic->pic_flags.bits.frame_type = (pic->type != PICTURE_TYPE_IDR);
-    vpic->pic_flags.bits.show_frame = pic->display_order <= pic->encode_order;
+    vpic->pic_flags.bits.frame_type = (base_pic->type != PICTURE_TYPE_IDR);
+    vpic->pic_flags.bits.show_frame = base_pic->display_order <= base_pic->encode_order;
 
-    if (pic->type == PICTURE_TYPE_IDR)
+    if (base_pic->type == PICTURE_TYPE_IDR)
         vpic->luma_ac_qindex     = priv->q_idx_idr;
-    else if (pic->type == PICTURE_TYPE_P)
+    else if (base_pic->type == PICTURE_TYPE_P)
         vpic->luma_ac_qindex     = priv->q_idx_p;
     else
         vpic->luma_ac_qindex     = priv->q_idx_b;
@@ -188,22 +190,23 @@ static int vaapi_encode_vp9_init_picture_params(AVCodecContext *avctx,
 
 static av_cold int vaapi_encode_vp9_get_encoder_caps(AVCodecContext *avctx)
 {
-    VAAPIEncodeContext *ctx = avctx->priv_data;
+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
 
     // Surfaces must be aligned to 64x64 superblock boundaries.
-    ctx->surface_width  = FFALIGN(avctx->width,  64);
-    ctx->surface_height = FFALIGN(avctx->height, 64);
+    base_ctx->surface_width  = FFALIGN(avctx->width,  64);
+    base_ctx->surface_height = FFALIGN(avctx->height, 64);
 
     return 0;
 }
 
 static av_cold int vaapi_encode_vp9_configure(AVCodecContext *avctx)
 {
-    VAAPIEncodeContext     *ctx = avctx->priv_data;
-    VAAPIEncodeVP9Context *priv = avctx->priv_data;
+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
+    VAAPIEncodeContext       *ctx = avctx->priv_data;
+    VAAPIEncodeVP9Context   *priv = avctx->priv_data;
 
     if (ctx->rc_mode->quality) {
-        priv->q_idx_p = av_clip(ctx->rc_quality, 0, VP9_MAX_QUANT);
+        priv->q_idx_p = av_clip(base_ctx->rc_quality, 0, VP9_MAX_QUANT);
         if (avctx->i_quant_factor > 0.0)
             priv->q_idx_idr =
                 av_clip((avctx->i_quant_factor * priv->q_idx_p  +
@@ -273,6 +276,7 @@ static av_cold int vaapi_encode_vp9_init(AVCodecContext *avctx)
 #define OFFSET(x) offsetof(VAAPIEncodeVP9Context, x)
 #define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM)
 static const AVOption vaapi_encode_vp9_options[] = {
+    HW_BASE_ENCODE_COMMON_OPTIONS,
     VAAPI_ENCODE_COMMON_OPTIONS,
     VAAPI_ENCODE_RC_OPTIONS,
 
-- 
2.41.0.windows.1

_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [FFmpeg-devel] [PATCH v7 03/12] avcodec/vaapi_encode: move the dpb logic from VAAPI to base layer
  2024-03-14  8:14 [FFmpeg-devel] [PATCH v7 01/12] avcodec/vaapi_encode: move pic->input_surface initialization to encode_alloc tong1.wu-at-intel.com
  2024-03-14  8:14 ` [FFmpeg-devel] [PATCH v7 02/12] avcodec/vaapi_encode: introduce a base layer for vaapi encode tong1.wu-at-intel.com
@ 2024-03-14  8:14 ` tong1.wu-at-intel.com
  2024-03-14  8:14 ` [FFmpeg-devel] [PATCH v7 04/12] avcodec/vaapi_encode: extract a init function " tong1.wu-at-intel.com
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 15+ messages in thread
From: tong1.wu-at-intel.com @ 2024-03-14  8:14 UTC (permalink / raw)
  To: ffmpeg-devel; +Cc: Tong Wu

From: Tong Wu <tong1.wu@intel.com>

Move the receive_packet function to the base layer. Add *alloc, *issue,
*output and *free as hardware callbacks. The DPB management logic can
then be fully extracted to the base layer as-is.

Signed-off-by: Tong Wu <tong1.wu@intel.com>
---
 libavcodec/Makefile             |   2 +-
 libavcodec/hw_base_encode.c     | 599 ++++++++++++++++++++++++++++++++
 libavcodec/hw_base_encode.h     |   2 +
 libavcodec/vaapi_encode.c       | 586 +------------------------------
 libavcodec/vaapi_encode.h       |   3 -
 libavcodec/vaapi_encode_av1.c   |   2 +-
 libavcodec/vaapi_encode_h264.c  |   2 +-
 libavcodec/vaapi_encode_h265.c  |   2 +-
 libavcodec/vaapi_encode_mjpeg.c |   2 +-
 libavcodec/vaapi_encode_mpeg2.c |   2 +-
 libavcodec/vaapi_encode_vp8.c   |   2 +-
 libavcodec/vaapi_encode_vp9.c   |   2 +-
 12 files changed, 625 insertions(+), 581 deletions(-)
 create mode 100644 libavcodec/hw_base_encode.c

diff --git a/libavcodec/Makefile b/libavcodec/Makefile
index 708434ac76..cbfae5f182 100644
--- a/libavcodec/Makefile
+++ b/libavcodec/Makefile
@@ -162,7 +162,7 @@ OBJS-$(CONFIG_STARTCODE)               += startcode.o
 OBJS-$(CONFIG_TEXTUREDSP)              += texturedsp.o
 OBJS-$(CONFIG_TEXTUREDSPENC)           += texturedspenc.o
 OBJS-$(CONFIG_TPELDSP)                 += tpeldsp.o
-OBJS-$(CONFIG_VAAPI_ENCODE)            += vaapi_encode.o
+OBJS-$(CONFIG_VAAPI_ENCODE)            += vaapi_encode.o hw_base_encode.o
 OBJS-$(CONFIG_AV1_AMF_ENCODER)         += amfenc_av1.o
 OBJS-$(CONFIG_VC1DSP)                  += vc1dsp.o
 OBJS-$(CONFIG_VIDEODSP)                += videodsp.o
diff --git a/libavcodec/hw_base_encode.c b/libavcodec/hw_base_encode.c
new file mode 100644
index 0000000000..dcba902f44
--- /dev/null
+++ b/libavcodec/hw_base_encode.c
@@ -0,0 +1,599 @@
+/*
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include "libavutil/avassert.h"
+#include "libavutil/common.h"
+#include "libavutil/internal.h"
+#include "libavutil/log.h"
+#include "libavutil/pixdesc.h"
+
+#include "encode.h"
+#include "avcodec.h"
+#include "hw_base_encode.h"
+
+static void hw_base_encode_add_ref(AVCodecContext *avctx,
+                                   HWBaseEncodePicture *pic,
+                                   HWBaseEncodePicture *target,
+                                   int is_ref, int in_dpb, int prev)
+{
+    int refs = 0;
+
+    if (is_ref) {
+        av_assert0(pic != target);
+        av_assert0(pic->nb_refs[0] < MAX_PICTURE_REFERENCES &&
+                   pic->nb_refs[1] < MAX_PICTURE_REFERENCES);
+        if (target->display_order < pic->display_order)
+            pic->refs[0][pic->nb_refs[0]++] = target;
+        else
+            pic->refs[1][pic->nb_refs[1]++] = target;
+        ++refs;
+    }
+
+    if (in_dpb) {
+        av_assert0(pic->nb_dpb_pics < MAX_DPB_SIZE);
+        pic->dpb[pic->nb_dpb_pics++] = target;
+        ++refs;
+    }
+
+    if (prev) {
+        av_assert0(!pic->prev);
+        pic->prev = target;
+        ++refs;
+    }
+
+    target->ref_count[0] += refs;
+    target->ref_count[1] += refs;
+}
+
+static void hw_base_encode_remove_refs(AVCodecContext *avctx,
+                                       HWBaseEncodePicture *pic,
+                                       int level)
+{
+    int i;
+
+    if (pic->ref_removed[level])
+        return;
+
+    for (i = 0; i < pic->nb_refs[0]; i++) {
+        av_assert0(pic->refs[0][i]);
+        --pic->refs[0][i]->ref_count[level];
+        av_assert0(pic->refs[0][i]->ref_count[level] >= 0);
+    }
+
+    for (i = 0; i < pic->nb_refs[1]; i++) {
+        av_assert0(pic->refs[1][i]);
+        --pic->refs[1][i]->ref_count[level];
+        av_assert0(pic->refs[1][i]->ref_count[level] >= 0);
+    }
+
+    for (i = 0; i < pic->nb_dpb_pics; i++) {
+        av_assert0(pic->dpb[i]);
+        --pic->dpb[i]->ref_count[level];
+        av_assert0(pic->dpb[i]->ref_count[level] >= 0);
+    }
+
+    av_assert0(pic->prev || pic->type == PICTURE_TYPE_IDR);
+    if (pic->prev) {
+        --pic->prev->ref_count[level];
+        av_assert0(pic->prev->ref_count[level] >= 0);
+    }
+
+    pic->ref_removed[level] = 1;
+}
+
+static void hw_base_encode_set_b_pictures(AVCodecContext *avctx,
+                                          HWBaseEncodePicture *start,
+                                          HWBaseEncodePicture *end,
+                                          HWBaseEncodePicture *prev,
+                                          int current_depth,
+                                          HWBaseEncodePicture **last)
+{
+    HWBaseEncodeContext *ctx = avctx->priv_data;
+    HWBaseEncodePicture *pic, *next, *ref;
+    int i, len;
+
+    av_assert0(start && end && start != end && start->next != end);
+
+    // If we are at the maximum depth then encode all pictures as
+    // non-referenced B-pictures.  Also do this if there is exactly one
+    // picture left, since there will be nothing to reference it.
+    if (current_depth == ctx->max_b_depth || start->next->next == end) {
+        for (pic = start->next; pic; pic = pic->next) {
+            if (pic == end)
+                break;
+            pic->type    = PICTURE_TYPE_B;
+            pic->b_depth = current_depth;
+
+            hw_base_encode_add_ref(avctx, pic, start, 1, 1, 0);
+            hw_base_encode_add_ref(avctx, pic, end,   1, 1, 0);
+            hw_base_encode_add_ref(avctx, pic, prev,  0, 0, 1);
+
+            for (ref = end->refs[1][0]; ref; ref = ref->refs[1][0])
+                hw_base_encode_add_ref(avctx, pic, ref, 0, 1, 0);
+        }
+        *last = prev;
+
+    } else {
+        // Split the current list at the midpoint with a referenced
+        // B-picture, then descend into each side separately.
+        len = 0;
+        for (pic = start->next; pic != end; pic = pic->next)
+            ++len;
+        for (pic = start->next, i = 1; 2 * i < len; pic = pic->next, i++);
+
+        pic->type    = PICTURE_TYPE_B;
+        pic->b_depth = current_depth;
+
+        pic->is_reference = 1;
+
+        hw_base_encode_add_ref(avctx, pic, pic,   0, 1, 0);
+        hw_base_encode_add_ref(avctx, pic, start, 1, 1, 0);
+        hw_base_encode_add_ref(avctx, pic, end,   1, 1, 0);
+        hw_base_encode_add_ref(avctx, pic, prev,  0, 0, 1);
+
+        for (ref = end->refs[1][0]; ref; ref = ref->refs[1][0])
+            hw_base_encode_add_ref(avctx, pic, ref, 0, 1, 0);
+
+        if (i > 1)
+            hw_base_encode_set_b_pictures(avctx, start, pic, pic,
+                                          current_depth + 1, &next);
+        else
+            next = pic;
+
+        hw_base_encode_set_b_pictures(avctx, pic, end, next,
+                                      current_depth + 1, last);
+    }
+}
+
+static void hw_base_encode_add_next_prev(AVCodecContext *avctx,
+                                         HWBaseEncodePicture *pic)
+{
+    HWBaseEncodeContext *ctx = avctx->priv_data;
+    int i;
+
+    if (!pic)
+        return;
+
+    if (pic->type == PICTURE_TYPE_IDR) {
+        for (i = 0; i < ctx->nb_next_prev; i++) {
+            --ctx->next_prev[i]->ref_count[0];
+            ctx->next_prev[i] = NULL;
+        }
+        ctx->next_prev[0] = pic;
+        ++pic->ref_count[0];
+        ctx->nb_next_prev = 1;
+
+        return;
+    }
+
+    if (ctx->nb_next_prev < MAX_PICTURE_REFERENCES) {
+        ctx->next_prev[ctx->nb_next_prev++] = pic;
+        ++pic->ref_count[0];
+    } else {
+        --ctx->next_prev[0]->ref_count[0];
+        for (i = 0; i < MAX_PICTURE_REFERENCES - 1; i++)
+            ctx->next_prev[i] = ctx->next_prev[i + 1];
+        ctx->next_prev[i] = pic;
+        ++pic->ref_count[0];
+    }
+}
+
+static int hw_base_encode_pick_next(AVCodecContext *avctx,
+                                    HWBaseEncodePicture **pic_out)
+{
+    HWBaseEncodeContext *ctx = avctx->priv_data;
+    HWBaseEncodePicture *pic = NULL, *prev = NULL, *next, *start;
+    int i, b_counter, closed_gop_end;
+
+    // If there are any B-frames already queued, the next one to encode
+    // is the earliest not-yet-issued frame for which all references are
+    // available.
+    for (pic = ctx->pic_start; pic; pic = pic->next) {
+        if (pic->encode_issued)
+            continue;
+        if (pic->type != PICTURE_TYPE_B)
+            continue;
+        for (i = 0; i < pic->nb_refs[0]; i++) {
+            if (!pic->refs[0][i]->encode_issued)
+                break;
+        }
+        if (i != pic->nb_refs[0])
+            continue;
+
+        for (i = 0; i < pic->nb_refs[1]; i++) {
+            if (!pic->refs[1][i]->encode_issued)
+                break;
+        }
+        if (i == pic->nb_refs[1])
+            break;
+    }
+
+    if (pic) {
+        av_log(avctx, AV_LOG_DEBUG, "Pick B-picture at depth %d to "
+               "encode next.\n", pic->b_depth);
+        *pic_out = pic;
+        return 0;
+    }
+
+    // Find the B-per-Pth available picture to become the next picture
+    // on the top layer.
+    start = NULL;
+    b_counter = 0;
+    closed_gop_end = ctx->closed_gop ||
+                     ctx->idr_counter == ctx->gop_per_idr;
+    for (pic = ctx->pic_start; pic; pic = next) {
+        next = pic->next;
+        if (pic->encode_issued) {
+            start = pic;
+            continue;
+        }
+        // If the next available picture is force-IDR, encode it to start
+        // a new GOP immediately.
+        if (pic->force_idr)
+            break;
+        if (b_counter == ctx->b_per_p)
+            break;
+        // If this picture ends a closed GOP or starts a new GOP then it
+        // needs to be in the top layer.
+        if (ctx->gop_counter + b_counter + closed_gop_end >= ctx->gop_size)
+            break;
+        // If the picture after this one is force-IDR, we need to encode
+        // this one in the top layer.
+        if (next && next->force_idr)
+            break;
+        ++b_counter;
+    }
+
+    // At the end of the stream the last picture must be in the top layer.
+    if (!pic && ctx->end_of_stream) {
+        --b_counter;
+        pic = ctx->pic_end;
+        if (pic->encode_complete)
+            return AVERROR_EOF;
+        else if (pic->encode_issued)
+            return AVERROR(EAGAIN);
+    }
+
+    if (!pic) {
+        av_log(avctx, AV_LOG_DEBUG, "Pick nothing to encode next - "
+               "need more input for reference pictures.\n");
+        return AVERROR(EAGAIN);
+    }
+    if (ctx->input_order <= ctx->decode_delay && !ctx->end_of_stream) {
+        av_log(avctx, AV_LOG_DEBUG, "Pick nothing to encode next - "
+               "need more input for timestamps.\n");
+        return AVERROR(EAGAIN);
+    }
+
+    if (pic->force_idr) {
+        av_log(avctx, AV_LOG_DEBUG, "Pick forced IDR-picture to "
+               "encode next.\n");
+        pic->type = PICTURE_TYPE_IDR;
+        ctx->idr_counter = 1;
+        ctx->gop_counter = 1;
+
+    } else if (ctx->gop_counter + b_counter >= ctx->gop_size) {
+        if (ctx->idr_counter == ctx->gop_per_idr) {
+            av_log(avctx, AV_LOG_DEBUG, "Pick new-GOP IDR-picture to "
+                   "encode next.\n");
+            pic->type = PICTURE_TYPE_IDR;
+            ctx->idr_counter = 1;
+        } else {
+            av_log(avctx, AV_LOG_DEBUG, "Pick new-GOP I-picture to "
+                   "encode next.\n");
+            pic->type = PICTURE_TYPE_I;
+            ++ctx->idr_counter;
+        }
+        ctx->gop_counter = 1;
+
+    } else {
+        if (ctx->gop_counter + b_counter + closed_gop_end == ctx->gop_size) {
+            av_log(avctx, AV_LOG_DEBUG, "Pick group-end P-picture to "
+                   "encode next.\n");
+        } else {
+            av_log(avctx, AV_LOG_DEBUG, "Pick normal P-picture to "
+                   "encode next.\n");
+        }
+        pic->type = PICTURE_TYPE_P;
+        av_assert0(start);
+        ctx->gop_counter += 1 + b_counter;
+    }
+    pic->is_reference = 1;
+    *pic_out = pic;
+
+    hw_base_encode_add_ref(avctx, pic, pic, 0, 1, 0);
+    if (pic->type != PICTURE_TYPE_IDR) {
+        // TODO: apply both previous and forward multi reference for all vaapi encoders.
+        // And L0/L1 reference frame number can be set dynamically through query
+        // VAConfigAttribEncMaxRefFrames attribute.
+        if (avctx->codec_id == AV_CODEC_ID_AV1) {
+            for (i = 0; i < ctx->nb_next_prev; i++)
+                hw_base_encode_add_ref(avctx, pic, ctx->next_prev[i],
+                                       pic->type == PICTURE_TYPE_P,
+                                       b_counter > 0, 0);
+        } else
+            hw_base_encode_add_ref(avctx, pic, start,
+                                   pic->type == PICTURE_TYPE_P,
+                                   b_counter > 0, 0);
+
+        hw_base_encode_add_ref(avctx, pic, ctx->next_prev[ctx->nb_next_prev - 1], 0, 0, 1);
+    }
+
+    if (b_counter > 0) {
+        hw_base_encode_set_b_pictures(avctx, start, pic, pic, 1,
+                                      &prev);
+    } else {
+        prev = pic;
+    }
+    hw_base_encode_add_next_prev(avctx, prev);
+
+    return 0;
+}
+
+static int hw_base_encode_clear_old(AVCodecContext *avctx)
+{
+    HWBaseEncodeContext *ctx = avctx->priv_data;
+    HWBaseEncodePicture *pic, *prev, *next;
+
+    av_assert0(ctx->pic_start);
+
+    // Remove direct references once each picture is complete.
+    for (pic = ctx->pic_start; pic; pic = pic->next) {
+        if (pic->encode_complete && pic->next)
+            hw_base_encode_remove_refs(avctx, pic, 0);
+    }
+
+    // Remove indirect references once a picture has no direct references.
+    for (pic = ctx->pic_start; pic; pic = pic->next) {
+        if (pic->encode_complete && pic->ref_count[0] == 0)
+            hw_base_encode_remove_refs(avctx, pic, 1);
+    }
+
+    // Clear out all complete pictures with no remaining references.
+    prev = NULL;
+    for (pic = ctx->pic_start; pic; pic = next) {
+        next = pic->next;
+        if (pic->encode_complete && pic->ref_count[1] == 0) {
+            av_assert0(pic->ref_removed[0] && pic->ref_removed[1]);
+            if (prev)
+                prev->next = next;
+            else
+                ctx->pic_start = next;
+            ctx->op->free(avctx, pic);
+        } else {
+            prev = pic;
+        }
+    }
+
+    return 0;
+}
+
+static int hw_base_encode_check_frame(AVCodecContext *avctx,
+                                      const AVFrame *frame)
+{
+    HWBaseEncodeContext *ctx = avctx->priv_data;
+
+    if ((frame->crop_top  || frame->crop_bottom ||
+         frame->crop_left || frame->crop_right) && !ctx->crop_warned) {
+        av_log(avctx, AV_LOG_WARNING, "Cropping information on input "
+               "frames ignored due to lack of API support.\n");
+        ctx->crop_warned = 1;
+    }
+
+    if (!ctx->roi_allowed) {
+        AVFrameSideData *sd =
+            av_frame_get_side_data(frame, AV_FRAME_DATA_REGIONS_OF_INTEREST);
+
+        if (sd && !ctx->roi_warned) {
+            av_log(avctx, AV_LOG_WARNING, "ROI side data on input "
+                   "frames ignored due to lack of driver support.\n");
+            ctx->roi_warned = 1;
+        }
+    }
+
+    return 0;
+}
+
+static int hw_base_encode_send_frame(AVCodecContext *avctx, AVFrame *frame)
+{
+    HWBaseEncodeContext *ctx = avctx->priv_data;
+    HWBaseEncodePicture *pic;
+    int err;
+
+    if (frame) {
+        av_log(avctx, AV_LOG_DEBUG, "Input frame: %ux%u (%"PRId64").\n",
+               frame->width, frame->height, frame->pts);
+
+        err = hw_base_encode_check_frame(avctx, frame);
+        if (err < 0)
+            return err;
+
+        pic = ctx->op->alloc(avctx, frame);
+        if (!pic)
+            return AVERROR(ENOMEM);
+
+        pic->input_image = av_frame_alloc();
+        if (!pic->input_image) {
+            err = AVERROR(ENOMEM);
+            goto fail;
+        }
+
+        pic->recon_image = av_frame_alloc();
+        if (!pic->recon_image) {
+            err = AVERROR(ENOMEM);
+            goto fail;
+        }
+
+        if (ctx->input_order == 0 || frame->pict_type == AV_PICTURE_TYPE_I)
+            pic->force_idr = 1;
+
+        pic->pts = frame->pts;
+        pic->duration = frame->duration;
+
+        if (avctx->flags & AV_CODEC_FLAG_COPY_OPAQUE) {
+            err = av_buffer_replace(&pic->opaque_ref, frame->opaque_ref);
+            if (err < 0)
+                goto fail;
+
+            pic->opaque = frame->opaque;
+        }
+
+        av_frame_move_ref(pic->input_image, frame);
+
+        if (ctx->input_order == 0)
+            ctx->first_pts = pic->pts;
+        if (ctx->input_order == ctx->decode_delay)
+            ctx->dts_pts_diff = pic->pts - ctx->first_pts;
+        if (ctx->output_delay > 0)
+            ctx->ts_ring[ctx->input_order %
+                        (3 * ctx->output_delay + ctx->async_depth)] = pic->pts;
+
+        pic->display_order = ctx->input_order;
+        ++ctx->input_order;
+
+        if (ctx->pic_start) {
+            ctx->pic_end->next = pic;
+            ctx->pic_end       = pic;
+        } else {
+            ctx->pic_start     = pic;
+            ctx->pic_end       = pic;
+        }
+
+    } else {
+        ctx->end_of_stream = 1;
+
+        // Fix timestamps if we hit end-of-stream before the initial decode
+        // delay has elapsed.
+        if (ctx->input_order < ctx->decode_delay)
+            ctx->dts_pts_diff = ctx->pic_end->pts - ctx->first_pts;
+    }
+
+    return 0;
+
+fail:
+    ctx->op->free(avctx, pic);
+    return err;
+}
+
+int ff_hw_base_encode_receive_packet(AVCodecContext *avctx, AVPacket *pkt)
+{
+    HWBaseEncodeContext *ctx = avctx->priv_data;
+    HWBaseEncodePicture *pic = NULL;
+    AVFrame *frame = ctx->frame;
+    int err;
+
+start:
+    /** if no B frame before repeat P frame, sent repeat P frame out. */
+    if (ctx->tail_pkt->size) {
+        for (HWBaseEncodePicture *tmp = ctx->pic_start; tmp; tmp = tmp->next) {
+            if (tmp->type == PICTURE_TYPE_B && tmp->pts < ctx->tail_pkt->pts)
+                break;
+            else if (!tmp->next) {
+                av_packet_move_ref(pkt, ctx->tail_pkt);
+                goto end;
+            }
+        }
+    }
+
+    err = ff_encode_get_frame(avctx, frame);
+    if (err < 0 && err != AVERROR_EOF)
+        return err;
+
+    if (err == AVERROR_EOF)
+        frame = NULL;
+
+    if (!(ctx->op && ctx->op->alloc && ctx->op->issue &&
+          ctx->op->output && ctx->op->free)) {
+        err = AVERROR(EINVAL);
+        return err;
+    }
+
+    err = hw_base_encode_send_frame(avctx, frame);
+    if (err < 0)
+        return err;
+
+    if (!ctx->pic_start) {
+        if (ctx->end_of_stream)
+            return AVERROR_EOF;
+        else
+            return AVERROR(EAGAIN);
+    }
+
+    if (ctx->async_encode) {
+        if (av_fifo_can_write(ctx->encode_fifo)) {
+            err = hw_base_encode_pick_next(avctx, &pic);
+            if (!err) {
+                av_assert0(pic);
+                pic->encode_order = ctx->encode_order +
+                    av_fifo_can_read(ctx->encode_fifo);
+                err = ctx->op->issue(avctx, pic);
+                if (err < 0) {
+                    av_log(avctx, AV_LOG_ERROR, "Encode failed: %d.\n", err);
+                    return err;
+                }
+                pic->encode_issued = 1;
+                av_fifo_write(ctx->encode_fifo, &pic, 1);
+            }
+        }
+
+        if (!av_fifo_can_read(ctx->encode_fifo))
+            return err;
+
+        // More frames can be buffered
+        if (av_fifo_can_write(ctx->encode_fifo) && !ctx->end_of_stream)
+            return AVERROR(EAGAIN);
+
+        av_fifo_read(ctx->encode_fifo, &pic, 1);
+        ctx->encode_order = pic->encode_order + 1;
+    } else {
+        err = hw_base_encode_pick_next(avctx, &pic);
+        if (err < 0)
+            return err;
+        av_assert0(pic);
+
+        pic->encode_order = ctx->encode_order++;
+
+        err = ctx->op->issue(avctx, pic);
+        if (err < 0) {
+            av_log(avctx, AV_LOG_ERROR, "Encode failed: %d.\n", err);
+            return err;
+        }
+
+        pic->encode_issued = 1;
+    }
+
+    err = ctx->op->output(avctx, pic, pkt);
+    if (err < 0) {
+        av_log(avctx, AV_LOG_ERROR, "Output failed: %d.\n", err);
+        return err;
+    }
+
+    ctx->output_order = pic->encode_order;
+    hw_base_encode_clear_old(avctx);
+
+    /* Loop to get an available packet while the encoder is flushing. */
+    if (ctx->end_of_stream && !pkt->size)
+        goto start;
+
+end:
+    if (pkt->size)
+        av_log(avctx, AV_LOG_DEBUG, "Output packet: pts %"PRId64", dts %"PRId64", "
+               "size %d bytes.\n", pkt->pts, pkt->dts, pkt->size);
+
+    return 0;
+}
diff --git a/libavcodec/hw_base_encode.h b/libavcodec/hw_base_encode.h
index 41b68aa073..c9df95f952 100644
--- a/libavcodec/hw_base_encode.h
+++ b/libavcodec/hw_base_encode.h
@@ -224,6 +224,8 @@ typedef struct HWBaseEncodeContext {
     AVPacket        *tail_pkt;
 } HWBaseEncodeContext;
 
+int ff_hw_base_encode_receive_packet(AVCodecContext *avctx, AVPacket *pkt);
+
 #define HW_BASE_ENCODE_COMMON_OPTIONS \
     { "idr_interval", \
       "Distance (in I-frames) between key frames", \
diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c
index 4350960248..c6742c4301 100644
--- a/libavcodec/vaapi_encode.c
+++ b/libavcodec/vaapi_encode.c
@@ -264,7 +264,7 @@ static int vaapi_encode_make_tile_slice(AVCodecContext *avctx,
 }
 
 static int vaapi_encode_issue(AVCodecContext *avctx,
-                              HWBaseEncodePicture *base_pic)
+                              const HWBaseEncodePicture *base_pic)
 {
     HWBaseEncodeContext *base_ctx = avctx->priv_data;
     VAAPIEncodeContext       *ctx = avctx->priv_data;
@@ -311,12 +311,6 @@ static int vaapi_encode_issue(AVCodecContext *avctx,
 
     av_log(avctx, AV_LOG_DEBUG, "Input surface is %#x.\n", pic->input_surface);
 
-    base_pic->recon_image = av_frame_alloc();
-    if (!base_pic->recon_image) {
-        err = AVERROR(ENOMEM);
-        goto fail;
-    }
-
     err = av_hwframe_get_buffer(base_ctx->recon_frames_ref, base_pic->recon_image, 0);
     if (err < 0) {
         err = AVERROR(ENOMEM);
@@ -642,8 +636,6 @@ static int vaapi_encode_issue(AVCodecContext *avctx,
         }
     }
 
-    base_pic->encode_issued = 1;
-
     return 0;
 
 fail_with_picture:
@@ -660,7 +652,6 @@ fail_at_end:
     av_freep(&pic->param_buffers);
     av_freep(&pic->slices);
     av_freep(&pic->roi);
-    av_frame_free(&base_pic->recon_image);
     ff_refstruct_unref(&pic->output_buffer_ref);
     pic->output_buffer = VA_INVALID_ID;
     return err;
@@ -671,7 +662,7 @@ static int vaapi_encode_set_output_property(AVCodecContext *avctx,
                                             AVPacket *pkt)
 {
     HWBaseEncodeContext *base_ctx = avctx->priv_data;
-    VAAPIEncodeContext       *ctx = avctx->priv_data;
+    VAAPIEncodeContext *ctx = avctx->priv_data;
 
     if (pic->type == PICTURE_TYPE_IDR)
         pkt->flags |= AV_PKT_FLAG_KEY;
@@ -820,7 +811,7 @@ end:
 }
 
 static int vaapi_encode_output(AVCodecContext *avctx,
-                               HWBaseEncodePicture *base_pic, AVPacket *pkt)
+                               const HWBaseEncodePicture *base_pic, AVPacket *pkt)
 {
     HWBaseEncodeContext *base_ctx = avctx->priv_data;
     VAAPIEncodeContext       *ctx = avctx->priv_data;
@@ -858,7 +849,7 @@ static int vaapi_encode_output(AVCodecContext *avctx,
     av_log(avctx, AV_LOG_DEBUG, "Output read for pic %"PRId64"/%"PRId64".\n",
            base_pic->display_order, base_pic->encode_order);
 
-    vaapi_encode_set_output_property(avctx, base_pic, pkt_ptr);
+    vaapi_encode_set_output_property(avctx, (HWBaseEncodePicture *)base_pic, pkt_ptr);
 
 end:
     ff_refstruct_unref(&pic->output_buffer_ref);
@@ -942,563 +933,6 @@ static int vaapi_encode_free(AVCodecContext *avctx,
     return 0;
 }
 
-static void vaapi_encode_add_ref(AVCodecContext *avctx,
-                                 HWBaseEncodePicture *pic,
-                                 HWBaseEncodePicture *target,
-                                 int is_ref, int in_dpb, int prev)
-{
-    int refs = 0;
-
-    if (is_ref) {
-        av_assert0(pic != target);
-        av_assert0(pic->nb_refs[0] < MAX_PICTURE_REFERENCES &&
-                   pic->nb_refs[1] < MAX_PICTURE_REFERENCES);
-        if (target->display_order < pic->display_order)
-            pic->refs[0][pic->nb_refs[0]++] = target;
-        else
-            pic->refs[1][pic->nb_refs[1]++] = target;
-        ++refs;
-    }
-
-    if (in_dpb) {
-        av_assert0(pic->nb_dpb_pics < MAX_DPB_SIZE);
-        pic->dpb[pic->nb_dpb_pics++] = target;
-        ++refs;
-    }
-
-    if (prev) {
-        av_assert0(!pic->prev);
-        pic->prev = target;
-        ++refs;
-    }
-
-    target->ref_count[0] += refs;
-    target->ref_count[1] += refs;
-}
-
-static void vaapi_encode_remove_refs(AVCodecContext *avctx,
-                                     HWBaseEncodePicture *pic,
-                                     int level)
-{
-    int i;
-
-    if (pic->ref_removed[level])
-        return;
-
-    for (i = 0; i < pic->nb_refs[0]; i++) {
-        av_assert0(pic->refs[0][i]);
-        --pic->refs[0][i]->ref_count[level];
-        av_assert0(pic->refs[0][i]->ref_count[level] >= 0);
-    }
-
-    for (i = 0; i < pic->nb_refs[1]; i++) {
-        av_assert0(pic->refs[1][i]);
-        --pic->refs[1][i]->ref_count[level];
-        av_assert0(pic->refs[1][i]->ref_count[level] >= 0);
-    }
-
-    for (i = 0; i < pic->nb_dpb_pics; i++) {
-        av_assert0(pic->dpb[i]);
-        --pic->dpb[i]->ref_count[level];
-        av_assert0(pic->dpb[i]->ref_count[level] >= 0);
-    }
-
-    av_assert0(pic->prev || pic->type == PICTURE_TYPE_IDR);
-    if (pic->prev) {
-        --pic->prev->ref_count[level];
-        av_assert0(pic->prev->ref_count[level] >= 0);
-    }
-
-    pic->ref_removed[level] = 1;
-}
-
-static void vaapi_encode_set_b_pictures(AVCodecContext *avctx,
-                                        HWBaseEncodePicture *start,
-                                        HWBaseEncodePicture *end,
-                                        HWBaseEncodePicture *prev,
-                                        int current_depth,
-                                        HWBaseEncodePicture **last)
-{
-    HWBaseEncodeContext *ctx = avctx->priv_data;
-    HWBaseEncodePicture *pic, *next, *ref;
-    int i, len;
-
-    av_assert0(start && end && start != end && start->next != end);
-
-    // If we are at the maximum depth then encode all pictures as
-    // non-referenced B-pictures.  Also do this if there is exactly one
-    // picture left, since there will be nothing to reference it.
-    if (current_depth == ctx->max_b_depth || start->next->next == end) {
-        for (pic = start->next; pic; pic = pic->next) {
-            if (pic == end)
-                break;
-            pic->type    = PICTURE_TYPE_B;
-            pic->b_depth = current_depth;
-
-            vaapi_encode_add_ref(avctx, pic, start, 1, 1, 0);
-            vaapi_encode_add_ref(avctx, pic, end,   1, 1, 0);
-            vaapi_encode_add_ref(avctx, pic, prev,  0, 0, 1);
-
-            for (ref = end->refs[1][0]; ref; ref = ref->refs[1][0])
-                vaapi_encode_add_ref(avctx, pic, ref, 0, 1, 0);
-        }
-        *last = prev;
-
-    } else {
-        // Split the current list at the midpoint with a referenced
-        // B-picture, then descend into each side separately.
-        len = 0;
-        for (pic = start->next; pic != end; pic = pic->next)
-            ++len;
-        for (pic = start->next, i = 1; 2 * i < len; pic = pic->next, i++);
-
-        pic->type    = PICTURE_TYPE_B;
-        pic->b_depth = current_depth;
-
-        pic->is_reference = 1;
-
-        vaapi_encode_add_ref(avctx, pic, pic,   0, 1, 0);
-        vaapi_encode_add_ref(avctx, pic, start, 1, 1, 0);
-        vaapi_encode_add_ref(avctx, pic, end,   1, 1, 0);
-        vaapi_encode_add_ref(avctx, pic, prev,  0, 0, 1);
-
-        for (ref = end->refs[1][0]; ref; ref = ref->refs[1][0])
-            vaapi_encode_add_ref(avctx, pic, ref, 0, 1, 0);
-
-        if (i > 1)
-            vaapi_encode_set_b_pictures(avctx, start, pic, pic,
-                                        current_depth + 1, &next);
-        else
-            next = pic;
-
-        vaapi_encode_set_b_pictures(avctx, pic, end, next,
-                                    current_depth + 1, last);
-    }
-}
-
-static void vaapi_encode_add_next_prev(AVCodecContext *avctx,
-                                       HWBaseEncodePicture *pic)
-{
-    HWBaseEncodeContext *ctx = avctx->priv_data;
-    int i;
-
-    if (!pic)
-        return;
-
-    if (pic->type == PICTURE_TYPE_IDR) {
-        for (i = 0; i < ctx->nb_next_prev; i++) {
-            --ctx->next_prev[i]->ref_count[0];
-            ctx->next_prev[i] = NULL;
-        }
-        ctx->next_prev[0] = pic;
-        ++pic->ref_count[0];
-        ctx->nb_next_prev = 1;
-
-        return;
-    }
-
-    if (ctx->nb_next_prev < MAX_PICTURE_REFERENCES) {
-        ctx->next_prev[ctx->nb_next_prev++] = pic;
-        ++pic->ref_count[0];
-    } else {
-        --ctx->next_prev[0]->ref_count[0];
-        for (i = 0; i < MAX_PICTURE_REFERENCES - 1; i++)
-            ctx->next_prev[i] = ctx->next_prev[i + 1];
-        ctx->next_prev[i] = pic;
-        ++pic->ref_count[0];
-    }
-}
-
-static int vaapi_encode_pick_next(AVCodecContext *avctx,
-                                  HWBaseEncodePicture **pic_out)
-{
-    HWBaseEncodeContext *ctx = avctx->priv_data;
-    HWBaseEncodePicture *pic = NULL, *prev = NULL, *next, *start;
-    int i, b_counter, closed_gop_end;
-
-    // If there are any B-frames already queued, the next one to encode
-    // is the earliest not-yet-issued frame for which all references are
-    // available.
-    for (pic = ctx->pic_start; pic; pic = pic->next) {
-        if (pic->encode_issued)
-            continue;
-        if (pic->type != PICTURE_TYPE_B)
-            continue;
-        for (i = 0; i < pic->nb_refs[0]; i++) {
-            if (!pic->refs[0][i]->encode_issued)
-                break;
-        }
-        if (i != pic->nb_refs[0])
-            continue;
-
-        for (i = 0; i < pic->nb_refs[1]; i++) {
-            if (!pic->refs[1][i]->encode_issued)
-                break;
-        }
-        if (i == pic->nb_refs[1])
-            break;
-    }
-
-    if (pic) {
-        av_log(avctx, AV_LOG_DEBUG, "Pick B-picture at depth %d to "
-               "encode next.\n", pic->b_depth);
-        *pic_out = pic;
-        return 0;
-    }
-
-    // Find the B-per-Pth available picture to become the next picture
-    // on the top layer.
-    start = NULL;
-    b_counter = 0;
-    closed_gop_end = ctx->closed_gop ||
-                     ctx->idr_counter == ctx->gop_per_idr;
-    for (pic = ctx->pic_start; pic; pic = next) {
-        next = pic->next;
-        if (pic->encode_issued) {
-            start = pic;
-            continue;
-        }
-        // If the next available picture is force-IDR, encode it to start
-        // a new GOP immediately.
-        if (pic->force_idr)
-            break;
-        if (b_counter == ctx->b_per_p)
-            break;
-        // If this picture ends a closed GOP or starts a new GOP then it
-        // needs to be in the top layer.
-        if (ctx->gop_counter + b_counter + closed_gop_end >= ctx->gop_size)
-            break;
-        // If the picture after this one is force-IDR, we need to encode
-        // this one in the top layer.
-        if (next && next->force_idr)
-            break;
-        ++b_counter;
-    }
-
-    // At the end of the stream the last picture must be in the top layer.
-    if (!pic && ctx->end_of_stream) {
-        --b_counter;
-        pic = ctx->pic_end;
-        if (pic->encode_complete)
-            return AVERROR_EOF;
-        else if (pic->encode_issued)
-            return AVERROR(EAGAIN);
-    }
-
-    if (!pic) {
-        av_log(avctx, AV_LOG_DEBUG, "Pick nothing to encode next - "
-               "need more input for reference pictures.\n");
-        return AVERROR(EAGAIN);
-    }
-    if (ctx->input_order <= ctx->decode_delay && !ctx->end_of_stream) {
-        av_log(avctx, AV_LOG_DEBUG, "Pick nothing to encode next - "
-               "need more input for timestamps.\n");
-        return AVERROR(EAGAIN);
-    }
-
-    if (pic->force_idr) {
-        av_log(avctx, AV_LOG_DEBUG, "Pick forced IDR-picture to "
-               "encode next.\n");
-        pic->type = PICTURE_TYPE_IDR;
-        ctx->idr_counter = 1;
-        ctx->gop_counter = 1;
-
-    } else if (ctx->gop_counter + b_counter >= ctx->gop_size) {
-        if (ctx->idr_counter == ctx->gop_per_idr) {
-            av_log(avctx, AV_LOG_DEBUG, "Pick new-GOP IDR-picture to "
-                   "encode next.\n");
-            pic->type = PICTURE_TYPE_IDR;
-            ctx->idr_counter = 1;
-        } else {
-            av_log(avctx, AV_LOG_DEBUG, "Pick new-GOP I-picture to "
-                   "encode next.\n");
-            pic->type = PICTURE_TYPE_I;
-            ++ctx->idr_counter;
-        }
-        ctx->gop_counter = 1;
-
-    } else {
-        if (ctx->gop_counter + b_counter + closed_gop_end == ctx->gop_size) {
-            av_log(avctx, AV_LOG_DEBUG, "Pick group-end P-picture to "
-                   "encode next.\n");
-        } else {
-            av_log(avctx, AV_LOG_DEBUG, "Pick normal P-picture to "
-                   "encode next.\n");
-        }
-        pic->type = PICTURE_TYPE_P;
-        av_assert0(start);
-        ctx->gop_counter += 1 + b_counter;
-    }
-    pic->is_reference = 1;
-    *pic_out = pic;
-
-    vaapi_encode_add_ref(avctx, pic, pic, 0, 1, 0);
-    if (pic->type != PICTURE_TYPE_IDR) {
-        // TODO: apply both previous and forward multi reference for all vaapi encoders.
-        // And L0/L1 reference frame number can be set dynamically through query
-        // VAConfigAttribEncMaxRefFrames attribute.
-        if (avctx->codec_id == AV_CODEC_ID_AV1) {
-            for (i = 0; i < ctx->nb_next_prev; i++)
-                vaapi_encode_add_ref(avctx, pic, ctx->next_prev[i],
-                                     pic->type == PICTURE_TYPE_P,
-                                     b_counter > 0, 0);
-        } else
-            vaapi_encode_add_ref(avctx, pic, start,
-                                 pic->type == PICTURE_TYPE_P,
-                                 b_counter > 0, 0);
-
-        vaapi_encode_add_ref(avctx, pic, ctx->next_prev[ctx->nb_next_prev - 1], 0, 0, 1);
-    }
-
-    if (b_counter > 0) {
-        vaapi_encode_set_b_pictures(avctx, start, pic, pic, 1,
-                                    &prev);
-    } else {
-        prev = pic;
-    }
-    vaapi_encode_add_next_prev(avctx, prev);
-
-    return 0;
-}
-
-static int vaapi_encode_clear_old(AVCodecContext *avctx)
-{
-    HWBaseEncodeContext *ctx = avctx->priv_data;
-    HWBaseEncodePicture *pic, *prev, *next;
-
-    av_assert0(ctx->pic_start);
-
-    // Remove direct references once each picture is complete.
-    for (pic = ctx->pic_start; pic; pic = pic->next) {
-        if (pic->encode_complete && pic->next)
-            vaapi_encode_remove_refs(avctx, pic, 0);
-    }
-
-    // Remove indirect references once a picture has no direct references.
-    for (pic = ctx->pic_start; pic; pic = pic->next) {
-        if (pic->encode_complete && pic->ref_count[0] == 0)
-            vaapi_encode_remove_refs(avctx, pic, 1);
-    }
-
-    // Clear out all complete pictures with no remaining references.
-    prev = NULL;
-    for (pic = ctx->pic_start; pic; pic = next) {
-        next = pic->next;
-        if (pic->encode_complete && pic->ref_count[1] == 0) {
-            av_assert0(pic->ref_removed[0] && pic->ref_removed[1]);
-            if (prev)
-                prev->next = next;
-            else
-                ctx->pic_start = next;
-            vaapi_encode_free(avctx, pic);
-        } else {
-            prev = pic;
-        }
-    }
-
-    return 0;
-}
-
-static int vaapi_encode_check_frame(AVCodecContext *avctx,
-                                    const AVFrame *frame)
-{
-    HWBaseEncodeContext *ctx = avctx->priv_data;
-
-    if ((frame->crop_top  || frame->crop_bottom ||
-         frame->crop_left || frame->crop_right) && !ctx->crop_warned) {
-        av_log(avctx, AV_LOG_WARNING, "Cropping information on input "
-               "frames ignored due to lack of API support.\n");
-        ctx->crop_warned = 1;
-    }
-
-    if (!ctx->roi_allowed) {
-        AVFrameSideData *sd =
-            av_frame_get_side_data(frame, AV_FRAME_DATA_REGIONS_OF_INTEREST);
-
-        if (sd && !ctx->roi_warned) {
-            av_log(avctx, AV_LOG_WARNING, "ROI side data on input "
-                   "frames ignored due to lack of driver support.\n");
-            ctx->roi_warned = 1;
-        }
-    }
-
-    return 0;
-}
-
-static int vaapi_encode_send_frame(AVCodecContext *avctx, AVFrame *frame)
-{
-    HWBaseEncodeContext *ctx = avctx->priv_data;
-    HWBaseEncodePicture *pic;
-    int err;
-
-    if (frame) {
-        av_log(avctx, AV_LOG_DEBUG, "Input frame: %ux%u (%"PRId64").\n",
-               frame->width, frame->height, frame->pts);
-
-        err = vaapi_encode_check_frame(avctx, frame);
-        if (err < 0)
-            return err;
-
-        pic = vaapi_encode_alloc(avctx, frame);
-        if (!pic)
-            return AVERROR(ENOMEM);
-
-        pic->input_image = av_frame_alloc();
-        if (!pic->input_image) {
-            err = AVERROR(ENOMEM);
-            goto fail;
-        }
-
-        if (ctx->input_order == 0 || frame->pict_type == AV_PICTURE_TYPE_I)
-            pic->force_idr = 1;
-
-        pic->pts = frame->pts;
-        pic->duration = frame->duration;
-
-        if (avctx->flags & AV_CODEC_FLAG_COPY_OPAQUE) {
-            err = av_buffer_replace(&pic->opaque_ref, frame->opaque_ref);
-            if (err < 0)
-                goto fail;
-
-            pic->opaque = frame->opaque;
-        }
-
-        av_frame_move_ref(pic->input_image, frame);
-
-        if (ctx->input_order == 0)
-            ctx->first_pts = pic->pts;
-        if (ctx->input_order == ctx->decode_delay)
-            ctx->dts_pts_diff = pic->pts - ctx->first_pts;
-        if (ctx->output_delay > 0)
-            ctx->ts_ring[ctx->input_order %
-                        (3 * ctx->output_delay + ctx->async_depth)] = pic->pts;
-
-        pic->display_order = ctx->input_order;
-        ++ctx->input_order;
-
-        if (ctx->pic_start) {
-            ctx->pic_end->next = pic;
-            ctx->pic_end       = pic;
-        } else {
-            ctx->pic_start     = pic;
-            ctx->pic_end       = pic;
-        }
-
-    } else {
-        ctx->end_of_stream = 1;
-
-        // Fix timestamps if we hit end-of-stream before the initial decode
-        // delay has elapsed.
-        if (ctx->input_order < ctx->decode_delay)
-            ctx->dts_pts_diff = ctx->pic_end->pts - ctx->first_pts;
-    }
-
-    return 0;
-
-fail:
-    vaapi_encode_free(avctx, pic);
-    return err;
-}
-
-int ff_vaapi_encode_receive_packet(AVCodecContext *avctx, AVPacket *pkt)
-{
-    HWBaseEncodeContext *ctx = avctx->priv_data;
-    HWBaseEncodePicture *pic = NULL;
-    AVFrame *frame = ctx->frame;
-    int err;
-
-start:
-    /** if no B frame before repeat P frame, sent repeat P frame out. */
-    if (ctx->tail_pkt->size) {
-        for (HWBaseEncodePicture *tmp = ctx->pic_start; tmp; tmp = tmp->next) {
-            if (tmp->type == PICTURE_TYPE_B && tmp->pts < ctx->tail_pkt->pts)
-                break;
-            else if (!tmp->next) {
-                av_packet_move_ref(pkt, ctx->tail_pkt);
-                goto end;
-            }
-        }
-    }
-
-    err = ff_encode_get_frame(avctx, frame);
-    if (err < 0 && err != AVERROR_EOF)
-        return err;
-
-    if (err == AVERROR_EOF)
-        frame = NULL;
-
-    err = vaapi_encode_send_frame(avctx, frame);
-    if (err < 0)
-        return err;
-
-    if (!ctx->pic_start) {
-        if (ctx->end_of_stream)
-            return AVERROR_EOF;
-        else
-            return AVERROR(EAGAIN);
-    }
-
-    if (ctx->async_encode) {
-        if (av_fifo_can_write(ctx->encode_fifo)) {
-            err = vaapi_encode_pick_next(avctx, &pic);
-            if (!err) {
-                av_assert0(pic);
-                pic->encode_order = ctx->encode_order +
-                    av_fifo_can_read(ctx->encode_fifo);
-                err = vaapi_encode_issue(avctx, pic);
-                if (err < 0) {
-                    av_log(avctx, AV_LOG_ERROR, "Encode failed: %d.\n", err);
-                    return err;
-                }
-                av_fifo_write(ctx->encode_fifo, &pic, 1);
-            }
-        }
-
-        if (!av_fifo_can_read(ctx->encode_fifo))
-            return err;
-
-        // More frames can be buffered
-        if (av_fifo_can_write(ctx->encode_fifo) && !ctx->end_of_stream)
-            return AVERROR(EAGAIN);
-
-        av_fifo_read(ctx->encode_fifo, &pic, 1);
-        ctx->encode_order = pic->encode_order + 1;
-    } else {
-        err = vaapi_encode_pick_next(avctx, &pic);
-        if (err < 0)
-            return err;
-        av_assert0(pic);
-
-        pic->encode_order = ctx->encode_order++;
-
-        err = vaapi_encode_issue(avctx, pic);
-        if (err < 0) {
-            av_log(avctx, AV_LOG_ERROR, "Encode failed: %d.\n", err);
-            return err;
-        }
-    }
-
-    err = vaapi_encode_output(avctx, pic, pkt);
-    if (err < 0) {
-        av_log(avctx, AV_LOG_ERROR, "Output failed: %d.\n", err);
-        return err;
-    }
-
-    ctx->output_order = pic->encode_order;
-    vaapi_encode_clear_old(avctx);
-
-    /** loop to get an available pkt in encoder flushing. */
-    if (ctx->end_of_stream && !pkt->size)
-        goto start;
-
-end:
-    if (pkt->size)
-        av_log(avctx, AV_LOG_DEBUG, "Output packet: pts %"PRId64", dts %"PRId64", "
-               "size %d bytes.\n", pkt->pts, pkt->dts, pkt->size);
-
-    return 0;
-}
-
 static av_cold void vaapi_encode_add_global_param(AVCodecContext *avctx, int type,
                                                   void *buffer, size_t size)
 {
@@ -2756,6 +2190,16 @@ static av_cold int vaapi_encode_create_recon_frames(AVCodecContext *avctx)
     return err;
 }
 
+static const HWEncodePictureOperation vaapi_op = {
+    .alloc  = &vaapi_encode_alloc,
+
+    .issue  = &vaapi_encode_issue,
+
+    .output = &vaapi_encode_output,
+
+    .free   = &vaapi_encode_free,
+};
+
 av_cold int ff_vaapi_encode_init(AVCodecContext *avctx)
 {
     HWBaseEncodeContext *base_ctx = avctx->priv_data;
@@ -2767,6 +2211,8 @@ av_cold int ff_vaapi_encode_init(AVCodecContext *avctx)
     ctx->va_config  = VA_INVALID_ID;
     ctx->va_context = VA_INVALID_ID;
 
+    base_ctx->op = &vaapi_op;
+
     /* If you add something that can fail above this av_frame_alloc(),
      * modify ff_vaapi_encode_close() accordingly. */
     base_ctx->frame = av_frame_alloc();
diff --git a/libavcodec/vaapi_encode.h b/libavcodec/vaapi_encode.h
index 8eee455881..3b99cf1a24 100644
--- a/libavcodec/vaapi_encode.h
+++ b/libavcodec/vaapi_encode.h
@@ -325,9 +325,6 @@ typedef struct VAAPIEncodeType {
                                  char *data, size_t *data_len);
 } VAAPIEncodeType;
 
-
-int ff_vaapi_encode_receive_packet(AVCodecContext *avctx, AVPacket *pkt);
-
 int ff_vaapi_encode_init(AVCodecContext *avctx);
 int ff_vaapi_encode_close(AVCodecContext *avctx);
 
diff --git a/libavcodec/vaapi_encode_av1.c b/libavcodec/vaapi_encode_av1.c
index 512b4e3733..5e3e95af48 100644
--- a/libavcodec/vaapi_encode_av1.c
+++ b/libavcodec/vaapi_encode_av1.c
@@ -939,7 +939,7 @@ const FFCodec ff_av1_vaapi_encoder = {
     .p.id           = AV_CODEC_ID_AV1,
     .priv_data_size = sizeof(VAAPIEncodeAV1Context),
     .init           = &vaapi_encode_av1_init,
-    FF_CODEC_RECEIVE_PACKET_CB(&ff_vaapi_encode_receive_packet),
+    FF_CODEC_RECEIVE_PACKET_CB(&ff_hw_base_encode_receive_packet),
     .close          = &vaapi_encode_av1_close,
     .p.priv_class   = &vaapi_encode_av1_class,
     .p.capabilities = AV_CODEC_CAP_DELAY | AV_CODEC_CAP_HARDWARE |
diff --git a/libavcodec/vaapi_encode_h264.c b/libavcodec/vaapi_encode_h264.c
index aa011ba307..3f0aa9f96b 100644
--- a/libavcodec/vaapi_encode_h264.c
+++ b/libavcodec/vaapi_encode_h264.c
@@ -1387,7 +1387,7 @@ const FFCodec ff_h264_vaapi_encoder = {
     .p.id           = AV_CODEC_ID_H264,
     .priv_data_size = sizeof(VAAPIEncodeH264Context),
     .init           = &vaapi_encode_h264_init,
-    FF_CODEC_RECEIVE_PACKET_CB(&ff_vaapi_encode_receive_packet),
+    FF_CODEC_RECEIVE_PACKET_CB(&ff_hw_base_encode_receive_packet),
     .close          = &vaapi_encode_h264_close,
     .p.priv_class   = &vaapi_encode_h264_class,
     .p.capabilities = AV_CODEC_CAP_DELAY | AV_CODEC_CAP_HARDWARE |
diff --git a/libavcodec/vaapi_encode_h265.c b/libavcodec/vaapi_encode_h265.c
index 4f5d8fc76f..d28a13336e 100644
--- a/libavcodec/vaapi_encode_h265.c
+++ b/libavcodec/vaapi_encode_h265.c
@@ -1504,7 +1504,7 @@ const FFCodec ff_hevc_vaapi_encoder = {
     .p.id           = AV_CODEC_ID_HEVC,
     .priv_data_size = sizeof(VAAPIEncodeH265Context),
     .init           = &vaapi_encode_h265_init,
-    FF_CODEC_RECEIVE_PACKET_CB(&ff_vaapi_encode_receive_packet),
+    FF_CODEC_RECEIVE_PACKET_CB(&ff_hw_base_encode_receive_packet),
     .close          = &vaapi_encode_h265_close,
     .p.priv_class   = &vaapi_encode_h265_class,
     .p.capabilities = AV_CODEC_CAP_DELAY | AV_CODEC_CAP_HARDWARE |
diff --git a/libavcodec/vaapi_encode_mjpeg.c b/libavcodec/vaapi_encode_mjpeg.c
index 91829b1e0e..3096043991 100644
--- a/libavcodec/vaapi_encode_mjpeg.c
+++ b/libavcodec/vaapi_encode_mjpeg.c
@@ -575,7 +575,7 @@ const FFCodec ff_mjpeg_vaapi_encoder = {
     .p.id           = AV_CODEC_ID_MJPEG,
     .priv_data_size = sizeof(VAAPIEncodeMJPEGContext),
     .init           = &vaapi_encode_mjpeg_init,
-    FF_CODEC_RECEIVE_PACKET_CB(&ff_vaapi_encode_receive_packet),
+    FF_CODEC_RECEIVE_PACKET_CB(&ff_hw_base_encode_receive_packet),
     .close          = &vaapi_encode_mjpeg_close,
     .p.priv_class   = &vaapi_encode_mjpeg_class,
     .p.capabilities = AV_CODEC_CAP_HARDWARE | AV_CODEC_CAP_DR1 |
diff --git a/libavcodec/vaapi_encode_mpeg2.c b/libavcodec/vaapi_encode_mpeg2.c
index aa8e6d6bdf..92ba8e41c4 100644
--- a/libavcodec/vaapi_encode_mpeg2.c
+++ b/libavcodec/vaapi_encode_mpeg2.c
@@ -699,7 +699,7 @@ const FFCodec ff_mpeg2_vaapi_encoder = {
     .p.id           = AV_CODEC_ID_MPEG2VIDEO,
     .priv_data_size = sizeof(VAAPIEncodeMPEG2Context),
     .init           = &vaapi_encode_mpeg2_init,
-    FF_CODEC_RECEIVE_PACKET_CB(&ff_vaapi_encode_receive_packet),
+    FF_CODEC_RECEIVE_PACKET_CB(&ff_hw_base_encode_receive_packet),
     .close          = &vaapi_encode_mpeg2_close,
     .p.priv_class   = &vaapi_encode_mpeg2_class,
     .p.capabilities = AV_CODEC_CAP_DELAY | AV_CODEC_CAP_HARDWARE |
diff --git a/libavcodec/vaapi_encode_vp8.c b/libavcodec/vaapi_encode_vp8.c
index c8203dcbc9..f8c375ee03 100644
--- a/libavcodec/vaapi_encode_vp8.c
+++ b/libavcodec/vaapi_encode_vp8.c
@@ -253,7 +253,7 @@ const FFCodec ff_vp8_vaapi_encoder = {
     .p.id           = AV_CODEC_ID_VP8,
     .priv_data_size = sizeof(VAAPIEncodeVP8Context),
     .init           = &vaapi_encode_vp8_init,
-    FF_CODEC_RECEIVE_PACKET_CB(&ff_vaapi_encode_receive_packet),
+    FF_CODEC_RECEIVE_PACKET_CB(&ff_hw_base_encode_receive_packet),
     .close          = &ff_vaapi_encode_close,
     .p.priv_class   = &vaapi_encode_vp8_class,
     .p.capabilities = AV_CODEC_CAP_DELAY | AV_CODEC_CAP_HARDWARE |
diff --git a/libavcodec/vaapi_encode_vp9.c b/libavcodec/vaapi_encode_vp9.c
index 7a0cb0c7fc..9ad175b5e7 100644
--- a/libavcodec/vaapi_encode_vp9.c
+++ b/libavcodec/vaapi_encode_vp9.c
@@ -310,7 +310,7 @@ const FFCodec ff_vp9_vaapi_encoder = {
     .p.id           = AV_CODEC_ID_VP9,
     .priv_data_size = sizeof(VAAPIEncodeVP9Context),
     .init           = &vaapi_encode_vp9_init,
-    FF_CODEC_RECEIVE_PACKET_CB(&ff_vaapi_encode_receive_packet),
+    FF_CODEC_RECEIVE_PACKET_CB(&ff_hw_base_encode_receive_packet),
     .close          = &ff_vaapi_encode_close,
     .p.priv_class   = &vaapi_encode_vp9_class,
     .p.capabilities = AV_CODEC_CAP_DELAY | AV_CODEC_CAP_HARDWARE |
-- 
2.41.0.windows.1

_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [FFmpeg-devel] [PATCH v7 04/12] avcodec/vaapi_encode: extract an init function to base layer
  2024-03-14  8:14 [FFmpeg-devel] [PATCH v7 01/12] avcodec/vaapi_encode: move pic->input_surface initialization to encode_alloc tong1.wu-at-intel.com
  2024-03-14  8:14 ` [FFmpeg-devel] [PATCH v7 02/12] avcodec/vaapi_encode: introduce a base layer for vaapi encode tong1.wu-at-intel.com
  2024-03-14  8:14 ` [FFmpeg-devel] [PATCH v7 03/12] avcodec/vaapi_encode: move the dpb logic from VAAPI to base layer tong1.wu-at-intel.com
@ 2024-03-14  8:14 ` tong1.wu-at-intel.com
  2024-03-14  8:14 ` [FFmpeg-devel] [PATCH v7 05/12] avcodec/vaapi_encode: extract a close function for " tong1.wu-at-intel.com
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 15+ messages in thread
From: tong1.wu-at-intel.com @ 2024-03-14  8:14 UTC (permalink / raw)
  To: ffmpeg-devel; +Cc: Tong Wu

From: Tong Wu <tong1.wu@intel.com>

Move the base_ctx parameter initialization to the base layer.

Signed-off-by: Tong Wu <tong1.wu@intel.com>
---
 libavcodec/hw_base_encode.c | 33 +++++++++++++++++++++++++++++++++
 libavcodec/hw_base_encode.h |  2 ++
 libavcodec/vaapi_encode.c   | 36 ++++--------------------------------
 3 files changed, 39 insertions(+), 32 deletions(-)

diff --git a/libavcodec/hw_base_encode.c b/libavcodec/hw_base_encode.c
index dcba902f44..71ead512c0 100644
--- a/libavcodec/hw_base_encode.c
+++ b/libavcodec/hw_base_encode.c
@@ -597,3 +597,36 @@ end:
 
     return 0;
 }
+
+int ff_hw_base_encode_init(AVCodecContext *avctx)
+{
+    HWBaseEncodeContext *ctx = avctx->priv_data;
+
+    ctx->frame = av_frame_alloc();
+    if (!ctx->frame)
+        return AVERROR(ENOMEM);
+
+    if (!avctx->hw_frames_ctx) {
+        av_log(avctx, AV_LOG_ERROR, "A hardware frames reference is "
+               "required to associate the encoding device.\n");
+        return AVERROR(EINVAL);
+    }
+
+    ctx->input_frames_ref = av_buffer_ref(avctx->hw_frames_ctx);
+    if (!ctx->input_frames_ref)
+        return AVERROR(ENOMEM);
+
+    ctx->input_frames = (AVHWFramesContext *)ctx->input_frames_ref->data;
+
+    ctx->device_ref = av_buffer_ref(ctx->input_frames->device_ref);
+    if (!ctx->device_ref)
+        return AVERROR(ENOMEM);
+
+    ctx->device = (AVHWDeviceContext *)ctx->device_ref->data;
+
+    ctx->tail_pkt = av_packet_alloc();
+    if (!ctx->tail_pkt)
+        return AVERROR(ENOMEM);
+
+    return 0;
+}
diff --git a/libavcodec/hw_base_encode.h b/libavcodec/hw_base_encode.h
index c9df95f952..c9c8fb43c9 100644
--- a/libavcodec/hw_base_encode.h
+++ b/libavcodec/hw_base_encode.h
@@ -226,6 +226,8 @@ typedef struct HWBaseEncodeContext {
 
 int ff_hw_base_encode_receive_packet(AVCodecContext *avctx, AVPacket *pkt);
 
+int ff_hw_base_encode_init(AVCodecContext *avctx);
+
 #define HW_BASE_ENCODE_COMMON_OPTIONS \
     { "idr_interval", \
       "Distance (in I-frames) between key frames", \
diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c
index c6742c4301..887543734b 100644
--- a/libavcodec/vaapi_encode.c
+++ b/libavcodec/vaapi_encode.c
@@ -2208,45 +2208,17 @@ av_cold int ff_vaapi_encode_init(AVCodecContext *avctx)
     VAStatus vas;
     int err;
 
+    err = ff_hw_base_encode_init(avctx);
+    if (err < 0)
+        goto fail;
+
     ctx->va_config  = VA_INVALID_ID;
     ctx->va_context = VA_INVALID_ID;
 
     base_ctx->op = &vaapi_op;
 
-    /* If you add something that can fail above this av_frame_alloc(),
-     * modify ff_vaapi_encode_close() accordingly. */
-    base_ctx->frame = av_frame_alloc();
-    if (!base_ctx->frame) {
-        return AVERROR(ENOMEM);
-    }
-
-    if (!avctx->hw_frames_ctx) {
-        av_log(avctx, AV_LOG_ERROR, "A hardware frames reference is "
-               "required to associate the encoding device.\n");
-        return AVERROR(EINVAL);
-    }
-
-    base_ctx->input_frames_ref = av_buffer_ref(avctx->hw_frames_ctx);
-    if (!base_ctx->input_frames_ref) {
-        err = AVERROR(ENOMEM);
-        goto fail;
-    }
-    base_ctx->input_frames = (AVHWFramesContext*)base_ctx->input_frames_ref->data;
-
-    base_ctx->device_ref = av_buffer_ref(base_ctx->input_frames->device_ref);
-    if (!base_ctx->device_ref) {
-        err = AVERROR(ENOMEM);
-        goto fail;
-    }
-    base_ctx->device = (AVHWDeviceContext*)base_ctx->device_ref->data;
     ctx->hwctx = base_ctx->device->hwctx;
 
-    base_ctx->tail_pkt = av_packet_alloc();
-    if (!base_ctx->tail_pkt) {
-        err = AVERROR(ENOMEM);
-        goto fail;
-    }
-
     err = vaapi_encode_profile_entrypoint(avctx);
     if (err < 0)
         goto fail;
-- 
2.41.0.windows.1


^ permalink raw reply	[flat|nested] 15+ messages in thread

* [FFmpeg-devel] [PATCH v7 05/12] avcodec/vaapi_encode: extract a close function for base layer
  2024-03-14  8:14 [FFmpeg-devel] [PATCH v7 01/12] avcodec/vaapi_encode: move pic->input_surface initialization to encode_alloc tong1.wu-at-intel.com
                   ` (2 preceding siblings ...)
  2024-03-14  8:14 ` [FFmpeg-devel] [PATCH v7 04/12] avcodec/vaapi_encode: extract an init function " tong1.wu-at-intel.com
@ 2024-03-14  8:14 ` tong1.wu-at-intel.com
  2024-03-14  8:14 ` [FFmpeg-devel] [PATCH v7 06/12] avcodec/vaapi_encode: extract set_output_property to " tong1.wu-at-intel.com
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 15+ messages in thread
From: tong1.wu-at-intel.com @ 2024-03-14  8:14 UTC (permalink / raw)
  To: ffmpeg-devel; +Cc: Tong Wu

From: Tong Wu <tong1.wu@intel.com>

Signed-off-by: Tong Wu <tong1.wu@intel.com>
---
 libavcodec/hw_base_encode.c | 16 ++++++++++++++++
 libavcodec/hw_base_encode.h |  2 ++
 libavcodec/vaapi_encode.c   |  8 +-------
 3 files changed, 19 insertions(+), 7 deletions(-)

diff --git a/libavcodec/hw_base_encode.c b/libavcodec/hw_base_encode.c
index 71ead512c0..754f990ed0 100644
--- a/libavcodec/hw_base_encode.c
+++ b/libavcodec/hw_base_encode.c
@@ -630,3 +630,19 @@ int ff_hw_base_encode_init(AVCodecContext *avctx)
 
     return 0;
 }
+
+int ff_hw_base_encode_close(AVCodecContext *avctx)
+{
+    HWBaseEncodeContext *ctx = avctx->priv_data;
+
+    av_fifo_freep2(&ctx->encode_fifo);
+
+    av_frame_free(&ctx->frame);
+    av_packet_free(&ctx->tail_pkt);
+
+    av_buffer_unref(&ctx->device_ref);
+    av_buffer_unref(&ctx->input_frames_ref);
+    av_buffer_unref(&ctx->recon_frames_ref);
+
+    return 0;
+}
diff --git a/libavcodec/hw_base_encode.h b/libavcodec/hw_base_encode.h
index c9c8fb43c9..436767b706 100644
--- a/libavcodec/hw_base_encode.h
+++ b/libavcodec/hw_base_encode.h
@@ -228,6 +228,8 @@ int ff_hw_base_encode_receive_packet(AVCodecContext *avctx, AVPacket *pkt);
 
 int ff_hw_base_encode_init(AVCodecContext *avctx);
 
+int ff_hw_base_encode_close(AVCodecContext *avctx);
+
 #define HW_BASE_ENCODE_COMMON_OPTIONS \
     { "idr_interval", \
       "Distance (in I-frames) between key frames", \
diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c
index 887543734b..ffac27a08f 100644
--- a/libavcodec/vaapi_encode.c
+++ b/libavcodec/vaapi_encode.c
@@ -2412,16 +2412,10 @@ av_cold int ff_vaapi_encode_close(AVCodecContext *avctx)
         ctx->va_config = VA_INVALID_ID;
     }
 
-    av_frame_free(&base_ctx->frame);
-    av_packet_free(&base_ctx->tail_pkt);
-
     av_freep(&ctx->codec_sequence_params);
     av_freep(&ctx->codec_picture_params);
-    av_fifo_freep2(&base_ctx->encode_fifo);
 
-    av_buffer_unref(&base_ctx->recon_frames_ref);
-    av_buffer_unref(&base_ctx->input_frames_ref);
-    av_buffer_unref(&base_ctx->device_ref);
+    ff_hw_base_encode_close(avctx);
 
     return 0;
 }
-- 
2.41.0.windows.1


^ permalink raw reply	[flat|nested] 15+ messages in thread

* [FFmpeg-devel] [PATCH v7 06/12] avcodec/vaapi_encode: extract set_output_property to base layer
  2024-03-14  8:14 [FFmpeg-devel] [PATCH v7 01/12] avcodec/vaapi_encode: move pic->input_surface initialization to encode_alloc tong1.wu-at-intel.com
                   ` (3 preceding siblings ...)
  2024-03-14  8:14 ` [FFmpeg-devel] [PATCH v7 05/12] avcodec/vaapi_encode: extract a close function for " tong1.wu-at-intel.com
@ 2024-03-14  8:14 ` tong1.wu-at-intel.com
  2024-03-14  8:14 ` [FFmpeg-devel] [PATCH v7 07/12] avcodec/vaapi_encode: extract gop configuration " tong1.wu-at-intel.com
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 15+ messages in thread
From: tong1.wu-at-intel.com @ 2024-03-14  8:14 UTC (permalink / raw)
  To: ffmpeg-devel; +Cc: Tong Wu

From: Tong Wu <tong1.wu@intel.com>

Signed-off-by: Tong Wu <tong1.wu@intel.com>
---
 libavcodec/hw_base_encode.c | 40 +++++++++++++++++++++++++++++++++
 libavcodec/hw_base_encode.h |  3 +++
 libavcodec/vaapi_encode.c   | 44 ++-----------------------------------
 3 files changed, 45 insertions(+), 42 deletions(-)

diff --git a/libavcodec/hw_base_encode.c b/libavcodec/hw_base_encode.c
index 754f990ed0..c7aed019fa 100644
--- a/libavcodec/hw_base_encode.c
+++ b/libavcodec/hw_base_encode.c
@@ -490,6 +490,46 @@ fail:
     return err;
 }
 
+int ff_hw_base_encode_set_output_property(AVCodecContext *avctx,
+                                          HWBaseEncodePicture *pic,
+                                          AVPacket *pkt, int flag_no_delay)
+{
+    HWBaseEncodeContext *ctx = avctx->priv_data;
+
+    if (pic->type == PICTURE_TYPE_IDR)
+        pkt->flags |= AV_PKT_FLAG_KEY;
+
+    pkt->pts = pic->pts;
+    pkt->duration = pic->duration;
+
+    // for no-delay encoders this is handled in generic codec
+    if (avctx->codec->capabilities & AV_CODEC_CAP_DELAY &&
+        avctx->flags & AV_CODEC_FLAG_COPY_OPAQUE) {
+        pkt->opaque          = pic->opaque;
+        pkt->opaque_ref      = pic->opaque_ref;
+        pic->opaque_ref = NULL;
+    }
+
+    if (flag_no_delay) {
+        pkt->dts = pkt->pts;
+        return 0;
+    }
+
+    if (ctx->output_delay == 0) {
+        pkt->dts = pkt->pts;
+    } else if (pic->encode_order < ctx->decode_delay) {
+        if (ctx->ts_ring[pic->encode_order] < INT64_MIN + ctx->dts_pts_diff)
+            pkt->dts = INT64_MIN;
+        else
+            pkt->dts = ctx->ts_ring[pic->encode_order] - ctx->dts_pts_diff;
+    } else {
+        pkt->dts = ctx->ts_ring[(pic->encode_order - ctx->decode_delay) %
+                                (3 * ctx->output_delay + ctx->async_depth)];
+    }
+
+    return 0;
+}
+
 int ff_hw_base_encode_receive_packet(AVCodecContext *avctx, AVPacket *pkt)
 {
     HWBaseEncodeContext *ctx = avctx->priv_data;
diff --git a/libavcodec/hw_base_encode.h b/libavcodec/hw_base_encode.h
index 436767b706..3af696a21a 100644
--- a/libavcodec/hw_base_encode.h
+++ b/libavcodec/hw_base_encode.h
@@ -224,6 +224,9 @@ typedef struct HWBaseEncodeContext {
     AVPacket        *tail_pkt;
 } HWBaseEncodeContext;
 
+int ff_hw_base_encode_set_output_property(AVCodecContext *avctx, HWBaseEncodePicture *pic,
+                                          AVPacket *pkt, int flag_no_delay);
+
 int ff_hw_base_encode_receive_packet(AVCodecContext *avctx, AVPacket *pkt);
 
 int ff_hw_base_encode_init(AVCodecContext *avctx);
diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c
index ffac27a08f..0c8be3a1e6 100644
--- a/libavcodec/vaapi_encode.c
+++ b/libavcodec/vaapi_encode.c
@@ -657,47 +657,6 @@ fail_at_end:
     return err;
 }
 
-static int vaapi_encode_set_output_property(AVCodecContext *avctx,
-                                            HWBaseEncodePicture *pic,
-                                            AVPacket *pkt)
-{
-    HWBaseEncodeContext *base_ctx = avctx->priv_data;
-    VAAPIEncodeContext *ctx = avctx->priv_data;
-
-    if (pic->type == PICTURE_TYPE_IDR)
-        pkt->flags |= AV_PKT_FLAG_KEY;
-
-    pkt->pts = pic->pts;
-    pkt->duration = pic->duration;
-
-    // for no-delay encoders this is handled in generic codec
-    if (avctx->codec->capabilities & AV_CODEC_CAP_DELAY &&
-        avctx->flags & AV_CODEC_FLAG_COPY_OPAQUE) {
-        pkt->opaque     = pic->opaque;
-        pkt->opaque_ref = pic->opaque_ref;
-        pic->opaque_ref = NULL;
-    }
-
-    if (ctx->codec->flags & FLAG_TIMESTAMP_NO_DELAY) {
-        pkt->dts = pkt->pts;
-        return 0;
-    }
-
-    if (base_ctx->output_delay == 0) {
-        pkt->dts = pkt->pts;
-    } else if (pic->encode_order < base_ctx->decode_delay) {
-        if (base_ctx->ts_ring[pic->encode_order] < INT64_MIN + base_ctx->dts_pts_diff)
-            pkt->dts = INT64_MIN;
-        else
-            pkt->dts = base_ctx->ts_ring[pic->encode_order] - base_ctx->dts_pts_diff;
-    } else {
-        pkt->dts = base_ctx->ts_ring[(pic->encode_order - base_ctx->decode_delay) %
-                                     (3 * base_ctx->output_delay + base_ctx->async_depth)];
-    }
-
-    return 0;
-}
-
 static int vaapi_encode_get_coded_buffer_size(AVCodecContext *avctx, VABufferID buf_id)
 {
     VAAPIEncodeContext *ctx = avctx->priv_data;
@@ -849,7 +808,8 @@ static int vaapi_encode_output(AVCodecContext *avctx,
     av_log(avctx, AV_LOG_DEBUG, "Output read for pic %"PRId64"/%"PRId64".\n",
            base_pic->display_order, base_pic->encode_order);
 
-    vaapi_encode_set_output_property(avctx, (HWBaseEncodePicture*)base_pic, pkt_ptr);
+    ff_hw_base_encode_set_output_property(avctx, (HWBaseEncodePicture*)base_pic, pkt_ptr,
+                                          ctx->codec->flags & FLAG_TIMESTAMP_NO_DELAY);
 
 end:
     ff_refstruct_unref(&pic->output_buffer_ref);
-- 
2.41.0.windows.1


^ permalink raw reply	[flat|nested] 15+ messages in thread

* [FFmpeg-devel] [PATCH v7 07/12] avcodec/vaapi_encode: extract gop configuration to base layer
  2024-03-14  8:14 [FFmpeg-devel] [PATCH v7 01/12] avcodec/vaapi_encode: move pic->input_surface initialization to encode_alloc tong1.wu-at-intel.com
                   ` (4 preceding siblings ...)
  2024-03-14  8:14 ` [FFmpeg-devel] [PATCH v7 06/12] avcodec/vaapi_encode: extract set_output_property to " tong1.wu-at-intel.com
@ 2024-03-14  8:14 ` tong1.wu-at-intel.com
  2024-03-14  8:14 ` [FFmpeg-devel] [PATCH v7 08/12] avcodec/vaapi_encode: extract a get_recon_format function " tong1.wu-at-intel.com
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 15+ messages in thread
From: tong1.wu-at-intel.com @ 2024-03-14  8:14 UTC (permalink / raw)
  To: ffmpeg-devel; +Cc: Tong Wu

From: Tong Wu <tong1.wu@intel.com>

Signed-off-by: Tong Wu <tong1.wu@intel.com>
---
 libavcodec/hw_base_encode.c | 54 +++++++++++++++++++++++++++++++++++++
 libavcodec/hw_base_encode.h |  3 +++
 libavcodec/vaapi_encode.c   | 52 +++--------------------------------
 3 files changed, 61 insertions(+), 48 deletions(-)

diff --git a/libavcodec/hw_base_encode.c b/libavcodec/hw_base_encode.c
index c7aed019fa..8044b83292 100644
--- a/libavcodec/hw_base_encode.c
+++ b/libavcodec/hw_base_encode.c
@@ -638,6 +638,60 @@ end:
     return 0;
 }
 
+int ff_hw_base_init_gop_structure(AVCodecContext *avctx, uint32_t ref_l0, uint32_t ref_l1,
+                                  int flags, int prediction_pre_only)
+{
+    HWBaseEncodeContext *ctx = avctx->priv_data;
+
+    if (flags & FLAG_INTRA_ONLY || avctx->gop_size <= 1) {
+        av_log(avctx, AV_LOG_VERBOSE, "Using intra frames only.\n");
+        ctx->gop_size = 1;
+    } else if (ref_l0 < 1) {
+        av_log(avctx, AV_LOG_ERROR, "Driver does not support any "
+               "reference frames.\n");
+        return AVERROR(EINVAL);
+    } else if (!(flags & FLAG_B_PICTURES) || ref_l1 < 1 ||
+               avctx->max_b_frames < 1 || prediction_pre_only) {
+        if (ctx->p_to_gpb)
+           av_log(avctx, AV_LOG_VERBOSE, "Using intra and B-frames "
+                  "(supported references: %d / %d).\n",
+                  ref_l0, ref_l1);
+        else
+            av_log(avctx, AV_LOG_VERBOSE, "Using intra and P-frames "
+                   "(supported references: %d / %d).\n", ref_l0, ref_l1);
+        ctx->gop_size = avctx->gop_size;
+        ctx->p_per_i  = INT_MAX;
+        ctx->b_per_p  = 0;
+    } else {
+       if (ctx->p_to_gpb)
+           av_log(avctx, AV_LOG_VERBOSE, "Using intra and B-frames "
+                  "(supported references: %d / %d).\n",
+                  ref_l0, ref_l1);
+       else
+           av_log(avctx, AV_LOG_VERBOSE, "Using intra, P- and B-frames "
+                  "(supported references: %d / %d).\n", ref_l0, ref_l1);
+        ctx->gop_size = avctx->gop_size;
+        ctx->p_per_i  = INT_MAX;
+        ctx->b_per_p  = avctx->max_b_frames;
+        if (flags & FLAG_B_PICTURE_REFERENCES) {
+            ctx->max_b_depth = FFMIN(ctx->desired_b_depth,
+                                     av_log2(ctx->b_per_p) + 1);
+        } else {
+            ctx->max_b_depth = 1;
+        }
+    }
+
+    if (flags & FLAG_NON_IDR_KEY_PICTURES) {
+        ctx->closed_gop  = !!(avctx->flags & AV_CODEC_FLAG_CLOSED_GOP);
+        ctx->gop_per_idr = ctx->idr_interval + 1;
+    } else {
+        ctx->closed_gop  = 1;
+        ctx->gop_per_idr = 1;
+    }
+
+    return 0;
+}
+
 int ff_hw_base_encode_init(AVCodecContext *avctx)
 {
     HWBaseEncodeContext *ctx = avctx->priv_data;
diff --git a/libavcodec/hw_base_encode.h b/libavcodec/hw_base_encode.h
index 3af696a21a..39893a8cc5 100644
--- a/libavcodec/hw_base_encode.h
+++ b/libavcodec/hw_base_encode.h
@@ -229,6 +229,9 @@ int ff_hw_base_encode_set_output_property(AVCodecContext *avctx, HWBaseEncodePic
 
 int ff_hw_base_encode_receive_packet(AVCodecContext *avctx, AVPacket *pkt);
 
+int ff_hw_base_init_gop_structure(AVCodecContext *avctx, uint32_t ref_l0, uint32_t ref_l1,
+                                  int flags, int prediction_pre_only);
+
 int ff_hw_base_encode_init(AVCodecContext *avctx);
 
 int ff_hw_base_encode_close(AVCodecContext *avctx);
diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c
index 0c8be3a1e6..4633cf08b7 100644
--- a/libavcodec/vaapi_encode.c
+++ b/libavcodec/vaapi_encode.c
@@ -1567,7 +1567,7 @@ static av_cold int vaapi_encode_init_gop_structure(AVCodecContext *avctx)
     VAStatus vas;
     VAConfigAttrib attr = { VAConfigAttribEncMaxRefFrames };
     uint32_t ref_l0, ref_l1;
-    int prediction_pre_only;
+    int prediction_pre_only, err;
 
     vas = vaGetConfigAttributes(ctx->hwctx->display,
                                 ctx->va_profile,
@@ -1631,53 +1631,9 @@ static av_cold int vaapi_encode_init_gop_structure(AVCodecContext *avctx)
     }
 #endif
 
-    if (ctx->codec->flags & FLAG_INTRA_ONLY ||
-        avctx->gop_size <= 1) {
-        av_log(avctx, AV_LOG_VERBOSE, "Using intra frames only.\n");
-        base_ctx->gop_size = 1;
-    } else if (ref_l0 < 1) {
-        av_log(avctx, AV_LOG_ERROR, "Driver does not support any "
-               "reference frames.\n");
-        return AVERROR(EINVAL);
-    } else if (!(ctx->codec->flags & FLAG_B_PICTURES) ||
-               ref_l1 < 1 || avctx->max_b_frames < 1 ||
-               prediction_pre_only) {
-        if (base_ctx->p_to_gpb)
-           av_log(avctx, AV_LOG_VERBOSE, "Using intra and B-frames "
-                  "(supported references: %d / %d).\n",
-                  ref_l0, ref_l1);
-        else
-            av_log(avctx, AV_LOG_VERBOSE, "Using intra and P-frames "
-                   "(supported references: %d / %d).\n", ref_l0, ref_l1);
-        base_ctx->gop_size = avctx->gop_size;
-        base_ctx->p_per_i  = INT_MAX;
-        base_ctx->b_per_p  = 0;
-    } else {
-       if (base_ctx->p_to_gpb)
-           av_log(avctx, AV_LOG_VERBOSE, "Using intra and B-frames "
-                  "(supported references: %d / %d).\n",
-                  ref_l0, ref_l1);
-       else
-           av_log(avctx, AV_LOG_VERBOSE, "Using intra, P- and B-frames "
-                  "(supported references: %d / %d).\n", ref_l0, ref_l1);
-        base_ctx->gop_size = avctx->gop_size;
-        base_ctx->p_per_i  = INT_MAX;
-        base_ctx->b_per_p  = avctx->max_b_frames;
-        if (ctx->codec->flags & FLAG_B_PICTURE_REFERENCES) {
-            base_ctx->max_b_depth = FFMIN(base_ctx->desired_b_depth,
-                                          av_log2(base_ctx->b_per_p) + 1);
-        } else {
-            base_ctx->max_b_depth = 1;
-        }
-    }
-
-    if (ctx->codec->flags & FLAG_NON_IDR_KEY_PICTURES) {
-        base_ctx->closed_gop  = !!(avctx->flags & AV_CODEC_FLAG_CLOSED_GOP);
-        base_ctx->gop_per_idr = base_ctx->idr_interval + 1;
-    } else {
-        base_ctx->closed_gop  = 1;
-        base_ctx->gop_per_idr = 1;
-    }
+    err = ff_hw_base_init_gop_structure(avctx, ref_l0, ref_l1, ctx->codec->flags, prediction_pre_only);
+    if (err < 0)
+        return err;
 
     return 0;
 }
-- 
2.41.0.windows.1


^ permalink raw reply	[flat|nested] 15+ messages in thread

* [FFmpeg-devel] [PATCH v7 08/12] avcodec/vaapi_encode: extract a get_recon_format function to base layer
  2024-03-14  8:14 [FFmpeg-devel] [PATCH v7 01/12] avcodec/vaapi_encode: move pic->input_surface initialization to encode_alloc tong1.wu-at-intel.com
                   ` (5 preceding siblings ...)
  2024-03-14  8:14 ` [FFmpeg-devel] [PATCH v7 07/12] avcodec/vaapi_encode: extract gop configuration " tong1.wu-at-intel.com
@ 2024-03-14  8:14 ` tong1.wu-at-intel.com
  2024-03-14  8:14 ` [FFmpeg-devel] [PATCH v7 09/12] avcodec/vaapi_encode: extract a free function " tong1.wu-at-intel.com
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 15+ messages in thread
From: tong1.wu-at-intel.com @ 2024-03-14  8:14 UTC (permalink / raw)
  To: ffmpeg-devel; +Cc: Tong Wu

From: Tong Wu <tong1.wu@intel.com>

Getting the constraints and setting the reconstructed frame format can be
shared with other HW encoders such as D3D12. Extract this part into a new
function in the base layer.

Signed-off-by: Tong Wu <tong1.wu@intel.com>
---
 libavcodec/hw_base_encode.c | 58 +++++++++++++++++++++++++++++++++++++
 libavcodec/hw_base_encode.h |  2 ++
 libavcodec/vaapi_encode.c   | 51 ++------------------------------
 3 files changed, 63 insertions(+), 48 deletions(-)

diff --git a/libavcodec/hw_base_encode.c b/libavcodec/hw_base_encode.c
index 8044b83292..d21137b64a 100644
--- a/libavcodec/hw_base_encode.c
+++ b/libavcodec/hw_base_encode.c
@@ -692,6 +692,64 @@ int ff_hw_base_init_gop_structure(AVCodecContext *avctx, uint32_t ref_l0, uint32
     return 0;
 }
 
+int ff_hw_base_get_recon_format(AVCodecContext *avctx, const void *hwconfig, enum AVPixelFormat *fmt)
+{
+    HWBaseEncodeContext *ctx = avctx->priv_data;
+    AVHWFramesConstraints *constraints = NULL;
+    enum AVPixelFormat recon_format;
+    int err, i;
+
+    constraints = av_hwdevice_get_hwframe_constraints(ctx->device_ref,
+                                                      hwconfig);
+    if (!constraints) {
+        err = AVERROR(ENOMEM);
+        goto fail;
+    }
+
+    // Probably we can use the input surface format as the surface format
+    // of the reconstructed frames.  If not, we just pick the first (only?)
+    // format in the valid list and hope that it all works.
+    recon_format = AV_PIX_FMT_NONE;
+    if (constraints->valid_sw_formats) {
+        for (i = 0; constraints->valid_sw_formats[i] != AV_PIX_FMT_NONE; i++) {
+            if (ctx->input_frames->sw_format ==
+                constraints->valid_sw_formats[i]) {
+                recon_format = ctx->input_frames->sw_format;
+                break;
+            }
+        }
+        if (recon_format == AV_PIX_FMT_NONE) {
+            // No match.  Just use the first in the supported list and
+            // hope for the best.
+            recon_format = constraints->valid_sw_formats[0];
+        }
+    } else {
+        // No idea what to use; copy input format.
+        recon_format = ctx->input_frames->sw_format;
+    }
+    av_log(avctx, AV_LOG_DEBUG, "Using %s as format of "
+           "reconstructed frames.\n", av_get_pix_fmt_name(recon_format));
+
+    if (ctx->surface_width  < constraints->min_width  ||
+        ctx->surface_height < constraints->min_height ||
+        ctx->surface_width  > constraints->max_width ||
+        ctx->surface_height > constraints->max_height) {
+        av_log(avctx, AV_LOG_ERROR, "Hardware does not support encoding at "
+               "size %dx%d (constraints: width %d-%d height %d-%d).\n",
+               ctx->surface_width, ctx->surface_height,
+               constraints->min_width,  constraints->max_width,
+               constraints->min_height, constraints->max_height);
+        err = AVERROR(EINVAL);
+        goto fail;
+    }
+
+    *fmt = recon_format;
+    err = 0;
+fail:
+    av_hwframe_constraints_free(&constraints);
+    return err;
+}
+
 int ff_hw_base_encode_init(AVCodecContext *avctx)
 {
     HWBaseEncodeContext *ctx = avctx->priv_data;
diff --git a/libavcodec/hw_base_encode.h b/libavcodec/hw_base_encode.h
index 39893a8cc5..9957786a3f 100644
--- a/libavcodec/hw_base_encode.h
+++ b/libavcodec/hw_base_encode.h
@@ -232,6 +232,8 @@ int ff_hw_base_encode_receive_packet(AVCodecContext *avctx, AVPacket *pkt);
 int ff_hw_base_init_gop_structure(AVCodecContext *avctx, uint32_t ref_l0, uint32_t ref_l1,
                                   int flags, int prediction_pre_only);
 
+int ff_hw_base_get_recon_format(AVCodecContext *avctx, const void *hwconfig, enum AVPixelFormat *fmt);
+
 int ff_hw_base_encode_init(AVCodecContext *avctx);
 
 int ff_hw_base_encode_close(AVCodecContext *avctx);
diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c
index 4633cf08b7..b2fa3dc93c 100644
--- a/libavcodec/vaapi_encode.c
+++ b/libavcodec/vaapi_encode.c
@@ -2022,9 +2022,8 @@ static av_cold int vaapi_encode_create_recon_frames(AVCodecContext *avctx)
     HWBaseEncodeContext *base_ctx = avctx->priv_data;
     VAAPIEncodeContext       *ctx = avctx->priv_data;
     AVVAAPIHWConfig *hwconfig = NULL;
-    AVHWFramesConstraints *constraints = NULL;
     enum AVPixelFormat recon_format;
-    int err, i;
+    int err;
 
     hwconfig = av_hwdevice_hwconfig_alloc(base_ctx->device_ref);
     if (!hwconfig) {
@@ -2033,52 +2032,9 @@ static av_cold int vaapi_encode_create_recon_frames(AVCodecContext *avctx)
     }
     hwconfig->config_id = ctx->va_config;
 
-    constraints = av_hwdevice_get_hwframe_constraints(base_ctx->device_ref,
-                                                      hwconfig);
-    if (!constraints) {
-        err = AVERROR(ENOMEM);
-        goto fail;
-    }
-
-    // Probably we can use the input surface format as the surface format
-    // of the reconstructed frames.  If not, we just pick the first (only?)
-    // format in the valid list and hope that it all works.
-    recon_format = AV_PIX_FMT_NONE;
-    if (constraints->valid_sw_formats) {
-        for (i = 0; constraints->valid_sw_formats[i] != AV_PIX_FMT_NONE; i++) {
-            if (base_ctx->input_frames->sw_format ==
-                constraints->valid_sw_formats[i]) {
-                recon_format = base_ctx->input_frames->sw_format;
-                break;
-            }
-        }
-        if (recon_format == AV_PIX_FMT_NONE) {
-            // No match.  Just use the first in the supported list and
-            // hope for the best.
-            recon_format = constraints->valid_sw_formats[0];
-        }
-    } else {
-        // No idea what to use; copy input format.
-        recon_format = base_ctx->input_frames->sw_format;
-    }
-    av_log(avctx, AV_LOG_DEBUG, "Using %s as format of "
-           "reconstructed frames.\n", av_get_pix_fmt_name(recon_format));
-
-    if (base_ctx->surface_width  < constraints->min_width  ||
-        base_ctx->surface_height < constraints->min_height ||
-        base_ctx->surface_width  > constraints->max_width ||
-        base_ctx->surface_height > constraints->max_height) {
-        av_log(avctx, AV_LOG_ERROR, "Hardware does not support encoding at "
-               "size %dx%d (constraints: width %d-%d height %d-%d).\n",
-               base_ctx->surface_width, base_ctx->surface_height,
-               constraints->min_width,  constraints->max_width,
-               constraints->min_height, constraints->max_height);
-        err = AVERROR(EINVAL);
+    err = ff_hw_base_get_recon_format(avctx, (const void*)hwconfig, &recon_format);
+    if (err < 0)
         goto fail;
-    }
-
-    av_freep(&hwconfig);
-    av_hwframe_constraints_free(&constraints);
 
     base_ctx->recon_frames_ref = av_hwframe_ctx_alloc(base_ctx->device_ref);
     if (!base_ctx->recon_frames_ref) {
@@ -2102,7 +2058,6 @@ static av_cold int vaapi_encode_create_recon_frames(AVCodecContext *avctx)
     err = 0;
   fail:
     av_freep(&hwconfig);
-    av_hwframe_constraints_free(&constraints);
     return err;
 }
 
-- 
2.41.0.windows.1

_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [FFmpeg-devel] [PATCH v7 09/12] avcodec/vaapi_encode: extract a free function to base layer
  2024-03-14  8:14 [FFmpeg-devel] [PATCH v7 01/12] avcodec/vaapi_encode: move pic->input_surface initialization to encode_alloc tong1.wu-at-intel.com
                   ` (6 preceding siblings ...)
  2024-03-14  8:14 ` [FFmpeg-devel] [PATCH v7 08/12] avcodec/vaapi_encode: extract a get_recon_format function " tong1.wu-at-intel.com
@ 2024-03-14  8:14 ` tong1.wu-at-intel.com
  2024-03-14  8:14 ` [FFmpeg-devel] [PATCH v7 10/12] avutil/hwcontext_d3d12va: add Flags for resource creation tong1.wu-at-intel.com
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 15+ messages in thread
From: tong1.wu-at-intel.com @ 2024-03-14  8:14 UTC (permalink / raw)
  To: ffmpeg-devel; +Cc: Tong Wu

From: Tong Wu <tong1.wu@intel.com>

Signed-off-by: Tong Wu <tong1.wu@intel.com>
---
 libavcodec/hw_base_encode.c | 11 +++++++++++
 libavcodec/hw_base_encode.h |  2 ++
 libavcodec/vaapi_encode.c   |  6 +-----
 3 files changed, 14 insertions(+), 5 deletions(-)

diff --git a/libavcodec/hw_base_encode.c b/libavcodec/hw_base_encode.c
index d21137b64a..ad221ae4b0 100644
--- a/libavcodec/hw_base_encode.c
+++ b/libavcodec/hw_base_encode.c
@@ -750,6 +750,17 @@ fail:
     return err;
 }
 
+int ff_hw_base_encode_free(AVCodecContext *avctx, HWBaseEncodePicture *pic)
+{
+    av_frame_free(&pic->input_image);
+    av_frame_free(&pic->recon_image);
+
+    av_buffer_unref(&pic->opaque_ref);
+    av_freep(&pic->priv_data);
+
+    return 0;
+}
+
 int ff_hw_base_encode_init(AVCodecContext *avctx)
 {
     HWBaseEncodeContext *ctx = avctx->priv_data;
diff --git a/libavcodec/hw_base_encode.h b/libavcodec/hw_base_encode.h
index 9957786a3f..225ca68d3e 100644
--- a/libavcodec/hw_base_encode.h
+++ b/libavcodec/hw_base_encode.h
@@ -234,6 +234,8 @@ int ff_hw_base_init_gop_structure(AVCodecContext *avctx, uint32_t ref_l0, uint32
 
 int ff_hw_base_get_recon_format(AVCodecContext *avctx, const void *hwconfig, enum AVPixelFormat *fmt);
 
+int ff_hw_base_encode_free(AVCodecContext *avctx, HWBaseEncodePicture *pic);
+
 int ff_hw_base_encode_init(AVCodecContext *avctx);
 
 int ff_hw_base_encode_close(AVCodecContext *avctx);
diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c
index b2fa3dc93c..c75e8ba5fb 100644
--- a/libavcodec/vaapi_encode.c
+++ b/libavcodec/vaapi_encode.c
@@ -874,17 +874,13 @@ static int vaapi_encode_free(AVCodecContext *avctx,
             av_freep(&pic->slices[i].codec_slice_params);
     }
 
-    av_frame_free(&base_pic->input_image);
-    av_frame_free(&base_pic->recon_image);
-
-    av_buffer_unref(&base_pic->opaque_ref);
+    ff_hw_base_encode_free(avctx, base_pic);
 
     av_freep(&pic->param_buffers);
     av_freep(&pic->slices);
     // Output buffer should already be destroyed.
     av_assert0(pic->output_buffer == VA_INVALID_ID);
 
-    av_freep(&base_pic->priv_data);
     av_freep(&pic->codec_picture_params);
     av_freep(&pic->roi);
 
-- 
2.41.0.windows.1


^ permalink raw reply	[flat|nested] 15+ messages in thread

* [FFmpeg-devel] [PATCH v7 10/12] avutil/hwcontext_d3d12va: add Flags for resource creation
  2024-03-14  8:14 [FFmpeg-devel] [PATCH v7 01/12] avcodec/vaapi_encode: move pic->input_surface initialization to encode_alloc tong1.wu-at-intel.com
                   ` (7 preceding siblings ...)
  2024-03-14  8:14 ` [FFmpeg-devel] [PATCH v7 09/12] avcodec/vaapi_encode: extract a free function " tong1.wu-at-intel.com
@ 2024-03-14  8:14 ` tong1.wu-at-intel.com
  2024-03-14  8:14 ` [FFmpeg-devel] [PATCH v7 11/12] avcodec: add D3D12VA hardware HEVC encoder tong1.wu-at-intel.com
  2024-03-14  8:14 ` [FFmpeg-devel] [PATCH v7 12/12] Changelog: add D3D12VA HEVC encoder changelog tong1.wu-at-intel.com
  10 siblings, 0 replies; 15+ messages in thread
From: tong1.wu-at-intel.com @ 2024-03-14  8:14 UTC (permalink / raw)
  To: ffmpeg-devel; +Cc: Tong Wu

From: Tong Wu <tong1.wu@intel.com>

A Flags field is added to support different resource creation options.

Signed-off-by: Tong Wu <tong1.wu@intel.com>
---
 doc/APIchanges                | 3 +++
 libavutil/hwcontext_d3d12va.c | 2 +-
 libavutil/hwcontext_d3d12va.h | 8 ++++++++
 libavutil/version.h           | 2 +-
 4 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/doc/APIchanges b/doc/APIchanges
index cf58c8c5f0..7e003ddf65 100644
--- a/doc/APIchanges
+++ b/doc/APIchanges
@@ -2,6 +2,9 @@ The last version increases of all libraries were on 2024-03-07
 
 API changes, most recent first:
 
+2024-03-xx - xxxxxxxxxx - lavu 59.2.100 - hwcontext_d3d12va.h
+  Add AVD3D12VAFramesContext.flags
+
 2024-03-08 - xxxxxxxxxx - lavc 61.1.100 - avcodec.h
   Add AVCodecContext.[nb_]side_data_prefer_packet.
 
diff --git a/libavutil/hwcontext_d3d12va.c b/libavutil/hwcontext_d3d12va.c
index 353807359b..3abe90247f 100644
--- a/libavutil/hwcontext_d3d12va.c
+++ b/libavutil/hwcontext_d3d12va.c
@@ -246,7 +246,7 @@ static AVBufferRef *d3d12va_pool_alloc(void *opaque, size_t size)
         .Format           = hwctx->format,
         .SampleDesc       = {.Count = 1, .Quality = 0 },
         .Layout           = D3D12_TEXTURE_LAYOUT_UNKNOWN,
-        .Flags            = D3D12_RESOURCE_FLAG_NONE,
+        .Flags            = hwctx->flags,
     };
 
     frame = av_mallocz(sizeof(AVD3D12VAFrame));
diff --git a/libavutil/hwcontext_d3d12va.h b/libavutil/hwcontext_d3d12va.h
index ff06e6f2ef..608dbac97f 100644
--- a/libavutil/hwcontext_d3d12va.h
+++ b/libavutil/hwcontext_d3d12va.h
@@ -129,6 +129,14 @@ typedef struct AVD3D12VAFramesContext {
      * If unset, will be automatically set.
      */
     DXGI_FORMAT format;
+
+    /**
+     * This field is used to specify options for working with resources.
+     * If unset, this will be D3D12_RESOURCE_FLAG_NONE.
+     *
+     * @see: https://learn.microsoft.com/en-us/windows/win32/api/d3d12/ne-d3d12-d3d12_resource_flags.
+     */
+    D3D12_RESOURCE_FLAGS flags;
 } AVD3D12VAFramesContext;
 
 #endif /* AVUTIL_HWCONTEXT_D3D12VA_H */
diff --git a/libavutil/version.h b/libavutil/version.h
index 09f8cdc292..57cad02ec0 100644
--- a/libavutil/version.h
+++ b/libavutil/version.h
@@ -79,7 +79,7 @@
  */
 
 #define LIBAVUTIL_VERSION_MAJOR  59
-#define LIBAVUTIL_VERSION_MINOR   1
+#define LIBAVUTIL_VERSION_MINOR   2
 #define LIBAVUTIL_VERSION_MICRO 100
 
 #define LIBAVUTIL_VERSION_INT   AV_VERSION_INT(LIBAVUTIL_VERSION_MAJOR, \
-- 
2.41.0.windows.1


^ permalink raw reply	[flat|nested] 15+ messages in thread

* [FFmpeg-devel] [PATCH v7 11/12] avcodec: add D3D12VA hardware HEVC encoder
  2024-03-14  8:14 [FFmpeg-devel] [PATCH v7 01/12] avcodec/vaapi_encode: move pic->input_surface initialization to encode_alloc tong1.wu-at-intel.com
                   ` (8 preceding siblings ...)
  2024-03-14  8:14 ` [FFmpeg-devel] [PATCH v7 10/12] avutil/hwcontext_d3d12va: add Flags for resource creation tong1.wu-at-intel.com
@ 2024-03-14  8:14 ` tong1.wu-at-intel.com
  2024-03-28  2:35   ` Wu, Tong1
  2024-04-15  8:42   ` Xiang, Haihao
  2024-03-14  8:14 ` [FFmpeg-devel] [PATCH v7 12/12] Changelog: add D3D12VA HEVC encoder changelog tong1.wu-at-intel.com
  10 siblings, 2 replies; 15+ messages in thread
From: tong1.wu-at-intel.com @ 2024-03-14  8:14 UTC (permalink / raw)
  To: ffmpeg-devel; +Cc: Tong Wu

From: Tong Wu <tong1.wu@intel.com>

This implementation is based on D3D12 Video Encoding Spec:
https://microsoft.github.io/DirectX-Specs/d3d/D3D12VideoEncoding.html

Sample command line for transcoding:
ffmpeg.exe -hwaccel d3d12va -hwaccel_output_format d3d12 -i input.mp4
-c:v hevc_d3d12va output.mp4

Signed-off-by: Tong Wu <tong1.wu@intel.com>
---
 configure                        |    6 +
 libavcodec/Makefile              |    4 +-
 libavcodec/allcodecs.c           |    1 +
 libavcodec/d3d12va_encode.c      | 1550 ++++++++++++++++++++++++++++++
 libavcodec/d3d12va_encode.h      |  321 +++++++
 libavcodec/d3d12va_encode_hevc.c |  957 ++++++++++++++++++
 6 files changed, 2838 insertions(+), 1 deletion(-)
 create mode 100644 libavcodec/d3d12va_encode.c
 create mode 100644 libavcodec/d3d12va_encode.h
 create mode 100644 libavcodec/d3d12va_encode_hevc.c

diff --git a/configure b/configure
index c34bdd13f5..53076fbf22 100755
--- a/configure
+++ b/configure
@@ -2570,6 +2570,7 @@ CONFIG_EXTRA="
     tpeldsp
     vaapi_1
     vaapi_encode
+    d3d12va_encode
     vc1dsp
     videodsp
     vp3dsp
@@ -3214,6 +3215,7 @@ wmv3_vaapi_hwaccel_select="vc1_vaapi_hwaccel"
 wmv3_vdpau_hwaccel_select="vc1_vdpau_hwaccel"
 
 # hardware-accelerated codecs
+d3d12va_encode_deps="d3d12va ID3D12VideoEncoder d3d12_encoder_feature"
 mediafoundation_deps="mftransform_h MFCreateAlignedMemoryBuffer"
 omx_deps="libdl pthreads"
 omx_rpi_select="omx"
@@ -3280,6 +3282,7 @@ h264_v4l2m2m_encoder_deps="v4l2_m2m h264_v4l2_m2m"
 hevc_amf_encoder_deps="amf"
 hevc_cuvid_decoder_deps="cuvid"
 hevc_cuvid_decoder_select="hevc_mp4toannexb_bsf"
+hevc_d3d12va_encoder_select="cbs_h265 d3d12va_encode"
 hevc_mediacodec_decoder_deps="mediacodec"
 hevc_mediacodec_decoder_select="hevc_mp4toannexb_bsf hevc_parser"
 hevc_mediacodec_encoder_deps="mediacodec"
@@ -6620,6 +6623,9 @@ check_type "windows.h d3d11.h" "ID3D11VideoDecoder"
 check_type "windows.h d3d11.h" "ID3D11VideoContext"
 check_type "windows.h d3d12.h" "ID3D12Device"
 check_type "windows.h d3d12video.h" "ID3D12VideoDecoder"
+check_type "windows.h d3d12video.h" "ID3D12VideoEncoder"
+test_code cc "windows.h d3d12video.h" "D3D12_FEATURE_VIDEO feature = D3D12_FEATURE_VIDEO_ENCODER_CODEC" && \
+test_code cc "windows.h d3d12video.h" "D3D12_FEATURE_DATA_VIDEO_ENCODER_RESOURCE_REQUIREMENTS req" && enable d3d12_encoder_feature
 check_type "windows.h" "DPI_AWARENESS_CONTEXT" -D_WIN32_WINNT=0x0A00
 check_type "d3d9.h dxva2api.h" DXVA2_ConfigPictureDecode -D_WIN32_WINNT=0x0602
 check_func_headers mfapi.h MFCreateAlignedMemoryBuffer -lmfplat
diff --git a/libavcodec/Makefile b/libavcodec/Makefile
index cbfae5f182..cdda3f0d0a 100644
--- a/libavcodec/Makefile
+++ b/libavcodec/Makefile
@@ -84,6 +84,7 @@ OBJS-$(CONFIG_CBS_JPEG)                += cbs_jpeg.o
 OBJS-$(CONFIG_CBS_MPEG2)               += cbs_mpeg2.o
 OBJS-$(CONFIG_CBS_VP8)                 += cbs_vp8.o vp8data.o
 OBJS-$(CONFIG_CBS_VP9)                 += cbs_vp9.o
+OBJS-$(CONFIG_D3D12VA_ENCODE)          += d3d12va_encode.o hw_base_encode.o
 OBJS-$(CONFIG_DEFLATE_WRAPPER)         += zlib_wrapper.o
 OBJS-$(CONFIG_DOVI_RPU)                += dovi_rpu.o
 OBJS-$(CONFIG_ERROR_RESILIENCE)        += error_resilience.o
@@ -435,6 +436,7 @@ OBJS-$(CONFIG_HEVC_DECODER)            += hevcdec.o hevc_mvs.o \
                                           h274.o
 OBJS-$(CONFIG_HEVC_AMF_ENCODER)        += amfenc_hevc.o
 OBJS-$(CONFIG_HEVC_CUVID_DECODER)      += cuviddec.o
+OBJS-$(CONFIG_HEVC_D3D12VA_ENCODER)    += d3d12va_encode_hevc.o
 OBJS-$(CONFIG_HEVC_MEDIACODEC_DECODER) += mediacodecdec.o
 OBJS-$(CONFIG_HEVC_MEDIACODEC_ENCODER) += mediacodecenc.o
 OBJS-$(CONFIG_HEVC_MF_ENCODER)         += mfenc.o mf_utils.o
@@ -1263,7 +1265,7 @@ SKIPHEADERS                            += %_tablegen.h                  \
 
 SKIPHEADERS-$(CONFIG_AMF)              += amfenc.h
 SKIPHEADERS-$(CONFIG_D3D11VA)          += d3d11va.h dxva2_internal.h
-SKIPHEADERS-$(CONFIG_D3D12VA)          += d3d12va_decode.h
+SKIPHEADERS-$(CONFIG_D3D12VA)          += d3d12va_decode.h d3d12va_encode.h
 SKIPHEADERS-$(CONFIG_DXVA2)            += dxva2.h dxva2_internal.h
 SKIPHEADERS-$(CONFIG_JNI)              += ffjni.h
 SKIPHEADERS-$(CONFIG_LCMS2)            += fflcms2.h
diff --git a/libavcodec/allcodecs.c b/libavcodec/allcodecs.c
index 2386b450a6..7b5093233c 100644
--- a/libavcodec/allcodecs.c
+++ b/libavcodec/allcodecs.c
@@ -855,6 +855,7 @@ extern const FFCodec ff_h264_vaapi_encoder;
 extern const FFCodec ff_h264_videotoolbox_encoder;
 extern const FFCodec ff_hevc_amf_encoder;
 extern const FFCodec ff_hevc_cuvid_decoder;
+extern const FFCodec ff_hevc_d3d12va_encoder;
 extern const FFCodec ff_hevc_mediacodec_decoder;
 extern const FFCodec ff_hevc_mediacodec_encoder;
 extern const FFCodec ff_hevc_mf_encoder;
diff --git a/libavcodec/d3d12va_encode.c b/libavcodec/d3d12va_encode.c
new file mode 100644
index 0000000000..88a08efa76
--- /dev/null
+++ b/libavcodec/d3d12va_encode.c
@@ -0,0 +1,1550 @@
+/*
+ * Direct3D 12 HW acceleration video encoder
+ *
+ * Copyright (c) 2024 Intel Corporation
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include "libavutil/avassert.h"
+#include "libavutil/common.h"
+#include "libavutil/internal.h"
+#include "libavutil/log.h"
+#include "libavutil/pixdesc.h"
+#include "libavutil/hwcontext_d3d12va_internal.h"
+#include "libavutil/hwcontext_d3d12va.h"
+
+#include "avcodec.h"
+#include "d3d12va_encode.h"
+#include "encode.h"
+
+const AVCodecHWConfigInternal *const ff_d3d12va_encode_hw_configs[] = {
+    HW_CONFIG_ENCODER_FRAMES(D3D12, D3D12VA),
+    NULL,
+};
+
+static int d3d12va_fence_completion(AVD3D12VASyncContext *psync_ctx)
+{
+    uint64_t completion = ID3D12Fence_GetCompletedValue(psync_ctx->fence);
+    if (completion < psync_ctx->fence_value) {
+        if (FAILED(ID3D12Fence_SetEventOnCompletion(psync_ctx->fence, psync_ctx->fence_value, psync_ctx->event)))
+            return AVERROR(EINVAL);
+
+        WaitForSingleObjectEx(psync_ctx->event, INFINITE, FALSE);
+    }
+
+    return 0;
+}
+
+static int d3d12va_sync_with_gpu(AVCodecContext *avctx)
+{
+    D3D12VAEncodeContext *ctx = avctx->priv_data;
+
+    DX_CHECK(ID3D12CommandQueue_Signal(ctx->command_queue, ctx->sync_ctx.fence, ++ctx->sync_ctx.fence_value));
+    return d3d12va_fence_completion(&ctx->sync_ctx);
+
+fail:
+    return AVERROR(EINVAL);
+}
+
+typedef struct CommandAllocator {
+    ID3D12CommandAllocator *command_allocator;
+    uint64_t fence_value;
+} CommandAllocator;
+
+static int d3d12va_get_valid_command_allocator(AVCodecContext *avctx, ID3D12CommandAllocator **ppAllocator)
+{
+    HRESULT hr;
+    D3D12VAEncodeContext *ctx = avctx->priv_data;
+    CommandAllocator allocator;
+
+    if (av_fifo_peek(ctx->allocator_queue, &allocator, 1, 0) >= 0) {
+        uint64_t completion = ID3D12Fence_GetCompletedValue(ctx->sync_ctx.fence);
+        if (completion >= allocator.fence_value) {
+            *ppAllocator = allocator.command_allocator;
+            av_fifo_read(ctx->allocator_queue, &allocator, 1);
+            return 0;
+        }
+    }
+
+    hr = ID3D12Device_CreateCommandAllocator(ctx->hwctx->device, D3D12_COMMAND_LIST_TYPE_VIDEO_ENCODE,
+                                             &IID_ID3D12CommandAllocator, (void **)ppAllocator);
+    if (FAILED(hr)) {
+        av_log(avctx, AV_LOG_ERROR, "Failed to create a new command allocator!\n");
+        return AVERROR(EINVAL);
+    }
+
+    return 0;
+}
+
+static int d3d12va_discard_command_allocator(AVCodecContext *avctx, ID3D12CommandAllocator *pAllocator, uint64_t fence_value)
+{
+    D3D12VAEncodeContext *ctx = avctx->priv_data;
+
+    CommandAllocator allocator = {
+        .command_allocator = pAllocator,
+        .fence_value = fence_value,
+    };
+
+    av_fifo_write(ctx->allocator_queue, &allocator, 1);
+
+    return 0;
+}
+
+static int d3d12va_encode_wait(AVCodecContext *avctx,
+                               D3D12VAEncodePicture *pic)
+{
+    D3D12VAEncodeContext *ctx     = avctx->priv_data;
+    HWBaseEncodePicture *base_pic = (HWBaseEncodePicture *)pic;
+    uint64_t completion;
+
+    av_assert0(base_pic->encode_issued);
+
+    if (base_pic->encode_complete) {
+        // Already waited for this picture.
+        return 0;
+    }
+
+    completion = ID3D12Fence_GetCompletedValue(ctx->sync_ctx.fence);
+    if (completion < pic->fence_value) {
+        if (FAILED(ID3D12Fence_SetEventOnCompletion(ctx->sync_ctx.fence, pic->fence_value,
+                                                    ctx->sync_ctx.event)))
+            return AVERROR(EINVAL);
+
+        WaitForSingleObjectEx(ctx->sync_ctx.event, INFINITE, FALSE);
+    }
+
+    av_log(avctx, AV_LOG_DEBUG, "Sync to pic %"PRId64"/%"PRId64" "
+           "(input surface %p).\n", base_pic->display_order,
+           base_pic->encode_order, pic->input_surface->texture);
+
+    av_frame_free(&base_pic->input_image);
+
+    base_pic->encode_complete = 1;
+    return 0;
+}
+
+static int d3d12va_encode_create_metadata_buffers(AVCodecContext *avctx,
+                                                  D3D12VAEncodePicture *pic)
+{
+    D3D12VAEncodeContext *ctx = avctx->priv_data;
+    int width = sizeof(D3D12_VIDEO_ENCODER_OUTPUT_METADATA) + sizeof(D3D12_VIDEO_ENCODER_FRAME_SUBREGION_METADATA);
+    D3D12_HEAP_PROPERTIES encoded_meta_props = { .Type = D3D12_HEAP_TYPE_DEFAULT }, resolved_meta_props;
+    D3D12_HEAP_TYPE resolved_heap_type = D3D12_HEAP_TYPE_READBACK;
+    HRESULT hr;
+
+    D3D12_RESOURCE_DESC meta_desc = {
+        .Dimension        = D3D12_RESOURCE_DIMENSION_BUFFER,
+        .Alignment        = 0,
+        .Width            = ctx->req.MaxEncoderOutputMetadataBufferSize,
+        .Height           = 1,
+        .DepthOrArraySize = 1,
+        .MipLevels        = 1,
+        .Format           = DXGI_FORMAT_UNKNOWN,
+        .SampleDesc       = { .Count = 1, .Quality = 0 },
+        .Layout           = D3D12_TEXTURE_LAYOUT_ROW_MAJOR,
+        .Flags            = D3D12_RESOURCE_FLAG_NONE,
+    };
+
+    hr = ID3D12Device_CreateCommittedResource(ctx->hwctx->device, &encoded_meta_props, D3D12_HEAP_FLAG_NONE,
+                                              &meta_desc, D3D12_RESOURCE_STATE_COMMON, NULL,
+                                              &IID_ID3D12Resource, (void **)&pic->encoded_metadata);
+    if (FAILED(hr)) {
+        av_log(avctx, AV_LOG_ERROR, "Failed to create metadata buffer.\n");
+        return AVERROR_UNKNOWN;
+    }
+
+    ctx->hwctx->device->lpVtbl->GetCustomHeapProperties(ctx->hwctx->device, &resolved_meta_props, 0, resolved_heap_type);
+
+    meta_desc.Width = width;
+
+    hr = ID3D12Device_CreateCommittedResource(ctx->hwctx->device, &resolved_meta_props, D3D12_HEAP_FLAG_NONE,
+                                              &meta_desc, D3D12_RESOURCE_STATE_COMMON, NULL,
+                                              &IID_ID3D12Resource, (void **)&pic->resolved_metadata);
+
+    if (FAILED(hr)) {
+        av_log(avctx, AV_LOG_ERROR, "Failed to create output metadata buffer.\n");
+        return AVERROR_UNKNOWN;
+    }
+
+    return 0;
+}
+
+static int d3d12va_encode_issue(AVCodecContext *avctx,
+                                const HWBaseEncodePicture *base_pic)
+{
+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
+    D3D12VAEncodeContext     *ctx = avctx->priv_data;
+    AVD3D12VAFramesContext *frames_hwctx = base_ctx->input_frames->hwctx;
+    D3D12VAEncodePicture *pic = (D3D12VAEncodePicture *)base_pic;
+    int err, i, j;
+    HRESULT hr;
+    char data[MAX_PARAM_BUFFER_SIZE];
+    void *ptr;
+    size_t bit_len;
+    ID3D12CommandAllocator *command_allocator = NULL;
+    ID3D12VideoEncodeCommandList2 *cmd_list = ctx->command_list;
+    D3D12_RESOURCE_BARRIER barriers[32] = { 0 };
+    D3D12_VIDEO_ENCODE_REFERENCE_FRAMES d3d12_refs = { 0 };
+
+    D3D12_VIDEO_ENCODER_ENCODEFRAME_INPUT_ARGUMENTS input_args = {
+        .SequenceControlDesc = {
+            .Flags = D3D12_VIDEO_ENCODER_SEQUENCE_CONTROL_FLAG_NONE,
+            .IntraRefreshConfig = { 0 },
+            .RateControl = ctx->rc,
+            .PictureTargetResolution = ctx->resolution,
+            .SelectedLayoutMode = D3D12_VIDEO_ENCODER_FRAME_SUBREGION_LAYOUT_MODE_FULL_FRAME,
+            .FrameSubregionsLayoutData = { 0 },
+            .CodecGopSequence = ctx->gop,
+        },
+        .pInputFrame = pic->input_surface->texture,
+        .InputFrameSubresource = 0,
+    };
+
+    D3D12_VIDEO_ENCODER_ENCODEFRAME_OUTPUT_ARGUMENTS output_args = { 0 };
+
+    D3D12_VIDEO_ENCODER_RESOLVE_METADATA_INPUT_ARGUMENTS input_metadata = {
+        .EncoderCodec = ctx->codec->d3d12_codec,
+        .EncoderProfile = ctx->profile->d3d12_profile,
+        .EncoderInputFormat = frames_hwctx->format,
+        .EncodedPictureEffectiveResolution = ctx->resolution,
+    };
+
+    D3D12_VIDEO_ENCODER_RESOLVE_METADATA_OUTPUT_ARGUMENTS output_metadata = { 0 };
+
+    memset(data, 0, sizeof(data));
+
+    av_log(avctx, AV_LOG_DEBUG, "Issuing encode for pic %"PRId64"/%"PRId64" "
+           "as type %s.\n", base_pic->display_order, base_pic->encode_order,
+           ff_hw_base_encode_get_pictype_name(base_pic->type));
+    if (base_pic->nb_refs[0] == 0 && base_pic->nb_refs[1] == 0) {
+        av_log(avctx, AV_LOG_DEBUG, "No reference pictures.\n");
+    } else {
+        av_log(avctx, AV_LOG_DEBUG, "L0 refers to");
+        for (i = 0; i < base_pic->nb_refs[0]; i++) {
+            av_log(avctx, AV_LOG_DEBUG, " %"PRId64"/%"PRId64,
+                   base_pic->refs[0][i]->display_order, base_pic->refs[0][i]->encode_order);
+        }
+        av_log(avctx, AV_LOG_DEBUG, ".\n");
+
+        if (base_pic->nb_refs[1]) {
+            av_log(avctx, AV_LOG_DEBUG, "L1 refers to");
+            for (i = 0; i < base_pic->nb_refs[1]; i++) {
+                av_log(avctx, AV_LOG_DEBUG, " %"PRId64"/%"PRId64,
+                       base_pic->refs[1][i]->display_order, base_pic->refs[1][i]->encode_order);
+            }
+            av_log(avctx, AV_LOG_DEBUG, ".\n");
+        }
+    }
+
+    av_assert0(!base_pic->encode_issued);
+    for (i = 0; i < base_pic->nb_refs[0]; i++) {
+        av_assert0(base_pic->refs[0][i]);
+        av_assert0(base_pic->refs[0][i]->encode_issued);
+    }
+    for (i = 0; i < base_pic->nb_refs[1]; i++) {
+        av_assert0(base_pic->refs[1][i]);
+        av_assert0(base_pic->refs[1][i]->encode_issued);
+    }
+
+    av_log(avctx, AV_LOG_DEBUG, "Input surface is %p.\n", pic->input_surface->texture);
+
+    err = av_hwframe_get_buffer(base_ctx->recon_frames_ref, base_pic->recon_image, 0);
+    if (err < 0) {
+        err = AVERROR(ENOMEM);
+        goto fail;
+    }
+
+    pic->recon_surface = (AVD3D12VAFrame *)base_pic->recon_image->data[0];
+    av_log(avctx, AV_LOG_DEBUG, "Recon surface is %p.\n",
+           pic->recon_surface->texture);
+
+    pic->output_buffer_ref = av_buffer_pool_get(ctx->output_buffer_pool);
+    if (!pic->output_buffer_ref) {
+        err = AVERROR(ENOMEM);
+        goto fail;
+    }
+    pic->output_buffer = (ID3D12Resource *)pic->output_buffer_ref->data;
+    av_log(avctx, AV_LOG_DEBUG, "Output buffer is %p.\n",
+           pic->output_buffer);
+
+    err = d3d12va_encode_create_metadata_buffers(avctx, pic);
+    if (err < 0)
+        goto fail;
+
+    if (ctx->codec->init_picture_params) {
+        err = ctx->codec->init_picture_params(avctx, pic);
+        if (err < 0) {
+            av_log(avctx, AV_LOG_ERROR, "Failed to initialise picture "
+                   "parameters: %d.\n", err);
+            goto fail;
+        }
+    }
+
+    if (base_pic->type == PICTURE_TYPE_IDR) {
+        if (ctx->codec->write_sequence_header) {
+            bit_len = 8 * sizeof(data);
+            err = ctx->codec->write_sequence_header(avctx, data, &bit_len);
+            if (err < 0) {
+                av_log(avctx, AV_LOG_ERROR, "Failed to write per-sequence "
+                       "header: %d.\n", err);
+                goto fail;
+            }
+        }
+
+        pic->header_size = (int)bit_len / 8;
+        pic->header_size = pic->header_size % ctx->req.CompressedBitstreamBufferAccessAlignment ?
+                           FFALIGN(pic->header_size, ctx->req.CompressedBitstreamBufferAccessAlignment) :
+                           pic->header_size;
+
+        hr = ID3D12Resource_Map(pic->output_buffer, 0, NULL, (void **)&ptr);
+        if (FAILED(hr)) {
+            err = AVERROR_UNKNOWN;
+            goto fail;
+        }
+
+        memcpy(ptr, data, pic->header_size);
+        ID3D12Resource_Unmap(pic->output_buffer, 0, NULL);
+    }
+
+    d3d12_refs.NumTexture2Ds = base_pic->nb_refs[0] + base_pic->nb_refs[1];
+    if (d3d12_refs.NumTexture2Ds) {
+        d3d12_refs.ppTexture2Ds = av_calloc(d3d12_refs.NumTexture2Ds,
+                                            sizeof(*d3d12_refs.ppTexture2Ds));
+        if (!d3d12_refs.ppTexture2Ds) {
+            err = AVERROR(ENOMEM);
+            goto fail;
+        }
+
+        i = 0;
+        for (j = 0; j < base_pic->nb_refs[0]; j++)
+            d3d12_refs.ppTexture2Ds[i++] = ((D3D12VAEncodePicture *)base_pic->refs[0][j])->recon_surface->texture;
+        for (j = 0; j < base_pic->nb_refs[1]; j++)
+            d3d12_refs.ppTexture2Ds[i++] = ((D3D12VAEncodePicture *)base_pic->refs[1][j])->recon_surface->texture;
+    }
+
+    input_args.PictureControlDesc.IntraRefreshFrameIndex  = 0;
+    if (base_pic->is_reference)
+        input_args.PictureControlDesc.Flags |= D3D12_VIDEO_ENCODER_PICTURE_CONTROL_FLAG_USED_AS_REFERENCE_PICTURE;
+
+    input_args.PictureControlDesc.PictureControlCodecData = pic->pic_ctl;
+    input_args.PictureControlDesc.ReferenceFrames         = d3d12_refs;
+    input_args.CurrentFrameBitstreamMetadataSize          = pic->header_size;
+
+    output_args.Bitstream.pBuffer                                    = pic->output_buffer;
+    output_args.Bitstream.FrameStartOffset                           = pic->header_size;
+    output_args.ReconstructedPicture.pReconstructedPicture           = pic->recon_surface->texture;
+    output_args.ReconstructedPicture.ReconstructedPictureSubresource = 0;
+    output_args.EncoderOutputMetadata.pBuffer                        = pic->encoded_metadata;
+    output_args.EncoderOutputMetadata.Offset                         = 0;
+
+    input_metadata.HWLayoutMetadata.pBuffer = pic->encoded_metadata;
+    input_metadata.HWLayoutMetadata.Offset  = 0;
+
+    output_metadata.ResolvedLayoutMetadata.pBuffer = pic->resolved_metadata;
+    output_metadata.ResolvedLayoutMetadata.Offset  = 0;
+
+    err = d3d12va_get_valid_command_allocator(avctx, &command_allocator);
+    if (err < 0)
+        goto fail;
+
+    hr = ID3D12CommandAllocator_Reset(command_allocator);
+    if (FAILED(hr)) {
+        err = AVERROR_UNKNOWN;
+        goto fail;
+    }
+
+    hr = ID3D12VideoEncodeCommandList2_Reset(cmd_list, command_allocator);
+    if (FAILED(hr)) {
+        err = AVERROR_UNKNOWN;
+        goto fail;
+    }
+
+#define TRANSITION_BARRIER(res, before, after)                      \
+    (D3D12_RESOURCE_BARRIER) {                                      \
+        .Type  = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION,            \
+        .Flags = D3D12_RESOURCE_BARRIER_FLAG_NONE,                  \
+        .Transition = {                                             \
+            .pResource   = res,                                     \
+            .Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES, \
+            .StateBefore = before,                                  \
+            .StateAfter  = after,                                   \
+        },                                                          \
+    }
+
+    barriers[0] = TRANSITION_BARRIER(pic->input_surface->texture,
+                                     D3D12_RESOURCE_STATE_COMMON,
+                                     D3D12_RESOURCE_STATE_VIDEO_ENCODE_READ);
+    barriers[1] = TRANSITION_BARRIER(pic->output_buffer,
+                                     D3D12_RESOURCE_STATE_COMMON,
+                                     D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE);
+    barriers[2] = TRANSITION_BARRIER(pic->recon_surface->texture,
+                                     D3D12_RESOURCE_STATE_COMMON,
+                                     D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE);
+    barriers[3] = TRANSITION_BARRIER(pic->encoded_metadata,
+                                     D3D12_RESOURCE_STATE_COMMON,
+                                     D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE);
+    barriers[4] = TRANSITION_BARRIER(pic->resolved_metadata,
+                                     D3D12_RESOURCE_STATE_COMMON,
+                                     D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE);
+
+    ID3D12VideoEncodeCommandList2_ResourceBarrier(cmd_list, 5, barriers);
+
+    if (d3d12_refs.NumTexture2Ds) {
+        D3D12_RESOURCE_BARRIER refs_barriers[3];
+
+        for (i = 0; i < d3d12_refs.NumTexture2Ds; i++)
+            refs_barriers[i] = TRANSITION_BARRIER(d3d12_refs.ppTexture2Ds[i],
+                                                  D3D12_RESOURCE_STATE_COMMON,
+                                                  D3D12_RESOURCE_STATE_VIDEO_ENCODE_READ);
+
+        ID3D12VideoEncodeCommandList2_ResourceBarrier(cmd_list, d3d12_refs.NumTexture2Ds,
+                                                      refs_barriers);
+    }
+
+    ID3D12VideoEncodeCommandList2_EncodeFrame(cmd_list, ctx->encoder, ctx->encoder_heap,
+                                              &input_args, &output_args);
+
+    barriers[3] = TRANSITION_BARRIER(pic->encoded_metadata,
+                                     D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE,
+                                     D3D12_RESOURCE_STATE_VIDEO_ENCODE_READ);
+
+    ID3D12VideoEncodeCommandList2_ResourceBarrier(cmd_list, 1, &barriers[3]);
+
+    ID3D12VideoEncodeCommandList2_ResolveEncoderOutputMetadata(cmd_list, &input_metadata, &output_metadata);
+
+    if (d3d12_refs.NumTexture2Ds) {
+        D3D12_RESOURCE_BARRIER refs_barriers[3];
+
+        for (i = 0; i < d3d12_refs.NumTexture2Ds; i++)
+            refs_barriers[i] = TRANSITION_BARRIER(d3d12_refs.ppTexture2Ds[i],
+                                                  D3D12_RESOURCE_STATE_VIDEO_ENCODE_READ,
+                                                  D3D12_RESOURCE_STATE_COMMON);
+
+        ID3D12VideoEncodeCommandList2_ResourceBarrier(cmd_list, d3d12_refs.NumTexture2Ds,
+                                                      refs_barriers);
+    }
+
+    barriers[0] = TRANSITION_BARRIER(pic->input_surface->texture,
+                                     D3D12_RESOURCE_STATE_VIDEO_ENCODE_READ,
+                                     D3D12_RESOURCE_STATE_COMMON);
+    barriers[1] = TRANSITION_BARRIER(pic->output_buffer,
+                                     D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE,
+                                     D3D12_RESOURCE_STATE_COMMON);
+    barriers[2] = TRANSITION_BARRIER(pic->recon_surface->texture,
+                                     D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE,
+                                     D3D12_RESOURCE_STATE_COMMON);
+    barriers[3] = TRANSITION_BARRIER(pic->encoded_metadata,
+                                     D3D12_RESOURCE_STATE_VIDEO_ENCODE_READ,
+                                     D3D12_RESOURCE_STATE_COMMON);
+    barriers[4] = TRANSITION_BARRIER(pic->resolved_metadata,
+                                     D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE,
+                                     D3D12_RESOURCE_STATE_COMMON);
+
+    ID3D12VideoEncodeCommandList2_ResourceBarrier(cmd_list, 5, barriers);
+
+    hr = ID3D12VideoEncodeCommandList2_Close(cmd_list);
+    if (FAILED(hr)) {
+        err = AVERROR_UNKNOWN;
+        goto fail;
+    }
+
+    hr = ID3D12CommandQueue_Wait(ctx->command_queue, pic->input_surface->sync_ctx.fence,
+                                 pic->input_surface->sync_ctx.fence_value);
+    if (FAILED(hr)) {
+        err = AVERROR_UNKNOWN;
+        goto fail;
+    }
+
+    ID3D12CommandQueue_ExecuteCommandLists(ctx->command_queue, 1, (ID3D12CommandList **)&ctx->command_list);
+
+    hr = ID3D12CommandQueue_Signal(ctx->command_queue, pic->input_surface->sync_ctx.fence,
+                                   ++pic->input_surface->sync_ctx.fence_value);
+    if (FAILED(hr)) {
+        err = AVERROR_UNKNOWN;
+        goto fail;
+    }
+
+    hr = ID3D12CommandQueue_Signal(ctx->command_queue, ctx->sync_ctx.fence, ++ctx->sync_ctx.fence_value);
+    if (FAILED(hr)) {
+        err = AVERROR_UNKNOWN;
+        goto fail;
+    }
+
+    err = d3d12va_discard_command_allocator(avctx, command_allocator, ctx->sync_ctx.fence_value);
+    if (err < 0)
+        goto fail;
+
+    pic->fence_value = ctx->sync_ctx.fence_value;
+
+    if (d3d12_refs.ppTexture2Ds)
+        av_freep(&d3d12_refs.ppTexture2Ds);
+
+    return 0;
+
+fail:
+    if (command_allocator)
+        d3d12va_discard_command_allocator(avctx, command_allocator, ctx->sync_ctx.fence_value);
+
+    if (d3d12_refs.ppTexture2Ds)
+        av_freep(&d3d12_refs.ppTexture2Ds);
+
+    if (ctx->codec->free_picture_params)
+        ctx->codec->free_picture_params(pic);
+
+    av_buffer_unref(&pic->output_buffer_ref);
+    pic->output_buffer = NULL;
+    D3D12_OBJECT_RELEASE(pic->encoded_metadata);
+    D3D12_OBJECT_RELEASE(pic->resolved_metadata);
+    return err;
+}
+
+static int d3d12va_encode_discard(AVCodecContext *avctx,
+                                  D3D12VAEncodePicture *pic)
+{
+    HWBaseEncodePicture *base_pic = (HWBaseEncodePicture *)pic;
+    d3d12va_encode_wait(avctx, pic);
+
+    if (pic->output_buffer_ref) {
+        av_log(avctx, AV_LOG_DEBUG, "Discard output for pic "
+               "%"PRId64"/%"PRId64".\n",
+               base_pic->display_order, base_pic->encode_order);
+
+        av_buffer_unref(&pic->output_buffer_ref);
+        pic->output_buffer = NULL;
+    }
+
+    D3D12_OBJECT_RELEASE(pic->encoded_metadata);
+    D3D12_OBJECT_RELEASE(pic->resolved_metadata);
+
+    return 0;
+}
+
+static int d3d12va_encode_free_rc_params(AVCodecContext *avctx)
+{
+    D3D12VAEncodeContext *ctx = avctx->priv_data;
+
+    switch (ctx->rc.Mode) {
+    case D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_CQP:
+        av_freep(&ctx->rc.ConfigParams.pConfiguration_CQP);
+        break;
+    case D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_CBR:
+        av_freep(&ctx->rc.ConfigParams.pConfiguration_CBR);
+        break;
+    case D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_VBR:
+        av_freep(&ctx->rc.ConfigParams.pConfiguration_VBR);
+        break;
+    case D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_QVBR:
+        av_freep(&ctx->rc.ConfigParams.pConfiguration_QVBR);
+        break;
+    default:
+        break;
+    }
+
+    return 0;
+}
+
+static HWBaseEncodePicture *d3d12va_encode_alloc(AVCodecContext *avctx,
+                                                 const AVFrame *frame)
+{
+    D3D12VAEncodeContext *ctx = avctx->priv_data;
+    D3D12VAEncodePicture *pic;
+
+    pic = av_mallocz(sizeof(*pic));
+    if (!pic)
+        return NULL;
+
+    if (ctx->codec->picture_priv_data_size > 0) {
+        pic->base.priv_data = av_mallocz(ctx->codec->picture_priv_data_size);
+        if (!pic->base.priv_data) {
+            av_freep(&pic);
+            return NULL;
+        }
+    }
+
+    pic->input_surface = (AVD3D12VAFrame *)frame->data[0];
+
+    return (HWBaseEncodePicture *)pic;
+}
+
+static int d3d12va_encode_free(AVCodecContext *avctx,
+                               HWBaseEncodePicture *base_pic)
+{
+    D3D12VAEncodeContext *ctx = avctx->priv_data;
+    D3D12VAEncodePicture *pic = (D3D12VAEncodePicture *)base_pic;
+
+    if (base_pic->encode_issued)
+        d3d12va_encode_discard(avctx, pic);
+
+    if (ctx->codec->free_picture_params)
+        ctx->codec->free_picture_params(pic);
+
+    ff_hw_base_encode_free(avctx, base_pic);
+
+    av_free(pic);
+
+    return 0;
+}
+
+static int d3d12va_encode_get_buffer_size(AVCodecContext *avctx,
+                                          D3D12VAEncodePicture *pic, size_t *size)
+{
+    D3D12_VIDEO_ENCODER_OUTPUT_METADATA *meta = NULL;
+    uint8_t *data;
+    HRESULT hr;
+    int err;
+
+    hr = ID3D12Resource_Map(pic->resolved_metadata, 0, NULL, (void **)&data);
+    if (FAILED(hr)) {
+        err = AVERROR_UNKNOWN;
+        return err;
+    }
+
+    meta = (D3D12_VIDEO_ENCODER_OUTPUT_METADATA *)data;
+
+    if (meta->EncodeErrorFlags != D3D12_VIDEO_ENCODER_ENCODE_ERROR_FLAG_NO_ERROR) {
+        av_log(avctx, AV_LOG_ERROR, "Encode failed %"PRIu64"\n", meta->EncodeErrorFlags);
+        err = AVERROR(EINVAL);
+        goto fail;
+    }
+
+    if (meta->EncodedBitstreamWrittenBytesCount == 0) {
+        av_log(avctx, AV_LOG_ERROR, "No bytes were written to encoded bitstream\n");
+        err = AVERROR(EINVAL);
+        goto fail;
+    }
+
+    *size = meta->EncodedBitstreamWrittenBytesCount;
+
+    ID3D12Resource_Unmap(pic->resolved_metadata, 0, NULL);
+
+    return 0;
+
+fail:
+    // Do not leave the metadata buffer mapped on the error paths.
+    ID3D12Resource_Unmap(pic->resolved_metadata, 0, NULL);
+    return err;
+}
+
+static int d3d12va_encode_get_coded_data(AVCodecContext *avctx,
+                                         D3D12VAEncodePicture *pic, AVPacket *pkt)
+{
+    int err;
+    uint8_t *ptr, *mapped_data;
+    size_t total_size = 0;
+    HRESULT hr;
+
+    err = d3d12va_encode_get_buffer_size(avctx, pic, &total_size);
+    if (err < 0)
+        goto end;
+
+    total_size += pic->header_size;
+    av_log(avctx, AV_LOG_DEBUG, "Output buffer size %"PRId64"\n",
+           (int64_t)total_size);
+
+    hr = ID3D12Resource_Map(pic->output_buffer, 0, NULL, (void **)&mapped_data);
+    if (FAILED(hr)) {
+        err = AVERROR_UNKNOWN;
+        goto end;
+    }
+
+    err = ff_get_encode_buffer(avctx, pkt, total_size, 0);
+    if (err < 0) {
+        // Do not leave the output buffer mapped on the error path.
+        ID3D12Resource_Unmap(pic->output_buffer, 0, NULL);
+        goto end;
+    }
+    ptr = pkt->data;
+
+    memcpy(ptr, mapped_data, total_size);
+
+    ID3D12Resource_Unmap(pic->output_buffer, 0, NULL);
+
+end:
+    av_buffer_unref(&pic->output_buffer_ref);
+    pic->output_buffer = NULL;
+    return err;
+}
+
+static int d3d12va_encode_output(AVCodecContext *avctx,
+                                 const HWBaseEncodePicture *base_pic, AVPacket *pkt)
+{
+    D3D12VAEncodePicture *pic = (D3D12VAEncodePicture *)base_pic;
+    int err;
+
+    err = d3d12va_encode_wait(avctx, pic);
+    if (err < 0)
+        return err;
+
+    err = d3d12va_encode_get_coded_data(avctx, pic, pkt);
+    if (err < 0)
+        return err;
+
+    av_log(avctx, AV_LOG_DEBUG, "Output read for pic %"PRId64"/%"PRId64".\n",
+           base_pic->display_order, base_pic->encode_order);
+
+    ff_hw_base_encode_set_output_property(avctx, (HWBaseEncodePicture *)base_pic, pkt, 0);
+
+    return 0;
+}
+
+static int d3d12va_encode_set_profile(AVCodecContext *avctx)
+{
+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
+    D3D12VAEncodeContext *ctx     = avctx->priv_data;
+    const D3D12VAEncodeProfile *profile;
+    const AVPixFmtDescriptor *desc;
+    int i, depth;
+
+    desc = av_pix_fmt_desc_get(base_ctx->input_frames->sw_format);
+    if (!desc) {
+        av_log(avctx, AV_LOG_ERROR, "Invalid input pixfmt (%d).\n",
+               base_ctx->input_frames->sw_format);
+        return AVERROR(EINVAL);
+    }
+
+    depth = desc->comp[0].depth;
+    for (i = 1; i < desc->nb_components; i++) {
+        if (desc->comp[i].depth != depth) {
+            av_log(avctx, AV_LOG_ERROR, "Invalid input pixfmt (%s).\n",
+                   desc->name);
+            return AVERROR(EINVAL);
+        }
+    }
+    av_log(avctx, AV_LOG_VERBOSE, "Input surface format is %s.\n",
+           desc->name);
+
+    av_assert0(ctx->codec->profiles);
+    for (i = 0; (ctx->codec->profiles[i].av_profile !=
+                 AV_PROFILE_UNKNOWN); i++) {
+        profile = &ctx->codec->profiles[i];
+        if (depth               != profile->depth ||
+            desc->nb_components != profile->nb_components)
+            continue;
+        if (desc->nb_components > 1 &&
+            (desc->log2_chroma_w != profile->log2_chroma_w ||
+             desc->log2_chroma_h != profile->log2_chroma_h))
+            continue;
+        if (avctx->profile != profile->av_profile &&
+            avctx->profile != AV_PROFILE_UNKNOWN)
+            continue;
+
+        ctx->profile = profile;
+        break;
+    }
+    if (!ctx->profile) {
+        av_log(avctx, AV_LOG_ERROR, "No usable encoding profile found.\n");
+        return AVERROR(ENOSYS);
+    }
+
+    avctx->profile = ctx->profile->av_profile;
+    return 0;
+}
+
+static const D3D12VAEncodeRCMode d3d12va_encode_rc_modes[] = {
+    //                   Bitrate   Quality
+    //                      | Maxrate | HRD/VBV
+    { 0 }, //               |    |    |    |
+    { RC_MODE_CQP,  "CQP",  0,   0,   1,   0, 1, D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_CQP },
+    { RC_MODE_CBR,  "CBR",  1,   0,   0,   1, 1, D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_CBR },
+    { RC_MODE_VBR,  "VBR",  1,   1,   0,   1, 1, D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_VBR },
+    { RC_MODE_QVBR, "QVBR", 1,   1,   1,   1, 1, D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_QVBR },
+};
+
+static int check_rate_control_support(AVCodecContext *avctx, const D3D12VAEncodeRCMode *rc_mode)
+{
+    HRESULT hr;
+    D3D12VAEncodeContext *ctx = avctx->priv_data;
+    D3D12_FEATURE_DATA_VIDEO_ENCODER_RATE_CONTROL_MODE d3d12_rc_mode = {
+        .Codec = ctx->codec->d3d12_codec,
+    };
+
+    if (!rc_mode->d3d12_mode)
+        return 0;
+
+    d3d12_rc_mode.IsSupported = 0;
+    d3d12_rc_mode.RateControlMode = rc_mode->d3d12_mode;
+
+    hr = ID3D12VideoDevice3_CheckFeatureSupport(ctx->video_device3,
+                                                D3D12_FEATURE_VIDEO_ENCODER_RATE_CONTROL_MODE,
+                                                &d3d12_rc_mode, sizeof(d3d12_rc_mode));
+    if (FAILED(hr)) {
+        av_log(avctx, AV_LOG_ERROR, "Failed to check rate control support.\n");
+        return 0;
+    }
+
+    return d3d12_rc_mode.IsSupported;
+}
+
+static int d3d12va_encode_init_rate_control(AVCodecContext *avctx)
+{
+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
+    D3D12VAEncodeContext     *ctx = avctx->priv_data;
+    int64_t rc_target_bitrate;
+    int64_t rc_peak_bitrate;
+    int     rc_quality;
+    int64_t hrd_buffer_size;
+    int64_t hrd_initial_buffer_fullness;
+    int fr_num, fr_den;
+    const D3D12VAEncodeRCMode *rc_mode;
+
+    // Rate control mode selection:
+    // * If the user has set a mode explicitly with the rc_mode option,
+    //   use it and fail if it is not available.
+    // * If an explicit QP option has been set, use CQP.
+    // * If the codec is CQ-only, use CQP.
+    // * If the QSCALE avcodec option is set, use CQP.
+    // * If bitrate and quality are both set, try QVBR.
+    // * If quality is set, try CQP.
+    // * If bitrate and maxrate are set and have the same value, try CBR.
+    // * If a bitrate is set, try VBR, then CBR.
+    // * If no bitrate is set, try CQP.
+
+#define TRY_RC_MODE(mode, fail) do { \
+        rc_mode = &d3d12va_encode_rc_modes[mode]; \
+        if (!(rc_mode->d3d12_mode && check_rate_control_support(avctx, rc_mode))) { \
+            if (fail) { \
+                av_log(avctx, AV_LOG_ERROR, "Driver does not support %s " \
+                       "RC mode.\n", rc_mode->name); \
+                return AVERROR(EINVAL); \
+            } \
+            av_log(avctx, AV_LOG_DEBUG, "Driver does not support %s " \
+                   "RC mode.\n", rc_mode->name); \
+            rc_mode = NULL; \
+        } else { \
+            goto rc_mode_found; \
+        } \
+    } while (0)
+
+    if (base_ctx->explicit_rc_mode)
+        TRY_RC_MODE(base_ctx->explicit_rc_mode, 1);
+
+    if (base_ctx->explicit_qp)
+        TRY_RC_MODE(RC_MODE_CQP, 1);
+
+    if (ctx->codec->flags & FLAG_CONSTANT_QUALITY_ONLY)
+        TRY_RC_MODE(RC_MODE_CQP, 1);
+
+    if (avctx->flags & AV_CODEC_FLAG_QSCALE)
+        TRY_RC_MODE(RC_MODE_CQP, 1);
+
+    if (avctx->bit_rate > 0 && avctx->global_quality > 0)
+        TRY_RC_MODE(RC_MODE_QVBR, 0);
+
+    if (avctx->global_quality > 0)
+        TRY_RC_MODE(RC_MODE_CQP, 0);
+
+    if (avctx->bit_rate > 0 && avctx->rc_max_rate == avctx->bit_rate)
+        TRY_RC_MODE(RC_MODE_CBR, 0);
+
+    if (avctx->bit_rate > 0) {
+        TRY_RC_MODE(RC_MODE_VBR, 0);
+        TRY_RC_MODE(RC_MODE_CBR, 0);
+    } else {
+        TRY_RC_MODE(RC_MODE_CQP, 0);
+    }
+
+    av_log(avctx, AV_LOG_ERROR, "Driver does not support any "
+           "RC mode compatible with selected options.\n");
+    return AVERROR(EINVAL);
+
+rc_mode_found:
+    if (rc_mode->bitrate) {
+        if (avctx->bit_rate <= 0) {
+            av_log(avctx, AV_LOG_ERROR, "Bitrate must be set for %s "
+                   "RC mode.\n", rc_mode->name);
+            return AVERROR(EINVAL);
+        }
+
+        if (rc_mode->maxrate) {
+            if (avctx->rc_max_rate > 0) {
+                if (avctx->rc_max_rate < avctx->bit_rate) {
+                    av_log(avctx, AV_LOG_ERROR, "Invalid bitrate settings: "
+                           "bitrate (%"PRId64") must not be greater than "
+                           "maxrate (%"PRId64").\n", avctx->bit_rate,
+                           avctx->rc_max_rate);
+                    return AVERROR(EINVAL);
+                }
+                rc_target_bitrate = avctx->bit_rate;
+                rc_peak_bitrate   = avctx->rc_max_rate;
+            } else {
+                // We only have a target bitrate, but this mode requires
+                // that a maximum rate be supplied as well.  Since the
+                // user does not want this to be a constraint, arbitrarily
+                // pick a maximum rate of double the target rate.
+                rc_target_bitrate = avctx->bit_rate;
+                rc_peak_bitrate   = 2 * avctx->bit_rate;
+            }
+        } else {
+            if (avctx->rc_max_rate > avctx->bit_rate) {
+                av_log(avctx, AV_LOG_WARNING, "Max bitrate is ignored "
+                       "in %s RC mode.\n", rc_mode->name);
+            }
+            rc_target_bitrate = avctx->bit_rate;
+            rc_peak_bitrate   = 0;
+        }
+    } else {
+        rc_target_bitrate = 0;
+        rc_peak_bitrate   = 0;
+    }
+
+    if (rc_mode->quality) {
+        if (base_ctx->explicit_qp) {
+            rc_quality = base_ctx->explicit_qp;
+        } else if (avctx->global_quality > 0) {
+            rc_quality = avctx->global_quality;
+        } else {
+            rc_quality = ctx->codec->default_quality;
+            av_log(avctx, AV_LOG_WARNING, "No quality level set; "
+                   "using default (%d).\n", rc_quality);
+        }
+    } else {
+        rc_quality = 0;
+    }
+
+    if (rc_mode->hrd) {
+        if (avctx->rc_buffer_size)
+            hrd_buffer_size = avctx->rc_buffer_size;
+        else if (avctx->rc_max_rate > 0)
+            hrd_buffer_size = avctx->rc_max_rate;
+        else
+            hrd_buffer_size = avctx->bit_rate;
+        if (avctx->rc_initial_buffer_occupancy) {
+            if (avctx->rc_initial_buffer_occupancy > hrd_buffer_size) {
+                av_log(avctx, AV_LOG_ERROR, "Invalid RC buffer settings: "
+                       "must have initial buffer size (%d) <= "
+                       "buffer size (%"PRId64").\n",
+                       avctx->rc_initial_buffer_occupancy, hrd_buffer_size);
+                return AVERROR(EINVAL);
+            }
+            hrd_initial_buffer_fullness = avctx->rc_initial_buffer_occupancy;
+        } else {
+            hrd_initial_buffer_fullness = hrd_buffer_size * 3 / 4;
+        }
+    } else {
+        if (avctx->rc_buffer_size || avctx->rc_initial_buffer_occupancy) {
+            av_log(avctx, AV_LOG_WARNING, "Buffering settings are ignored "
+                   "in %s RC mode.\n", rc_mode->name);
+        }
+
+        hrd_buffer_size             = 0;
+        hrd_initial_buffer_fullness = 0;
+    }
+
+    if (rc_target_bitrate           > UINT32_MAX ||
+        hrd_buffer_size             > UINT32_MAX ||
+        hrd_initial_buffer_fullness > UINT32_MAX) {
+        av_log(avctx, AV_LOG_ERROR, "RC parameters of 2^32 or "
+               "greater are not supported by D3D12.\n");
+        return AVERROR(EINVAL);
+    }
+
+    base_ctx->rc_quality  = rc_quality;
+
+    av_log(avctx, AV_LOG_VERBOSE, "RC mode: %s.\n", rc_mode->name);
+
+    if (rc_mode->quality)
+        av_log(avctx, AV_LOG_VERBOSE, "RC quality: %d.\n", rc_quality);
+
+    if (rc_mode->hrd) {
+        av_log(avctx, AV_LOG_VERBOSE, "RC buffer: %"PRId64" bits, "
+               "initial fullness %"PRId64" bits.\n",
+               hrd_buffer_size, hrd_initial_buffer_fullness);
+    }
+
+    if (avctx->framerate.num > 0 && avctx->framerate.den > 0)
+        av_reduce(&fr_num, &fr_den,
+                  avctx->framerate.num, avctx->framerate.den, 65535);
+    else
+        av_reduce(&fr_num, &fr_den,
+                  avctx->time_base.den, avctx->time_base.num, 65535);
+
+    av_log(avctx, AV_LOG_VERBOSE, "RC framerate: %d/%d (%.2f fps).\n",
+           fr_num, fr_den, (double)fr_num / fr_den);
+
+    ctx->rc.Flags                       = D3D12_VIDEO_ENCODER_RATE_CONTROL_FLAG_NONE;
+    ctx->rc.TargetFrameRate.Numerator   = fr_num;
+    ctx->rc.TargetFrameRate.Denominator = fr_den;
+    ctx->rc.Mode                        = rc_mode->d3d12_mode;
+
+    switch (rc_mode->mode) {
+        case RC_MODE_CQP:
+            // cqp ConfigParams will be updated in ctx->codec->configure.
+            break;
+
+        case RC_MODE_CBR: {
+            D3D12_VIDEO_ENCODER_RATE_CONTROL_CBR *cbr_ctl;
+
+            ctx->rc.ConfigParams.DataSize = sizeof(D3D12_VIDEO_ENCODER_RATE_CONTROL_CBR);
+            cbr_ctl = av_mallocz(ctx->rc.ConfigParams.DataSize);
+            if (!cbr_ctl)
+                return AVERROR(ENOMEM);
+
+            cbr_ctl->TargetBitRate      = rc_target_bitrate;
+            cbr_ctl->VBVCapacity        = hrd_buffer_size;
+            cbr_ctl->InitialVBVFullness = hrd_initial_buffer_fullness;
+            ctx->rc.Flags |= D3D12_VIDEO_ENCODER_RATE_CONTROL_FLAG_ENABLE_VBV_SIZES;
+
+            if (avctx->qmin > 0 || avctx->qmax > 0) {
+                cbr_ctl->MinQP = avctx->qmin;
+                cbr_ctl->MaxQP = avctx->qmax;
+                ctx->rc.Flags |= D3D12_VIDEO_ENCODER_RATE_CONTROL_FLAG_ENABLE_QP_RANGE;
+            }
+
+            ctx->rc.ConfigParams.pConfiguration_CBR = cbr_ctl;
+            break;
+        }
+
+        case RC_MODE_VBR: {
+            D3D12_VIDEO_ENCODER_RATE_CONTROL_VBR *vbr_ctl;
+
+            ctx->rc.ConfigParams.DataSize = sizeof(D3D12_VIDEO_ENCODER_RATE_CONTROL_VBR);
+            vbr_ctl = av_mallocz(ctx->rc.ConfigParams.DataSize);
+            if (!vbr_ctl)
+                return AVERROR(ENOMEM);
+
+            vbr_ctl->TargetAvgBitRate   = rc_target_bitrate;
+            vbr_ctl->PeakBitRate        = rc_peak_bitrate;
+            vbr_ctl->VBVCapacity        = hrd_buffer_size;
+            vbr_ctl->InitialVBVFullness = hrd_initial_buffer_fullness;
+            ctx->rc.Flags |= D3D12_VIDEO_ENCODER_RATE_CONTROL_FLAG_ENABLE_VBV_SIZES;
+
+            if (avctx->qmin > 0 || avctx->qmax > 0) {
+                vbr_ctl->MinQP = avctx->qmin;
+                vbr_ctl->MaxQP = avctx->qmax;
+                ctx->rc.Flags |= D3D12_VIDEO_ENCODER_RATE_CONTROL_FLAG_ENABLE_QP_RANGE;
+            }
+
+            ctx->rc.ConfigParams.pConfiguration_VBR = vbr_ctl;
+            break;
+        }
+
+        case RC_MODE_QVBR: {
+            D3D12_VIDEO_ENCODER_RATE_CONTROL_QVBR *qvbr_ctl;
+
+            ctx->rc.ConfigParams.DataSize = sizeof(D3D12_VIDEO_ENCODER_RATE_CONTROL_QVBR);
+            qvbr_ctl = av_mallocz(ctx->rc.ConfigParams.DataSize);
+            if (!qvbr_ctl)
+                return AVERROR(ENOMEM);
+
+            qvbr_ctl->TargetAvgBitRate      = rc_target_bitrate;
+            qvbr_ctl->PeakBitRate           = rc_peak_bitrate;
+            qvbr_ctl->ConstantQualityTarget = rc_quality;
+
+            if (avctx->qmin > 0 || avctx->qmax > 0) {
+                qvbr_ctl->MinQP = avctx->qmin;
+                qvbr_ctl->MaxQP = avctx->qmax;
+                ctx->rc.Flags |= D3D12_VIDEO_ENCODER_RATE_CONTROL_FLAG_ENABLE_QP_RANGE;
+            }
+
+            ctx->rc.ConfigParams.pConfiguration_QVBR = qvbr_ctl;
+            break;
+        }
+
+        default:
+            break;
+    }
+    return 0;
+}
+
+static int d3d12va_encode_init_gop_structure(AVCodecContext *avctx)
+{
+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
+    D3D12VAEncodeContext     *ctx = avctx->priv_data;
+    uint32_t ref_l0, ref_l1;
+    int err;
+    HRESULT hr;
+    D3D12_FEATURE_DATA_VIDEO_ENCODER_CODEC_PICTURE_CONTROL_SUPPORT support;
+    union {
+        D3D12_VIDEO_ENCODER_CODEC_PICTURE_CONTROL_SUPPORT_H264 h264;
+        D3D12_VIDEO_ENCODER_CODEC_PICTURE_CONTROL_SUPPORT_HEVC hevc;
+    } codec_support;
+
+    support.NodeIndex = 0;
+    support.Codec     = ctx->codec->d3d12_codec;
+    support.Profile   = ctx->profile->d3d12_profile;
+
+    switch (ctx->codec->d3d12_codec) {
+        case D3D12_VIDEO_ENCODER_CODEC_H264:
+            support.PictureSupport.DataSize = sizeof(codec_support.h264);
+            support.PictureSupport.pH264Support = &codec_support.h264;
+            break;
+
+        case D3D12_VIDEO_ENCODER_CODEC_HEVC:
+            support.PictureSupport.DataSize = sizeof(codec_support.hevc);
+            support.PictureSupport.pHEVCSupport = &codec_support.hevc;
+            break;
+    }
+
+    hr = ID3D12VideoDevice3_CheckFeatureSupport(ctx->video_device3,
+                                                D3D12_FEATURE_VIDEO_ENCODER_CODEC_PICTURE_CONTROL_SUPPORT,
+                                                &support, sizeof(support));
+    if (FAILED(hr))
+        return AVERROR(EINVAL);
+
+    if (support.IsSupported) {
+        switch (ctx->codec->d3d12_codec) {
+            case D3D12_VIDEO_ENCODER_CODEC_H264:
+                ref_l0 = FFMIN(support.PictureSupport.pH264Support->MaxL0ReferencesForP,
+                               support.PictureSupport.pH264Support->MaxL1ReferencesForB);
+                ref_l1 = support.PictureSupport.pH264Support->MaxL1ReferencesForB;
+                break;
+
+            case D3D12_VIDEO_ENCODER_CODEC_HEVC:
+                ref_l0 = FFMIN(support.PictureSupport.pHEVCSupport->MaxL0ReferencesForP,
+                               support.PictureSupport.pHEVCSupport->MaxL1ReferencesForB);
+                ref_l1 = support.PictureSupport.pHEVCSupport->MaxL1ReferencesForB;
+                break;
+        }
+    } else {
+        ref_l0 = ref_l1 = 0;
+    }
+
+    if (ref_l0 > 0 && ref_l1 > 0 && ctx->bi_not_empty) {
+        base_ctx->p_to_gpb = 1;
+        av_log(avctx, AV_LOG_VERBOSE, "Driver does not support P-frames, "
+               "replacing them with B-frames.\n");
+    }
+
+    err = ff_hw_base_init_gop_structure(avctx, ref_l0, ref_l1, ctx->codec->flags, 0);
+    if (err < 0)
+        return err;
+
+    return 0;
+}
+
+static int d3d12va_create_encoder(AVCodecContext *avctx)
+{
+    HWBaseEncodeContext    *base_ctx     = avctx->priv_data;
+    D3D12VAEncodeContext   *ctx          = avctx->priv_data;
+    AVD3D12VAFramesContext *frames_hwctx = base_ctx->input_frames->hwctx;
+    HRESULT hr;
+
+    D3D12_VIDEO_ENCODER_DESC desc = {
+        .NodeMask                     = 0,
+        .Flags                        = D3D12_VIDEO_ENCODER_FLAG_NONE,
+        .EncodeCodec                  = ctx->codec->d3d12_codec,
+        .EncodeProfile                = ctx->profile->d3d12_profile,
+        .InputFormat                  = frames_hwctx->format,
+        .CodecConfiguration           = ctx->codec_conf,
+        .MaxMotionEstimationPrecision = D3D12_VIDEO_ENCODER_MOTION_ESTIMATION_PRECISION_MODE_MAXIMUM,
+    };
+
+    hr = ID3D12VideoDevice3_CreateVideoEncoder(ctx->video_device3, &desc, &IID_ID3D12VideoEncoder,
+                                               (void **)&ctx->encoder);
+    if (FAILED(hr)) {
+        av_log(avctx, AV_LOG_ERROR, "Failed to create encoder.\n");
+        return AVERROR(EINVAL);
+    }
+
+    return 0;
+}
+
+static int d3d12va_create_encoder_heap(AVCodecContext* avctx)
+{
+    D3D12VAEncodeContext *ctx = avctx->priv_data;
+    HRESULT hr;
+
+    D3D12_VIDEO_ENCODER_HEAP_DESC desc = {
+        .NodeMask             = 0,
+        .Flags                = D3D12_VIDEO_ENCODER_FLAG_NONE,
+        .EncodeCodec          = ctx->codec->d3d12_codec,
+        .EncodeProfile        = ctx->profile->d3d12_profile,
+        .EncodeLevel          = ctx->level,
+        .ResolutionsListCount = 1,
+        .pResolutionList      = &ctx->resolution,
+    };
+
+    hr = ID3D12VideoDevice3_CreateVideoEncoderHeap(ctx->video_device3, &desc,
+                                                   &IID_ID3D12VideoEncoderHeap, (void **)&ctx->encoder_heap);
+    if (FAILED(hr)) {
+        av_log(avctx, AV_LOG_ERROR, "Failed to create encoder heap.\n");
+        return AVERROR(EINVAL);
+    }
+
+    return 0;
+}
+
+static void d3d12va_encode_free_buffer(void *opaque, uint8_t *data)
+{
+    ID3D12Resource *pResource;
+
+    pResource = (ID3D12Resource *)data;
+    D3D12_OBJECT_RELEASE(pResource);
+}
+
+static AVBufferRef *d3d12va_encode_alloc_output_buffer(void *opaque, size_t size)
+{
+    AVCodecContext     *avctx = opaque;
+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
+    D3D12VAEncodeContext     *ctx = avctx->priv_data;
+    ID3D12Resource *pResource = NULL;
+    HRESULT hr;
+    AVBufferRef *ref;
+    D3D12_HEAP_PROPERTIES heap_props;
+    D3D12_HEAP_TYPE heap_type = D3D12_HEAP_TYPE_READBACK;
+
+    D3D12_RESOURCE_DESC desc = {
+        .Dimension        = D3D12_RESOURCE_DIMENSION_BUFFER,
+        .Alignment        = 0,
+        .Width            = FFALIGN(3 * base_ctx->surface_width * base_ctx->surface_height + (1 << 16),
+                                    D3D12_TEXTURE_DATA_PLACEMENT_ALIGNMENT),
+        .Height           = 1,
+        .DepthOrArraySize = 1,
+        .MipLevels        = 1,
+        .Format           = DXGI_FORMAT_UNKNOWN,
+        .SampleDesc       = { .Count = 1, .Quality = 0 },
+        .Layout           = D3D12_TEXTURE_LAYOUT_ROW_MAJOR,
+        .Flags            = D3D12_RESOURCE_FLAG_NONE,
+    };
+
+    ctx->hwctx->device->lpVtbl->GetCustomHeapProperties(ctx->hwctx->device, &heap_props, 0, heap_type);
+
+    hr = ID3D12Device_CreateCommittedResource(ctx->hwctx->device, &heap_props, D3D12_HEAP_FLAG_NONE,
+                                              &desc, D3D12_RESOURCE_STATE_COMMON, NULL, &IID_ID3D12Resource,
+                                              (void **)&pResource);
+
+    if (FAILED(hr)) {
+        av_log(avctx, AV_LOG_ERROR, "Failed to create d3d12 buffer.\n");
+        return NULL;
+    }
+
+    ref = av_buffer_create((uint8_t *)(uintptr_t)pResource,
+                           sizeof(pResource),
+                           &d3d12va_encode_free_buffer,
+                           avctx, AV_BUFFER_FLAG_READONLY);
+    if (!ref) {
+        D3D12_OBJECT_RELEASE(pResource);
+        return NULL;
+    }
+
+    return ref;
+}
+
+static int d3d12va_encode_prepare_output_buffers(AVCodecContext *avctx)
+{
+    HWBaseEncodeContext *base_ctx      = avctx->priv_data;
+    D3D12VAEncodeContext *ctx          = avctx->priv_data;
+    AVD3D12VAFramesContext *frames_ctx = base_ctx->input_frames->hwctx;
+    HRESULT hr;
+
+    ctx->req.NodeIndex               = 0;
+    ctx->req.Codec                   = ctx->codec->d3d12_codec;
+    ctx->req.Profile                 = ctx->profile->d3d12_profile;
+    ctx->req.InputFormat             = frames_ctx->format;
+    ctx->req.PictureTargetResolution = ctx->resolution;
+
+    hr = ID3D12VideoDevice3_CheckFeatureSupport(ctx->video_device3,
+                                                D3D12_FEATURE_VIDEO_ENCODER_RESOURCE_REQUIREMENTS,
+                                                &ctx->req, sizeof(ctx->req));
+    if (FAILED(hr)) {
+        av_log(avctx, AV_LOG_ERROR, "Failed to check encoder resource requirements support.\n");
+        return AVERROR(EINVAL);
+    }
+
+    if (!ctx->req.IsSupported) {
+        av_log(avctx, AV_LOG_ERROR, "Encoder resource requirements unsupported.\n");
+        return AVERROR(EINVAL);
+    }
+
+    ctx->output_buffer_pool = av_buffer_pool_init2(sizeof(ID3D12Resource *), avctx,
+                                                   &d3d12va_encode_alloc_output_buffer, NULL);
+    if (!ctx->output_buffer_pool)
+        return AVERROR(ENOMEM);
+
+    return 0;
+}
+
+static int d3d12va_encode_create_command_objects(AVCodecContext *avctx)
+{
+    D3D12VAEncodeContext *ctx = avctx->priv_data;
+    ID3D12CommandAllocator *command_allocator = NULL;
+    int err;
+    HRESULT hr;
+
+    D3D12_COMMAND_QUEUE_DESC queue_desc = {
+        .Type     = D3D12_COMMAND_LIST_TYPE_VIDEO_ENCODE,
+        .Priority = 0,
+        .Flags    = D3D12_COMMAND_QUEUE_FLAG_NONE,
+        .NodeMask = 0,
+    };
+
+    ctx->allocator_queue = av_fifo_alloc2(D3D12VA_VIDEO_ENC_ASYNC_DEPTH,
+                                          sizeof(CommandAllocator), AV_FIFO_FLAG_AUTO_GROW);
+    if (!ctx->allocator_queue)
+        return AVERROR(ENOMEM);
+
+    hr = ID3D12Device_CreateFence(ctx->hwctx->device, 0, D3D12_FENCE_FLAG_NONE,
+                                  &IID_ID3D12Fence, (void **)&ctx->sync_ctx.fence);
+    if (FAILED(hr)) {
+        av_log(avctx, AV_LOG_ERROR, "Failed to create fence(%lx)\n", (long)hr);
+        err = AVERROR_UNKNOWN;
+        goto fail;
+    }
+
+    ctx->sync_ctx.event = CreateEvent(NULL, FALSE, FALSE, NULL);
+    if (!ctx->sync_ctx.event) {
+        // Set a defined error code; err was previously left
+        // uninitialized on this fail path.
+        err = AVERROR_UNKNOWN;
+        goto fail;
+    }
+
+    err = d3d12va_get_valid_command_allocator(avctx, &command_allocator);
+    if (err < 0)
+        goto fail;
+
+    hr = ID3D12Device_CreateCommandQueue(ctx->hwctx->device, &queue_desc,
+                                         &IID_ID3D12CommandQueue, (void **)&ctx->command_queue);
+    if (FAILED(hr)) {
+        av_log(avctx, AV_LOG_ERROR, "Failed to create command queue(%lx)\n", (long)hr);
+        err = AVERROR_UNKNOWN;
+        goto fail;
+    }
+
+    hr = ID3D12Device_CreateCommandList(ctx->hwctx->device, 0, queue_desc.Type,
+                                        command_allocator, NULL, &IID_ID3D12CommandList,
+                                        (void **)&ctx->command_list);
+    if (FAILED(hr)) {
+        av_log(avctx, AV_LOG_ERROR, "Failed to create command list(%lx)\n", (long)hr);
+        err = AVERROR_UNKNOWN;
+        goto fail;
+    }
+
+    hr = ID3D12VideoEncodeCommandList2_Close(ctx->command_list);
+    if (FAILED(hr)) {
+        av_log(avctx, AV_LOG_ERROR, "Failed to close the command list(%lx)\n", (long)hr);
+        err = AVERROR_UNKNOWN;
+        goto fail;
+    }
+
+    ID3D12CommandQueue_ExecuteCommandLists(ctx->command_queue, 1, (ID3D12CommandList **)&ctx->command_list);
+
+    err = d3d12va_sync_with_gpu(avctx);
+    if (err < 0)
+        goto fail;
+
+    err = d3d12va_discard_command_allocator(avctx, command_allocator, ctx->sync_ctx.fence_value);
+    if (err < 0)
+        goto fail;
+
+    return 0;
+
+fail:
+    D3D12_OBJECT_RELEASE(command_allocator);
+    return err;
+}
+
+static int d3d12va_encode_create_recon_frames(AVCodecContext *avctx)
+{
+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
+    AVD3D12VAFramesContext *hwctx;
+    enum AVPixelFormat recon_format;
+    int err;
+
+    err = ff_hw_base_get_recon_format(avctx, NULL, &recon_format);
+    if (err < 0)
+        return err;
+
+    base_ctx->recon_frames_ref = av_hwframe_ctx_alloc(base_ctx->device_ref);
+    if (!base_ctx->recon_frames_ref)
+        return AVERROR(ENOMEM);
+
+    base_ctx->recon_frames = (AVHWFramesContext *)base_ctx->recon_frames_ref->data;
+    hwctx = (AVD3D12VAFramesContext *)base_ctx->recon_frames->hwctx;
+
+    base_ctx->recon_frames->format    = AV_PIX_FMT_D3D12;
+    base_ctx->recon_frames->sw_format = recon_format;
+    base_ctx->recon_frames->width     = base_ctx->surface_width;
+    base_ctx->recon_frames->height    = base_ctx->surface_height;
+
+    hwctx->flags = D3D12_RESOURCE_FLAG_VIDEO_ENCODE_REFERENCE_ONLY |
+                   D3D12_RESOURCE_FLAG_DENY_SHADER_RESOURCE;
+
+    err = av_hwframe_ctx_init(base_ctx->recon_frames_ref);
+    if (err < 0) {
+        av_log(avctx, AV_LOG_ERROR, "Failed to initialise reconstructed "
+               "frame context: %d.\n", err);
+        return err;
+    }
+
+    return 0;
+}
+
+static const HWEncodePictureOperation d3d12va_type = {
+    .alloc  = &d3d12va_encode_alloc,
+
+    .issue  = &d3d12va_encode_issue,
+
+    .output = &d3d12va_encode_output,
+
+    .free   = &d3d12va_encode_free,
+};
+
+int ff_d3d12va_encode_init(AVCodecContext *avctx)
+{
+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
+    D3D12VAEncodeContext     *ctx = avctx->priv_data;
+    D3D12_FEATURE_DATA_VIDEO_FEATURE_AREA_SUPPORT support = { 0 };
+    int err;
+    HRESULT hr;
+
+    err = ff_hw_base_encode_init(avctx);
+    if (err < 0)
+        goto fail;
+
+    base_ctx->op = &d3d12va_type;
+
+    ctx->hwctx = base_ctx->device->hwctx;
+
+    ctx->resolution.Width  = base_ctx->input_frames->width;
+    ctx->resolution.Height = base_ctx->input_frames->height;
+
+    hr = ID3D12Device_QueryInterface(ctx->hwctx->device, &IID_ID3D12Device3, (void **)&ctx->device3);
+    if (FAILED(hr)) {
+        av_log(avctx, AV_LOG_ERROR, "ID3D12Device3 interface is not supported.\n");
+        err = AVERROR_UNKNOWN;
+        goto fail;
+    }
+
+    hr = ID3D12Device3_QueryInterface(ctx->device3, &IID_ID3D12VideoDevice3, (void **)&ctx->video_device3);
+    if (FAILED(hr)) {
+        av_log(avctx, AV_LOG_ERROR, "ID3D12VideoDevice3 interface is not supported.\n");
+        err = AVERROR_UNKNOWN;
+        goto fail;
+    }
+
+    if (FAILED(ID3D12VideoDevice3_CheckFeatureSupport(ctx->video_device3, D3D12_FEATURE_VIDEO_FEATURE_AREA_SUPPORT,
+                                                      &support, sizeof(support))) || !support.VideoEncodeSupport) {
+        av_log(avctx, AV_LOG_ERROR, "D3D12 video device has no video encoder support.\n");
+        err = AVERROR(EINVAL);
+        goto fail;
+    }
+
+    err = d3d12va_encode_set_profile(avctx);
+    if (err < 0)
+        goto fail;
+
+    if (ctx->codec->get_encoder_caps) {
+        err = ctx->codec->get_encoder_caps(avctx);
+        if (err < 0)
+            goto fail;
+    }
+
+    err = d3d12va_encode_init_rate_control(avctx);
+    if (err < 0)
+        goto fail;
+
+    err = d3d12va_encode_init_gop_structure(avctx);
+    if (err < 0)
+        goto fail;
+
+    if (!(ctx->codec->flags & FLAG_SLICE_CONTROL) && avctx->slices > 0) {
+        av_log(avctx, AV_LOG_WARNING, "Multiple slices were requested "
+               "but this codec does not support controlling slices.\n");
+    }
+
+    err = d3d12va_encode_create_command_objects(avctx);
+    if (err < 0)
+        goto fail;
+
+    err = d3d12va_encode_create_recon_frames(avctx);
+    if (err < 0)
+        goto fail;
+
+    err = d3d12va_encode_prepare_output_buffers(avctx);
+    if (err < 0)
+        goto fail;
+
+    if (ctx->codec->configure) {
+        err = ctx->codec->configure(avctx);
+        if (err < 0)
+            goto fail;
+    }
+
+    if (ctx->codec->init_sequence_params) {
+        err = ctx->codec->init_sequence_params(avctx);
+        if (err < 0) {
+            av_log(avctx, AV_LOG_ERROR, "Codec sequence initialisation "
+                   "failed: %d.\n", err);
+            goto fail;
+        }
+    }
+
+    if (ctx->codec->set_level) {
+        err = ctx->codec->set_level(avctx);
+        if (err < 0)
+            goto fail;
+    }
+
+    base_ctx->output_delay = base_ctx->b_per_p;
+    base_ctx->decode_delay = base_ctx->max_b_depth;
+
+    err = d3d12va_create_encoder(avctx);
+    if (err < 0)
+        goto fail;
+
+    err = d3d12va_create_encoder_heap(avctx);
+    if (err < 0)
+        goto fail;
+
+    base_ctx->async_encode = 1;
+    base_ctx->encode_fifo = av_fifo_alloc2(base_ctx->async_depth,
+                                           sizeof(D3D12VAEncodePicture *), 0);
+    if (!base_ctx->encode_fifo)
+        return AVERROR(ENOMEM);
+
+    return 0;
+
+fail:
+    return err;
+}
+
+int ff_d3d12va_encode_close(AVCodecContext *avctx)
+{
+    int num_allocator = 0;
+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
+    D3D12VAEncodeContext     *ctx = avctx->priv_data;
+    HWBaseEncodePicture *pic, *next;
+    CommandAllocator allocator;
+
+    if (!base_ctx->frame)
+        return 0;
+
+    for (pic = base_ctx->pic_start; pic; pic = next) {
+        next = pic->next;
+        d3d12va_encode_free(avctx, pic);
+    }
+
+    d3d12va_encode_free_rc_params(avctx);
+
+    av_buffer_pool_uninit(&ctx->output_buffer_pool);
+
+    D3D12_OBJECT_RELEASE(ctx->command_list);
+    D3D12_OBJECT_RELEASE(ctx->command_queue);
+
+    if (ctx->allocator_queue) {
+        while (av_fifo_read(ctx->allocator_queue, &allocator, 1) >= 0) {
+            num_allocator++;
+            D3D12_OBJECT_RELEASE(allocator.command_allocator);
+        }
+
+        av_log(avctx, AV_LOG_VERBOSE, "Total number of command allocators reused: %d\n", num_allocator);
+    }
+
+    av_fifo_freep2(&ctx->allocator_queue);
+
+    D3D12_OBJECT_RELEASE(ctx->sync_ctx.fence);
+    if (ctx->sync_ctx.event)
+        CloseHandle(ctx->sync_ctx.event);
+
+    D3D12_OBJECT_RELEASE(ctx->encoder_heap);
+    D3D12_OBJECT_RELEASE(ctx->encoder);
+    D3D12_OBJECT_RELEASE(ctx->video_device3);
+    D3D12_OBJECT_RELEASE(ctx->device3);
+
+    ff_hw_base_encode_close(avctx);
+
+    return 0;
+}
diff --git a/libavcodec/d3d12va_encode.h b/libavcodec/d3d12va_encode.h
new file mode 100644
index 0000000000..10e2d87035
--- /dev/null
+++ b/libavcodec/d3d12va_encode.h
@@ -0,0 +1,321 @@
+/*
+ * Direct3D 12 HW acceleration video encoder
+ *
+ * Copyright (c) 2024 Intel Corporation
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef AVCODEC_D3D12VA_ENCODE_H
+#define AVCODEC_D3D12VA_ENCODE_H
+
+#include "libavutil/fifo.h"
+#include "libavutil/hwcontext.h"
+#include "libavutil/hwcontext_d3d12va_internal.h"
+#include "libavutil/hwcontext_d3d12va.h"
+#include "avcodec.h"
+#include "internal.h"
+#include "hwconfig.h"
+#include "hw_base_encode.h"
+
+struct D3D12VAEncodeType;
+
+extern const AVCodecHWConfigInternal *const ff_d3d12va_encode_hw_configs[];
+
+#define MAX_PARAM_BUFFER_SIZE 4096
+#define D3D12VA_VIDEO_ENC_ASYNC_DEPTH 8
+
+typedef struct D3D12VAEncodePicture {
+    HWBaseEncodePicture base;
+
+    int             header_size;
+
+    AVD3D12VAFrame *input_surface;
+    AVD3D12VAFrame *recon_surface;
+
+    AVBufferRef    *output_buffer_ref;
+    ID3D12Resource *output_buffer;
+
+    ID3D12Resource *encoded_metadata;
+    ID3D12Resource *resolved_metadata;
+
+    D3D12_VIDEO_ENCODER_PICTURE_CONTROL_CODEC_DATA pic_ctl;
+
+    int             fence_value;
+} D3D12VAEncodePicture;
+
+typedef struct D3D12VAEncodeProfile {
+    /**
+     * lavc profile value (AV_PROFILE_*).
+     */
+    int       av_profile;
+
+    /**
+     * Supported bit depth.
+     */
+    int       depth;
+
+    /**
+     * Number of components.
+     */
+    int       nb_components;
+
+    /**
+     * Chroma subsampling in width dimension.
+     */
+    int       log2_chroma_w;
+
+    /**
+     * Chroma subsampling in height dimension.
+     */
+    int       log2_chroma_h;
+
+    /**
+     * D3D12 profile value.
+     */
+    D3D12_VIDEO_ENCODER_PROFILE_DESC d3d12_profile;
+} D3D12VAEncodeProfile;
+
+enum {
+    RC_MODE_AUTO,
+    RC_MODE_CQP,
+    RC_MODE_CBR,
+    RC_MODE_VBR,
+    RC_MODE_QVBR,
+    RC_MODE_MAX = RC_MODE_QVBR,
+};
+
+typedef struct D3D12VAEncodeRCMode {
+    /**
+     * Mode from above enum (RC_MODE_*).
+     */
+    int mode;
+
+    /**
+     * Name.
+     */
+    const char *name;
+
+    /**
+     * Uses bitrate parameters.
+     */
+    int bitrate;
+
+    /**
+     * Supports maxrate distinct from bitrate.
+     */
+    int maxrate;
+
+    /**
+     * Uses quality value.
+     */
+    int quality;
+
+    /**
+     * Supports HRD/VBV parameters.
+     */
+    int hrd;
+
+    /**
+     * Supported by D3D12 HW.
+     */
+    int supported;
+
+    /**
+     * D3D12 mode value.
+     */
+    D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE d3d12_mode;
+} D3D12VAEncodeRCMode;
+
+typedef struct D3D12VAEncodeContext {
+    HWBaseEncodeContext base;
+
+    /**
+     * Codec-specific hooks.
+     */
+    const struct D3D12VAEncodeType *codec;
+
+    /**
+     * Chosen encoding profile details.
+     */
+    const D3D12VAEncodeProfile *profile;
+
+    AVD3D12VADeviceContext *hwctx;
+
+    /**
+     * ID3D12Device3 interface.
+     */
+    ID3D12Device3 *device3;
+
+    /**
+     * ID3D12VideoDevice3 interface.
+     */
+    ID3D12VideoDevice3 *video_device3;
+
+    /**
+     * Pool of (reusable) bitstream output buffers.
+     */
+    AVBufferPool   *output_buffer_pool;
+
+    /**
+     * D3D12 video encoder.
+     */
+    AVBufferRef *encoder_ref;
+
+    ID3D12VideoEncoder *encoder;
+
+    /**
+     * D3D12 video encoder heap.
+     */
+    ID3D12VideoEncoderHeap *encoder_heap;
+
+    /**
+     * A cached queue for reusing the D3D12 command allocators.
+     *
+     * @see https://learn.microsoft.com/en-us/windows/win32/direct3d12/recording-command-lists-and-bundles#id3d12commandallocator
+     */
+    AVFifo *allocator_queue;
+
+    /**
+     * D3D12 command queue.
+     */
+    ID3D12CommandQueue *command_queue;
+
+    /**
+     * D3D12 video encode command list.
+     */
+    ID3D12VideoEncodeCommandList2 *command_list;
+
+    /**
+     * The sync context used to synchronize the command queue.
+     */
+    AVD3D12VASyncContext sync_ctx;
+
+    /**
+     * The bi_not_empty feature.
+     */
+    int bi_not_empty;
+
+    /**
+     * D3D12_FEATURE structures.
+     */
+    D3D12_FEATURE_DATA_VIDEO_ENCODER_RESOURCE_REQUIREMENTS req;
+
+    D3D12_FEATURE_DATA_VIDEO_ENCODER_RESOLUTION_SUPPORT_LIMITS res_limits;
+
+    /**
+     * D3D12_VIDEO_ENCODER structures.
+     */
+    D3D12_VIDEO_ENCODER_PICTURE_RESOLUTION_DESC resolution;
+
+    D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION codec_conf;
+
+    D3D12_VIDEO_ENCODER_RATE_CONTROL rc;
+
+    D3D12_VIDEO_ENCODER_SEQUENCE_GOP_STRUCTURE gop;
+
+    D3D12_VIDEO_ENCODER_LEVEL_SETTING level;
+} D3D12VAEncodeContext;
+
+typedef struct D3D12VAEncodeType {
+    /**
+     * List of supported profiles.
+     */
+    const D3D12VAEncodeProfile *profiles;
+
+    /**
+     * D3D12 codec name.
+     */
+    D3D12_VIDEO_ENCODER_CODEC d3d12_codec;
+
+    /**
+     * Codec feature flags.
+     */
+    int flags;
+
+    /**
+     * Default quality for this codec - used as quantiser or RC quality
+     * factor depending on RC mode.
+     */
+    int default_quality;
+
+    /**
+     * Query codec configuration and determine encode parameters like
+     * block sizes for surface alignment and slices. If not set, assume
+     * that all blocks are 16x16 and that surfaces should be aligned to match
+     * this.
+     */
+    int (*get_encoder_caps)(AVCodecContext *avctx);
+
+    /**
+     * Perform any extra codec-specific configuration.
+     */
+    int (*configure)(AVCodecContext *avctx);
+
+    /**
+     * Set codec-specific level setting.
+     */
+    int (*set_level)(AVCodecContext *avctx);
+
+    /**
+     * The size of any private data structure associated with each
+     * picture (can be zero if not required).
+     */
+    size_t picture_priv_data_size;
+
+    /**
+     * Fill the corresponding parameters.
+     */
+    int (*init_sequence_params)(AVCodecContext *avctx);
+
+    int (*init_picture_params)(AVCodecContext *avctx,
+                               D3D12VAEncodePicture *pic);
+
+    void (*free_picture_params)(D3D12VAEncodePicture *pic);
+
+    /**
+     * Write the packed header data to the provided buffer.
+     */
+    int (*write_sequence_header)(AVCodecContext *avctx,
+                                 char *data, size_t *data_len);
+} D3D12VAEncodeType;
+
+int ff_d3d12va_encode_init(AVCodecContext *avctx);
+int ff_d3d12va_encode_close(AVCodecContext *avctx);
+
+#define D3D12VA_ENCODE_RC_MODE(name, desc) \
+    { #name, desc, 0, AV_OPT_TYPE_CONST, { .i64 = RC_MODE_ ## name }, \
+      0, 0, FLAGS, .unit = "rc_mode" }
+#define D3D12VA_ENCODE_RC_OPTIONS \
+    { "rc_mode",\
+      "Set rate control mode", \
+      OFFSET(common.base.explicit_rc_mode), AV_OPT_TYPE_INT, \
+      { .i64 = RC_MODE_AUTO }, RC_MODE_AUTO, RC_MODE_MAX, FLAGS, .unit = "rc_mode" }, \
+    { "auto", "Choose mode automatically based on other parameters", \
+      0, AV_OPT_TYPE_CONST, { .i64 = RC_MODE_AUTO }, 0, 0, FLAGS, .unit = "rc_mode" }, \
+    D3D12VA_ENCODE_RC_MODE(CQP,  "Constant-quality"), \
+    D3D12VA_ENCODE_RC_MODE(CBR,  "Constant-bitrate"), \
+    D3D12VA_ENCODE_RC_MODE(VBR,  "Variable-bitrate"), \
+    D3D12VA_ENCODE_RC_MODE(QVBR, "Quality-defined variable-bitrate")
+
+#endif /* AVCODEC_D3D12VA_ENCODE_H */
diff --git a/libavcodec/d3d12va_encode_hevc.c b/libavcodec/d3d12va_encode_hevc.c
new file mode 100644
index 0000000000..aec0d9dcec
--- /dev/null
+++ b/libavcodec/d3d12va_encode_hevc.c
@@ -0,0 +1,957 @@
+/*
+ * Direct3D 12 HW acceleration video encoder
+ *
+ * Copyright (c) 2024 Intel Corporation
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include "libavutil/opt.h"
+#include "libavutil/common.h"
+#include "libavutil/pixdesc.h"
+#include "libavutil/hwcontext_d3d12va_internal.h"
+
+#include "avcodec.h"
+#include "cbs.h"
+#include "cbs_h265.h"
+#include "h2645data.h"
+#include "h265_profile_level.h"
+#include "codec_internal.h"
+#include "d3d12va_encode.h"
+
+typedef struct D3D12VAEncodeHEVCPicture {
+    int pic_order_cnt;
+    int64_t last_idr_frame;
+} D3D12VAEncodeHEVCPicture;
+
+typedef struct D3D12VAEncodeHEVCContext {
+    D3D12VAEncodeContext common;
+
+    // User options.
+    int qp;
+    int profile;
+    int tier;
+    int level;
+
+    // Writer structures.
+    H265RawVPS   raw_vps;
+    H265RawSPS   raw_sps;
+    H265RawPPS   raw_pps;
+
+    CodedBitstreamContext *cbc;
+    CodedBitstreamFragment current_access_unit;
+} D3D12VAEncodeHEVCContext;
+
+typedef struct D3D12VAEncodeHEVCLevel {
+    int level;
+    D3D12_VIDEO_ENCODER_LEVELS_HEVC d3d12_level;
+} D3D12VAEncodeHEVCLevel;
+
+static const D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC hevc_config_support_sets[] =
+{
+    {
+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_NONE,
+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_8x8,
+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_32x32,
+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_4x4,
+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_32x32,
+        3,
+        3,
+    },
+    {
+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_NONE,
+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_8x8,
+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_32x32,
+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_4x4,
+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_32x32,
+        0,
+        0,
+    },
+    {
+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_NONE,
+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_8x8,
+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_32x32,
+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_4x4,
+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_32x32,
+        2,
+        2,
+    },
+    {
+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_NONE,
+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_8x8,
+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_64x64,
+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_4x4,
+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_32x32,
+        2,
+        2,
+    },
+    {
+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_NONE,
+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_8x8,
+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_64x64,
+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_4x4,
+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_32x32,
+        4,
+        4,
+    },
+};
+
+static const D3D12VAEncodeHEVCLevel hevc_levels[] = {
+    { 30,  D3D12_VIDEO_ENCODER_LEVELS_HEVC_1  },
+    { 60,  D3D12_VIDEO_ENCODER_LEVELS_HEVC_2  },
+    { 63,  D3D12_VIDEO_ENCODER_LEVELS_HEVC_21 },
+    { 90,  D3D12_VIDEO_ENCODER_LEVELS_HEVC_3  },
+    { 93,  D3D12_VIDEO_ENCODER_LEVELS_HEVC_31 },
+    { 120, D3D12_VIDEO_ENCODER_LEVELS_HEVC_4  },
+    { 123, D3D12_VIDEO_ENCODER_LEVELS_HEVC_41 },
+    { 150, D3D12_VIDEO_ENCODER_LEVELS_HEVC_5  },
+    { 153, D3D12_VIDEO_ENCODER_LEVELS_HEVC_51 },
+    { 156, D3D12_VIDEO_ENCODER_LEVELS_HEVC_52 },
+    { 180, D3D12_VIDEO_ENCODER_LEVELS_HEVC_6  },
+    { 183, D3D12_VIDEO_ENCODER_LEVELS_HEVC_61 },
+    { 186, D3D12_VIDEO_ENCODER_LEVELS_HEVC_62 },
+};
+
+static const D3D12_VIDEO_ENCODER_PROFILE_HEVC profile_main   = D3D12_VIDEO_ENCODER_PROFILE_HEVC_MAIN;
+static const D3D12_VIDEO_ENCODER_PROFILE_HEVC profile_main10 = D3D12_VIDEO_ENCODER_PROFILE_HEVC_MAIN10;
+
+#define D3D_PROFILE_DESC(name) \
+    { sizeof(D3D12_VIDEO_ENCODER_PROFILE_HEVC), { .pHEVCProfile = (D3D12_VIDEO_ENCODER_PROFILE_HEVC *)&profile_ ## name } }
+static const D3D12VAEncodeProfile d3d12va_encode_hevc_profiles[] = {
+    { AV_PROFILE_HEVC_MAIN,     8, 3, 1, 1, D3D_PROFILE_DESC(main)   },
+    { AV_PROFILE_HEVC_MAIN_10, 10, 3, 1, 1, D3D_PROFILE_DESC(main10) },
+    { AV_PROFILE_UNKNOWN },
+};
+
+static uint8_t d3d12va_encode_hevc_map_cusize(D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE cusize)
+{
+    switch (cusize) {
+        case D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_8x8:   return 8;
+        case D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_16x16: return 16;
+        case D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_32x32: return 32;
+        case D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_64x64: return 64;
+        default: av_assert0(0);
+    }
+    return 0;
+}
+
+static uint8_t d3d12va_encode_hevc_map_tusize(D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE tusize)
+{
+    switch (tusize) {
+        case D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_4x4:   return 4;
+        case D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_8x8:   return 8;
+        case D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_16x16: return 16;
+        case D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_32x32: return 32;
+        default: av_assert0(0);
+    }
+    return 0;
+}
+
+static int d3d12va_encode_hevc_write_access_unit(AVCodecContext *avctx,
+                                                 char *data, size_t *data_len,
+                                                 CodedBitstreamFragment *au)
+{
+    D3D12VAEncodeHEVCContext *priv = avctx->priv_data;
+    int err;
+
+    err = ff_cbs_write_fragment_data(priv->cbc, au);
+    if (err < 0) {
+        av_log(avctx, AV_LOG_ERROR, "Failed to write packed header.\n");
+        return err;
+    }
+
+    if (*data_len < 8 * au->data_size - au->data_bit_padding) {
+        av_log(avctx, AV_LOG_ERROR, "Access unit too large: "
+               "%zu < %zu.\n", *data_len,
+               8 * au->data_size - au->data_bit_padding);
+        return AVERROR(ENOSPC);
+    }
+
+    memcpy(data, au->data, au->data_size);
+    *data_len = 8 * au->data_size - au->data_bit_padding;
+
+    return 0;
+}
+
+static int d3d12va_encode_hevc_add_nal(AVCodecContext *avctx,
+                                       CodedBitstreamFragment *au,
+                                       void *nal_unit)
+{
+    H265RawNALUnitHeader *header = nal_unit;
+    int err;
+
+    err = ff_cbs_insert_unit_content(au, -1,
+                                     header->nal_unit_type, nal_unit, NULL);
+    if (err < 0) {
+        av_log(avctx, AV_LOG_ERROR, "Failed to add NAL unit: "
+               "type = %d.\n", header->nal_unit_type);
+        return err;
+    }
+
+    return 0;
+}
+
+static int d3d12va_encode_hevc_write_sequence_header(AVCodecContext *avctx,
+                                                     char *data, size_t *data_len)
+{
+    D3D12VAEncodeHEVCContext *priv = avctx->priv_data;
+    CodedBitstreamFragment   *au   = &priv->current_access_unit;
+    int err;
+
+    err = d3d12va_encode_hevc_add_nal(avctx, au, &priv->raw_vps);
+    if (err < 0)
+        goto fail;
+
+    err = d3d12va_encode_hevc_add_nal(avctx, au, &priv->raw_sps);
+    if (err < 0)
+        goto fail;
+
+    err = d3d12va_encode_hevc_add_nal(avctx, au, &priv->raw_pps);
+    if (err < 0)
+        goto fail;
+
+    err = d3d12va_encode_hevc_write_access_unit(avctx, data, data_len, au);
+fail:
+    ff_cbs_fragment_reset(au);
+    return err;
+}
+
+static int d3d12va_encode_hevc_init_sequence_params(AVCodecContext *avctx)
+{
+    HWBaseEncodeContext  *base_ctx = avctx->priv_data;
+    D3D12VAEncodeContext     *ctx  = avctx->priv_data;
+    D3D12VAEncodeHEVCContext *priv = avctx->priv_data;
+    AVD3D12VAFramesContext  *hwctx = base_ctx->input_frames->hwctx;
+    H265RawVPS               *vps  = &priv->raw_vps;
+    H265RawSPS               *sps  = &priv->raw_sps;
+    H265RawPPS               *pps  = &priv->raw_pps;
+    H265RawProfileTierLevel  *ptl  = &vps->profile_tier_level;
+    H265RawVUI               *vui  = &sps->vui;
+    D3D12_VIDEO_ENCODER_PROFILE_HEVC profile = D3D12_VIDEO_ENCODER_PROFILE_HEVC_MAIN;
+    D3D12_VIDEO_ENCODER_LEVEL_TIER_CONSTRAINTS_HEVC level = { 0 };
+    const AVPixFmtDescriptor *desc;
+    uint8_t min_cu_size, max_cu_size, min_tu_size, max_tu_size;
+    int chroma_format, bit_depth;
+    HRESULT hr;
+    int i;
+
+    D3D12_FEATURE_DATA_VIDEO_ENCODER_SUPPORT support = {
+        .NodeIndex                        = 0,
+        .Codec                            = D3D12_VIDEO_ENCODER_CODEC_HEVC,
+        .InputFormat                      = hwctx->format,
+        .RateControl                      = ctx->rc,
+        .IntraRefresh                     = D3D12_VIDEO_ENCODER_INTRA_REFRESH_MODE_NONE,
+        .SubregionFrameEncoding           = D3D12_VIDEO_ENCODER_FRAME_SUBREGION_LAYOUT_MODE_FULL_FRAME,
+        .ResolutionsListCount             = 1,
+        .pResolutionList                  = &ctx->resolution,
+        .CodecGopSequence                 = ctx->gop,
+        .MaxReferenceFramesInDPB          = MAX_DPB_SIZE - 1,
+        .CodecConfiguration               = ctx->codec_conf,
+        .SuggestedProfile.DataSize        = sizeof(D3D12_VIDEO_ENCODER_PROFILE_HEVC),
+        .SuggestedProfile.pHEVCProfile    = &profile,
+        .SuggestedLevel.DataSize          = sizeof(D3D12_VIDEO_ENCODER_LEVEL_TIER_CONSTRAINTS_HEVC),
+        .SuggestedLevel.pHEVCLevelSetting = &level,
+        .pResolutionDependentSupport      = &ctx->res_limits,
+    };
+
+    hr = ID3D12VideoDevice3_CheckFeatureSupport(ctx->video_device3, D3D12_FEATURE_VIDEO_ENCODER_SUPPORT,
+                                                &support, sizeof(support));
+
+    if (FAILED(hr)) {
+        av_log(avctx, AV_LOG_ERROR, "Failed to check encoder support(%lx).\n", (long)hr);
+        return AVERROR(EINVAL);
+    }
+
+    if (!(support.SupportFlags & D3D12_VIDEO_ENCODER_SUPPORT_FLAG_GENERAL_SUPPORT_OK)) {
+        av_log(avctx, AV_LOG_ERROR, "Driver does not support some requested features: %#x.\n",
+               support.ValidationFlags);
+        return AVERROR(EINVAL);
+    }
+
+    if (support.SupportFlags & D3D12_VIDEO_ENCODER_SUPPORT_FLAG_RECONSTRUCTED_FRAMES_REQUIRE_TEXTURE_ARRAYS) {
+        av_log(avctx, AV_LOG_ERROR, "D3D12 video encode on this device requires texture array support, "
+               "but it's not implemented.\n");
+        return AVERROR_PATCHWELCOME;
+    }
+
+    memset(vps, 0, sizeof(*vps));
+    memset(sps, 0, sizeof(*sps));
+    memset(pps, 0, sizeof(*pps));
+
+    desc = av_pix_fmt_desc_get(base_ctx->input_frames->sw_format);
+    av_assert0(desc);
+    if (desc->nb_components == 1) {
+        chroma_format = 0;
+    } else {
+        if (desc->log2_chroma_w == 1 && desc->log2_chroma_h == 1) {
+            chroma_format = 1;
+        } else if (desc->log2_chroma_w == 1 && desc->log2_chroma_h == 0) {
+            chroma_format = 2;
+        } else if (desc->log2_chroma_w == 0 && desc->log2_chroma_h == 0) {
+            chroma_format = 3;
+        } else {
+            av_log(avctx, AV_LOG_ERROR, "Chroma format of input pixel format "
+                   "%s is not supported.\n", desc->name);
+            return AVERROR(EINVAL);
+        }
+    }
+    bit_depth = desc->comp[0].depth;
+
+    min_cu_size = d3d12va_encode_hevc_map_cusize(ctx->codec_conf.pHEVCConfig->MinLumaCodingUnitSize);
+    max_cu_size = d3d12va_encode_hevc_map_cusize(ctx->codec_conf.pHEVCConfig->MaxLumaCodingUnitSize);
+    min_tu_size = d3d12va_encode_hevc_map_tusize(ctx->codec_conf.pHEVCConfig->MinLumaTransformUnitSize);
+    max_tu_size = d3d12va_encode_hevc_map_tusize(ctx->codec_conf.pHEVCConfig->MaxLumaTransformUnitSize);
+
+    // VPS
+
+    vps->nal_unit_header = (H265RawNALUnitHeader) {
+        .nal_unit_type         = HEVC_NAL_VPS,
+        .nuh_layer_id          = 0,
+        .nuh_temporal_id_plus1 = 1,
+    };
+
+    vps->vps_video_parameter_set_id = 0;
+
+    vps->vps_base_layer_internal_flag  = 1;
+    vps->vps_base_layer_available_flag = 1;
+    vps->vps_max_layers_minus1         = 0;
+    vps->vps_max_sub_layers_minus1     = 0;
+    vps->vps_temporal_id_nesting_flag  = 1;
+
+    ptl->general_profile_space = 0;
+    ptl->general_profile_idc   = avctx->profile;
+    ptl->general_tier_flag     = priv->tier;
+
+    ptl->general_profile_compatibility_flag[ptl->general_profile_idc] = 1;
+
+    ptl->general_progressive_source_flag    = 1;
+    ptl->general_interlaced_source_flag     = 0;
+    ptl->general_non_packed_constraint_flag = 1;
+    ptl->general_frame_only_constraint_flag = 1;
+
+    ptl->general_max_14bit_constraint_flag = bit_depth <= 14;
+    ptl->general_max_12bit_constraint_flag = bit_depth <= 12;
+    ptl->general_max_10bit_constraint_flag = bit_depth <= 10;
+    ptl->general_max_8bit_constraint_flag  = bit_depth ==  8;
+
+    ptl->general_max_422chroma_constraint_flag  = chroma_format <= 2;
+    ptl->general_max_420chroma_constraint_flag  = chroma_format <= 1;
+    ptl->general_max_monochrome_constraint_flag = chroma_format == 0;
+
+    ptl->general_intra_constraint_flag = base_ctx->gop_size == 1;
+    ptl->general_one_picture_only_constraint_flag = 0;
+
+    ptl->general_lower_bit_rate_constraint_flag = 1;
+
+    if (avctx->level != FF_LEVEL_UNKNOWN) {
+        ptl->general_level_idc = avctx->level;
+    } else {
+        const H265LevelDescriptor *level;
+
+        level = ff_h265_guess_level(ptl, avctx->bit_rate,
+                                    base_ctx->surface_width, base_ctx->surface_height,
+                                    1, 1, 1, (base_ctx->b_per_p > 0) + 1);
+        if (level) {
+            av_log(avctx, AV_LOG_VERBOSE, "Using level %s.\n", level->name);
+            ptl->general_level_idc = level->level_idc;
+        } else {
+            av_log(avctx, AV_LOG_VERBOSE, "Stream will not conform to "
+                   "any normal level; using level 8.5.\n");
+            ptl->general_level_idc = 255;
+            // The tier flag must be set in level 8.5.
+            ptl->general_tier_flag = 1;
+        }
+        avctx->level = ptl->general_level_idc;
+    }
+
+    vps->vps_sub_layer_ordering_info_present_flag = 0;
+    vps->vps_max_dec_pic_buffering_minus1[0]      = base_ctx->max_b_depth + 1;
+    vps->vps_max_num_reorder_pics[0]              = base_ctx->max_b_depth;
+    vps->vps_max_latency_increase_plus1[0]        = 0;
+
+    vps->vps_max_layer_id             = 0;
+    vps->vps_num_layer_sets_minus1    = 0;
+    vps->layer_id_included_flag[0][0] = 1;
+
+    vps->vps_timing_info_present_flag = 0;
+
+    // SPS
+
+    sps->nal_unit_header = (H265RawNALUnitHeader) {
+        .nal_unit_type         = HEVC_NAL_SPS,
+        .nuh_layer_id          = 0,
+        .nuh_temporal_id_plus1 = 1,
+    };
+
+    sps->sps_video_parameter_set_id = vps->vps_video_parameter_set_id;
+
+    sps->sps_max_sub_layers_minus1    = vps->vps_max_sub_layers_minus1;
+    sps->sps_temporal_id_nesting_flag = vps->vps_temporal_id_nesting_flag;
+
+    sps->profile_tier_level = vps->profile_tier_level;
+
+    sps->sps_seq_parameter_set_id = 0;
+
+    sps->chroma_format_idc          = chroma_format;
+    sps->separate_colour_plane_flag = 0;
+
+    av_assert0(ctx->res_limits.SubregionBlockPixelsSize % min_cu_size == 0);
+
+    sps->pic_width_in_luma_samples  = FFALIGN(base_ctx->surface_width,
+                                              ctx->res_limits.SubregionBlockPixelsSize);
+    sps->pic_height_in_luma_samples = FFALIGN(base_ctx->surface_height,
+                                              ctx->res_limits.SubregionBlockPixelsSize);
+
+    if (avctx->width  != sps->pic_width_in_luma_samples ||
+        avctx->height != sps->pic_height_in_luma_samples) {
+        sps->conformance_window_flag = 1;
+        sps->conf_win_left_offset   = 0;
+        sps->conf_win_right_offset  =
+            (sps->pic_width_in_luma_samples - avctx->width) >> desc->log2_chroma_w;
+        sps->conf_win_top_offset    = 0;
+        sps->conf_win_bottom_offset =
+            (sps->pic_height_in_luma_samples - avctx->height) >> desc->log2_chroma_h;
+    } else {
+        sps->conformance_window_flag = 0;
+    }
+
+    sps->bit_depth_luma_minus8   = bit_depth - 8;
+    sps->bit_depth_chroma_minus8 = bit_depth - 8;
+
+    sps->log2_max_pic_order_cnt_lsb_minus4 = ctx->gop.pHEVCGroupOfPictures->log2_max_pic_order_cnt_lsb_minus4;
+
+    sps->sps_sub_layer_ordering_info_present_flag =
+        vps->vps_sub_layer_ordering_info_present_flag;
+    for (i = 0; i <= sps->sps_max_sub_layers_minus1; i++) {
+        sps->sps_max_dec_pic_buffering_minus1[i] =
+            vps->vps_max_dec_pic_buffering_minus1[i];
+        sps->sps_max_num_reorder_pics[i] =
+            vps->vps_max_num_reorder_pics[i];
+        sps->sps_max_latency_increase_plus1[i] =
+            vps->vps_max_latency_increase_plus1[i];
+    }
+
+    sps->log2_min_luma_coding_block_size_minus3      = (uint8_t)(av_log2(min_cu_size) - 3);
+    sps->log2_diff_max_min_luma_coding_block_size    = (uint8_t)(av_log2(max_cu_size) - av_log2(min_cu_size));
+    sps->log2_min_luma_transform_block_size_minus2   = (uint8_t)(av_log2(min_tu_size) - 2);
+    sps->log2_diff_max_min_luma_transform_block_size = (uint8_t)(av_log2(max_tu_size) - av_log2(min_tu_size));
+
+    sps->max_transform_hierarchy_depth_inter = ctx->codec_conf.pHEVCConfig->max_transform_hierarchy_depth_inter;
+    sps->max_transform_hierarchy_depth_intra = ctx->codec_conf.pHEVCConfig->max_transform_hierarchy_depth_intra;
+
+    sps->amp_enabled_flag = !!(ctx->codec_conf.pHEVCConfig->ConfigurationFlags &
+                               D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_USE_ASYMETRIC_MOTION_PARTITION);
+    sps->sample_adaptive_offset_enabled_flag = !!(ctx->codec_conf.pHEVCConfig->ConfigurationFlags &
+                                                  D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_ENABLE_SAO_FILTER);
+    sps->sps_temporal_mvp_enabled_flag = 0;
+    sps->pcm_enabled_flag = 0;
+
+    sps->vui_parameters_present_flag = 1;
+
+    // vui default parameters
+    vui->aspect_ratio_idc                        = 0;
+    vui->video_format                            = 5;
+    vui->video_full_range_flag                   = 0;
+    vui->colour_primaries                        = 2;
+    vui->transfer_characteristics                = 2;
+    vui->matrix_coefficients                     = 2;
+    vui->chroma_sample_loc_type_top_field        = 0;
+    vui->chroma_sample_loc_type_bottom_field     = 0;
+    vui->tiles_fixed_structure_flag              = 0;
+    vui->motion_vectors_over_pic_boundaries_flag = 1;
+    vui->min_spatial_segmentation_idc            = 0;
+    vui->max_bytes_per_pic_denom                 = 2;
+    vui->max_bits_per_min_cu_denom               = 1;
+    vui->log2_max_mv_length_horizontal           = 15;
+    vui->log2_max_mv_length_vertical             = 15;
+
+    // PPS
+
+    pps->nal_unit_header = (H265RawNALUnitHeader) {
+        .nal_unit_type         = HEVC_NAL_PPS,
+        .nuh_layer_id          = 0,
+        .nuh_temporal_id_plus1 = 1,
+    };
+
+    pps->pps_pic_parameter_set_id = 0;
+    pps->pps_seq_parameter_set_id = sps->sps_seq_parameter_set_id;
+
+    pps->cabac_init_present_flag = 1;
+
+    pps->num_ref_idx_l0_default_active_minus1 = 0;
+    pps->num_ref_idx_l1_default_active_minus1 = 0;
+
+    pps->init_qp_minus26 = 0;
+
+    pps->transform_skip_enabled_flag = !!(ctx->codec_conf.pHEVCConfig->ConfigurationFlags &
+                                          D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_ENABLE_TRANSFORM_SKIPPING);
+
+    // cu_qp_delta_enabled_flag is required to be 1 by the D3D12 video
+    // encoding spec: https://github.com/microsoft/DirectX-Specs/blob/master/d3d/D3D12VideoEncoding.md
+    pps->cu_qp_delta_enabled_flag = 1;
+
+    pps->diff_cu_qp_delta_depth   = 0;
+
+    pps->pps_slice_chroma_qp_offsets_present_flag = 1;
+
+    pps->tiles_enabled_flag = 0; // no tiling in D3D12
+
+    pps->pps_loop_filter_across_slices_enabled_flag = !(ctx->codec_conf.pHEVCConfig->ConfigurationFlags &
+                                                        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_DISABLE_LOOP_FILTER_ACROSS_SLICES);
+    pps->deblocking_filter_control_present_flag = 1;
+
+    return 0;
+}
+
+static int d3d12va_encode_hevc_get_encoder_caps(AVCodecContext *avctx)
+{
+    int i;
+    HRESULT hr;
+    uint8_t min_cu_size, max_cu_size;
+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
+    D3D12VAEncodeContext     *ctx = avctx->priv_data;
+    D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC *config;
+    D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC hevc_caps;
+
+    D3D12_FEATURE_DATA_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT codec_caps = {
+        .NodeIndex                   = 0,
+        .Codec                       = D3D12_VIDEO_ENCODER_CODEC_HEVC,
+        .Profile                     = ctx->profile->d3d12_profile,
+        .CodecSupportLimits.DataSize = sizeof(D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC),
+    };
+
+    for (i = 0; i < FF_ARRAY_ELEMS(hevc_config_support_sets); i++) {
+        hevc_caps = hevc_config_support_sets[i];
+        codec_caps.CodecSupportLimits.pHEVCSupport = &hevc_caps;
+        hr = ID3D12VideoDevice3_CheckFeatureSupport(ctx->video_device3, D3D12_FEATURE_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT,
+                                                    &codec_caps, sizeof(codec_caps));
+        if (SUCCEEDED(hr) && codec_caps.IsSupported)
+            break;
+    }
+
+    if (i == FF_ARRAY_ELEMS(hevc_config_support_sets)) {
+        av_log(avctx, AV_LOG_ERROR, "Unsupported codec configuration\n");
+        return AVERROR(EINVAL);
+    }
+
+    ctx->codec_conf.DataSize = sizeof(D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC);
+    ctx->codec_conf.pHEVCConfig = av_mallocz(ctx->codec_conf.DataSize);
+    if (!ctx->codec_conf.pHEVCConfig)
+        return AVERROR(ENOMEM);
+
+    config = ctx->codec_conf.pHEVCConfig;
+
+    config->ConfigurationFlags                  = D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_NONE;
+    config->MinLumaCodingUnitSize               = hevc_caps.MinLumaCodingUnitSize;
+    config->MaxLumaCodingUnitSize               = hevc_caps.MaxLumaCodingUnitSize;
+    config->MinLumaTransformUnitSize            = hevc_caps.MinLumaTransformUnitSize;
+    config->MaxLumaTransformUnitSize            = hevc_caps.MaxLumaTransformUnitSize;
+    config->max_transform_hierarchy_depth_inter = hevc_caps.max_transform_hierarchy_depth_inter;
+    config->max_transform_hierarchy_depth_intra = hevc_caps.max_transform_hierarchy_depth_intra;
+
+    if (hevc_caps.SupportFlags & D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_ASYMETRIC_MOTION_PARTITION_SUPPORT ||
+        hevc_caps.SupportFlags & D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_ASYMETRIC_MOTION_PARTITION_REQUIRED)
+        config->ConfigurationFlags |= D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_USE_ASYMETRIC_MOTION_PARTITION;
+
+    if (hevc_caps.SupportFlags & D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_SAO_FILTER_SUPPORT)
+        config->ConfigurationFlags |= D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_ENABLE_SAO_FILTER;
+
+    if (hevc_caps.SupportFlags & D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_DISABLING_LOOP_FILTER_ACROSS_SLICES_SUPPORT)
+        config->ConfigurationFlags |= D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_DISABLE_LOOP_FILTER_ACROSS_SLICES;
+
+    if (hevc_caps.SupportFlags & D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_TRANSFORM_SKIP_SUPPORT)
+        config->ConfigurationFlags |= D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_ENABLE_TRANSFORM_SKIPPING;
+
+    if (hevc_caps.SupportFlags & D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_P_FRAMES_IMPLEMENTED_AS_LOW_DELAY_B_FRAMES)
+        ctx->bi_not_empty = 1;
+
+    // block sizes
+    min_cu_size = d3d12va_encode_hevc_map_cusize(hevc_caps.MinLumaCodingUnitSize);
+    max_cu_size = d3d12va_encode_hevc_map_cusize(hevc_caps.MaxLumaCodingUnitSize);
+
+    av_log(avctx, AV_LOG_VERBOSE, "Using CTU size %dx%d, "
+           "min CB size %dx%d.\n", max_cu_size, max_cu_size,
+           min_cu_size, min_cu_size);
+
+    base_ctx->surface_width  = FFALIGN(avctx->width,  min_cu_size);
+    base_ctx->surface_height = FFALIGN(avctx->height, min_cu_size);
+
+    return 0;
+}
+
+static int d3d12va_encode_hevc_configure(AVCodecContext *avctx)
+{
+    HWBaseEncodeContext  *base_ctx = avctx->priv_data;
+    D3D12VAEncodeContext      *ctx = avctx->priv_data;
+    D3D12VAEncodeHEVCContext *priv = avctx->priv_data;
+    int fixed_qp_idr, fixed_qp_p, fixed_qp_b;
+    int err;
+
+    err = ff_cbs_init(&priv->cbc, AV_CODEC_ID_HEVC, avctx);
+    if (err < 0)
+        return err;
+
+    // Rate control
+    if (ctx->rc.Mode == D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_CQP) {
+        D3D12_VIDEO_ENCODER_RATE_CONTROL_CQP *cqp_ctl;
+        fixed_qp_p = av_clip(base_ctx->rc_quality, 1, 51);
+        if (avctx->i_quant_factor > 0.0)
+            fixed_qp_idr = av_clip((avctx->i_quant_factor * fixed_qp_p +
+                                    avctx->i_quant_offset) + 0.5, 1, 51);
+        else
+            fixed_qp_idr = fixed_qp_p;
+        if (avctx->b_quant_factor > 0.0)
+            fixed_qp_b = av_clip((avctx->b_quant_factor * fixed_qp_p +
+                                  avctx->b_quant_offset) + 0.5, 1, 51);
+        else
+            fixed_qp_b = fixed_qp_p;
+
+        av_log(avctx, AV_LOG_DEBUG, "Using fixed QP = "
+               "%d / %d / %d for IDR- / P- / B-frames.\n",
+               fixed_qp_idr, fixed_qp_p, fixed_qp_b);
+
+        ctx->rc.ConfigParams.DataSize = sizeof(D3D12_VIDEO_ENCODER_RATE_CONTROL_CQP);
+        cqp_ctl = av_mallocz(ctx->rc.ConfigParams.DataSize);
+        if (!cqp_ctl)
+            return AVERROR(ENOMEM);
+
+        cqp_ctl->ConstantQP_FullIntracodedFrame                  = fixed_qp_idr;
+        cqp_ctl->ConstantQP_InterPredictedFrame_PrevRefOnly      = fixed_qp_p;
+        cqp_ctl->ConstantQP_InterPredictedFrame_BiDirectionalRef = fixed_qp_b;
+
+        ctx->rc.ConfigParams.pConfiguration_CQP = cqp_ctl;
+    }
+
+    // GOP
+    ctx->gop.DataSize = sizeof(D3D12_VIDEO_ENCODER_SEQUENCE_GOP_STRUCTURE_HEVC);
+    ctx->gop.pHEVCGroupOfPictures = av_mallocz(ctx->gop.DataSize);
+    if (!ctx->gop.pHEVCGroupOfPictures)
+        return AVERROR(ENOMEM);
+
+    ctx->gop.pHEVCGroupOfPictures->GOPLength      = base_ctx->gop_size;
+    ctx->gop.pHEVCGroupOfPictures->PPicturePeriod = base_ctx->b_per_p + 1;
+    // Choose log2_max_pic_order_cnt_lsb_minus4 so the POC LSB range
+    // covers the whole GOP; round up when GOP length is not a power of 2.
+    if ((base_ctx->gop_size & (base_ctx->gop_size - 1)) == 0)
+        ctx->gop.pHEVCGroupOfPictures->log2_max_pic_order_cnt_lsb_minus4 =
+            FFMAX(av_log2(base_ctx->gop_size) - 4, 0);
+    else
+        ctx->gop.pHEVCGroupOfPictures->log2_max_pic_order_cnt_lsb_minus4 =
+            FFMAX(av_log2(base_ctx->gop_size) - 3, 0);
+
+    return 0;
+}
+
+static int d3d12va_encode_hevc_set_level(AVCodecContext *avctx)
+{
+    D3D12VAEncodeContext      *ctx = avctx->priv_data;
+    D3D12VAEncodeHEVCContext *priv = avctx->priv_data;
+    int i;
+
+    ctx->level.DataSize = sizeof(D3D12_VIDEO_ENCODER_LEVEL_TIER_CONSTRAINTS_HEVC);
+    ctx->level.pHEVCLevelSetting = av_mallocz(ctx->level.DataSize);
+    if (!ctx->level.pHEVCLevelSetting)
+        return AVERROR(ENOMEM);
+
+    for (i = 0; i < FF_ARRAY_ELEMS(hevc_levels); i++) {
+        if (avctx->level == hevc_levels[i].level) {
+            ctx->level.pHEVCLevelSetting->Level = hevc_levels[i].d3d12_level;
+            break;
+        }
+    }
+
+    if (i == FF_ARRAY_ELEMS(hevc_levels)) {
+        av_log(avctx, AV_LOG_ERROR, "Invalid level %d.\n", avctx->level);
+        return AVERROR(EINVAL);
+    }
+
+    ctx->level.pHEVCLevelSetting->Tier = priv->raw_vps.profile_tier_level.general_tier_flag == 0 ?
+                                         D3D12_VIDEO_ENCODER_TIER_HEVC_MAIN :
+                                         D3D12_VIDEO_ENCODER_TIER_HEVC_HIGH;
+
+    return 0;
+}
+
+static void d3d12va_encode_hevc_free_picture_params(D3D12VAEncodePicture *pic)
+{
+    if (!pic->pic_ctl.pHEVCPicData)
+        return;
+
+    av_freep(&pic->pic_ctl.pHEVCPicData->pList0ReferenceFrames);
+    av_freep(&pic->pic_ctl.pHEVCPicData->pList1ReferenceFrames);
+    av_freep(&pic->pic_ctl.pHEVCPicData->pReferenceFramesReconPictureDescriptors);
+    av_freep(&pic->pic_ctl.pHEVCPicData);
+}
+
+static int d3d12va_encode_hevc_init_picture_params(AVCodecContext *avctx,
+                                                   D3D12VAEncodePicture *pic)
+{
+    HWBaseEncodePicture                             *base_pic = (HWBaseEncodePicture *)pic;
+    D3D12VAEncodeHEVCPicture                            *hpic = base_pic->priv_data;
+    HWBaseEncodePicture                                 *prev = base_pic->prev;
+    D3D12VAEncodeHEVCPicture                           *hprev = prev ? prev->priv_data : NULL;
+    D3D12_VIDEO_ENCODER_REFERENCE_PICTURE_DESCRIPTOR_HEVC *pd = NULL;
+    UINT                                           *ref_list0 = NULL, *ref_list1 = NULL;
+    int i, idx = 0;
+
+    pic->pic_ctl.DataSize = sizeof(D3D12_VIDEO_ENCODER_PICTURE_CONTROL_CODEC_DATA_HEVC);
+    pic->pic_ctl.pHEVCPicData = av_mallocz(pic->pic_ctl.DataSize);
+    if (!pic->pic_ctl.pHEVCPicData)
+        return AVERROR(ENOMEM);
+
+    if (base_pic->type == PICTURE_TYPE_IDR) {
+        av_assert0(base_pic->display_order == base_pic->encode_order);
+        hpic->last_idr_frame = base_pic->display_order;
+    } else {
+        av_assert0(prev);
+        hpic->last_idr_frame = hprev->last_idr_frame;
+    }
+    hpic->pic_order_cnt = base_pic->display_order - hpic->last_idr_frame;
+
+    switch (base_pic->type) {
+        case PICTURE_TYPE_IDR:
+            pic->pic_ctl.pHEVCPicData->FrameType = D3D12_VIDEO_ENCODER_FRAME_TYPE_HEVC_IDR_FRAME;
+            break;
+        case PICTURE_TYPE_I:
+            pic->pic_ctl.pHEVCPicData->FrameType = D3D12_VIDEO_ENCODER_FRAME_TYPE_HEVC_I_FRAME;
+            break;
+        case PICTURE_TYPE_P:
+            pic->pic_ctl.pHEVCPicData->FrameType = D3D12_VIDEO_ENCODER_FRAME_TYPE_HEVC_P_FRAME;
+            break;
+        case PICTURE_TYPE_B:
+            pic->pic_ctl.pHEVCPicData->FrameType = D3D12_VIDEO_ENCODER_FRAME_TYPE_HEVC_B_FRAME;
+            break;
+        default:
+            av_assert0(0 && "invalid picture type");
+    }
+
+    pic->pic_ctl.pHEVCPicData->slice_pic_parameter_set_id = 0;
+    pic->pic_ctl.pHEVCPicData->PictureOrderCountNumber    = hpic->pic_order_cnt;
+
+    if (base_pic->type == PICTURE_TYPE_P || base_pic->type == PICTURE_TYPE_B) {
+        pd = av_calloc(MAX_PICTURE_REFERENCES, sizeof(*pd));
+        if (!pd)
+            return AVERROR(ENOMEM);
+        // Attach immediately so free_picture_params() releases it on error.
+        pic->pic_ctl.pHEVCPicData->pReferenceFramesReconPictureDescriptors = pd;
+
+        ref_list0 = av_calloc(MAX_PICTURE_REFERENCES, sizeof(*ref_list0));
+        if (!ref_list0)
+            return AVERROR(ENOMEM);
+        // Attach immediately so free_picture_params() releases it on error.
+        pic->pic_ctl.pHEVCPicData->pList0ReferenceFrames = ref_list0;
+
+        pic->pic_ctl.pHEVCPicData->List0ReferenceFramesCount = base_pic->nb_refs[0];
+        for (i = 0; i < base_pic->nb_refs[0]; i++) {
+            HWBaseEncodePicture      *ref = base_pic->refs[0][i];
+            D3D12VAEncodeHEVCPicture *href;
+
+            av_assert0(ref && ref->encode_order < base_pic->encode_order);
+            href = ref->priv_data;
+
+            ref_list0[i] = idx;
+            pd[idx].ReconstructedPictureResourceIndex = idx;
+            pd[idx].IsRefUsedByCurrentPic = TRUE;
+            pd[idx].PictureOrderCountNumber = href->pic_order_cnt;
+            idx++;
+        }
+    }
+
+    if (base_pic->type == PICTURE_TYPE_B) {
+        ref_list1 = av_calloc(MAX_PICTURE_REFERENCES, sizeof(*ref_list1));
+        if (!ref_list1)
+            return AVERROR(ENOMEM);
+        // Attach immediately so free_picture_params() releases it on error.
+        pic->pic_ctl.pHEVCPicData->pList1ReferenceFrames = ref_list1;
+
+        pic->pic_ctl.pHEVCPicData->List1ReferenceFramesCount = base_pic->nb_refs[1];
+        for (i = 0; i < base_pic->nb_refs[1]; i++) {
+            HWBaseEncodePicture      *ref = base_pic->refs[1][i];
+            D3D12VAEncodeHEVCPicture *href;
+
+            av_assert0(ref && ref->encode_order < base_pic->encode_order);
+            href = ref->priv_data;
+
+            ref_list1[i] = idx;
+            pd[idx].ReconstructedPictureResourceIndex = idx;
+            pd[idx].IsRefUsedByCurrentPic = TRUE;
+            pd[idx].PictureOrderCountNumber = href->pic_order_cnt;
+            idx++;
+        }
+    }
+
+    pic->pic_ctl.pHEVCPicData->pList0ReferenceFrames = ref_list0;
+    pic->pic_ctl.pHEVCPicData->pList1ReferenceFrames = ref_list1;
+    pic->pic_ctl.pHEVCPicData->ReferenceFramesReconPictureDescriptorsCount = idx;
+    pic->pic_ctl.pHEVCPicData->pReferenceFramesReconPictureDescriptors = pd;
+
+    return 0;
+}
+
+static const D3D12VAEncodeType d3d12va_encode_type_hevc = {
+    .profiles               = d3d12va_encode_hevc_profiles,
+
+    .d3d12_codec            = D3D12_VIDEO_ENCODER_CODEC_HEVC,
+
+    .flags                  = FLAG_B_PICTURES |
+                              FLAG_B_PICTURE_REFERENCES |
+                              FLAG_NON_IDR_KEY_PICTURES,
+
+    .default_quality        = 25,
+
+    .get_encoder_caps       = &d3d12va_encode_hevc_get_encoder_caps,
+
+    .configure              = &d3d12va_encode_hevc_configure,
+
+    .set_level              = &d3d12va_encode_hevc_set_level,
+
+    .picture_priv_data_size = sizeof(D3D12VAEncodeHEVCPicture),
+
+    .init_sequence_params   = &d3d12va_encode_hevc_init_sequence_params,
+
+    .init_picture_params    = &d3d12va_encode_hevc_init_picture_params,
+
+    .free_picture_params    = &d3d12va_encode_hevc_free_picture_params,
+
+    .write_sequence_header  = &d3d12va_encode_hevc_write_sequence_header,
+};
+
+static int d3d12va_encode_hevc_init(AVCodecContext *avctx)
+{
+    HWBaseEncodeContext  *base_ctx = avctx->priv_data;
+    D3D12VAEncodeContext      *ctx = avctx->priv_data;
+    D3D12VAEncodeHEVCContext *priv = avctx->priv_data;
+
+    ctx->codec = &d3d12va_encode_type_hevc;
+
+    if (avctx->profile == AV_PROFILE_UNKNOWN)
+        avctx->profile = priv->profile;
+    if (avctx->level == FF_LEVEL_UNKNOWN)
+        avctx->level = priv->level;
+
+    if (avctx->level != FF_LEVEL_UNKNOWN && avctx->level & ~0xff) {
+        av_log(avctx, AV_LOG_ERROR, "Invalid level %d: must fit "
+               "in 8-bit unsigned integer.\n", avctx->level);
+        return AVERROR(EINVAL);
+    }
+
+    if (priv->qp > 0)
+        base_ctx->explicit_qp = priv->qp;
+
+    return ff_d3d12va_encode_init(avctx);
+}
+
+static int d3d12va_encode_hevc_close(AVCodecContext *avctx)
+{
+    D3D12VAEncodeHEVCContext *priv = avctx->priv_data;
+
+    ff_cbs_fragment_free(&priv->current_access_unit);
+    ff_cbs_close(&priv->cbc);
+
+    av_freep(&priv->common.codec_conf.pHEVCConfig);
+    av_freep(&priv->common.gop.pHEVCGroupOfPictures);
+    av_freep(&priv->common.level.pHEVCLevelSetting);
+
+    return ff_d3d12va_encode_close(avctx);
+}
+
+#define OFFSET(x) offsetof(D3D12VAEncodeHEVCContext, x)
+#define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM)
+static const AVOption d3d12va_encode_hevc_options[] = {
+    HW_BASE_ENCODE_COMMON_OPTIONS,
+    D3D12VA_ENCODE_RC_OPTIONS,
+
+    { "qp", "Constant QP (for P-frames; scaled by qfactor/qoffset for I/B)",
+      OFFSET(qp), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 52, FLAGS },
+
+    { "profile", "Set profile (general_profile_idc)",
+      OFFSET(profile), AV_OPT_TYPE_INT,
+      { .i64 = AV_PROFILE_UNKNOWN }, AV_PROFILE_UNKNOWN, 0xff, FLAGS, "profile" },
+
+#define PROFILE(name, value)  name, NULL, 0, AV_OPT_TYPE_CONST, \
+      { .i64 = value }, 0, 0, FLAGS, "profile"
+    { PROFILE("main",               AV_PROFILE_HEVC_MAIN) },
+    { PROFILE("main10",             AV_PROFILE_HEVC_MAIN_10) },
+    { PROFILE("rext",               AV_PROFILE_HEVC_REXT) },
+#undef PROFILE
+
+    { "tier", "Set tier (general_tier_flag)",
+      OFFSET(tier), AV_OPT_TYPE_INT,
+      { .i64 = 0 }, 0, 1, FLAGS, "tier" },
+    { "main", NULL, 0, AV_OPT_TYPE_CONST,
+      { .i64 = 0 }, 0, 0, FLAGS, "tier" },
+    { "high", NULL, 0, AV_OPT_TYPE_CONST,
+      { .i64 = 1 }, 0, 0, FLAGS, "tier" },
+
+    { "level", "Set level (general_level_idc)",
+      OFFSET(level), AV_OPT_TYPE_INT,
+      { .i64 = FF_LEVEL_UNKNOWN }, FF_LEVEL_UNKNOWN, 0xff, FLAGS, "level" },
+
+#define LEVEL(name, value) name, NULL, 0, AV_OPT_TYPE_CONST, \
+      { .i64 = value }, 0, 0, FLAGS, "level"
+    { LEVEL("1",    30) },
+    { LEVEL("2",    60) },
+    { LEVEL("2.1",  63) },
+    { LEVEL("3",    90) },
+    { LEVEL("3.1",  93) },
+    { LEVEL("4",   120) },
+    { LEVEL("4.1", 123) },
+    { LEVEL("5",   150) },
+    { LEVEL("5.1", 153) },
+    { LEVEL("5.2", 156) },
+    { LEVEL("6",   180) },
+    { LEVEL("6.1", 183) },
+    { LEVEL("6.2", 186) },
+#undef LEVEL
+
+    { NULL },
+};
+
+static const FFCodecDefault d3d12va_encode_hevc_defaults[] = {
+    { "b",              "0"   },
+    { "bf",             "2"   },
+    { "g",              "120" },
+    { "i_qfactor",      "1"   },
+    { "i_qoffset",      "0"   },
+    { "b_qfactor",      "1"   },
+    { "b_qoffset",      "0"   },
+    { "qmin",           "-1"  },
+    { "qmax",           "-1"  },
+    { NULL },
+};
+
+static const AVClass d3d12va_encode_hevc_class = {
+    .class_name = "hevc_d3d12va",
+    .item_name  = av_default_item_name,
+    .option     = d3d12va_encode_hevc_options,
+    .version    = LIBAVUTIL_VERSION_INT,
+};
+
+const FFCodec ff_hevc_d3d12va_encoder = {
+    .p.name         = "hevc_d3d12va",
+    CODEC_LONG_NAME("D3D12VA HEVC encoder"),
+    .p.type         = AVMEDIA_TYPE_VIDEO,
+    .p.id           = AV_CODEC_ID_HEVC,
+    .priv_data_size = sizeof(D3D12VAEncodeHEVCContext),
+    .init           = &d3d12va_encode_hevc_init,
+    FF_CODEC_RECEIVE_PACKET_CB(&ff_hw_base_encode_receive_packet),
+    .close          = &d3d12va_encode_hevc_close,
+    .p.priv_class   = &d3d12va_encode_hevc_class,
+    .p.capabilities = AV_CODEC_CAP_DELAY | AV_CODEC_CAP_HARDWARE |
+                      AV_CODEC_CAP_DR1 | AV_CODEC_CAP_ENCODER_REORDERED_OPAQUE,
+    .caps_internal  = FF_CODEC_CAP_NOT_INIT_THREADSAFE |
+                      FF_CODEC_CAP_INIT_CLEANUP,
+    .defaults       = d3d12va_encode_hevc_defaults,
+    .p.pix_fmts     = (const enum AVPixelFormat[]) {
+        AV_PIX_FMT_D3D12,
+        AV_PIX_FMT_NONE,
+    },
+    .hw_configs     = ff_d3d12va_encode_hw_configs,
+    .p.wrapper_name = "d3d12va",
+};
-- 
2.41.0.windows.1

_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [FFmpeg-devel] [PATCH v7 12/12] Changelog: add D3D12VA HEVC encoder changelog
  2024-03-14  8:14 [FFmpeg-devel] [PATCH v7 01/12] avcodec/vaapi_encode: move pic->input_surface initialization to encode_alloc tong1.wu-at-intel.com
                   ` (9 preceding siblings ...)
  2024-03-14  8:14 ` [FFmpeg-devel] [PATCH v7 11/12] avcodec: add D3D12VA hardware HEVC encoder tong1.wu-at-intel.com
@ 2024-03-14  8:14 ` tong1.wu-at-intel.com
  10 siblings, 0 replies; 15+ messages in thread
From: tong1.wu-at-intel.com @ 2024-03-14  8:14 UTC (permalink / raw)
  To: ffmpeg-devel; +Cc: Tong Wu

From: Tong Wu <tong1.wu@intel.com>

Signed-off-by: Tong Wu <tong1.wu@intel.com>
---
 Changelog | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Changelog b/Changelog
index b7d7535a9e..81ee3a7837 100644
--- a/Changelog
+++ b/Changelog
@@ -34,7 +34,7 @@ version <next>:
 - ffprobe (with -export_side_data film_grain) now prints film grain metadata
 - AEA muxer
 - ffmpeg CLI loopback decoders
-
+- D3D12VA HEVC encoder
 
 version 6.1:
 - libaribcaption decoder
-- 
2.41.0.windows.1


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [FFmpeg-devel] [PATCH v7 11/12] avcodec: add D3D12VA hardware HEVC encoder
  2024-03-14  8:14 ` [FFmpeg-devel] [PATCH v7 11/12] avcodec: add D3D12VA hardware HEVC encoder tong1.wu-at-intel.com
@ 2024-03-28  2:35   ` Wu, Tong1
  2024-04-15  8:42   ` Xiang, Haihao
  1 sibling, 0 replies; 15+ messages in thread
From: Wu, Tong1 @ 2024-03-28  2:35 UTC (permalink / raw)
  To: ffmpeg-devel

Kindly ping. Are there any further comments on v7?

>-----Original Message-----
>From: Wu, Tong1 <tong1.wu@intel.com>
>Sent: Thursday, March 14, 2024 4:15 PM
>To: ffmpeg-devel@ffmpeg.org
>Cc: Wu, Tong1 <tong1.wu@intel.com>
>Subject: [FFmpeg-devel] [PATCH v7 11/12] avcodec: add D3D12VA hardware HEVC encoder
>
>From: Tong Wu <tong1.wu@intel.com>
>
>This implementation is based on D3D12 Video Encoding Spec:
>https://microsoft.github.io/DirectX-Specs/d3d/D3D12VideoEncoding.html
>
>Sample command line for transcoding:
>ffmpeg.exe -hwaccel d3d12va -hwaccel_output_format d3d12 -i input.mp4
>-c:v hevc_d3d12va output.mp4
>
>Signed-off-by: Tong Wu <tong1.wu@intel.com>
>---
> configure                        |    6 +
> libavcodec/Makefile              |    4 +-
> libavcodec/allcodecs.c           |    1 +
> libavcodec/d3d12va_encode.c      | 1550 ++++++++++++++++++++++++++++++
> libavcodec/d3d12va_encode.h      |  321 +++++++
> libavcodec/d3d12va_encode_hevc.c |  957 ++++++++++++++++++
> 6 files changed, 2838 insertions(+), 1 deletion(-)
> create mode 100644 libavcodec/d3d12va_encode.c
> create mode 100644 libavcodec/d3d12va_encode.h
> create mode 100644 libavcodec/d3d12va_encode_hevc.c
>
>diff --git a/configure b/configure
>index c34bdd13f5..53076fbf22 100755
>--- a/configure
>+++ b/configure
>@@ -2570,6 +2570,7 @@ CONFIG_EXTRA="
>     tpeldsp
>     vaapi_1
>     vaapi_encode
>+    d3d12va_encode
>     vc1dsp
>     videodsp
>     vp3dsp
>@@ -3214,6 +3215,7 @@ wmv3_vaapi_hwaccel_select="vc1_vaapi_hwaccel"
> wmv3_vdpau_hwaccel_select="vc1_vdpau_hwaccel"
>
> # hardware-accelerated codecs
>+d3d12va_encode_deps="d3d12va ID3D12VideoEncoder d3d12_encoder_feature"
> mediafoundation_deps="mftransform_h MFCreateAlignedMemoryBuffer"
> omx_deps="libdl pthreads"
> omx_rpi_select="omx"
>@@ -3280,6 +3282,7 @@ h264_v4l2m2m_encoder_deps="v4l2_m2m h264_v4l2_m2m"
> hevc_amf_encoder_deps="amf"
> hevc_cuvid_decoder_deps="cuvid"
> hevc_cuvid_decoder_select="hevc_mp4toannexb_bsf"
>+hevc_d3d12va_encoder_select="cbs_h265 d3d12va_encode"
> hevc_mediacodec_decoder_deps="mediacodec"
> hevc_mediacodec_decoder_select="hevc_mp4toannexb_bsf hevc_parser"
> hevc_mediacodec_encoder_deps="mediacodec"
>@@ -6620,6 +6623,9 @@ check_type "windows.h d3d11.h" "ID3D11VideoDecoder"
> check_type "windows.h d3d11.h" "ID3D11VideoContext"
> check_type "windows.h d3d12.h" "ID3D12Device"
> check_type "windows.h d3d12video.h" "ID3D12VideoDecoder"
>+check_type "windows.h d3d12video.h" "ID3D12VideoEncoder"
>+test_code cc "windows.h d3d12video.h" "D3D12_FEATURE_VIDEO feature = D3D12_FEATURE_VIDEO_ENCODER_CODEC" && \
>+test_code cc "windows.h d3d12video.h" "D3D12_FEATURE_DATA_VIDEO_ENCODER_RESOURCE_REQUIREMENTS req" && enable d3d12_encoder_feature
> check_type "windows.h" "DPI_AWARENESS_CONTEXT" -D_WIN32_WINNT=0x0A00
> check_type "d3d9.h dxva2api.h" DXVA2_ConfigPictureDecode -D_WIN32_WINNT=0x0602
> check_func_headers mfapi.h MFCreateAlignedMemoryBuffer -lmfplat
>diff --git a/libavcodec/Makefile b/libavcodec/Makefile
>index cbfae5f182..cdda3f0d0a 100644
>--- a/libavcodec/Makefile
>+++ b/libavcodec/Makefile
>@@ -84,6 +84,7 @@ OBJS-$(CONFIG_CBS_JPEG)                += cbs_jpeg.o
> OBJS-$(CONFIG_CBS_MPEG2)               += cbs_mpeg2.o
> OBJS-$(CONFIG_CBS_VP8)                 += cbs_vp8.o vp8data.o
> OBJS-$(CONFIG_CBS_VP9)                 += cbs_vp9.o
>+OBJS-$(CONFIG_D3D12VA_ENCODE)          += d3d12va_encode.o hw_base_encode.o
> OBJS-$(CONFIG_DEFLATE_WRAPPER)         += zlib_wrapper.o
> OBJS-$(CONFIG_DOVI_RPU)                += dovi_rpu.o
> OBJS-$(CONFIG_ERROR_RESILIENCE)        += error_resilience.o
>@@ -435,6 +436,7 @@ OBJS-$(CONFIG_HEVC_DECODER)            += hevcdec.o hevc_mvs.o \
>                                           h274.o
> OBJS-$(CONFIG_HEVC_AMF_ENCODER)        += amfenc_hevc.o
> OBJS-$(CONFIG_HEVC_CUVID_DECODER)      += cuviddec.o
>+OBJS-$(CONFIG_HEVC_D3D12VA_ENCODER)    += d3d12va_encode_hevc.o
> OBJS-$(CONFIG_HEVC_MEDIACODEC_DECODER) += mediacodecdec.o
> OBJS-$(CONFIG_HEVC_MEDIACODEC_ENCODER) += mediacodecenc.o
> OBJS-$(CONFIG_HEVC_MF_ENCODER)         += mfenc.o mf_utils.o
>@@ -1263,7 +1265,7 @@ SKIPHEADERS                            += %_tablegen.h \
>
> SKIPHEADERS-$(CONFIG_AMF)              += amfenc.h
> SKIPHEADERS-$(CONFIG_D3D11VA)          += d3d11va.h dxva2_internal.h
>-SKIPHEADERS-$(CONFIG_D3D12VA)          += d3d12va_decode.h
>+SKIPHEADERS-$(CONFIG_D3D12VA)          += d3d12va_decode.h d3d12va_encode.h
> SKIPHEADERS-$(CONFIG_DXVA2)            += dxva2.h dxva2_internal.h
> SKIPHEADERS-$(CONFIG_JNI)              += ffjni.h
> SKIPHEADERS-$(CONFIG_LCMS2)            += fflcms2.h
>diff --git a/libavcodec/allcodecs.c b/libavcodec/allcodecs.c
>index 2386b450a6..7b5093233c 100644
>--- a/libavcodec/allcodecs.c
>+++ b/libavcodec/allcodecs.c
>@@ -855,6 +855,7 @@ extern const FFCodec ff_h264_vaapi_encoder;
> extern const FFCodec ff_h264_videotoolbox_encoder;
> extern const FFCodec ff_hevc_amf_encoder;
> extern const FFCodec ff_hevc_cuvid_decoder;
>+extern const FFCodec ff_hevc_d3d12va_encoder;
> extern const FFCodec ff_hevc_mediacodec_decoder;
> extern const FFCodec ff_hevc_mediacodec_encoder;
> extern const FFCodec ff_hevc_mf_encoder;
>diff --git a/libavcodec/d3d12va_encode.c b/libavcodec/d3d12va_encode.c
>new file mode 100644
>index 0000000000..88a08efa76
>--- /dev/null
>+++ b/libavcodec/d3d12va_encode.c
>@@ -0,0 +1,1550 @@
>+/*
>+ * Direct3D 12 HW acceleration video encoder
>+ *
>+ * Copyright (c) 2024 Intel Corporation
>+ *
>+ * This file is part of FFmpeg.
>+ *
>+ * FFmpeg is free software; you can redistribute it and/or
>+ * modify it under the terms of the GNU Lesser General Public
>+ * License as published by the Free Software Foundation; either
>+ * version 2.1 of the License, or (at your option) any later version.
>+ *
>+ * FFmpeg is distributed in the hope that it will be useful,
>+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
>+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
>+ * Lesser General Public License for more details.
>+ *
>+ * You should have received a copy of the GNU Lesser General Public
>+ * License along with FFmpeg; if not, write to the Free Software
>+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
>+ */
>+
>+#include "libavutil/avassert.h"
>+#include "libavutil/common.h"
>+#include "libavutil/internal.h"
>+#include "libavutil/log.h"
>+#include "libavutil/pixdesc.h"
>+#include "libavutil/hwcontext_d3d12va_internal.h"
>+#include "libavutil/hwcontext_d3d12va.h"
>+
>+#include "avcodec.h"
>+#include "d3d12va_encode.h"
>+#include "encode.h"
>+
>+const AVCodecHWConfigInternal *const ff_d3d12va_encode_hw_configs[] = {
>+    HW_CONFIG_ENCODER_FRAMES(D3D12, D3D12VA),
>+    NULL,
>+};
>+
>+static int d3d12va_fence_completion(AVD3D12VASyncContext *psync_ctx)
>+{
>+    uint64_t completion = ID3D12Fence_GetCompletedValue(psync_ctx->fence);
>+    if (completion < psync_ctx->fence_value) {
>+        if (FAILED(ID3D12Fence_SetEventOnCompletion(psync_ctx->fence, psync_ctx->fence_value, psync_ctx->event)))
>+            return AVERROR(EINVAL);
>+
>+        WaitForSingleObjectEx(psync_ctx->event, INFINITE, FALSE);
>+    }
>+
>+    return 0;
>+}
>+
>+static int d3d12va_sync_with_gpu(AVCodecContext *avctx)
>+{
>+    D3D12VAEncodeContext *ctx = avctx->priv_data;
>+
>+    DX_CHECK(ID3D12CommandQueue_Signal(ctx->command_queue, ctx->sync_ctx.fence, ++ctx->sync_ctx.fence_value));
>+    return d3d12va_fence_completion(&ctx->sync_ctx);
>+
>+fail:
>+    return AVERROR(EINVAL);
>+}
>+
>+typedef struct CommandAllocator {
>+    ID3D12CommandAllocator *command_allocator;
>+    uint64_t fence_value;
>+} CommandAllocator;
>+
>+static int d3d12va_get_valid_command_allocator(AVCodecContext *avctx, ID3D12CommandAllocator **ppAllocator)
>+{
>+    HRESULT hr;
>+    D3D12VAEncodeContext *ctx = avctx->priv_data;
>+    CommandAllocator allocator;
>+
>+    if (av_fifo_peek(ctx->allocator_queue, &allocator, 1, 0) >= 0) {
>+        uint64_t completion = ID3D12Fence_GetCompletedValue(ctx->sync_ctx.fence);
>+        if (completion >= allocator.fence_value) {
>+            *ppAllocator = allocator.command_allocator;
>+            av_fifo_read(ctx->allocator_queue, &allocator, 1);
>+            return 0;
>+        }
>+    }
>+
>+    hr = ID3D12Device_CreateCommandAllocator(ctx->hwctx->device, D3D12_COMMAND_LIST_TYPE_VIDEO_ENCODE,
>+                                             &IID_ID3D12CommandAllocator, (void **)ppAllocator);
>+    if (FAILED(hr)) {
>+        av_log(avctx, AV_LOG_ERROR, "Failed to create a new command allocator!\n");
>+        return AVERROR(EINVAL);
>+    }
>+
>+    return 0;
>+}
>+
>+static int d3d12va_discard_command_allocator(AVCodecContext *avctx, ID3D12CommandAllocator *pAllocator, uint64_t fence_value)
>+{
>+    D3D12VAEncodeContext *ctx = avctx->priv_data;
>+
>+    CommandAllocator allocator = {
>+        .command_allocator = pAllocator,
>+        .fence_value = fence_value,
>+    };
>+
>+    av_fifo_write(ctx->allocator_queue, &allocator, 1);
>+
>+    return 0;
>+}
>+
>+static int d3d12va_encode_wait(AVCodecContext *avctx,
>+                               D3D12VAEncodePicture *pic)
>+{
>+    D3D12VAEncodeContext *ctx     = avctx->priv_data;
>+    HWBaseEncodePicture *base_pic = (HWBaseEncodePicture *)pic;
>+    uint64_t completion;
>+
>+    av_assert0(base_pic->encode_issued);
>+
>+    if (base_pic->encode_complete) {
>+        // Already waited for this picture.
>+        return 0;
>+    }
>+
>+    completion = ID3D12Fence_GetCompletedValue(ctx->sync_ctx.fence);
>+    if (completion < pic->fence_value) {
>+        if (FAILED(ID3D12Fence_SetEventOnCompletion(ctx->sync_ctx.fence, pic->fence_value,
>+                                                    ctx->sync_ctx.event)))
>+            return AVERROR(EINVAL);
>+
>+        WaitForSingleObjectEx(ctx->sync_ctx.event, INFINITE, FALSE);
>+    }
>+
>+    av_log(avctx, AV_LOG_DEBUG, "Sync to pic %"PRId64"/%"PRId64" "
>+           "(input surface %p).\n", base_pic->display_order,
>+           base_pic->encode_order, pic->input_surface->texture);
>+
>+    av_frame_free(&base_pic->input_image);
>+
>+    base_pic->encode_complete = 1;
>+    return 0;
>+}
>+
>+static int d3d12va_encode_create_metadata_buffers(AVCodecContext *avctx,
>+                                                  D3D12VAEncodePicture *pic)
>+{
>+    D3D12VAEncodeContext *ctx = avctx->priv_data;
>+    int width = sizeof(D3D12_VIDEO_ENCODER_OUTPUT_METADATA) + sizeof(D3D12_VIDEO_ENCODER_FRAME_SUBREGION_METADATA);
>+    D3D12_HEAP_PROPERTIES encoded_meta_props = { .Type = D3D12_HEAP_TYPE_DEFAULT }, resolved_meta_props;
>+    D3D12_HEAP_TYPE resolved_heap_type = D3D12_HEAP_TYPE_READBACK;
>+    HRESULT hr;
>+
>+    D3D12_RESOURCE_DESC meta_desc = {
>+        .Dimension        = D3D12_RESOURCE_DIMENSION_BUFFER,
>+        .Alignment        = 0,
>+        .Width            = ctx->req.MaxEncoderOutputMetadataBufferSize,
>+        .Height           = 1,
>+        .DepthOrArraySize = 1,
>+        .MipLevels        = 1,
>+        .Format           = DXGI_FORMAT_UNKNOWN,
>+        .SampleDesc       = { .Count = 1, .Quality = 0 },
>+        .Layout           = D3D12_TEXTURE_LAYOUT_ROW_MAJOR,
>+        .Flags            = D3D12_RESOURCE_FLAG_NONE,
>+    };
>+
>+    hr = ID3D12Device_CreateCommittedResource(ctx->hwctx->device, &encoded_meta_props, D3D12_HEAP_FLAG_NONE,
>+                                              &meta_desc, D3D12_RESOURCE_STATE_COMMON, NULL,
>+                                              &IID_ID3D12Resource, (void **)&pic->encoded_metadata);
>+    if (FAILED(hr)) {
>+        av_log(avctx, AV_LOG_ERROR, "Failed to create metadata buffer.\n");
>+        return AVERROR_UNKNOWN;
>+    }
>+
>+    ctx->hwctx->device->lpVtbl->GetCustomHeapProperties(ctx->hwctx->device, &resolved_meta_props, 0, resolved_heap_type);
>+
>+    meta_desc.Width = width;
>+
>+    hr = ID3D12Device_CreateCommittedResource(ctx->hwctx->device, &resolved_meta_props, D3D12_HEAP_FLAG_NONE,
>+                                              &meta_desc, D3D12_RESOURCE_STATE_COMMON, NULL,
>+                                              &IID_ID3D12Resource, (void **)&pic->resolved_metadata);
>+
>+    if (FAILED(hr)) {
>+        av_log(avctx, AV_LOG_ERROR, "Failed to create output metadata buffer.\n");
>+        return AVERROR_UNKNOWN;
>+    }
>+
>+    return 0;
>+}
>+
>+static int d3d12va_encode_issue(AVCodecContext *avctx,
>+                                const HWBaseEncodePicture *base_pic)
>+{
>+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
>+    D3D12VAEncodeContext     *ctx = avctx->priv_data;
>+    AVD3D12VAFramesContext *frames_hwctx = base_ctx->input_frames->hwctx;
>+    D3D12VAEncodePicture *pic = (D3D12VAEncodePicture *)base_pic;
>+    int err, i, j;
>+    HRESULT hr;
>+    char data[MAX_PARAM_BUFFER_SIZE];
>+    void *ptr;
>+    size_t bit_len;
>+    ID3D12CommandAllocator *command_allocator = NULL;
>+    ID3D12VideoEncodeCommandList2 *cmd_list = ctx->command_list;
>+    D3D12_RESOURCE_BARRIER barriers[32] = { 0 };
>+    D3D12_VIDEO_ENCODE_REFERENCE_FRAMES d3d12_refs = { 0 };
>+
>+    D3D12_VIDEO_ENCODER_ENCODEFRAME_INPUT_ARGUMENTS input_args = {
>+        .SequenceControlDesc = {
>+            .Flags = D3D12_VIDEO_ENCODER_SEQUENCE_CONTROL_FLAG_NONE,
>+            .IntraRefreshConfig = { 0 },
>+            .RateControl = ctx->rc,
>+            .PictureTargetResolution = ctx->resolution,
>+            .SelectedLayoutMode = D3D12_VIDEO_ENCODER_FRAME_SUBREGION_LAYOUT_MODE_FULL_FRAME,
>+            .FrameSubregionsLayoutData = { 0 },
>+            .CodecGopSequence = ctx->gop,
>+        },
>+        .pInputFrame = pic->input_surface->texture,
>+        .InputFrameSubresource = 0,
>+    };
>+
>+    D3D12_VIDEO_ENCODER_ENCODEFRAME_OUTPUT_ARGUMENTS output_args = { 0 };
>+
>+    D3D12_VIDEO_ENCODER_RESOLVE_METADATA_INPUT_ARGUMENTS input_metadata = {
>+        .EncoderCodec = ctx->codec->d3d12_codec,
>+        .EncoderProfile = ctx->profile->d3d12_profile,
>+        .EncoderInputFormat = frames_hwctx->format,
>+        .EncodedPictureEffectiveResolution = ctx->resolution,
>+    };
>+
>+    D3D12_VIDEO_ENCODER_RESOLVE_METADATA_OUTPUT_ARGUMENTS output_metadata = { 0 };
>+
>+    memset(data, 0, sizeof(data));
>+
>+    av_log(avctx, AV_LOG_DEBUG, "Issuing encode for pic %"PRId64"/%"PRId64" "
>+           "as type %s.\n", base_pic->display_order, base_pic->encode_order,
>+           ff_hw_base_encode_get_pictype_name(base_pic->type));
>+    if (base_pic->nb_refs[0] == 0 && base_pic->nb_refs[1] == 0) {
>+        av_log(avctx, AV_LOG_DEBUG, "No reference pictures.\n");
>+    } else {
>+        av_log(avctx, AV_LOG_DEBUG, "L0 refers to");
>+        for (i = 0; i < base_pic->nb_refs[0]; i++) {
>+            av_log(avctx, AV_LOG_DEBUG, " %"PRId64"/%"PRId64,
>+                   base_pic->refs[0][i]->display_order, base_pic->refs[0][i]->encode_order);
>+        }
>+        av_log(avctx, AV_LOG_DEBUG, ".\n");
>+
>+        if (base_pic->nb_refs[1]) {
>+            av_log(avctx, AV_LOG_DEBUG, "L1 refers to");
>+            for (i = 0; i < base_pic->nb_refs[1]; i++) {
>+                av_log(avctx, AV_LOG_DEBUG, " %"PRId64"/%"PRId64,
>+                       base_pic->refs[1][i]->display_order, base_pic->refs[1][i]->encode_order);
>+            }
>+            av_log(avctx, AV_LOG_DEBUG, ".\n");
>+        }
>+    }
>+
>+    av_assert0(!base_pic->encode_issued);
>+    for (i = 0; i < base_pic->nb_refs[0]; i++) {
>+        av_assert0(base_pic->refs[0][i]);
>+        av_assert0(base_pic->refs[0][i]->encode_issued);
>+    }
>+    for (i = 0; i < base_pic->nb_refs[1]; i++) {
>+        av_assert0(base_pic->refs[1][i]);
>+        av_assert0(base_pic->refs[1][i]->encode_issued);
>+    }
>+
>+    av_log(avctx, AV_LOG_DEBUG, "Input surface is %p.\n", pic->input_surface->texture);
>+
>+    err = av_hwframe_get_buffer(base_ctx->recon_frames_ref, base_pic->recon_image, 0);
>+    if (err < 0) {
>+        err = AVERROR(ENOMEM);
>+        goto fail;
>+    }
>+
>+    pic->recon_surface = (AVD3D12VAFrame *)base_pic->recon_image->data[0];
>+    av_log(avctx, AV_LOG_DEBUG, "Recon surface is %p.\n",
>+           pic->recon_surface->texture);
>+
>+    pic->output_buffer_ref = av_buffer_pool_get(ctx->output_buffer_pool);
>+    if (!pic->output_buffer_ref) {
>+        err = AVERROR(ENOMEM);
>+        goto fail;
>+    }
>+    pic->output_buffer = (ID3D12Resource *)pic->output_buffer_ref->data;
>+    av_log(avctx, AV_LOG_DEBUG, "Output buffer is %p.\n",
>+           pic->output_buffer);
>+
>+    err = d3d12va_encode_create_metadata_buffers(avctx, pic);
>+    if (err < 0)
>+        goto fail;
>+
>+    if (ctx->codec->init_picture_params) {
>+        err = ctx->codec->init_picture_params(avctx, pic);
>+        if (err < 0) {
>+            av_log(avctx, AV_LOG_ERROR, "Failed to initialise picture "
>+                   "parameters: %d.\n", err);
>+            goto fail;
>+        }
>+    }
>+
>+    if (base_pic->type == PICTURE_TYPE_IDR) {
>+        if (ctx->codec->write_sequence_header) {
>+            bit_len = 8 * sizeof(data);
>+            err = ctx->codec->write_sequence_header(avctx, data, &bit_len);
>+            if (err < 0) {
>+                av_log(avctx, AV_LOG_ERROR, "Failed to write per-sequence "
>+                       "header: %d.\n", err);
>+                goto fail;
>+            }
>+        }
>+
>+        pic->header_size = (int)bit_len / 8;
>+        pic->header_size = pic->header_size % ctx->req.CompressedBitstreamBufferAccessAlignment ?
>+                           FFALIGN(pic->header_size, ctx->req.CompressedBitstreamBufferAccessAlignment) :
>+                           pic->header_size;
>+
>+        hr = ID3D12Resource_Map(pic->output_buffer, 0, NULL, (void **)&ptr);
>+        if (FAILED(hr)) {
>+            err = AVERROR_UNKNOWN;
>+            goto fail;
>+        }
>+
>+        memcpy(ptr, data, pic->header_size);
>+        ID3D12Resource_Unmap(pic->output_buffer, 0, NULL);
>+    }
>+
>+    d3d12_refs.NumTexture2Ds = base_pic->nb_refs[0] + base_pic->nb_refs[1];
>+    if (d3d12_refs.NumTexture2Ds) {
>+        d3d12_refs.ppTexture2Ds = av_calloc(d3d12_refs.NumTexture2Ds,
>+                                            sizeof(*d3d12_refs.ppTexture2Ds));
>+        if (!d3d12_refs.ppTexture2Ds) {
>+            err = AVERROR(ENOMEM);
>+            goto fail;
>+        }
>+
>+        i = 0;
>+        for (j = 0; j < base_pic->nb_refs[0]; j++)
>+            d3d12_refs.ppTexture2Ds[i++] = ((D3D12VAEncodePicture *)base_pic->refs[0][j])->recon_surface->texture;
>+        for (j = 0; j < base_pic->nb_refs[1]; j++)
>+            d3d12_refs.ppTexture2Ds[i++] = ((D3D12VAEncodePicture *)base_pic->refs[1][j])->recon_surface->texture;
>+    }
>+
>+    input_args.PictureControlDesc.IntraRefreshFrameIndex  = 0;
>+    if (base_pic->is_reference)
>+        input_args.PictureControlDesc.Flags |= D3D12_VIDEO_ENCODER_PICTURE_CONTROL_FLAG_USED_AS_REFERENCE_PICTURE;
>+
>+    input_args.PictureControlDesc.PictureControlCodecData = pic->pic_ctl;
>+    input_args.PictureControlDesc.ReferenceFrames         = d3d12_refs;
>+    input_args.CurrentFrameBitstreamMetadataSize          = pic->header_size;
>+
>+    output_args.Bitstream.pBuffer                                    = pic->output_buffer;
>+    output_args.Bitstream.FrameStartOffset                           = pic->header_size;
>+    output_args.ReconstructedPicture.pReconstructedPicture           = pic->recon_surface->texture;
>+    output_args.ReconstructedPicture.ReconstructedPictureSubresource = 0;
>+    output_args.EncoderOutputMetadata.pBuffer                        = pic->encoded_metadata;
>+    output_args.EncoderOutputMetadata.Offset                         = 0;
>+
>+    input_metadata.HWLayoutMetadata.pBuffer = pic->encoded_metadata;
>+    input_metadata.HWLayoutMetadata.Offset  = 0;
>+
>+    output_metadata.ResolvedLayoutMetadata.pBuffer = pic->resolved_metadata;
>+    output_metadata.ResolvedLayoutMetadata.Offset  = 0;
>+
>+    err = d3d12va_get_valid_command_allocator(avctx, &command_allocator);
>+    if (err < 0)
>+        goto fail;
>+
>+    hr = ID3D12CommandAllocator_Reset(command_allocator);
>+    if (FAILED(hr)) {
>+        err = AVERROR_UNKNOWN;
>+        goto fail;
>+    }
>+
>+    hr = ID3D12VideoEncodeCommandList2_Reset(cmd_list, command_allocator);
>+    if (FAILED(hr)) {
>+        err = AVERROR_UNKNOWN;
>+        goto fail;
>+    }
>+
>+#define TRANSITION_BARRIER(res, before, after)                      \
>+    (D3D12_RESOURCE_BARRIER) {                                      \
>+        .Type  = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION,            \
>+        .Flags = D3D12_RESOURCE_BARRIER_FLAG_NONE,                  \
>+        .Transition = {                                             \
>+            .pResource   = res,                                     \
>+            .Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES, \
>+            .StateBefore = before,                                  \
>+            .StateAfter  = after,                                   \
>+        },                                                          \
>+    }
>+
>+    barriers[0] = TRANSITION_BARRIER(pic->input_surface->texture,
>+                                     D3D12_RESOURCE_STATE_COMMON,
>+                                     D3D12_RESOURCE_STATE_VIDEO_ENCODE_READ);
>+    barriers[1] = TRANSITION_BARRIER(pic->output_buffer,
>+                                     D3D12_RESOURCE_STATE_COMMON,
>+                                     D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE);
>+    barriers[2] = TRANSITION_BARRIER(pic->recon_surface->texture,
>+                                     D3D12_RESOURCE_STATE_COMMON,
>+                                     D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE);
>+    barriers[3] = TRANSITION_BARRIER(pic->encoded_metadata,
>+                                     D3D12_RESOURCE_STATE_COMMON,
>+                                     D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE);
>+    barriers[4] = TRANSITION_BARRIER(pic->resolved_metadata,
>+                                     D3D12_RESOURCE_STATE_COMMON,
>+                                     D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE);
>+
>+    ID3D12VideoEncodeCommandList2_ResourceBarrier(cmd_list, 5, barriers);
>+
>+    if (d3d12_refs.NumTexture2Ds) {
>+        D3D12_RESOURCE_BARRIER refs_barriers[3];
>+
>+        for (i = 0; i < d3d12_refs.NumTexture2Ds; i++)
>+            refs_barriers[i] = TRANSITION_BARRIER(d3d12_refs.ppTexture2Ds[i],
>+                                                  D3D12_RESOURCE_STATE_COMMON,
>+                                                  D3D12_RESOURCE_STATE_VIDEO_ENCODE_READ);
>+
>+        ID3D12VideoEncodeCommandList2_ResourceBarrier(cmd_list, d3d12_refs.NumTexture2Ds,
>+                                                      refs_barriers);
>+    }
>+
>+    ID3D12VideoEncodeCommandList2_EncodeFrame(cmd_list, ctx->encoder, ctx->encoder_heap,
>+                                              &input_args, &output_args);
>+
>+    barriers[3] = TRANSITION_BARRIER(pic->encoded_metadata,
>+                                     D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE,
>+                                     D3D12_RESOURCE_STATE_VIDEO_ENCODE_READ);
>+
>+    ID3D12VideoEncodeCommandList2_ResourceBarrier(cmd_list, 1, &barriers[3]);
>+
>+    ID3D12VideoEncodeCommandList2_ResolveEncoderOutputMetadata(cmd_list, &input_metadata, &output_metadata);
>+
>+    if (d3d12_refs.NumTexture2Ds) {
>+        D3D12_RESOURCE_BARRIER refs_barriers[3];
>+
>+        for (i = 0; i < d3d12_refs.NumTexture2Ds; i++)
>+            refs_barriers[i] = TRANSITION_BARRIER(d3d12_refs.ppTexture2Ds[i],
>+                                                  D3D12_RESOURCE_STATE_VIDEO_ENCODE_READ,
>+                                                  D3D12_RESOURCE_STATE_COMMON);
>+
>+        ID3D12VideoEncodeCommandList2_ResourceBarrier(cmd_list, d3d12_refs.NumTexture2Ds,
>+                                                      refs_barriers);
>+    }
>+
>+    barriers[0] = TRANSITION_BARRIER(pic->input_surface->texture,
>+                                     D3D12_RESOURCE_STATE_VIDEO_ENCODE_READ,
>+                                     D3D12_RESOURCE_STATE_COMMON);
>+    barriers[1] = TRANSITION_BARRIER(pic->output_buffer,
>+                                     D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE,
>+                                     D3D12_RESOURCE_STATE_COMMON);
>+    barriers[2] = TRANSITION_BARRIER(pic->recon_surface->texture,
>+                                     D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE,
>+                                     D3D12_RESOURCE_STATE_COMMON);
>+    barriers[3] = TRANSITION_BARRIER(pic->encoded_metadata,
>+                                     D3D12_RESOURCE_STATE_VIDEO_ENCODE_READ,
>+                                     D3D12_RESOURCE_STATE_COMMON);
>+    barriers[4] = TRANSITION_BARRIER(pic->resolved_metadata,
>+                                     D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE,
>+                                     D3D12_RESOURCE_STATE_COMMON);
>+
>+    ID3D12VideoEncodeCommandList2_ResourceBarrier(cmd_list, 5, barriers);
>+
>+    hr = ID3D12VideoEncodeCommandList2_Close(cmd_list);
>+    if (FAILED(hr)) {
>+        err = AVERROR_UNKNOWN;
>+        goto fail;
>+    }
>+
>+    hr = ID3D12CommandQueue_Wait(ctx->command_queue, pic->input_surface->sync_ctx.fence,
>+                                 pic->input_surface->sync_ctx.fence_value);
>+    if (FAILED(hr)) {
>+        err = AVERROR_UNKNOWN;
>+        goto fail;
>+    }
>+
>+    ID3D12CommandQueue_ExecuteCommandLists(ctx->command_queue, 1, (ID3D12CommandList **)&ctx->command_list);
>+
>+    hr = ID3D12CommandQueue_Signal(ctx->command_queue, pic->input_surface->sync_ctx.fence,
>+                                   ++pic->input_surface->sync_ctx.fence_value);
>+    if (FAILED(hr)) {
>+        err = AVERROR_UNKNOWN;
>+        goto fail;
>+    }
>+
>+    hr = ID3D12CommandQueue_Signal(ctx->command_queue, ctx->sync_ctx.fence, ++ctx->sync_ctx.fence_value);
>+    if (FAILED(hr)) {
>+        err = AVERROR_UNKNOWN;
>+        goto fail;
>+    }
>+
>+    err = d3d12va_discard_command_allocator(avctx, command_allocator, ctx->sync_ctx.fence_value);
>+    if (err < 0)
>+        goto fail;
>+
>+    pic->fence_value = ctx->sync_ctx.fence_value;
>+
>+    if (d3d12_refs.ppTexture2Ds)
>+        av_freep(&d3d12_refs.ppTexture2Ds);
>+
>+    return 0;
>+
>+fail:
>+    if (command_allocator)
>+        d3d12va_discard_command_allocator(avctx, command_allocator, ctx->sync_ctx.fence_value);
>+
>+    if (d3d12_refs.ppTexture2Ds)
>+        av_freep(&d3d12_refs.ppTexture2Ds);
>+
>+    if (ctx->codec->free_picture_params)
>+        ctx->codec->free_picture_params(pic);
>+
>+    av_buffer_unref(&pic->output_buffer_ref);
>+    pic->output_buffer = NULL;
>+    D3D12_OBJECT_RELEASE(pic->encoded_metadata);
>+    D3D12_OBJECT_RELEASE(pic->resolved_metadata);
>+    return err;
>+}
>+
>+static int d3d12va_encode_discard(AVCodecContext *avctx,
>+                                  D3D12VAEncodePicture *pic)
>+{
>+    HWBaseEncodePicture *base_pic = (HWBaseEncodePicture *)pic;
>+    d3d12va_encode_wait(avctx, pic);
>+
>+    if (pic->output_buffer_ref) {
>+        av_log(avctx, AV_LOG_DEBUG, "Discard output for pic "
>+               "%"PRId64"/%"PRId64".\n",
>+               base_pic->display_order, base_pic->encode_order);
>+
>+        av_buffer_unref(&pic->output_buffer_ref);
>+        pic->output_buffer = NULL;
>+    }
>+
>+    D3D12_OBJECT_RELEASE(pic->encoded_metadata);
>+    D3D12_OBJECT_RELEASE(pic->resolved_metadata);
>+
>+    return 0;
>+}
>+
>+static int d3d12va_encode_free_rc_params(AVCodecContext *avctx)
>+{
>+    D3D12VAEncodeContext *ctx = avctx->priv_data;
>+
>+    switch (ctx->rc.Mode)
>+    {
>+    case D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_CQP:
>+        av_freep(&ctx->rc.ConfigParams.pConfiguration_CQP);
>+        break;
>+    case D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_CBR:
>+        av_freep(&ctx->rc.ConfigParams.pConfiguration_CBR);
>+        break;
>+    case D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_VBR:
>+        av_freep(&ctx->rc.ConfigParams.pConfiguration_VBR);
>+        break;
>+    case D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_QVBR:
>+        av_freep(&ctx->rc.ConfigParams.pConfiguration_QVBR);
>+        break;
>+    default:
>+        break;
>+    }
>+
>+    return 0;
>+}
>+
>+static HWBaseEncodePicture *d3d12va_encode_alloc(AVCodecContext *avctx,
>+                                                 const AVFrame *frame)
>+{
>+    D3D12VAEncodeContext *ctx = avctx->priv_data;
>+    D3D12VAEncodePicture *pic;
>+
>+    pic = av_mallocz(sizeof(*pic));
>+    if (!pic)
>+        return NULL;
>+
>+    if (ctx->codec->picture_priv_data_size > 0) {
>+        pic->base.priv_data = av_mallocz(ctx->codec->picture_priv_data_size);
>+        if (!pic->base.priv_data) {
>+            av_freep(&pic);
>+            return NULL;
>+        }
>+    }
>+
>+    pic->input_surface = (AVD3D12VAFrame *)frame->data[0];
>+
>+    return (HWBaseEncodePicture *)pic;
>+}
>+
>+static int d3d12va_encode_free(AVCodecContext *avctx,
>+                               HWBaseEncodePicture *base_pic)
>+{
>+    D3D12VAEncodeContext *ctx = avctx->priv_data;
>+    D3D12VAEncodePicture *pic = (D3D12VAEncodePicture *)base_pic;
>+
>+    if (base_pic->encode_issued)
>+        d3d12va_encode_discard(avctx, pic);
>+
>+    if (ctx->codec->free_picture_params)
>+        ctx->codec->free_picture_params(pic);
>+
>+    ff_hw_base_encode_free(avctx, base_pic);
>+
>+    av_free(pic);
>+
>+    return 0;
>+}
>+
>+static int d3d12va_encode_get_buffer_size(AVCodecContext *avctx,
>+                                          D3D12VAEncodePicture *pic, size_t *size)
>+{
>+    D3D12_VIDEO_ENCODER_OUTPUT_METADATA *meta = NULL;
>+    uint8_t *data;
>+    HRESULT hr;
>+    int err;
>+
>+    hr = ID3D12Resource_Map(pic->resolved_metadata, 0, NULL, (void **)&data);
>+    if (FAILED(hr)) {
>+        err = AVERROR_UNKNOWN;
>+        return err;
>+    }
>+
>+    meta = (D3D12_VIDEO_ENCODER_OUTPUT_METADATA *)data;
>+
>+    if (meta->EncodeErrorFlags != D3D12_VIDEO_ENCODER_ENCODE_ERROR_FLAG_NO_ERROR) {
>+        av_log(avctx, AV_LOG_ERROR, "Encode failed %"PRIu64"\n", meta->EncodeErrorFlags);
>+        err = AVERROR(EINVAL);
>+        return err;
>+    }
>+
>+    if (meta->EncodedBitstreamWrittenBytesCount == 0) {
>+        av_log(avctx, AV_LOG_ERROR, "No bytes were written to encoded bitstream\n");
>+        err = AVERROR(EINVAL);
>+        return err;
>+    }
>+
>+    *size = meta->EncodedBitstreamWrittenBytesCount;
>+
>+    ID3D12Resource_Unmap(pic->resolved_metadata, 0, NULL);
>+
>+    return 0;
>+}
>+
>+static int d3d12va_encode_get_coded_data(AVCodecContext *avctx,
>+                                         D3D12VAEncodePicture *pic, AVPacket *pkt)
>+{
>+    int err;
>+    uint8_t *ptr, *mapped_data;
>+    size_t total_size = 0;
>+    HRESULT hr;
>+
>+    err = d3d12va_encode_get_buffer_size(avctx, pic, &total_size);
>+    if (err < 0)
>+        goto end;
>+
>+    total_size += pic->header_size;
>+    av_log(avctx, AV_LOG_DEBUG, "Output buffer size %"PRId64"\n",
>total_size);
>+
>+    hr = ID3D12Resource_Map(pic->output_buffer, 0, NULL, (void **)&mapped_data);
>+    if (FAILED(hr)) {
>+        err = AVERROR_UNKNOWN;
>+        goto end;
>+    }
>+
>+    err = ff_get_encode_buffer(avctx, pkt, total_size, 0);
>+    if (err < 0)
>+        goto end;
>+    ptr = pkt->data;
>+
>+    memcpy(ptr, mapped_data, total_size);
>+
>+    ID3D12Resource_Unmap(pic->output_buffer, 0, NULL);
>+
>+end:
>+    av_buffer_unref(&pic->output_buffer_ref);
>+    pic->output_buffer = NULL;
>+    return err;
>+}
>+
>+static int d3d12va_encode_output(AVCodecContext *avctx,
>+                                 const HWBaseEncodePicture *base_pic, AVPacket *pkt)
>+{
>+    D3D12VAEncodeContext *ctx = avctx->priv_data;
>+    D3D12VAEncodePicture *pic = (D3D12VAEncodePicture *)base_pic;
>+    AVPacket *pkt_ptr = pkt;
>+    int err;
>+
>+    err = d3d12va_encode_wait(avctx, pic);
>+    if (err < 0)
>+        return err;
>+
>+    err = d3d12va_encode_get_coded_data(avctx, pic, pkt);
>+    if (err < 0)
>+        return err;
>+
>+    av_log(avctx, AV_LOG_DEBUG, "Output read for pic %"PRId64"/%"PRId64".\n",
>+           base_pic->display_order, base_pic->encode_order);
>+
>+    ff_hw_base_encode_set_output_property(avctx, (HWBaseEncodePicture *)base_pic, pkt_ptr, 0);
>+
>+    return 0;
>+}
>+
>+static int d3d12va_encode_set_profile(AVCodecContext *avctx)
>+{
>+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
>+    D3D12VAEncodeContext *ctx     = avctx->priv_data;
>+    const D3D12VAEncodeProfile *profile;
>+    const AVPixFmtDescriptor *desc;
>+    int i, depth;
>+
>+    desc = av_pix_fmt_desc_get(base_ctx->input_frames->sw_format);
>+    if (!desc) {
>+        av_log(avctx, AV_LOG_ERROR, "Invalid input pixfmt (%d).\n",
>+               base_ctx->input_frames->sw_format);
>+        return AVERROR(EINVAL);
>+    }
>+
>+    depth = desc->comp[0].depth;
>+    for (i = 1; i < desc->nb_components; i++) {
>+        if (desc->comp[i].depth != depth) {
>+            av_log(avctx, AV_LOG_ERROR, "Invalid input pixfmt (%s).\n",
>+                   desc->name);
>+            return AVERROR(EINVAL);
>+        }
>+    }
>+    av_log(avctx, AV_LOG_VERBOSE, "Input surface format is %s.\n",
>+           desc->name);
>+
>+    av_assert0(ctx->codec->profiles);
>+    for (i = 0; (ctx->codec->profiles[i].av_profile !=
>+                 AV_PROFILE_UNKNOWN); i++) {
>+        profile = &ctx->codec->profiles[i];
>+        if (depth               != profile->depth ||
>+            desc->nb_components != profile->nb_components)
>+            continue;
>+        if (desc->nb_components > 1 &&
>+            (desc->log2_chroma_w != profile->log2_chroma_w ||
>+             desc->log2_chroma_h != profile->log2_chroma_h))
>+            continue;
>+        if (avctx->profile != profile->av_profile &&
>+            avctx->profile != AV_PROFILE_UNKNOWN)
>+            continue;
>+
>+        ctx->profile = profile;
>+        break;
>+    }
>+    if (!ctx->profile) {
>+        av_log(avctx, AV_LOG_ERROR, "No usable encoding profile found.\n");
>+        return AVERROR(ENOSYS);
>+    }
>+
>+    avctx->profile = profile->av_profile;
>+    return 0;
>+}
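
The profile selection above walks the codec's terminated profile table and matches on component depth, component count, and (for multi-component formats) chroma subsampling, optionally constrained by an explicit avctx->profile. A minimal self-contained model of that matching logic, with an illustrative mock table (names and values here are made up, not from the patch):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in for the profile table; -99 plays the role of
 * AV_PROFILE_UNKNOWN as the table terminator. */
typedef struct MockProfile {
    int av_profile;
    int depth;
    int nb_components;
    int log2_chroma_w;
    int log2_chroma_h;
} MockProfile;

static const MockProfile mock_table[] = {
    { 100,  8, 3, 1, 1 },  /* illustrative 8-bit 4:2:0 profile  */
    { 110, 10, 3, 1, 1 },  /* illustrative 10-bit 4:2:0 profile */
    { -99 },
};

/* Return the first table entry matching the input format, or NULL. */
static const MockProfile *match_profile(const MockProfile *table,
                                        int depth, int nb_components,
                                        int log2_chroma_w, int log2_chroma_h)
{
    for (int i = 0; table[i].av_profile != -99; i++) {
        if (table[i].depth         != depth ||
            table[i].nb_components != nb_components)
            continue;
        /* Chroma layout only matters when there is chroma. */
        if (nb_components > 1 &&
            (table[i].log2_chroma_w != log2_chroma_w ||
             table[i].log2_chroma_h != log2_chroma_h))
            continue;
        return &table[i];
    }
    return NULL;
}
```

As in the patch, the first compatible entry wins, so table order encodes profile preference.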
>+
>+static const D3D12VAEncodeRCMode d3d12va_encode_rc_modes[] = {
>+    //                     Bitrate   Quality
>+    //                        | Maxrate | HRD/VBV
>+    { 0 }, //             |    |    |    |
>+    { RC_MODE_CQP,  "CQP",  0,   0,   1,   0, 1, D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_CQP },
>+    { RC_MODE_CBR,  "CBR",  1,   0,   0,   1, 1, D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_CBR },
>+    { RC_MODE_VBR,  "VBR",  1,   1,   0,   1, 1, D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_VBR },
>+    { RC_MODE_QVBR, "QVBR", 1,   1,   1,   1, 1, D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_QVBR },
>+};
>+
>+static int check_rate_control_support(AVCodecContext *avctx, const D3D12VAEncodeRCMode *rc_mode)
>+{
>+    HRESULT hr;
>+    D3D12VAEncodeContext *ctx = avctx->priv_data;
>+    D3D12_FEATURE_DATA_VIDEO_ENCODER_RATE_CONTROL_MODE d3d12_rc_mode = {
>+        .Codec = ctx->codec->d3d12_codec,
>+    };
>+
>+    if (!rc_mode->d3d12_mode)
>+        return 0;
>+
>+    d3d12_rc_mode.IsSupported = 0;
>+    d3d12_rc_mode.RateControlMode = rc_mode->d3d12_mode;
>+
>+    hr = ID3D12VideoDevice3_CheckFeatureSupport(ctx->video_device3,
>+                                                D3D12_FEATURE_VIDEO_ENCODER_RATE_CONTROL_MODE,
>+                                                &d3d12_rc_mode, sizeof(d3d12_rc_mode));
>+    if (FAILED(hr)) {
>+        av_log(avctx, AV_LOG_ERROR, "Failed to check rate control support.\n");
>+        return 0;
>+    }
>+
>+    return d3d12_rc_mode.IsSupported;
>+}
>+
>+static int d3d12va_encode_init_rate_control(AVCodecContext *avctx)
>+{
>+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
>+    D3D12VAEncodeContext     *ctx = avctx->priv_data;
>+    int64_t rc_target_bitrate;
>+    int64_t rc_peak_bitrate;
>+    int     rc_quality;
>+    int64_t hrd_buffer_size;
>+    int64_t hrd_initial_buffer_fullness;
>+    int fr_num, fr_den;
>+    const D3D12VAEncodeRCMode *rc_mode;
>+
>+    // Rate control mode selection:
>+    // * If the user has set a mode explicitly with the rc_mode option,
>+    //   use it and fail if it is not available.
>+    // * If an explicit QP option has been set, use CQP.
>+    // * If the codec is CQ-only, use CQP.
>+    // * If the QSCALE avcodec option is set, use CQP.
>+    // * If bitrate and quality are both set, try QVBR.
>+    // * If quality is set, try CQP.
>+    // * If bitrate and maxrate are set and have the same value, try CBR.
>+    // * If a bitrate is set, try VBR, then CBR.
>+    // * If no bitrate is set, try CQP.
>+
>+#define TRY_RC_MODE(mode, fail) do { \
>+        rc_mode = &d3d12va_encode_rc_modes[mode]; \
>+        if (!(rc_mode->d3d12_mode && check_rate_control_support(avctx, rc_mode))) { \
>+            if (fail) { \
>+                av_log(avctx, AV_LOG_ERROR, "Driver does not support %s " \
>+                       "RC mode.\n", rc_mode->name); \
>+                return AVERROR(EINVAL); \
>+            } \
>+            av_log(avctx, AV_LOG_DEBUG, "Driver does not support %s " \
>+                   "RC mode.\n", rc_mode->name); \
>+            rc_mode = NULL; \
>+        } else { \
>+            goto rc_mode_found; \
>+        } \
>+    } while (0)
>+
>+    if (base_ctx->explicit_rc_mode)
>+        TRY_RC_MODE(base_ctx->explicit_rc_mode, 1);
>+
>+    if (base_ctx->explicit_qp)
>+        TRY_RC_MODE(RC_MODE_CQP, 1);
>+
>+    if (ctx->codec->flags & FLAG_CONSTANT_QUALITY_ONLY)
>+        TRY_RC_MODE(RC_MODE_CQP, 1);
>+
>+    if (avctx->flags & AV_CODEC_FLAG_QSCALE)
>+        TRY_RC_MODE(RC_MODE_CQP, 1);
>+
>+    if (avctx->bit_rate > 0 && avctx->global_quality > 0)
>+        TRY_RC_MODE(RC_MODE_QVBR, 0);
>+
>+    if (avctx->global_quality > 0) {
>+        TRY_RC_MODE(RC_MODE_CQP, 0);
>+    }
>+
>+    if (avctx->bit_rate > 0 && avctx->rc_max_rate == avctx->bit_rate)
>+        TRY_RC_MODE(RC_MODE_CBR, 0);
>+
>+    if (avctx->bit_rate > 0) {
>+        TRY_RC_MODE(RC_MODE_VBR, 0);
>+        TRY_RC_MODE(RC_MODE_CBR, 0);
>+    } else {
>+        TRY_RC_MODE(RC_MODE_CQP, 0);
>+    }
>+
>+    av_log(avctx, AV_LOG_ERROR, "Driver does not support any "
>+           "RC mode compatible with selected options.\n");
>+    return AVERROR(EINVAL);
>+
>+rc_mode_found:
>+    if (rc_mode->bitrate) {
>+        if (avctx->bit_rate <= 0) {
>+            av_log(avctx, AV_LOG_ERROR, "Bitrate must be set for %s "
>+                   "RC mode.\n", rc_mode->name);
>+            return AVERROR(EINVAL);
>+        }
>+
>+        if (rc_mode->maxrate) {
>+            if (avctx->rc_max_rate > 0) {
>+                if (avctx->rc_max_rate < avctx->bit_rate) {
>+                    av_log(avctx, AV_LOG_ERROR, "Invalid bitrate settings: "
>+                           "bitrate (%"PRId64") must not be greater than "
>+                           "maxrate (%"PRId64").\n", avctx->bit_rate,
>+                           avctx->rc_max_rate);
>+                    return AVERROR(EINVAL);
>+                }
>+                rc_target_bitrate = avctx->bit_rate;
>+                rc_peak_bitrate   = avctx->rc_max_rate;
>+            } else {
>+                // We only have a target bitrate, but this mode requires
>+                // that a maximum rate be supplied as well.  Since the
>+                // user does not want this to be a constraint, arbitrarily
>+                // pick a maximum rate of double the target rate.
>+                rc_target_bitrate = avctx->bit_rate;
>+                rc_peak_bitrate   = 2 * avctx->bit_rate;
>+            }
>+        } else {
>+            if (avctx->rc_max_rate > avctx->bit_rate) {
>+                av_log(avctx, AV_LOG_WARNING, "Max bitrate is ignored "
>+                       "in %s RC mode.\n", rc_mode->name);
>+            }
>+            rc_target_bitrate = avctx->bit_rate;
>+            rc_peak_bitrate   = 0;
>+        }
>+    } else {
>+        rc_target_bitrate = 0;
>+        rc_peak_bitrate   = 0;
>+    }
>+
>+    if (rc_mode->quality) {
>+        if (base_ctx->explicit_qp) {
>+            rc_quality = base_ctx->explicit_qp;
>+        } else if (avctx->global_quality > 0) {
>+            rc_quality = avctx->global_quality;
>+        } else {
>+            rc_quality = ctx->codec->default_quality;
>+            av_log(avctx, AV_LOG_WARNING, "No quality level set; "
>+                   "using default (%d).\n", rc_quality);
>+        }
>+    } else {
>+        rc_quality = 0;
>+    }
>+
>+    if (rc_mode->hrd) {
>+        if (avctx->rc_buffer_size)
>+            hrd_buffer_size = avctx->rc_buffer_size;
>+        else if (avctx->rc_max_rate > 0)
>+            hrd_buffer_size = avctx->rc_max_rate;
>+        else
>+            hrd_buffer_size = avctx->bit_rate;
>+        if (avctx->rc_initial_buffer_occupancy) {
>+            if (avctx->rc_initial_buffer_occupancy > hrd_buffer_size) {
>+                av_log(avctx, AV_LOG_ERROR, "Invalid RC buffer settings: "
>+                       "must have initial buffer size (%d) <= "
>+                       "buffer size (%"PRId64").\n",
>+                       avctx->rc_initial_buffer_occupancy, hrd_buffer_size);
>+                return AVERROR(EINVAL);
>+            }
>+            hrd_initial_buffer_fullness = avctx->rc_initial_buffer_occupancy;
>+        } else {
>+            hrd_initial_buffer_fullness = hrd_buffer_size * 3 / 4;
>+        }
>+    } else {
>+        if (avctx->rc_buffer_size || avctx->rc_initial_buffer_occupancy) {
>+            av_log(avctx, AV_LOG_WARNING, "Buffering settings are ignored "
>+                   "in %s RC mode.\n", rc_mode->name);
>+        }
>+
>+        hrd_buffer_size             = 0;
>+        hrd_initial_buffer_fullness = 0;
>+    }
>+
>+    if (rc_target_bitrate           > UINT32_MAX ||
>+        hrd_buffer_size             > UINT32_MAX ||
>+        hrd_initial_buffer_fullness > UINT32_MAX) {
>+        av_log(avctx, AV_LOG_ERROR, "RC parameters of 2^32 or "
>+               "greater are not supported by D3D12.\n");
>+        return AVERROR(EINVAL);
>+    }
>+
>+    base_ctx->rc_quality  = rc_quality;
>+
>+    av_log(avctx, AV_LOG_VERBOSE, "RC mode: %s.\n", rc_mode->name);
>+
>+    if (rc_mode->quality)
>+        av_log(avctx, AV_LOG_VERBOSE, "RC quality: %d.\n", rc_quality);
>+
>+    if (rc_mode->hrd) {
>+        av_log(avctx, AV_LOG_VERBOSE, "RC buffer: %"PRId64" bits, "
>+               "initial fullness %"PRId64" bits.\n",
>+               hrd_buffer_size, hrd_initial_buffer_fullness);
>+    }
>+
>+    if (avctx->framerate.num > 0 && avctx->framerate.den > 0)
>+        av_reduce(&fr_num, &fr_den,
>+                  avctx->framerate.num, avctx->framerate.den, 65535);
>+    else
>+        av_reduce(&fr_num, &fr_den,
>+                  avctx->time_base.den, avctx->time_base.num, 65535);
>+
>+    av_log(avctx, AV_LOG_VERBOSE, "RC framerate: %d/%d (%.2f fps).\n",
>+           fr_num, fr_den, (double)fr_num / fr_den);
>+
>+    ctx->rc.Flags                       = D3D12_VIDEO_ENCODER_RATE_CONTROL_FLAG_NONE;
>+    ctx->rc.TargetFrameRate.Numerator   = fr_num;
>+    ctx->rc.TargetFrameRate.Denominator = fr_den;
>+    ctx->rc.Mode                        = rc_mode->d3d12_mode;
>+
>+    switch (rc_mode->mode) {
>+        case RC_MODE_CQP:
>+            // cqp ConfigParams will be updated in ctx->codec->configure.
>+            break;
>+
>+        case RC_MODE_CBR: {
>+            D3D12_VIDEO_ENCODER_RATE_CONTROL_CBR *cbr_ctl;
>+
>+            ctx->rc.ConfigParams.DataSize = sizeof(D3D12_VIDEO_ENCODER_RATE_CONTROL_CBR);
>+            cbr_ctl = av_mallocz(ctx->rc.ConfigParams.DataSize);
>+            if (!cbr_ctl)
>+                return AVERROR(ENOMEM);
>+
>+            cbr_ctl->TargetBitRate      = rc_target_bitrate;
>+            cbr_ctl->VBVCapacity        = hrd_buffer_size;
>+            cbr_ctl->InitialVBVFullness = hrd_initial_buffer_fullness;
>+            ctx->rc.Flags |= D3D12_VIDEO_ENCODER_RATE_CONTROL_FLAG_ENABLE_VBV_SIZES;
>+
>+            if (avctx->qmin > 0 || avctx->qmax > 0) {
>+                cbr_ctl->MinQP = avctx->qmin;
>+                cbr_ctl->MaxQP = avctx->qmax;
>+                ctx->rc.Flags |= D3D12_VIDEO_ENCODER_RATE_CONTROL_FLAG_ENABLE_QP_RANGE;
>+            }
>+
>+            ctx->rc.ConfigParams.pConfiguration_CBR = cbr_ctl;
>+            break;
>+        }
>+
>+        case RC_MODE_VBR: {
>+            D3D12_VIDEO_ENCODER_RATE_CONTROL_VBR *vbr_ctl;
>+
>+            ctx->rc.ConfigParams.DataSize = sizeof(D3D12_VIDEO_ENCODER_RATE_CONTROL_VBR);
>+            vbr_ctl = av_mallocz(ctx->rc.ConfigParams.DataSize);
>+            if (!vbr_ctl)
>+                return AVERROR(ENOMEM);
>+
>+            vbr_ctl->TargetAvgBitRate   = rc_target_bitrate;
>+            vbr_ctl->PeakBitRate        = rc_peak_bitrate;
>+            vbr_ctl->VBVCapacity        = hrd_buffer_size;
>+            vbr_ctl->InitialVBVFullness = hrd_initial_buffer_fullness;
>+            ctx->rc.Flags |= D3D12_VIDEO_ENCODER_RATE_CONTROL_FLAG_ENABLE_VBV_SIZES;
>+
>+            if (avctx->qmin > 0 || avctx->qmax > 0) {
>+                vbr_ctl->MinQP = avctx->qmin;
>+                vbr_ctl->MaxQP = avctx->qmax;
>+                ctx->rc.Flags |= D3D12_VIDEO_ENCODER_RATE_CONTROL_FLAG_ENABLE_QP_RANGE;
>+            }
>+
>+            ctx->rc.ConfigParams.pConfiguration_VBR = vbr_ctl;
>+            break;
>+        }
>+
>+        case RC_MODE_QVBR: {
>+            D3D12_VIDEO_ENCODER_RATE_CONTROL_QVBR *qvbr_ctl;
>+
>+            ctx->rc.ConfigParams.DataSize = sizeof(D3D12_VIDEO_ENCODER_RATE_CONTROL_QVBR);
>+            qvbr_ctl = av_mallocz(ctx->rc.ConfigParams.DataSize);
>+            if (!qvbr_ctl)
>+                return AVERROR(ENOMEM);
>+
>+            qvbr_ctl->TargetAvgBitRate      = rc_target_bitrate;
>+            qvbr_ctl->PeakBitRate           = rc_peak_bitrate;
>+            qvbr_ctl->ConstantQualityTarget = rc_quality;
>+
>+            if (avctx->qmin > 0 || avctx->qmax > 0) {
>+                qvbr_ctl->MinQP = avctx->qmin;
>+                qvbr_ctl->MaxQP = avctx->qmax;
>+                ctx->rc.Flags |= D3D12_VIDEO_ENCODER_RATE_CONTROL_FLAG_ENABLE_QP_RANGE;
>+            }
>+
>+            ctx->rc.ConfigParams.pConfiguration_QVBR = qvbr_ctl;
>+            break;
>+        }
>+
>+        default:
>+            break;
>+    }
>+    return 0;
>+}
>+
>+static int d3d12va_encode_init_gop_structure(AVCodecContext *avctx)
>+{
>+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
>+    D3D12VAEncodeContext     *ctx = avctx->priv_data;
>+    uint32_t ref_l0, ref_l1;
>+    int err;
>+    HRESULT hr;
>+    D3D12_FEATURE_DATA_VIDEO_ENCODER_CODEC_PICTURE_CONTROL_SUPPORT support;
>+    union {
>+        D3D12_VIDEO_ENCODER_CODEC_PICTURE_CONTROL_SUPPORT_H264 h264;
>+        D3D12_VIDEO_ENCODER_CODEC_PICTURE_CONTROL_SUPPORT_HEVC hevc;
>+    } codec_support;
>+
>+    support.NodeIndex = 0;
>+    support.Codec     = ctx->codec->d3d12_codec;
>+    support.Profile   = ctx->profile->d3d12_profile;
>+
>+    switch (ctx->codec->d3d12_codec) {
>+        case D3D12_VIDEO_ENCODER_CODEC_H264:
>+            support.PictureSupport.DataSize = sizeof(codec_support.h264);
>+            support.PictureSupport.pH264Support = &codec_support.h264;
>+            break;
>+
>+        case D3D12_VIDEO_ENCODER_CODEC_HEVC:
>+            support.PictureSupport.DataSize = sizeof(codec_support.hevc);
>+            support.PictureSupport.pHEVCSupport = &codec_support.hevc;
>+            break;
>+    }
>+
>+    hr = ID3D12VideoDevice3_CheckFeatureSupport(ctx->video_device3, D3D12_FEATURE_VIDEO_ENCODER_CODEC_PICTURE_CONTROL_SUPPORT,
>+             &support, sizeof(support));
>+    if (FAILED(hr))
>+        return AVERROR(EINVAL);
>+
>+    if (support.IsSupported) {
>+        switch (ctx->codec->d3d12_codec) {
>+            case D3D12_VIDEO_ENCODER_CODEC_H264:
>+                ref_l0 = FFMIN(support.PictureSupport.pH264Support->MaxL0ReferencesForP,
>+                               support.PictureSupport.pH264Support->MaxL1ReferencesForB);
>+                ref_l1 = support.PictureSupport.pH264Support->MaxL1ReferencesForB;
>+                break;
>+
>+            case D3D12_VIDEO_ENCODER_CODEC_HEVC:
>+                ref_l0 = FFMIN(support.PictureSupport.pHEVCSupport->MaxL0ReferencesForP,
>+                               support.PictureSupport.pHEVCSupport->MaxL1ReferencesForB);
>+                ref_l1 = support.PictureSupport.pHEVCSupport->MaxL1ReferencesForB;
>+                break;
>+        }
>+    } else {
>+        ref_l0 = ref_l1 = 0;
>+    }
>+
>+    if (ref_l0 > 0 && ref_l1 > 0 && ctx->bi_not_empty) {
>+        base_ctx->p_to_gpb = 1;
>+        av_log(avctx, AV_LOG_VERBOSE, "Driver does not support P-frames, "
>+               "replacing them with B-frames.\n");
>+    }
>+
>+    err = ff_hw_base_init_gop_structure(avctx, ref_l0, ref_l1, ctx->codec->flags, 0);
>+    if (err < 0)
>+        return err;
>+
>+    return 0;
>+}
>+
>+static int d3d12va_create_encoder(AVCodecContext *avctx)
>+{
>+    HWBaseEncodeContext    *base_ctx     = avctx->priv_data;
>+    D3D12VAEncodeContext   *ctx          = avctx->priv_data;
>+    AVD3D12VAFramesContext *frames_hwctx = base_ctx->input_frames->hwctx;
>+    HRESULT hr;
>+
>+    D3D12_VIDEO_ENCODER_DESC desc = {
>+        .NodeMask                     = 0,
>+        .Flags                        = D3D12_VIDEO_ENCODER_FLAG_NONE,
>+        .EncodeCodec                  = ctx->codec->d3d12_codec,
>+        .EncodeProfile                = ctx->profile->d3d12_profile,
>+        .InputFormat                  = frames_hwctx->format,
>+        .CodecConfiguration           = ctx->codec_conf,
>+        .MaxMotionEstimationPrecision = D3D12_VIDEO_ENCODER_MOTION_ESTIMATION_PRECISION_MODE_MAXIMUM,
>+    };
>+
>+    hr = ID3D12VideoDevice3_CreateVideoEncoder(ctx->video_device3, &desc, &IID_ID3D12VideoEncoder,
>+                                               (void **)&ctx->encoder);
>+    if (FAILED(hr)) {
>+        av_log(avctx, AV_LOG_ERROR, "Failed to create encoder.\n");
>+        return AVERROR(EINVAL);
>+    }
>+
>+    return 0;
>+}
>+
>+static int d3d12va_create_encoder_heap(AVCodecContext* avctx)
>+{
>+    D3D12VAEncodeContext *ctx = avctx->priv_data;
>+    HRESULT hr;
>+
>+    D3D12_VIDEO_ENCODER_HEAP_DESC desc = {
>+        .NodeMask             = 0,
>+        .Flags                = D3D12_VIDEO_ENCODER_FLAG_NONE,
>+        .EncodeCodec          = ctx->codec->d3d12_codec,
>+        .EncodeProfile        = ctx->profile->d3d12_profile,
>+        .EncodeLevel          = ctx->level,
>+        .ResolutionsListCount = 1,
>+        .pResolutionList      = &ctx->resolution,
>+    };
>+
>+    hr = ID3D12VideoDevice3_CreateVideoEncoderHeap(ctx->video_device3, &desc,
>+                                                   &IID_ID3D12VideoEncoderHeap, (void **)&ctx->encoder_heap);
>+    if (FAILED(hr)) {
>+        av_log(avctx, AV_LOG_ERROR, "Failed to create encoder heap.\n");
>+        return AVERROR(EINVAL);
>+    }
>+
>+    return 0;
>+}
>+
>+static void d3d12va_encode_free_buffer(void *opaque, uint8_t *data)
>+{
>+    ID3D12Resource *pResource;
>+
>+    pResource = (ID3D12Resource *)data;
>+    D3D12_OBJECT_RELEASE(pResource);
>+}
>+
>+static AVBufferRef *d3d12va_encode_alloc_output_buffer(void *opaque, size_t size)
>+{
>+    AVCodecContext     *avctx = opaque;
>+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
>+    D3D12VAEncodeContext     *ctx = avctx->priv_data;
>+    ID3D12Resource *pResource = NULL;
>+    HRESULT hr;
>+    AVBufferRef *ref;
>+    D3D12_HEAP_PROPERTIES heap_props;
>+    D3D12_HEAP_TYPE heap_type = D3D12_HEAP_TYPE_READBACK;
>+
>+    D3D12_RESOURCE_DESC desc = {
>+        .Dimension        = D3D12_RESOURCE_DIMENSION_BUFFER,
>+        .Alignment        = 0,
>+        .Width            = FFALIGN(3 * base_ctx->surface_width * base_ctx->surface_height + (1 << 16),
>+                                    D3D12_TEXTURE_DATA_PLACEMENT_ALIGNMENT),
>+        .Height           = 1,
>+        .DepthOrArraySize = 1,
>+        .MipLevels        = 1,
>+        .Format           = DXGI_FORMAT_UNKNOWN,
>+        .SampleDesc       = { .Count = 1, .Quality = 0 },
>+        .Layout           = D3D12_TEXTURE_LAYOUT_ROW_MAJOR,
>+        .Flags            = D3D12_RESOURCE_FLAG_NONE,
>+    };
>+
>+    ctx->hwctx->device->lpVtbl->GetCustomHeapProperties(ctx->hwctx->device, &heap_props, 0, heap_type);
>+
>+    hr = ID3D12Device_CreateCommittedResource(ctx->hwctx->device, &heap_props, D3D12_HEAP_FLAG_NONE,
>+                                              &desc, D3D12_RESOURCE_STATE_COMMON, NULL, &IID_ID3D12Resource,
>+                                              (void **)&pResource);
>+
>+    if (FAILED(hr)) {
>+        av_log(avctx, AV_LOG_ERROR, "Failed to create d3d12 buffer.\n");
>+        return NULL;
>+    }
>+
>+    ref = av_buffer_create((uint8_t *)(uintptr_t)pResource,
>+                           sizeof(pResource),
>+                           &d3d12va_encode_free_buffer,
>+                           avctx, AV_BUFFER_FLAG_READONLY);
>+    if (!ref) {
>+        D3D12_OBJECT_RELEASE(pResource);
>+        return NULL;
>+    }
>+
>+    return ref;
>+}
>+
>+static int d3d12va_encode_prepare_output_buffers(AVCodecContext *avctx)
>+{
>+    HWBaseEncodeContext *base_ctx      = avctx->priv_data;
>+    D3D12VAEncodeContext *ctx          = avctx->priv_data;
>+    AVD3D12VAFramesContext *frames_ctx = base_ctx->input_frames->hwctx;
>+    HRESULT hr;
>+
>+    ctx->req.NodeIndex               = 0;
>+    ctx->req.Codec                   = ctx->codec->d3d12_codec;
>+    ctx->req.Profile                 = ctx->profile->d3d12_profile;
>+    ctx->req.InputFormat             = frames_ctx->format;
>+    ctx->req.PictureTargetResolution = ctx->resolution;
>+
>+    hr = ID3D12VideoDevice3_CheckFeatureSupport(ctx->video_device3,
>+                                                D3D12_FEATURE_VIDEO_ENCODER_RESOURCE_REQUIREMENTS,
>+                                                &ctx->req, sizeof(ctx->req));
>+    if (FAILED(hr)) {
>+        av_log(avctx, AV_LOG_ERROR, "Failed to check encoder resource
>requirements support.\n");
>+        return AVERROR(EINVAL);
>+    }
>+
>+    if (!ctx->req.IsSupported) {
>+        av_log(avctx, AV_LOG_ERROR, "Encoder resource requirements
>unsupported.\n");
>+        return AVERROR(EINVAL);
>+    }
>+
>+    ctx->output_buffer_pool = av_buffer_pool_init2(sizeof(ID3D12Resource *), avctx,
>+                                                   &d3d12va_encode_alloc_output_buffer, NULL);
>+    if (!ctx->output_buffer_pool)
>+        return AVERROR(ENOMEM);
>+
>+    return 0;
>+}
>+
>+static int d3d12va_encode_create_command_objects(AVCodecContext *avctx)
>+{
>+    D3D12VAEncodeContext *ctx = avctx->priv_data;
>+    ID3D12CommandAllocator *command_allocator = NULL;
>+    int err;
>+    HRESULT hr;
>+
>+    D3D12_COMMAND_QUEUE_DESC queue_desc = {
>+        .Type     = D3D12_COMMAND_LIST_TYPE_VIDEO_ENCODE,
>+        .Priority = 0,
>+        .Flags    = D3D12_COMMAND_QUEUE_FLAG_NONE,
>+        .NodeMask = 0,
>+    };
>+
>+    ctx->allocator_queue = av_fifo_alloc2(D3D12VA_VIDEO_ENC_ASYNC_DEPTH,
>+                                          sizeof(CommandAllocator), AV_FIFO_FLAG_AUTO_GROW);
>+    if (!ctx->allocator_queue)
>+        return AVERROR(ENOMEM);
>+
>+    hr = ID3D12Device_CreateFence(ctx->hwctx->device, 0, D3D12_FENCE_FLAG_NONE,
>+                                  &IID_ID3D12Fence, (void **)&ctx->sync_ctx.fence);
>+    if (FAILED(hr)) {
>+        av_log(avctx, AV_LOG_ERROR, "Failed to create fence(%lx)\n", (long)hr);
>+        err = AVERROR_UNKNOWN;
>+        goto fail;
>+    }
>+
>+    ctx->sync_ctx.event = CreateEvent(NULL, FALSE, FALSE, NULL);
>+    if (!ctx->sync_ctx.event) {
>+        err = AVERROR_UNKNOWN;
>+        goto fail;
>+    }
>+
>+    err = d3d12va_get_valid_command_allocator(avctx, &command_allocator);
>+    if (err < 0)
>+        goto fail;
>+
>+    hr = ID3D12Device_CreateCommandQueue(ctx->hwctx->device, &queue_desc,
>+                                         &IID_ID3D12CommandQueue, (void **)&ctx->command_queue);
>+    if (FAILED(hr)) {
>+        av_log(avctx, AV_LOG_ERROR, "Failed to create command queue(%lx)\n",
>(long)hr);
>+        err = AVERROR_UNKNOWN;
>+        goto fail;
>+    }
>+
>+    hr = ID3D12Device_CreateCommandList(ctx->hwctx->device, 0, queue_desc.Type,
>+                                        command_allocator, NULL, &IID_ID3D12CommandList,
>+                                        (void **)&ctx->command_list);
>+    if (FAILED(hr)) {
>+        av_log(avctx, AV_LOG_ERROR, "Failed to create command list(%lx)\n",
>(long)hr);
>+        err = AVERROR_UNKNOWN;
>+        goto fail;
>+    }
>+
>+    hr = ID3D12VideoEncodeCommandList2_Close(ctx->command_list);
>+    if (FAILED(hr)) {
>+        av_log(avctx, AV_LOG_ERROR, "Failed to close the command list(%lx)\n",
>(long)hr);
>+        err = AVERROR_UNKNOWN;
>+        goto fail;
>+    }
>+
>+    ID3D12CommandQueue_ExecuteCommandLists(ctx->command_queue, 1, (ID3D12CommandList **)&ctx->command_list);
>+
>+    err = d3d12va_sync_with_gpu(avctx);
>+    if (err < 0)
>+        goto fail;
>+
>+    err = d3d12va_discard_command_allocator(avctx, command_allocator, ctx->sync_ctx.fence_value);
>+    if (err < 0)
>+        goto fail;
>+
>+    return 0;
>+
>+fail:
>+    D3D12_OBJECT_RELEASE(command_allocator);
>+    return err;
>+}
>+
>+static int d3d12va_encode_create_recon_frames(AVCodecContext *avctx)
>+{
>+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
>+    AVD3D12VAFramesContext *hwctx;
>+    enum AVPixelFormat recon_format;
>+    int err;
>+
>+    err = ff_hw_base_get_recon_format(avctx, NULL, &recon_format);
>+    if (err < 0)
>+        return err;
>+
>+    base_ctx->recon_frames_ref = av_hwframe_ctx_alloc(base_ctx->device_ref);
>+    if (!base_ctx->recon_frames_ref)
>+        return AVERROR(ENOMEM);
>+
>+    base_ctx->recon_frames = (AVHWFramesContext *)base_ctx->recon_frames_ref->data;
>+    hwctx = (AVD3D12VAFramesContext *)base_ctx->recon_frames->hwctx;
>+
>+    base_ctx->recon_frames->format    = AV_PIX_FMT_D3D12;
>+    base_ctx->recon_frames->sw_format = recon_format;
>+    base_ctx->recon_frames->width     = base_ctx->surface_width;
>+    base_ctx->recon_frames->height    = base_ctx->surface_height;
>+
>+    hwctx->flags = D3D12_RESOURCE_FLAG_VIDEO_ENCODE_REFERENCE_ONLY |
>+                   D3D12_RESOURCE_FLAG_DENY_SHADER_RESOURCE;
>+
>+    err = av_hwframe_ctx_init(base_ctx->recon_frames_ref);
>+    if (err < 0) {
>+        av_log(avctx, AV_LOG_ERROR, "Failed to initialise reconstructed "
>+               "frame context: %d.\n", err);
>+        return err;
>+    }
>+
>+    return 0;
>+}
>+
>+static const HWEncodePictureOperation d3d12va_type = {
>+    .alloc  = &d3d12va_encode_alloc,
>+
>+    .issue  = &d3d12va_encode_issue,
>+
>+    .output = &d3d12va_encode_output,
>+
>+    .free   = &d3d12va_encode_free,
>+};
>+
>+int ff_d3d12va_encode_init(AVCodecContext *avctx)
>+{
>+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
>+    D3D12VAEncodeContext     *ctx = avctx->priv_data;
>+    D3D12_FEATURE_DATA_VIDEO_FEATURE_AREA_SUPPORT support = { 0 };
>+    int err;
>+    HRESULT hr;
>+
>+    err = ff_hw_base_encode_init(avctx);
>+    if (err < 0)
>+        goto fail;
>+
>+    base_ctx->op = &d3d12va_type;
>+
>+    ctx->hwctx = base_ctx->device->hwctx;
>+
>+    ctx->resolution.Width  = base_ctx->input_frames->width;
>+    ctx->resolution.Height = base_ctx->input_frames->height;
>+
>+    hr = ID3D12Device_QueryInterface(ctx->hwctx->device, &IID_ID3D12Device3, (void **)&ctx->device3);
>+    if (FAILED(hr)) {
>+        av_log(avctx, AV_LOG_ERROR, "ID3D12Device3 interface is not
>supported.\n");
>+        err = AVERROR_UNKNOWN;
>+        goto fail;
>+    }
>+
>+    hr = ID3D12Device3_QueryInterface(ctx->device3, &IID_ID3D12VideoDevice3, (void **)&ctx->video_device3);
>+    if (FAILED(hr)) {
>+        av_log(avctx, AV_LOG_ERROR, "ID3D12VideoDevice3 interface is not
>supported.\n");
>+        err = AVERROR_UNKNOWN;
>+        goto fail;
>+    }
>+
>+    if (FAILED(ID3D12VideoDevice3_CheckFeatureSupport(ctx->video_device3, D3D12_FEATURE_VIDEO_FEATURE_AREA_SUPPORT,
>+                                                      &support, sizeof(support)))
>+        && !support.VideoEncodeSupport) {
>+        av_log(avctx, AV_LOG_ERROR, "D3D12 video device has no video encoder support.\n");
>+        err = AVERROR(EINVAL);
>+        goto fail;
>+    }
>+
>+    err = d3d12va_encode_set_profile(avctx);
>+    if (err < 0)
>+        goto fail;
>+
>+    if (ctx->codec->get_encoder_caps) {
>+        err = ctx->codec->get_encoder_caps(avctx);
>+        if (err < 0)
>+            goto fail;
>+    }
>+
>+    err = d3d12va_encode_init_rate_control(avctx);
>+    if (err < 0)
>+        goto fail;
>+
>+    err = d3d12va_encode_init_gop_structure(avctx);
>+    if (err < 0)
>+        goto fail;
>+
>+    if (!(ctx->codec->flags & FLAG_SLICE_CONTROL) && avctx->slices > 0) {
>+        av_log(avctx, AV_LOG_WARNING, "Multiple slices were requested "
>+               "but this codec does not support controlling slices.\n");
>+    }
>+
>+    err = d3d12va_encode_create_command_objects(avctx);
>+    if (err < 0)
>+        goto fail;
>+
>+    err = d3d12va_encode_create_recon_frames(avctx);
>+    if (err < 0)
>+        goto fail;
>+
>+    err = d3d12va_encode_prepare_output_buffers(avctx);
>+    if (err < 0)
>+        goto fail;
>+
>+    if (ctx->codec->configure) {
>+        err = ctx->codec->configure(avctx);
>+        if (err < 0)
>+            goto fail;
>+    }
>+
>+    if (ctx->codec->init_sequence_params) {
>+        err = ctx->codec->init_sequence_params(avctx);
>+        if (err < 0) {
>+            av_log(avctx, AV_LOG_ERROR, "Codec sequence initialisation "
>+                   "failed: %d.\n", err);
>+            goto fail;
>+        }
>+    }
>+
>+    if (ctx->codec->set_level) {
>+        err = ctx->codec->set_level(avctx);
>+        if (err < 0)
>+            goto fail;
>+    }
>+
>+    base_ctx->output_delay = base_ctx->b_per_p;
>+    base_ctx->decode_delay = base_ctx->max_b_depth;
>+
>+    err = d3d12va_create_encoder(avctx);
>+    if (err < 0)
>+        goto fail;
>+
>+    err = d3d12va_create_encoder_heap(avctx);
>+    if (err < 0)
>+        goto fail;
>+
>+    base_ctx->async_encode = 1;
>+    base_ctx->encode_fifo = av_fifo_alloc2(base_ctx->async_depth,
>+                                           sizeof(D3D12VAEncodePicture *), 0);
>+    if (!base_ctx->encode_fifo)
>+        return AVERROR(ENOMEM);
>+
>+    return 0;
>+
>+fail:
>+    return err;
>+}
>+
>+int ff_d3d12va_encode_close(AVCodecContext *avctx)
>+{
>+    int num_allocator = 0;
>+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
>+    D3D12VAEncodeContext     *ctx = avctx->priv_data;
>+    HWBaseEncodePicture *pic, *next;
>+    CommandAllocator allocator;
>+
>+    if (!base_ctx->frame)
>+        return 0;
>+
>+    for (pic = base_ctx->pic_start; pic; pic = next) {
>+        next = pic->next;
>+        d3d12va_encode_free(avctx, pic);
>+    }
>+
>+    d3d12va_encode_free_rc_params(avctx);
>+
>+    av_buffer_pool_uninit(&ctx->output_buffer_pool);
>+
>+    D3D12_OBJECT_RELEASE(ctx->command_list);
>+    D3D12_OBJECT_RELEASE(ctx->command_queue);
>+
>+    if (ctx->allocator_queue) {
>+        while (av_fifo_read(ctx->allocator_queue, &allocator, 1) >= 0) {
>+            num_allocator++;
>+            D3D12_OBJECT_RELEASE(allocator.command_allocator);
>+        }
>+
>+        av_log(avctx, AV_LOG_VERBOSE, "Total number of command allocators
>reused: %d\n", num_allocator);
>+    }
>+
>+    av_fifo_freep2(&ctx->allocator_queue);
>+
>+    D3D12_OBJECT_RELEASE(ctx->sync_ctx.fence);
>+    if (ctx->sync_ctx.event)
>+        CloseHandle(ctx->sync_ctx.event);
>+
>+    D3D12_OBJECT_RELEASE(ctx->encoder_heap);
>+    D3D12_OBJECT_RELEASE(ctx->encoder);
>+    D3D12_OBJECT_RELEASE(ctx->video_device3);
>+    D3D12_OBJECT_RELEASE(ctx->device3);
>+
>+    ff_hw_base_encode_close(avctx);
>+
>+    return 0;
>+}
>diff --git a/libavcodec/d3d12va_encode.h b/libavcodec/d3d12va_encode.h
>new file mode 100644
>index 0000000000..10e2d87035
>--- /dev/null
>+++ b/libavcodec/d3d12va_encode.h
>@@ -0,0 +1,321 @@
>+/*
>+ * Direct3D 12 HW acceleration video encoder
>+ *
>+ * Copyright (c) 2024 Intel Corporation
>+ *
>+ * This file is part of FFmpeg.
>+ *
>+ * FFmpeg is free software; you can redistribute it and/or
>+ * modify it under the terms of the GNU Lesser General Public
>+ * License as published by the Free Software Foundation; either
>+ * version 2.1 of the License, or (at your option) any later version.
>+ *
>+ * FFmpeg is distributed in the hope that it will be useful,
>+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
>+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
>+ * Lesser General Public License for more details.
>+ *
>+ * You should have received a copy of the GNU Lesser General Public
>+ * License along with FFmpeg; if not, write to the Free Software
>+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
>+ */
>+
>+#ifndef AVCODEC_D3D12VA_ENCODE_H
>+#define AVCODEC_D3D12VA_ENCODE_H
>+
>+#include "libavutil/fifo.h"
>+#include "libavutil/hwcontext.h"
>+#include "libavutil/hwcontext_d3d12va_internal.h"
>+#include "libavutil/hwcontext_d3d12va.h"
>+#include "avcodec.h"
>+#include "internal.h"
>+#include "hwconfig.h"
>+#include "hw_base_encode.h"
>+
>+struct D3D12VAEncodeType;
>+
>+extern const AVCodecHWConfigInternal *const ff_d3d12va_encode_hw_configs[];
>+
>+#define MAX_PARAM_BUFFER_SIZE 4096
>+#define D3D12VA_VIDEO_ENC_ASYNC_DEPTH 8
>+
>+typedef struct D3D12VAEncodePicture {
>+    HWBaseEncodePicture base;
>+
>+    int             header_size;
>+
>+    AVD3D12VAFrame *input_surface;
>+    AVD3D12VAFrame *recon_surface;
>+
>+    AVBufferRef    *output_buffer_ref;
>+    ID3D12Resource *output_buffer;
>+
>+    ID3D12Resource *encoded_metadata;
>+    ID3D12Resource *resolved_metadata;
>+
>+    D3D12_VIDEO_ENCODER_PICTURE_CONTROL_CODEC_DATA pic_ctl;
>+
>+    int             fence_value;
>+} D3D12VAEncodePicture;
>+
>+typedef struct D3D12VAEncodeProfile {
>+    /**
>+     * lavc profile value (AV_PROFILE_*).
>+     */
>+    int       av_profile;
>+
>+    /**
>+     * Supported bit depth.
>+     */
>+    int       depth;
>+
>+    /**
>+     * Number of components.
>+     */
>+    int       nb_components;
>+
>+    /**
>+     * Chroma subsampling in width dimension.
>+     */
>+    int       log2_chroma_w;
>+
>+    /**
>+     * Chroma subsampling in height dimension.
>+     */
>+    int       log2_chroma_h;
>+
>+    /**
>+     * D3D12 profile value.
>+     */
>+    D3D12_VIDEO_ENCODER_PROFILE_DESC d3d12_profile;
>+} D3D12VAEncodeProfile;
>+
>+enum {
>+    RC_MODE_AUTO,
>+    RC_MODE_CQP,
>+    RC_MODE_CBR,
>+    RC_MODE_VBR,
>+    RC_MODE_QVBR,
>+    RC_MODE_MAX = RC_MODE_QVBR,
>+};
>+
>+
>+typedef struct D3D12VAEncodeRCMode {
>+    /**
>+     * Mode from above enum (RC_MODE_*).
>+     */
>+    int mode;
>+
>+    /**
>+     * Name.
>+     */
>+    const char *name;
>+
>+    /**
>+     * Uses bitrate parameters.
>+     */
>+    int bitrate;
>+
>+    /**
>+     * Supports maxrate distinct from bitrate.
>+     */
>+    int maxrate;
>+
>+    /**
>+     * Uses quality value.
>+     */
>+    int quality;
>+
>+    /**
>+     * Supports HRD/VBV parameters.
>+     */
>+    int hrd;
>+
>+    /**
>+     * Supported by D3D12 HW.
>+     */
>+    int supported;
>+
>+    /**
>+     * D3D12 mode value.
>+     */
>+    D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE d3d12_mode;
>+} D3D12VAEncodeRCMode;
>+
>+typedef struct D3D12VAEncodeContext {
>+    HWBaseEncodeContext base;
>+
>+    /**
>+     * Codec-specific hooks.
>+     */
>+    const struct D3D12VAEncodeType *codec;
>+
>+    /**
>+     * Chosen encoding profile details.
>+     */
>+    const D3D12VAEncodeProfile *profile;
>+
>+    AVD3D12VADeviceContext *hwctx;
>+
>+    /**
>+     * ID3D12Device3 interface.
>+     */
>+    ID3D12Device3 *device3;
>+
>+    /**
>+     * ID3D12VideoDevice3 interface.
>+     */
>+    ID3D12VideoDevice3 *video_device3;
>+
>+    /**
>+     * Pool of (reusable) bitstream output buffers.
>+     */
>+    AVBufferPool   *output_buffer_pool;
>+
>+    /**
>+     * D3D12 video encoder.
>+     */
>+    AVBufferRef *encoder_ref;
>+
>+    ID3D12VideoEncoder *encoder;
>+
>+    /**
>+     * D3D12 video encoder heap.
>+     */
>+    ID3D12VideoEncoderHeap *encoder_heap;
>+
>+    /**
>+     * A cached queue for reusing the D3D12 command allocators.
>+     *
>+     * @see https://learn.microsoft.com/en-us/windows/win32/direct3d12/recording-command-lists-and-bundles#id3d12commandallocator
>+     */
>+    AVFifo *allocator_queue;
>+
>+    /**
>+     * D3D12 command queue.
>+     */
>+    ID3D12CommandQueue *command_queue;
>+
>+    /**
>+     * D3D12 video encode command list.
>+     */
>+    ID3D12VideoEncodeCommandList2 *command_list;
>+
>+    /**
>+     * The sync context used to sync command queue.
>+     */
>+    AVD3D12VASyncContext sync_ctx;
>+
>+    /**
>+     * The bi_not_empty feature.
>+     */
>+    int bi_not_empty;
>+
>+    /**
>+     * D3D12_FEATURE structures.
>+     */
>+    D3D12_FEATURE_DATA_VIDEO_ENCODER_RESOURCE_REQUIREMENTS req;
>+
>+    D3D12_FEATURE_DATA_VIDEO_ENCODER_RESOLUTION_SUPPORT_LIMITS res_limits;
>+
>+    /**
>+     * D3D12_VIDEO_ENCODER structures.
>+     */
>+    D3D12_VIDEO_ENCODER_PICTURE_RESOLUTION_DESC resolution;
>+
>+    D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION codec_conf;
>+
>+    D3D12_VIDEO_ENCODER_RATE_CONTROL rc;
>+
>+    D3D12_VIDEO_ENCODER_SEQUENCE_GOP_STRUCTURE gop;
>+
>+    D3D12_VIDEO_ENCODER_LEVEL_SETTING level;
>+} D3D12VAEncodeContext;
>+
>+typedef struct D3D12VAEncodeType {
>+    /**
>+     * List of supported profiles.
>+     */
>+    const D3D12VAEncodeProfile *profiles;
>+
>+    /**
>+     * D3D12 codec name.
>+     */
>+    D3D12_VIDEO_ENCODER_CODEC d3d12_codec;
>+
>+    /**
>+     * Codec feature flags.
>+     */
>+    int flags;
>+
>+    /**
>+     * Default quality for this codec - used as quantiser or RC quality
>+     * factor depending on RC mode.
>+     */
>+    int default_quality;
>+
>+    /**
>+     * Query codec configuration and determine encode parameters like
>+     * block sizes for surface alignment and slices. If not set, assume
>+     * that all blocks are 16x16 and that surfaces should be aligned to match
>+     * this.
>+     */
>+    int (*get_encoder_caps)(AVCodecContext *avctx);
>+
>+    /**
>+     * Perform any extra codec-specific configuration.
>+     */
>+    int (*configure)(AVCodecContext *avctx);
>+
>+    /**
>+     * Set codec-specific level setting.
>+     */
>+    int (*set_level)(AVCodecContext *avctx);
>+
>+    /**
>+     * The size of any private data structure associated with each
>+     * picture (can be zero if not required).
>+     */
>+    size_t picture_priv_data_size;
>+
>+    /**
>+     * Fill the corresponding parameters.
>+     */
>+    int (*init_sequence_params)(AVCodecContext *avctx);
>+
>+    int (*init_picture_params)(AVCodecContext *avctx,
>+                               D3D12VAEncodePicture *pic);
>+
>+    void (*free_picture_params)(D3D12VAEncodePicture *pic);
>+
>+    /**
>+     * Write the packed header data to the provided buffer.
>+     */
>+    int (*write_sequence_header)(AVCodecContext *avctx,
>+                                 char *data, size_t *data_len);
>+} D3D12VAEncodeType;
>+
>+int ff_d3d12va_encode_init(AVCodecContext *avctx);
>+int ff_d3d12va_encode_close(AVCodecContext *avctx);
>+
>+#define D3D12VA_ENCODE_RC_MODE(name, desc) \
>+    { #name, desc, 0, AV_OPT_TYPE_CONST, { .i64 = RC_MODE_ ## name }, \
>+      0, 0, FLAGS, .unit = "rc_mode" }
>+#define D3D12VA_ENCODE_RC_OPTIONS \
>+    { "rc_mode",\
>+      "Set rate control mode", \
>+      OFFSET(common.base.explicit_rc_mode), AV_OPT_TYPE_INT, \
>+      { .i64 = RC_MODE_AUTO }, RC_MODE_AUTO, RC_MODE_MAX, FLAGS, .unit = "rc_mode" }, \
>+    { "auto", "Choose mode automatically based on other parameters", \
>+      0, AV_OPT_TYPE_CONST, { .i64 = RC_MODE_AUTO }, 0, 0, FLAGS, .unit = "rc_mode" }, \
>+    D3D12VA_ENCODE_RC_MODE(CQP,  "Constant-quality"), \
>+    D3D12VA_ENCODE_RC_MODE(CBR,  "Constant-bitrate"), \
>+    D3D12VA_ENCODE_RC_MODE(VBR,  "Variable-bitrate"), \
>+    D3D12VA_ENCODE_RC_MODE(QVBR, "Quality-defined variable-bitrate")
>+
>+#endif /* AVCODEC_D3D12VA_ENCODE_H */
>diff --git a/libavcodec/d3d12va_encode_hevc.c b/libavcodec/d3d12va_encode_hevc.c
>new file mode 100644
>index 0000000000..aec0d9dcec
>--- /dev/null
>+++ b/libavcodec/d3d12va_encode_hevc.c
>@@ -0,0 +1,957 @@
>+/*
>+ * Direct3D 12 HW acceleration video encoder
>+ *
>+ * Copyright (c) 2024 Intel Corporation
>+ *
>+ * This file is part of FFmpeg.
>+ *
>+ * FFmpeg is free software; you can redistribute it and/or
>+ * modify it under the terms of the GNU Lesser General Public
>+ * License as published by the Free Software Foundation; either
>+ * version 2.1 of the License, or (at your option) any later version.
>+ *
>+ * FFmpeg is distributed in the hope that it will be useful,
>+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
>+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
>+ * Lesser General Public License for more details.
>+ *
>+ * You should have received a copy of the GNU Lesser General Public
>+ * License along with FFmpeg; if not, write to the Free Software
>+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
>+ */
>+#include "libavutil/opt.h"
>+#include "libavutil/common.h"
>+#include "libavutil/pixdesc.h"
>+#include "libavutil/hwcontext_d3d12va_internal.h"
>+
>+#include "avcodec.h"
>+#include "cbs.h"
>+#include "cbs_h265.h"
>+#include "h2645data.h"
>+#include "h265_profile_level.h"
>+#include "codec_internal.h"
>+#include "d3d12va_encode.h"
>+
>+typedef struct D3D12VAEncodeHEVCPicture {
>+    int pic_order_cnt;
>+    int64_t last_idr_frame;
>+} D3D12VAEncodeHEVCPicture;
>+
>+typedef struct D3D12VAEncodeHEVCContext {
>+    D3D12VAEncodeContext common;
>+
>+    // User options.
>+    int qp;
>+    int profile;
>+    int tier;
>+    int level;
>+
>+    // Writer structures.
>+    H265RawVPS   raw_vps;
>+    H265RawSPS   raw_sps;
>+    H265RawPPS   raw_pps;
>+
>+    CodedBitstreamContext *cbc;
>+    CodedBitstreamFragment current_access_unit;
>+} D3D12VAEncodeHEVCContext;
>+
>+typedef struct D3D12VAEncodeHEVCLevel {
>+    int level;
>+    D3D12_VIDEO_ENCODER_LEVELS_HEVC d3d12_level;
>+} D3D12VAEncodeHEVCLevel;
>+
>+static const D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC hevc_config_support_sets[] =
>+{
>+    {
>+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_NONE,
>+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_8x8,
>+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_32x32,
>+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_4x4,
>+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_32x32,
>+        3,
>+        3,
>+    },
>+    {
>+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_NONE,
>+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_8x8,
>+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_32x32,
>+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_4x4,
>+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_32x32,
>+        0,
>+        0,
>+    },
>+    {
>+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_NONE,
>+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_8x8,
>+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_32x32,
>+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_4x4,
>+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_32x32,
>+        2,
>+        2,
>+    },
>+    {
>+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_NONE,
>+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_8x8,
>+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_64x64,
>+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_4x4,
>+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_32x32,
>+        2,
>+        2,
>+    },
>+    {
>+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_NONE,
>+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_8x8,
>+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_64x64,
>+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_4x4,
>+        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_32x32,
>+        4,
>+        4,
>+    },
>+};
>+
>+static const D3D12VAEncodeHEVCLevel hevc_levels[] = {
>+    { 30,  D3D12_VIDEO_ENCODER_LEVELS_HEVC_1  },
>+    { 60,  D3D12_VIDEO_ENCODER_LEVELS_HEVC_2  },
>+    { 63,  D3D12_VIDEO_ENCODER_LEVELS_HEVC_21 },
>+    { 90,  D3D12_VIDEO_ENCODER_LEVELS_HEVC_3  },
>+    { 93,  D3D12_VIDEO_ENCODER_LEVELS_HEVC_31 },
>+    { 120, D3D12_VIDEO_ENCODER_LEVELS_HEVC_4  },
>+    { 123, D3D12_VIDEO_ENCODER_LEVELS_HEVC_41 },
>+    { 150, D3D12_VIDEO_ENCODER_LEVELS_HEVC_5  },
>+    { 153, D3D12_VIDEO_ENCODER_LEVELS_HEVC_51 },
>+    { 156, D3D12_VIDEO_ENCODER_LEVELS_HEVC_52 },
>+    { 180, D3D12_VIDEO_ENCODER_LEVELS_HEVC_6  },
>+    { 183, D3D12_VIDEO_ENCODER_LEVELS_HEVC_61 },
>+    { 186, D3D12_VIDEO_ENCODER_LEVELS_HEVC_62 },
>+};
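
The integer keys in hevc_levels[] are H.265 general_level_idc values, which encode level "major.minor" as 30 * major + 3 * minor (so level 2.1 is 63 and level 6.2 is 186). A standalone sketch of that mapping, for illustration only (the helper name is ours, not part of the patch):

```c
#include <assert.h>

/* Illustrative helper (not in the patch): H.265 general_level_idc for
 * level "major.minor" is 30 * major + 3 * minor. */
static int hevc_level_idc(int major, int minor)
{
    return 30 * major + 3 * minor;
}
```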
>+
>+static const D3D12_VIDEO_ENCODER_PROFILE_HEVC profile_main   = D3D12_VIDEO_ENCODER_PROFILE_HEVC_MAIN;
>+static const D3D12_VIDEO_ENCODER_PROFILE_HEVC profile_main10 = D3D12_VIDEO_ENCODER_PROFILE_HEVC_MAIN10;
>+
>+#define D3D_PROFILE_DESC(name) \
>+    { sizeof(D3D12_VIDEO_ENCODER_PROFILE_HEVC), { .pHEVCProfile = (D3D12_VIDEO_ENCODER_PROFILE_HEVC *)&profile_ ## name } }
>+static const D3D12VAEncodeProfile d3d12va_encode_hevc_profiles[] = {
>+    { AV_PROFILE_HEVC_MAIN,     8, 3, 1, 1, D3D_PROFILE_DESC(main)   },
>+    { AV_PROFILE_HEVC_MAIN_10, 10, 3, 1, 1, D3D_PROFILE_DESC(main10) },
>+    { AV_PROFILE_UNKNOWN },
>+};
>+
>+static uint8_t d3d12va_encode_hevc_map_cusize(D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE cusize)
>+{
>+    switch (cusize) {
>+        case D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_8x8:   return 8;
>+        case D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_16x16: return 16;
>+        case D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_32x32: return 32;
>+        case D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_64x64: return 64;
>+        default: av_assert0(0);
>+    }
>+    return 0;
>+}
>+
>+static uint8_t d3d12va_encode_hevc_map_tusize(D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE tusize)
>+{
>+    switch (tusize) {
>+        case D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_4x4:   return 4;
>+        case D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_8x8:   return 8;
>+        case D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_16x16: return 16;
>+        case D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_32x32: return 32;
>+        default: av_assert0(0);
>+    }
>+    return 0;
>+}
>+
>+static int d3d12va_encode_hevc_write_access_unit(AVCodecContext *avctx,
>+                                                 char *data, size_t *data_len,
>+                                                 CodedBitstreamFragment *au)
>+{
>+    D3D12VAEncodeHEVCContext *priv = avctx->priv_data;
>+    int err;
>+
>+    err = ff_cbs_write_fragment_data(priv->cbc, au);
>+    if (err < 0) {
>+        av_log(avctx, AV_LOG_ERROR, "Failed to write packed header.\n");
>+        return err;
>+    }
>+
>+    if (*data_len < 8 * au->data_size - au->data_bit_padding) {
>+        av_log(avctx, AV_LOG_ERROR, "Access unit too large: "
>+               "%zu < %zu.\n", *data_len,
>+               8 * au->data_size - au->data_bit_padding);
>+        return AVERROR(ENOSPC);
>+    }
>+
>+    memcpy(data, au->data, au->data_size);
>+    *data_len = 8 * au->data_size - au->data_bit_padding;
>+
>+    return 0;
>+}
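
Note that *data_len in the helper above is a length in bits: a fragment of au->data_size bytes with au->data_bit_padding trailing padding bits carries 8 * data_size - padding payload bits, which is both what the overflow check compares against and what is stored back. A minimal sketch of that bookkeeping (the function name is ours, for illustration; not an FFmpeg API):

```c
#include <stddef.h>

/* Illustrative sketch: number of payload bits in a packed-header buffer
 * of data_size bytes whose final byte carries data_bit_padding padding
 * bits, mirroring the bits-vs-bytes accounting in write_access_unit. */
static size_t payload_bits(size_t data_size, size_t data_bit_padding)
{
    return 8 * data_size - data_bit_padding;
}
```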
>+
>+static int d3d12va_encode_hevc_add_nal(AVCodecContext *avctx,
>+                                       CodedBitstreamFragment *au,
>+                                       void *nal_unit)
>+{
>+    H265RawNALUnitHeader *header = nal_unit;
>+    int err;
>+
>+    err = ff_cbs_insert_unit_content(au, -1,
>+                                     header->nal_unit_type, nal_unit, NULL);
>+    if (err < 0) {
>+        av_log(avctx, AV_LOG_ERROR, "Failed to add NAL unit: "
>+               "type = %d.\n", header->nal_unit_type);
>+        return err;
>+    }
>+
>+    return 0;
>+}
>+
>+static int d3d12va_encode_hevc_write_sequence_header(AVCodecContext *avctx,
>+                                                     char *data, size_t *data_len)
>+{
>+    D3D12VAEncodeHEVCContext *priv = avctx->priv_data;
>+    CodedBitstreamFragment   *au   = &priv->current_access_unit;
>+    int err;
>+
>+    err = d3d12va_encode_hevc_add_nal(avctx, au, &priv->raw_vps);
>+    if (err < 0)
>+        goto fail;
>+
>+    err = d3d12va_encode_hevc_add_nal(avctx, au, &priv->raw_sps);
>+    if (err < 0)
>+        goto fail;
>+
>+    err = d3d12va_encode_hevc_add_nal(avctx, au, &priv->raw_pps);
>+    if (err < 0)
>+        goto fail;
>+
>+    err = d3d12va_encode_hevc_write_access_unit(avctx, data, data_len, au);
>+fail:
>+    ff_cbs_fragment_reset(au);
>+    return err;
>+}
>+
>+static int d3d12va_encode_hevc_init_sequence_params(AVCodecContext *avctx)
>+{
>+    HWBaseEncodeContext  *base_ctx = avctx->priv_data;
>+    D3D12VAEncodeContext     *ctx  = avctx->priv_data;
>+    D3D12VAEncodeHEVCContext *priv = avctx->priv_data;
>+    AVD3D12VAFramesContext  *hwctx = base_ctx->input_frames->hwctx;
>+    H265RawVPS               *vps  = &priv->raw_vps;
>+    H265RawSPS               *sps  = &priv->raw_sps;
>+    H265RawPPS               *pps  = &priv->raw_pps;
>+    H265RawProfileTierLevel  *ptl  = &vps->profile_tier_level;
>+    H265RawVUI               *vui  = &sps->vui;
>+    D3D12_VIDEO_ENCODER_PROFILE_HEVC profile = D3D12_VIDEO_ENCODER_PROFILE_HEVC_MAIN;
>+    D3D12_VIDEO_ENCODER_LEVEL_TIER_CONSTRAINTS_HEVC level = { 0 };
>+    const AVPixFmtDescriptor *desc;
>+    uint8_t min_cu_size, max_cu_size, min_tu_size, max_tu_size;
>+    int chroma_format, bit_depth;
>+    HRESULT hr;
>+    int i;
>+
>+    D3D12_FEATURE_DATA_VIDEO_ENCODER_SUPPORT support = {
>+        .NodeIndex                        = 0,
>+        .Codec                            = D3D12_VIDEO_ENCODER_CODEC_HEVC,
>+        .InputFormat                      = hwctx->format,
>+        .RateControl                      = ctx->rc,
>+        .IntraRefresh                     = D3D12_VIDEO_ENCODER_INTRA_REFRESH_MODE_NONE,
>+        .SubregionFrameEncoding           = D3D12_VIDEO_ENCODER_FRAME_SUBREGION_LAYOUT_MODE_FULL_FRAME,
>+        .ResolutionsListCount             = 1,
>+        .pResolutionList                  = &ctx->resolution,
>+        .CodecGopSequence                 = ctx->gop,
>+        .MaxReferenceFramesInDPB          = MAX_DPB_SIZE - 1,
>+        .CodecConfiguration               = ctx->codec_conf,
>+        .SuggestedProfile.DataSize        = sizeof(D3D12_VIDEO_ENCODER_PROFILE_HEVC),
>+        .SuggestedProfile.pHEVCProfile    = &profile,
>+        .SuggestedLevel.DataSize          = sizeof(D3D12_VIDEO_ENCODER_LEVEL_TIER_CONSTRAINTS_HEVC),
>+        .SuggestedLevel.pHEVCLevelSetting = &level,
>+        .pResolutionDependentSupport      = &ctx->res_limits,
>+    };
>+
>+    hr = ID3D12VideoDevice3_CheckFeatureSupport(ctx->video_device3, D3D12_FEATURE_VIDEO_ENCODER_SUPPORT,
>+                                                &support, sizeof(support));
>+
>+    if (FAILED(hr)) {
>+        av_log(avctx, AV_LOG_ERROR, "Failed to check encoder support (%lx).\n", (long)hr);
>+        return AVERROR(EINVAL);
>+    }
>+
>+    if (!(support.SupportFlags & D3D12_VIDEO_ENCODER_SUPPORT_FLAG_GENERAL_SUPPORT_OK)) {
>+        av_log(avctx, AV_LOG_ERROR, "Driver does not support some requested features: %#x.\n",
>+               support.ValidationFlags);
>+        return AVERROR(EINVAL);
>+    }
>+
>+    if (support.SupportFlags & D3D12_VIDEO_ENCODER_SUPPORT_FLAG_RECONSTRUCTED_FRAMES_REQUIRE_TEXTURE_ARRAYS) {
>+        av_log(avctx, AV_LOG_ERROR, "D3D12 video encode on this device requires texture array support, "
>+               "but it is not implemented.\n");
>+        return AVERROR_PATCHWELCOME;
>+    }
>+
>+    memset(vps, 0, sizeof(*vps));
>+    memset(sps, 0, sizeof(*sps));
>+    memset(pps, 0, sizeof(*pps));
>+
>+    desc = av_pix_fmt_desc_get(base_ctx->input_frames->sw_format);
>+    av_assert0(desc);
>+    if (desc->nb_components == 1) {
>+        chroma_format = 0;
>+    } else {
>+        if (desc->log2_chroma_w == 1 && desc->log2_chroma_h == 1) {
>+            chroma_format = 1;
>+        } else if (desc->log2_chroma_w == 1 && desc->log2_chroma_h == 0) {
>+            chroma_format = 2;
>+        } else if (desc->log2_chroma_w == 0 && desc->log2_chroma_h == 0) {
>+            chroma_format = 3;
>+        } else {
>+            av_log(avctx, AV_LOG_ERROR, "Chroma format of input pixel format "
>+                   "%s is not supported.\n", desc->name);
>+            return AVERROR(EINVAL);
>+        }
>+    }
>+    bit_depth = desc->comp[0].depth;
>+
>+    min_cu_size = d3d12va_encode_hevc_map_cusize(ctx->codec_conf.pHEVCConfig->MinLumaCodingUnitSize);
>+    max_cu_size = d3d12va_encode_hevc_map_cusize(ctx->codec_conf.pHEVCConfig->MaxLumaCodingUnitSize);
>+    min_tu_size = d3d12va_encode_hevc_map_tusize(ctx->codec_conf.pHEVCConfig->MinLumaTransformUnitSize);
>+    max_tu_size = d3d12va_encode_hevc_map_tusize(ctx->codec_conf.pHEVCConfig->MaxLumaTransformUnitSize);
>+
>+    // VPS
>+
>+    vps->nal_unit_header = (H265RawNALUnitHeader) {
>+        .nal_unit_type         = HEVC_NAL_VPS,
>+        .nuh_layer_id          = 0,
>+        .nuh_temporal_id_plus1 = 1,
>+    };
>+
>+    vps->vps_video_parameter_set_id = 0;
>+
>+    vps->vps_base_layer_internal_flag  = 1;
>+    vps->vps_base_layer_available_flag = 1;
>+    vps->vps_max_layers_minus1         = 0;
>+    vps->vps_max_sub_layers_minus1     = 0;
>+    vps->vps_temporal_id_nesting_flag  = 1;
>+
>+    ptl->general_profile_space = 0;
>+    ptl->general_profile_idc   = avctx->profile;
>+    ptl->general_tier_flag     = priv->tier;
>+
>+    ptl->general_profile_compatibility_flag[ptl->general_profile_idc] = 1;
>+
>+    ptl->general_progressive_source_flag    = 1;
>+    ptl->general_interlaced_source_flag     = 0;
>+    ptl->general_non_packed_constraint_flag = 1;
>+    ptl->general_frame_only_constraint_flag = 1;
>+
>+    ptl->general_max_14bit_constraint_flag = bit_depth <= 14;
>+    ptl->general_max_12bit_constraint_flag = bit_depth <= 12;
>+    ptl->general_max_10bit_constraint_flag = bit_depth <= 10;
>+    ptl->general_max_8bit_constraint_flag  = bit_depth ==  8;
>+
>+    ptl->general_max_422chroma_constraint_flag  = chroma_format <= 2;
>+    ptl->general_max_420chroma_constraint_flag  = chroma_format <= 1;
>+    ptl->general_max_monochrome_constraint_flag = chroma_format == 0;
>+
>+    ptl->general_intra_constraint_flag = base_ctx->gop_size == 1;
>+    ptl->general_one_picture_only_constraint_flag = 0;
>+
>+    ptl->general_lower_bit_rate_constraint_flag = 1;
>+
>+    if (avctx->level != FF_LEVEL_UNKNOWN) {
>+        ptl->general_level_idc = avctx->level;
>+    } else {
>+        const H265LevelDescriptor *level;
>+
>+        level = ff_h265_guess_level(ptl, avctx->bit_rate,
>+                                    base_ctx->surface_width, base_ctx->surface_height,
>+                                    1, 1, 1, (base_ctx->b_per_p > 0) + 1);
>+        if (level) {
>+            av_log(avctx, AV_LOG_VERBOSE, "Using level %s.\n", level->name);
>+            ptl->general_level_idc = level->level_idc;
>+        } else {
>+            av_log(avctx, AV_LOG_VERBOSE, "Stream will not conform to "
>+                   "any normal level; using level 8.5.\n");
>+            ptl->general_level_idc = 255;
>+            // The tier flag must be set in level 8.5.
>+            ptl->general_tier_flag = 1;
>+        }
>+        avctx->level = ptl->general_level_idc;
>+    }
>+
>+    vps->vps_sub_layer_ordering_info_present_flag = 0;
>+    vps->vps_max_dec_pic_buffering_minus1[0]      = base_ctx->max_b_depth + 1;
>+    vps->vps_max_num_reorder_pics[0]              = base_ctx->max_b_depth;
>+    vps->vps_max_latency_increase_plus1[0]        = 0;
>+
>+    vps->vps_max_layer_id             = 0;
>+    vps->vps_num_layer_sets_minus1    = 0;
>+    vps->layer_id_included_flag[0][0] = 1;
>+
>+    vps->vps_timing_info_present_flag = 0;
>+
>+    // SPS
>+
>+    sps->nal_unit_header = (H265RawNALUnitHeader) {
>+        .nal_unit_type         = HEVC_NAL_SPS,
>+        .nuh_layer_id          = 0,
>+        .nuh_temporal_id_plus1 = 1,
>+    };
>+
>+    sps->sps_video_parameter_set_id = vps->vps_video_parameter_set_id;
>+
>+    sps->sps_max_sub_layers_minus1    = vps->vps_max_sub_layers_minus1;
>+    sps->sps_temporal_id_nesting_flag = vps->vps_temporal_id_nesting_flag;
>+
>+    sps->profile_tier_level = vps->profile_tier_level;
>+
>+    sps->sps_seq_parameter_set_id = 0;
>+
>+    sps->chroma_format_idc          = chroma_format;
>+    sps->separate_colour_plane_flag = 0;
>+
>+    av_assert0(ctx->res_limits.SubregionBlockPixelsSize % min_cu_size == 0);
>+
>+    sps->pic_width_in_luma_samples  = FFALIGN(base_ctx->surface_width,
>+                                              ctx->res_limits.SubregionBlockPixelsSize);
>+    sps->pic_height_in_luma_samples = FFALIGN(base_ctx->surface_height,
>+                                              ctx->res_limits.SubregionBlockPixelsSize);
>+
>+    if (avctx->width  != sps->pic_width_in_luma_samples ||
>+        avctx->height != sps->pic_height_in_luma_samples) {
>+        sps->conformance_window_flag = 1;
>+        sps->conf_win_left_offset   = 0;
>+        sps->conf_win_right_offset  =
>+            (sps->pic_width_in_luma_samples - avctx->width) >> desc->log2_chroma_w;
>+        sps->conf_win_top_offset    = 0;
>+        sps->conf_win_bottom_offset =
>+            (sps->pic_height_in_luma_samples - avctx->height) >> desc->log2_chroma_h;
>+    } else {
>+        sps->conformance_window_flag = 0;
>+    }
>+
>+    sps->bit_depth_luma_minus8   = bit_depth - 8;
>+    sps->bit_depth_chroma_minus8 = bit_depth - 8;
>+
>+    sps->log2_max_pic_order_cnt_lsb_minus4 = ctx->gop.pHEVCGroupOfPictures->log2_max_pic_order_cnt_lsb_minus4;
>+
>+    sps->sps_sub_layer_ordering_info_present_flag =
>+        vps->vps_sub_layer_ordering_info_present_flag;
>+    for (i = 0; i <= sps->sps_max_sub_layers_minus1; i++) {
>+        sps->sps_max_dec_pic_buffering_minus1[i] =
>+            vps->vps_max_dec_pic_buffering_minus1[i];
>+        sps->sps_max_num_reorder_pics[i] =
>+            vps->vps_max_num_reorder_pics[i];
>+        sps->sps_max_latency_increase_plus1[i] =
>+            vps->vps_max_latency_increase_plus1[i];
>+    }
>+
>+    sps->log2_min_luma_coding_block_size_minus3      = (uint8_t)(av_log2(min_cu_size) - 3);
>+    sps->log2_diff_max_min_luma_coding_block_size    = (uint8_t)(av_log2(max_cu_size) - av_log2(min_cu_size));
>+    sps->log2_min_luma_transform_block_size_minus2   = (uint8_t)(av_log2(min_tu_size) - 2);
>+    sps->log2_diff_max_min_luma_transform_block_size = (uint8_t)(av_log2(max_tu_size) - av_log2(min_tu_size));
>+
>+    sps->max_transform_hierarchy_depth_inter = ctx->codec_conf.pHEVCConfig->max_transform_hierarchy_depth_inter;
>+    sps->max_transform_hierarchy_depth_intra = ctx->codec_conf.pHEVCConfig->max_transform_hierarchy_depth_intra;
>+
>+    sps->amp_enabled_flag = !!(ctx->codec_conf.pHEVCConfig->ConfigurationFlags &
>+                               D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_USE_ASYMETRIC_MOTION_PARTITION);
>+    sps->sample_adaptive_offset_enabled_flag = !!(ctx->codec_conf.pHEVCConfig->ConfigurationFlags &
>+                                                  D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_ENABLE_SAO_FILTER);
>+    sps->sps_temporal_mvp_enabled_flag = 0;
>+    sps->pcm_enabled_flag = 0;
>+
>+    sps->vui_parameters_present_flag = 1;
>+
>+    // vui default parameters
>+    vui->aspect_ratio_idc                        = 0;
>+    vui->video_format                            = 5;
>+    vui->video_full_range_flag                   = 0;
>+    vui->colour_primaries                        = 2;
>+    vui->transfer_characteristics                = 2;
>+    vui->matrix_coefficients                     = 2;
>+    vui->chroma_sample_loc_type_top_field        = 0;
>+    vui->chroma_sample_loc_type_bottom_field     = 0;
>+    vui->tiles_fixed_structure_flag              = 0;
>+    vui->motion_vectors_over_pic_boundaries_flag = 1;
>+    vui->min_spatial_segmentation_idc            = 0;
>+    vui->max_bytes_per_pic_denom                 = 2;
>+    vui->max_bits_per_min_cu_denom               = 1;
>+    vui->log2_max_mv_length_horizontal           = 15;
>+    vui->log2_max_mv_length_vertical             = 15;
>+
>+    // PPS
>+
>+    pps->nal_unit_header = (H265RawNALUnitHeader) {
>+        .nal_unit_type         = HEVC_NAL_PPS,
>+        .nuh_layer_id          = 0,
>+        .nuh_temporal_id_plus1 = 1,
>+    };
>+
>+    pps->pps_pic_parameter_set_id = 0;
>+    pps->pps_seq_parameter_set_id = sps->sps_seq_parameter_set_id;
>+
>+    pps->cabac_init_present_flag = 1;
>+
>+    pps->num_ref_idx_l0_default_active_minus1 = 0;
>+    pps->num_ref_idx_l1_default_active_minus1 = 0;
>+
>+    pps->init_qp_minus26 = 0;
>+
>+    pps->transform_skip_enabled_flag = !!(ctx->codec_conf.pHEVCConfig->ConfigurationFlags &
>+                                          D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_ENABLE_TRANSFORM_SKIPPING);
>+
>+    // cu_qp_delta is always required to be 1, see
>+    // https://github.com/microsoft/DirectX-Specs/blob/master/d3d/D3D12VideoEncoding.md
>+    pps->cu_qp_delta_enabled_flag = 1;
>+
>+    pps->diff_cu_qp_delta_depth   = 0;
>+
>+    pps->pps_slice_chroma_qp_offsets_present_flag = 1;
>+
>+    pps->tiles_enabled_flag = 0; // no tiling in D3D12
>+
>+    pps->pps_loop_filter_across_slices_enabled_flag = !(ctx->codec_conf.pHEVCConfig->ConfigurationFlags &
>+                                                        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_DISABLE_LOOP_FILTER_ACROSS_SLICES);
>+    pps->deblocking_filter_control_present_flag = 1;
>+
>+    return 0;
>+}
>+
>+static int d3d12va_encode_hevc_get_encoder_caps(AVCodecContext *avctx)
>+{
>+    int i;
>+    HRESULT hr;
>+    uint8_t min_cu_size, max_cu_size;
>+    HWBaseEncodeContext *base_ctx = avctx->priv_data;
>+    D3D12VAEncodeContext     *ctx = avctx->priv_data;
>+    D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC *config;
>+    D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC hevc_caps;
>+
>+    D3D12_FEATURE_DATA_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT codec_caps = {
>+        .NodeIndex                   = 0,
>+        .Codec                       = D3D12_VIDEO_ENCODER_CODEC_HEVC,
>+        .Profile                     = ctx->profile->d3d12_profile,
>+        .CodecSupportLimits.DataSize = sizeof(D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC),
>+    };
>+
>+    for (i = 0; i < FF_ARRAY_ELEMS(hevc_config_support_sets); i++) {
>+        hevc_caps = hevc_config_support_sets[i];
>+        codec_caps.CodecSupportLimits.pHEVCSupport = &hevc_caps;
>+        hr = ID3D12VideoDevice3_CheckFeatureSupport(ctx->video_device3, D3D12_FEATURE_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT,
>+                                                    &codec_caps, sizeof(codec_caps));
>+        if (SUCCEEDED(hr) && codec_caps.IsSupported)
>+            break;
>+    }
>+
>+    if (i == FF_ARRAY_ELEMS(hevc_config_support_sets)) {
>+        av_log(avctx, AV_LOG_ERROR, "Unsupported codec configuration\n");
>+        return AVERROR(EINVAL);
>+    }
>+
>+    ctx->codec_conf.DataSize = sizeof(D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC);
>+    ctx->codec_conf.pHEVCConfig = av_mallocz(ctx->codec_conf.DataSize);
>+    if (!ctx->codec_conf.pHEVCConfig)
>+        return AVERROR(ENOMEM);
>+
>+    config = ctx->codec_conf.pHEVCConfig;
>+
>+    config->ConfigurationFlags                  = D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_NONE;
>+    config->MinLumaCodingUnitSize               = hevc_caps.MinLumaCodingUnitSize;
>+    config->MaxLumaCodingUnitSize               = hevc_caps.MaxLumaCodingUnitSize;
>+    config->MinLumaTransformUnitSize            = hevc_caps.MinLumaTransformUnitSize;
>+    config->MaxLumaTransformUnitSize            = hevc_caps.MaxLumaTransformUnitSize;
>+    config->max_transform_hierarchy_depth_inter = hevc_caps.max_transform_hierarchy_depth_inter;
>+    config->max_transform_hierarchy_depth_intra = hevc_caps.max_transform_hierarchy_depth_intra;
>+
>+    if (hevc_caps.SupportFlags & D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_ASYMETRIC_MOTION_PARTITION_SUPPORT ||
>+        hevc_caps.SupportFlags & D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_ASYMETRIC_MOTION_PARTITION_REQUIRED)
>+        config->ConfigurationFlags |= D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_USE_ASYMETRIC_MOTION_PARTITION;
>+
>+    if (hevc_caps.SupportFlags & D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_SAO_FILTER_SUPPORT)
>+        config->ConfigurationFlags |= D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_ENABLE_SAO_FILTER;
>+
>+    if (hevc_caps.SupportFlags & D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_DISABLING_LOOP_FILTER_ACROSS_SLICES_SUPPORT)
>+        config->ConfigurationFlags |= D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_DISABLE_LOOP_FILTER_ACROSS_SLICES;
>+
>+    if (hevc_caps.SupportFlags & D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_TRANSFORM_SKIP_SUPPORT)
>+        config->ConfigurationFlags |= D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_ENABLE_TRANSFORM_SKIPPING;
>+
>+    if (hevc_caps.SupportFlags & D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_P_FRAMES_IMPLEMENTED_AS_LOW_DELAY_B_FRAMES)
>+        ctx->bi_not_empty = 1;
>+
>+    // block sizes
>+    min_cu_size = d3d12va_encode_hevc_map_cusize(hevc_caps.MinLumaCodingUnitSize);
>+    max_cu_size = d3d12va_encode_hevc_map_cusize(hevc_caps.MaxLumaCodingUnitSize);
>+
>+    av_log(avctx, AV_LOG_VERBOSE, "Using CTU size %dx%d, "
>+           "min CB size %dx%d.\n", max_cu_size, max_cu_size,
>+           min_cu_size, min_cu_size);
>+
>+    base_ctx->surface_width  = FFALIGN(avctx->width,  min_cu_size);
>+    base_ctx->surface_height = FFALIGN(avctx->height, min_cu_size);
>+
>+    return 0;
>+}
>+
>+static int d3d12va_encode_hevc_configure(AVCodecContext *avctx)
>+{
>+    HWBaseEncodeContext  *base_ctx = avctx->priv_data;
>+    D3D12VAEncodeContext      *ctx = avctx->priv_data;
>+    D3D12VAEncodeHEVCContext *priv = avctx->priv_data;
>+    int fixed_qp_idr, fixed_qp_p, fixed_qp_b;
>+    int err;
>+
>+    err = ff_cbs_init(&priv->cbc, AV_CODEC_ID_HEVC, avctx);
>+    if (err < 0)
>+        return err;
>+
>+    // Rate control
>+    if (ctx->rc.Mode == D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_CQP) {
>+        D3D12_VIDEO_ENCODER_RATE_CONTROL_CQP *cqp_ctl;
>+        fixed_qp_p = av_clip(base_ctx->rc_quality, 1, 51);
>+        if (avctx->i_quant_factor > 0.0)
>+            fixed_qp_idr = av_clip((avctx->i_quant_factor * fixed_qp_p +
>+                                    avctx->i_quant_offset) + 0.5, 1, 51);
>+        else
>+            fixed_qp_idr = fixed_qp_p;
>+        if (avctx->b_quant_factor > 0.0)
>+            fixed_qp_b = av_clip((avctx->b_quant_factor * fixed_qp_p +
>+                                  avctx->b_quant_offset) + 0.5, 1, 51);
>+        else
>+            fixed_qp_b = fixed_qp_p;
>+
>+        av_log(avctx, AV_LOG_DEBUG, "Using fixed QP = "
>+               "%d / %d / %d for IDR- / P- / B-frames.\n",
>+               fixed_qp_idr, fixed_qp_p, fixed_qp_b);
>+
>+        ctx->rc.ConfigParams.DataSize = sizeof(D3D12_VIDEO_ENCODER_RATE_CONTROL_CQP);
>+        cqp_ctl = av_mallocz(ctx->rc.ConfigParams.DataSize);
>+        if (!cqp_ctl)
>+            return AVERROR(ENOMEM);
>+
>+        cqp_ctl->ConstantQP_FullIntracodedFrame                  = fixed_qp_idr;
>+        cqp_ctl->ConstantQP_InterPredictedFrame_PrevRefOnly      = fixed_qp_p;
>+        cqp_ctl->ConstantQP_InterPredictedFrame_BiDirectionalRef = fixed_qp_b;
>+
>+        ctx->rc.ConfigParams.pConfiguration_CQP = cqp_ctl;
>+    }
>+
>+    // GOP
>+    ctx->gop.DataSize = sizeof(D3D12_VIDEO_ENCODER_SEQUENCE_GOP_STRUCTURE_HEVC);
>+    ctx->gop.pHEVCGroupOfPictures = av_mallocz(ctx->gop.DataSize);
>+    if (!ctx->gop.pHEVCGroupOfPictures)
>+        return AVERROR(ENOMEM);
>+
>+    ctx->gop.pHEVCGroupOfPictures->GOPLength      = base_ctx->gop_size;
>+    ctx->gop.pHEVCGroupOfPictures->PPicturePeriod = base_ctx->b_per_p + 1;
>+    // GOP size is a power of two.
>+    if (!(base_ctx->gop_size & (base_ctx->gop_size - 1)))
>+        ctx->gop.pHEVCGroupOfPictures->log2_max_pic_order_cnt_lsb_minus4 =
>+            FFMAX(av_log2(base_ctx->gop_size) - 4, 0);
>+    else
>+        ctx->gop.pHEVCGroupOfPictures->log2_max_pic_order_cnt_lsb_minus4 =
>+            FFMAX(av_log2(base_ctx->gop_size) - 3, 0);
>+
>+    return 0;
>+}
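
The GOP math above derives log2_max_pic_order_cnt_lsb_minus4 from the GOP length: a power-of-two GOP of 2^k needs exactly k POC LSB bits, while any other length needs one more than floor(log2). A standalone sketch of that intent, outside the patch (the helper names are ours; ilog2 stands in for FFmpeg's av_log2):

```c
#include <assert.h>

/* Illustrative helpers (not FFmpeg API). */
static int ilog2(unsigned v)  /* floor(log2(v)), v > 0 */
{
    int n = -1;
    while (v) { n++; v >>= 1; }
    return n;
}

static int is_pow2(unsigned v)
{
    /* A power of two has exactly one bit set. */
    return v && !(v & (v - 1));
}

static int poc_lsb_minus4(unsigned gop_size)
{
    /* GOP of 2^k needs k POC LSB bits; otherwise floor(log2) + 1. */
    int n = is_pow2(gop_size) ? ilog2(gop_size) - 4 : ilog2(gop_size) - 3;
    return n > 0 ? n : 0;  /* clamp, as FFMAX(..., 0) does */
}
```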
>+
>+static int d3d12va_encode_hevc_set_level(AVCodecContext *avctx)
>+{
>+    D3D12VAEncodeContext      *ctx = avctx->priv_data;
>+    D3D12VAEncodeHEVCContext *priv = avctx->priv_data;
>+    int i;
>+
>+    ctx->level.DataSize = sizeof(D3D12_VIDEO_ENCODER_LEVEL_TIER_CONSTRAINTS_HEVC);
>+    ctx->level.pHEVCLevelSetting = av_mallocz(ctx->level.DataSize);
>+    if (!ctx->level.pHEVCLevelSetting)
>+        return AVERROR(ENOMEM);
>+
>+    for (i = 0; i < FF_ARRAY_ELEMS(hevc_levels); i++) {
>+        if (avctx->level == hevc_levels[i].level) {
>+            ctx->level.pHEVCLevelSetting->Level = hevc_levels[i].d3d12_level;
>+            break;
>+        }
>+    }
>+
>+    if (i == FF_ARRAY_ELEMS(hevc_levels)) {
>+        av_log(avctx, AV_LOG_ERROR, "Invalid level %d.\n", avctx->level);
>+        return AVERROR(EINVAL);
>+    }
>+
>+    ctx->level.pHEVCLevelSetting->Tier = priv->raw_vps.profile_tier_level.general_tier_flag == 0 ?
>+                                         D3D12_VIDEO_ENCODER_TIER_HEVC_MAIN :
>+                                         D3D12_VIDEO_ENCODER_TIER_HEVC_HIGH;
>+
>+    return 0;
>+}
>+
>+static void d3d12va_encode_hevc_free_picture_params(D3D12VAEncodePicture *pic)
>+{
>+    if (!pic->pic_ctl.pHEVCPicData)
>+        return;
>+
>+    av_freep(&pic->pic_ctl.pHEVCPicData->pList0ReferenceFrames);
>+    av_freep(&pic->pic_ctl.pHEVCPicData->pList1ReferenceFrames);
>+    av_freep(&pic->pic_ctl.pHEVCPicData->pReferenceFramesReconPictureDescriptors);
>+    av_freep(&pic->pic_ctl.pHEVCPicData);
>+}
>+
>+static int d3d12va_encode_hevc_init_picture_params(AVCodecContext *avctx,
>+                                                   D3D12VAEncodePicture *pic)
>+{
>+    HWBaseEncodePicture      *base_pic = (HWBaseEncodePicture *)pic;
>+    D3D12VAEncodeHEVCPicture     *hpic = base_pic->priv_data;
>+    HWBaseEncodePicture          *prev = base_pic->prev;
>+    D3D12VAEncodeHEVCPicture    *hprev = prev ? prev->priv_data : NULL;
>+    D3D12_VIDEO_ENCODER_REFERENCE_PICTURE_DESCRIPTOR_HEVC *pd = NULL;
>+    UINT *ref_list0 = NULL, *ref_list1 = NULL;
>+    int i, idx = 0;
>+
>+    pic->pic_ctl.DataSize = sizeof(D3D12_VIDEO_ENCODER_PICTURE_CONTROL_CODEC_DATA_HEVC);
>+    pic->pic_ctl.pHEVCPicData = av_mallocz(pic->pic_ctl.DataSize);
>+    if (!pic->pic_ctl.pHEVCPicData)
>+        return AVERROR(ENOMEM);
>+
>+    if (base_pic->type == PICTURE_TYPE_IDR) {
>+        av_assert0(base_pic->display_order == base_pic->encode_order);
>+        hpic->last_idr_frame = base_pic->display_order;
>+    } else {
>+        av_assert0(prev);
>+        hpic->last_idr_frame = hprev->last_idr_frame;
>+    }
>+    hpic->pic_order_cnt = base_pic->display_order - hpic->last_idr_frame;
>+
>+    switch(base_pic->type) {
>+        case PICTURE_TYPE_IDR:
>+            pic->pic_ctl.pHEVCPicData->FrameType = D3D12_VIDEO_ENCODER_FRAME_TYPE_HEVC_IDR_FRAME;
>+            break;
>+        case PICTURE_TYPE_I:
>+            pic->pic_ctl.pHEVCPicData->FrameType = D3D12_VIDEO_ENCODER_FRAME_TYPE_HEVC_I_FRAME;
>+            break;
>+        case PICTURE_TYPE_P:
>+            pic->pic_ctl.pHEVCPicData->FrameType = D3D12_VIDEO_ENCODER_FRAME_TYPE_HEVC_P_FRAME;
>+            break;
>+        case PICTURE_TYPE_B:
>+            pic->pic_ctl.pHEVCPicData->FrameType = D3D12_VIDEO_ENCODER_FRAME_TYPE_HEVC_B_FRAME;
>+            break;
>+        default:
>+            av_assert0(0 && "invalid picture type");
>+    }
>+
>+    pic->pic_ctl.pHEVCPicData->slice_pic_parameter_set_id = 0;
>+    pic->pic_ctl.pHEVCPicData->PictureOrderCountNumber    = hpic->pic_order_cnt;
>+
>+    if (base_pic->type == PICTURE_TYPE_P || base_pic->type == PICTURE_TYPE_B) {
>+        pd = av_calloc(MAX_PICTURE_REFERENCES, sizeof(*pd));
>+        if (!pd)
>+            return AVERROR(ENOMEM);
>+
>+        ref_list0 = av_calloc(MAX_PICTURE_REFERENCES, sizeof(*ref_list0));
>+        if (!ref_list0)
>+            return AVERROR(ENOMEM);
>+
>+        pic->pic_ctl.pHEVCPicData->List0ReferenceFramesCount = base_pic->nb_refs[0];
>+        for (i = 0; i < base_pic->nb_refs[0]; i++) {
>+            HWBaseEncodePicture      *ref = base_pic->refs[0][i];
>+            D3D12VAEncodeHEVCPicture *href;
>+
>+            av_assert0(ref && ref->encode_order < base_pic->encode_order);
>+            href = ref->priv_data;
>+
>+            ref_list0[i] = idx;
>+            pd[idx].ReconstructedPictureResourceIndex = idx;
>+            pd[idx].IsRefUsedByCurrentPic = TRUE;
>+            pd[idx].PictureOrderCountNumber = href->pic_order_cnt;
>+            idx++;
>+        }
>+    }
>+
>+    if (base_pic->type == PICTURE_TYPE_B) {
>+        ref_list1 = av_calloc(MAX_PICTURE_REFERENCES, sizeof(*ref_list1));
>+        if (!ref_list1)
>+            return AVERROR(ENOMEM);
>+
>+        pic->pic_ctl.pHEVCPicData->List1ReferenceFramesCount = base_pic->nb_refs[1];
>+        for (i = 0; i < base_pic->nb_refs[1]; i++) {
>+            HWBaseEncodePicture      *ref = base_pic->refs[1][i];
>+            D3D12VAEncodeHEVCPicture *href;
>+
>+            av_assert0(ref && ref->encode_order < base_pic->encode_order);
>+            href = ref->priv_data;
>+
>+            ref_list1[i] = idx;
>+            pd[idx].ReconstructedPictureResourceIndex = idx;
>+            pd[idx].IsRefUsedByCurrentPic = TRUE;
>+            pd[idx].PictureOrderCountNumber = href->pic_order_cnt;
>+            idx++;
>+        }
>+    }
>+
>+    pic->pic_ctl.pHEVCPicData->pList0ReferenceFrames = ref_list0;
>+    pic->pic_ctl.pHEVCPicData->pList1ReferenceFrames = ref_list1;
>+    pic->pic_ctl.pHEVCPicData->ReferenceFramesReconPictureDescriptorsCount = idx;
>+    pic->pic_ctl.pHEVCPicData->pReferenceFramesReconPictureDescriptors = pd;
>+
>+    return 0;
>+}
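
[Editor note: the function above assigns each L0 reference a fresh slot in one shared descriptor array, then continues numbering for L1, so pList0/pList1 hold indices into pReferenceFramesReconPictureDescriptors and the descriptor count equals the total number of references. A standalone sketch of just that indexing scheme — the function and parameter names are illustrative, not the D3D12 API:]

```c
#define MAX_REFS 2

/* Illustrative model of the patch's index scheme: L0 entries take
 * descriptor slots 0..n0-1, L1 entries continue at n0..n0+n1-1, and
 * the returned count matches ReferenceFramesReconPictureDescriptorsCount. */
static int build_ref_lists(int nb_l0, int nb_l1,
                           unsigned list0[MAX_REFS], unsigned list1[MAX_REFS])
{
    int idx = 0;
    for (int i = 0; i < nb_l0; i++)
        list0[i] = idx++;   /* L0 indices: 0, 1, ... */
    for (int i = 0; i < nb_l1; i++)
        list1[i] = idx++;   /* L1 indices continue after L0 */
    return idx;             /* total descriptor count */
}
```
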
>+
>+static const D3D12VAEncodeType d3d12va_encode_type_hevc = {
>+    .profiles               = d3d12va_encode_hevc_profiles,
>+
>+    .d3d12_codec            = D3D12_VIDEO_ENCODER_CODEC_HEVC,
>+
>+    .flags                  = FLAG_B_PICTURES |
>+                              FLAG_B_PICTURE_REFERENCES |
>+                              FLAG_NON_IDR_KEY_PICTURES,
>+
>+    .default_quality        = 25,
>+
>+    .get_encoder_caps       = &d3d12va_encode_hevc_get_encoder_caps,
>+
>+    .configure              = &d3d12va_encode_hevc_configure,
>+
>+    .set_level              = &d3d12va_encode_hevc_set_level,
>+
>+    .picture_priv_data_size = sizeof(D3D12VAEncodeHEVCPicture),
>+
>+    .init_sequence_params   = &d3d12va_encode_hevc_init_sequence_params,
>+
>+    .init_picture_params    = &d3d12va_encode_hevc_init_picture_params,
>+
>+    .free_picture_params    = &d3d12va_encode_hevc_free_picture_params,
>+
>+    .write_sequence_header  = &d3d12va_encode_hevc_write_sequence_header,
>+};
>+
>+static int d3d12va_encode_hevc_init(AVCodecContext *avctx)
>+{
>+    HWBaseEncodeContext  *base_ctx = avctx->priv_data;
>+    D3D12VAEncodeContext      *ctx = avctx->priv_data;
>+    D3D12VAEncodeHEVCContext *priv = avctx->priv_data;
>+
>+    ctx->codec = &d3d12va_encode_type_hevc;
>+
>+    if (avctx->profile == AV_PROFILE_UNKNOWN)
>+        avctx->profile = priv->profile;
>+    if (avctx->level == FF_LEVEL_UNKNOWN)
>+        avctx->level = priv->level;
>+
>+    if (avctx->level != FF_LEVEL_UNKNOWN && avctx->level & ~0xff) {
>+        av_log(avctx, AV_LOG_ERROR, "Invalid level %d: must fit "
>+               "in 8-bit unsigned integer.\n", avctx->level);
>+        return AVERROR(EINVAL);
>+    }
>+
>+    if (priv->qp > 0)
>+        base_ctx->explicit_qp = priv->qp;
>+
>+    return ff_d3d12va_encode_init(avctx);
>+}
>+
>+static int d3d12va_encode_hevc_close(AVCodecContext *avctx)
>+{
>+    D3D12VAEncodeHEVCContext *priv = avctx->priv_data;
>+
>+    ff_cbs_fragment_free(&priv->current_access_unit);
>+    ff_cbs_close(&priv->cbc);
>+
>+    av_freep(&priv->common.codec_conf.pHEVCConfig);
>+    av_freep(&priv->common.gop.pHEVCGroupOfPictures);
>+    av_freep(&priv->common.level.pHEVCLevelSetting);
>+
>+    return ff_d3d12va_encode_close(avctx);
>+}
>+
>+#define OFFSET(x) offsetof(D3D12VAEncodeHEVCContext, x)
>+#define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM)
>+static const AVOption d3d12va_encode_hevc_options[] = {
>+    HW_BASE_ENCODE_COMMON_OPTIONS,
>+    D3D12VA_ENCODE_RC_OPTIONS,
>+
>+    { "qp", "Constant QP (for P-frames; scaled by qfactor/qoffset for I/B)",
>+      OFFSET(qp), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 52, FLAGS },
>+
>+    { "profile", "Set profile (general_profile_idc)",
>+      OFFSET(profile), AV_OPT_TYPE_INT,
>+      { .i64 = AV_PROFILE_UNKNOWN }, AV_PROFILE_UNKNOWN, 0xff, FLAGS, "profile" },
>+
>+#define PROFILE(name, value)  name, NULL, 0, AV_OPT_TYPE_CONST, \
>+      { .i64 = value }, 0, 0, FLAGS, "profile"
>+    { PROFILE("main",               AV_PROFILE_HEVC_MAIN) },
>+    { PROFILE("main10",             AV_PROFILE_HEVC_MAIN_10) },
>+    { PROFILE("rext",               AV_PROFILE_HEVC_REXT) },
>+#undef PROFILE
>+
>+    { "tier", "Set tier (general_tier_flag)",
>+      OFFSET(tier), AV_OPT_TYPE_INT,
>+      { .i64 = 0 }, 0, 1, FLAGS, "tier" },
>+    { "main", NULL, 0, AV_OPT_TYPE_CONST,
>+      { .i64 = 0 }, 0, 0, FLAGS, "tier" },
>+    { "high", NULL, 0, AV_OPT_TYPE_CONST,
>+      { .i64 = 1 }, 0, 0, FLAGS, "tier" },
>+
>+    { "level", "Set level (general_level_idc)",
>+      OFFSET(level), AV_OPT_TYPE_INT,
>+      { .i64 = FF_LEVEL_UNKNOWN }, FF_LEVEL_UNKNOWN, 0xff, FLAGS, "level" },
>+
>+#define LEVEL(name, value) name, NULL, 0, AV_OPT_TYPE_CONST, \
>+      { .i64 = value }, 0, 0, FLAGS, "level"
>+    { LEVEL("1",    30) },
>+    { LEVEL("2",    60) },
>+    { LEVEL("2.1",  63) },
>+    { LEVEL("3",    90) },
>+    { LEVEL("3.1",  93) },
>+    { LEVEL("4",   120) },
>+    { LEVEL("4.1", 123) },
>+    { LEVEL("5",   150) },
>+    { LEVEL("5.1", 153) },
>+    { LEVEL("5.2", 156) },
>+    { LEVEL("6",   180) },
>+    { LEVEL("6.1", 183) },
>+    { LEVEL("6.2", 186) },
>+#undef LEVEL
>+
>+    { NULL },
>+};
>+
>+static const FFCodecDefault d3d12va_encode_hevc_defaults[] = {
>+    { "b",              "0"   },
>+    { "bf",             "2"   },
>+    { "g",              "120" },
>+    { "i_qfactor",      "1"   },
>+    { "i_qoffset",      "0"   },
>+    { "b_qfactor",      "1" },
>+    { "b_qoffset",      "0"   },
>+    { "qmin",           "-1"  },
>+    { "qmax",           "-1"  },
>+    { NULL },
>+};
>+
>+static const AVClass d3d12va_encode_hevc_class = {
>+    .class_name = "hevc_d3d12va",
>+    .item_name  = av_default_item_name,
>+    .option     = d3d12va_encode_hevc_options,
>+    .version    = LIBAVUTIL_VERSION_INT,
>+};
>+
>+const FFCodec ff_hevc_d3d12va_encoder = {
>+    .p.name         = "hevc_d3d12va",
>+    CODEC_LONG_NAME("D3D12VA hevc encoder"),
>+    .p.type         = AVMEDIA_TYPE_VIDEO,
>+    .p.id           = AV_CODEC_ID_HEVC,
>+    .priv_data_size = sizeof(D3D12VAEncodeHEVCContext),
>+    .init           = &d3d12va_encode_hevc_init,
>+    FF_CODEC_RECEIVE_PACKET_CB(&ff_hw_base_encode_receive_packet),
>+    .close          = &d3d12va_encode_hevc_close,
>+    .p.priv_class   = &d3d12va_encode_hevc_class,
>+    .p.capabilities = AV_CODEC_CAP_DELAY | AV_CODEC_CAP_HARDWARE |
>+                      AV_CODEC_CAP_DR1 | AV_CODEC_CAP_ENCODER_REORDERED_OPAQUE,
>+    .caps_internal  = FF_CODEC_CAP_NOT_INIT_THREADSAFE |
>+                      FF_CODEC_CAP_INIT_CLEANUP,
>+    .defaults       = d3d12va_encode_hevc_defaults,
>+    .p.pix_fmts = (const enum AVPixelFormat[]) {
>+        AV_PIX_FMT_D3D12,
>+        AV_PIX_FMT_NONE,
>+    },
>+    .hw_configs     = ff_d3d12va_encode_hw_configs,
>+    .p.wrapper_name = "d3d12va",
>+};
>--
>2.41.0.windows.1

_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [FFmpeg-devel] [PATCH v7 02/12] avcodec/vaapi_encode: introduce a base layer for vaapi encode
  2024-03-14  8:14 ` [FFmpeg-devel] [PATCH v7 02/12] avcodec/vaapi_encode: introduce a base layer for vaapi encode tong1.wu-at-intel.com
@ 2024-04-15  7:29   ` Xiang, Haihao
  0 siblings, 0 replies; 15+ messages in thread
From: Xiang, Haihao @ 2024-04-15  7:29 UTC (permalink / raw)
  To: ffmpeg-devel; +Cc: Wu, Tong1

On Thu, 2024-03-14 at 16:14 +0800, tong1.wu-at-intel.com@ffmpeg.org wrote:
> From: Tong Wu <tong1.wu@intel.com>
> 
> Since VAAPI and the future D3D12VA implementation may share some common
> parameters, a base layer encode context is introduced as the VAAPI context's base.
> 
> Signed-off-by: Tong Wu <tong1.wu@intel.com>
> ---
>  libavcodec/hw_base_encode.h     | 241 ++++++++++++++++++++
>  libavcodec/vaapi_encode.c       | 392 +++++++++++++++++---------------
>  libavcodec/vaapi_encode.h       | 198 +---------------
>  libavcodec/vaapi_encode_av1.c   |  69 +++---
>  libavcodec/vaapi_encode_h264.c  | 197 ++++++++--------
>  libavcodec/vaapi_encode_h265.c  | 159 ++++++-------
>  libavcodec/vaapi_encode_mjpeg.c |  20 +-
>  libavcodec/vaapi_encode_mpeg2.c |  49 ++--
>  libavcodec/vaapi_encode_vp8.c   |  24 +-
>  libavcodec/vaapi_encode_vp9.c   |  66 +++---
>  10 files changed, 764 insertions(+), 651 deletions(-)
>  create mode 100644 libavcodec/hw_base_encode.h
> 
> diff --git a/libavcodec/hw_base_encode.h b/libavcodec/hw_base_encode.h
> new file mode 100644
> index 0000000000..41b68aa073
> --- /dev/null
> +++ b/libavcodec/hw_base_encode.h
> @@ -0,0 +1,241 @@
> +/*
> + * This file is part of FFmpeg.
> + *
> + * FFmpeg is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU Lesser General Public
> + * License as published by the Free Software Foundation; either
> + * version 2.1 of the License, or (at your option) any later version.
> + *
> + * FFmpeg is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * Lesser General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public
> + * License along with FFmpeg; if not, write to the Free Software
> + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
> + */
> +
> +#ifndef AVCODEC_HW_BASE_ENCODE_H
> +#define AVCODEC_HW_BASE_ENCODE_H
> +
> +#include "libavutil/hwcontext.h"
> +#include "libavutil/fifo.h"
> +
> +#include "avcodec.h"
> +
> +#define MAX_DPB_SIZE 16
> +#define MAX_PICTURE_REFERENCES 2
> +#define MAX_REORDER_DELAY 16
> +#define MAX_ASYNC_DEPTH 64
> +#define MAX_REFERENCE_LIST_NUM 2
> +
> +static inline const char *ff_hw_base_encode_get_pictype_name(const int type) {
> +    const char * const picture_type_name[] = { "IDR", "I", "P", "B" };
> +    return picture_type_name[type];
> +}
> +
> +enum {
> +    PICTURE_TYPE_IDR = 0,
> +    PICTURE_TYPE_I   = 1,
> +    PICTURE_TYPE_P   = 2,
> +    PICTURE_TYPE_B   = 3,
> +};
> +
> +enum {
> +    // Codec supports controlling the subdivision of pictures into slices.
> +    FLAG_SLICE_CONTROL         = 1 << 0,
> +    // Codec only supports constant quality (no rate control).
> +    FLAG_CONSTANT_QUALITY_ONLY = 1 << 1,
> +    // Codec is intra-only.
> +    FLAG_INTRA_ONLY            = 1 << 2,
> +    // Codec supports B-pictures.
> +    FLAG_B_PICTURES            = 1 << 3,
> +    // Codec supports referencing B-pictures.
> +    FLAG_B_PICTURE_REFERENCES  = 1 << 4,
> +    // Codec supports non-IDR key pictures (that is, key pictures do
> +    // not necessarily empty the DPB).
> +    FLAG_NON_IDR_KEY_PICTURES  = 1 << 5,
> +};
> +
> +typedef struct HWBaseEncodePicture {
> +    struct HWBaseEncodePicture *next;
> +
> +    int64_t         display_order;
> +    int64_t         encode_order;
> +    int64_t         pts;
> +    int64_t         duration;
> +    int             force_idr;
> +
> +    void           *opaque;
> +    AVBufferRef    *opaque_ref;
> +
> +    int             type;
> +    int             b_depth;
> +    int             encode_issued;
> +    int             encode_complete;
> +
> +    AVFrame        *input_image;
> +    AVFrame        *recon_image;
> +
> +    void           *priv_data;
> +
> +    // Whether this picture is a reference picture.
> +    int             is_reference;
> +
> +    // The contents of the DPB after this picture has been decoded.
> +    // This will contain the picture itself if it is a reference picture,
> +    // but not if it isn't.
> +    int                     nb_dpb_pics;
> +    struct HWBaseEncodePicture *dpb[MAX_DPB_SIZE];
> +    // The reference pictures used in decoding this picture. If they are
> +    // used by later pictures they will also appear in the DPB. ref[0][] for
> +    // previous reference frames. ref[1][] for future reference frames.
> +    int                     nb_refs[MAX_REFERENCE_LIST_NUM];
> +    struct HWBaseEncodePicture *refs[MAX_REFERENCE_LIST_NUM][MAX_PICTURE_REFERENCES];
> +    // The previous reference picture in encode order.  Must be in at least
> +    // one of the reference list and DPB list.
> +    struct HWBaseEncodePicture *prev;
> +    // Reference count for other pictures referring to this one through
> +    // the above pointers, directly from incomplete pictures and indirectly
> +    // through completed pictures.
> +    int             ref_count[2];
> +    int             ref_removed[2];
> +} HWBaseEncodePicture;
> +
> +typedef struct HWEncodePictureOperation {
> +    // Alloc memory for the picture structure.
> +    HWBaseEncodePicture * (*alloc)(AVCodecContext *avctx, const AVFrame *frame);
> +    // Issue the picture structure, which will send the frame surface to HW Encode API.
> +    int (*issue)(AVCodecContext *avctx, const HWBaseEncodePicture *base_pic);
> +    // Get the output AVPacket.
> +    int (*output)(AVCodecContext *avctx, const HWBaseEncodePicture *base_pic, AVPacket *pkt);
> +    // Free the picture structure.
> +    int (*free)(AVCodecContext *avctx, HWBaseEncodePicture *base_pic);
> +}  HWEncodePictureOperation;
> +
> +typedef struct HWBaseEncodeContext {
> +    const AVClass *class;
> +
> +    // Hardware-specific hooks.
> +    const struct HWEncodePictureOperation *op;
> +
> +    // Global options.
> +
> +    // Number of I frames between IDR frames.
> +    int             idr_interval;
> +
> +    // Desired B frame reference depth.
> +    int             desired_b_depth;
> +
> +    // Explicitly set RC mode (otherwise attempt to pick from
> +    // available modes).
> +    int             explicit_rc_mode;
> +
> +    // Explicitly-set QP, for use with the "qp" options.
> +    // (Forces CQP mode when set, overriding everything else.)
> +    int             explicit_qp;
> +
> +    // The required size of surfaces.  This is probably the input
> +    // size (AVCodecContext.width|height) aligned up to whatever
> +    // block size is required by the codec.
> +    int             surface_width;
> +    int             surface_height;
> +
> +    // The block size for slice calculations.
> +    int             slice_block_width;
> +    int             slice_block_height;
> +
> +    // RC quality level - meaning depends on codec and RC mode.
> +    // In CQP mode this sets the fixed quantiser value.
> +    int             rc_quality;
> +
> +    AVBufferRef    *device_ref;
> +    AVHWDeviceContext *device;
> +
> +    // The hardware frame context containing the input frames.
> +    AVBufferRef    *input_frames_ref;
> +    AVHWFramesContext *input_frames;
> +
> +    // The hardware frame context containing the reconstructed frames.
> +    AVBufferRef    *recon_frames_ref;
> +    AVHWFramesContext *recon_frames;
> +
> +    // Current encoding window, in display (input) order.
> +    HWBaseEncodePicture *pic_start, *pic_end;
> +    // The next picture to use as the previous reference picture in
> +    // encoding order. Order from small to large in encoding order.
> +    HWBaseEncodePicture *next_prev[MAX_PICTURE_REFERENCES];
> +    int                  nb_next_prev;
> +
> +    // Next input order index (display order).
> +    int64_t         input_order;
> +    // Number of frames that output is behind input.
> +    int64_t         output_delay;
> +    // Next encode order index.
> +    int64_t         encode_order;
> +    // Number of frames decode output will need to be delayed.
> +    int64_t         decode_delay;
> +    // Next output order index (in encode order).
> +    int64_t         output_order;
> +
> +    // Timestamp handling.
> +    int64_t         first_pts;
> +    int64_t         dts_pts_diff;
> +    int64_t         ts_ring[MAX_REORDER_DELAY * 3 +
> +                            MAX_ASYNC_DEPTH];
> +
> +    // Frame type decision.
> +    int gop_size;
> +    int closed_gop;
> +    int gop_per_idr;
> +    int p_per_i;
> +    int max_b_depth;
> +    int b_per_p;
> +    int force_idr;
> +    int idr_counter;
> +    int gop_counter;
> +    int end_of_stream;
> +    int p_to_gpb;
> +
> +    // Whether the driver supports ROI at all.
> +    int             roi_allowed;
> +
> +    // The encoder does not support cropping information, so warn about
> +    // it the first time we encounter any nonzero crop fields.
> +    int             crop_warned;
> +    // If the driver does not support ROI then warn the first time we
> +    // encounter a frame with ROI side data.
> +    int             roi_warned;
> +
> +    AVFrame         *frame;

Could you add more comments for the members in the base structures?

Thanks
Haihao

> +
> +    // Whether the HW supports sync buffer function.
> +    // If supported, encode_fifo/async_depth will be used together.
> +    // Used for output buffer synchronization.
> +    int             async_encode;
> +
> +    // Store buffered pic
> +    AVFifo          *encode_fifo;
> +    // Max number of frame buffered in encoder.
> +    int             async_depth;
> +
> +    /** Tail data of a pic, now only used for av1 repeat frame header. */
> +    AVPacket        *tail_pkt;
> +} HWBaseEncodeContext;
> +
> +#define HW_BASE_ENCODE_COMMON_OPTIONS \
> +    { "idr_interval", \
> +      "Distance (in I-frames) between key frames", \
> +      OFFSET(common.base.idr_interval), AV_OPT_TYPE_INT, \
> +      { .i64 = 0 }, 0, INT_MAX, FLAGS }, \
> +    { "b_depth", \
> +      "Maximum B-frame reference depth", \
> +      OFFSET(common.base.desired_b_depth), AV_OPT_TYPE_INT, \
> +      { .i64 = 1 }, 1, INT_MAX, FLAGS }, \
> +    { "async_depth", "Maximum processing parallelism. " \
> +      "Increase this to improve single channel performance.", \
> +      OFFSET(common.base.async_depth), AV_OPT_TYPE_INT, \
> +      { .i64 = 2 }, 1, MAX_ASYNC_DEPTH, FLAGS }
> +
> +#endif /* AVCODEC_HW_BASE_ENCODE_H */
> diff --git a/libavcodec/vaapi_encode.c b/libavcodec/vaapi_encode.c
> index bd29dbf0b4..4350960248 100644
> --- a/libavcodec/vaapi_encode.c
> +++ b/libavcodec/vaapi_encode.c
> @@ -37,8 +37,6 @@ const AVCodecHWConfigInternal *const ff_vaapi_encode_hw_configs[] = {
>      NULL,
>  };
>  
> -static const char * const picture_type_name[] = { "IDR", "I", "P", "B" };
> -
>  static int vaapi_encode_make_packed_header(AVCodecContext *avctx,
>                                             VAAPIEncodePicture *pic,
>                                             int type, char *data, size_t bit_len)
> @@ -139,22 +137,24 @@ static int vaapi_encode_make_misc_param_buffer(AVCodecContext *avctx,
>  static int vaapi_encode_wait(AVCodecContext *avctx,
>                               VAAPIEncodePicture *pic)
>  {
> +    HWBaseEncodeContext *base_ctx = avctx->priv_data;
>      VAAPIEncodeContext *ctx = avctx->priv_data;
> +    HWBaseEncodePicture *base_pic = (HWBaseEncodePicture*)pic;
>      VAStatus vas;
>  
> -    av_assert0(pic->encode_issued);
> +    av_assert0(base_pic->encode_issued);
>  
> -    if (pic->encode_complete) {
> +    if (base_pic->encode_complete) {
>          // Already waited for this picture.
>          return 0;
>      }
>  
>      av_log(avctx, AV_LOG_DEBUG, "Sync to pic %"PRId64"/%"PRId64" "
> -           "(input surface %#x).\n", pic->display_order,
> -           pic->encode_order, pic->input_surface);
> +           "(input surface %#x).\n", base_pic->display_order,
> +           base_pic->encode_order, pic->input_surface);
>  
>  #if VA_CHECK_VERSION(1, 9, 0)
> -    if (ctx->has_sync_buffer_func) {
> +    if (base_ctx->async_encode) {
>          vas = vaSyncBuffer(ctx->hwctx->display,
>                             pic->output_buffer,
>                             VA_TIMEOUT_INFINITE);
> @@ -175,9 +175,9 @@ static int vaapi_encode_wait(AVCodecContext *avctx,
>      }
>  
>      // Input is definitely finished with now.
> -    av_frame_free(&pic->input_image);
> +    av_frame_free(&base_pic->input_image);
>  
> -    pic->encode_complete = 1;
> +    base_pic->encode_complete = 1;
>      return 0;
>  }
>  
> @@ -264,9 +264,11 @@ static int vaapi_encode_make_tile_slice(AVCodecContext *avctx,
>  }
>  
>  static int vaapi_encode_issue(AVCodecContext *avctx,
> -                              VAAPIEncodePicture *pic)
> +                              HWBaseEncodePicture *base_pic)
>  {
> -    VAAPIEncodeContext *ctx = avctx->priv_data;
> +    HWBaseEncodeContext *base_ctx = avctx->priv_data;
> +    VAAPIEncodeContext       *ctx = avctx->priv_data;
> +    VAAPIEncodePicture *pic = (VAAPIEncodePicture*)base_pic;
>      VAAPIEncodeSlice *slice;
>      VAStatus vas;
>      int err, i;
> @@ -275,52 +277,52 @@ static int vaapi_encode_issue(AVCodecContext *avctx,
>      av_unused AVFrameSideData *sd;
>  
>      av_log(avctx, AV_LOG_DEBUG, "Issuing encode for pic %"PRId64"/%"PRId64" "
> -           "as type %s.\n", pic->display_order, pic->encode_order,
> -           picture_type_name[pic->type]);
> -    if (pic->nb_refs[0] == 0 && pic->nb_refs[1] == 0) {
> +           "as type %s.\n", base_pic->display_order, base_pic->encode_order,
> +           ff_hw_base_encode_get_pictype_name(base_pic->type));
> +    if (base_pic->nb_refs[0] == 0 && base_pic->nb_refs[1] == 0) {
>          av_log(avctx, AV_LOG_DEBUG, "No reference pictures.\n");
>      } else {
>          av_log(avctx, AV_LOG_DEBUG, "L0 refers to");
> -        for (i = 0; i < pic->nb_refs[0]; i++) {
> +        for (i = 0; i < base_pic->nb_refs[0]; i++) {
>              av_log(avctx, AV_LOG_DEBUG, " %"PRId64"/%"PRId64,
> -                   pic->refs[0][i]->display_order, pic->refs[0][i]->encode_order);
> +                   base_pic->refs[0][i]->display_order, base_pic->refs[0][i]->encode_order);
>          }
>          av_log(avctx, AV_LOG_DEBUG, ".\n");
>  
> -        if (pic->nb_refs[1]) {
> +        if (base_pic->nb_refs[1]) {
>              av_log(avctx, AV_LOG_DEBUG, "L1 refers to");
> -            for (i = 0; i < pic->nb_refs[1]; i++) {
> +            for (i = 0; i < base_pic->nb_refs[1]; i++) {
>                  av_log(avctx, AV_LOG_DEBUG, " %"PRId64"/%"PRId64,
> -                       pic->refs[1][i]->display_order, pic->refs[1][i]->encode_order);
> +                       base_pic->refs[1][i]->display_order, base_pic->refs[1][i]->encode_order);
>              }
>              av_log(avctx, AV_LOG_DEBUG, ".\n");
>          }
>      }
>  
> -    av_assert0(!pic->encode_issued);
> -    for (i = 0; i < pic->nb_refs[0]; i++) {
> -        av_assert0(pic->refs[0][i]);
> -        av_assert0(pic->refs[0][i]->encode_issued);
> +    av_assert0(!base_pic->encode_issued);
> +    for (i = 0; i < base_pic->nb_refs[0]; i++) {
> +        av_assert0(base_pic->refs[0][i]);
> +        av_assert0(base_pic->refs[0][i]->encode_issued);
>      }
> -    for (i = 0; i < pic->nb_refs[1]; i++) {
> -        av_assert0(pic->refs[1][i]);
> -        av_assert0(pic->refs[1][i]->encode_issued);
> +    for (i = 0; i < base_pic->nb_refs[1]; i++) {
> +        av_assert0(base_pic->refs[1][i]);
> +        av_assert0(base_pic->refs[1][i]->encode_issued);
>      }
>  
>      av_log(avctx, AV_LOG_DEBUG, "Input surface is %#x.\n", pic-
> >input_surface);
>  
> -    pic->recon_image = av_frame_alloc();
> -    if (!pic->recon_image) {
> +    base_pic->recon_image = av_frame_alloc();
> +    if (!base_pic->recon_image) {
>          err = AVERROR(ENOMEM);
>          goto fail;
>      }
>  
> -    err = av_hwframe_get_buffer(ctx->recon_frames_ref, pic->recon_image, 0);
> +    err = av_hwframe_get_buffer(base_ctx->recon_frames_ref, base_pic->recon_image, 0);
>      if (err < 0) {
>          err = AVERROR(ENOMEM);
>          goto fail;
>      }
> -    pic->recon_surface = (VASurfaceID)(uintptr_t)pic->recon_image->data[3];
> +    pic->recon_surface = (VASurfaceID)(uintptr_t)base_pic->recon_image->data[3];
>      av_log(avctx, AV_LOG_DEBUG, "Recon surface is %#x.\n", pic-
> >recon_surface);
>  
>      pic->output_buffer_ref = ff_refstruct_pool_get(ctx->output_buffer_pool);
> @@ -344,7 +346,7 @@ static int vaapi_encode_issue(AVCodecContext *avctx,
>  
>      pic->nb_param_buffers = 0;
>  
> -    if (pic->type == PICTURE_TYPE_IDR && ctx->codec->init_sequence_params) {
> +    if (base_pic->type == PICTURE_TYPE_IDR && ctx->codec->init_sequence_params) {
>          err = vaapi_encode_make_param_buffer(avctx, pic,
>                                               VAEncSequenceParameterBufferType,
>                                               ctx->codec_sequence_params,
> @@ -353,7 +355,7 @@ static int vaapi_encode_issue(AVCodecContext *avctx,
>              goto fail;
>      }
>  
> -    if (pic->type == PICTURE_TYPE_IDR) {
> +    if (base_pic->type == PICTURE_TYPE_IDR) {
>          for (i = 0; i < ctx->nb_global_params; i++) {
>              err = vaapi_encode_make_misc_param_buffer(avctx, pic,
>                                                        ctx->global_params_type[i],
> @@ -390,7 +392,7 @@ static int vaapi_encode_issue(AVCodecContext *avctx,
>      }
>  #endif
>  
> -    if (pic->type == PICTURE_TYPE_IDR) {
> +    if (base_pic->type == PICTURE_TYPE_IDR) {
>          if (ctx->va_packed_headers & VA_ENC_PACKED_HEADER_SEQUENCE &&
>              ctx->codec->write_sequence_header) {
>              bit_len = 8 * sizeof(data);
> @@ -530,9 +532,9 @@ static int vaapi_encode_issue(AVCodecContext *avctx,
>      }
>  
>  #if VA_CHECK_VERSION(1, 0, 0)
> -    sd = av_frame_get_side_data(pic->input_image,
> +    sd = av_frame_get_side_data(base_pic->input_image,
>                                  AV_FRAME_DATA_REGIONS_OF_INTEREST);
> -    if (sd && ctx->roi_allowed) {
> +    if (sd && base_ctx->roi_allowed) {
>          const AVRegionOfInterest *roi;
>          uint32_t roi_size;
>          VAEncMiscParameterBufferROI param_roi;
> @@ -543,11 +545,11 @@ static int vaapi_encode_issue(AVCodecContext *avctx,
>          av_assert0(roi_size && sd->size % roi_size == 0);
>          nb_roi = sd->size / roi_size;
>          if (nb_roi > ctx->roi_max_regions) {
> -            if (!ctx->roi_warned) {
> +            if (!base_ctx->roi_warned) {
>                  av_log(avctx, AV_LOG_WARNING, "More ROIs set than "
>                         "supported by driver (%d > %d).\n",
>                         nb_roi, ctx->roi_max_regions);
> -                ctx->roi_warned = 1;
> +                base_ctx->roi_warned = 1;
>              }
>              nb_roi = ctx->roi_max_regions;
>          }
> @@ -640,7 +642,7 @@ static int vaapi_encode_issue(AVCodecContext *avctx,
>          }
>      }
>  
> -    pic->encode_issued = 1;
> +    base_pic->encode_issued = 1;
>  
>      return 0;
>  
> @@ -658,17 +660,18 @@ fail_at_end:
>      av_freep(&pic->param_buffers);
>      av_freep(&pic->slices);
>      av_freep(&pic->roi);
> -    av_frame_free(&pic->recon_image);
> +    av_frame_free(&base_pic->recon_image);
>      ff_refstruct_unref(&pic->output_buffer_ref);
>      pic->output_buffer = VA_INVALID_ID;
>      return err;
>  }
>  
>  static int vaapi_encode_set_output_property(AVCodecContext *avctx,
> -                                            VAAPIEncodePicture *pic,
> +                                            HWBaseEncodePicture *pic,
>                                              AVPacket *pkt)
>  {
> -    VAAPIEncodeContext *ctx = avctx->priv_data;
> +    HWBaseEncodeContext *base_ctx = avctx->priv_data;
> +    VAAPIEncodeContext       *ctx = avctx->priv_data;
>  
>      if (pic->type == PICTURE_TYPE_IDR)
>          pkt->flags |= AV_PKT_FLAG_KEY;
> @@ -689,16 +692,16 @@ static int vaapi_encode_set_output_property(AVCodecContext *avctx,
>          return 0;
>      }
>  
> -    if (ctx->output_delay == 0) {
> +    if (base_ctx->output_delay == 0) {
>          pkt->dts = pkt->pts;
> -    } else if (pic->encode_order < ctx->decode_delay) {
> -        if (ctx->ts_ring[pic->encode_order] < INT64_MIN + ctx->dts_pts_diff)
> +    } else if (pic->encode_order < base_ctx->decode_delay) {
> +        if (base_ctx->ts_ring[pic->encode_order] < INT64_MIN + base_ctx->dts_pts_diff)
>              pkt->dts = INT64_MIN;
>          else
> -            pkt->dts = ctx->ts_ring[pic->encode_order] - ctx->dts_pts_diff;
> +            pkt->dts = base_ctx->ts_ring[pic->encode_order] - base_ctx->dts_pts_diff;
>      } else {
> -        pkt->dts = ctx->ts_ring[(pic->encode_order - ctx->decode_delay) %
> -                                (3 * ctx->output_delay + ctx->async_depth)];
> +        pkt->dts = base_ctx->ts_ring[(pic->encode_order - base_ctx->decode_delay) %
> +                                     (3 * base_ctx->output_delay + base_ctx->async_depth)];
>      }
>  
>      return 0;
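
[Editor note: the dts computation in this hunk stores pts values in a ring sized 3*output_delay + async_depth and reads the entry decode_delay pictures back in encode order. A simplified standalone model of that indexing — the helper name and parameters here are illustrative, not the patch's API:]

```c
#include <stdint.h>

/* Simplified model of the ts_ring dts lookup above: the ring holds
 * timestamps for in-flight pictures, and a picture's dts is the ring
 * entry decode_delay pictures earlier in encode order. */
static int64_t ring_dts(const int64_t *ts_ring, int64_t encode_order,
                        int64_t decode_delay, int64_t output_delay,
                        int64_t async_depth)
{
    return ts_ring[(encode_order - decode_delay) %
                   (3 * output_delay + async_depth)];
}
```
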
> @@ -817,9 +820,11 @@ end:
>  }
>  
>  static int vaapi_encode_output(AVCodecContext *avctx,
> -                               VAAPIEncodePicture *pic, AVPacket *pkt)
> +                               HWBaseEncodePicture *base_pic, AVPacket *pkt)
>  {
> -    VAAPIEncodeContext *ctx = avctx->priv_data;
> +    HWBaseEncodeContext *base_ctx = avctx->priv_data;
> +    VAAPIEncodeContext       *ctx = avctx->priv_data;
> +    VAAPIEncodePicture       *pic = (VAAPIEncodePicture*)base_pic;
>      AVPacket *pkt_ptr = pkt;
>      int err;
>  
> @@ -832,17 +837,17 @@ static int vaapi_encode_output(AVCodecContext *avctx,
>          ctx->coded_buffer_ref = ff_refstruct_ref(pic->output_buffer_ref);
>  
>          if (pic->tail_size) {
> -            if (ctx->tail_pkt->size) {
> +            if (base_ctx->tail_pkt->size) {
>                  err = AVERROR_BUG;
>                  goto end;
>              }
>  
> -            err = ff_get_encode_buffer(avctx, ctx->tail_pkt, pic->tail_size, 0);
> +            err = ff_get_encode_buffer(avctx, base_ctx->tail_pkt, pic->tail_size, 0);
>              if (err < 0)
>                  goto end;
>  
> -            memcpy(ctx->tail_pkt->data, pic->tail_data, pic->tail_size);
> -            pkt_ptr = ctx->tail_pkt;
> +            memcpy(base_ctx->tail_pkt->data, pic->tail_data, pic->tail_size);
> +            pkt_ptr = base_ctx->tail_pkt;
>          }
>      } else {
>          err = vaapi_encode_get_coded_data(avctx, pic, pkt);
> @@ -851,9 +856,9 @@ static int vaapi_encode_output(AVCodecContext *avctx,
>      }
>  
>      av_log(avctx, AV_LOG_DEBUG, "Output read for pic %"PRId64"/%"PRId64".\n",
> -           pic->display_order, pic->encode_order);
> +           base_pic->display_order, base_pic->encode_order);
>  
> -    vaapi_encode_set_output_property(avctx, pic, pkt_ptr);
> +    vaapi_encode_set_output_property(avctx, base_pic, pkt_ptr);
>  
>  end:
>      ff_refstruct_unref(&pic->output_buffer_ref);
> @@ -864,12 +869,13 @@ end:
>  static int vaapi_encode_discard(AVCodecContext *avctx,
>                                  VAAPIEncodePicture *pic)
>  {
> +    HWBaseEncodePicture *base_pic = (HWBaseEncodePicture*)pic;
>      vaapi_encode_wait(avctx, pic);
>  
>      if (pic->output_buffer_ref) {
>          av_log(avctx, AV_LOG_DEBUG, "Discard output for pic "
>                 "%"PRId64"/%"PRId64".\n",
> -               pic->display_order, pic->encode_order);
> +               base_pic->display_order, base_pic->encode_order);
>  
>          ff_refstruct_unref(&pic->output_buffer_ref);
>          pic->output_buffer = VA_INVALID_ID;
> @@ -878,8 +884,8 @@ static int vaapi_encode_discard(AVCodecContext *avctx,
>      return 0;
>  }
>  
> -static VAAPIEncodePicture *vaapi_encode_alloc(AVCodecContext *avctx,
> -                                              const AVFrame *frame)
> +static HWBaseEncodePicture *vaapi_encode_alloc(AVCodecContext *avctx,
> +                                               const AVFrame *frame)
>  {
>      VAAPIEncodeContext *ctx = avctx->priv_data;
>      VAAPIEncodePicture *pic;
> @@ -889,8 +895,8 @@ static VAAPIEncodePicture *vaapi_encode_alloc(AVCodecContext *avctx,
>          return NULL;
>  
>      if (ctx->codec->picture_priv_data_size > 0) {
> -        pic->priv_data = av_mallocz(ctx->codec->picture_priv_data_size);
> -        if (!pic->priv_data) {
> +        pic->base.priv_data = av_mallocz(ctx->codec->picture_priv_data_size);
> +        if (!pic->base.priv_data) {
>              av_freep(&pic);
>              return NULL;
>          }
> @@ -900,15 +906,16 @@ static VAAPIEncodePicture *vaapi_encode_alloc(AVCodecContext *avctx,
>      pic->recon_surface = VA_INVALID_ID;
>      pic->output_buffer = VA_INVALID_ID;
>  
> -    return pic;
> +    return (HWBaseEncodePicture*)pic;
>  }
>  
>  static int vaapi_encode_free(AVCodecContext *avctx,
> -                             VAAPIEncodePicture *pic)
> +                             HWBaseEncodePicture *base_pic)
>  {
> +    VAAPIEncodePicture *pic = (VAAPIEncodePicture*)base_pic;
>      int i;
>  
> -    if (pic->encode_issued)
> +    if (base_pic->encode_issued)
>          vaapi_encode_discard(avctx, pic);
>  
>      if (pic->slices) {
> @@ -916,17 +923,17 @@ static int vaapi_encode_free(AVCodecContext *avctx,
>              av_freep(&pic->slices[i].codec_slice_params);
>      }
>  
> -    av_frame_free(&pic->input_image);
> -    av_frame_free(&pic->recon_image);
> +    av_frame_free(&base_pic->input_image);
> +    av_frame_free(&base_pic->recon_image);
>  
> -    av_buffer_unref(&pic->opaque_ref);
> +    av_buffer_unref(&base_pic->opaque_ref);
>  
>      av_freep(&pic->param_buffers);
>      av_freep(&pic->slices);
>      // Output buffer should already be destroyed.
>      av_assert0(pic->output_buffer == VA_INVALID_ID);
>  
> -    av_freep(&pic->priv_data);
> +    av_freep(&base_pic->priv_data);
>      av_freep(&pic->codec_picture_params);
>      av_freep(&pic->roi);
>  
> @@ -936,8 +943,8 @@ static int vaapi_encode_free(AVCodecContext *avctx,
>  }
>  
>  static void vaapi_encode_add_ref(AVCodecContext *avctx,
> -                                 VAAPIEncodePicture *pic,
> -                                 VAAPIEncodePicture *target,
> +                                 HWBaseEncodePicture *pic,
> +                                 HWBaseEncodePicture *target,
>                                   int is_ref, int in_dpb, int prev)
>  {
>      int refs = 0;
> @@ -970,7 +977,7 @@ static void vaapi_encode_add_ref(AVCodecContext *avctx,
>  }
>  
>  static void vaapi_encode_remove_refs(AVCodecContext *avctx,
> -                                     VAAPIEncodePicture *pic,
> +                                     HWBaseEncodePicture *pic,
>                                       int level)
>  {
>      int i;
> @@ -1006,14 +1013,14 @@ static void vaapi_encode_remove_refs(AVCodecContext *avctx,
>  }
>  
>  static void vaapi_encode_set_b_pictures(AVCodecContext *avctx,
> -                                        VAAPIEncodePicture *start,
> -                                        VAAPIEncodePicture *end,
> -                                        VAAPIEncodePicture *prev,
> +                                        HWBaseEncodePicture *start,
> +                                        HWBaseEncodePicture *end,
> +                                        HWBaseEncodePicture *prev,
>                                          int current_depth,
> -                                        VAAPIEncodePicture **last)
> +                                        HWBaseEncodePicture **last)
>  {
> -    VAAPIEncodeContext *ctx = avctx->priv_data;
> -    VAAPIEncodePicture *pic, *next, *ref;
> +    HWBaseEncodeContext *ctx = avctx->priv_data;
> +    HWBaseEncodePicture *pic, *next, *ref;
>      int i, len;
>  
>      av_assert0(start && end && start != end && start->next != end);
> @@ -1070,9 +1077,9 @@ static void vaapi_encode_set_b_pictures(AVCodecContext *avctx,
>  }
>  
>  static void vaapi_encode_add_next_prev(AVCodecContext *avctx,
> -                                       VAAPIEncodePicture *pic)
> +                                       HWBaseEncodePicture *pic)
>  {
> -    VAAPIEncodeContext *ctx = avctx->priv_data;
> +    HWBaseEncodeContext *ctx = avctx->priv_data;
>      int i;
>  
>      if (!pic)
> @@ -1103,10 +1110,10 @@ static void vaapi_encode_add_next_prev(AVCodecContext *avctx,
>  }
>  
>  static int vaapi_encode_pick_next(AVCodecContext *avctx,
> -                                  VAAPIEncodePicture **pic_out)
> +                                  HWBaseEncodePicture **pic_out)
>  {
> -    VAAPIEncodeContext *ctx = avctx->priv_data;
> -    VAAPIEncodePicture *pic = NULL, *prev = NULL, *next, *start;
> +    HWBaseEncodeContext *ctx = avctx->priv_data;
> +    HWBaseEncodePicture *pic = NULL, *prev = NULL, *next, *start;
>      int i, b_counter, closed_gop_end;
>  
>      // If there are any B-frames already queued, the next one to encode
> @@ -1256,8 +1263,8 @@ static int vaapi_encode_pick_next(AVCodecContext *avctx,
>  
>  static int vaapi_encode_clear_old(AVCodecContext *avctx)
>  {
> -    VAAPIEncodeContext *ctx = avctx->priv_data;
> -    VAAPIEncodePicture *pic, *prev, *next;
> +    HWBaseEncodeContext *ctx = avctx->priv_data;
> +    HWBaseEncodePicture *pic, *prev, *next;
>  
>      av_assert0(ctx->pic_start);
>  
> @@ -1295,7 +1302,7 @@ static int vaapi_encode_clear_old(AVCodecContext *avctx)
>  static int vaapi_encode_check_frame(AVCodecContext *avctx,
>                                      const AVFrame *frame)
>  {
> -    VAAPIEncodeContext *ctx = avctx->priv_data;
> +    HWBaseEncodeContext *ctx = avctx->priv_data;
>  
>      if ((frame->crop_top  || frame->crop_bottom ||
>           frame->crop_left || frame->crop_right) && !ctx->crop_warned) {
> @@ -1320,8 +1327,8 @@ static int vaapi_encode_check_frame(AVCodecContext *avctx,
>  
>  static int vaapi_encode_send_frame(AVCodecContext *avctx, AVFrame *frame)
>  {
> -    VAAPIEncodeContext *ctx = avctx->priv_data;
> -    VAAPIEncodePicture *pic;
> +    HWBaseEncodeContext *ctx = avctx->priv_data;
> +    HWBaseEncodePicture *pic;
>      int err;
>  
>      if (frame) {
> @@ -1395,15 +1402,15 @@ fail:
>  
>  int ff_vaapi_encode_receive_packet(AVCodecContext *avctx, AVPacket *pkt)
>  {
> -    VAAPIEncodeContext *ctx = avctx->priv_data;
> -    VAAPIEncodePicture *pic = NULL;
> +    HWBaseEncodeContext *ctx = avctx->priv_data;
> +    HWBaseEncodePicture *pic = NULL;
>      AVFrame *frame = ctx->frame;
>      int err;
>  
>  start:
>      /** if no B frame before repeat P frame, sent repeat P frame out. */
>      if (ctx->tail_pkt->size) {
> -        for (VAAPIEncodePicture *tmp = ctx->pic_start; tmp; tmp = tmp->next) {
> +        for (HWBaseEncodePicture *tmp = ctx->pic_start; tmp; tmp = tmp->next) {
>              if (tmp->type == PICTURE_TYPE_B && tmp->pts < ctx->tail_pkt->pts)
>                  break;
>              else if (!tmp->next) {
> @@ -1431,7 +1438,7 @@ start:
>              return AVERROR(EAGAIN);
>      }
>  
> -    if (ctx->has_sync_buffer_func) {
> +    if (ctx->async_encode) {
>          if (av_fifo_can_write(ctx->encode_fifo)) {
>              err = vaapi_encode_pick_next(avctx, &pic);
>              if (!err) {
> @@ -1551,9 +1558,10 @@ static const VAEntrypoint vaapi_encode_entrypoints_low_power[] = {
>  
>  static av_cold int vaapi_encode_profile_entrypoint(AVCodecContext *avctx)
>  {
> -    VAAPIEncodeContext      *ctx = avctx->priv_data;
> -    VAProfile    *va_profiles    = NULL;
> -    VAEntrypoint *va_entrypoints = NULL;
> +    HWBaseEncodeContext *base_ctx = avctx->priv_data;
> +    VAAPIEncodeContext       *ctx = avctx->priv_data;
> +    VAProfile     *va_profiles    = NULL;
> +    VAEntrypoint  *va_entrypoints = NULL;
>      VAStatus vas;
>      const VAEntrypoint *usable_entrypoints;
>      const VAAPIEncodeProfile *profile;
> @@ -1576,10 +1584,10 @@ static av_cold int vaapi_encode_profile_entrypoint(AVCodecContext *avctx)
>          usable_entrypoints = vaapi_encode_entrypoints_normal;
>      }
>  
> -    desc = av_pix_fmt_desc_get(ctx->input_frames->sw_format);
> +    desc = av_pix_fmt_desc_get(base_ctx->input_frames->sw_format);
>      if (!desc) {
>          av_log(avctx, AV_LOG_ERROR, "Invalid input pixfmt (%d).\n",
> -               ctx->input_frames->sw_format);
> +               base_ctx->input_frames->sw_format);
>          return AVERROR(EINVAL);
>      }
>      depth = desc->comp[0].depth;
> @@ -1772,7 +1780,8 @@ static const VAAPIEncodeRCMode vaapi_encode_rc_modes[] = {
>  
>  static av_cold int vaapi_encode_init_rate_control(AVCodecContext *avctx)
>  {
> -    VAAPIEncodeContext *ctx = avctx->priv_data;
> +    HWBaseEncodeContext *base_ctx = avctx->priv_data;
> +    VAAPIEncodeContext       *ctx = avctx->priv_data;
>      uint32_t supported_va_rc_modes;
>      const VAAPIEncodeRCMode *rc_mode;
>      int64_t rc_bits_per_second;
> @@ -1855,10 +1864,10 @@ static av_cold int vaapi_encode_init_rate_control(AVCodecContext *avctx)
>          } \
>      } while (0)
>  
> -    if (ctx->explicit_rc_mode)
> -        TRY_RC_MODE(ctx->explicit_rc_mode, 1);
> +    if (base_ctx->explicit_rc_mode)
> +        TRY_RC_MODE(base_ctx->explicit_rc_mode, 1);
>  
> -    if (ctx->explicit_qp)
> +    if (base_ctx->explicit_qp)
>          TRY_RC_MODE(RC_MODE_CQP, 1);
>  
>      if (ctx->codec->flags & FLAG_CONSTANT_QUALITY_ONLY)
> @@ -1953,8 +1962,8 @@ rc_mode_found:
>      }
>  
>      if (rc_mode->quality) {
> -        if (ctx->explicit_qp) {
> -            rc_quality = ctx->explicit_qp;
> +        if (base_ctx->explicit_qp) {
> +            rc_quality = base_ctx->explicit_qp;
>          } else if (avctx->global_quality > 0) {
>              rc_quality = avctx->global_quality;
>          } else {
> @@ -2010,10 +2019,10 @@ rc_mode_found:
>          return AVERROR(EINVAL);
>      }
>  
> -    ctx->rc_mode     = rc_mode;
> -    ctx->rc_quality  = rc_quality;
> -    ctx->va_rc_mode  = rc_mode->va_mode;
> -    ctx->va_bit_rate = rc_bits_per_second;
> +    ctx->rc_mode          = rc_mode;
> +    base_ctx->rc_quality  = rc_quality;
> +    ctx->va_rc_mode       = rc_mode->va_mode;
> +    ctx->va_bit_rate      = rc_bits_per_second;
>  
>      av_log(avctx, AV_LOG_VERBOSE, "RC mode: %s.\n", rc_mode->name);
>      if (rc_attr.value == VA_ATTRIB_NOT_SUPPORTED) {
> @@ -2159,7 +2168,8 @@ static av_cold int vaapi_encode_init_max_frame_size(AVCodecContext *avctx)
>  
>  static av_cold int vaapi_encode_init_gop_structure(AVCodecContext *avctx)
>  {
> -    VAAPIEncodeContext *ctx = avctx->priv_data;
> +    HWBaseEncodeContext *base_ctx = avctx->priv_data;
> +    VAAPIEncodeContext       *ctx = avctx->priv_data;
>      VAStatus vas;
>      VAConfigAttrib attr = { VAConfigAttribEncMaxRefFrames };
>      uint32_t ref_l0, ref_l1;
> @@ -2182,7 +2192,7 @@ static av_cold int vaapi_encode_init_gop_structure(AVCodecContext *avctx)
>          ref_l1 = attr.value >> 16 & 0xffff;
>      }
>  
> -    ctx->p_to_gpb = 0;
> +    base_ctx->p_to_gpb = 0;
>      prediction_pre_only = 0;
>  
>  #if VA_CHECK_VERSION(1, 9, 0)
> @@ -2218,7 +2228,7 @@ static av_cold int vaapi_encode_init_gop_structure(AVCodecContext *avctx)
>  
>              if (attr.value & VA_PREDICTION_DIRECTION_BI_NOT_EMPTY) {
>                  if (ref_l0 > 0 && ref_l1 > 0) {
> -                    ctx->p_to_gpb = 1;
> +                    base_ctx->p_to_gpb = 1;
> +                    av_log(avctx, AV_LOG_VERBOSE, "Driver does not support P-frames, "
>                             "replacing them with B-frames.\n");
>                  }
> @@ -2230,7 +2240,7 @@ static av_cold int vaapi_encode_init_gop_structure(AVCodecContext *avctx)
>      if (ctx->codec->flags & FLAG_INTRA_ONLY ||
>          avctx->gop_size <= 1) {
>          av_log(avctx, AV_LOG_VERBOSE, "Using intra frames only.\n");
> -        ctx->gop_size = 1;
> +        base_ctx->gop_size = 1;
>      } else if (ref_l0 < 1) {
>          av_log(avctx, AV_LOG_ERROR, "Driver does not support any "
>                 "reference frames.\n");
> @@ -2238,41 +2248,41 @@ static av_cold int vaapi_encode_init_gop_structure(AVCodecContext *avctx)
>      } else if (!(ctx->codec->flags & FLAG_B_PICTURES) ||
>                 ref_l1 < 1 || avctx->max_b_frames < 1 ||
>                 prediction_pre_only) {
> -        if (ctx->p_to_gpb)
> +        if (base_ctx->p_to_gpb)
>             av_log(avctx, AV_LOG_VERBOSE, "Using intra and B-frames "
>                    "(supported references: %d / %d).\n",
>                    ref_l0, ref_l1);
>          else
>              av_log(avctx, AV_LOG_VERBOSE, "Using intra and P-frames "
>                     "(supported references: %d / %d).\n", ref_l0, ref_l1);
> -        ctx->gop_size = avctx->gop_size;
> -        ctx->p_per_i  = INT_MAX;
> -        ctx->b_per_p  = 0;
> +        base_ctx->gop_size = avctx->gop_size;
> +        base_ctx->p_per_i  = INT_MAX;
> +        base_ctx->b_per_p  = 0;
>      } else {
> -       if (ctx->p_to_gpb)
> +       if (base_ctx->p_to_gpb)
>             av_log(avctx, AV_LOG_VERBOSE, "Using intra and B-frames "
>                    "(supported references: %d / %d).\n",
>                    ref_l0, ref_l1);
>         else
>             av_log(avctx, AV_LOG_VERBOSE, "Using intra, P- and B-frames "
>                    "(supported references: %d / %d).\n", ref_l0, ref_l1);
> -        ctx->gop_size = avctx->gop_size;
> -        ctx->p_per_i  = INT_MAX;
> -        ctx->b_per_p  = avctx->max_b_frames;
> +        base_ctx->gop_size = avctx->gop_size;
> +        base_ctx->p_per_i  = INT_MAX;
> +        base_ctx->b_per_p  = avctx->max_b_frames;
>          if (ctx->codec->flags & FLAG_B_PICTURE_REFERENCES) {
> -            ctx->max_b_depth = FFMIN(ctx->desired_b_depth,
> -                                     av_log2(ctx->b_per_p) + 1);
> +            base_ctx->max_b_depth = FFMIN(base_ctx->desired_b_depth,
> +                                          av_log2(base_ctx->b_per_p) + 1);
>          } else {
> -            ctx->max_b_depth = 1;
> +            base_ctx->max_b_depth = 1;
>          }
>      }
>  
>      if (ctx->codec->flags & FLAG_NON_IDR_KEY_PICTURES) {
> -        ctx->closed_gop  = !!(avctx->flags & AV_CODEC_FLAG_CLOSED_GOP);
> -        ctx->gop_per_idr = ctx->idr_interval + 1;
> +        base_ctx->closed_gop  = !!(avctx->flags & AV_CODEC_FLAG_CLOSED_GOP);
> +        base_ctx->gop_per_idr = base_ctx->idr_interval + 1;
>      } else {
> -        ctx->closed_gop  = 1;
> -        ctx->gop_per_idr = 1;
> +        base_ctx->closed_gop  = 1;
> +        base_ctx->gop_per_idr = 1;
>      }
>  
>      return 0;
> @@ -2386,6 +2396,7 @@ static av_cold int vaapi_encode_init_tile_slice_structure(AVCodecContext *avctx,
>  
>  static av_cold int vaapi_encode_init_slice_structure(AVCodecContext *avctx)
>  {
> +    HWBaseEncodeContext *base_ctx = avctx->priv_data;
>      VAAPIEncodeContext *ctx = avctx->priv_data;
>      VAConfigAttrib attr[3] = { { VAConfigAttribEncMaxSlices },
>                                 { VAConfigAttribEncSliceStructure },
> @@ -2405,12 +2416,12 @@ static av_cold int vaapi_encode_init_slice_structure(AVCodecContext *avctx)
>          return 0;
>      }
>  
> -    av_assert0(ctx->slice_block_height > 0 && ctx->slice_block_width > 0);
> +    av_assert0(base_ctx->slice_block_height > 0 && base_ctx->slice_block_width > 0);
>  
> -    ctx->slice_block_rows = (avctx->height + ctx->slice_block_height - 1) /
> -                             ctx->slice_block_height;
> -    ctx->slice_block_cols = (avctx->width  + ctx->slice_block_width  - 1) /
> -                             ctx->slice_block_width;
> +    ctx->slice_block_rows = (avctx->height + base_ctx->slice_block_height - 1) /
> +                             base_ctx->slice_block_height;
> +    ctx->slice_block_cols = (avctx->width  + base_ctx->slice_block_width  - 1) /
> +                             base_ctx->slice_block_width;
>  
>      if (avctx->slices <= 1 && !ctx->tile_rows && !ctx->tile_cols) {
>          ctx->nb_slices  = 1;
> @@ -2585,7 +2596,8 @@ static av_cold int vaapi_encode_init_quality(AVCodecContext *avctx)
>  static av_cold int vaapi_encode_init_roi(AVCodecContext *avctx)
>  {
>  #if VA_CHECK_VERSION(1, 0, 0)
> -    VAAPIEncodeContext *ctx = avctx->priv_data;
> +    HWBaseEncodeContext *base_ctx = avctx->priv_data;
> +    VAAPIEncodeContext       *ctx = avctx->priv_data;
>      VAStatus vas;
>      VAConfigAttrib attr = { VAConfigAttribEncROI };
>  
> @@ -2600,14 +2612,14 @@ static av_cold int vaapi_encode_init_roi(AVCodecContext *avctx)
>      }
>  
>      if (attr.value == VA_ATTRIB_NOT_SUPPORTED) {
> -        ctx->roi_allowed = 0;
> +        base_ctx->roi_allowed = 0;
>      } else {
>          VAConfigAttribValEncROI roi = {
>              .value = attr.value,
>          };
>  
>          ctx->roi_max_regions = roi.bits.num_roi_regions;
> -        ctx->roi_allowed = ctx->roi_max_regions > 0 &&
> +        base_ctx->roi_allowed = ctx->roi_max_regions > 0 &&
>              (ctx->va_rc_mode == VA_RC_CQP ||
>               roi.bits.roi_rc_qp_delta_support);
>      }
> @@ -2631,7 +2643,8 @@ static void vaapi_encode_free_output_buffer(FFRefStructOpaque opaque,
>  static int vaapi_encode_alloc_output_buffer(FFRefStructOpaque opaque, void *obj)
>  {
>      AVCodecContext   *avctx = opaque.nc;
> -    VAAPIEncodeContext *ctx = avctx->priv_data;
> +    HWBaseEncodeContext *base_ctx = avctx->priv_data;
> +    VAAPIEncodeContext       *ctx = avctx->priv_data;
>      VABufferID *buffer_id = obj;
>      VAStatus vas;
>  
> @@ -2641,7 +2654,7 @@ static int vaapi_encode_alloc_output_buffer(FFRefStructOpaque opaque, void *obj)
>      // bound on that.
>      vas = vaCreateBuffer(ctx->hwctx->display, ctx->va_context,
>                           VAEncCodedBufferType,
> -                         3 * ctx->surface_width * ctx->surface_height +
> +                         3 * base_ctx->surface_width * base_ctx->surface_height +
>                           (1 << 16), 1, 0, buffer_id);
>      if (vas != VA_STATUS_SUCCESS) {
>          av_log(avctx, AV_LOG_ERROR, "Failed to create bitstream "
> @@ -2656,20 +2669,21 @@ static int vaapi_encode_alloc_output_buffer(FFRefStructOpaque opaque, void *obj)
>  
>  static av_cold int vaapi_encode_create_recon_frames(AVCodecContext *avctx)
>  {
> -    VAAPIEncodeContext *ctx = avctx->priv_data;
> +    HWBaseEncodeContext *base_ctx = avctx->priv_data;
> +    VAAPIEncodeContext       *ctx = avctx->priv_data;
>      AVVAAPIHWConfig *hwconfig = NULL;
>      AVHWFramesConstraints *constraints = NULL;
>      enum AVPixelFormat recon_format;
>      int err, i;
>  
> -    hwconfig = av_hwdevice_hwconfig_alloc(ctx->device_ref);
> +    hwconfig = av_hwdevice_hwconfig_alloc(base_ctx->device_ref);
>      if (!hwconfig) {
>          err = AVERROR(ENOMEM);
>          goto fail;
>      }
>      hwconfig->config_id = ctx->va_config;
>  
> -    constraints = av_hwdevice_get_hwframe_constraints(ctx->device_ref,
> +    constraints = av_hwdevice_get_hwframe_constraints(base_ctx->device_ref,
>                                                        hwconfig);
>      if (!constraints) {
>          err = AVERROR(ENOMEM);
> @@ -2682,9 +2696,9 @@ static av_cold int vaapi_encode_create_recon_frames(AVCodecContext *avctx)
>      recon_format = AV_PIX_FMT_NONE;
>      if (constraints->valid_sw_formats) {
>          for (i = 0; constraints->valid_sw_formats[i] != AV_PIX_FMT_NONE; i++) {
> -            if (ctx->input_frames->sw_format ==
> +            if (base_ctx->input_frames->sw_format ==
>                  constraints->valid_sw_formats[i]) {
> -                recon_format = ctx->input_frames->sw_format;
> +                recon_format = base_ctx->input_frames->sw_format;
>                  break;
>              }
>          }
> @@ -2695,18 +2709,18 @@ static av_cold int vaapi_encode_create_recon_frames(AVCodecContext *avctx)
>          }
>      } else {
>          // No idea what to use; copy input format.
> -        recon_format = ctx->input_frames->sw_format;
> +        recon_format = base_ctx->input_frames->sw_format;
>      }
>      av_log(avctx, AV_LOG_DEBUG, "Using %s as format of "
>             "reconstructed frames.\n", av_get_pix_fmt_name(recon_format));
>  
> -    if (ctx->surface_width  < constraints->min_width  ||
> -        ctx->surface_height < constraints->min_height ||
> -        ctx->surface_width  > constraints->max_width ||
> -        ctx->surface_height > constraints->max_height) {
> +    if (base_ctx->surface_width  < constraints->min_width  ||
> +        base_ctx->surface_height < constraints->min_height ||
> +        base_ctx->surface_width  > constraints->max_width ||
> +        base_ctx->surface_height > constraints->max_height) {
>          av_log(avctx, AV_LOG_ERROR, "Hardware does not support encoding at "
>                 "size %dx%d (constraints: width %d-%d height %d-%d).\n",
> -               ctx->surface_width, ctx->surface_height,
> +               base_ctx->surface_width, base_ctx->surface_height,
>                 constraints->min_width,  constraints->max_width,
>                 constraints->min_height, constraints->max_height);
>          err = AVERROR(EINVAL);
> @@ -2716,19 +2730,19 @@ static av_cold int vaapi_encode_create_recon_frames(AVCodecContext *avctx)
>      av_freep(&hwconfig);
>      av_hwframe_constraints_free(&constraints);
>  
> -    ctx->recon_frames_ref = av_hwframe_ctx_alloc(ctx->device_ref);
> -    if (!ctx->recon_frames_ref) {
> +    base_ctx->recon_frames_ref = av_hwframe_ctx_alloc(base_ctx->device_ref);
> +    if (!base_ctx->recon_frames_ref) {
>          err = AVERROR(ENOMEM);
>          goto fail;
>      }
> -    ctx->recon_frames = (AVHWFramesContext*)ctx->recon_frames_ref->data;
> +    base_ctx->recon_frames = (AVHWFramesContext*)base_ctx->recon_frames_ref->data;
>  
> -    ctx->recon_frames->format    = AV_PIX_FMT_VAAPI;
> -    ctx->recon_frames->sw_format = recon_format;
> -    ctx->recon_frames->width     = ctx->surface_width;
> -    ctx->recon_frames->height    = ctx->surface_height;
> +    base_ctx->recon_frames->format    = AV_PIX_FMT_VAAPI;
> +    base_ctx->recon_frames->sw_format = recon_format;
> +    base_ctx->recon_frames->width     = base_ctx->surface_width;
> +    base_ctx->recon_frames->height    = base_ctx->surface_height;
>  
> -    err = av_hwframe_ctx_init(ctx->recon_frames_ref);
> +    err = av_hwframe_ctx_init(base_ctx->recon_frames_ref);
>      if (err < 0) {
>          av_log(avctx, AV_LOG_ERROR, "Failed to initialise reconstructed "
>                 "frame context: %d.\n", err);
> @@ -2744,7 +2758,8 @@ static av_cold int vaapi_encode_create_recon_frames(AVCodecContext *avctx)
>  
>  av_cold int ff_vaapi_encode_init(AVCodecContext *avctx)
>  {
> -    VAAPIEncodeContext *ctx = avctx->priv_data;
> +    HWBaseEncodeContext *base_ctx = avctx->priv_data;
> +    VAAPIEncodeContext       *ctx = avctx->priv_data;
>      AVVAAPIFramesContext *recon_hwctx = NULL;
>      VAStatus vas;
>      int err;
> @@ -2754,8 +2769,8 @@ av_cold int ff_vaapi_encode_init(AVCodecContext *avctx)
>  
>      /* If you add something that can fail above this av_frame_alloc(),
>       * modify ff_vaapi_encode_close() accordingly. */
> -    ctx->frame = av_frame_alloc();
> -    if (!ctx->frame) {
> +    base_ctx->frame = av_frame_alloc();
> +    if (!base_ctx->frame) {
>          return AVERROR(ENOMEM);
>      }
>  
> @@ -2765,23 +2780,23 @@ av_cold int ff_vaapi_encode_init(AVCodecContext *avctx)
>          return AVERROR(EINVAL);
>      }
>  
> -    ctx->input_frames_ref = av_buffer_ref(avctx->hw_frames_ctx);
> -    if (!ctx->input_frames_ref) {
> +    base_ctx->input_frames_ref = av_buffer_ref(avctx->hw_frames_ctx);
> +    if (!base_ctx->input_frames_ref) {
>          err = AVERROR(ENOMEM);
>          goto fail;
>      }
> -    ctx->input_frames = (AVHWFramesContext*)ctx->input_frames_ref->data;
> +    base_ctx->input_frames = (AVHWFramesContext*)base_ctx->input_frames_ref->data;
>  
> -    ctx->device_ref = av_buffer_ref(ctx->input_frames->device_ref);
> -    if (!ctx->device_ref) {
> +    base_ctx->device_ref = av_buffer_ref(base_ctx->input_frames->device_ref);
> +    if (!base_ctx->device_ref) {
>          err = AVERROR(ENOMEM);
>          goto fail;
>      }
> -    ctx->device = (AVHWDeviceContext*)ctx->device_ref->data;
> -    ctx->hwctx = ctx->device->hwctx;
> +    base_ctx->device = (AVHWDeviceContext*)base_ctx->device_ref->data;
> +    ctx->hwctx = base_ctx->device->hwctx;
>  
> -    ctx->tail_pkt = av_packet_alloc();
> -    if (!ctx->tail_pkt) {
> +    base_ctx->tail_pkt = av_packet_alloc();
> +    if (!base_ctx->tail_pkt) {
>          err = AVERROR(ENOMEM);
>          goto fail;
>      }
> @@ -2796,11 +2811,11 @@ av_cold int ff_vaapi_encode_init(AVCodecContext *avctx)
>              goto fail;
>      } else {
>          // Assume 16x16 blocks.
> -        ctx->surface_width  = FFALIGN(avctx->width,  16);
> -        ctx->surface_height = FFALIGN(avctx->height, 16);
> +        base_ctx->surface_width  = FFALIGN(avctx->width,  16);
> +        base_ctx->surface_height = FFALIGN(avctx->height, 16);
>          if (ctx->codec->flags & FLAG_SLICE_CONTROL) {
> -            ctx->slice_block_width  = 16;
> -            ctx->slice_block_height = 16;
> +            base_ctx->slice_block_width  = 16;
> +            base_ctx->slice_block_height = 16;
>          }
>      }
>  
> @@ -2851,9 +2866,9 @@ av_cold int ff_vaapi_encode_init(AVCodecContext *avctx)
>      if (err < 0)
>          goto fail;
>  
> -    recon_hwctx = ctx->recon_frames->hwctx;
> +    recon_hwctx = base_ctx->recon_frames->hwctx;
>      vas = vaCreateContext(ctx->hwctx->display, ctx->va_config,
> -                          ctx->surface_width, ctx->surface_height,
> +                          base_ctx->surface_width, base_ctx->surface_height,
>                            VA_PROGRESSIVE,
>                            recon_hwctx->surface_ids,
>                            recon_hwctx->nb_surfaces,
> @@ -2880,8 +2895,8 @@ av_cold int ff_vaapi_encode_init(AVCodecContext *avctx)
>              goto fail;
>      }
>  
> -    ctx->output_delay = ctx->b_per_p;
> -    ctx->decode_delay = ctx->max_b_depth;
> +    base_ctx->output_delay = base_ctx->b_per_p;
> +    base_ctx->decode_delay = base_ctx->max_b_depth;
>  
>      if (ctx->codec->sequence_params_size > 0) {
>          ctx->codec_sequence_params =
> @@ -2936,11 +2951,11 @@ av_cold int ff_vaapi_encode_init(AVCodecContext *avctx)
>      // check vaSyncBuffer function
>      vas = vaSyncBuffer(ctx->hwctx->display, VA_INVALID_ID, 0);
>      if (vas != VA_STATUS_ERROR_UNIMPLEMENTED) {
> -        ctx->has_sync_buffer_func = 1;
> -        ctx->encode_fifo = av_fifo_alloc2(ctx->async_depth,
> -                                          sizeof(VAAPIEncodePicture *),
> -                                          0);
> -        if (!ctx->encode_fifo)
> +        base_ctx->async_encode = 1;
> +        base_ctx->encode_fifo = av_fifo_alloc2(base_ctx->async_depth,
> +                                               sizeof(VAAPIEncodePicture*),
> +                                               0);
> +        if (!base_ctx->encode_fifo)
>              return AVERROR(ENOMEM);
>      }
>  #endif
> @@ -2953,15 +2968,16 @@ fail:
>  
>  av_cold int ff_vaapi_encode_close(AVCodecContext *avctx)
>  {
> -    VAAPIEncodeContext *ctx = avctx->priv_data;
> -    VAAPIEncodePicture *pic, *next;
> +    HWBaseEncodeContext *base_ctx = avctx->priv_data;
> +    VAAPIEncodeContext       *ctx = avctx->priv_data;
> +    HWBaseEncodePicture *pic, *next;
>  
>      /* We check ctx->frame to know whether ff_vaapi_encode_init()
>       * has been called and va_config/va_context initialized. */
> -    if (!ctx->frame)
> +    if (!base_ctx->frame)
>          return 0;
>  
> -    for (pic = ctx->pic_start; pic; pic = next) {
> +    for (pic = base_ctx->pic_start; pic; pic = next) {
>          next = pic->next;
>          vaapi_encode_free(avctx, pic);
>      }
> @@ -2978,16 +2994,16 @@ av_cold int ff_vaapi_encode_close(AVCodecContext *avctx)
>          ctx->va_config = VA_INVALID_ID;
>      }
>  
> -    av_frame_free(&ctx->frame);
> -    av_packet_free(&ctx->tail_pkt);
> +    av_frame_free(&base_ctx->frame);
> +    av_packet_free(&base_ctx->tail_pkt);
>  
>      av_freep(&ctx->codec_sequence_params);
>      av_freep(&ctx->codec_picture_params);
> -    av_fifo_freep2(&ctx->encode_fifo);
> +    av_fifo_freep2(&base_ctx->encode_fifo);
>  
> -    av_buffer_unref(&ctx->recon_frames_ref);
> -    av_buffer_unref(&ctx->input_frames_ref);
> -    av_buffer_unref(&ctx->device_ref);
> +    av_buffer_unref(&base_ctx->recon_frames_ref);
> +    av_buffer_unref(&base_ctx->input_frames_ref);
> +    av_buffer_unref(&base_ctx->device_ref);
>  
>      return 0;
>  }
> diff --git a/libavcodec/vaapi_encode.h b/libavcodec/vaapi_encode.h
> index 6964055b93..8eee455881 100644
> --- a/libavcodec/vaapi_encode.h
> +++ b/libavcodec/vaapi_encode.h
> @@ -29,38 +29,30 @@
>  
>  #include "libavutil/hwcontext.h"
>  #include "libavutil/hwcontext_vaapi.h"
> -#include "libavutil/fifo.h"
>  
>  #include "avcodec.h"
>  #include "hwconfig.h"
> +#include "hw_base_encode.h"
>  
>  struct VAAPIEncodeType;
>  struct VAAPIEncodePicture;
>  
> +// Codec output packet without timestamp delay, which means the
> +// output packet has the same PTS and DTS.
> +#define FLAG_TIMESTAMP_NO_DELAY (1 << 6)
> +
>  enum {
>      MAX_CONFIG_ATTRIBUTES  = 4,
>      MAX_GLOBAL_PARAMS      = 4,
> -    MAX_DPB_SIZE           = 16,
> -    MAX_PICTURE_REFERENCES = 2,
> -    MAX_REORDER_DELAY      = 16,
>      MAX_PARAM_BUFFER_SIZE  = 1024,
>      // A.4.1: table A.6 allows at most 22 tile rows for any level.
>      MAX_TILE_ROWS          = 22,
>      // A.4.1: table A.6 allows at most 20 tile columns for any level.
>      MAX_TILE_COLS          = 20,
> -    MAX_ASYNC_DEPTH        = 64,
> -    MAX_REFERENCE_LIST_NUM = 2,
>  };
>  
>  extern const AVCodecHWConfigInternal *const ff_vaapi_encode_hw_configs[];
>  
> -enum {
> -    PICTURE_TYPE_IDR = 0,
> -    PICTURE_TYPE_I   = 1,
> -    PICTURE_TYPE_P   = 2,
> -    PICTURE_TYPE_B   = 3,
> -};
> -
>  typedef struct VAAPIEncodeSlice {
>      int             index;
>      int             row_start;
> @@ -71,16 +63,7 @@ typedef struct VAAPIEncodeSlice {
>  } VAAPIEncodeSlice;
>  
>  typedef struct VAAPIEncodePicture {
> -    struct VAAPIEncodePicture *next;
> -
> -    int64_t         display_order;
> -    int64_t         encode_order;
> -    int64_t         pts;
> -    int64_t         duration;
> -    int             force_idr;
> -
> -    void           *opaque;
> -    AVBufferRef    *opaque_ref;
> +    HWBaseEncodePicture base;
>  
>  #if VA_CHECK_VERSION(1, 0, 0)
>      // ROI regions.
> @@ -89,15 +72,7 @@ typedef struct VAAPIEncodePicture {
>      void           *roi;
>  #endif
>  
> -    int             type;
> -    int             b_depth;
> -    int             encode_issued;
> -    int             encode_complete;
> -
> -    AVFrame        *input_image;
>      VASurfaceID     input_surface;
> -
> -    AVFrame        *recon_image;
>      VASurfaceID     recon_surface;
>  
>      int          nb_param_buffers;
> @@ -107,34 +82,10 @@ typedef struct VAAPIEncodePicture {
>      VABufferID     *output_buffer_ref;
>      VABufferID      output_buffer;
>  
> -    void           *priv_data;
>      void           *codec_picture_params;
>  
> -    // Whether this picture is a reference picture.
> -    int             is_reference;
> -
> -    // The contents of the DPB after this picture has been decoded.
> -    // This will contain the picture itself if it is a reference picture,
> -    // but not if it isn't.
> -    int                     nb_dpb_pics;
> -    struct VAAPIEncodePicture *dpb[MAX_DPB_SIZE];
> -    // The reference pictures used in decoding this picture. If they are
> -    // used by later pictures they will also appear in the DPB. ref[0][] for
> -    // previous reference frames. ref[1][] for future reference frames.
> -    int                     nb_refs[MAX_REFERENCE_LIST_NUM];
> -    struct VAAPIEncodePicture *refs[MAX_REFERENCE_LIST_NUM][MAX_PICTURE_REFERENCES];
> -    // The previous reference picture in encode order.  Must be in at least
> -    // one of the reference list and DPB list.
> -    struct VAAPIEncodePicture *prev;
> -    // Reference count for other pictures referring to this one through
> -    // the above pointers, directly from incomplete pictures and indirectly
> -    // through completed pictures.
> -    int             ref_count[2];
> -    int             ref_removed[2];
> -
>      int          nb_slices;
>      VAAPIEncodeSlice *slices;
> -
>      /**
>       * indicate if current frame is an independent frame that the coded data
>       * can be pushed to downstream directly. Coded of non-independent frame
> @@ -193,57 +144,26 @@ typedef struct VAAPIEncodeRCMode {
>  } VAAPIEncodeRCMode;
>  
>  typedef struct VAAPIEncodeContext {
> -    const AVClass *class;
> +    // Base.
> +    HWBaseEncodeContext base;
>  
>      // Codec-specific hooks.
>      const struct VAAPIEncodeType *codec;
>  
> -    // Global options.
> -
>      // Use low power encoding mode.
>      int             low_power;
>  
> -    // Number of I frames between IDR frames.
> -    int             idr_interval;
> -
> -    // Desired B frame reference depth.
> -    int             desired_b_depth;
> -
>      // Max Frame Size
>      int             max_frame_size;
>  
> -    // Explicitly set RC mode (otherwise attempt to pick from
> -    // available modes).
> -    int             explicit_rc_mode;
> -
> -    // Explicitly-set QP, for use with the "qp" options.
> -    // (Forces CQP mode when set, overriding everything else.)
> -    int             explicit_qp;
> -
>      // Desired packed headers.
>      unsigned int    desired_packed_headers;
>  
> -    // The required size of surfaces.  This is probably the input
> -    // size (AVCodecContext.width|height) aligned up to whatever
> -    // block size is required by the codec.
> -    int             surface_width;
> -    int             surface_height;
> -
> -    // The block size for slice calculations.
> -    int             slice_block_width;
> -    int             slice_block_height;
> -
> -    // Everything above this point must be set before calling
> -    // ff_vaapi_encode_init().
> -
>      // Chosen encoding profile details.
>      const VAAPIEncodeProfile *profile;
>  
>      // Chosen rate control mode details.
>      const VAAPIEncodeRCMode *rc_mode;
> -    // RC quality level - meaning depends on codec and RC mode.
> -    // In CQP mode this sets the fixed quantiser value.
> -    int             rc_quality;
>  
>      // Encoding profile (VAProfile*).
>      VAProfile       va_profile;
> @@ -263,18 +183,8 @@ typedef struct VAAPIEncodeContext {
>      VAConfigID      va_config;
>      VAContextID     va_context;
>  
> -    AVBufferRef    *device_ref;
> -    AVHWDeviceContext *device;
>      AVVAAPIDeviceContext *hwctx;
>  
> -    // The hardware frame context containing the input frames.
> -    AVBufferRef    *input_frames_ref;
> -    AVHWFramesContext *input_frames;
> -
> -    // The hardware frame context containing the reconstructed frames.
> -    AVBufferRef    *recon_frames_ref;
> -    AVHWFramesContext *recon_frames;
> -
>      // Pool of (reusable) bitstream output buffers.
>      struct FFRefStructPool *output_buffer_pool;
>  
> @@ -301,30 +211,6 @@ typedef struct VAAPIEncodeContext {
>      // structure (VAEncPictureParameterBuffer*).
>      void           *codec_picture_params;
>  
> -    // Current encoding window, in display (input) order.
> -    VAAPIEncodePicture *pic_start, *pic_end;
> -    // The next picture to use as the previous reference picture in
> -    // encoding order. Order from small to large in encoding order.
> -    VAAPIEncodePicture *next_prev[MAX_PICTURE_REFERENCES];
> -    int                 nb_next_prev;
> -
> -    // Next input order index (display order).
> -    int64_t         input_order;
> -    // Number of frames that output is behind input.
> -    int64_t         output_delay;
> -    // Next encode order index.
> -    int64_t         encode_order;
> -    // Number of frames decode output will need to be delayed.
> -    int64_t         decode_delay;
> -    // Next output order index (in encode order).
> -    int64_t         output_order;
> -
> -    // Timestamp handling.
> -    int64_t         first_pts;
> -    int64_t         dts_pts_diff;
> -    int64_t         ts_ring[MAX_REORDER_DELAY * 3 +
> -                            MAX_ASYNC_DEPTH];
> -
>      // Slice structure.
>      int slice_block_rows;
>      int slice_block_cols;
> @@ -343,43 +229,12 @@ typedef struct VAAPIEncodeContext {
>      // Location of the i-th tile row boundary.
>      int row_bd[MAX_TILE_ROWS + 1];
>  
> -    // Frame type decision.
> -    int gop_size;
> -    int closed_gop;
> -    int gop_per_idr;
> -    int p_per_i;
> -    int max_b_depth;
> -    int b_per_p;
> -    int force_idr;
> -    int idr_counter;
> -    int gop_counter;
> -    int end_of_stream;
> -    int p_to_gpb;
> -
> -    // Whether the driver supports ROI at all.
> -    int             roi_allowed;
>      // Maximum number of regions supported by the driver.
>      int             roi_max_regions;
>      // Quantisation range for offset calculations.  Set by codec-specific
>      // code, as it may change based on parameters.
>      int             roi_quant_range;
>  
> -    // The encoder does not support cropping information, so warn about
> -    // it the first time we encounter any nonzero crop fields.
> -    int             crop_warned;
> -    // If the driver does not support ROI then warn the first time we
> -    // encounter a frame with ROI side data.
> -    int             roi_warned;
> -
> -    AVFrame         *frame;
> -
> -    // Whether the driver support vaSyncBuffer
> -    int             has_sync_buffer_func;
> -    // Store buffered pic
> -    AVFifo          *encode_fifo;
> -    // Max number of frame buffered in encoder.
> -    int             async_depth;
> -
>      /** Head data for current output pkt, used only for AV1. */
>      //void  *header_data;
>      //size_t header_data_size;
> @@ -389,30 +244,8 @@ typedef struct VAAPIEncodeContext {
>       * This is a RefStruct reference.
>       */
>      VABufferID     *coded_buffer_ref;
> -
> -    /** Tail data of a pic, now only used for av1 repeat frame header. */
> -    AVPacket        *tail_pkt;
>  } VAAPIEncodeContext;
>  
> -enum {
> -    // Codec supports controlling the subdivision of pictures into slices.
> -    FLAG_SLICE_CONTROL         = 1 << 0,
> -    // Codec only supports constant quality (no rate control).
> -    FLAG_CONSTANT_QUALITY_ONLY = 1 << 1,
> -    // Codec is intra-only.
> -    FLAG_INTRA_ONLY            = 1 << 2,
> -    // Codec supports B-pictures.
> -    FLAG_B_PICTURES            = 1 << 3,
> -    // Codec supports referencing B-pictures.
> -    FLAG_B_PICTURE_REFERENCES  = 1 << 4,
> -    // Codec supports non-IDR key pictures (that is, key pictures do
> -    // not necessarily empty the DPB).
> -    FLAG_NON_IDR_KEY_PICTURES  = 1 << 5,
> -    // Codec output packet without timestamp delay, which means the
> -    // output packet has same PTS and DTS.
> -    FLAG_TIMESTAMP_NO_DELAY    = 1 << 6,
> -};
> -
>  typedef struct VAAPIEncodeType {
>      // List of supported profiles and corresponding VAAPI profiles.
>      // (Must end with AV_PROFILE_UNKNOWN.)
> @@ -505,19 +338,6 @@ int ff_vaapi_encode_close(AVCodecContext *avctx);
>        "may not support all encoding features)", \
>        OFFSET(common.low_power), AV_OPT_TYPE_BOOL, \
>        { .i64 = 0 }, 0, 1, FLAGS }, \
> -    { "idr_interval", \
> -      "Distance (in I-frames) between IDR frames", \
> -      OFFSET(common.idr_interval), AV_OPT_TYPE_INT, \
> -      { .i64 = 0 }, 0, INT_MAX, FLAGS }, \
> -    { "b_depth", \
> -      "Maximum B-frame reference depth", \
> -      OFFSET(common.desired_b_depth), AV_OPT_TYPE_INT, \
> -      { .i64 = 1 }, 1, INT_MAX, FLAGS }, \
> -    { "async_depth", "Maximum processing parallelism. " \
> -      "Increase this to improve single channel performance. This option " \
> -      "doesn't work if driver doesn't implement vaSyncBuffer function.", \
> -      OFFSET(common.async_depth), AV_OPT_TYPE_INT, \
> -      { .i64 = 2 }, 1, MAX_ASYNC_DEPTH, FLAGS }, \
>      { "max_frame_size", \
>        "Maximum frame size (in bytes)",\
>        OFFSET(common.max_frame_size), AV_OPT_TYPE_INT, \
> @@ -529,7 +349,7 @@ int ff_vaapi_encode_close(AVCodecContext *avctx);
>  #define VAAPI_ENCODE_RC_OPTIONS \
>      { "rc_mode",\
>        "Set rate control mode", \
> -      OFFSET(common.explicit_rc_mode), AV_OPT_TYPE_INT, \
> +      OFFSET(common.base.explicit_rc_mode), AV_OPT_TYPE_INT, \
>        { .i64 = RC_MODE_AUTO }, RC_MODE_AUTO, RC_MODE_MAX, FLAGS, .unit =
> "rc_mode" }, \
>      { "auto", "Choose mode automatically based on other parameters", \
>        0, AV_OPT_TYPE_CONST, { .i64 = RC_MODE_AUTO }, 0, 0, FLAGS, .unit = "rc_mode" }, \
> diff --git a/libavcodec/vaapi_encode_av1.c b/libavcodec/vaapi_encode_av1.c
> index a46b882ab9..512b4e3733 100644
> --- a/libavcodec/vaapi_encode_av1.c
> +++ b/libavcodec/vaapi_encode_av1.c
> @@ -109,20 +109,21 @@ static void vaapi_encode_av1_trace_write_log(void *ctx,
>  
>  static av_cold int vaapi_encode_av1_get_encoder_caps(AVCodecContext *avctx)
>  {
> -    VAAPIEncodeContext *ctx = avctx->priv_data;
> -    VAAPIEncodeAV1Context *priv = avctx->priv_data;
> +    HWBaseEncodeContext *base_ctx = avctx->priv_data;
> +    VAAPIEncodeAV1Context   *priv = avctx->priv_data;
>  
>      // Surfaces must be aligned to superblock boundaries.
> -    ctx->surface_width  = FFALIGN(avctx->width,  priv->use_128x128_superblock ? 128 : 64);
> -    ctx->surface_height = FFALIGN(avctx->height, priv->use_128x128_superblock ? 128 : 64);
> +    base_ctx->surface_width  = FFALIGN(avctx->width,  priv->use_128x128_superblock ? 128 : 64);
> +    base_ctx->surface_height = FFALIGN(avctx->height, priv->use_128x128_superblock ? 128 : 64);
>  
>      return 0;
>  }
>  
>  static av_cold int vaapi_encode_av1_configure(AVCodecContext *avctx)
>  {
> -    VAAPIEncodeContext     *ctx = avctx->priv_data;
> -    VAAPIEncodeAV1Context *priv = avctx->priv_data;
> +    HWBaseEncodeContext *base_ctx = avctx->priv_data;
> +    VAAPIEncodeContext       *ctx = avctx->priv_data;
> +    VAAPIEncodeAV1Context   *priv = avctx->priv_data;
>      int ret;
>  
>      ret = ff_cbs_init(&priv->cbc, AV_CODEC_ID_AV1, avctx);
> @@ -134,7 +135,7 @@ static int vaapi_encode_av1_configure(AVCodecContext *avctx)
>      priv->cbc->trace_write_callback = vaapi_encode_av1_trace_write_log;
>  
>      if (ctx->rc_mode->quality) {
> -        priv->q_idx_p = av_clip(ctx->rc_quality, 0, AV1_MAX_QUANT);
> +        priv->q_idx_p = av_clip(base_ctx->rc_quality, 0, AV1_MAX_QUANT);
>          if (fabs(avctx->i_quant_factor) > 0.0)
>              priv->q_idx_idr =
>                  av_clip((fabs(avctx->i_quant_factor) * priv->q_idx_p  +
> @@ -355,6 +356,7 @@ static int vaapi_encode_av1_write_sequence_header(AVCodecContext *avctx,
>  
>  static int vaapi_encode_av1_init_sequence_params(AVCodecContext *avctx)
>  {
> +    HWBaseEncodeContext         *base_ctx = avctx->priv_data;
>      VAAPIEncodeContext               *ctx = avctx->priv_data;
>      VAAPIEncodeAV1Context           *priv = avctx->priv_data;
>      AV1RawOBU                     *sh_obu = &priv->sh;
> @@ -367,7 +369,7 @@ static int vaapi_encode_av1_init_sequence_params(AVCodecContext *avctx)
>      memset(sh_obu, 0, sizeof(*sh_obu));
>      sh_obu->header.obu_type = AV1_OBU_SEQUENCE_HEADER;
>  
> -    desc = av_pix_fmt_desc_get(priv->common.input_frames->sw_format);
> +    desc = av_pix_fmt_desc_get(base_ctx->input_frames->sw_format);
>      av_assert0(desc);
>  
>      sh->seq_profile  = avctx->profile;
> @@ -419,7 +421,7 @@ static int vaapi_encode_av1_init_sequence_params(AVCodecContext *avctx)
>              framerate = 0;
>  
>          level = ff_av1_guess_level(avctx->bit_rate, priv->tier,
> -                                   ctx->surface_width, ctx->surface_height,
> +                                   base_ctx->surface_width, base_ctx->surface_height,
>                                     priv->tile_rows * priv->tile_cols,
>                                     priv->tile_cols, framerate);
>          if (level) {
> @@ -436,8 +438,8 @@ static int vaapi_encode_av1_init_sequence_params(AVCodecContext *avctx)
>      vseq->seq_level_idx           = sh->seq_level_idx[0];
>      vseq->seq_tier                = sh->seq_tier[0];
>      vseq->order_hint_bits_minus_1 = sh->order_hint_bits_minus_1;
> -    vseq->intra_period            = ctx->gop_size;
> -    vseq->ip_period               = ctx->b_per_p + 1;
> +    vseq->intra_period            = base_ctx->gop_size;
> +    vseq->ip_period               = base_ctx->b_per_p + 1;
>  
>      vseq->seq_fields.bits.enable_order_hint = sh->enable_order_hint;
>  
> @@ -464,12 +466,13 @@ static int vaapi_encode_av1_init_picture_params(AVCodecContext *avctx,
>  {
>      VAAPIEncodeContext              *ctx = avctx->priv_data;
>      VAAPIEncodeAV1Context          *priv = avctx->priv_data;
> -    VAAPIEncodeAV1Picture          *hpic = pic->priv_data;
> +    HWBaseEncodePicture        *base_pic = (HWBaseEncodePicture *)pic;
> +    VAAPIEncodeAV1Picture          *hpic = base_pic->priv_data;
>      AV1RawOBU                    *fh_obu = &priv->fh;
>      AV1RawFrameHeader                *fh = &fh_obu->obu.frame.header;
>      VAEncPictureParameterBufferAV1 *vpic = pic->codec_picture_params;
>      CodedBitstreamFragment          *obu = &priv->current_obu;
> -    VAAPIEncodePicture    *ref;
> +    HWBaseEncodePicture    *ref;
>      VAAPIEncodeAV1Picture *href;
>      int slot, i;
>      int ret;
> @@ -478,24 +481,24 @@ static int vaapi_encode_av1_init_picture_params(AVCodecContext *avctx,
>  
>      memset(fh_obu, 0, sizeof(*fh_obu));
>      pic->nb_slices = priv->tile_groups;
> -    pic->non_independent_frame = pic->encode_order < pic->display_order;
> +    pic->non_independent_frame = base_pic->encode_order < base_pic->display_order;
>      fh_obu->header.obu_type = AV1_OBU_FRAME_HEADER;
>      fh_obu->header.obu_has_size_field = 1;
>  
> -    switch (pic->type) {
> +    switch (base_pic->type) {
>      case PICTURE_TYPE_IDR:
> -        av_assert0(pic->nb_refs[0] == 0 || pic->nb_refs[1]);
> +        av_assert0(base_pic->nb_refs[0] == 0 || base_pic->nb_refs[1]);
>          fh->frame_type = AV1_FRAME_KEY;
>          fh->refresh_frame_flags = 0xFF;
>          fh->base_q_idx = priv->q_idx_idr;
>          hpic->slot = 0;
> -        hpic->last_idr_frame = pic->display_order;
> +        hpic->last_idr_frame = base_pic->display_order;
>          break;
>      case PICTURE_TYPE_P:
> -        av_assert0(pic->nb_refs[0]);
> +        av_assert0(base_pic->nb_refs[0]);
>          fh->frame_type = AV1_FRAME_INTER;
>          fh->base_q_idx = priv->q_idx_p;
> -        ref = pic->refs[0][pic->nb_refs[0] - 1];
> +        ref = base_pic->refs[0][base_pic->nb_refs[0] - 1];
>          href = ref->priv_data;
>          hpic->slot = !href->slot;
>          hpic->last_idr_frame = href->last_idr_frame;
> @@ -510,8 +513,8 @@ static int vaapi_encode_av1_init_picture_params(AVCodecContext *avctx,
>          vpic->ref_frame_ctrl_l0.fields.search_idx0 = AV1_REF_FRAME_LAST;
>  
>          /** set the 2nd nearest frame in L0 as Golden frame. */
> -        if (pic->nb_refs[0] > 1) {
> -            ref = pic->refs[0][pic->nb_refs[0] - 2];
> +        if (base_pic->nb_refs[0] > 1) {
> +            ref = base_pic->refs[0][base_pic->nb_refs[0] - 2];
>              href = ref->priv_data;
>              fh->ref_frame_idx[3] = href->slot;
>              fh->ref_order_hint[href->slot] = ref->display_order - href->last_idr_frame;
> @@ -519,7 +522,7 @@ static int vaapi_encode_av1_init_picture_params(AVCodecContext *avctx,
>          }
>          break;
>      case PICTURE_TYPE_B:
> -        av_assert0(pic->nb_refs[0] && pic->nb_refs[1]);
> +        av_assert0(base_pic->nb_refs[0] && base_pic->nb_refs[1]);
>          fh->frame_type = AV1_FRAME_INTER;
>          fh->base_q_idx = priv->q_idx_b;
>          fh->refresh_frame_flags = 0x0;
> @@ -532,7 +535,7 @@ static int vaapi_encode_av1_init_picture_params(AVCodecContext *avctx,
>          vpic->ref_frame_ctrl_l0.fields.search_idx0 = AV1_REF_FRAME_LAST;
>          vpic->ref_frame_ctrl_l1.fields.search_idx0 = AV1_REF_FRAME_BWDREF;
>  
> -        ref                            = pic->refs[0][pic->nb_refs[0] - 1];
> +        ref                            = base_pic->refs[0][base_pic->nb_refs[0] - 1];
>          href                           = ref->priv_data;
>          hpic->last_idr_frame           = href->last_idr_frame;
>          fh->primary_ref_frame          = href->slot;
> @@ -541,7 +544,7 @@ static int vaapi_encode_av1_init_picture_params(AVCodecContext *avctx,
>              fh->ref_frame_idx[i] = href->slot;
>          }
>  
> -        ref                            = pic->refs[1][pic->nb_refs[1] - 1];
> +        ref                            = base_pic->refs[1][base_pic->nb_refs[1] - 1];
>          href                           = ref->priv_data;
>          fh->ref_order_hint[href->slot] = ref->display_order - href->last_idr_frame;
>          for (i = AV1_REF_FRAME_GOLDEN; i < AV1_REFS_PER_FRAME; i++) {
> @@ -552,13 +555,13 @@ static int vaapi_encode_av1_init_picture_params(AVCodecContext *avctx,
>          av_assert0(0 && "invalid picture type");
>      }
>  
> -    fh->show_frame                = pic->display_order <= pic->encode_order;
> +    fh->show_frame                = base_pic->display_order <= base_pic->encode_order;
>      fh->showable_frame            = fh->frame_type != AV1_FRAME_KEY;
>      fh->frame_width_minus_1       = avctx->width - 1;
>      fh->frame_height_minus_1      = avctx->height - 1;
>      fh->render_width_minus_1      = fh->frame_width_minus_1;
>      fh->render_height_minus_1     = fh->frame_height_minus_1;
> -    fh->order_hint                = pic->display_order - hpic->last_idr_frame;
> +    fh->order_hint                = base_pic->display_order - hpic->last_idr_frame;
>      fh->tile_cols                 = priv->tile_cols;
>      fh->tile_rows                 = priv->tile_rows;
>      fh->tile_cols_log2            = priv->tile_cols_log2;
> @@ -624,13 +627,13 @@ static int vaapi_encode_av1_init_picture_params(AVCodecContext *avctx,
>          vpic->reference_frames[i] = VA_INVALID_SURFACE;
>  
>      for (i = 0; i < MAX_REFERENCE_LIST_NUM; i++) {
> -        for (int j = 0; j < pic->nb_refs[i]; j++) {
> -            VAAPIEncodePicture *ref_pic = pic->refs[i][j];
> +        for (int j = 0; j < base_pic->nb_refs[i]; j++) {
> +            HWBaseEncodePicture *ref_pic = base_pic->refs[i][j];
>  
>              slot = ((VAAPIEncodeAV1Picture*)ref_pic->priv_data)->slot;
>              av_assert0(vpic->reference_frames[slot] == VA_INVALID_SURFACE);
>  
> -            vpic->reference_frames[slot] = ref_pic->recon_surface;
> +            vpic->reference_frames[slot] = ((VAAPIEncodePicture *)ref_pic)->recon_surface;
>          }
>      }
>  
> @@ -651,7 +654,7 @@ static int vaapi_encode_av1_init_picture_params(AVCodecContext *avctx,
>          vpic->bit_offset_cdef_params         = priv->cdef_start_offset;
>          vpic->size_in_bits_cdef_params       = priv->cdef_param_size;
>          vpic->size_in_bits_frame_hdr_obu     = priv->fh_data_len;
> -        vpic->byte_offset_frame_hdr_obu_size = (((pic->type == PICTURE_TYPE_IDR) ?
> +        vpic->byte_offset_frame_hdr_obu_size = (((base_pic->type == PICTURE_TYPE_IDR) ?
>                                                 priv->sh_data_len / 8 : 0) +
>                                                 (fh_obu->header.obu_extension_flag ?
>                                                 2 : 1));
> @@ -693,14 +696,15 @@ static int vaapi_encode_av1_write_picture_header(AVCodecContext *avctx,
>      CodedBitstreamAV1Context *cbctx = priv->cbc->priv_data;
>      AV1RawOBU               *fh_obu = &priv->fh;
>      AV1RawFrameHeader       *rep_fh = &fh_obu->obu.frame_header;
> +    HWBaseEncodePicture *base_pic   = (HWBaseEncodePicture *)pic;
>      VAAPIEncodeAV1Picture *href;
>      int ret = 0;
>  
>      pic->tail_size = 0;
>      /** Pack repeat frame header. */
> -    if (pic->display_order > pic->encode_order) {
> +    if (base_pic->display_order > base_pic->encode_order) {
>          memset(fh_obu, 0, sizeof(*fh_obu));
> -        href = pic->refs[0][pic->nb_refs[0] - 1]->priv_data;
> +        href = base_pic->refs[0][base_pic->nb_refs[0] - 1]->priv_data;
>          fh_obu->header.obu_type = AV1_OBU_FRAME_HEADER;
>          fh_obu->header.obu_has_size_field = 1;
>  
> @@ -862,6 +866,7 @@ static av_cold int vaapi_encode_av1_close(AVCodecContext *avctx)
>  #define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM)
>  
>  static const AVOption vaapi_encode_av1_options[] = {
> +    HW_BASE_ENCODE_COMMON_OPTIONS,
>      VAAPI_ENCODE_COMMON_OPTIONS,
>      VAAPI_ENCODE_RC_OPTIONS,
>      { "profile", "Set profile (seq_profile)",
> diff --git a/libavcodec/vaapi_encode_h264.c b/libavcodec/vaapi_encode_h264.c
> index 37df9103ae..aa011ba307 100644
> --- a/libavcodec/vaapi_encode_h264.c
> +++ b/libavcodec/vaapi_encode_h264.c
> @@ -234,7 +234,7 @@ static int vaapi_encode_h264_write_extra_header(AVCodecContext *avctx,
>                  goto fail;
>          }
>          if (priv->sei_needed & SEI_TIMING) {
> -            if (pic->type == PICTURE_TYPE_IDR) {
> +            if (pic->base.type == PICTURE_TYPE_IDR) {
>                  err = ff_cbs_sei_add_message(priv->cbc, au, 1,
>                                               SEI_TYPE_BUFFERING_PERIOD,
>                                           &priv->sei_buffering_period, NULL);
> @@ -296,6 +296,7 @@ fail:
>  
>  static int vaapi_encode_h264_init_sequence_params(AVCodecContext *avctx)
>  {
> +    HWBaseEncodeContext          *base_ctx = avctx->priv_data;
>      VAAPIEncodeContext                *ctx = avctx->priv_data;
>      VAAPIEncodeH264Context           *priv = avctx->priv_data;
>      H264RawSPS                        *sps = &priv->raw_sps;
> @@ -308,7 +309,7 @@ static int vaapi_encode_h264_init_sequence_params(AVCodecContext *avctx)
>      memset(sps, 0, sizeof(*sps));
>      memset(pps, 0, sizeof(*pps));
>  
> -    desc = av_pix_fmt_desc_get(priv->common.input_frames->sw_format);
> +    desc = av_pix_fmt_desc_get(base_ctx->input_frames->sw_format);
>      av_assert0(desc);
>      if (desc->nb_components == 1 || desc->log2_chroma_w != 1 || desc->log2_chroma_h != 1) {
>          av_log(avctx, AV_LOG_ERROR, "Chroma format of input pixel format "
> @@ -327,18 +328,18 @@ static int vaapi_encode_h264_init_sequence_params(AVCodecContext *avctx)
>          sps->constraint_set1_flag = 1;
>  
>      if (avctx->profile == AV_PROFILE_H264_HIGH || avctx->profile == AV_PROFILE_H264_HIGH_10)
> -        sps->constraint_set3_flag = ctx->gop_size == 1;
> +        sps->constraint_set3_flag = base_ctx->gop_size == 1;
>  
>      if (avctx->profile == AV_PROFILE_H264_MAIN ||
>          avctx->profile == AV_PROFILE_H264_HIGH || avctx->profile == AV_PROFILE_H264_HIGH_10) {
>          sps->constraint_set4_flag = 1;
> -        sps->constraint_set5_flag = ctx->b_per_p == 0;
> +        sps->constraint_set5_flag = base_ctx->b_per_p == 0;
>      }
>  
> -    if (ctx->gop_size == 1)
> +    if (base_ctx->gop_size == 1)
>          priv->dpb_frames = 0;
>      else
> -        priv->dpb_frames = 1 + ctx->max_b_depth;
> +        priv->dpb_frames = 1 + base_ctx->max_b_depth;
>  
>      if (avctx->level != AV_LEVEL_UNKNOWN) {
>          sps->level_idc = avctx->level;
> @@ -375,7 +376,7 @@ static int vaapi_encode_h264_init_sequence_params(AVCodecContext *avctx)
>      sps->bit_depth_chroma_minus8 = bit_depth - 8;
>  
>      sps->log2_max_frame_num_minus4 = 4;
> -    sps->pic_order_cnt_type        = ctx->max_b_depth ? 0 : 2;
> +    sps->pic_order_cnt_type        = base_ctx->max_b_depth ? 0 : 2;
>      if (sps->pic_order_cnt_type == 0) {
>          sps->log2_max_pic_order_cnt_lsb_minus4 = 4;
>      }
> @@ -502,8 +503,8 @@ static int vaapi_encode_h264_init_sequence_params(AVCodecContext *avctx)
>      sps->vui.motion_vectors_over_pic_boundaries_flag = 1;
>      sps->vui.log2_max_mv_length_horizontal = 15;
>      sps->vui.log2_max_mv_length_vertical   = 15;
> -    sps->vui.max_num_reorder_frames        = ctx->max_b_depth;
> -    sps->vui.max_dec_frame_buffering       = ctx->max_b_depth + 1;
> +    sps->vui.max_num_reorder_frames        = base_ctx->max_b_depth;
> +    sps->vui.max_dec_frame_buffering       = base_ctx->max_b_depth + 1;
>  
>      pps->nal_unit_header.nal_ref_idc = 3;
>      pps->nal_unit_header.nal_unit_type = H264_NAL_PPS;
> @@ -536,9 +537,9 @@ static int vaapi_encode_h264_init_sequence_params(AVCodecContext *avctx)
>      *vseq = (VAEncSequenceParameterBufferH264) {
>          .seq_parameter_set_id = sps->seq_parameter_set_id,
>          .level_idc        = sps->level_idc,
> -        .intra_period     = ctx->gop_size,
> -        .intra_idr_period = ctx->gop_size,
> -        .ip_period        = ctx->b_per_p + 1,
> +        .intra_period     = base_ctx->gop_size,
> +        .intra_idr_period = base_ctx->gop_size,
> +        .ip_period        = base_ctx->b_per_p + 1,
>  
>          .bits_per_second       = ctx->va_bit_rate,
>          .max_num_ref_frames    = sps->max_num_ref_frames,
> @@ -622,19 +623,20 @@ static int vaapi_encode_h264_init_sequence_params(AVCodecContext *avctx)
>  static int vaapi_encode_h264_init_picture_params(AVCodecContext *avctx,
>                                                   VAAPIEncodePicture *pic)
>  {
> -    VAAPIEncodeContext               *ctx = avctx->priv_data;
> +    HWBaseEncodeContext         *base_ctx = avctx->priv_data;
>      VAAPIEncodeH264Context          *priv = avctx->priv_data;
> -    VAAPIEncodeH264Picture          *hpic = pic->priv_data;
> -    VAAPIEncodePicture              *prev = pic->prev;
> +    HWBaseEncodePicture         *base_pic = (HWBaseEncodePicture *)pic;
> +    VAAPIEncodeH264Picture          *hpic = base_pic->priv_data;
> +    HWBaseEncodePicture             *prev = base_pic->prev;
>      VAAPIEncodeH264Picture         *hprev = prev ? prev->priv_data : NULL;
>      VAEncPictureParameterBufferH264 *vpic = pic->codec_picture_params;
>      int i, j = 0;
>  
> -    if (pic->type == PICTURE_TYPE_IDR) {
> -        av_assert0(pic->display_order == pic->encode_order);
> +    if (base_pic->type == PICTURE_TYPE_IDR) {
> +        av_assert0(base_pic->display_order == base_pic->encode_order);
>  
>          hpic->frame_num      = 0;
> -        hpic->last_idr_frame = pic->display_order;
> +        hpic->last_idr_frame = base_pic->display_order;
>          hpic->idr_pic_id     = hprev ? hprev->idr_pic_id + 1 : 0;
>  
>          hpic->primary_pic_type = 0;
> @@ -647,10 +649,10 @@ static int vaapi_encode_h264_init_picture_params(AVCodecContext *avctx,
>          hpic->last_idr_frame = hprev->last_idr_frame;
>          hpic->idr_pic_id     = hprev->idr_pic_id;
>  
> -        if (pic->type == PICTURE_TYPE_I) {
> +        if (base_pic->type == PICTURE_TYPE_I) {
>              hpic->slice_type       = 7;
>              hpic->primary_pic_type = 0;
> -        } else if (pic->type == PICTURE_TYPE_P) {
> +        } else if (base_pic->type == PICTURE_TYPE_P) {
>              hpic->slice_type       = 5;
>              hpic->primary_pic_type = 1;
>          } else {
> @@ -658,13 +660,13 @@ static int vaapi_encode_h264_init_picture_params(AVCodecContext *avctx,
>              hpic->primary_pic_type = 2;
>          }
>      }
> -    hpic->pic_order_cnt = pic->display_order - hpic->last_idr_frame;
> +    hpic->pic_order_cnt = base_pic->display_order - hpic->last_idr_frame;
>      if (priv->raw_sps.pic_order_cnt_type == 2) {
>          hpic->pic_order_cnt *= 2;
>      }
>  
> -    hpic->dpb_delay     = pic->display_order - pic->encode_order + ctx->max_b_depth;
> -    hpic->cpb_delay     = pic->encode_order - hpic->last_idr_frame;
> +    hpic->dpb_delay     = base_pic->display_order - base_pic->encode_order + base_ctx->max_b_depth;
> +    hpic->cpb_delay     = base_pic->encode_order - hpic->last_idr_frame;
>  
>      if (priv->aud) {
>          priv->aud_needed = 1;
> @@ -680,7 +682,7 @@ static int vaapi_encode_h264_init_picture_params(AVCodecContext *avctx,
>  
>      priv->sei_needed = 0;
>  
> -    if (priv->sei & SEI_IDENTIFIER && pic->encode_order == 0)
> +    if (priv->sei & SEI_IDENTIFIER && base_pic->encode_order == 0)
>          priv->sei_needed |= SEI_IDENTIFIER;
>  #if !CONFIG_VAAPI_1
>      if (ctx->va_rc_mode == VA_RC_CBR)
> @@ -696,11 +698,11 @@ static int vaapi_encode_h264_init_picture_params(AVCodecContext *avctx,
>          priv->sei_needed |= SEI_TIMING;
>      }
>  
> -    if (priv->sei & SEI_RECOVERY_POINT && pic->type == PICTURE_TYPE_I) {
> +    if (priv->sei & SEI_RECOVERY_POINT && base_pic->type == PICTURE_TYPE_I) {
>          priv->sei_recovery_point = (H264RawSEIRecoveryPoint) {
>              .recovery_frame_cnt = 0,
>              .exact_match_flag   = 1,
> -            .broken_link_flag   = ctx->b_per_p > 0,
> +            .broken_link_flag   = base_ctx->b_per_p > 0,
>          };
>  
>          priv->sei_needed |= SEI_RECOVERY_POINT;
> @@ -710,7 +712,7 @@ static int vaapi_encode_h264_init_picture_params(AVCodecContext *avctx,
>          int err;
>          size_t sei_a53cc_len;
>          av_freep(&priv->sei_a53cc_data);
> -        err = ff_alloc_a53_sei(pic->input_image, 0, &priv->sei_a53cc_data, &sei_a53cc_len);
> +        err = ff_alloc_a53_sei(base_pic->input_image, 0, &priv->sei_a53cc_data, &sei_a53cc_len);
>          if (err < 0)
>              return err;
>          if (priv->sei_a53cc_data != NULL) {
> @@ -730,15 +732,15 @@ static int vaapi_encode_h264_init_picture_params(AVCodecContext *avctx,
>          .BottomFieldOrderCnt = hpic->pic_order_cnt,
>      };
>      for (int k = 0; k < MAX_REFERENCE_LIST_NUM; k++) {
> -        for (i = 0; i < pic->nb_refs[k]; i++) {
> -            VAAPIEncodePicture      *ref = pic->refs[k][i];
> +        for (i = 0; i < base_pic->nb_refs[k]; i++) {
> +            HWBaseEncodePicture    *ref = base_pic->refs[k][i];
>              VAAPIEncodeH264Picture *href;
>  
> -            av_assert0(ref && ref->encode_order < pic->encode_order);
> +            av_assert0(ref && ref->encode_order < base_pic->encode_order);
>              href = ref->priv_data;
>  
>              vpic->ReferenceFrames[j++] = (VAPictureH264) {
> -                .picture_id          = ref->recon_surface,
> +                .picture_id          = ((VAAPIEncodePicture *)ref)->recon_surface,
>                  .frame_idx           = href->frame_num,
>                  .flags               = VA_PICTURE_H264_SHORT_TERM_REFERENCE,
>                  .TopFieldOrderCnt    = href->pic_order_cnt,
> @@ -758,8 +760,8 @@ static int vaapi_encode_h264_init_picture_params(AVCodecContext *avctx,
>  
>      vpic->frame_num = hpic->frame_num;
>  
> -    vpic->pic_fields.bits.idr_pic_flag       = (pic->type == PICTURE_TYPE_IDR);
> -    vpic->pic_fields.bits.reference_pic_flag = (pic->type != PICTURE_TYPE_B);
> +    vpic->pic_fields.bits.idr_pic_flag       = (base_pic->type == PICTURE_TYPE_IDR);
> +    vpic->pic_fields.bits.reference_pic_flag = (base_pic->type != PICTURE_TYPE_B);
>  
>      return 0;
>  }
> @@ -770,31 +772,32 @@ static void vaapi_encode_h264_default_ref_pic_list(AVCodecContext *avctx,
>                                                     VAAPIEncodePicture **rpl1,
>                                                     int *rpl_size)
>  {
> -    VAAPIEncodePicture *prev;
> +    HWBaseEncodePicture *base_pic = (HWBaseEncodePicture *)pic;
> +    HWBaseEncodePicture *prev;
>      VAAPIEncodeH264Picture *hp, *hn, *hc;
>      int i, j, n = 0;
>  
> -    prev = pic->prev;
> +    prev = base_pic->prev;
>      av_assert0(prev);
> -    hp = pic->priv_data;
> +    hp = base_pic->priv_data;
>  
> -    for (i = 0; i < pic->prev->nb_dpb_pics; i++) {
> +    for (i = 0; i < base_pic->prev->nb_dpb_pics; i++) {
>          hn = prev->dpb[i]->priv_data;
>          av_assert0(hn->frame_num < hp->frame_num);
>  
> -        if (pic->type == PICTURE_TYPE_P) {
> +        if (base_pic->type == PICTURE_TYPE_P) {
>              for (j = n; j > 0; j--) {
> -                hc = rpl0[j - 1]->priv_data;
> +                hc = rpl0[j - 1]->base.priv_data;
>                  av_assert0(hc->frame_num != hn->frame_num);
>                  if (hc->frame_num > hn->frame_num)
>                      break;
>                  rpl0[j] = rpl0[j - 1];
>              }
> -            rpl0[j] = prev->dpb[i];
> +            rpl0[j] = (VAAPIEncodePicture *)prev->dpb[i];
>  
> -        } else if (pic->type == PICTURE_TYPE_B) {
> +        } else if (base_pic->type == PICTURE_TYPE_B) {
>              for (j = n; j > 0; j--) {
> -                hc = rpl0[j - 1]->priv_data;
> +                hc = rpl0[j - 1]->base.priv_data;
>                  av_assert0(hc->pic_order_cnt != hp->pic_order_cnt);
>                  if (hc->pic_order_cnt < hp->pic_order_cnt) {
>                      if (hn->pic_order_cnt > hp->pic_order_cnt ||
> @@ -806,10 +809,10 @@ static void vaapi_encode_h264_default_ref_pic_list(AVCodecContext *avctx,
>                  }
>                  rpl0[j] = rpl0[j - 1];
>              }
> -            rpl0[j] = prev->dpb[i];
> +            rpl0[j] = (VAAPIEncodePicture *)prev->dpb[i];
>  
>              for (j = n; j > 0; j--) {
> -                hc = rpl1[j - 1]->priv_data;
> +                hc = rpl1[j - 1]->base.priv_data;
>                  av_assert0(hc->pic_order_cnt != hp->pic_order_cnt);
>                  if (hc->pic_order_cnt > hp->pic_order_cnt) {
>                      if (hn->pic_order_cnt < hp->pic_order_cnt ||
> @@ -821,13 +824,13 @@ static void vaapi_encode_h264_default_ref_pic_list(AVCodecContext *avctx,
>                  }
>                  rpl1[j] = rpl1[j - 1];
>              }
> -            rpl1[j] = prev->dpb[i];
> +            rpl1[j] = (VAAPIEncodePicture *)prev->dpb[i];
>          }
>  
>          ++n;
>      }
>  
> -    if (pic->type == PICTURE_TYPE_B) {
> +    if (base_pic->type == PICTURE_TYPE_B) {
>          for (i = 0; i < n; i++) {
>              if (rpl0[i] != rpl1[i])
>                  break;
> @@ -836,22 +839,22 @@ static void vaapi_encode_h264_default_ref_pic_list(AVCodecContext *avctx,
>              FFSWAP(VAAPIEncodePicture*, rpl1[0], rpl1[1]);
>      }
>  
> -    if (pic->type == PICTURE_TYPE_P ||
> -        pic->type == PICTURE_TYPE_B) {
> +    if (base_pic->type == PICTURE_TYPE_P ||
> +        base_pic->type == PICTURE_TYPE_B) {
>          av_log(avctx, AV_LOG_DEBUG, "Default RefPicList0 for fn=%d/poc=%d:",
>                 hp->frame_num, hp->pic_order_cnt);
>          for (i = 0; i < n; i++) {
> -            hn = rpl0[i]->priv_data;
> +            hn = rpl0[i]->base.priv_data;
>              av_log(avctx, AV_LOG_DEBUG, "  fn=%d/poc=%d",
>                     hn->frame_num, hn->pic_order_cnt);
>          }
>          av_log(avctx, AV_LOG_DEBUG, "\n");
>      }
> -    if (pic->type == PICTURE_TYPE_B) {
> +    if (base_pic->type == PICTURE_TYPE_B) {
>          av_log(avctx, AV_LOG_DEBUG, "Default RefPicList1 for fn=%d/poc=%d:",
>                 hp->frame_num, hp->pic_order_cnt);
>          for (i = 0; i < n; i++) {
> -            hn = rpl1[i]->priv_data;
> +            hn = rpl1[i]->base.priv_data;
>              av_log(avctx, AV_LOG_DEBUG, "  fn=%d/poc=%d",
>                     hn->frame_num, hn->pic_order_cnt);
>          }
> @@ -866,8 +869,9 @@ static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx,
>                                                 VAAPIEncodeSlice *slice)
>  {
>      VAAPIEncodeH264Context          *priv = avctx->priv_data;
> -    VAAPIEncodeH264Picture          *hpic = pic->priv_data;
> -    VAAPIEncodePicture              *prev = pic->prev;
> +    HWBaseEncodePicture          *base_pic = (HWBaseEncodePicture *)pic;
> +    VAAPIEncodeH264Picture          *hpic = base_pic->priv_data;
> +    HWBaseEncodePicture             *prev = base_pic->prev;
>      H264RawSPS                       *sps = &priv->raw_sps;
>      H264RawPPS                       *pps = &priv->raw_pps;
>      H264RawSliceHeader                *sh = &priv->raw_slice.header;
> @@ -875,12 +879,12 @@ static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx,
>      VAEncSliceParameterBufferH264 *vslice = slice->codec_slice_params;
>      int i, j;
>  
> -    if (pic->type == PICTURE_TYPE_IDR) {
> +    if (base_pic->type == PICTURE_TYPE_IDR) {
>          sh->nal_unit_header.nal_unit_type = H264_NAL_IDR_SLICE;
>          sh->nal_unit_header.nal_ref_idc   = 3;
>      } else {
>          sh->nal_unit_header.nal_unit_type = H264_NAL_SLICE;
> -        sh->nal_unit_header.nal_ref_idc   = pic->is_reference;
> +        sh->nal_unit_header.nal_ref_idc   = base_pic->is_reference;
>      }
>  
>      sh->first_mb_in_slice = slice->block_start;
> @@ -896,25 +900,25 @@ static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx,
>  
>      sh->direct_spatial_mv_pred_flag = 1;
>  
> -    if (pic->type == PICTURE_TYPE_B)
> +    if (base_pic->type == PICTURE_TYPE_B)
>          sh->slice_qp_delta = priv->fixed_qp_b - (pps->pic_init_qp_minus26 + 26);
> -    else if (pic->type == PICTURE_TYPE_P)
> +    else if (base_pic->type == PICTURE_TYPE_P)
>          sh->slice_qp_delta = priv->fixed_qp_p - (pps->pic_init_qp_minus26 + 26);
>      else
>          sh->slice_qp_delta = priv->fixed_qp_idr - (pps->pic_init_qp_minus26 + 26);
>  
> -    if (pic->is_reference && pic->type != PICTURE_TYPE_IDR) {
> -        VAAPIEncodePicture *discard_list[MAX_DPB_SIZE];
> +    if (base_pic->is_reference && base_pic->type != PICTURE_TYPE_IDR) {
> +        HWBaseEncodePicture *discard_list[MAX_DPB_SIZE];
>          int discard = 0, keep = 0;
>  
>          // Discard everything which is in the DPB of the previous frame but
>          // not in the DPB of this one.
>          for (i = 0; i < prev->nb_dpb_pics; i++) {
> -            for (j = 0; j < pic->nb_dpb_pics; j++) {
> -                if (prev->dpb[i] == pic->dpb[j])
> +            for (j = 0; j < base_pic->nb_dpb_pics; j++) {
> +                if (prev->dpb[i] == base_pic->dpb[j])
>                      break;
>              }
> -            if (j == pic->nb_dpb_pics) {
> +            if (j == base_pic->nb_dpb_pics) {
>                  discard_list[discard] = prev->dpb[i];
>                  ++discard;
>              } else {
> @@ -940,7 +944,7 @@ static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx,
>  
>      // If the intended references are not the first entries of RefPicListN
>      // by default, use ref-pic-list-modification to move them there.
> -    if (pic->type == PICTURE_TYPE_P || pic->type == PICTURE_TYPE_B) {
> +    if (base_pic->type == PICTURE_TYPE_P || base_pic->type == PICTURE_TYPE_B) {
>          VAAPIEncodePicture *def_l0[MAX_DPB_SIZE], *def_l1[MAX_DPB_SIZE];
>          VAAPIEncodeH264Picture *href;
>          int n;
> @@ -948,19 +952,19 @@ static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx,
>          vaapi_encode_h264_default_ref_pic_list(avctx, pic,
>                                                 def_l0, def_l1, &n);
>  
> -        if (pic->type == PICTURE_TYPE_P) {
> +        if (base_pic->type == PICTURE_TYPE_P) {
>              int need_rplm = 0;
> -            for (i = 0; i < pic->nb_refs[0]; i++) {
> -                av_assert0(pic->refs[0][i]);
> -                if (pic->refs[0][i] != def_l0[i])
> +            for (i = 0; i < base_pic->nb_refs[0]; i++) {
> +                av_assert0(base_pic->refs[0][i]);
> +                if (base_pic->refs[0][i] != (HWBaseEncodePicture *)def_l0[i])
>                      need_rplm = 1;
>              }
>  
>              sh->ref_pic_list_modification_flag_l0 = need_rplm;
>              if (need_rplm) {
>                  int pic_num = hpic->frame_num;
> -                for (i = 0; i < pic->nb_refs[0]; i++) {
> -                    href = pic->refs[0][i]->priv_data;
> +                for (i = 0; i < base_pic->nb_refs[0]; i++) {
> +                    href = base_pic->refs[0][i]->priv_data;
>                      av_assert0(href->frame_num != pic_num);
>                      if (href->frame_num < pic_num) {
>                          sh->rplm_l0[i].modification_of_pic_nums_idc = 0;
> @@ -979,20 +983,20 @@ static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx,
>          } else {
>              int need_rplm_l0 = 0, need_rplm_l1 = 0;
>              int n0 = 0, n1 = 0;
> -            for (i = 0; i < pic->nb_refs[0]; i++) {
> -                av_assert0(pic->refs[0][i]);
> -                href = pic->refs[0][i]->priv_data;
> +            for (i = 0; i < base_pic->nb_refs[0]; i++) {
> +                av_assert0(base_pic->refs[0][i]);
> +                href = base_pic->refs[0][i]->priv_data;
>                  av_assert0(href->pic_order_cnt < hpic->pic_order_cnt);
> -                if (pic->refs[0][i] != def_l0[n0])
> +                if (base_pic->refs[0][i] != (HWBaseEncodePicture *)def_l0[n0])
>                      need_rplm_l0 = 1;
>                  ++n0;
>              }
>  
> -            for (i = 0; i < pic->nb_refs[1]; i++) {
> -                av_assert0(pic->refs[1][i]);
> -                href = pic->refs[1][i]->priv_data;
> +            for (i = 0; i < base_pic->nb_refs[1]; i++) {
> +                av_assert0(base_pic->refs[1][i]);
> +                href = base_pic->refs[1][i]->priv_data;
>                  av_assert0(href->pic_order_cnt > hpic->pic_order_cnt);
> -                if (pic->refs[1][i] != def_l1[n1])
> +                if (base_pic->refs[1][i] != (HWBaseEncodePicture *)def_l1[n1])
>                      need_rplm_l1 = 1;
>                  ++n1;
>              }
> @@ -1000,8 +1004,8 @@ static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx,
>              sh->ref_pic_list_modification_flag_l0 = need_rplm_l0;
>              if (need_rplm_l0) {
>                  int pic_num = hpic->frame_num;
> -                for (i = j = 0; i < pic->nb_refs[0]; i++) {
> -                    href = pic->refs[0][i]->priv_data;
> +                for (i = j = 0; i < base_pic->nb_refs[0]; i++) {
> +                    href = base_pic->refs[0][i]->priv_data;
>                      av_assert0(href->frame_num != pic_num);
>                      if (href->frame_num < pic_num) {
>                          sh->rplm_l0[j].modification_of_pic_nums_idc = 0;
> @@ -1022,8 +1026,8 @@ static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx,
>              sh->ref_pic_list_modification_flag_l1 = need_rplm_l1;
>              if (need_rplm_l1) {
>                  int pic_num = hpic->frame_num;
> -                for (i = j = 0; i < pic->nb_refs[1]; i++) {
> -                    href = pic->refs[1][i]->priv_data;
> +                for (i = j = 0; i < base_pic->nb_refs[1]; i++) {
> +                    href = base_pic->refs[1][i]->priv_data;
>                      av_assert0(href->frame_num != pic_num);
>                      if (href->frame_num < pic_num) {
>                          sh->rplm_l1[j].modification_of_pic_nums_idc = 0;
> @@ -1063,15 +1067,15 @@ static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx,
>          vslice->RefPicList1[i].flags      = VA_PICTURE_H264_INVALID;
>      }
>  
> -    if (pic->nb_refs[0]) {
> +    if (base_pic->nb_refs[0]) {
>          // Backward reference for P- or B-frame.
> -        av_assert0(pic->type == PICTURE_TYPE_P ||
> -                   pic->type == PICTURE_TYPE_B);
> +        av_assert0(base_pic->type == PICTURE_TYPE_P ||
> +                   base_pic->type == PICTURE_TYPE_B);
>          vslice->RefPicList0[0] = vpic->ReferenceFrames[0];
>      }
> -    if (pic->nb_refs[1]) {
> +    if (base_pic->nb_refs[1]) {
>          // Forward reference for B-frame.
> -        av_assert0(pic->type == PICTURE_TYPE_B);
> +        av_assert0(base_pic->type == PICTURE_TYPE_B);
>          vslice->RefPicList1[0] = vpic->ReferenceFrames[1];
>      }
>  
> @@ -1082,8 +1086,9 @@ static int vaapi_encode_h264_init_slice_params(AVCodecContext *avctx,
>  
>  static av_cold int vaapi_encode_h264_configure(AVCodecContext *avctx)
>  {
> -    VAAPIEncodeContext      *ctx = avctx->priv_data;
> -    VAAPIEncodeH264Context *priv = avctx->priv_data;
> +    HWBaseEncodeContext *base_ctx = avctx->priv_data;
> +    VAAPIEncodeContext       *ctx = avctx->priv_data;
> +    VAAPIEncodeH264Context  *priv = avctx->priv_data;
>      int err;
>  
>      err = ff_cbs_init(&priv->cbc, AV_CODEC_ID_H264, avctx);
> @@ -1094,7 +1099,7 @@ static av_cold int vaapi_encode_h264_configure(AVCodecContext *avctx)
>      priv->mb_height = FFALIGN(avctx->height, 16) / 16;
>  
>      if (ctx->va_rc_mode == VA_RC_CQP) {
> -        priv->fixed_qp_p = av_clip(ctx->rc_quality, 1, 51);
> +        priv->fixed_qp_p = av_clip(base_ctx->rc_quality, 1, 51);
>          if (avctx->i_quant_factor > 0.0)
>              priv->fixed_qp_idr =
>                  av_clip((avctx->i_quant_factor * priv->fixed_qp_p +
> @@ -1202,8 +1207,9 @@ static const VAAPIEncodeType vaapi_encode_type_h264 = {
>  
>  static av_cold int vaapi_encode_h264_init(AVCodecContext *avctx)
>  {
> -    VAAPIEncodeContext      *ctx = avctx->priv_data;
> -    VAAPIEncodeH264Context *priv = avctx->priv_data;
> +    HWBaseEncodeContext *base_ctx = avctx->priv_data;
> +    VAAPIEncodeContext       *ctx = avctx->priv_data;
> +    VAAPIEncodeH264Context  *priv = avctx->priv_data;
>  
>      ctx->codec = &vaapi_encode_type_h264;
>  
> @@ -1251,13 +1257,13 @@ static av_cold int vaapi_encode_h264_init(AVCodecContext *avctx)
>          VA_ENC_PACKED_HEADER_SLICE    | // Slice headers.
>          VA_ENC_PACKED_HEADER_MISC;      // SEI.
>  
> -    ctx->surface_width  = FFALIGN(avctx->width,  16);
> -    ctx->surface_height = FFALIGN(avctx->height, 16);
> +    base_ctx->surface_width  = FFALIGN(avctx->width,  16);
> +    base_ctx->surface_height = FFALIGN(avctx->height, 16);
>  
> -    ctx->slice_block_height = ctx->slice_block_width = 16;
> +    base_ctx->slice_block_height = base_ctx->slice_block_width = 16;
>  
>      if (priv->qp > 0)
> -        ctx->explicit_qp = priv->qp;
> +        base_ctx->explicit_qp = priv->qp;
>  
>      return ff_vaapi_encode_init(avctx);
>  }
> @@ -1277,6 +1283,7 @@ static av_cold int vaapi_encode_h264_close(AVCodecContext *avctx)
>  #define OFFSET(x) offsetof(VAAPIEncodeH264Context, x)
>  #define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM)
>  static const AVOption vaapi_encode_h264_options[] = {
> +    HW_BASE_ENCODE_COMMON_OPTIONS,
>      VAAPI_ENCODE_COMMON_OPTIONS,
>      VAAPI_ENCODE_RC_OPTIONS,
>  
> diff --git a/libavcodec/vaapi_encode_h265.c b/libavcodec/vaapi_encode_h265.c
> index c4aabbf5ed..4f5d8fc76f 100644
> --- a/libavcodec/vaapi_encode_h265.c
> +++ b/libavcodec/vaapi_encode_h265.c
> @@ -260,6 +260,7 @@ fail:
>  
>  static int vaapi_encode_h265_init_sequence_params(AVCodecContext *avctx)
>  {
> +    HWBaseEncodeContext          *base_ctx = avctx->priv_data;
>      VAAPIEncodeContext                *ctx = avctx->priv_data;
>      VAAPIEncodeH265Context           *priv = avctx->priv_data;
>      H265RawVPS                        *vps = &priv->raw_vps;
> @@ -278,7 +279,7 @@ static int vaapi_encode_h265_init_sequence_params(AVCodecContext *avctx)
>      memset(pps, 0, sizeof(*pps));
>  
>  
> -    desc = av_pix_fmt_desc_get(priv->common.input_frames->sw_format);
> +    desc = av_pix_fmt_desc_get(base_ctx->input_frames->sw_format);
>      av_assert0(desc);
>      if (desc->nb_components == 1) {
>          chroma_format = 0;
> @@ -341,7 +342,7 @@ static int vaapi_encode_h265_init_sequence_params(AVCodecContext *avctx)
>      ptl->general_max_420chroma_constraint_flag  = chroma_format <= 1;
>      ptl->general_max_monochrome_constraint_flag = chroma_format == 0;
>  
> -    ptl->general_intra_constraint_flag = ctx->gop_size == 1;
> +    ptl->general_intra_constraint_flag = base_ctx->gop_size == 1;
>      ptl->general_one_picture_only_constraint_flag = 0;
>  
>      ptl->general_lower_bit_rate_constraint_flag = 1;
> @@ -352,9 +353,9 @@ static int vaapi_encode_h265_init_sequence_params(AVCodecContext *avctx)
>          const H265LevelDescriptor *level;
>  
>          level = ff_h265_guess_level(ptl, avctx->bit_rate,
> -                                    ctx->surface_width, ctx->surface_height,
> +                                    base_ctx->surface_width, base_ctx->surface_height,
>                                      ctx->nb_slices, ctx->tile_rows, ctx->tile_cols,
> -                                    (ctx->b_per_p > 0) + 1);
> +                                    (base_ctx->b_per_p > 0) + 1);
>          if (level) {
>              av_log(avctx, AV_LOG_VERBOSE, "Using level %s.\n", level->name);
>              ptl->general_level_idc = level->level_idc;
> @@ -368,8 +369,8 @@ static int vaapi_encode_h265_init_sequence_params(AVCodecContext *avctx)
>      }
>  
>      vps->vps_sub_layer_ordering_info_present_flag = 0;
> -    vps->vps_max_dec_pic_buffering_minus1[0]      = ctx->max_b_depth + 1;
> -    vps->vps_max_num_reorder_pics[0]              = ctx->max_b_depth;
> +    vps->vps_max_dec_pic_buffering_minus1[0]      = base_ctx->max_b_depth + 1;
> +    vps->vps_max_num_reorder_pics[0]              = base_ctx->max_b_depth;
>      vps->vps_max_latency_increase_plus1[0]        = 0;
>  
>      vps->vps_max_layer_id             = 0;
> @@ -410,18 +411,18 @@ static int vaapi_encode_h265_init_sequence_params(AVCodecContext *avctx)
>      sps->chroma_format_idc          = chroma_format;
>      sps->separate_colour_plane_flag = 0;
>  
> -    sps->pic_width_in_luma_samples  = ctx->surface_width;
> -    sps->pic_height_in_luma_samples = ctx->surface_height;
> +    sps->pic_width_in_luma_samples  = base_ctx->surface_width;
> +    sps->pic_height_in_luma_samples = base_ctx->surface_height;
>  
> -    if (avctx->width  != ctx->surface_width ||
> -        avctx->height != ctx->surface_height) {
> +    if (avctx->width  != base_ctx->surface_width ||
> +        avctx->height != base_ctx->surface_height) {
>          sps->conformance_window_flag = 1;
>          sps->conf_win_left_offset   = 0;
>          sps->conf_win_right_offset  =
> -            (ctx->surface_width - avctx->width) >> desc->log2_chroma_w;
> +            (base_ctx->surface_width - avctx->width) >> desc->log2_chroma_w;
>          sps->conf_win_top_offset    = 0;
>          sps->conf_win_bottom_offset =
> -            (ctx->surface_height - avctx->height) >> desc->log2_chroma_h;
> +            (base_ctx->surface_height - avctx->height) >> desc->log2_chroma_h;
>      } else {
>          sps->conformance_window_flag = 0;
>      }
> @@ -643,9 +644,9 @@ static int vaapi_encode_h265_init_sequence_params(AVCodecContext *avctx)
>          .general_level_idc   = vps->profile_tier_level.general_level_idc,
>          .general_tier_flag   = vps->profile_tier_level.general_tier_flag,
>  
> -        .intra_period     = ctx->gop_size,
> -        .intra_idr_period = ctx->gop_size,
> -        .ip_period        = ctx->b_per_p + 1,
> +        .intra_period     = base_ctx->gop_size,
> +        .intra_idr_period = base_ctx->gop_size,
> +        .ip_period        = base_ctx->b_per_p + 1,
>          .bits_per_second  = ctx->va_bit_rate,
>  
>          .pic_width_in_luma_samples  = sps->pic_width_in_luma_samples,
> @@ -758,18 +759,19 @@ static int vaapi_encode_h265_init_sequence_params(AVCodecContext *avctx)
>  static int vaapi_encode_h265_init_picture_params(AVCodecContext *avctx,
>                                                   VAAPIEncodePicture *pic)
>  {
> -    VAAPIEncodeContext               *ctx = avctx->priv_data;
> +    HWBaseEncodeContext         *base_ctx = avctx->priv_data;
>      VAAPIEncodeH265Context          *priv = avctx->priv_data;
> -    VAAPIEncodeH265Picture          *hpic = pic->priv_data;
> -    VAAPIEncodePicture              *prev = pic->prev;
> +    HWBaseEncodePicture         *base_pic = (HWBaseEncodePicture *)pic;
> +    VAAPIEncodeH265Picture          *hpic = base_pic->priv_data;
> +    HWBaseEncodePicture             *prev = base_pic->prev;
>      VAAPIEncodeH265Picture         *hprev = prev ? prev->priv_data : NULL;
>      VAEncPictureParameterBufferHEVC *vpic = pic->codec_picture_params;
>      int i, j = 0;
>  
> -    if (pic->type == PICTURE_TYPE_IDR) {
> -        av_assert0(pic->display_order == pic->encode_order);
> +    if (base_pic->type == PICTURE_TYPE_IDR) {
> +        av_assert0(base_pic->display_order == base_pic->encode_order);
>  
> -        hpic->last_idr_frame = pic->display_order;
> +        hpic->last_idr_frame = base_pic->display_order;
>  
>          hpic->slice_nal_unit = HEVC_NAL_IDR_W_RADL;
>          hpic->slice_type     = HEVC_SLICE_I;
> @@ -778,23 +780,23 @@ static int vaapi_encode_h265_init_picture_params(AVCodecContext *avctx,
>          av_assert0(prev);
>          hpic->last_idr_frame = hprev->last_idr_frame;
>  
> -        if (pic->type == PICTURE_TYPE_I) {
> +        if (base_pic->type == PICTURE_TYPE_I) {
>              hpic->slice_nal_unit = HEVC_NAL_CRA_NUT;
>              hpic->slice_type     = HEVC_SLICE_I;
>              hpic->pic_type       = 0;
> -        } else if (pic->type == PICTURE_TYPE_P) {
> -            av_assert0(pic->refs[0]);
> +        } else if (base_pic->type == PICTURE_TYPE_P) {
> +            av_assert0(base_pic->refs[0]);
>              hpic->slice_nal_unit = HEVC_NAL_TRAIL_R;
>              hpic->slice_type     = HEVC_SLICE_P;
>              hpic->pic_type       = 1;
>          } else {
> -            VAAPIEncodePicture *irap_ref;
> -            av_assert0(pic->refs[0][0] && pic->refs[1][0]);
> -            for (irap_ref = pic; irap_ref; irap_ref = irap_ref->refs[1][0]) {
> +            HWBaseEncodePicture *irap_ref;
> +            av_assert0(base_pic->refs[0][0] && base_pic->refs[1][0]);
> +            for (irap_ref = base_pic; irap_ref; irap_ref = irap_ref->refs[1][0]) {
>                  if (irap_ref->type == PICTURE_TYPE_I)
>                      break;
>              }
> -            if (pic->b_depth == ctx->max_b_depth) {
> +            if (base_pic->b_depth == base_ctx->max_b_depth) {
>                  hpic->slice_nal_unit = irap_ref ? HEVC_NAL_RASL_N
>                                                  : HEVC_NAL_TRAIL_N;
>              } else {
> @@ -805,7 +807,7 @@ static int vaapi_encode_h265_init_picture_params(AVCodecContext *avctx,
>              hpic->pic_type   = 2;
>          }
>      }
> -    hpic->pic_order_cnt = pic->display_order - hpic->last_idr_frame;
> +    hpic->pic_order_cnt = base_pic->display_order - hpic->last_idr_frame;
>  
>      if (priv->aud) {
>          priv->aud_needed = 1;
> @@ -827,9 +829,9 @@ static int vaapi_encode_h265_init_picture_params(AVCodecContext *avctx,
>      // may force an IDR frame on the output where the medadata gets
>      // changed on the input frame.
>      if ((priv->sei & SEI_MASTERING_DISPLAY) &&
> -        (pic->type == PICTURE_TYPE_I || pic->type == PICTURE_TYPE_IDR)) {
> +        (base_pic->type == PICTURE_TYPE_I || base_pic->type == PICTURE_TYPE_IDR)) {
>          AVFrameSideData *sd =
> -            av_frame_get_side_data(pic->input_image,
> +            av_frame_get_side_data(base_pic->input_image,
>                                     AV_FRAME_DATA_MASTERING_DISPLAY_METADATA);
>  
>          if (sd) {
> @@ -875,9 +877,9 @@ static int vaapi_encode_h265_init_picture_params(AVCodecContext *avctx,
>      }
>  
>      if ((priv->sei & SEI_CONTENT_LIGHT_LEVEL) &&
> -        (pic->type == PICTURE_TYPE_I || pic->type == PICTURE_TYPE_IDR)) {
> +        (base_pic->type == PICTURE_TYPE_I || base_pic->type == PICTURE_TYPE_IDR)) {
>          AVFrameSideData *sd =
> -            av_frame_get_side_data(pic->input_image,
> +            av_frame_get_side_data(base_pic->input_image,
>                                     AV_FRAME_DATA_CONTENT_LIGHT_LEVEL);
>  
>          if (sd) {
> @@ -897,7 +899,7 @@ static int vaapi_encode_h265_init_picture_params(AVCodecContext *avctx,
>          int err;
>          size_t sei_a53cc_len;
>          av_freep(&priv->sei_a53cc_data);
> -        err = ff_alloc_a53_sei(pic->input_image, 0, &priv->sei_a53cc_data, &sei_a53cc_len);
> +        err = ff_alloc_a53_sei(base_pic->input_image, 0, &priv->sei_a53cc_data, &sei_a53cc_len);
>          if (err < 0)
>              return err;
>          if (priv->sei_a53cc_data != NULL) {
> @@ -916,19 +918,19 @@ static int vaapi_encode_h265_init_picture_params(AVCodecContext *avctx,
>      };
>  
>      for (int k = 0; k < MAX_REFERENCE_LIST_NUM; k++) {
> -        for (i = 0; i < pic->nb_refs[k]; i++) {
> -            VAAPIEncodePicture      *ref = pic->refs[k][i];
> +        for (i = 0; i < base_pic->nb_refs[k]; i++) {
> +            HWBaseEncodePicture    *ref = base_pic->refs[k][i];
>              VAAPIEncodeH265Picture *href;
>  
> -            av_assert0(ref && ref->encode_order < pic->encode_order);
> +            av_assert0(ref && ref->encode_order < base_pic->encode_order);
>              href = ref->priv_data;
>  
>              vpic->reference_frames[j++] = (VAPictureHEVC) {
> -                .picture_id    = ref->recon_surface,
> +                .picture_id    = ((VAAPIEncodePicture *)ref)->recon_surface,
>                  .pic_order_cnt = href->pic_order_cnt,
> -                .flags = (ref->display_order < pic->display_order ?
> +                .flags = (ref->display_order < base_pic->display_order ?
>                            VA_PICTURE_HEVC_RPS_ST_CURR_BEFORE : 0) |
> -                          (ref->display_order > pic->display_order ?
> +                          (ref->display_order > base_pic->display_order ?
>                            VA_PICTURE_HEVC_RPS_ST_CURR_AFTER  : 0),
>              };
>          }
> @@ -945,7 +947,7 @@ static int vaapi_encode_h265_init_picture_params(AVCodecContext *avctx,
>  
>      vpic->nal_unit_type = hpic->slice_nal_unit;
>  
> -    switch (pic->type) {
> +    switch (base_pic->type) {
>      case PICTURE_TYPE_IDR:
>          vpic->pic_fields.bits.idr_pic_flag       = 1;
>          vpic->pic_fields.bits.coding_type        = 1;
> @@ -977,9 +979,10 @@ static int vaapi_encode_h265_init_slice_params(AVCodecContext *avctx,
>                                                 VAAPIEncodePicture *pic,
>                                                 VAAPIEncodeSlice *slice)
>  {
> -    VAAPIEncodeContext                *ctx = avctx->priv_data;
> +    HWBaseEncodeContext          *base_ctx = avctx->priv_data;
>      VAAPIEncodeH265Context           *priv = avctx->priv_data;
> -    VAAPIEncodeH265Picture           *hpic = pic->priv_data;
> +    HWBaseEncodePicture          *base_pic = (HWBaseEncodePicture *)pic;
> +    VAAPIEncodeH265Picture           *hpic = base_pic->priv_data;
>      const H265RawSPS                  *sps = &priv->raw_sps;
>      const H265RawPPS                  *pps = &priv->raw_pps;
>      H265RawSliceHeader                 *sh = &priv->raw_slice.header;
> @@ -1000,13 +1003,13 @@ static int vaapi_encode_h265_init_slice_params(AVCodecContext *avctx,
>  
>      sh->slice_type = hpic->slice_type;
>  
> -    if (sh->slice_type == HEVC_SLICE_P && ctx->p_to_gpb)
> +    if (sh->slice_type == HEVC_SLICE_P && base_ctx->p_to_gpb)
>          sh->slice_type = HEVC_SLICE_B;
>  
>      sh->slice_pic_order_cnt_lsb = hpic->pic_order_cnt &
>          (1 << (sps->log2_max_pic_order_cnt_lsb_minus4 + 4)) - 1;
>  
> -    if (pic->type != PICTURE_TYPE_IDR) {
> +    if (base_pic->type != PICTURE_TYPE_IDR) {
>          H265RawSTRefPicSet *rps;
>          const VAAPIEncodeH265Picture *strp;
>          int rps_poc[MAX_DPB_SIZE];
> @@ -1020,33 +1023,33 @@ static int vaapi_encode_h265_init_slice_params(AVCodecContext *avctx,
>  
>          rps_pics = 0;
>          for (i = 0; i < MAX_REFERENCE_LIST_NUM; i++) {
> -            for (j = 0; j < pic->nb_refs[i]; j++) {
> -                strp = pic->refs[i][j]->priv_data;
> +            for (j = 0; j < base_pic->nb_refs[i]; j++) {
> +                strp = base_pic->refs[i][j]->priv_data;
>                  rps_poc[rps_pics]  = strp->pic_order_cnt;
>                  rps_used[rps_pics] = 1;
>                  ++rps_pics;
>              }
>          }
>  
> -        for (i = 0; i < pic->nb_dpb_pics; i++) {
> -            if (pic->dpb[i] == pic)
> +        for (i = 0; i < base_pic->nb_dpb_pics; i++) {
> +            if (base_pic->dpb[i] == base_pic)
>                  continue;
>  
> -            for (j = 0; j < pic->nb_refs[0]; j++) {
> -                if (pic->dpb[i] == pic->refs[0][j])
> +            for (j = 0; j < base_pic->nb_refs[0]; j++) {
> +                if (base_pic->dpb[i] == base_pic->refs[0][j])
>                      break;
>              }
> -            if (j < pic->nb_refs[0])
> +            if (j < base_pic->nb_refs[0])
>                  continue;
>  
> -            for (j = 0; j < pic->nb_refs[1]; j++) {
> -                if (pic->dpb[i] == pic->refs[1][j])
> +            for (j = 0; j < base_pic->nb_refs[1]; j++) {
> +                if (base_pic->dpb[i] == base_pic->refs[1][j])
>                      break;
>              }
> -            if (j < pic->nb_refs[1])
> +            if (j < base_pic->nb_refs[1])
>                  continue;
>  
> -            strp = pic->dpb[i]->priv_data;
> +            strp = base_pic->dpb[i]->priv_data;
>              rps_poc[rps_pics]  = strp->pic_order_cnt;
>              rps_used[rps_pics] = 0;
>              ++rps_pics;
> @@ -1113,9 +1116,9 @@ static int vaapi_encode_h265_init_slice_params(AVCodecContext *avctx,
>      sh->slice_sao_luma_flag = sh->slice_sao_chroma_flag =
>          sps->sample_adaptive_offset_enabled_flag;
>  
> -    if (pic->type == PICTURE_TYPE_B)
> +    if (base_pic->type == PICTURE_TYPE_B)
>          sh->slice_qp_delta = priv->fixed_qp_b - (pps->init_qp_minus26 + 26);
> -    else if (pic->type == PICTURE_TYPE_P)
> +    else if (base_pic->type == PICTURE_TYPE_P)
>          sh->slice_qp_delta = priv->fixed_qp_p - (pps->init_qp_minus26 + 26);
>      else
>          sh->slice_qp_delta = priv->fixed_qp_idr - (pps->init_qp_minus26 + 26);
> @@ -1170,22 +1173,22 @@ static int vaapi_encode_h265_init_slice_params(AVCodecContext *avctx,
>          vslice->ref_pic_list1[i].flags      = VA_PICTURE_HEVC_INVALID;
>      }
>  
> -    if (pic->nb_refs[0]) {
> +    if (base_pic->nb_refs[0]) {
>          // Backward reference for P- or B-frame.
> -        av_assert0(pic->type == PICTURE_TYPE_P ||
> -                   pic->type == PICTURE_TYPE_B);
> +        av_assert0(base_pic->type == PICTURE_TYPE_P ||
> +                   base_pic->type == PICTURE_TYPE_B);
>          vslice->ref_pic_list0[0] = vpic->reference_frames[0];
> -        if (ctx->p_to_gpb && pic->type == PICTURE_TYPE_P)
> +        if (base_ctx->p_to_gpb && base_pic->type == PICTURE_TYPE_P)
>              // Reference for GPB B-frame, L0 == L1
>              vslice->ref_pic_list1[0] = vpic->reference_frames[0];
>      }
> -    if (pic->nb_refs[1]) {
> +    if (base_pic->nb_refs[1]) {
>          // Forward reference for B-frame.
> -        av_assert0(pic->type == PICTURE_TYPE_B);
> +        av_assert0(base_pic->type == PICTURE_TYPE_B);
>          vslice->ref_pic_list1[0] = vpic->reference_frames[1];
>      }
>  
> -    if (pic->type == PICTURE_TYPE_P && ctx->p_to_gpb) {
> +    if (base_pic->type == PICTURE_TYPE_P && base_ctx->p_to_gpb) {
>          vslice->slice_type = HEVC_SLICE_B;
>          for (i = 0; i < FF_ARRAY_ELEMS(vslice->ref_pic_list0); i++) {
>              vslice->ref_pic_list1[i].picture_id = vslice->ref_pic_list0[i].picture_id;
> @@ -1198,8 +1201,9 @@ static int vaapi_encode_h265_init_slice_params(AVCodecContext *avctx,
>  
>  static av_cold int vaapi_encode_h265_get_encoder_caps(AVCodecContext *avctx)
>  {
> -    VAAPIEncodeContext      *ctx = avctx->priv_data;
> -    VAAPIEncodeH265Context *priv = avctx->priv_data;
> +    HWBaseEncodeContext *base_ctx = avctx->priv_data;
> +    VAAPIEncodeContext       *ctx = avctx->priv_data;
> +    VAAPIEncodeH265Context  *priv = avctx->priv_data;
>  
>  #if VA_CHECK_VERSION(1, 13, 0)
>      {
> @@ -1250,18 +1254,19 @@ static av_cold int vaapi_encode_h265_get_encoder_caps(AVCodecContext *avctx)
>             "min CB size %dx%d.\n", priv->ctu_size, priv->ctu_size,
>             priv->min_cb_size, priv->min_cb_size);
>  
> -    ctx->surface_width  = FFALIGN(avctx->width,  priv->min_cb_size);
> -    ctx->surface_height = FFALIGN(avctx->height, priv->min_cb_size);
> +    base_ctx->surface_width  = FFALIGN(avctx->width,  priv->min_cb_size);
> +    base_ctx->surface_height = FFALIGN(avctx->height, priv->min_cb_size);
>  
> -    ctx->slice_block_width = ctx->slice_block_height = priv->ctu_size;
> +    base_ctx->slice_block_width = base_ctx->slice_block_height = priv->ctu_size;
>  
>      return 0;
>  }
>  
>  static av_cold int vaapi_encode_h265_configure(AVCodecContext *avctx)
>  {
> -    VAAPIEncodeContext      *ctx = avctx->priv_data;
> -    VAAPIEncodeH265Context *priv = avctx->priv_data;
> +    HWBaseEncodeContext *base_ctx = avctx->priv_data;
> +    VAAPIEncodeContext       *ctx = avctx->priv_data;
> +    VAAPIEncodeH265Context  *priv = avctx->priv_data;
>      int err;
>  
>      err = ff_cbs_init(&priv->cbc, AV_CODEC_ID_HEVC, avctx);
> @@ -1273,7 +1278,7 @@ static av_cold int vaapi_encode_h265_configure(AVCodecContext *avctx)
>          // therefore always bounded below by 1, even in 10-bit mode where
>          // it should go down to -12.
>  
> -        priv->fixed_qp_p = av_clip(ctx->rc_quality, 1, 51);
> +        priv->fixed_qp_p = av_clip(base_ctx->rc_quality, 1, 51);
>          if (avctx->i_quant_factor > 0.0)
>              priv->fixed_qp_idr =
>                  av_clip((avctx->i_quant_factor * priv->fixed_qp_p +
> @@ -1357,8 +1362,9 @@ static const VAAPIEncodeType vaapi_encode_type_h265 = {
>  
>  static av_cold int vaapi_encode_h265_init(AVCodecContext *avctx)
>  {
> -    VAAPIEncodeContext      *ctx = avctx->priv_data;
> -    VAAPIEncodeH265Context *priv = avctx->priv_data;
> +    HWBaseEncodeContext *base_ctx = avctx->priv_data;
> +    VAAPIEncodeContext       *ctx = avctx->priv_data;
> +    VAAPIEncodeH265Context  *priv = avctx->priv_data;
>  
>      ctx->codec = &vaapi_encode_type_h265;
>  
> @@ -1379,7 +1385,7 @@ static av_cold int vaapi_encode_h265_init(AVCodecContext *avctx)
>          VA_ENC_PACKED_HEADER_MISC;      // SEI
>  
>      if (priv->qp > 0)
> -        ctx->explicit_qp = priv->qp;
> +        base_ctx->explicit_qp = priv->qp;
>  
>      return ff_vaapi_encode_init(avctx);
>  }
> @@ -1398,6 +1404,7 @@ static av_cold int vaapi_encode_h265_close(AVCodecContext *avctx)
>  #define OFFSET(x) offsetof(VAAPIEncodeH265Context, x)
>  #define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM)
>  static const AVOption vaapi_encode_h265_options[] = {
> +    HW_BASE_ENCODE_COMMON_OPTIONS,
>      VAAPI_ENCODE_COMMON_OPTIONS,
>      VAAPI_ENCODE_RC_OPTIONS,
>  
> diff --git a/libavcodec/vaapi_encode_mjpeg.c b/libavcodec/vaapi_encode_mjpeg.c
> index c17747e3a9..91829b1e0e 100644
> --- a/libavcodec/vaapi_encode_mjpeg.c
> +++ b/libavcodec/vaapi_encode_mjpeg.c
> @@ -222,7 +222,9 @@ static int vaapi_encode_mjpeg_write_extra_buffer(AVCodecContext *avctx,
>  static int vaapi_encode_mjpeg_init_picture_params(AVCodecContext *avctx,
>                                                    VAAPIEncodePicture *pic)
>  {
> +    HWBaseEncodeContext         *base_ctx = avctx->priv_data;
>      VAAPIEncodeMJPEGContext         *priv = avctx->priv_data;
> +    HWBaseEncodePicture         *base_pic = (HWBaseEncodePicture *)pic;
>      JPEGRawFrameHeader                *fh = &priv->frame_header;
>      JPEGRawScanHeader                 *sh = &priv->scan.header;
>      VAEncPictureParameterBufferJPEG *vpic = pic->codec_picture_params;
> @@ -232,9 +234,9 @@ static int vaapi_encode_mjpeg_init_picture_params(AVCodecContext *avctx,
>      const uint8_t *components;
>      int t, i, quant_scale, len;
>  
> -    av_assert0(pic->type == PICTURE_TYPE_IDR);
> +    av_assert0(base_pic->type == PICTURE_TYPE_IDR);
>  
> -    desc = av_pix_fmt_desc_get(priv->common.input_frames->sw_format);
> +    desc = av_pix_fmt_desc_get(base_ctx->input_frames->sw_format);
>      av_assert0(desc);
>      if (desc->flags & AV_PIX_FMT_FLAG_RGB)
>          components = components_rgb;
> @@ -261,7 +263,7 @@ static int vaapi_encode_mjpeg_init_picture_params(AVCodecContext *avctx,
>      // JFIF header.
>      if (priv->jfif) {
>          JPEGRawApplicationData *app = &priv->jfif_header;
> -        AVRational sar = pic->input_image->sample_aspect_ratio;
> +        AVRational sar = base_pic->input_image->sample_aspect_ratio;
>          int sar_w, sar_h;
>          PutByteContext pbc;
>  
> @@ -436,25 +438,26 @@ static int vaapi_encode_mjpeg_init_slice_params(AVCodecContext *avctx,
>  
>  static av_cold int vaapi_encode_mjpeg_get_encoder_caps(AVCodecContext *avctx)
>  {
> -    VAAPIEncodeContext *ctx = avctx->priv_data;
> +    HWBaseEncodeContext *base_ctx = avctx->priv_data;
>      const AVPixFmtDescriptor *desc;
>  
> -    desc = av_pix_fmt_desc_get(ctx->input_frames->sw_format);
> +    desc = av_pix_fmt_desc_get(base_ctx->input_frames->sw_format);
>      av_assert0(desc);
>  
> -    ctx->surface_width  = FFALIGN(avctx->width,  8 << desc->log2_chroma_w);
> -    ctx->surface_height = FFALIGN(avctx->height, 8 << desc->log2_chroma_h);
> +    base_ctx->surface_width  = FFALIGN(avctx->width,  8 << desc->log2_chroma_w);
> +    base_ctx->surface_height = FFALIGN(avctx->height, 8 << desc->log2_chroma_h);
>  
>      return 0;
>  }
>  
>  static av_cold int vaapi_encode_mjpeg_configure(AVCodecContext *avctx)
>  {
> +    HWBaseEncodeContext *base_ctx = avctx->priv_data;
>      VAAPIEncodeContext       *ctx = avctx->priv_data;
>      VAAPIEncodeMJPEGContext *priv = avctx->priv_data;
>      int err;
>  
> -    priv->quality = ctx->rc_quality;
> +    priv->quality = base_ctx->rc_quality;
>      if (priv->quality < 1 || priv->quality > 100) {
>          av_log(avctx, AV_LOG_ERROR, "Invalid quality value %d "
>                 "(must be 1-100).\n", priv->quality);
> @@ -540,6 +543,7 @@ static av_cold int vaapi_encode_mjpeg_close(AVCodecContext *avctx)
>  #define OFFSET(x) offsetof(VAAPIEncodeMJPEGContext, x)
>  #define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM)
>  static const AVOption vaapi_encode_mjpeg_options[] = {
> +    HW_BASE_ENCODE_COMMON_OPTIONS,
>      VAAPI_ENCODE_COMMON_OPTIONS,
>  
>      { "jfif", "Include JFIF header",
> diff --git a/libavcodec/vaapi_encode_mpeg2.c b/libavcodec/vaapi_encode_mpeg2.c
> index c9b16fbcfc..aa8e6d6bdf 100644
> --- a/libavcodec/vaapi_encode_mpeg2.c
> +++ b/libavcodec/vaapi_encode_mpeg2.c
> @@ -166,6 +166,7 @@ fail:
>  
>  static int vaapi_encode_mpeg2_init_sequence_params(AVCodecContext *avctx)
>  {
> +    HWBaseEncodeContext           *base_ctx = avctx->priv_data;
>      VAAPIEncodeContext                 *ctx = avctx->priv_data;
>      VAAPIEncodeMPEG2Context           *priv = avctx->priv_data;
>      MPEG2RawSequenceHeader              *sh = &priv->sequence_header;
> @@ -281,7 +282,7 @@ static int vaapi_encode_mpeg2_init_sequence_params(AVCodecContext *avctx)
>  
>      se->bit_rate_extension        = priv->bit_rate >> 18;
>      se->vbv_buffer_size_extension = priv->vbv_buffer_size >> 10;
> -    se->low_delay                 = ctx->b_per_p == 0;
> +    se->low_delay                 = base_ctx->b_per_p == 0;
>  
>      se->frame_rate_extension_n = ext_n;
>      se->frame_rate_extension_d = ext_d;
> @@ -353,8 +354,8 @@ static int vaapi_encode_mpeg2_init_sequence_params(AVCodecContext *avctx)
>  
>  
>      *vseq = (VAEncSequenceParameterBufferMPEG2) {
> -        .intra_period = ctx->gop_size,
> -        .ip_period    = ctx->b_per_p + 1,
> +        .intra_period = base_ctx->gop_size,
> +        .ip_period    = base_ctx->b_per_p + 1,
>  
>          .picture_width  = avctx->width,
>          .picture_height = avctx->height,
> @@ -417,30 +418,31 @@ static int vaapi_encode_mpeg2_init_sequence_params(AVCodecContext *avctx)
>  }
>  
>  static int vaapi_encode_mpeg2_init_picture_params(AVCodecContext *avctx,
> -                                                 VAAPIEncodePicture *pic)
> +                                                  VAAPIEncodePicture *pic)
>  {
>      VAAPIEncodeMPEG2Context          *priv = avctx->priv_data;
> +    HWBaseEncodePicture          *base_pic = (HWBaseEncodePicture *)pic;
>      MPEG2RawPictureHeader              *ph = &priv->picture_header;
>      MPEG2RawPictureCodingExtension    *pce = &priv->picture_coding_extension.data.picture_coding;
>      VAEncPictureParameterBufferMPEG2 *vpic = pic->codec_picture_params;
>  
> -    if (pic->type == PICTURE_TYPE_IDR || pic->type == PICTURE_TYPE_I) {
> +    if (base_pic->type == PICTURE_TYPE_IDR || base_pic->type == PICTURE_TYPE_I) {
>          ph->temporal_reference  = 0;
>          ph->picture_coding_type = 1;
> -        priv->last_i_frame = pic->display_order;
> +        priv->last_i_frame = base_pic->display_order;
>      } else {
> -        ph->temporal_reference = pic->display_order - priv->last_i_frame;
> -        ph->picture_coding_type = pic->type == PICTURE_TYPE_B ? 3 : 2;
> +        ph->temporal_reference = base_pic->display_order - priv->last_i_frame;
> +        ph->picture_coding_type = base_pic->type == PICTURE_TYPE_B ? 3 : 2;
>      }
>  
> -    if (pic->type == PICTURE_TYPE_P || pic->type == PICTURE_TYPE_B) {
> +    if (base_pic->type == PICTURE_TYPE_P || base_pic->type == PICTURE_TYPE_B) {
>          pce->f_code[0][0] = priv->f_code_horizontal;
>          pce->f_code[0][1] = priv->f_code_vertical;
>      } else {
>          pce->f_code[0][0] = 15;
>          pce->f_code[0][1] = 15;
>      }
> -    if (pic->type == PICTURE_TYPE_B) {
> +    if (base_pic->type == PICTURE_TYPE_B) {
>          pce->f_code[1][0] = priv->f_code_horizontal;
>          pce->f_code[1][1] = priv->f_code_vertical;
>      } else {
> @@ -451,19 +453,19 @@ static int vaapi_encode_mpeg2_init_picture_params(AVCodecContext *avctx,
>      vpic->reconstructed_picture = pic->recon_surface;
>      vpic->coded_buf             = pic->output_buffer;
>  
> -    switch (pic->type) {
> +    switch (base_pic->type) {
>      case PICTURE_TYPE_IDR:
>      case PICTURE_TYPE_I:
>          vpic->picture_type = VAEncPictureTypeIntra;
>          break;
>      case PICTURE_TYPE_P:
>          vpic->picture_type = VAEncPictureTypePredictive;
> -        vpic->forward_reference_picture = pic->refs[0][0]->recon_surface;
> +        vpic->forward_reference_picture = ((VAAPIEncodePicture *)base_pic->refs[0][0])->recon_surface;
>          break;
>      case PICTURE_TYPE_B:
>          vpic->picture_type = VAEncPictureTypeBidirectional;
> -        vpic->forward_reference_picture  = pic->refs[0][0]->recon_surface;
> -        vpic->backward_reference_picture = pic->refs[1][0]->recon_surface;
> +        vpic->forward_reference_picture  = ((VAAPIEncodePicture *)base_pic->refs[0][0])->recon_surface;
> +        vpic->backward_reference_picture = ((VAAPIEncodePicture *)base_pic->refs[1][0])->recon_surface;
>          break;
>      default:
>          av_assert0(0 && "invalid picture type");
> @@ -479,17 +481,18 @@ static int vaapi_encode_mpeg2_init_picture_params(AVCodecContext *avctx,
>  }
>  
>  static int vaapi_encode_mpeg2_init_slice_params(AVCodecContext *avctx,
> -                                               VAAPIEncodePicture *pic,
> -                                               VAAPIEncodeSlice *slice)
> +                                                VAAPIEncodePicture *pic,
> +                                                VAAPIEncodeSlice *slice)
>  {
> -    VAAPIEncodeMPEG2Context            *priv = avctx->priv_data;
> -    VAEncSliceParameterBufferMPEG2   *vslice = slice->codec_slice_params;
> +    HWBaseEncodePicture          *base_pic = (HWBaseEncodePicture *)pic;
> +    VAAPIEncodeMPEG2Context          *priv = avctx->priv_data;
> +    VAEncSliceParameterBufferMPEG2 *vslice = slice->codec_slice_params;
>      int qp;
>  
>      vslice->macroblock_address = slice->block_start;
>      vslice->num_macroblocks    = slice->block_size;
>  
> -    switch (pic->type) {
> +    switch (base_pic->type) {
>      case PICTURE_TYPE_IDR:
>      case PICTURE_TYPE_I:
>          qp = priv->quant_i;
> @@ -505,14 +508,15 @@ static int vaapi_encode_mpeg2_init_slice_params(AVCodecContext *avctx,
>      }
>  
>      vslice->quantiser_scale_code = qp;
> -    vslice->is_intra_slice = (pic->type == PICTURE_TYPE_IDR ||
> -                              pic->type == PICTURE_TYPE_I);
> +    vslice->is_intra_slice = (base_pic->type == PICTURE_TYPE_IDR ||
> +                              base_pic->type == PICTURE_TYPE_I);
>  
>      return 0;
>  }
>  
>  static av_cold int vaapi_encode_mpeg2_configure(AVCodecContext *avctx)
>  {
> +    HWBaseEncodeContext *base_ctx = avctx->priv_data;
>      VAAPIEncodeContext       *ctx = avctx->priv_data;
>      VAAPIEncodeMPEG2Context *priv = avctx->priv_data;
>      int err;
> @@ -522,7 +526,7 @@ static av_cold int vaapi_encode_mpeg2_configure(AVCodecContext *avctx)
>          return err;
>  
>      if (ctx->va_rc_mode == VA_RC_CQP) {
> -        priv->quant_p = av_clip(ctx->rc_quality, 1, 31);
> +        priv->quant_p = av_clip(base_ctx->rc_quality, 1, 31);
>          if (avctx->i_quant_factor > 0.0)
>              priv->quant_i =
>                  av_clip((avctx->i_quant_factor * priv->quant_p +
> @@ -639,6 +643,7 @@ static av_cold int vaapi_encode_mpeg2_close(AVCodecContext *avctx)
>  #define OFFSET(x) offsetof(VAAPIEncodeMPEG2Context, x)
>  #define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM)
>  static const AVOption vaapi_encode_mpeg2_options[] = {
> +    HW_BASE_ENCODE_COMMON_OPTIONS,
>      VAAPI_ENCODE_COMMON_OPTIONS,
>      VAAPI_ENCODE_RC_OPTIONS,
>  
> diff --git a/libavcodec/vaapi_encode_vp8.c b/libavcodec/vaapi_encode_vp8.c
> index 8a557b967e..c8203dcbc9 100644
> --- a/libavcodec/vaapi_encode_vp8.c
> +++ b/libavcodec/vaapi_encode_vp8.c
> @@ -52,6 +52,7 @@ typedef struct VAAPIEncodeVP8Context {
>  
>  static int vaapi_encode_vp8_init_sequence_params(AVCodecContext *avctx)
>  {
> +    HWBaseEncodeContext         *base_ctx = avctx->priv_data;
>      VAAPIEncodeContext               *ctx = avctx->priv_data;
>      VAEncSequenceParameterBufferVP8 *vseq = ctx->codec_sequence_params;
>  
> @@ -66,7 +67,7 @@ static int vaapi_encode_vp8_init_sequence_params(AVCodecContext *avctx)
>  
>      if (!(ctx->va_rc_mode & VA_RC_CQP)) {
>          vseq->bits_per_second = ctx->va_bit_rate;
> -        vseq->intra_period    = ctx->gop_size;
> +        vseq->intra_period    = base_ctx->gop_size;
>      }
>  
>      return 0;
> @@ -75,6 +76,7 @@ static int vaapi_encode_vp8_init_sequence_params(AVCodecContext *avctx)
>  static int vaapi_encode_vp8_init_picture_params(AVCodecContext *avctx,
>                                                  VAAPIEncodePicture *pic)
>  {
> +    HWBaseEncodePicture        *base_pic = (HWBaseEncodePicture *)pic;
>      VAAPIEncodeVP8Context          *priv = avctx->priv_data;
>      VAEncPictureParameterBufferVP8 *vpic = pic->codec_picture_params;
>      int i;
> @@ -83,10 +85,10 @@ static int vaapi_encode_vp8_init_picture_params(AVCodecContext *avctx,
>  
>      vpic->coded_buf = pic->output_buffer;
>  
> -    switch (pic->type) {
> +    switch (base_pic->type) {
>      case PICTURE_TYPE_IDR:
>      case PICTURE_TYPE_I:
> -        av_assert0(pic->nb_refs[0] == 0 && pic->nb_refs[1] == 0);
> +        av_assert0(base_pic->nb_refs[0] == 0 && base_pic->nb_refs[1] == 0);
>          vpic->ref_flags.bits.force_kf = 1;
>          vpic->ref_last_frame =
>          vpic->ref_gf_frame   =
> @@ -94,20 +96,20 @@ static int vaapi_encode_vp8_init_picture_params(AVCodecContext *avctx,
>              VA_INVALID_SURFACE;
>          break;
>      case PICTURE_TYPE_P:
> -        av_assert0(!pic->nb_refs[1]);
> +        av_assert0(!base_pic->nb_refs[1]);
>          vpic->ref_flags.bits.no_ref_last = 0;
>          vpic->ref_flags.bits.no_ref_gf   = 1;
>          vpic->ref_flags.bits.no_ref_arf  = 1;
>          vpic->ref_last_frame =
>          vpic->ref_gf_frame   =
>          vpic->ref_arf_frame  =
> -            pic->refs[0][0]->recon_surface;
> +            ((VAAPIEncodePicture *)base_pic->refs[0][0])->recon_surface;
>          break;
>      default:
>          av_assert0(0 && "invalid picture type");
>      }
>  
> -    vpic->pic_flags.bits.frame_type = (pic->type != PICTURE_TYPE_IDR);
> +    vpic->pic_flags.bits.frame_type = (base_pic->type != PICTURE_TYPE_IDR);
>      vpic->pic_flags.bits.show_frame = 1;
>  
>      vpic->pic_flags.bits.refresh_last            = 1;
> @@ -145,7 +147,7 @@ static int vaapi_encode_vp8_write_quant_table(AVCodecContext *avctx,
>  
>      memset(&quant, 0, sizeof(quant));
>  
> -    if (pic->type == PICTURE_TYPE_P)
> +    if (pic->base.type == PICTURE_TYPE_P)
>          q = priv->q_index_p;
>      else
>          q = priv->q_index_i;
> @@ -161,10 +163,11 @@ static int vaapi_encode_vp8_write_quant_table(AVCodecContext *avctx,
>  
>  static av_cold int vaapi_encode_vp8_configure(AVCodecContext *avctx)
>  {
> -    VAAPIEncodeContext     *ctx = avctx->priv_data;
> -    VAAPIEncodeVP8Context *priv = avctx->priv_data;
> +    HWBaseEncodeContext *base_ctx = avctx->priv_data;
> +    VAAPIEncodeContext      *ctx = avctx->priv_data;
> +    VAAPIEncodeVP8Context  *priv = avctx->priv_data;
>  
> -    priv->q_index_p = av_clip(ctx->rc_quality, 0, VP8_MAX_QUANT);
> +    priv->q_index_p = av_clip(base_ctx->rc_quality, 0, VP8_MAX_QUANT);
>      if (avctx->i_quant_factor > 0.0)
>          priv->q_index_i =
>              av_clip((avctx->i_quant_factor * priv->q_index_p  +
> @@ -216,6 +219,7 @@ static av_cold int vaapi_encode_vp8_init(AVCodecContext *avctx)
>  #define OFFSET(x) offsetof(VAAPIEncodeVP8Context, x)
>  #define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM)
>  static const AVOption vaapi_encode_vp8_options[] = {
> +    HW_BASE_ENCODE_COMMON_OPTIONS,
>      VAAPI_ENCODE_COMMON_OPTIONS,
>      VAAPI_ENCODE_RC_OPTIONS,
>  
> diff --git a/libavcodec/vaapi_encode_vp9.c b/libavcodec/vaapi_encode_vp9.c
> index c2a8dec71b..7a0cb0c7fc 100644
> --- a/libavcodec/vaapi_encode_vp9.c
> +++ b/libavcodec/vaapi_encode_vp9.c
> @@ -53,6 +53,7 @@ typedef struct VAAPIEncodeVP9Context {
>  
>  static int vaapi_encode_vp9_init_sequence_params(AVCodecContext *avctx)
>  {
> +    HWBaseEncodeContext         *base_ctx = avctx->priv_data;
>      VAAPIEncodeContext               *ctx = avctx->priv_data;
>      VAEncSequenceParameterBufferVP9 *vseq = ctx->codec_sequence_params;
>      VAEncPictureParameterBufferVP9  *vpic = ctx->codec_picture_params;
> @@ -64,7 +65,7 @@ static int vaapi_encode_vp9_init_sequence_params(AVCodecContext *avctx)
>  
>      if (!(ctx->va_rc_mode & VA_RC_CQP)) {
>          vseq->bits_per_second = ctx->va_bit_rate;
> -        vseq->intra_period    = ctx->gop_size;
> +        vseq->intra_period    = base_ctx->gop_size;
>      }
>  
>      vpic->frame_width_src  = avctx->width;
> @@ -78,9 +79,10 @@ static int vaapi_encode_vp9_init_sequence_params(AVCodecContext *avctx)
>  static int vaapi_encode_vp9_init_picture_params(AVCodecContext *avctx,
>                                                  VAAPIEncodePicture *pic)
>  {
> -    VAAPIEncodeContext              *ctx = avctx->priv_data;
> +    HWBaseEncodeContext        *base_ctx = avctx->priv_data;
>      VAAPIEncodeVP9Context          *priv = avctx->priv_data;
> -    VAAPIEncodeVP9Picture          *hpic = pic->priv_data;
> +    HWBaseEncodePicture        *base_pic = (HWBaseEncodePicture *)pic;
> +    VAAPIEncodeVP9Picture          *hpic = base_pic->priv_data;
>      VAEncPictureParameterBufferVP9 *vpic = pic->codec_picture_params;
>      int i;
>      int num_tile_columns;
> @@ -94,20 +96,20 @@ static int vaapi_encode_vp9_init_picture_params(AVCodecContext *avctx,
>      num_tile_columns = (vpic->frame_width_src + VP9_MAX_TILE_WIDTH - 1) / VP9_MAX_TILE_WIDTH;
>      vpic->log2_tile_columns = num_tile_columns == 1 ? 0 : av_log2(num_tile_columns - 1) + 1;
>  
> -    switch (pic->type) {
> +    switch (base_pic->type) {
>      case PICTURE_TYPE_IDR:
> -        av_assert0(pic->nb_refs[0] == 0 && pic->nb_refs[1] == 0);
> +        av_assert0(base_pic->nb_refs[0] == 0 && base_pic->nb_refs[1] == 0);
>          vpic->ref_flags.bits.force_kf = 1;
>          vpic->refresh_frame_flags = 0xff;
>          hpic->slot = 0;
>          break;
>      case PICTURE_TYPE_P:
> -        av_assert0(!pic->nb_refs[1]);
> +        av_assert0(!base_pic->nb_refs[1]);
>          {
> -            VAAPIEncodeVP9Picture *href = pic->refs[0][0]->priv_data;
> +            VAAPIEncodeVP9Picture *href = base_pic->refs[0][0]->priv_data;
>              av_assert0(href->slot == 0 || href->slot == 1);
>  
> -            if (ctx->max_b_depth > 0) {
> +            if (base_ctx->max_b_depth > 0) {
>                  hpic->slot = !href->slot;
>                  vpic->refresh_frame_flags = 1 << hpic->slot | 0xfc;
>              } else {
> @@ -120,20 +122,20 @@ static int vaapi_encode_vp9_init_picture_params(AVCodecContext *avctx,
>          }
>          break;
>      case PICTURE_TYPE_B:
> -        av_assert0(pic->nb_refs[0] && pic->nb_refs[1]);
> +        av_assert0(base_pic->nb_refs[0] && base_pic->nb_refs[1]);
>          {
> -            VAAPIEncodeVP9Picture *href0 = pic->refs[0][0]->priv_data,
> -                                  *href1 = pic->refs[1][0]->priv_data;
> -            av_assert0(href0->slot < pic->b_depth + 1 &&
> -                       href1->slot < pic->b_depth + 1);
> +            VAAPIEncodeVP9Picture *href0 = base_pic->refs[0][0]->priv_data,
> +                                  *href1 = base_pic->refs[1][0]->priv_data;
> +            av_assert0(href0->slot < base_pic->b_depth + 1 &&
> +                       href1->slot < base_pic->b_depth + 1);
>  
> -            if (pic->b_depth == ctx->max_b_depth) {
> +            if (base_pic->b_depth == base_ctx->max_b_depth) {
>                  // Unreferenced frame.
>                  vpic->refresh_frame_flags = 0x00;
>                  hpic->slot = 8;
>              } else {
> -                vpic->refresh_frame_flags = 0xfe << pic->b_depth & 0xff;
> -                hpic->slot = 1 + pic->b_depth;
> +                vpic->refresh_frame_flags = 0xfe << base_pic->b_depth & 0xff;
> +                hpic->slot = 1 + base_pic->b_depth;
>              }
>              vpic->ref_flags.bits.ref_frame_ctrl_l0  = 1;
>              vpic->ref_flags.bits.ref_frame_ctrl_l1  = 2;
> @@ -148,31 +150,31 @@ static int vaapi_encode_vp9_init_picture_params(AVCodecContext *avctx,
>      }
>      if (vpic->refresh_frame_flags == 0x00) {
>          av_log(avctx, AV_LOG_DEBUG, "Pic %"PRId64" not stored.\n",
> -               pic->display_order);
> +               base_pic->display_order);
>      } else {
>          av_log(avctx, AV_LOG_DEBUG, "Pic %"PRId64" stored in slot %d.\n",
> -               pic->display_order, hpic->slot);
> +               base_pic->display_order, hpic->slot);
>      }
>  
>      for (i = 0; i < FF_ARRAY_ELEMS(vpic->reference_frames); i++)
>          vpic->reference_frames[i] = VA_INVALID_SURFACE;
>  
>      for (i = 0; i < MAX_REFERENCE_LIST_NUM; i++) {
> -        for (int j = 0; j < pic->nb_refs[i]; j++) {
> -            VAAPIEncodePicture *ref_pic = pic->refs[i][j];
> +        for (int j = 0; j < base_pic->nb_refs[i]; j++) {
> +            HWBaseEncodePicture *ref_pic = base_pic->refs[i][j];
>              int slot;
>              slot = ((VAAPIEncodeVP9Picture*)ref_pic->priv_data)->slot;
>              av_assert0(vpic->reference_frames[slot] == VA_INVALID_SURFACE);
> -            vpic->reference_frames[slot] = ref_pic->recon_surface;
> +            vpic->reference_frames[slot] = ((VAAPIEncodePicture *)ref_pic)->recon_surface;
>          }
>      }
>  
> -    vpic->pic_flags.bits.frame_type = (pic->type != PICTURE_TYPE_IDR);
> -    vpic->pic_flags.bits.show_frame = pic->display_order <= pic->encode_order;
> +    vpic->pic_flags.bits.frame_type = (base_pic->type != PICTURE_TYPE_IDR);
> +    vpic->pic_flags.bits.show_frame = base_pic->display_order <= base_pic->encode_order;
>  
> -    if (pic->type == PICTURE_TYPE_IDR)
> +    if (base_pic->type == PICTURE_TYPE_IDR)
>          vpic->luma_ac_qindex     = priv->q_idx_idr;
> -    else if (pic->type == PICTURE_TYPE_P)
> +    else if (base_pic->type == PICTURE_TYPE_P)
>          vpic->luma_ac_qindex     = priv->q_idx_p;
>      else
>          vpic->luma_ac_qindex     = priv->q_idx_b;
> @@ -188,22 +190,23 @@ static int vaapi_encode_vp9_init_picture_params(AVCodecContext *avctx,
>  
>  static av_cold int vaapi_encode_vp9_get_encoder_caps(AVCodecContext *avctx)
>  {
> -    VAAPIEncodeContext *ctx = avctx->priv_data;
> +    HWBaseEncodeContext *base_ctx = avctx->priv_data;
>  
>      // Surfaces must be aligned to 64x64 superblock boundaries.
> -    ctx->surface_width  = FFALIGN(avctx->width,  64);
> -    ctx->surface_height = FFALIGN(avctx->height, 64);
> +    base_ctx->surface_width  = FFALIGN(avctx->width,  64);
> +    base_ctx->surface_height = FFALIGN(avctx->height, 64);
>  
>      return 0;
>  }
>  
>  static av_cold int vaapi_encode_vp9_configure(AVCodecContext *avctx)
>  {
> -    VAAPIEncodeContext     *ctx = avctx->priv_data;
> -    VAAPIEncodeVP9Context *priv = avctx->priv_data;
> +    HWBaseEncodeContext *base_ctx = avctx->priv_data;
> +    VAAPIEncodeContext       *ctx = avctx->priv_data;
> +    VAAPIEncodeVP9Context   *priv = avctx->priv_data;
>  
>      if (ctx->rc_mode->quality) {
> -        priv->q_idx_p = av_clip(ctx->rc_quality, 0, VP9_MAX_QUANT);
> +        priv->q_idx_p = av_clip(base_ctx->rc_quality, 0, VP9_MAX_QUANT);
>          if (avctx->i_quant_factor > 0.0)
>              priv->q_idx_idr =
>                  av_clip((avctx->i_quant_factor * priv->q_idx_p  +
> @@ -273,6 +276,7 @@ static av_cold int vaapi_encode_vp9_init(AVCodecContext *avctx)
>  #define OFFSET(x) offsetof(VAAPIEncodeVP9Context, x)
>  #define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM)
>  static const AVOption vaapi_encode_vp9_options[] = {
> +    HW_BASE_ENCODE_COMMON_OPTIONS,
>      VAAPI_ENCODE_COMMON_OPTIONS,
>      VAAPI_ENCODE_RC_OPTIONS,
>  

_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [FFmpeg-devel] [PATCH v7 11/12] avcodec: add D3D12VA hardware HEVC encoder
  2024-03-14  8:14 ` [FFmpeg-devel] [PATCH v7 11/12] avcodec: add D3D12VA hardware HEVC encoder tong1.wu-at-intel.com
  2024-03-28  2:35   ` Wu, Tong1
@ 2024-04-15  8:42   ` Xiang, Haihao
  1 sibling, 0 replies; 15+ messages in thread
From: Xiang, Haihao @ 2024-04-15  8:42 UTC (permalink / raw)
  To: ffmpeg-devel; +Cc: Wu, Tong1

On Thu, 2024-03-14 at 16:14 +0800, tong1.wu-at-intel.com@ffmpeg.org wrote:
> From: Tong Wu <tong1.wu@intel.com>
> 
> This implementation is based on D3D12 Video Encoding Spec:
> https://microsoft.github.io/DirectX-Specs/d3d/D3D12VideoEncoding.html
> 
> Sample command line for transcoding:
> ffmpeg.exe -hwaccel d3d12va -hwaccel_output_format d3d12 -i input.mp4 -c:v hevc_d3d12va output.mp4
> 
> Signed-off-by: Tong Wu <tong1.wu@intel.com>
> ---
>  configure                        |    6 +
>  libavcodec/Makefile              |    4 +-
>  libavcodec/allcodecs.c           |    1 +
>  libavcodec/d3d12va_encode.c      | 1550 ++++++++++++++++++++++++++++++
>  libavcodec/d3d12va_encode.h      |  321 +++++++
>  libavcodec/d3d12va_encode_hevc.c |  957 ++++++++++++++++++
>  6 files changed, 2838 insertions(+), 1 deletion(-)
>  create mode 100644 libavcodec/d3d12va_encode.c
>  create mode 100644 libavcodec/d3d12va_encode.h
>  create mode 100644 libavcodec/d3d12va_encode_hevc.c
> 
> diff --git a/configure b/configure
> index c34bdd13f5..53076fbf22 100755
> --- a/configure
> +++ b/configure
> @@ -2570,6 +2570,7 @@ CONFIG_EXTRA="
>      tpeldsp
>      vaapi_1
>      vaapi_encode
> +    d3d12va_encode

Please add the entry in alphabetical order


>      vc1dsp
>      videodsp
>      vp3dsp
> @@ -3214,6 +3215,7 @@ wmv3_vaapi_hwaccel_select="vc1_vaapi_hwaccel"
>  wmv3_vdpau_hwaccel_select="vc1_vdpau_hwaccel"
>  
>  # hardware-accelerated codecs
> +d3d12va_encode_deps="d3d12va ID3D12VideoEncoder d3d12_encoder_feature"
>  mediafoundation_deps="mftransform_h MFCreateAlignedMemoryBuffer"
>  omx_deps="libdl pthreads"
>  omx_rpi_select="omx"
> @@ -3280,6 +3282,7 @@ h264_v4l2m2m_encoder_deps="v4l2_m2m h264_v4l2_m2m"
>  hevc_amf_encoder_deps="amf"
>  hevc_cuvid_decoder_deps="cuvid"
>  hevc_cuvid_decoder_select="hevc_mp4toannexb_bsf"
> +hevc_d3d12va_encoder_select="cbs_h265 d3d12va_encode"
>  hevc_mediacodec_decoder_deps="mediacodec"
>  hevc_mediacodec_decoder_select="hevc_mp4toannexb_bsf hevc_parser"
>  hevc_mediacodec_encoder_deps="mediacodec"
> @@ -6620,6 +6623,9 @@ check_type "windows.h d3d11.h" "ID3D11VideoDecoder"
>  check_type "windows.h d3d11.h" "ID3D11VideoContext"
>  check_type "windows.h d3d12.h" "ID3D12Device"
>  check_type "windows.h d3d12video.h" "ID3D12VideoDecoder"
> +check_type "windows.h d3d12video.h" "ID3D12VideoEncoder"
> +test_code cc "windows.h d3d12video.h" "D3D12_FEATURE_VIDEO feature = D3D12_FEATURE_VIDEO_ENCODER_CODEC" && \
> +test_code cc "windows.h d3d12video.h" "D3D12_FEATURE_DATA_VIDEO_ENCODER_RESOURCE_REQUIREMENTS req" && enable d3d12_encoder_feature
>  check_type "windows.h" "DPI_AWARENESS_CONTEXT" -D_WIN32_WINNT=0x0A00
>  check_type "d3d9.h dxva2api.h" DXVA2_ConfigPictureDecode -D_WIN32_WINNT=0x0602
>  check_func_headers mfapi.h MFCreateAlignedMemoryBuffer -lmfplat
> diff --git a/libavcodec/Makefile b/libavcodec/Makefile
> index cbfae5f182..cdda3f0d0a 100644
> --- a/libavcodec/Makefile
> +++ b/libavcodec/Makefile
> @@ -84,6 +84,7 @@ OBJS-$(CONFIG_CBS_JPEG)                += cbs_jpeg.o
>  OBJS-$(CONFIG_CBS_MPEG2)               += cbs_mpeg2.o
>  OBJS-$(CONFIG_CBS_VP8)                 += cbs_vp8.o vp8data.o
>  OBJS-$(CONFIG_CBS_VP9)                 += cbs_vp9.o
> +OBJS-$(CONFIG_D3D12VA_ENCODE)          += d3d12va_encode.o hw_base_encode.o
>  OBJS-$(CONFIG_DEFLATE_WRAPPER)         += zlib_wrapper.o
>  OBJS-$(CONFIG_DOVI_RPU)                += dovi_rpu.o
>  OBJS-$(CONFIG_ERROR_RESILIENCE)        += error_resilience.o
> @@ -435,6 +436,7 @@ OBJS-$(CONFIG_HEVC_DECODER)            += hevcdec.o hevc_mvs.o \
>                                            h274.o
>  OBJS-$(CONFIG_HEVC_AMF_ENCODER)        += amfenc_hevc.o
>  OBJS-$(CONFIG_HEVC_CUVID_DECODER)      += cuviddec.o
> +OBJS-$(CONFIG_HEVC_D3D12VA_ENCODER)    += d3d12va_encode_hevc.o
>  OBJS-$(CONFIG_HEVC_MEDIACODEC_DECODER) += mediacodecdec.o
>  OBJS-$(CONFIG_HEVC_MEDIACODEC_ENCODER) += mediacodecenc.o
>  OBJS-$(CONFIG_HEVC_MF_ENCODER)         += mfenc.o mf_utils.o
> @@ -1263,7 +1265,7 @@ SKIPHEADERS                            += %_tablegen.h                  \
>  
>  SKIPHEADERS-$(CONFIG_AMF)              += amfenc.h
>  SKIPHEADERS-$(CONFIG_D3D11VA)          += d3d11va.h dxva2_internal.h
> -SKIPHEADERS-$(CONFIG_D3D12VA)          += d3d12va_decode.h
> +SKIPHEADERS-$(CONFIG_D3D12VA)          += d3d12va_decode.h d3d12va_encode.h
>  SKIPHEADERS-$(CONFIG_DXVA2)            += dxva2.h dxva2_internal.h
>  SKIPHEADERS-$(CONFIG_JNI)              += ffjni.h
>  SKIPHEADERS-$(CONFIG_LCMS2)            += fflcms2.h
> diff --git a/libavcodec/allcodecs.c b/libavcodec/allcodecs.c
> index 2386b450a6..7b5093233c 100644
> --- a/libavcodec/allcodecs.c
> +++ b/libavcodec/allcodecs.c
> @@ -855,6 +855,7 @@ extern const FFCodec ff_h264_vaapi_encoder;
>  extern const FFCodec ff_h264_videotoolbox_encoder;
>  extern const FFCodec ff_hevc_amf_encoder;
>  extern const FFCodec ff_hevc_cuvid_decoder;
> +extern const FFCodec ff_hevc_d3d12va_encoder;
>  extern const FFCodec ff_hevc_mediacodec_decoder;
>  extern const FFCodec ff_hevc_mediacodec_encoder;
>  extern const FFCodec ff_hevc_mf_encoder;
> diff --git a/libavcodec/d3d12va_encode.c b/libavcodec/d3d12va_encode.c
> new file mode 100644
> index 0000000000..88a08efa76
> --- /dev/null
> +++ b/libavcodec/d3d12va_encode.c
> @@ -0,0 +1,1550 @@
> +/*
> + * Direct3D 12 HW acceleration video encoder
> + *
> + * Copyright (c) 2024 Intel Corporation
> + *
> + * This file is part of FFmpeg.
> + *
> + * FFmpeg is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU Lesser General Public
> + * License as published by the Free Software Foundation; either
> + * version 2.1 of the License, or (at your option) any later version.
> + *
> + * FFmpeg is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * Lesser General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public
> + * License along with FFmpeg; if not, write to the Free Software
> + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
> + */
> +
> +#include "libavutil/avassert.h"
> +#include "libavutil/common.h"
> +#include "libavutil/internal.h"
> +#include "libavutil/log.h"
> +#include "libavutil/pixdesc.h"
> +#include "libavutil/hwcontext_d3d12va_internal.h"
> +#include "libavutil/hwcontext_d3d12va.h"
> +
> +#include "avcodec.h"
> +#include "d3d12va_encode.h"
> +#include "encode.h"
> +
> +const AVCodecHWConfigInternal *const ff_d3d12va_encode_hw_configs[] = {
> +    HW_CONFIG_ENCODER_FRAMES(D3D12, D3D12VA),
> +    NULL,
> +};
> +
> +static int d3d12va_fence_completion(AVD3D12VASyncContext *psync_ctx)
> +{
> +    uint64_t completion = ID3D12Fence_GetCompletedValue(psync_ctx->fence);
> +    if (completion < psync_ctx->fence_value) {
> +        if (FAILED(ID3D12Fence_SetEventOnCompletion(psync_ctx->fence, psync_ctx->fence_value, psync_ctx->event)))
> +            return AVERROR(EINVAL);
> +
> +        WaitForSingleObjectEx(psync_ctx->event, INFINITE, FALSE);
> +    }
> +
> +    return 0;
> +}
> +
> +static int d3d12va_sync_with_gpu(AVCodecContext *avctx)
> +{
> +    D3D12VAEncodeContext *ctx = avctx->priv_data;
> +
> +    DX_CHECK(ID3D12CommandQueue_Signal(ctx->command_queue, ctx->sync_ctx.fence, ++ctx->sync_ctx.fence_value));
> +    return d3d12va_fence_completion(&ctx->sync_ctx);
> +
> +fail:
> +    return AVERROR(EINVAL);
> +}
> +
> +typedef struct CommandAllocator {
> +    ID3D12CommandAllocator *command_allocator;
> +    uint64_t fence_value;
> +} CommandAllocator;
> +
> +static int d3d12va_get_valid_command_allocator(AVCodecContext *avctx, ID3D12CommandAllocator **ppAllocator)
> +{
> +    HRESULT hr;
> +    D3D12VAEncodeContext *ctx = avctx->priv_data;
> +    CommandAllocator allocator;
> +
> +    if (av_fifo_peek(ctx->allocator_queue, &allocator, 1, 0) >= 0) {
> +        uint64_t completion = ID3D12Fence_GetCompletedValue(ctx->sync_ctx.fence);
> +        if (completion >= allocator.fence_value) {
> +            *ppAllocator = allocator.command_allocator;
> +            av_fifo_read(ctx->allocator_queue, &allocator, 1);
> +            return 0;
> +        }
> +    }
> +
> +    hr = ID3D12Device_CreateCommandAllocator(ctx->hwctx->device, D3D12_COMMAND_LIST_TYPE_VIDEO_ENCODE,
> +                                             &IID_ID3D12CommandAllocator, (void **)ppAllocator);
> +    if (FAILED(hr)) {
> +        av_log(avctx, AV_LOG_ERROR, "Failed to create a new command allocator!\n");
> +        return AVERROR(EINVAL);
> +    }
> +
> +    return 0;
> +}
> +
> +static int d3d12va_discard_command_allocator(AVCodecContext *avctx, ID3D12CommandAllocator *pAllocator, uint64_t fence_value)
> +{
> +    D3D12VAEncodeContext *ctx = avctx->priv_data;
> +
> +    CommandAllocator allocator = {
> +        .command_allocator = pAllocator,
> +        .fence_value = fence_value,
> +    };
> +
> +    av_fifo_write(ctx->allocator_queue, &allocator, 1);
> +
> +    return 0;
> +}
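
[Reviewer note: the two functions above implement an allocator-recycling scheme: a retired command allocator is queued tagged with a fence value, and may be reused only once the fence has passed that value; otherwise a fresh allocator is created. A minimal sketch of the reuse condition, with hypothetical names (ModelAllocator, model_allocator_reusable):]

```c
#include <assert.h>
#include <stdint.h>

/* Model of an entry in the recycling FIFO: an allocator tagged with the
 * fence value at which the GPU work that used it completes. */
typedef struct ModelAllocator {
    uint64_t fence_value; /* GPU work using this allocator completes here */
} ModelAllocator;

/* Mirrors the peek-and-compare in d3d12va_get_valid_command_allocator():
 * the queued allocator is safe to reset and reuse only once the fence's
 * completed value has reached its tag. */
static int model_allocator_reusable(const ModelAllocator *a, uint64_t completed)
{
    return completed >= a->fence_value;
}
```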
> +
> +static int d3d12va_encode_wait(AVCodecContext *avctx,
> +                               D3D12VAEncodePicture *pic)
> +{
> +    D3D12VAEncodeContext *ctx     = avctx->priv_data;
> +    HWBaseEncodePicture *base_pic = (HWBaseEncodePicture *)pic;
> +    uint64_t completion;
> +
> +    av_assert0(base_pic->encode_issued);
> +
> +    if (base_pic->encode_complete) {
> +        // Already waited for this picture.
> +        return 0;
> +    }
> +
> +    completion = ID3D12Fence_GetCompletedValue(ctx->sync_ctx.fence);
> +    if (completion < pic->fence_value) {
> +        if (FAILED(ID3D12Fence_SetEventOnCompletion(ctx->sync_ctx.fence, pic->fence_value,
> +                                                    ctx->sync_ctx.event)))
> +            return AVERROR(EINVAL);
> +
> +        WaitForSingleObjectEx(ctx->sync_ctx.event, INFINITE, FALSE);
> +    }
> +
> +    av_log(avctx, AV_LOG_DEBUG, "Sync to pic %"PRId64"/%"PRId64" "
> +           "(input surface %p).\n", base_pic->display_order,
> +           base_pic->encode_order, pic->input_surface->texture);
> +
> +    av_frame_free(&base_pic->input_image);
> +
> +    base_pic->encode_complete = 1;
> +    return 0;
> +}
> +
> +static int d3d12va_encode_create_metadata_buffers(AVCodecContext *avctx,
> +                                                  D3D12VAEncodePicture *pic)
> +{
> +    D3D12VAEncodeContext *ctx = avctx->priv_data;
> +    int width = sizeof(D3D12_VIDEO_ENCODER_OUTPUT_METADATA) + sizeof(D3D12_VIDEO_ENCODER_FRAME_SUBREGION_METADATA);
> +    D3D12_HEAP_PROPERTIES encoded_meta_props = { .Type = D3D12_HEAP_TYPE_DEFAULT }, resolved_meta_props;
> +    D3D12_HEAP_TYPE resolved_heap_type = D3D12_HEAP_TYPE_READBACK;
> +    HRESULT hr;
> +
> +    D3D12_RESOURCE_DESC meta_desc = {
> +        .Dimension        = D3D12_RESOURCE_DIMENSION_BUFFER,
> +        .Alignment        = 0,
> +        .Width            = ctx->req.MaxEncoderOutputMetadataBufferSize,
> +        .Height           = 1,
> +        .DepthOrArraySize = 1,
> +        .MipLevels        = 1,
> +        .Format           = DXGI_FORMAT_UNKNOWN,
> +        .SampleDesc       = { .Count = 1, .Quality = 0 },
> +        .Layout           = D3D12_TEXTURE_LAYOUT_ROW_MAJOR,
> +        .Flags            = D3D12_RESOURCE_FLAG_NONE,
> +    };
> +
> +    hr = ID3D12Device_CreateCommittedResource(ctx->hwctx->device, &encoded_meta_props, D3D12_HEAP_FLAG_NONE,
> +                                              &meta_desc, D3D12_RESOURCE_STATE_COMMON, NULL,
> +                                              &IID_ID3D12Resource, (void **)&pic->encoded_metadata);
> +    if (FAILED(hr)) {
> +        av_log(avctx, AV_LOG_ERROR, "Failed to create metadata buffer.\n");
> +        return AVERROR_UNKNOWN;
> +    }
> +
> +    ctx->hwctx->device->lpVtbl->GetCustomHeapProperties(ctx->hwctx->device, &resolved_meta_props, 0, resolved_heap_type);
> +
> +    meta_desc.Width = width;
> +
> +    hr = ID3D12Device_CreateCommittedResource(ctx->hwctx->device, &resolved_meta_props, D3D12_HEAP_FLAG_NONE,
> +                                              &meta_desc, D3D12_RESOURCE_STATE_COMMON, NULL,
> +                                              &IID_ID3D12Resource, (void **)&pic->resolved_metadata);
> +
> +    if (FAILED(hr)) {
> +        av_log(avctx, AV_LOG_ERROR, "Failed to create output metadata buffer.\n");
> +        return AVERROR_UNKNOWN;
> +    }
> +
> +    return 0;
> +}
> +
> +static int d3d12va_encode_issue(AVCodecContext *avctx,
> +                                const HWBaseEncodePicture *base_pic)
> +{
> +    HWBaseEncodeContext *base_ctx = avctx->priv_data;
> +    D3D12VAEncodeContext     *ctx = avctx->priv_data;
> +    AVD3D12VAFramesContext *frames_hwctx = base_ctx->input_frames->hwctx;
> +    D3D12VAEncodePicture *pic = (D3D12VAEncodePicture *)base_pic;
> +    int err, i, j;
> +    HRESULT hr;
> +    char data[MAX_PARAM_BUFFER_SIZE];
> +    void *ptr;
> +    size_t bit_len;
> +    ID3D12CommandAllocator *command_allocator = NULL;
> +    ID3D12VideoEncodeCommandList2 *cmd_list = ctx->command_list;
> +    D3D12_RESOURCE_BARRIER barriers[32] = { 0 };
> +    D3D12_VIDEO_ENCODE_REFERENCE_FRAMES d3d12_refs = { 0 };
> +
> +    D3D12_VIDEO_ENCODER_ENCODEFRAME_INPUT_ARGUMENTS input_args = {
> +        .SequenceControlDesc = {
> +            .Flags = D3D12_VIDEO_ENCODER_SEQUENCE_CONTROL_FLAG_NONE,
> +            .IntraRefreshConfig = { 0 },
> +            .RateControl = ctx->rc,
> +            .PictureTargetResolution = ctx->resolution,
> +            .SelectedLayoutMode = D3D12_VIDEO_ENCODER_FRAME_SUBREGION_LAYOUT_MODE_FULL_FRAME,
> +            .FrameSubregionsLayoutData = { 0 },
> +            .CodecGopSequence = ctx->gop,
> +        },
> +        .pInputFrame = pic->input_surface->texture,
> +        .InputFrameSubresource = 0,
> +    };
> +
> +    D3D12_VIDEO_ENCODER_ENCODEFRAME_OUTPUT_ARGUMENTS output_args = { 0 };
> +
> +    D3D12_VIDEO_ENCODER_RESOLVE_METADATA_INPUT_ARGUMENTS input_metadata = {
> +        .EncoderCodec = ctx->codec->d3d12_codec,
> +        .EncoderProfile = ctx->profile->d3d12_profile,
> +        .EncoderInputFormat = frames_hwctx->format,
> +        .EncodedPictureEffectiveResolution = ctx->resolution,
> +    };
> +
> +    D3D12_VIDEO_ENCODER_RESOLVE_METADATA_OUTPUT_ARGUMENTS output_metadata = { 0 };
> +
> +    memset(data, 0, sizeof(data));
> +
> +    av_log(avctx, AV_LOG_DEBUG, "Issuing encode for pic %"PRId64"/%"PRId64" "
> +           "as type %s.\n", base_pic->display_order, base_pic->encode_order,
> +           ff_hw_base_encode_get_pictype_name(base_pic->type));
> +    if (base_pic->nb_refs[0] == 0 && base_pic->nb_refs[1] == 0) {
> +        av_log(avctx, AV_LOG_DEBUG, "No reference pictures.\n");
> +    } else {
> +        av_log(avctx, AV_LOG_DEBUG, "L0 refers to");
> +        for (i = 0; i < base_pic->nb_refs[0]; i++) {
> +            av_log(avctx, AV_LOG_DEBUG, " %"PRId64"/%"PRId64,
> +                   base_pic->refs[0][i]->display_order, base_pic->refs[0][i]->encode_order);
> +        }
> +        av_log(avctx, AV_LOG_DEBUG, ".\n");
> +
> +        if (base_pic->nb_refs[1]) {
> +            av_log(avctx, AV_LOG_DEBUG, "L1 refers to");
> +            for (i = 0; i < base_pic->nb_refs[1]; i++) {
> +                av_log(avctx, AV_LOG_DEBUG, " %"PRId64"/%"PRId64,
> +                       base_pic->refs[1][i]->display_order, base_pic->refs[1][i]->encode_order);
> +            }
> +            av_log(avctx, AV_LOG_DEBUG, ".\n");
> +        }
> +    }
> +
> +    av_assert0(!base_pic->encode_issued);
> +    for (i = 0; i < base_pic->nb_refs[0]; i++) {
> +        av_assert0(base_pic->refs[0][i]);
> +        av_assert0(base_pic->refs[0][i]->encode_issued);
> +    }
> +    for (i = 0; i < base_pic->nb_refs[1]; i++) {
> +        av_assert0(base_pic->refs[1][i]);
> +        av_assert0(base_pic->refs[1][i]->encode_issued);
> +    }
> +
> +    av_log(avctx, AV_LOG_DEBUG, "Input surface is %p.\n", pic->input_surface->texture);
> +
> +    err = av_hwframe_get_buffer(base_ctx->recon_frames_ref, base_pic->recon_image, 0);
> +    if (err < 0) {
> +        err = AVERROR(ENOMEM);
> +        goto fail;
> +    }
> +
> +    pic->recon_surface = (AVD3D12VAFrame *)base_pic->recon_image->data[0];
> +    av_log(avctx, AV_LOG_DEBUG, "Recon surface is %p.\n",
> +           pic->recon_surface->texture);
> +
> +    pic->output_buffer_ref = av_buffer_pool_get(ctx->output_buffer_pool);
> +    if (!pic->output_buffer_ref) {
> +        err = AVERROR(ENOMEM);
> +        goto fail;
> +    }
> +    pic->output_buffer = (ID3D12Resource *)pic->output_buffer_ref->data;
> +    av_log(avctx, AV_LOG_DEBUG, "Output buffer is %p.\n",
> +           pic->output_buffer);
> +
> +    err = d3d12va_encode_create_metadata_buffers(avctx, pic);
> +    if (err < 0)
> +        goto fail;
> +
> +    if (ctx->codec->init_picture_params) {
> +        err = ctx->codec->init_picture_params(avctx, pic);
> +        if (err < 0) {
> +            av_log(avctx, AV_LOG_ERROR, "Failed to initialise picture "
> +                   "parameters: %d.\n", err);
> +            goto fail;
> +        }
> +    }
> +
> +    if (base_pic->type == PICTURE_TYPE_IDR) {
> +        if (ctx->codec->write_sequence_header) {
> +            bit_len = 8 * sizeof(data);
> +            err = ctx->codec->write_sequence_header(avctx, data, &bit_len);
> +            if (err < 0) {
> +                av_log(avctx, AV_LOG_ERROR, "Failed to write per-sequence "
> +                       "header: %d.\n", err);
> +                goto fail;
> +            }
> +        }
> +
> +        pic->header_size = (int)bit_len / 8;
> +        pic->header_size = pic->header_size % ctx->req.CompressedBitstreamBufferAccessAlignment ?
> +                           FFALIGN(pic->header_size, ctx->req.CompressedBitstreamBufferAccessAlignment) :
> +                           pic->header_size;
> +
> +        hr = ID3D12Resource_Map(pic->output_buffer, 0, NULL, (void **)&ptr);
> +        if (FAILED(hr)) {
> +            err = AVERROR_UNKNOWN;
> +            goto fail;
> +        }
> +
> +        memcpy(ptr, data, pic->header_size);
> +        ID3D12Resource_Unmap(pic->output_buffer, 0, NULL);
> +    }
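
[Reviewer note: the header_size rounding above uses FFALIGN against CompressedBitstreamBufferAccessAlignment so that the encoder's bitstream write starts on a hardware-required boundary. A sketch of that round-up, with a hypothetical helper name (model_align); like FFALIGN's mask form, it assumes the alignment is a power of two:]

```c
#include <assert.h>
#include <stddef.h>

/* Round x up to the next multiple of a. This is the same arithmetic as
 * FFmpeg's FFALIGN(x, a) = (((x)+(a)-1)&~((a)-1)), valid when a is a
 * power of two. */
static size_t model_align(size_t x, size_t a)
{
    return (x + a - 1) & ~(a - 1);
}
```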
> +
> +    d3d12_refs.NumTexture2Ds = base_pic->nb_refs[0] + base_pic->nb_refs[1];
> +    if (d3d12_refs.NumTexture2Ds) {
> +        d3d12_refs.ppTexture2Ds = av_calloc(d3d12_refs.NumTexture2Ds,
> +                                            sizeof(*d3d12_refs.ppTexture2Ds));
> +        if (!d3d12_refs.ppTexture2Ds) {
> +            err = AVERROR(ENOMEM);
> +            goto fail;
> +        }
> +
> +        i = 0;
> +        for (j = 0; j < base_pic->nb_refs[0]; j++)
> +            d3d12_refs.ppTexture2Ds[i++] = ((D3D12VAEncodePicture *)base_pic->refs[0][j])->recon_surface->texture;
> +        for (j = 0; j < base_pic->nb_refs[1]; j++)
> +            d3d12_refs.ppTexture2Ds[i++] = ((D3D12VAEncodePicture *)base_pic->refs[1][j])->recon_surface->texture;
> +    }
> +
> +    input_args.PictureControlDesc.IntraRefreshFrameIndex  = 0;
> +    if (base_pic->is_reference)
> +        input_args.PictureControlDesc.Flags |= D3D12_VIDEO_ENCODER_PICTURE_CONTROL_FLAG_USED_AS_REFERENCE_PICTURE;
> +
> +    input_args.PictureControlDesc.PictureControlCodecData = pic->pic_ctl;
> +    input_args.PictureControlDesc.ReferenceFrames         = d3d12_refs;
> +    input_args.CurrentFrameBitstreamMetadataSize          = pic->header_size;
> +
> +    output_args.Bitstream.pBuffer                                    = pic->output_buffer;
> +    output_args.Bitstream.FrameStartOffset                           = pic->header_size;
> +    output_args.ReconstructedPicture.pReconstructedPicture           = pic->recon_surface->texture;
> +    output_args.ReconstructedPicture.ReconstructedPictureSubresource = 0;
> +    output_args.EncoderOutputMetadata.pBuffer                        = pic->encoded_metadata;
> +    output_args.EncoderOutputMetadata.Offset                         = 0;
> +
> +    input_metadata.HWLayoutMetadata.pBuffer = pic->encoded_metadata;
> +    input_metadata.HWLayoutMetadata.Offset  = 0;
> +
> +    output_metadata.ResolvedLayoutMetadata.pBuffer = pic->resolved_metadata;
> +    output_metadata.ResolvedLayoutMetadata.Offset  = 0;
> +
> +    err = d3d12va_get_valid_command_allocator(avctx, &command_allocator);
> +    if (err < 0)
> +        goto fail;
> +
> +    hr = ID3D12CommandAllocator_Reset(command_allocator);
> +    if (FAILED(hr)) {
> +        err = AVERROR_UNKNOWN;
> +        goto fail;
> +    }
> +
> +    hr = ID3D12VideoEncodeCommandList2_Reset(cmd_list, command_allocator);
> +    if (FAILED(hr)) {
> +        err = AVERROR_UNKNOWN;
> +        goto fail;
> +    }
> +
> +#define TRANSITION_BARRIER(res, before, after)                      \
> +    (D3D12_RESOURCE_BARRIER) {                                      \
> +        .Type  = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION,            \
> +        .Flags = D3D12_RESOURCE_BARRIER_FLAG_NONE,                  \
> +        .Transition = {                                             \
> +            .pResource   = res,                                     \
> +            .Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES, \
> +            .StateBefore = before,                                  \
> +            .StateAfter  = after,                                   \
> +        },                                                          \
> +    }
> +
> +    barriers[0] = TRANSITION_BARRIER(pic->input_surface->texture,
> +                                     D3D12_RESOURCE_STATE_COMMON,
> +                                     D3D12_RESOURCE_STATE_VIDEO_ENCODE_READ);
> +    barriers[1] = TRANSITION_BARRIER(pic->output_buffer,
> +                                     D3D12_RESOURCE_STATE_COMMON,
> +                                     D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE);
> +    barriers[2] = TRANSITION_BARRIER(pic->recon_surface->texture,
> +                                     D3D12_RESOURCE_STATE_COMMON,
> +                                     D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE);
> +    barriers[3] = TRANSITION_BARRIER(pic->encoded_metadata,
> +                                     D3D12_RESOURCE_STATE_COMMON,
> +                                     D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE);
> +    barriers[4] = TRANSITION_BARRIER(pic->resolved_metadata,
> +                                     D3D12_RESOURCE_STATE_COMMON,
> +                                     D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE);
> +
> +    ID3D12VideoEncodeCommandList2_ResourceBarrier(cmd_list, 5, barriers);
> +
> +    if (d3d12_refs.NumTexture2Ds) {
> +        D3D12_RESOURCE_BARRIER refs_barriers[3];
> +
> +        for (i = 0; i < d3d12_refs.NumTexture2Ds; i++)
> +            refs_barriers[i] = TRANSITION_BARRIER(d3d12_refs.ppTexture2Ds[i],
> +                                                  D3D12_RESOURCE_STATE_COMMON,
> +                                                  D3D12_RESOURCE_STATE_VIDEO_ENCODE_READ);
> +
> +        ID3D12VideoEncodeCommandList2_ResourceBarrier(cmd_list, d3d12_refs.NumTexture2Ds,
> +                                                      refs_barriers);
> +    }
> +
> +    ID3D12VideoEncodeCommandList2_EncodeFrame(cmd_list, ctx->encoder, ctx->encoder_heap,
> +                                              &input_args, &output_args);
> +
> +    barriers[3] = TRANSITION_BARRIER(pic->encoded_metadata,
> +                                     D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE,
> +                                     D3D12_RESOURCE_STATE_VIDEO_ENCODE_READ);
> +
> +    ID3D12VideoEncodeCommandList2_ResourceBarrier(cmd_list, 1, &barriers[3]);
> +
> +    ID3D12VideoEncodeCommandList2_ResolveEncoderOutputMetadata(cmd_list, &input_metadata, &output_metadata);
> +
> +    if (d3d12_refs.NumTexture2Ds) {
> +        D3D12_RESOURCE_BARRIER refs_barriers[3];
> +
> +        for (i = 0; i < d3d12_refs.NumTexture2Ds; i++)
> +            refs_barriers[i] = TRANSITION_BARRIER(d3d12_refs.ppTexture2Ds[i],
> +                                                  D3D12_RESOURCE_STATE_VIDEO_ENCODE_READ,
> +                                                  D3D12_RESOURCE_STATE_COMMON);
> +
> +        ID3D12VideoEncodeCommandList2_ResourceBarrier(cmd_list, d3d12_refs.NumTexture2Ds,
> +                                                      refs_barriers);
> +    }
> +
> +    barriers[0] = TRANSITION_BARRIER(pic->input_surface->texture,
> +                                     D3D12_RESOURCE_STATE_VIDEO_ENCODE_READ,
> +                                     D3D12_RESOURCE_STATE_COMMON);
> +    barriers[1] = TRANSITION_BARRIER(pic->output_buffer,
> +                                     D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE,
> +                                     D3D12_RESOURCE_STATE_COMMON);
> +    barriers[2] = TRANSITION_BARRIER(pic->recon_surface->texture,
> +                                     D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE,
> +                                     D3D12_RESOURCE_STATE_COMMON);
> +    barriers[3] = TRANSITION_BARRIER(pic->encoded_metadata,
> +                                     D3D12_RESOURCE_STATE_VIDEO_ENCODE_READ,
> +                                     D3D12_RESOURCE_STATE_COMMON);
> +    barriers[4] = TRANSITION_BARRIER(pic->resolved_metadata,
> +                                     D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE,
> +                                     D3D12_RESOURCE_STATE_COMMON);
> +
> +    ID3D12VideoEncodeCommandList2_ResourceBarrier(cmd_list, 5, barriers);
> +
> +    hr = ID3D12VideoEncodeCommandList2_Close(cmd_list);
> +    if (FAILED(hr)) {
> +        err = AVERROR_UNKNOWN;
> +        goto fail;
> +    }
> +
> +    hr = ID3D12CommandQueue_Wait(ctx->command_queue, pic->input_surface->sync_ctx.fence,
> +                                 pic->input_surface->sync_ctx.fence_value);
> +    if (FAILED(hr)) {
> +        err = AVERROR_UNKNOWN;
> +        goto fail;
> +    }
> +
> +    ID3D12CommandQueue_ExecuteCommandLists(ctx->command_queue, 1, (ID3D12CommandList **)&ctx->command_list);
> +
> +    hr = ID3D12CommandQueue_Signal(ctx->command_queue, pic->input_surface->sync_ctx.fence,
> +                                   ++pic->input_surface->sync_ctx.fence_value);
> +    if (FAILED(hr)) {
> +        err = AVERROR_UNKNOWN;
> +        goto fail;
> +    }
> +
> +    hr = ID3D12CommandQueue_Signal(ctx->command_queue, ctx->sync_ctx.fence, ++ctx->sync_ctx.fence_value);
> +    if (FAILED(hr)) {
> +        err = AVERROR_UNKNOWN;
> +        goto fail;
> +    }
> +
> +    err = d3d12va_discard_command_allocator(avctx, command_allocator, ctx->sync_ctx.fence_value);
> +    if (err < 0)
> +        goto fail;
> +
> +    pic->fence_value = ctx->sync_ctx.fence_value;
> +
> +    if (d3d12_refs.ppTexture2Ds)
> +        av_freep(&d3d12_refs.ppTexture2Ds);
> +
> +    return 0;
> +
> +fail:
> +    if (command_allocator)
> +        d3d12va_discard_command_allocator(avctx, command_allocator, ctx->sync_ctx.fence_value);
> +
> +    if (d3d12_refs.ppTexture2Ds)
> +        av_freep(&d3d12_refs.ppTexture2Ds);
> +
> +    if (ctx->codec->free_picture_params)
> +        ctx->codec->free_picture_params(pic);
> +
> +    av_buffer_unref(&pic->output_buffer_ref);
> +    pic->output_buffer = NULL;
> +    D3D12_OBJECT_RELEASE(pic->encoded_metadata);
> +    D3D12_OBJECT_RELEASE(pic->resolved_metadata);
> +    return err;
> +}
> +
> +static int d3d12va_encode_discard(AVCodecContext *avctx,
> +                                  D3D12VAEncodePicture *pic)
> +{
> +    HWBaseEncodePicture *base_pic = (HWBaseEncodePicture *)pic;
> +    d3d12va_encode_wait(avctx, pic);
> +
> +    if (pic->output_buffer_ref) {
> +        av_log(avctx, AV_LOG_DEBUG, "Discard output for pic "
> +               "%"PRId64"/%"PRId64".\n",
> +               base_pic->display_order, base_pic->encode_order);
> +
> +        av_buffer_unref(&pic->output_buffer_ref);
> +        pic->output_buffer = NULL;
> +    }
> +
> +    D3D12_OBJECT_RELEASE(pic->encoded_metadata);
> +    D3D12_OBJECT_RELEASE(pic->resolved_metadata);
> +
> +    return 0;
> +}
> +
> +static int d3d12va_encode_free_rc_params(AVCodecContext *avctx)
> +{
> +    D3D12VAEncodeContext *ctx = avctx->priv_data;
> +
> +    switch (ctx->rc.Mode)
> +    {
> +    case D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_CQP:
> +        av_freep(&ctx->rc.ConfigParams.pConfiguration_CQP);
> +        break;
> +    case D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_CBR:
> +        av_freep(&ctx->rc.ConfigParams.pConfiguration_CBR);
> +        break;
> +    case D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_VBR:
> +        av_freep(&ctx->rc.ConfigParams.pConfiguration_VBR);
> +        break;
> +    case D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_QVBR:
> +        av_freep(&ctx->rc.ConfigParams.pConfiguration_QVBR);
> +        break;
> +    default:
> +        break;
> +    }
> +
> +    return 0;
> +}
> +
> +static HWBaseEncodePicture *d3d12va_encode_alloc(AVCodecContext *avctx,
> +                                                  const AVFrame *frame)
> +{
> +    D3D12VAEncodeContext *ctx = avctx->priv_data;
> +    D3D12VAEncodePicture *pic;
> +
> +    pic = av_mallocz(sizeof(*pic));
> +    if (!pic)
> +        return NULL;
> +
> +    if (ctx->codec->picture_priv_data_size > 0) {
> +        pic->base.priv_data = av_mallocz(ctx->codec->picture_priv_data_size);
> +        if (!pic->base.priv_data) {
> +            av_freep(&pic);
> +            return NULL;
> +        }
> +    }
> +
> +    pic->input_surface = (AVD3D12VAFrame *)frame->data[0];
> +
> +    return (HWBaseEncodePicture *)pic;
> +}
> +
> +static int d3d12va_encode_free(AVCodecContext *avctx,
> +                               HWBaseEncodePicture *base_pic)
> +{
> +    D3D12VAEncodeContext *ctx = avctx->priv_data;
> +    D3D12VAEncodePicture *pic = (D3D12VAEncodePicture *)base_pic;
> +
> +    if (base_pic->encode_issued)
> +        d3d12va_encode_discard(avctx, pic);
> +
> +    if (ctx->codec->free_picture_params)
> +        ctx->codec->free_picture_params(pic);
> +
> +    ff_hw_base_encode_free(avctx, base_pic);
> +
> +    av_free(pic);
> +
> +    return 0;
> +}
> +
> +static int d3d12va_encode_get_buffer_size(AVCodecContext *avctx,
> +                                          D3D12VAEncodePicture *pic, size_t *size)
> +{
> +    D3D12_VIDEO_ENCODER_OUTPUT_METADATA *meta = NULL;
> +    uint8_t *data;
> +    HRESULT hr;
> +    int err;
> +
> +    hr = ID3D12Resource_Map(pic->resolved_metadata, 0, NULL, (void **)&data);
> +    if (FAILED(hr)) {
> +        err = AVERROR_UNKNOWN;
> +        return err;
> +    }
> +
> +    meta = (D3D12_VIDEO_ENCODER_OUTPUT_METADATA *)data;
> +
> +    if (meta->EncodeErrorFlags != D3D12_VIDEO_ENCODER_ENCODE_ERROR_FLAG_NO_ERROR) {
> +        av_log(avctx, AV_LOG_ERROR, "Encode failed %"PRIu64"\n", meta->EncodeErrorFlags);
> +        err = AVERROR(EINVAL);
> +        return err;
> +    }
> +
> +    if (meta->EncodedBitstreamWrittenBytesCount == 0) {
> +        av_log(avctx, AV_LOG_ERROR, "No bytes were written to encoded bitstream\n");
> +        err = AVERROR(EINVAL);
> +        return err;
> +    }
> +
> +    *size = meta->EncodedBitstreamWrittenBytesCount;
> +
> +    ID3D12Resource_Unmap(pic->resolved_metadata, 0, NULL);
> +
> +    return 0;
> +}
> +
> +static int d3d12va_encode_get_coded_data(AVCodecContext *avctx,
> +                                         D3D12VAEncodePicture *pic, AVPacket *pkt)
> +{
> +    int err;
> +    uint8_t *ptr, *mapped_data;
> +    size_t total_size = 0;
> +    HRESULT hr;
> +
> +    err = d3d12va_encode_get_buffer_size(avctx, pic, &total_size);
> +    if (err < 0)
> +        goto end;
> +
> +    total_size += pic->header_size;
> +    av_log(avctx, AV_LOG_DEBUG, "Output buffer size %"PRId64"\n", total_size);
> +
> +    hr = ID3D12Resource_Map(pic->output_buffer, 0, NULL, (void **)&mapped_data);
> +    if (FAILED(hr)) {
> +        err = AVERROR_UNKNOWN;
> +        goto end;
> +    }
> +
> +    err = ff_get_encode_buffer(avctx, pkt, total_size, 0);
> +    if (err < 0)
> +        goto end;
> +    ptr = pkt->data;
> +
> +    memcpy(ptr, mapped_data, total_size);
> +
> +    ID3D12Resource_Unmap(pic->output_buffer, 0, NULL);
> +
> +end:
> +    av_buffer_unref(&pic->output_buffer_ref);
> +    pic->output_buffer = NULL;
> +    return err;
> +}
> +
> +static int d3d12va_encode_output(AVCodecContext *avctx,
> +                                 const HWBaseEncodePicture *base_pic, AVPacket *pkt)
> +{
> +    D3D12VAEncodeContext *ctx = avctx->priv_data;
> +    D3D12VAEncodePicture *pic = (D3D12VAEncodePicture *)base_pic;
> +    AVPacket *pkt_ptr = pkt;
> +    int err;
> +
> +    err = d3d12va_encode_wait(avctx, pic);
> +    if (err < 0)
> +        return err;
> +
> +    err = d3d12va_encode_get_coded_data(avctx, pic, pkt);
> +    if (err < 0)
> +        return err;
> +
> +    av_log(avctx, AV_LOG_DEBUG, "Output read for pic %"PRId64"/%"PRId64".\n",
> +           base_pic->display_order, base_pic->encode_order);
> +
> +    ff_hw_base_encode_set_output_property(avctx, (HWBaseEncodePicture *)base_pic, pkt_ptr, 0);
> +
> +    return 0;
> +}
> +
> +static int d3d12va_encode_set_profile(AVCodecContext *avctx)
> +{
> +    HWBaseEncodeContext *base_ctx = avctx->priv_data;
> +    D3D12VAEncodeContext *ctx     = avctx->priv_data;
> +    const D3D12VAEncodeProfile *profile;
> +    const AVPixFmtDescriptor *desc;
> +    int i, depth;
> +
> +    desc = av_pix_fmt_desc_get(base_ctx->input_frames->sw_format);
> +    if (!desc) {
> +        av_log(avctx, AV_LOG_ERROR, "Invalid input pixfmt (%d).\n",
> +               base_ctx->input_frames->sw_format);
> +        return AVERROR(EINVAL);
> +    }
> +
> +    depth = desc->comp[0].depth;
> +    for (i = 1; i < desc->nb_components; i++) {
> +        if (desc->comp[i].depth != depth) {
> +            av_log(avctx, AV_LOG_ERROR, "Invalid input pixfmt (%s).\n",
> +                   desc->name);
> +            return AVERROR(EINVAL);
> +        }
> +    }
> +    av_log(avctx, AV_LOG_VERBOSE, "Input surface format is %s.\n",
> +           desc->name);
> +
> +    av_assert0(ctx->codec->profiles);
> +    for (i = 0; (ctx->codec->profiles[i].av_profile !=
> +                 AV_PROFILE_UNKNOWN); i++) {
> +        profile = &ctx->codec->profiles[i];
> +        if (depth               != profile->depth ||
> +            desc->nb_components != profile->nb_components)
> +            continue;
> +        if (desc->nb_components > 1 &&
> +            (desc->log2_chroma_w != profile->log2_chroma_w ||
> +             desc->log2_chroma_h != profile->log2_chroma_h))
> +            continue;
> +        if (avctx->profile != profile->av_profile &&
> +            avctx->profile != AV_PROFILE_UNKNOWN)
> +            continue;
> +
> +        ctx->profile = profile;
> +        break;
> +    }
> +    if (!ctx->profile) {
> +        av_log(avctx, AV_LOG_ERROR, "No usable encoding profile found.\n");
> +        return AVERROR(ENOSYS);
> +    }
> +
> +    avctx->profile = profile->av_profile;
> +    return 0;
> +}
> +
> +static const D3D12VAEncodeRCMode d3d12va_encode_rc_modes[] = {
> +    //                     Bitrate   Quality
> +    //                        | Maxrate | HRD/VBV
> +    { 0 }, //             |    |    |    |
> +    { RC_MODE_CQP,  "CQP",  0,   0,   1,   0, 1, D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_CQP },
> +    { RC_MODE_CBR,  "CBR",  1,   0,   0,   1, 1, D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_CBR },
> +    { RC_MODE_VBR,  "VBR",  1,   1,   0,   1, 1, D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_VBR },
> +    { RC_MODE_QVBR, "QVBR", 1,   1,   1,   1, 1, D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_QVBR },
> +};
> +
> +static int check_rate_control_support(AVCodecContext *avctx,
> +                                      const D3D12VAEncodeRCMode *rc_mode)
> +{
> +    HRESULT hr;
> +    D3D12VAEncodeContext *ctx = avctx->priv_data;
> +    D3D12_FEATURE_DATA_VIDEO_ENCODER_RATE_CONTROL_MODE d3d12_rc_mode = {
> +        .Codec = ctx->codec->d3d12_codec,
> +    };
> +
> +    if (!rc_mode->d3d12_mode)
> +        return 0;
> +
> +    d3d12_rc_mode.IsSupported = 0;
> +    d3d12_rc_mode.RateControlMode = rc_mode->d3d12_mode;
> +
> +    hr = ID3D12VideoDevice3_CheckFeatureSupport(ctx->video_device3,
> +                                                D3D12_FEATURE_VIDEO_ENCODER_RATE_CONTROL_MODE,
> +                                                &d3d12_rc_mode, sizeof(d3d12_rc_mode));
> +    if (FAILED(hr)) {
> +        av_log(avctx, AV_LOG_ERROR, "Failed to check rate control support.\n");
> +        return 0;
> +    }
> +
> +    return d3d12_rc_mode.IsSupported;
> +}
> +
> +static int d3d12va_encode_init_rate_control(AVCodecContext *avctx)
> +{
> +    HWBaseEncodeContext *base_ctx = avctx->priv_data;
> +    D3D12VAEncodeContext     *ctx = avctx->priv_data;
> +    int64_t rc_target_bitrate;
> +    int64_t rc_peak_bitrate;
> +    int     rc_quality;
> +    int64_t hrd_buffer_size;
> +    int64_t hrd_initial_buffer_fullness;
> +    int fr_num, fr_den;
> +    const D3D12VAEncodeRCMode *rc_mode;
> +
> +    // Rate control mode selection:
> +    // * If the user has set a mode explicitly with the rc_mode option,
> +    //   use it and fail if it is not available.
> +    // * If an explicit QP option has been set, use CQP.
> +    // * If the codec is CQ-only, use CQP.
> +    // * If the QSCALE avcodec option is set, use CQP.
> +    // * If bitrate and quality are both set, try QVBR.
> +    // * If quality is set, try CQP.
> +    // * If bitrate and maxrate are set and have the same value, try CBR.
> +    // * If a bitrate is set, try VBR, then CBR.
> +    // * If no bitrate is set, try CQP.
> +
> +#define TRY_RC_MODE(mode, fail) do { \
> +        rc_mode = &d3d12va_encode_rc_modes[mode]; \
> +        if (!(rc_mode->d3d12_mode && check_rate_control_support(avctx, rc_mode))) { \
> +            if (fail) { \
> +                av_log(avctx, AV_LOG_ERROR, "Driver does not support %s " \
> +                       "RC mode.\n", rc_mode->name); \
> +                return AVERROR(EINVAL); \
> +            } \
> +            av_log(avctx, AV_LOG_DEBUG, "Driver does not support %s " \
> +                   "RC mode.\n", rc_mode->name); \
> +            rc_mode = NULL; \
> +        } else { \
> +            goto rc_mode_found; \
> +        } \
> +    } while (0)
> +
> +    if (base_ctx->explicit_rc_mode)
> +        TRY_RC_MODE(base_ctx->explicit_rc_mode, 1);
> +
> +    if (base_ctx->explicit_qp)
> +        TRY_RC_MODE(RC_MODE_CQP, 1);
> +
> +    if (ctx->codec->flags & FLAG_CONSTANT_QUALITY_ONLY)
> +        TRY_RC_MODE(RC_MODE_CQP, 1);
> +
> +    if (avctx->flags & AV_CODEC_FLAG_QSCALE)
> +        TRY_RC_MODE(RC_MODE_CQP, 1);
> +
> +    if (avctx->bit_rate > 0 && avctx->global_quality > 0)
> +        TRY_RC_MODE(RC_MODE_QVBR, 0);
> +
> +    if (avctx->global_quality > 0)
> +        TRY_RC_MODE(RC_MODE_CQP, 0);
> +
> +    if (avctx->bit_rate > 0 && avctx->rc_max_rate == avctx->bit_rate)
> +        TRY_RC_MODE(RC_MODE_CBR, 0);
> +
> +    if (avctx->bit_rate > 0) {
> +        TRY_RC_MODE(RC_MODE_VBR, 0);
> +        TRY_RC_MODE(RC_MODE_CBR, 0);
> +    } else {
> +        TRY_RC_MODE(RC_MODE_CQP, 0);
> +    }
> +
> +    av_log(avctx, AV_LOG_ERROR, "Driver does not support any "
> +           "RC mode compatible with selected options.\n");
> +    return AVERROR(EINVAL);
> +
> +rc_mode_found:
> +    if (rc_mode->bitrate) {
> +        if (avctx->bit_rate <= 0) {
> +            av_log(avctx, AV_LOG_ERROR, "Bitrate must be set for %s "
> +                   "RC mode.\n", rc_mode->name);
> +            return AVERROR(EINVAL);
> +        }
> +
> +        if (rc_mode->maxrate) {
> +            if (avctx->rc_max_rate > 0) {
> +                if (avctx->rc_max_rate < avctx->bit_rate) {
> +                    av_log(avctx, AV_LOG_ERROR, "Invalid bitrate settings: "
> +                           "bitrate (%"PRId64") must not be greater than "
> +                           "maxrate (%"PRId64").\n", avctx->bit_rate,
> +                           avctx->rc_max_rate);
> +                    return AVERROR(EINVAL);
> +                }
> +                rc_target_bitrate = avctx->bit_rate;
> +                rc_peak_bitrate   = avctx->rc_max_rate;
> +            } else {
> +                // We only have a target bitrate, but this mode requires
> +                // that a maximum rate be supplied as well.  Since the
> +                // user does not want this to be a constraint, arbitrarily
> +                // pick a maximum rate of double the target rate.
> +                rc_target_bitrate = avctx->bit_rate;
> +                rc_peak_bitrate   = 2 * avctx->bit_rate;
> +            }
> +        } else {
> +            if (avctx->rc_max_rate > avctx->bit_rate) {
> +                av_log(avctx, AV_LOG_WARNING, "Max bitrate is ignored "
> +                       "in %s RC mode.\n", rc_mode->name);
> +            }
> +            rc_target_bitrate = avctx->bit_rate;
> +            rc_peak_bitrate   = 0;
> +        }
> +    } else {
> +        rc_target_bitrate = 0;
> +        rc_peak_bitrate   = 0;
> +    }
> +
> +    if (rc_mode->quality) {
> +        if (base_ctx->explicit_qp) {
> +            rc_quality = base_ctx->explicit_qp;
> +        } else if (avctx->global_quality > 0) {
> +            rc_quality = avctx->global_quality;

When AV_CODEC_FLAG_QSCALE is set, avctx->global_quality holds a lambda value
rather than a QP, so the lambda would need to be converted to a QP here.


> +        } else {
> +            rc_quality = ctx->codec->default_quality;
> +            av_log(avctx, AV_LOG_WARNING, "No quality level set; "
> +                   "using default (%d).\n", rc_quality);
> +        }
> +    } else {
> +        rc_quality = 0;
> +    }
> +
> +    if (rc_mode->hrd) {
> +        if (avctx->rc_buffer_size)
> +            hrd_buffer_size = avctx->rc_buffer_size;
> +        else if (avctx->rc_max_rate > 0)
> +            hrd_buffer_size = avctx->rc_max_rate;
> +        else
> +            hrd_buffer_size = avctx->bit_rate;
> +        if (avctx->rc_initial_buffer_occupancy) {
> +            if (avctx->rc_initial_buffer_occupancy > hrd_buffer_size) {
> +                av_log(avctx, AV_LOG_ERROR, "Invalid RC buffer settings: "
> +                       "must have initial buffer size (%d) <= "
> +                       "buffer size (%"PRId64").\n",
> +                       avctx->rc_initial_buffer_occupancy, hrd_buffer_size);
> +                return AVERROR(EINVAL);
> +            }
> +            hrd_initial_buffer_fullness = avctx->rc_initial_buffer_occupancy;
> +        } else {
> +            hrd_initial_buffer_fullness = hrd_buffer_size * 3 / 4;
> +        }
> +    } else {
> +        if (avctx->rc_buffer_size || avctx->rc_initial_buffer_occupancy) {
> +            av_log(avctx, AV_LOG_WARNING, "Buffering settings are ignored "
> +                   "in %s RC mode.\n", rc_mode->name);
> +        }
> +
> +        hrd_buffer_size             = 0;
> +        hrd_initial_buffer_fullness = 0;
> +    }
> +
> +    if (rc_target_bitrate          > UINT32_MAX ||
> +        hrd_buffer_size             > UINT32_MAX ||
> +        hrd_initial_buffer_fullness > UINT32_MAX) {
> +        av_log(avctx, AV_LOG_ERROR, "RC parameters of 2^32 or "
> +               "greater are not supported by D3D12.\n");
> +        return AVERROR(EINVAL);
> +    }
> +
> +    base_ctx->rc_quality  = rc_quality;
> +
> +    av_log(avctx, AV_LOG_VERBOSE, "RC mode: %s.\n", rc_mode->name);
> +
> +    if (rc_mode->quality)
> +        av_log(avctx, AV_LOG_VERBOSE, "RC quality: %d.\n", rc_quality);
> +
> +    if (rc_mode->hrd) {
> +        av_log(avctx, AV_LOG_VERBOSE, "RC buffer: %"PRId64" bits, "
> +               "initial fullness %"PRId64" bits.\n",
> +               hrd_buffer_size, hrd_initial_buffer_fullness);
> +    }
> +
> +    if (avctx->framerate.num > 0 && avctx->framerate.den > 0)
> +        av_reduce(&fr_num, &fr_den,
> +                  avctx->framerate.num, avctx->framerate.den, 65535);
> +    else
> +        av_reduce(&fr_num, &fr_den,
> +                  avctx->time_base.den, avctx->time_base.num, 65535);
> +
> +    av_log(avctx, AV_LOG_VERBOSE, "RC framerate: %d/%d (%.2f fps).\n",
> +           fr_num, fr_den, (double)fr_num / fr_den);
> +
> +    ctx->rc.Flags                       = D3D12_VIDEO_ENCODER_RATE_CONTROL_FLAG_NONE;
> +    ctx->rc.TargetFrameRate.Numerator   = fr_num;
> +    ctx->rc.TargetFrameRate.Denominator = fr_den;
> +    ctx->rc.Mode                        = rc_mode->d3d12_mode;
> +
> +    switch (rc_mode->mode) {
> +        case RC_MODE_CQP:
> +            // cqp ConfigParams will be updated in ctx->codec->configure.
> +            break;
> +
> +        case RC_MODE_CBR:
> +        {
> +            D3D12_VIDEO_ENCODER_RATE_CONTROL_CBR *cbr_ctl;
> +
> +            ctx->rc.ConfigParams.DataSize = sizeof(D3D12_VIDEO_ENCODER_RATE_CONTROL_CBR);
> +            cbr_ctl = av_mallocz(ctx->rc.ConfigParams.DataSize);
> +            if (!cbr_ctl)
> +                return AVERROR(ENOMEM);
> +
> +            cbr_ctl->TargetBitRate      = rc_target_bitrate;
> +            cbr_ctl->VBVCapacity        = hrd_buffer_size;
> +            cbr_ctl->InitialVBVFullness = hrd_initial_buffer_fullness;
> +            ctx->rc.Flags |= D3D12_VIDEO_ENCODER_RATE_CONTROL_FLAG_ENABLE_VBV_SIZES;
> +
> +            if (avctx->qmin > 0 || avctx->qmax > 0) {
> +                cbr_ctl->MinQP = avctx->qmin;
> +                cbr_ctl->MaxQP = avctx->qmax;
> +                ctx->rc.Flags |= D3D12_VIDEO_ENCODER_RATE_CONTROL_FLAG_ENABLE_QP_RANGE;
> +            }
> +
> +            ctx->rc.ConfigParams.pConfiguration_CBR = cbr_ctl;
> +            break;
> +        }
> +
> +        case RC_MODE_VBR:
> +        {
> +            D3D12_VIDEO_ENCODER_RATE_CONTROL_VBR *vbr_ctl;
> +
> +            ctx->rc.ConfigParams.DataSize = sizeof(D3D12_VIDEO_ENCODER_RATE_CONTROL_VBR);
> +            vbr_ctl = av_mallocz(ctx->rc.ConfigParams.DataSize);
> +            if (!vbr_ctl)
> +                return AVERROR(ENOMEM);
> +
> +            vbr_ctl->TargetAvgBitRate   = rc_target_bitrate;
> +            vbr_ctl->PeakBitRate        = rc_peak_bitrate;
> +            vbr_ctl->VBVCapacity        = hrd_buffer_size;
> +            vbr_ctl->InitialVBVFullness = hrd_initial_buffer_fullness;
> +            ctx->rc.Flags |= D3D12_VIDEO_ENCODER_RATE_CONTROL_FLAG_ENABLE_VBV_SIZES;
> +
> +            if (avctx->qmin > 0 || avctx->qmax > 0) {
> +                vbr_ctl->MinQP = avctx->qmin;
> +                vbr_ctl->MaxQP = avctx->qmax;
> +                ctx->rc.Flags |= D3D12_VIDEO_ENCODER_RATE_CONTROL_FLAG_ENABLE_QP_RANGE;
> +            }
> +
> +            ctx->rc.ConfigParams.pConfiguration_VBR = vbr_ctl;
> +            break;
> +        }
> +
> +        case RC_MODE_QVBR:
> +        {
> +            D3D12_VIDEO_ENCODER_RATE_CONTROL_QVBR *qvbr_ctl;
> +
> +            ctx->rc.ConfigParams.DataSize = sizeof(D3D12_VIDEO_ENCODER_RATE_CONTROL_QVBR);
> +            qvbr_ctl = av_mallocz(ctx->rc.ConfigParams.DataSize);
> +            if (!qvbr_ctl)
> +                return AVERROR(ENOMEM);
> +
> +            qvbr_ctl->TargetAvgBitRate      = rc_target_bitrate;
> +            qvbr_ctl->PeakBitRate           = rc_peak_bitrate;
> +            qvbr_ctl->ConstantQualityTarget = rc_quality;
> +
> +            if (avctx->qmin > 0 || avctx->qmax > 0) {
> +                qvbr_ctl->MinQP = avctx->qmin;
> +                qvbr_ctl->MaxQP = avctx->qmax;
> +                ctx->rc.Flags |= D3D12_VIDEO_ENCODER_RATE_CONTROL_FLAG_ENABLE_QP_RANGE;
> +            }
> +
> +            ctx->rc.ConfigParams.pConfiguration_QVBR = qvbr_ctl;
> +            break;
> +        }
> +
> +        default:
> +            break;
> +    }
> +    return 0;
> +}
> +
> +static int d3d12va_encode_init_gop_structure(AVCodecContext *avctx)
> +{
> +    HWBaseEncodeContext *base_ctx = avctx->priv_data;
> +    D3D12VAEncodeContext     *ctx = avctx->priv_data;
> +    uint32_t ref_l0, ref_l1;
> +    int err;
> +    HRESULT hr;
> +    D3D12_FEATURE_DATA_VIDEO_ENCODER_CODEC_PICTURE_CONTROL_SUPPORT support;
> +    union {
> +        D3D12_VIDEO_ENCODER_CODEC_PICTURE_CONTROL_SUPPORT_H264 h264;
> +        D3D12_VIDEO_ENCODER_CODEC_PICTURE_CONTROL_SUPPORT_HEVC hevc;
> +    } codec_support;
> +
> +    support.NodeIndex = 0;
> +    support.Codec     = ctx->codec->d3d12_codec;
> +    support.Profile   = ctx->profile->d3d12_profile;
> +
> +    switch (ctx->codec->d3d12_codec) {
> +        case D3D12_VIDEO_ENCODER_CODEC_H264:
> +            support.PictureSupport.DataSize = sizeof(codec_support.h264);
> +            support.PictureSupport.pH264Support = &codec_support.h264;
> +            break;
> +
> +        case D3D12_VIDEO_ENCODER_CODEC_HEVC:
> +            support.PictureSupport.DataSize = sizeof(codec_support.hevc);
> +            support.PictureSupport.pHEVCSupport = &codec_support.hevc;
> +            break;
> +    }
> +
> +    hr = ID3D12VideoDevice3_CheckFeatureSupport(ctx->video_device3,
> +                                                D3D12_FEATURE_VIDEO_ENCODER_CODEC_PICTURE_CONTROL_SUPPORT,
> +                                                &support, sizeof(support));
> +    if (FAILED(hr))
> +        return AVERROR(EINVAL);
> +
> +    if (support.IsSupported) {
> +        switch (ctx->codec->d3d12_codec) {
> +            case D3D12_VIDEO_ENCODER_CODEC_H264:
> +                ref_l0 = FFMIN(support.PictureSupport.pH264Support->MaxL0ReferencesForP,
> +                               support.PictureSupport.pH264Support->MaxL1ReferencesForB);
> +                ref_l1 = support.PictureSupport.pH264Support->MaxL1ReferencesForB;
> +                break;
> +
> +            case D3D12_VIDEO_ENCODER_CODEC_HEVC:
> +                ref_l0 = FFMIN(support.PictureSupport.pHEVCSupport->MaxL0ReferencesForP,
> +                               support.PictureSupport.pHEVCSupport->MaxL1ReferencesForB);
> +                ref_l1 = support.PictureSupport.pHEVCSupport->MaxL1ReferencesForB;
> +                break;
> +        }
> +    } else {
> +        ref_l0 = ref_l1 = 0;
> +    }
> +
> +    if (ref_l0 > 0 && ref_l1 > 0 && ctx->bi_not_empty) {
> +        base_ctx->p_to_gpb = 1;
> +        av_log(avctx, AV_LOG_VERBOSE, "Driver does not support P-frames, "
> +               "replacing them with B-frames.\n");
> +    }
> +
> +    err = ff_hw_base_init_gop_structure(avctx, ref_l0, ref_l1, ctx->codec->flags, 0);
> +    if (err < 0)
> +        return err;
> +
> +    return 0;
> +}
> +
> +static int d3d12va_create_encoder(AVCodecContext *avctx)
> +{
> +    HWBaseEncodeContext    *base_ctx     = avctx->priv_data;
> +    D3D12VAEncodeContext   *ctx          = avctx->priv_data;
> +    AVD3D12VAFramesContext *frames_hwctx = base_ctx->input_frames->hwctx;
> +    HRESULT hr;
> +
> +    D3D12_VIDEO_ENCODER_DESC desc = {
> +        .NodeMask                     = 0,
> +        .Flags                        = D3D12_VIDEO_ENCODER_FLAG_NONE,
> +        .EncodeCodec                  = ctx->codec->d3d12_codec,
> +        .EncodeProfile                = ctx->profile->d3d12_profile,
> +        .InputFormat                  = frames_hwctx->format,
> +        .CodecConfiguration           = ctx->codec_conf,
> +        .MaxMotionEstimationPrecision = D3D12_VIDEO_ENCODER_MOTION_ESTIMATION_PRECISION_MODE_MAXIMUM,
> +    };
> +
> +    hr = ID3D12VideoDevice3_CreateVideoEncoder(ctx->video_device3, &desc,
> +                                               &IID_ID3D12VideoEncoder,
> +                                               (void **)&ctx->encoder);
> +    if (FAILED(hr)) {
> +        av_log(avctx, AV_LOG_ERROR, "Failed to create encoder.\n");
> +        return AVERROR(EINVAL);
> +    }
> +
> +    return 0;
> +}
> +
> +static int d3d12va_create_encoder_heap(AVCodecContext *avctx)
> +{
> +    D3D12VAEncodeContext *ctx = avctx->priv_data;
> +    HRESULT hr;
> +
> +    D3D12_VIDEO_ENCODER_HEAP_DESC desc = {
> +        .NodeMask             = 0,
> +        .Flags                = D3D12_VIDEO_ENCODER_FLAG_NONE,
> +        .EncodeCodec          = ctx->codec->d3d12_codec,
> +        .EncodeProfile        = ctx->profile->d3d12_profile,
> +        .EncodeLevel          = ctx->level,
> +        .ResolutionsListCount = 1,
> +        .pResolutionList      = &ctx->resolution,
> +    };
> +
> +    hr = ID3D12VideoDevice3_CreateVideoEncoderHeap(ctx->video_device3, &desc,
> +                                                   &IID_ID3D12VideoEncoderHeap,
> +                                                   (void **)&ctx->encoder_heap);
> +    if (FAILED(hr)) {
> +        av_log(avctx, AV_LOG_ERROR, "Failed to create encoder heap.\n");
> +        return AVERROR(EINVAL);
> +    }
> +
> +    return 0;
> +}
> +
> +static void d3d12va_encode_free_buffer(void *opaque, uint8_t *data)
> +{
> +    ID3D12Resource *pResource;
> +
> +    pResource = (ID3D12Resource *)data;
> +    D3D12_OBJECT_RELEASE(pResource);
> +}
> +
> +static AVBufferRef *d3d12va_encode_alloc_output_buffer(void *opaque, size_t size)
> +{
> +    AVCodecContext     *avctx = opaque;
> +    HWBaseEncodeContext *base_ctx = avctx->priv_data;
> +    D3D12VAEncodeContext     *ctx = avctx->priv_data;
> +    ID3D12Resource *pResource = NULL;
> +    HRESULT hr;
> +    AVBufferRef *ref;
> +    D3D12_HEAP_PROPERTIES heap_props;
> +    D3D12_HEAP_TYPE heap_type = D3D12_HEAP_TYPE_READBACK;
> +
> +    D3D12_RESOURCE_DESC desc = {
> +        .Dimension        = D3D12_RESOURCE_DIMENSION_BUFFER,
> +        .Alignment        = 0,
> +        .Width            = FFALIGN(3 * base_ctx->surface_width * base_ctx->surface_height + (1 << 16),
> +                                    D3D12_TEXTURE_DATA_PLACEMENT_ALIGNMENT),
> +        .Height           = 1,
> +        .DepthOrArraySize = 1,
> +        .MipLevels        = 1,
> +        .Format           = DXGI_FORMAT_UNKNOWN,
> +        .SampleDesc       = { .Count = 1, .Quality = 0 },
> +        .Layout           = D3D12_TEXTURE_LAYOUT_ROW_MAJOR,
> +        .Flags            = D3D12_RESOURCE_FLAG_NONE,
> +    };
> +
> +    ctx->hwctx->device->lpVtbl->GetCustomHeapProperties(ctx->hwctx->device,
> +                                                        &heap_props, 0, heap_type);
> +
> +    hr = ID3D12Device_CreateCommittedResource(ctx->hwctx->device, &heap_props,
> +                                              D3D12_HEAP_FLAG_NONE, &desc,
> +                                              D3D12_RESOURCE_STATE_COMMON, NULL,
> +                                              &IID_ID3D12Resource,
> +                                              (void **)&pResource);
> +
> +    if (FAILED(hr)) {
> +        av_log(avctx, AV_LOG_ERROR, "Failed to create d3d12 buffer.\n");
> +        return NULL;
> +    }
> +
> +    ref = av_buffer_create((uint8_t *)(uintptr_t)pResource,
> +                           sizeof(pResource),
> +                           &d3d12va_encode_free_buffer,
> +                           avctx, AV_BUFFER_FLAG_READONLY);
> +    if (!ref) {
> +        D3D12_OBJECT_RELEASE(pResource);
> +        return NULL;
> +    }
> +
> +    return ref;
> +}
> +
> +static int d3d12va_encode_prepare_output_buffers(AVCodecContext *avctx)
> +{
> +    HWBaseEncodeContext *base_ctx      = avctx->priv_data;
> +    D3D12VAEncodeContext *ctx          = avctx->priv_data;
> +    AVD3D12VAFramesContext *frames_ctx = base_ctx->input_frames->hwctx;
> +    HRESULT hr;
> +
> +    ctx->req.NodeIndex               = 0;
> +    ctx->req.Codec                   = ctx->codec->d3d12_codec;
> +    ctx->req.Profile                 = ctx->profile->d3d12_profile;
> +    ctx->req.InputFormat             = frames_ctx->format;
> +    ctx->req.PictureTargetResolution = ctx->resolution;
> +
> +    hr = ID3D12VideoDevice3_CheckFeatureSupport(ctx->video_device3,
> +                                                D3D12_FEATURE_VIDEO_ENCODER_RESOURCE_REQUIREMENTS,
> +                                                &ctx->req, sizeof(ctx->req));
> +    if (FAILED(hr)) {
> +        av_log(avctx, AV_LOG_ERROR, "Failed to check encoder resource requirements support.\n");
> +        return AVERROR(EINVAL);
> +    }
> +
> +    if (!ctx->req.IsSupported) {
> +        av_log(avctx, AV_LOG_ERROR, "Encoder resource requirements unsupported.\n");
> +        return AVERROR(EINVAL);
> +    }
> +
> +    ctx->output_buffer_pool = av_buffer_pool_init2(sizeof(ID3D12Resource *), avctx,
> +                                                   &d3d12va_encode_alloc_output_buffer, NULL);
> +    if (!ctx->output_buffer_pool)
> +        return AVERROR(ENOMEM);
> +
> +    return 0;
> +}
> +
> +static int d3d12va_encode_create_command_objects(AVCodecContext *avctx)
> +{
> +    D3D12VAEncodeContext *ctx = avctx->priv_data;
> +    ID3D12CommandAllocator *command_allocator = NULL;
> +    int err;
> +    HRESULT hr;
> +
> +    D3D12_COMMAND_QUEUE_DESC queue_desc = {
> +        .Type     = D3D12_COMMAND_LIST_TYPE_VIDEO_ENCODE,
> +        .Priority = 0,
> +        .Flags    = D3D12_COMMAND_QUEUE_FLAG_NONE,
> +        .NodeMask = 0,
> +    };
> +
> +    ctx->allocator_queue = av_fifo_alloc2(D3D12VA_VIDEO_ENC_ASYNC_DEPTH,
> +                                          sizeof(CommandAllocator), AV_FIFO_FLAG_AUTO_GROW);
> +    if (!ctx->allocator_queue)
> +        return AVERROR(ENOMEM);
> +
> +    hr = ID3D12Device_CreateFence(ctx->hwctx->device, 0, D3D12_FENCE_FLAG_NONE,
> +                                  &IID_ID3D12Fence, (void **)&ctx->sync_ctx.fence);
> +    if (FAILED(hr)) {
> +        av_log(avctx, AV_LOG_ERROR, "Failed to create fence(%lx)\n", (long)hr);
> +        err = AVERROR_UNKNOWN;
> +        goto fail;
> +    }
> +
> +    ctx->sync_ctx.event = CreateEvent(NULL, FALSE, FALSE, NULL);
> +    if (!ctx->sync_ctx.event) {
> +        err = AVERROR_UNKNOWN;
> +        goto fail;
> +    }
> +
> +    err = d3d12va_get_valid_command_allocator(avctx, &command_allocator);
> +    if (err < 0)
> +        goto fail;
> +
> +    hr = ID3D12Device_CreateCommandQueue(ctx->hwctx->device, &queue_desc,
> +                                         &IID_ID3D12CommandQueue, (void **)&ctx->command_queue);
> +    if (FAILED(hr)) {
> +        av_log(avctx, AV_LOG_ERROR, "Failed to create command queue(%lx)\n", (long)hr);
> +        err = AVERROR_UNKNOWN;
> +        goto fail;
> +    }
> +
> +    hr = ID3D12Device_CreateCommandList(ctx->hwctx->device, 0, queue_desc.Type,
> +                                        command_allocator, NULL, &IID_ID3D12CommandList,
> +                                        (void **)&ctx->command_list);
> +    if (FAILED(hr)) {
> +        av_log(avctx, AV_LOG_ERROR, "Failed to create command list(%lx)\n", (long)hr);
> +        err = AVERROR_UNKNOWN;
> +        goto fail;
> +    }
> +
> +    hr = ID3D12VideoEncodeCommandList2_Close(ctx->command_list);
> +    if (FAILED(hr)) {
> +        av_log(avctx, AV_LOG_ERROR, "Failed to close the command list(%lx)\n", (long)hr);
> +        err = AVERROR_UNKNOWN;
> +        goto fail;
> +    }
> +
> +    ID3D12CommandQueue_ExecuteCommandLists(ctx->command_queue, 1,
> +                                           (ID3D12CommandList **)&ctx->command_list);
> +
> +    err = d3d12va_sync_with_gpu(avctx);
> +    if (err < 0)
> +        goto fail;
> +
> +    err = d3d12va_discard_command_allocator(avctx, command_allocator,
> +                                            ctx->sync_ctx.fence_value);
> +    if (err < 0)
> +        goto fail;
> +
> +    return 0;
> +
> +fail:
> +    D3D12_OBJECT_RELEASE(command_allocator);
> +    return err;
> +}
> +
> +static int d3d12va_encode_create_recon_frames(AVCodecContext *avctx)
> +{
> +    HWBaseEncodeContext *base_ctx = avctx->priv_data;
> +    AVD3D12VAFramesContext *hwctx;
> +    enum AVPixelFormat recon_format;
> +    int err;
> +
> +    err = ff_hw_base_get_recon_format(avctx, NULL, &recon_format);
> +    if (err < 0)
> +        return err;
> +
> +    base_ctx->recon_frames_ref = av_hwframe_ctx_alloc(base_ctx->device_ref);
> +    if (!base_ctx->recon_frames_ref)
> +        return AVERROR(ENOMEM);
> +
> +    base_ctx->recon_frames = (AVHWFramesContext *)base_ctx->recon_frames_ref->data;
> +    hwctx = (AVD3D12VAFramesContext *)base_ctx->recon_frames->hwctx;
> +
> +    base_ctx->recon_frames->format    = AV_PIX_FMT_D3D12;
> +    base_ctx->recon_frames->sw_format = recon_format;
> +    base_ctx->recon_frames->width     = base_ctx->surface_width;
> +    base_ctx->recon_frames->height    = base_ctx->surface_height;
> +
> +    hwctx->flags = D3D12_RESOURCE_FLAG_VIDEO_ENCODE_REFERENCE_ONLY |
> +                   D3D12_RESOURCE_FLAG_DENY_SHADER_RESOURCE;
> +
> +    err = av_hwframe_ctx_init(base_ctx->recon_frames_ref);
> +    if (err < 0) {
> +        av_log(avctx, AV_LOG_ERROR, "Failed to initialise reconstructed "
> +               "frame context: %d.\n", err);
> +        return err;
> +    }
> +
> +    return 0;
> +}
> +
> +static const HWEncodePictureOperation d3d12va_type = {
> +    .alloc  = &d3d12va_encode_alloc,
> +
> +    .issue  = &d3d12va_encode_issue,
> +
> +    .output = &d3d12va_encode_output,
> +
> +    .free   = &d3d12va_encode_free,
> +};
> +
> +int ff_d3d12va_encode_init(AVCodecContext *avctx)
> +{
> +    HWBaseEncodeContext *base_ctx = avctx->priv_data;
> +    D3D12VAEncodeContext     *ctx = avctx->priv_data;
> +    D3D12_FEATURE_DATA_VIDEO_FEATURE_AREA_SUPPORT support = { 0 };
> +    int err;
> +    HRESULT hr;
> +
> +    err = ff_hw_base_encode_init(avctx);
> +    if (err < 0)
> +        goto fail;
> +
> +    base_ctx->op = &d3d12va_type;
> +
> +    ctx->hwctx = base_ctx->device->hwctx;
> +
> +    ctx->resolution.Width  = base_ctx->input_frames->width;
> +    ctx->resolution.Height = base_ctx->input_frames->height;
> +
> +    hr = ID3D12Device_QueryInterface(ctx->hwctx->device, &IID_ID3D12Device3,
> +                                     (void **)&ctx->device3);
> +    if (FAILED(hr)) {
> +        av_log(avctx, AV_LOG_ERROR, "ID3D12Device3 interface is not supported.\n");
> +        err = AVERROR_UNKNOWN;
> +        goto fail;
> +    }
> +
> +    hr = ID3D12Device3_QueryInterface(ctx->device3, &IID_ID3D12VideoDevice3,
> +                                      (void **)&ctx->video_device3);
> +    if (FAILED(hr)) {
> +        av_log(avctx, AV_LOG_ERROR, "ID3D12VideoDevice3 interface is not supported.\n");
> +        err = AVERROR_UNKNOWN;
> +        goto fail;
> +    }
> +
> +    if (FAILED(ID3D12VideoDevice3_CheckFeatureSupport(ctx->video_device3,
> +                                                      D3D12_FEATURE_VIDEO_FEATURE_AREA_SUPPORT,
> +                                                      &support, sizeof(support))) ||
> +        !support.VideoEncodeSupport) {
> +        av_log(avctx, AV_LOG_ERROR, "D3D12 video device has no video encoder support.\n");
> +        err = AVERROR(EINVAL);
> +        goto fail;
> +    }
> +
> +    err = d3d12va_encode_set_profile(avctx);
> +    if (err < 0)
> +        goto fail;
> +
> +    if (ctx->codec->get_encoder_caps) {
> +        err = ctx->codec->get_encoder_caps(avctx);
> +        if (err < 0)
> +            goto fail;
> +    }
> +
> +    err = d3d12va_encode_init_rate_control(avctx);
> +    if (err < 0)
> +        goto fail;
> +
> +    err = d3d12va_encode_init_gop_structure(avctx);
> +    if (err < 0)
> +        goto fail;
> +
> +    if (!(ctx->codec->flags & FLAG_SLICE_CONTROL) && avctx->slices > 0) {
> +        av_log(avctx, AV_LOG_WARNING, "Multiple slices were requested "
> +               "but this codec does not support controlling slices.\n");
> +    }
> +
> +    err = d3d12va_encode_create_command_objects(avctx);
> +    if (err < 0)
> +        goto fail;
> +
> +    err = d3d12va_encode_create_recon_frames(avctx);
> +    if (err < 0)
> +        goto fail;
> +
> +    err = d3d12va_encode_prepare_output_buffers(avctx);
> +    if (err < 0)
> +        goto fail;
> +
> +    if (ctx->codec->configure) {
> +        err = ctx->codec->configure(avctx);
> +        if (err < 0)
> +            goto fail;
> +    }
> +
> +    if (ctx->codec->init_sequence_params) {
> +        err = ctx->codec->init_sequence_params(avctx);
> +        if (err < 0) {
> +            av_log(avctx, AV_LOG_ERROR, "Codec sequence initialisation "
> +                   "failed: %d.\n", err);
> +            goto fail;
> +        }
> +    }
> +
> +    if (ctx->codec->set_level) {
> +        err = ctx->codec->set_level(avctx);
> +        if (err < 0)
> +            goto fail;
> +    }
> +
> +    base_ctx->output_delay = base_ctx->b_per_p;
> +    base_ctx->decode_delay = base_ctx->max_b_depth;
> +
> +    err = d3d12va_create_encoder(avctx);
> +    if (err < 0)
> +        goto fail;
> +
> +    err = d3d12va_create_encoder_heap(avctx);
> +    if (err < 0)
> +        goto fail;
> +
> +    base_ctx->async_encode = 1;
> +    base_ctx->encode_fifo = av_fifo_alloc2(base_ctx->async_depth,
> +                                           sizeof(D3D12VAEncodePicture *), 0);
> +    if (!base_ctx->encode_fifo)
> +        return AVERROR(ENOMEM);
> +
> +    return 0;
> +
> +fail:
> +    return err;
> +}
> +
> +int ff_d3d12va_encode_close(AVCodecContext *avctx)
> +{
> +    int num_allocator = 0;
> +    HWBaseEncodeContext *base_ctx = avctx->priv_data;
> +    D3D12VAEncodeContext     *ctx = avctx->priv_data;
> +    HWBaseEncodePicture *pic, *next;
> +    CommandAllocator allocator;
> +
> +    if (!base_ctx->frame)
> +        return 0;
> +
> +    for (pic = base_ctx->pic_start; pic; pic = next) {
> +        next = pic->next;
> +        d3d12va_encode_free(avctx, pic);
> +    }
> +
> +    d3d12va_encode_free_rc_params(avctx);
> +
> +    av_buffer_pool_uninit(&ctx->output_buffer_pool);
> +
> +    D3D12_OBJECT_RELEASE(ctx->command_list);
> +    D3D12_OBJECT_RELEASE(ctx->command_queue);
> +
> +    if (ctx->allocator_queue) {
> +        while (av_fifo_read(ctx->allocator_queue, &allocator, 1) >= 0) {
> +            num_allocator++;
> +            D3D12_OBJECT_RELEASE(allocator.command_allocator);
> +        }
> +
> +        av_log(avctx, AV_LOG_VERBOSE, "Total number of command allocators reused: %d\n", num_allocator);
> +    }
> +
> +    av_fifo_freep2(&ctx->allocator_queue);
> +
> +    D3D12_OBJECT_RELEASE(ctx->sync_ctx.fence);
> +    if (ctx->sync_ctx.event)
> +        CloseHandle(ctx->sync_ctx.event);
> +
> +    D3D12_OBJECT_RELEASE(ctx->encoder_heap);
> +    D3D12_OBJECT_RELEASE(ctx->encoder);
> +    D3D12_OBJECT_RELEASE(ctx->video_device3);
> +    D3D12_OBJECT_RELEASE(ctx->device3);
> +
> +    ff_hw_base_encode_close(avctx);
> +
> +    return 0;
> +}
> diff --git a/libavcodec/d3d12va_encode.h b/libavcodec/d3d12va_encode.h
> new file mode 100644
> index 0000000000..10e2d87035
> --- /dev/null
> +++ b/libavcodec/d3d12va_encode.h
> @@ -0,0 +1,321 @@
> +/*
> + * Direct3D 12 HW acceleration video encoder
> + *
> + * Copyright (c) 2024 Intel Corporation
> + *
> + * This file is part of FFmpeg.
> + *
> + * FFmpeg is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU Lesser General Public
> + * License as published by the Free Software Foundation; either
> + * version 2.1 of the License, or (at your option) any later version.
> + *
> + * FFmpeg is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * Lesser General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public
> + * License along with FFmpeg; if not, write to the Free Software
> + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
> + */
> +
> +#ifndef AVCODEC_D3D12VA_ENCODE_H
> +#define AVCODEC_D3D12VA_ENCODE_H
> +
> +#include "libavutil/fifo.h"
> +#include "libavutil/hwcontext.h"
> +#include "libavutil/hwcontext_d3d12va_internal.h"
> +#include "libavutil/hwcontext_d3d12va.h"
> +#include "avcodec.h"
> +#include "internal.h"
> +#include "hwconfig.h"
> +#include "hw_base_encode.h"
> +
> +struct D3D12VAEncodeType;
> +
> +extern const AVCodecHWConfigInternal *const ff_d3d12va_encode_hw_configs[];
> +
> +#define MAX_PARAM_BUFFER_SIZE 4096
> +#define D3D12VA_VIDEO_ENC_ASYNC_DEPTH 8
> +
> +typedef struct D3D12VAEncodePicture {
> +    HWBaseEncodePicture base;
> +
> +    int             header_size;
> +
> +    AVD3D12VAFrame *input_surface;
> +    AVD3D12VAFrame *recon_surface;
> +
> +    AVBufferRef    *output_buffer_ref;
> +    ID3D12Resource *output_buffer;
> +
> +    ID3D12Resource *encoded_metadata;
> +    ID3D12Resource *resolved_metadata;
> +
> +    D3D12_VIDEO_ENCODER_PICTURE_CONTROL_CODEC_DATA pic_ctl;
> +
> +    int             fence_value;
> +} D3D12VAEncodePicture;
> +
> +typedef struct D3D12VAEncodeProfile {
> +    /**
> +     * lavc profile value (AV_PROFILE_*).
> +     */
> +    int       av_profile;
> +
> +    /**
> +     * Supported bit depth.
> +     */
> +    int       depth;
> +
> +    /**
> +     * Number of components.
> +     */
> +    int       nb_components;
> +
> +    /**
> +     * Chroma subsampling in width dimension.
> +     */
> +    int       log2_chroma_w;
> +
> +    /**
> +     * Chroma subsampling in height dimension.
> +     */
> +    int       log2_chroma_h;
> +
> +    /**
> +     * D3D12 profile value.
> +     */
> +    D3D12_VIDEO_ENCODER_PROFILE_DESC d3d12_profile;
> +} D3D12VAEncodeProfile;
> +
> +enum {
> +    RC_MODE_AUTO,
> +    RC_MODE_CQP,
> +    RC_MODE_CBR,
> +    RC_MODE_VBR,
> +    RC_MODE_QVBR,
> +    RC_MODE_MAX = RC_MODE_QVBR,
> +};
> +
> +
> +typedef struct D3D12VAEncodeRCMode {
> +    /**
> +     * Mode from above enum (RC_MODE_*).
> +     */
> +    int mode;
> +
> +    /**
> +     * Name.
> +     *
> +     */
> +    const char *name;
> +
> +    /**
> +     * Uses bitrate parameters.
> +     *
> +     */
> +    int bitrate;
> +
> +    /**
> +     * Supports maxrate distinct from bitrate.
> +     *
> +     */
> +    int maxrate;
> +
> +    /**
> +     * Uses quality value.
> +     *
> +     */
> +    int quality;
> +
> +    /**
> +     * Supports HRD/VBV parameters.
> +     *
> +     */
> +    int hrd;
> +
> +    /**
> +     * Supported by D3D12 HW.
> +     */
> +    int supported;
> +
> +    /**
> +     * D3D12 mode value.
> +     */
> +    D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE d3d12_mode;
> +} D3D12VAEncodeRCMode;
> +
> +typedef struct D3D12VAEncodeContext {
> +    HWBaseEncodeContext base;
> +
> +    /**
> +     * Codec-specific hooks.
> +     */
> +    const struct D3D12VAEncodeType *codec;
> +
> +    /**
> +     * Chosen encoding profile details.
> +     */
> +    const D3D12VAEncodeProfile *profile;
> +
> +    AVD3D12VADeviceContext *hwctx;
> +
> +    /**
> +     * ID3D12Device3 interface.
> +     */
> +    ID3D12Device3 *device3;
> +
> +    /**
> +     * ID3D12VideoDevice3 interface.
> +     */
> +    ID3D12VideoDevice3 *video_device3;
> +
> +    /**
> +     * Pool of (reusable) bitstream output buffers.
> +     */
> +    AVBufferPool   *output_buffer_pool;
> +
> +    /**
> +     * D3D12 video encoder.
> +     */
> +    AVBufferRef *encoder_ref;
> +
> +    ID3D12VideoEncoder *encoder;
> +
> +    /**
> +     * D3D12 video encoder heap.
> +     */
> +    ID3D12VideoEncoderHeap *encoder_heap;
> +
> +    /**
> +     * A cached queue for reusing the D3D12 command allocators.
> +     *
> +     * @see https://learn.microsoft.com/en-us/windows/win32/direct3d12/recording-command-lists-and-bundles#id3d12commandallocator
> +     */
> +    AVFifo *allocator_queue;
> +
> +    /**
> +     * D3D12 command queue.
> +     */
> +    ID3D12CommandQueue *command_queue;
> +
> +    /**
> +     * D3D12 video encode command list.
> +     */
> +    ID3D12VideoEncodeCommandList2 *command_list;
> +
> +    /**
> +     * The sync context used to sync command queue.
> +     */
> +    AVD3D12VASyncContext sync_ctx;
> +
> +    /**
> +     * The bi_not_empty feature.
> +     */
> +    int bi_not_empty;
> +
> +    /**
> +     * D3D12_FEATURE structures.
> +     */
> +    D3D12_FEATURE_DATA_VIDEO_ENCODER_RESOURCE_REQUIREMENTS req;
> +
> +    D3D12_FEATURE_DATA_VIDEO_ENCODER_RESOLUTION_SUPPORT_LIMITS res_limits;
> +
> +    /**
> +     * D3D12_VIDEO_ENCODER structures.
> +     */
> +    D3D12_VIDEO_ENCODER_PICTURE_RESOLUTION_DESC resolution;
> +
> +    D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION codec_conf;
> +
> +    D3D12_VIDEO_ENCODER_RATE_CONTROL rc;
> +
> +    D3D12_VIDEO_ENCODER_SEQUENCE_GOP_STRUCTURE gop;
> +
> +    D3D12_VIDEO_ENCODER_LEVEL_SETTING level;
> +} D3D12VAEncodeContext;
> +
> +typedef struct D3D12VAEncodeType {
> +    /**
> +     * List of supported profiles.
> +     */
> +    const D3D12VAEncodeProfile *profiles;
> +
> +    /**
> +     * D3D12 codec name.
> +     */
> +    D3D12_VIDEO_ENCODER_CODEC d3d12_codec;
> +
> +    /**
> +     * Codec feature flags.
> +     */
> +    int flags;
> +
> +    /**
> +     * Default quality for this codec - used as quantiser or RC quality
> +     * factor depending on RC mode.
> +     */
> +    int default_quality;
> +
> +    /**
> +     * Query codec configuration and determine encode parameters like
> +     * block sizes for surface alignment and slices. If not set, assume
> +     * that all blocks are 16x16 and that surfaces should be aligned to match
> +     * this.
> +     */
> +    int (*get_encoder_caps)(AVCodecContext *avctx);
> +
> +    /**
> +     * Perform any extra codec-specific configuration.
> +     */
> +    int (*configure)(AVCodecContext *avctx);
> +
> +    /**
> +     * Set codec-specific level setting.
> +     */
> +    int (*set_level)(AVCodecContext *avctx);
> +
> +    /**
> +     * The size of any private data structure associated with each
> +     * picture (can be zero if not required).
> +     */
> +    size_t picture_priv_data_size;
> +
> +    /**
> +     * Fill the corresponding parameters.
> +     */
> +    int (*init_sequence_params)(AVCodecContext *avctx);
> +
> +    int (*init_picture_params)(AVCodecContext *avctx,
> +                               D3D12VAEncodePicture *pic);
> +
> +    void (*free_picture_params)(D3D12VAEncodePicture *pic);
> +
> +    /**
> +     * Write the packed header data to the provided buffer.
> +     */
> +    int (*write_sequence_header)(AVCodecContext *avctx,
> +                                 char *data, size_t *data_len);
> +} D3D12VAEncodeType;
> +
> +int ff_d3d12va_encode_init(AVCodecContext *avctx);
> +int ff_d3d12va_encode_close(AVCodecContext *avctx);
> +
> +#define D3D12VA_ENCODE_RC_MODE(name, desc) \
> +    { #name, desc, 0, AV_OPT_TYPE_CONST, { .i64 = RC_MODE_ ## name }, \
> +      0, 0, FLAGS, .unit = "rc_mode" }
> +#define D3D12VA_ENCODE_RC_OPTIONS \
> +    { "rc_mode",\
> +      "Set rate control mode", \
> +      OFFSET(common.base.explicit_rc_mode), AV_OPT_TYPE_INT, \
> +      { .i64 = RC_MODE_AUTO }, RC_MODE_AUTO, RC_MODE_MAX, FLAGS, .unit = "rc_mode" }, \
> +    { "auto", "Choose mode automatically based on other parameters", \
> +      0, AV_OPT_TYPE_CONST, { .i64 = RC_MODE_AUTO }, 0, 0, FLAGS, .unit = "rc_mode" }, \
> +    D3D12VA_ENCODE_RC_MODE(CQP,  "Constant-quality"), \
> +    D3D12VA_ENCODE_RC_MODE(CBR,  "Constant-bitrate"), \
> +    D3D12VA_ENCODE_RC_MODE(VBR,  "Variable-bitrate"), \
> +    D3D12VA_ENCODE_RC_MODE(QVBR, "Quality-defined variable-bitrate")
> +
> +#endif /* AVCODEC_D3D12VA_ENCODE_H */
> diff --git a/libavcodec/d3d12va_encode_hevc.c b/libavcodec/d3d12va_encode_hevc.c
> new file mode 100644
> index 0000000000..aec0d9dcec
> --- /dev/null
> +++ b/libavcodec/d3d12va_encode_hevc.c
> @@ -0,0 +1,957 @@
> +/*
> + * Direct3D 12 HW acceleration video encoder
> + *
> + * Copyright (c) 2024 Intel Corporation
> + *
> + * This file is part of FFmpeg.
> + *
> + * FFmpeg is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU Lesser General Public
> + * License as published by the Free Software Foundation; either
> + * version 2.1 of the License, or (at your option) any later version.
> + *
> + * FFmpeg is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * Lesser General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public
> + * License along with FFmpeg; if not, write to the Free Software
> + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
> + */
> +#include "libavutil/opt.h"
> +#include "libavutil/common.h"
> +#include "libavutil/pixdesc.h"
> +#include "libavutil/hwcontext_d3d12va_internal.h"
> +
> +#include "avcodec.h"
> +#include "cbs.h"
> +#include "cbs_h265.h"
> +#include "h2645data.h"
> +#include "h265_profile_level.h"
> +#include "codec_internal.h"
> +#include "d3d12va_encode.h"
> +
> +typedef struct D3D12VAEncodeHEVCPicture {
> +    int pic_order_cnt;
> +    int64_t last_idr_frame;
> +} D3D12VAEncodeHEVCPicture;
> +
> +typedef struct D3D12VAEncodeHEVCContext {
> +    D3D12VAEncodeContext common;
> +
> +    // User options.
> +    int qp;
> +    int profile;
> +    int tier;
> +    int level;
> +
> +    // Writer structures.
> +    H265RawVPS   raw_vps;
> +    H265RawSPS   raw_sps;
> +    H265RawPPS   raw_pps;
> +
> +    CodedBitstreamContext *cbc;
> +    CodedBitstreamFragment current_access_unit;
> +} D3D12VAEncodeHEVCContext;
> +
> +typedef struct D3D12VAEncodeHEVCLevel {
> +    int level;
> +    D3D12_VIDEO_ENCODER_LEVELS_HEVC d3d12_level;
> +} D3D12VAEncodeHEVCLevel;
> +
> +static const D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC hevc_config_support_sets[] =
> +{
> +    {
> +        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_NONE,
> +        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_8x8,
> +        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_32x32,
> +        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_4x4,
> +        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_32x32,
> +        3,
> +        3,
> +    },
> +    {
> +        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_NONE,
> +        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_8x8,
> +        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_32x32,
> +        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_4x4,
> +        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_32x32,
> +        0,
> +        0,
> +    },
> +    {
> +        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_NONE,
> +        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_8x8,
> +        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_32x32,
> +        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_4x4,
> +        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_32x32,
> +        2,
> +        2,
> +    },
> +    {
> +        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_NONE,
> +        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_8x8,
> +        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_64x64,
> +        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_4x4,
> +        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_32x32,
> +        2,
> +        2,
> +    },
> +    {
> +        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_NONE,
> +        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_8x8,
> +        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_64x64,
> +        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_4x4,
> +        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_32x32,
> +        4,
> +        4,
> +    },
> +};
> +
> +static const D3D12VAEncodeHEVCLevel hevc_levels[] = {
> +    { 30,  D3D12_VIDEO_ENCODER_LEVELS_HEVC_1  },
> +    { 60,  D3D12_VIDEO_ENCODER_LEVELS_HEVC_2  },
> +    { 63,  D3D12_VIDEO_ENCODER_LEVELS_HEVC_21 },
> +    { 90,  D3D12_VIDEO_ENCODER_LEVELS_HEVC_3  },
> +    { 93,  D3D12_VIDEO_ENCODER_LEVELS_HEVC_31 },
> +    { 120, D3D12_VIDEO_ENCODER_LEVELS_HEVC_4  },
> +    { 123, D3D12_VIDEO_ENCODER_LEVELS_HEVC_41 },
> +    { 150, D3D12_VIDEO_ENCODER_LEVELS_HEVC_5  },
> +    { 153, D3D12_VIDEO_ENCODER_LEVELS_HEVC_51 },
> +    { 156, D3D12_VIDEO_ENCODER_LEVELS_HEVC_52 },
> +    { 180, D3D12_VIDEO_ENCODER_LEVELS_HEVC_6  },
> +    { 183, D3D12_VIDEO_ENCODER_LEVELS_HEVC_61 },
> +    { 186, D3D12_VIDEO_ENCODER_LEVELS_HEVC_62 },
> +};
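For readers of the table above: HEVC's general_level_idc is 30 times the level number (so 4.1 encodes as 123, 5.1 as 153), and the table pairs each idc with the corresponding D3D12 enum. Presumably the set_level hook does a simple search over it; a minimal sketch under that assumption, with hypothetical stand-in names for the D3D12 enum values:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins; the real code uses D3D12_VIDEO_ENCODER_LEVELS_HEVC_*. */
typedef enum {
    LEVEL_HEVC_1, LEVEL_HEVC_2, LEVEL_HEVC_21,
    LEVEL_HEVC_4, LEVEL_HEVC_51, LEVEL_HEVC_62,
    LEVEL_HEVC_UNKNOWN
} LevelHEVC;

typedef struct { int level_idc; LevelHEVC d3d12_level; } LevelEntry;

/* general_level_idc = 30 * level number, as in the quoted table. */
static const LevelEntry levels[] = {
    {  30, LEVEL_HEVC_1  }, {  60, LEVEL_HEVC_2  }, {  63, LEVEL_HEVC_21 },
    { 120, LEVEL_HEVC_4  }, { 153, LEVEL_HEVC_51 }, { 186, LEVEL_HEVC_62 },
};

/* Linear search for an exact match; the table is small enough that
 * nothing cleverer is needed. */
static LevelHEVC map_level_idc(int level_idc)
{
    for (size_t i = 0; i < sizeof(levels) / sizeof(levels[0]); i++)
        if (levels[i].level_idc == level_idc)
            return levels[i].d3d12_level;
    return LEVEL_HEVC_UNKNOWN;
}
```

This is only an illustration of the table's semantics, not the patch's actual lookup.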
> +
> +static const D3D12_VIDEO_ENCODER_PROFILE_HEVC profile_main   = D3D12_VIDEO_ENCODER_PROFILE_HEVC_MAIN;
> +static const D3D12_VIDEO_ENCODER_PROFILE_HEVC profile_main10 = D3D12_VIDEO_ENCODER_PROFILE_HEVC_MAIN10;
> +
> +#define D3D_PROFILE_DESC(name) \
> +    { sizeof(D3D12_VIDEO_ENCODER_PROFILE_HEVC), { .pHEVCProfile = (D3D12_VIDEO_ENCODER_PROFILE_HEVC *)&profile_ ## name } }
> +static const D3D12VAEncodeProfile d3d12va_encode_hevc_profiles[] = {
> +    { AV_PROFILE_HEVC_MAIN,     8, 3, 1, 1, D3D_PROFILE_DESC(main)   },
> +    { AV_PROFILE_HEVC_MAIN_10, 10, 3, 1, 1, D3D_PROFILE_DESC(main10) },
> +    { AV_PROFILE_UNKNOWN },
> +};
> +
> +static uint8_t d3d12va_encode_hevc_map_cusize(D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE cusize)
> +{
> +    switch (cusize) {
> +        case D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_8x8:   return 8;
> +        case D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_16x16: return 16;
> +        case D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_32x32: return 32;
> +        case D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_CUSIZE_64x64: return 64;
> +        default: av_assert0(0);
> +    }
> +    return 0;
> +}
> +
> +static uint8_t d3d12va_encode_hevc_map_tusize(D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE tusize)
> +{
> +    switch (tusize) {
> +        case D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_4x4:   return 4;
> +        case D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_8x8:   return 8;
> +        case D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_16x16: return 16;
> +        case D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_TUSIZE_32x32: return 32;
> +        default: av_assert0(0);
> +    }
> +    return 0;
> +}
> +
> +static int d3d12va_encode_hevc_write_access_unit(AVCodecContext *avctx,
> +                                                 char *data, size_t *data_len,
> +                                                 CodedBitstreamFragment *au)
> +{
> +    D3D12VAEncodeHEVCContext *priv = avctx->priv_data;
> +    int err;
> +
> +    err = ff_cbs_write_fragment_data(priv->cbc, au);
> +    if (err < 0) {
> +        av_log(avctx, AV_LOG_ERROR, "Failed to write packed header.\n");
> +        return err;
> +    }
> +
> +    if (*data_len < 8 * au->data_size - au->data_bit_padding) {
> +        av_log(avctx, AV_LOG_ERROR, "Access unit too large: "
> +               "%zu < %zu.\n", *data_len,
> +               8 * au->data_size - au->data_bit_padding);
> +        return AVERROR(ENOSPC);
> +    }
> +
> +    memcpy(data, au->data, au->data_size);
> +    *data_len = 8 * au->data_size - au->data_bit_padding;
> +
> +    return 0;
> +}
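As an aside on the size check above: the CBS fragment carries a byte size plus a count of padding bits, so the exact payload length in bits is `8 * data_size - data_bit_padding`. A minimal stand-alone sketch of that arithmetic (the helper name is mine, not from the patch):

```c
#include <assert.h>
#include <stddef.h>

/* Exact bit length of a written access unit: the fragment size is in
 * bytes, minus any trailing padding bits, as in the check above. */
static size_t au_bits(size_t data_size, int data_bit_padding)
{
    return 8 * data_size - data_bit_padding;
}
```

So a 10-byte fragment with 3 padding bits is 77 bits of payload, which is what `*data_len` must be able to hold.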
> +
> +static int d3d12va_encode_hevc_add_nal(AVCodecContext *avctx,
> +                                       CodedBitstreamFragment *au,
> +                                       void *nal_unit)
> +{
> +    H265RawNALUnitHeader *header = nal_unit;
> +    int err;
> +
> +    err = ff_cbs_insert_unit_content(au, -1,
> +                                     header->nal_unit_type, nal_unit, NULL);
> +    if (err < 0) {
> +        av_log(avctx, AV_LOG_ERROR, "Failed to add NAL unit: "
> +               "type = %d.\n", header->nal_unit_type);
> +        return err;
> +    }
> +
> +    return 0;
> +}
> +
> +static int d3d12va_encode_hevc_write_sequence_header(AVCodecContext *avctx,
> +                                                     char *data, size_t *data_len)
> +{
> +    D3D12VAEncodeHEVCContext *priv = avctx->priv_data;
> +    CodedBitstreamFragment   *au   = &priv->current_access_unit;
> +    int err;
> +
> +    err = d3d12va_encode_hevc_add_nal(avctx, au, &priv->raw_vps);
> +    if (err < 0)
> +        goto fail;
> +
> +    err = d3d12va_encode_hevc_add_nal(avctx, au, &priv->raw_sps);
> +    if (err < 0)
> +        goto fail;
> +
> +    err = d3d12va_encode_hevc_add_nal(avctx, au, &priv->raw_pps);
> +    if (err < 0)
> +        goto fail;
> +
> +    err = d3d12va_encode_hevc_write_access_unit(avctx, data, data_len, au);
> +fail:
> +    ff_cbs_fragment_reset(au);
> +    return err;
> +}
> +
> +static int d3d12va_encode_hevc_init_sequence_params(AVCodecContext *avctx)
> +{
> +    HWBaseEncodeContext  *base_ctx = avctx->priv_data;
> +    D3D12VAEncodeContext     *ctx  = avctx->priv_data;
> +    D3D12VAEncodeHEVCContext *priv = avctx->priv_data;
> +    AVD3D12VAFramesContext  *hwctx = base_ctx->input_frames->hwctx;
> +    H265RawVPS               *vps  = &priv->raw_vps;
> +    H265RawSPS               *sps  = &priv->raw_sps;
> +    H265RawPPS               *pps  = &priv->raw_pps;
> +    H265RawProfileTierLevel  *ptl  = &vps->profile_tier_level;
> +    H265RawVUI               *vui  = &sps->vui;
> +    D3D12_VIDEO_ENCODER_PROFILE_HEVC profile = D3D12_VIDEO_ENCODER_PROFILE_HEVC_MAIN;
> +    D3D12_VIDEO_ENCODER_LEVEL_TIER_CONSTRAINTS_HEVC level = { 0 };
> +    const AVPixFmtDescriptor *desc;
> +    uint8_t min_cu_size, max_cu_size, min_tu_size, max_tu_size;
> +    int chroma_format, bit_depth;
> +    HRESULT hr;
> +    int i;
> +
> +    D3D12_FEATURE_DATA_VIDEO_ENCODER_SUPPORT support = {
> +        .NodeIndex                        = 0,
> +        .Codec                            = D3D12_VIDEO_ENCODER_CODEC_HEVC,
> +        .InputFormat                      = hwctx->format,
> +        .RateControl                      = ctx->rc,
> +        .IntraRefresh                     = D3D12_VIDEO_ENCODER_INTRA_REFRESH_MODE_NONE,
> +        .SubregionFrameEncoding           = D3D12_VIDEO_ENCODER_FRAME_SUBREGION_LAYOUT_MODE_FULL_FRAME,
> +        .ResolutionsListCount             = 1,
> +        .pResolutionList                  = &ctx->resolution,
> +        .CodecGopSequence                 = ctx->gop,
> +        .MaxReferenceFramesInDPB          = MAX_DPB_SIZE - 1,
> +        .CodecConfiguration               = ctx->codec_conf,
> +        .SuggestedProfile.DataSize        = sizeof(D3D12_VIDEO_ENCODER_PROFILE_HEVC),
> +        .SuggestedProfile.pHEVCProfile    = &profile,
> +        .SuggestedLevel.DataSize          = sizeof(D3D12_VIDEO_ENCODER_LEVEL_TIER_CONSTRAINTS_HEVC),
> +        .SuggestedLevel.pHEVCLevelSetting = &level,
> +        .pResolutionDependentSupport      = &ctx->res_limits,
> +    };
> +
> +    hr = ID3D12VideoDevice3_CheckFeatureSupport(ctx->video_device3, D3D12_FEATURE_VIDEO_ENCODER_SUPPORT,
> +                                                &support, sizeof(support));
> +
> +    if (FAILED(hr)) {
> +        av_log(avctx, AV_LOG_ERROR, "Failed to check encoder support (%lx).\n", (long)hr);
> +        return AVERROR(EINVAL);
> +    }
> +
> +    if (!(support.SupportFlags & D3D12_VIDEO_ENCODER_SUPPORT_FLAG_GENERAL_SUPPORT_OK)) {
> +        av_log(avctx, AV_LOG_ERROR, "Driver does not support some requested features. %#x\n",
> +               support.ValidationFlags);
> +        return AVERROR(EINVAL);
> +    }
> +
> +    if (support.SupportFlags & D3D12_VIDEO_ENCODER_SUPPORT_FLAG_RECONSTRUCTED_FRAMES_REQUIRE_TEXTURE_ARRAYS) {
> +        av_log(avctx, AV_LOG_ERROR, "D3D12 video encode on this device requires texture array support, "
> +               "but it's not implemented.\n");
> +        return AVERROR_PATCHWELCOME;
> +    }
> +
> +    memset(vps, 0, sizeof(*vps));
> +    memset(sps, 0, sizeof(*sps));
> +    memset(pps, 0, sizeof(*pps));
> +
> +    desc = av_pix_fmt_desc_get(base_ctx->input_frames->sw_format);
> +    av_assert0(desc);
> +    if (desc->nb_components == 1) {
> +        chroma_format = 0;
> +    } else {
> +        if (desc->log2_chroma_w == 1 && desc->log2_chroma_h == 1) {
> +            chroma_format = 1;
> +        } else if (desc->log2_chroma_w == 1 && desc->log2_chroma_h == 0) {
> +            chroma_format = 2;
> +        } else if (desc->log2_chroma_w == 0 && desc->log2_chroma_h == 0) {
> +            chroma_format = 3;
> +        } else {
> +            av_log(avctx, AV_LOG_ERROR, "Chroma format of input pixel format "
> +                   "%s is not supported.\n", desc->name);
> +            return AVERROR(EINVAL);
> +        }
> +    }
> +    bit_depth = desc->comp[0].depth;
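The chroma-format branch above is just a mapping from pixel-format geometry to HEVC's `chroma_format_idc`. A self-contained sketch of the same decision table (the function name is mine, for illustration only):

```c
#include <assert.h>

/* Map pixel-format geometry to HEVC chroma_format_idc:
 * 0 = monochrome, 1 = 4:2:0, 2 = 4:2:2, 3 = 4:4:4.
 * Returns -1 for subsampling layouts the patch rejects. */
static int chroma_format_idc(int nb_components, int log2_chroma_w, int log2_chroma_h)
{
    if (nb_components == 1)
        return 0;                            /* grey-only formats */
    if (log2_chroma_w == 1 && log2_chroma_h == 1)
        return 1;                            /* 4:2:0, e.g. NV12 / P010 */
    if (log2_chroma_w == 1 && log2_chroma_h == 0)
        return 2;                            /* 4:2:2 */
    if (log2_chroma_w == 0 && log2_chroma_h == 0)
        return 3;                            /* 4:4:4 */
    return -1;                               /* anything else is unsupported */
}
```

NV12 (3 components, chroma halved in both dimensions) therefore lands on idc 1, matching the Main/Main10 profiles this patch exposes.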
> +
> +    min_cu_size = d3d12va_encode_hevc_map_cusize(ctx->codec_conf.pHEVCConfig->MinLumaCodingUnitSize);
> +    max_cu_size = d3d12va_encode_hevc_map_cusize(ctx->codec_conf.pHEVCConfig->MaxLumaCodingUnitSize);
> +    min_tu_size = d3d12va_encode_hevc_map_tusize(ctx->codec_conf.pHEVCConfig->MinLumaTransformUnitSize);
> +    max_tu_size = d3d12va_encode_hevc_map_tusize(ctx->codec_conf.pHEVCConfig->MaxLumaTransformUnitSize);
> +
> +    // VPS
> +
> +    vps->nal_unit_header = (H265RawNALUnitHeader) {
> +        .nal_unit_type         = HEVC_NAL_VPS,
> +        .nuh_layer_id          = 0,
> +        .nuh_temporal_id_plus1 = 1,
> +    };
> +
> +    vps->vps_video_parameter_set_id = 0;
> +
> +    vps->vps_base_layer_internal_flag  = 1;
> +    vps->vps_base_layer_available_flag = 1;
> +    vps->vps_max_layers_minus1         = 0;
> +    vps->vps_max_sub_layers_minus1     = 0;
> +    vps->vps_temporal_id_nesting_flag  = 1;
> +
> +    ptl->general_profile_space = 0;
> +    ptl->general_profile_idc   = avctx->profile;
> +    ptl->general_tier_flag     = priv->tier;
> +
> +    ptl->general_profile_compatibility_flag[ptl->general_profile_idc] = 1;
> +
> +    ptl->general_progressive_source_flag    = 1;
> +    ptl->general_interlaced_source_flag     = 0;
> +    ptl->general_non_packed_constraint_flag = 1;
> +    ptl->general_frame_only_constraint_flag = 1;
> +
> +    ptl->general_max_14bit_constraint_flag = bit_depth <= 14;
> +    ptl->general_max_12bit_constraint_flag = bit_depth <= 12;
> +    ptl->general_max_10bit_constraint_flag = bit_depth <= 10;
> +    ptl->general_max_8bit_constraint_flag  = bit_depth ==  8;
> +
> +    ptl->general_max_422chroma_constraint_flag  = chroma_format <= 2;
> +    ptl->general_max_420chroma_constraint_flag  = chroma_format <= 1;
> +    ptl->general_max_monochrome_constraint_flag = chroma_format == 0;
> +
> +    ptl->general_intra_constraint_flag = base_ctx->gop_size == 1;
> +    ptl->general_one_picture_only_constraint_flag = 0;
> +
> +    ptl->general_lower_bit_rate_constraint_flag = 1;
> +
> +    if (avctx->level != FF_LEVEL_UNKNOWN) {
> +        ptl->general_level_idc = avctx->level;
> +    } else {
> +        const H265LevelDescriptor *level;
> +
> +        level = ff_h265_guess_level(ptl, avctx->bit_rate,
> +                                    base_ctx->surface_width, base_ctx->surface_height,
> +                                    1, 1, 1, (base_ctx->b_per_p > 0) + 1);
> +        if (level) {
> +            av_log(avctx, AV_LOG_VERBOSE, "Using level %s.\n", level->name);
> +            ptl->general_level_idc = level->level_idc;
> +        } else {
> +            av_log(avctx, AV_LOG_VERBOSE, "Stream will not conform to "
> +                   "any normal level; using level 8.5.\n");
> +            ptl->general_level_idc = 255;
> +            // The tier flag must be set in level 8.5.
> +            ptl->general_tier_flag = 1;
> +        }
> +        avctx->level = ptl->general_level_idc;
> +    }
> +
> +    vps->vps_sub_layer_ordering_info_present_flag = 0;
> +    vps->vps_max_dec_pic_buffering_minus1[0]      = base_ctx->max_b_depth + 1;
> +    vps->vps_max_num_reorder_pics[0]              = base_ctx->max_b_depth;
> +    vps->vps_max_latency_increase_plus1[0]        = 0;
> +
> +    vps->vps_max_layer_id             = 0;
> +    vps->vps_num_layer_sets_minus1    = 0;
> +    vps->layer_id_included_flag[0][0] = 1;
> +
> +    vps->vps_timing_info_present_flag = 0;
> +
> +    // SPS
> +
> +    sps->nal_unit_header = (H265RawNALUnitHeader) {
> +        .nal_unit_type         = HEVC_NAL_SPS,
> +        .nuh_layer_id          = 0,
> +        .nuh_temporal_id_plus1 = 1,
> +    };
> +
> +    sps->sps_video_parameter_set_id = vps->vps_video_parameter_set_id;
> +
> +    sps->sps_max_sub_layers_minus1    = vps->vps_max_sub_layers_minus1;
> +    sps->sps_temporal_id_nesting_flag = vps->vps_temporal_id_nesting_flag;
> +
> +    sps->profile_tier_level = vps->profile_tier_level;
> +
> +    sps->sps_seq_parameter_set_id = 0;
> +
> +    sps->chroma_format_idc          = chroma_format;
> +    sps->separate_colour_plane_flag = 0;
> +
> +    av_assert0(ctx->res_limits.SubregionBlockPixelsSize % min_cu_size == 0);
> +
> +    sps->pic_width_in_luma_samples  = FFALIGN(base_ctx->surface_width,
> +                                              ctx->res_limits.SubregionBlockPixelsSize);
> +    sps->pic_height_in_luma_samples = FFALIGN(base_ctx->surface_height,
> +                                              ctx->res_limits.SubregionBlockPixelsSize);
> +
> +    if (avctx->width  != sps->pic_width_in_luma_samples ||
> +        avctx->height != sps->pic_height_in_luma_samples) {
> +        sps->conformance_window_flag = 1;
> +        sps->conf_win_left_offset   = 0;
> +        sps->conf_win_right_offset  =
> +            (sps->pic_width_in_luma_samples - avctx->width) >> desc->log2_chroma_w;
> +        sps->conf_win_top_offset    = 0;
> +        sps->conf_win_bottom_offset =
> +            (sps->pic_height_in_luma_samples - avctx->height) >> desc->log2_chroma_h;
> +    } else {
> +        sps->conformance_window_flag = 0;
> +    }
> +
> +    sps->bit_depth_luma_minus8   = bit_depth - 8;
> +    sps->bit_depth_chroma_minus8 = bit_depth - 8;
> +
> +    sps->log2_max_pic_order_cnt_lsb_minus4 = ctx->gop.pHEVCGroupOfPictures->log2_max_pic_order_cnt_lsb_minus4;
> +
> +    sps->sps_sub_layer_ordering_info_present_flag =
> +        vps->vps_sub_layer_ordering_info_present_flag;
> +    for (i = 0; i <= sps->sps_max_sub_layers_minus1; i++) {
> +        sps->sps_max_dec_pic_buffering_minus1[i] =
> +            vps->vps_max_dec_pic_buffering_minus1[i];
> +        sps->sps_max_num_reorder_pics[i] =
> +            vps->vps_max_num_reorder_pics[i];
> +        sps->sps_max_latency_increase_plus1[i] =
> +            vps->vps_max_latency_increase_plus1[i];
> +    }
> +
> +    sps->log2_min_luma_coding_block_size_minus3      = (uint8_t)(av_log2(min_cu_size) - 3);
> +    sps->log2_diff_max_min_luma_coding_block_size    = (uint8_t)(av_log2(max_cu_size) - av_log2(min_cu_size));
> +    sps->log2_min_luma_transform_block_size_minus2   = (uint8_t)(av_log2(min_tu_size) - 2);
> +    sps->log2_diff_max_min_luma_transform_block_size = (uint8_t)(av_log2(max_tu_size) - av_log2(min_tu_size));
> +
> +    sps->max_transform_hierarchy_depth_inter = ctx->codec_conf.pHEVCConfig->max_transform_hierarchy_depth_inter;
> +    sps->max_transform_hierarchy_depth_intra = ctx->codec_conf.pHEVCConfig->max_transform_hierarchy_depth_intra;
> +
> +    sps->amp_enabled_flag = !!(ctx->codec_conf.pHEVCConfig->ConfigurationFlags &
> +                               D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_USE_ASYMETRIC_MOTION_PARTITION);
> +    sps->sample_adaptive_offset_enabled_flag = !!(ctx->codec_conf.pHEVCConfig->ConfigurationFlags &
> +                                                  D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_ENABLE_SAO_FILTER);
> +    sps->sps_temporal_mvp_enabled_flag = 0;
> +    sps->pcm_enabled_flag = 0;
> +
> +    sps->vui_parameters_present_flag = 1;
> +
> +    // vui default parameters
> +    vui->aspect_ratio_idc                        = 0;
> +    vui->video_format                            = 5;
> +    vui->video_full_range_flag                   = 0;
> +    vui->colour_primaries                        = 2;
> +    vui->transfer_characteristics                = 2;
> +    vui->matrix_coefficients                     = 2;
> +    vui->chroma_sample_loc_type_top_field        = 0;
> +    vui->chroma_sample_loc_type_bottom_field     = 0;
> +    vui->tiles_fixed_structure_flag              = 0;
> +    vui->motion_vectors_over_pic_boundaries_flag = 1;
> +    vui->min_spatial_segmentation_idc            = 0;
> +    vui->max_bytes_per_pic_denom                 = 2;
> +    vui->max_bits_per_min_cu_denom               = 1;
> +    vui->log2_max_mv_length_horizontal           = 15;
> +    vui->log2_max_mv_length_vertical             = 15;

The input image might have different values. 

Thanks
Haihao


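As the comment above suggests, the colour-description fields shouldn't be hard-coded to "unspecified"; they could be derived from the input frame's properties (avctx->color_primaries, color_trc, colorspace, color_range). A minimal illustrative sketch of that mapping, with my own stand-in names rather than the actual AVCOL_* constants:

```c
#include <assert.h>

/* Stand-ins for AVColorRange values; 2 corresponds to full range
 * (AVCOL_RANGE_JPEG) in FFmpeg. */
enum { COL_RANGE_UNSPECIFIED, COL_RANGE_MPEG, COL_RANGE_JPEG };

/* H.265 VUI uses the value 2 for "unspecified" colour primaries,
 * transfer characteristics and matrix coefficients, so fall back to
 * it when the input carries no information (0 here). */
static int vui_colour_value(int input_value)
{
    return input_value > 0 ? input_value : 2;
}

/* video_full_range_flag is set only for full-range input. */
static int vui_full_range_flag(int input_range)
{
    return input_range == COL_RANGE_JPEG;
}
```

This is only a sketch of the shape of the fix the review asks for, not FFmpeg's actual tag-mapping code.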
> +
> +    // PPS
> +
> +    pps->nal_unit_header = (H265RawNALUnitHeader) {
> +        .nal_unit_type         = HEVC_NAL_PPS,
> +        .nuh_layer_id          = 0,
> +        .nuh_temporal_id_plus1 = 1,
> +    };
> +
> +    pps->pps_pic_parameter_set_id = 0;
> +    pps->pps_seq_parameter_set_id = sps->sps_seq_parameter_set_id;
> +
> +    pps->cabac_init_present_flag = 1;
> +
> +    pps->num_ref_idx_l0_default_active_minus1 = 0;
> +    pps->num_ref_idx_l1_default_active_minus1 = 0;
> +
> +    pps->init_qp_minus26 = 0;
> +
> +    pps->transform_skip_enabled_flag = !!(ctx->codec_conf.pHEVCConfig->ConfigurationFlags &
> +                                          D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_ENABLE_TRANSFORM_SKIPPING);
> +
> +    // cu_qp_delta always required to be 1 in https://github.com/microsoft/DirectX-Specs/blob/master/d3d/D3D12VideoEncoding.md
> +    pps->cu_qp_delta_enabled_flag = 1;
> +
> +    pps->diff_cu_qp_delta_depth   = 0;
> +
> +    pps->pps_slice_chroma_qp_offsets_present_flag = 1;
> +
> +    pps->tiles_enabled_flag = 0; // no tiling in D3D12
> +
> +    pps->pps_loop_filter_across_slices_enabled_flag = !(ctx->codec_conf.pHEVCConfig->ConfigurationFlags &
> +                                                        D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_DISABLE_LOOP_FILTER_ACROSS_SLICES);
> +    pps->deblocking_filter_control_present_flag = 1;
> +
> +    return 0;
> +}
> +
> +static int d3d12va_encode_hevc_get_encoder_caps(AVCodecContext *avctx)
> +{
> +    int i;
> +    HRESULT hr;
> +    uint8_t min_cu_size, max_cu_size;
> +    HWBaseEncodeContext *base_ctx = avctx->priv_data;
> +    D3D12VAEncodeContext     *ctx = avctx->priv_data;
> +    D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC *config;
> +    D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC hevc_caps;
> +
> +    D3D12_FEATURE_DATA_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT codec_caps = {
> +        .NodeIndex                   = 0,
> +        .Codec                       = D3D12_VIDEO_ENCODER_CODEC_HEVC,
> +        .Profile                     = ctx->profile->d3d12_profile,
> +        .CodecSupportLimits.DataSize = sizeof(D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC),
> +    };
> +
> +    for (i = 0; i < FF_ARRAY_ELEMS(hevc_config_support_sets); i++) {
> +        hevc_caps = hevc_config_support_sets[i];
> +        codec_caps.CodecSupportLimits.pHEVCSupport = &hevc_caps;
> +        hr = ID3D12VideoDevice3_CheckFeatureSupport(ctx->video_device3, D3D12_FEATURE_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT,
> +                                                    &codec_caps, sizeof(codec_caps));
> +        if (SUCCEEDED(hr) && codec_caps.IsSupported)
> +            break;
> +    }
> +
> +    if (i == FF_ARRAY_ELEMS(hevc_config_support_sets)) {
> +        av_log(avctx, AV_LOG_ERROR, "Unsupported codec configuration\n");
> +        return AVERROR(EINVAL);
> +    }
> +
> +    ctx->codec_conf.DataSize = sizeof(D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC);
> +    ctx->codec_conf.pHEVCConfig = av_mallocz(ctx->codec_conf.DataSize);
> +    if (!ctx->codec_conf.pHEVCConfig)
> +        return AVERROR(ENOMEM);
> +
> +    config = ctx->codec_conf.pHEVCConfig;
> +
> +    config->ConfigurationFlags                  = D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_NONE;
> +    config->MinLumaCodingUnitSize               = hevc_caps.MinLumaCodingUnitSize;
> +    config->MaxLumaCodingUnitSize               = hevc_caps.MaxLumaCodingUnitSize;
> +    config->MinLumaTransformUnitSize            = hevc_caps.MinLumaTransformUnitSize;
> +    config->MaxLumaTransformUnitSize            = hevc_caps.MaxLumaTransformUnitSize;
> +    config->max_transform_hierarchy_depth_inter = hevc_caps.max_transform_hierarchy_depth_inter;
> +    config->max_transform_hierarchy_depth_intra = hevc_caps.max_transform_hierarchy_depth_intra;
> +
> +    if (hevc_caps.SupportFlags & D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_ASYMETRIC_MOTION_PARTITION_SUPPORT ||
> +        hevc_caps.SupportFlags & D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_ASYMETRIC_MOTION_PARTITION_REQUIRED)
> +        config->ConfigurationFlags |= D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_USE_ASYMETRIC_MOTION_PARTITION;
> +
> +    if (hevc_caps.SupportFlags & D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_SAO_FILTER_SUPPORT)
> +        config->ConfigurationFlags |= D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_ENABLE_SAO_FILTER;
> +
> +    if (hevc_caps.SupportFlags & D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_DISABLING_LOOP_FILTER_ACROSS_SLICES_SUPPORT)
> +        config->ConfigurationFlags |= D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_DISABLE_LOOP_FILTER_ACROSS_SLICES;
> +
> +    if (hevc_caps.SupportFlags & D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_TRANSFORM_SKIP_SUPPORT)
> +        config->ConfigurationFlags |= D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_HEVC_FLAG_ENABLE_TRANSFORM_SKIPPING;
> +
> +    if (hevc_caps.SupportFlags & D3D12_VIDEO_ENCODER_CODEC_CONFIGURATION_SUPPORT_HEVC_FLAG_P_FRAMES_IMPLEMENTED_AS_LOW_DELAY_B_FRAMES)
> +        ctx->bi_not_empty = 1;
> +
> +    // block sizes
> +    min_cu_size = d3d12va_encode_hevc_map_cusize(hevc_caps.MinLumaCodingUnitSize);
> +    max_cu_size = d3d12va_encode_hevc_map_cusize(hevc_caps.MaxLumaCodingUnitSize);
> +
> +    av_log(avctx, AV_LOG_VERBOSE, "Using CTU size %dx%d, "
> +           "min CB size %dx%d.\n", max_cu_size, max_cu_size,
> +           min_cu_size, min_cu_size);
> +
> +    base_ctx->surface_width  = FFALIGN(avctx->width,  min_cu_size);
> +    base_ctx->surface_height = FFALIGN(avctx->height, min_cu_size);
> +
> +    return 0;
> +}
> +
> +static int d3d12va_encode_hevc_configure(AVCodecContext *avctx)
> +{
> +    HWBaseEncodeContext  *base_ctx = avctx->priv_data;
> +    D3D12VAEncodeContext      *ctx = avctx->priv_data;
> +    D3D12VAEncodeHEVCContext *priv = avctx->priv_data;
> +    int fixed_qp_idr, fixed_qp_p, fixed_qp_b;
> +    int err;
> +
> +    err = ff_cbs_init(&priv->cbc, AV_CODEC_ID_HEVC, avctx);
> +    if (err < 0)
> +        return err;
> +
> +    // Rate control
> +    if (ctx->rc.Mode == D3D12_VIDEO_ENCODER_RATE_CONTROL_MODE_CQP) {
> +        D3D12_VIDEO_ENCODER_RATE_CONTROL_CQP *cqp_ctl;
> +        fixed_qp_p = av_clip(base_ctx->rc_quality, 1, 51);
> +        if (avctx->i_quant_factor > 0.0)
> +            fixed_qp_idr = av_clip((avctx->i_quant_factor * fixed_qp_p +
> +                                    avctx->i_quant_offset) + 0.5, 1, 51);
> +        else
> +            fixed_qp_idr = fixed_qp_p;
> +        if (avctx->b_quant_factor > 0.0)
> +            fixed_qp_b = av_clip((avctx->b_quant_factor * fixed_qp_p +
> +                                  avctx->b_quant_offset) + 0.5, 1, 51);
> +        else
> +            fixed_qp_b = fixed_qp_p;
> +
> +        av_log(avctx, AV_LOG_DEBUG, "Using fixed QP = "
> +               "%d / %d / %d for IDR- / P- / B-frames.\n",
> +               fixed_qp_idr, fixed_qp_p, fixed_qp_b);
> +
> +        ctx->rc.ConfigParams.DataSize = sizeof(D3D12_VIDEO_ENCODER_RATE_CONTROL_CQP);
> +        cqp_ctl = av_mallocz(ctx->rc.ConfigParams.DataSize);
> +        if (!cqp_ctl)
> +            return AVERROR(ENOMEM);
> +
> +        cqp_ctl->ConstantQP_FullIntracodedFrame                  = fixed_qp_idr;
> +        cqp_ctl->ConstantQP_InterPredictedFrame_PrevRefOnly      = fixed_qp_p;
> +        cqp_ctl->ConstantQP_InterPredictedFrame_BiDirectionalRef = fixed_qp_b;
> +
> +        ctx->rc.ConfigParams.pConfiguration_CQP = cqp_ctl;
> +    }
> +
> +    // GOP
> +    ctx->gop.DataSize = sizeof(D3D12_VIDEO_ENCODER_SEQUENCE_GOP_STRUCTURE_HEVC);
> +    ctx->gop.pHEVCGroupOfPictures = av_mallocz(ctx->gop.DataSize);
> +    if (!ctx->gop.pHEVCGroupOfPictures)
> +        return AVERROR(ENOMEM);
> +
> +    ctx->gop.pHEVCGroupOfPictures->GOPLength      = base_ctx->gop_size;
> +    ctx->gop.pHEVCGroupOfPictures->PPicturePeriod = base_ctx->b_per_p + 1;
> +    // Power of 2
> +    if ((base_ctx->gop_size & (base_ctx->gop_size - 1)) == 0)
> +        ctx->gop.pHEVCGroupOfPictures->log2_max_pic_order_cnt_lsb_minus4 =
> +            FFMAX(av_log2(base_ctx->gop_size) - 4, 0);
> +    else
> +        ctx->gop.pHEVCGroupOfPictures->log2_max_pic_order_cnt_lsb_minus4 =
> +            FFMAX(av_log2(base_ctx->gop_size) - 3, 0);
> +
> +    return 0;
> +}
> +
> +static int d3d12va_encode_hevc_set_level(AVCodecContext *avctx)
> +{
> +    D3D12VAEncodeContext      *ctx = avctx->priv_data;
> +    D3D12VAEncodeHEVCContext *priv = avctx->priv_data;
> +    int i;
> +
> +    ctx->level.DataSize = sizeof(D3D12_VIDEO_ENCODER_LEVEL_TIER_CONSTRAINTS_HEVC);
> +    ctx->level.pHEVCLevelSetting = av_mallocz(ctx->level.DataSize);
> +    if (!ctx->level.pHEVCLevelSetting)
> +        return AVERROR(ENOMEM);
> +
> +    for (i = 0; i < FF_ARRAY_ELEMS(hevc_levels); i++) {
> +        if (avctx->level == hevc_levels[i].level) {
> +            ctx->level.pHEVCLevelSetting->Level = hevc_levels[i].d3d12_level;
> +            break;
> +        }
> +    }
> +
> +    if (i == FF_ARRAY_ELEMS(hevc_levels)) {
> +        av_log(avctx, AV_LOG_ERROR, "Invalid level %d.\n", avctx->level);
> +        return AVERROR(EINVAL);
> +    }
> +
> +    ctx->level.pHEVCLevelSetting->Tier = priv->raw_vps.profile_tier_level.general_tier_flag == 0 ?
> +                                         D3D12_VIDEO_ENCODER_TIER_HEVC_MAIN :
> +                                         D3D12_VIDEO_ENCODER_TIER_HEVC_HIGH;
> +
> +    return 0;
> +}
> +
> +static void d3d12va_encode_hevc_free_picture_params(D3D12VAEncodePicture *pic)
> +{
> +    if (!pic->pic_ctl.pHEVCPicData)
> +        return;
> +
> +    av_freep(&pic->pic_ctl.pHEVCPicData->pList0ReferenceFrames);
> +    av_freep(&pic->pic_ctl.pHEVCPicData->pList1ReferenceFrames);
> +    av_freep(&pic->pic_ctl.pHEVCPicData->pReferenceFramesReconPictureDescriptors);
> +    av_freep(&pic->pic_ctl.pHEVCPicData);
> +}
> +
> +static int d3d12va_encode_hevc_init_picture_params(AVCodecContext *avctx,
> +                                                   D3D12VAEncodePicture *pic)
> +{
> +    HWBaseEncodePicture                             *base_pic = (HWBaseEncodePicture *)pic;
> +    D3D12VAEncodeHEVCPicture                            *hpic = base_pic->priv_data;
> +    HWBaseEncodePicture                                 *prev = base_pic->prev;
> +    D3D12VAEncodeHEVCPicture                           *hprev = prev ? prev->priv_data : NULL;
> +    D3D12_VIDEO_ENCODER_REFERENCE_PICTURE_DESCRIPTOR_HEVC *pd = NULL;
> +    UINT                                           *ref_list0 = NULL, *ref_list1 = NULL;
> +    int i, idx = 0;
> +
> +    pic->pic_ctl.DataSize = sizeof(D3D12_VIDEO_ENCODER_PICTURE_CONTROL_CODEC_DATA_HEVC);
> +    pic->pic_ctl.pHEVCPicData = av_mallocz(pic->pic_ctl.DataSize);
> +    if (!pic->pic_ctl.pHEVCPicData)
> +        return AVERROR(ENOMEM);
> +
> +    if (base_pic->type == PICTURE_TYPE_IDR) {
> +        av_assert0(base_pic->display_order == base_pic->encode_order);
> +        hpic->last_idr_frame = base_pic->display_order;
> +    } else {
> +        av_assert0(prev);
> +        hpic->last_idr_frame = hprev->last_idr_frame;
> +    }
> +    hpic->pic_order_cnt = base_pic->display_order - hpic->last_idr_frame;
> +
> +    switch (base_pic->type) {
> +        case PICTURE_TYPE_IDR:
> +            pic->pic_ctl.pHEVCPicData->FrameType = D3D12_VIDEO_ENCODER_FRAME_TYPE_HEVC_IDR_FRAME;
> +            break;
> +        case PICTURE_TYPE_I:
> +            pic->pic_ctl.pHEVCPicData->FrameType = D3D12_VIDEO_ENCODER_FRAME_TYPE_HEVC_I_FRAME;
> +            break;
> +        case PICTURE_TYPE_P:
> +            pic->pic_ctl.pHEVCPicData->FrameType = D3D12_VIDEO_ENCODER_FRAME_TYPE_HEVC_P_FRAME;
> +            break;
> +        case PICTURE_TYPE_B:
> +            pic->pic_ctl.pHEVCPicData->FrameType = D3D12_VIDEO_ENCODER_FRAME_TYPE_HEVC_B_FRAME;
> +            break;
> +        default:
> +            av_assert0(0 && "invalid picture type");
> +    }
> +
> +    pic->pic_ctl.pHEVCPicData->slice_pic_parameter_set_id = 0;
> +    pic->pic_ctl.pHEVCPicData->PictureOrderCountNumber    = hpic->pic_order_cnt;
> +
> +    if (base_pic->type == PICTURE_TYPE_P || base_pic->type == PICTURE_TYPE_B) {
> +        pd = av_calloc(MAX_PICTURE_REFERENCES, sizeof(*pd));
> +        if (!pd)
> +            return AVERROR(ENOMEM);
> +
> +        ref_list0 = av_calloc(MAX_PICTURE_REFERENCES, sizeof(*ref_list0));
> +        if (!ref_list0)
> +            return AVERROR(ENOMEM);
> +
> +        pic->pic_ctl.pHEVCPicData->List0ReferenceFramesCount = base_pic->nb_refs[0];
> +        for (i = 0; i < base_pic->nb_refs[0]; i++) {
> +            HWBaseEncodePicture      *ref = base_pic->refs[0][i];
> +            D3D12VAEncodeHEVCPicture *href;
> +
> +            av_assert0(ref && ref->encode_order < base_pic->encode_order);
> +            href = ref->priv_data;
> +
> +            ref_list0[i] = idx;
> +            pd[idx].ReconstructedPictureResourceIndex = idx;
> +            pd[idx].IsRefUsedByCurrentPic = TRUE;
> +            pd[idx].PictureOrderCountNumber = href->pic_order_cnt;
> +            idx++;
> +        }
> +    }
> +
> +    if (base_pic->type == PICTURE_TYPE_B) {
> +        ref_list1 = av_calloc(MAX_PICTURE_REFERENCES, sizeof(*ref_list1));
> +        if (!ref_list1)
> +            return AVERROR(ENOMEM);
> +
> +        pic->pic_ctl.pHEVCPicData->List1ReferenceFramesCount = base_pic->nb_refs[1];
> +        for (i = 0; i < base_pic->nb_refs[1]; i++) {
> +            HWBaseEncodePicture      *ref = base_pic->refs[1][i];
> +            D3D12VAEncodeHEVCPicture *href;
> +
> +            av_assert0(ref && ref->encode_order < base_pic->encode_order);
> +            href = ref->priv_data;
> +
> +            ref_list1[i] = idx;
> +            pd[idx].ReconstructedPictureResourceIndex = idx;
> +            pd[idx].IsRefUsedByCurrentPic = TRUE;
> +            pd[idx].PictureOrderCountNumber = href->pic_order_cnt;
> +            idx++;
> +        }
> +    }
> +
> +    pic->pic_ctl.pHEVCPicData->pList0ReferenceFrames = ref_list0;
> +    pic->pic_ctl.pHEVCPicData->pList1ReferenceFrames = ref_list1;
> +    pic->pic_ctl.pHEVCPicData->ReferenceFramesReconPictureDescriptorsCount = idx;
> +    pic->pic_ctl.pHEVCPicData->pReferenceFramesReconPictureDescriptors = pd;
> +
> +    return 0;
> +}
> +
> +static const D3D12VAEncodeType d3d12va_encode_type_hevc = {
> +    .profiles               = d3d12va_encode_hevc_profiles,
> +
> +    .d3d12_codec            = D3D12_VIDEO_ENCODER_CODEC_HEVC,
> +
> +    .flags                  = FLAG_B_PICTURES |
> +                              FLAG_B_PICTURE_REFERENCES |
> +                              FLAG_NON_IDR_KEY_PICTURES,
> +
> +    .default_quality        = 25,
> +
> +    .get_encoder_caps       = &d3d12va_encode_hevc_get_encoder_caps,
> +
> +    .configure              = &d3d12va_encode_hevc_configure,
> +
> +    .set_level              = &d3d12va_encode_hevc_set_level,
> +
> +    .picture_priv_data_size = sizeof(D3D12VAEncodeHEVCPicture),
> +
> +    .init_sequence_params   = &d3d12va_encode_hevc_init_sequence_params,
> +
> +    .init_picture_params    = &d3d12va_encode_hevc_init_picture_params,
> +
> +    .free_picture_params    = &d3d12va_encode_hevc_free_picture_params,
> +
> +    .write_sequence_header  = &d3d12va_encode_hevc_write_sequence_header,
> +};
> +
> +static int d3d12va_encode_hevc_init(AVCodecContext *avctx)
> +{
> +    HWBaseEncodeContext  *base_ctx = avctx->priv_data;
> +    D3D12VAEncodeContext      *ctx = avctx->priv_data;
> +    D3D12VAEncodeHEVCContext *priv = avctx->priv_data;
> +
> +    ctx->codec = &d3d12va_encode_type_hevc;
> +
> +    if (avctx->profile == AV_PROFILE_UNKNOWN)
> +        avctx->profile = priv->profile;
> +    if (avctx->level == FF_LEVEL_UNKNOWN)
> +        avctx->level = priv->level;
> +
> +    if (avctx->level != FF_LEVEL_UNKNOWN && avctx->level & ~0xff) {
> +        av_log(avctx, AV_LOG_ERROR, "Invalid level %d: must fit "
> +               "in 8-bit unsigned integer.\n", avctx->level);
> +        return AVERROR(EINVAL);
> +    }
> +
> +    if (priv->qp > 0)
> +        base_ctx->explicit_qp = priv->qp;
> +
> +    return ff_d3d12va_encode_init(avctx);
> +}
> +
> +static int d3d12va_encode_hevc_close(AVCodecContext *avctx)
> +{
> +    D3D12VAEncodeHEVCContext *priv = avctx->priv_data;
> +
> +    ff_cbs_fragment_free(&priv->current_access_unit);
> +    ff_cbs_close(&priv->cbc);
> +
> +    av_freep(&priv->common.codec_conf.pHEVCConfig);
> +    av_freep(&priv->common.gop.pHEVCGroupOfPictures);
> +    av_freep(&priv->common.level.pHEVCLevelSetting);
> +
> +    return ff_d3d12va_encode_close(avctx);
> +}
> +
> +#define OFFSET(x) offsetof(D3D12VAEncodeHEVCContext, x)
> +#define FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM)
> +static const AVOption d3d12va_encode_hevc_options[] = {
> +    HW_BASE_ENCODE_COMMON_OPTIONS,
> +    D3D12VA_ENCODE_RC_OPTIONS,
> +
> +    { "qp", "Constant QP (for P-frames; scaled by qfactor/qoffset for I/B)",
> +      OFFSET(qp), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, 52, FLAGS },
> +
> +    { "profile", "Set profile (general_profile_idc)",
> +      OFFSET(profile), AV_OPT_TYPE_INT,
> +      { .i64 = AV_PROFILE_UNKNOWN }, AV_PROFILE_UNKNOWN, 0xff, FLAGS, "profile" },
> +
> +#define PROFILE(name, value)  name, NULL, 0, AV_OPT_TYPE_CONST, \
> +      { .i64 = value }, 0, 0, FLAGS, "profile"
> +    { PROFILE("main",               AV_PROFILE_HEVC_MAIN) },
> +    { PROFILE("main10",             AV_PROFILE_HEVC_MAIN_10) },
> +    { PROFILE("rext",               AV_PROFILE_HEVC_REXT) },
> +#undef PROFILE
> +
> +    { "tier", "Set tier (general_tier_flag)",
> +      OFFSET(tier), AV_OPT_TYPE_INT,
> +      { .i64 = 0 }, 0, 1, FLAGS, "tier" },
> +    { "main", NULL, 0, AV_OPT_TYPE_CONST,
> +      { .i64 = 0 }, 0, 0, FLAGS, "tier" },
> +    { "high", NULL, 0, AV_OPT_TYPE_CONST,
> +      { .i64 = 1 }, 0, 0, FLAGS, "tier" },
> +
> +    { "level", "Set level (general_level_idc)",
> +      OFFSET(level), AV_OPT_TYPE_INT,
> +      { .i64 = FF_LEVEL_UNKNOWN }, FF_LEVEL_UNKNOWN, 0xff, FLAGS, "level" },
> +
> +#define LEVEL(name, value) name, NULL, 0, AV_OPT_TYPE_CONST, \
> +      { .i64 = value }, 0, 0, FLAGS, "level"
> +    { LEVEL("1",    30) },
> +    { LEVEL("2",    60) },
> +    { LEVEL("2.1",  63) },
> +    { LEVEL("3",    90) },
> +    { LEVEL("3.1",  93) },
> +    { LEVEL("4",   120) },
> +    { LEVEL("4.1", 123) },
> +    { LEVEL("5",   150) },
> +    { LEVEL("5.1", 153) },
> +    { LEVEL("5.2", 156) },
> +    { LEVEL("6",   180) },
> +    { LEVEL("6.1", 183) },
> +    { LEVEL("6.2", 186) },
> +#undef LEVEL
> +
> +    { NULL },
> +};
> +
> +static const FFCodecDefault d3d12va_encode_hevc_defaults[] = {
> +    { "b",              "0"   },
> +    { "bf",             "2"   },
> +    { "g",              "120" },
> +    { "i_qfactor",      "1"   },
> +    { "i_qoffset",      "0"   },
> +    { "b_qfactor",      "1"   },
> +    { "b_qoffset",      "0"   },
> +    { "qmin",           "-1"  },
> +    { "qmax",           "-1"  },
> +    { NULL },
> +};
> +
> +static const AVClass d3d12va_encode_hevc_class = {
> +    .class_name = "hevc_d3d12va",
> +    .item_name  = av_default_item_name,
> +    .option     = d3d12va_encode_hevc_options,
> +    .version    = LIBAVUTIL_VERSION_INT,
> +};
> +
> +const FFCodec ff_hevc_d3d12va_encoder = {
> +    .p.name         = "hevc_d3d12va",
> +    CODEC_LONG_NAME("D3D12VA HEVC encoder"),
> +    .p.type         = AVMEDIA_TYPE_VIDEO,
> +    .p.id           = AV_CODEC_ID_HEVC,
> +    .priv_data_size = sizeof(D3D12VAEncodeHEVCContext),
> +    .init           = &d3d12va_encode_hevc_init,
> +    FF_CODEC_RECEIVE_PACKET_CB(&ff_hw_base_encode_receive_packet),
> +    .close          = &d3d12va_encode_hevc_close,
> +    .p.priv_class   = &d3d12va_encode_hevc_class,
> +    .p.capabilities = AV_CODEC_CAP_DELAY | AV_CODEC_CAP_HARDWARE |
> +                      AV_CODEC_CAP_DR1 | AV_CODEC_CAP_ENCODER_REORDERED_OPAQUE,
> +    .caps_internal  = FF_CODEC_CAP_NOT_INIT_THREADSAFE |
> +                      FF_CODEC_CAP_INIT_CLEANUP,
> +    .defaults       = d3d12va_encode_hevc_defaults,
> +    .p.pix_fmts     = (const enum AVPixelFormat[]) {
> +        AV_PIX_FMT_D3D12,
> +        AV_PIX_FMT_NONE,
> +    },
> +    .hw_configs     = ff_d3d12va_encode_hw_configs,
> +    .p.wrapper_name = "d3d12va",
> +};

_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2024-04-15  8:43 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-03-14  8:14 [FFmpeg-devel] [PATCH v7 01/12] avcodec/vaapi_encode: move pic->input_surface initialization to encode_alloc tong1.wu-at-intel.com
2024-03-14  8:14 ` [FFmpeg-devel] [PATCH v7 02/12] avcodec/vaapi_encode: introduce a base layer for vaapi encode tong1.wu-at-intel.com
2024-04-15  7:29   ` Xiang, Haihao
2024-03-14  8:14 ` [FFmpeg-devel] [PATCH v7 03/12] avcodec/vaapi_encode: move the dpb logic from VAAPI to base layer tong1.wu-at-intel.com
2024-03-14  8:14 ` [FFmpeg-devel] [PATCH v7 04/12] avcodec/vaapi_encode: extract an init function " tong1.wu-at-intel.com
2024-03-14  8:14 ` [FFmpeg-devel] [PATCH v7 05/12] avcodec/vaapi_encode: extract a close function for " tong1.wu-at-intel.com
2024-03-14  8:14 ` [FFmpeg-devel] [PATCH v7 06/12] avcodec/vaapi_encode: extract set_output_property to " tong1.wu-at-intel.com
2024-03-14  8:14 ` [FFmpeg-devel] [PATCH v7 07/12] avcodec/vaapi_encode: extract gop configuration " tong1.wu-at-intel.com
2024-03-14  8:14 ` [FFmpeg-devel] [PATCH v7 08/12] avcodec/vaapi_encode: extract a get_recon_format function " tong1.wu-at-intel.com
2024-03-14  8:14 ` [FFmpeg-devel] [PATCH v7 09/12] avcodec/vaapi_encode: extract a free function " tong1.wu-at-intel.com
2024-03-14  8:14 ` [FFmpeg-devel] [PATCH v7 10/12] avutil/hwcontext_d3d12va: add Flags for resource creation tong1.wu-at-intel.com
2024-03-14  8:14 ` [FFmpeg-devel] [PATCH v7 11/12] avcodec: add D3D12VA hardware HEVC encoder tong1.wu-at-intel.com
2024-03-28  2:35   ` Wu, Tong1
2024-04-15  8:42   ` Xiang, Haihao
2024-03-14  8:14 ` [FFmpeg-devel] [PATCH v7 12/12] Changelog: add D3D12VA HEVC encoder changelog tong1.wu-at-intel.com


This inbox may be cloned and mirrored by anyone:

	git clone --mirror https://master.gitmailbox.com/ffmpegdev/0 ffmpegdev/git/0.git

	# If you have public-inbox 1.1+ installed, you may
	# initialize and index your mirror using the following commands:
	public-inbox-init -V2 ffmpegdev ffmpegdev/ https://master.gitmailbox.com/ffmpegdev \
		ffmpegdev@gitmailbox.com
	public-inbox-index ffmpegdev

Example config snippet for mirrors.


AGPL code for this site: git clone https://public-inbox.org/public-inbox.git