[FFmpeg-devel] [PATCH 2/3] libavcodec: add NETINT Quadra HW decoders & encoders
From: Steven Zhou @ 2025-07-02 8:11 UTC
To: ffmpeg-devel
Add NETINT Quadra hardware video decoder and encoder codecs:
h264_ni_quadra_dec, h265_ni_quadra_dec, jpeg_ni_quadra_dec,
vp9_ni_quadra_dec, h264_ni_quadra_enc, h265_ni_quadra_enc,
jpeg_ni_quadra_enc, and av1_ni_quadra_enc.
More information:
https://netint.com/products/quadra-t1a-video-processing-unit/
https://docs.netint.com/vpu/quadra/
Signed-off-by: Steven Zhou <steven.zhou@netint.ca>
---
 configure               |    8 +
 libavcodec/Makefile     |    9 +
 libavcodec/allcodecs.c  |    8 +
 libavcodec/nicodec.c    | 1392 ++++++++++++++++++
 libavcodec/nicodec.h    |  215 +++
 libavcodec/nidec.c      |  539 +++++++
 libavcodec/nidec.h      |   86 ++
 libavcodec/nidec_h264.c |   73 +
 libavcodec/nidec_hevc.c |   73 +
 libavcodec/nidec_jpeg.c |   68 +
 libavcodec/nidec_vp9.c  |   72 +
 libavcodec/nienc.c      | 3009 +++++++++++++++++++++++++++++++++++++++
 libavcodec/nienc.h      |  114 ++
 libavcodec/nienc_av1.c  |   51 +
 libavcodec/nienc_h264.c |   52 +
 libavcodec/nienc_hevc.c |   52 +
 libavcodec/nienc_jpeg.c |   48 +
 17 files changed, 5869 insertions(+)
create mode 100644 libavcodec/nicodec.c
create mode 100644 libavcodec/nicodec.h
create mode 100644 libavcodec/nidec.c
create mode 100644 libavcodec/nidec.h
create mode 100644 libavcodec/nidec_h264.c
create mode 100644 libavcodec/nidec_hevc.c
create mode 100644 libavcodec/nidec_jpeg.c
create mode 100644 libavcodec/nidec_vp9.c
create mode 100644 libavcodec/nienc.c
create mode 100644 libavcodec/nienc.h
create mode 100644 libavcodec/nienc_av1.c
create mode 100644 libavcodec/nienc_h264.c
create mode 100644 libavcodec/nienc_hevc.c
create mode 100644 libavcodec/nienc_jpeg.c
diff --git a/configure b/configure
index ca15d675d4..0a2dda84c9 100755
--- a/configure
+++ b/configure
@@ -3643,6 +3643,14 @@ libx264_encoder_select="atsc_a53 golomb"
libx264rgb_encoder_deps="libx264"
libx264rgb_encoder_select="libx264_encoder"
libx265_encoder_deps="libx265"
libx265_encoder_select="atsc_a53 dovi_rpuenc"
+h264_ni_quadra_decoder_deps="ni_quadra"
+h265_ni_quadra_decoder_deps="ni_quadra"
+jpeg_ni_quadra_decoder_deps="ni_quadra"
+vp9_ni_quadra_decoder_deps="ni_quadra"
+av1_ni_quadra_encoder_deps="ni_quadra"
+h264_ni_quadra_encoder_deps="ni_quadra"
+h265_ni_quadra_encoder_deps="ni_quadra"
+jpeg_ni_quadra_encoder_deps="ni_quadra"
libxavs_encoder_deps="libxavs"
libxavs2_encoder_deps="libxavs2"
diff --git a/libavcodec/Makefile b/libavcodec/Makefile
index 7f963e864d..0072f1d562 100644
--- a/libavcodec/Makefile
+++ b/libavcodec/Makefile
@@ -247,6 +247,7 @@ OBJS-$(CONFIG_APNG_ENCODER) += png.o pngenc.o
OBJS-$(CONFIG_APV_DECODER) += apv_decode.o apv_entropy.o apv_dsp.o
OBJS-$(CONFIG_ARBC_DECODER) += arbc.o
OBJS-$(CONFIG_ARGO_DECODER) += argo.o
+OBJS-$(CONFIG_AV1_NI_QUADRA_ENCODER) += nienc_av1.o nicodec.o nienc.o
OBJS-$(CONFIG_SSA_DECODER) += assdec.o ass.o
OBJS-$(CONFIG_SSA_ENCODER) += assenc.o ass.o
OBJS-$(CONFIG_ASS_DECODER) += assdec.o ass.o
@@ -427,6 +428,8 @@ OBJS-$(CONFIG_H264_MEDIACODEC_DECODER) += mediacodecdec.o
OBJS-$(CONFIG_H264_MEDIACODEC_ENCODER) += mediacodecenc.o
OBJS-$(CONFIG_H264_MF_ENCODER) += mfenc.o mf_utils.o
OBJS-$(CONFIG_H264_MMAL_DECODER) += mmaldec.o
+OBJS-$(CONFIG_H264_NI_QUADRA_DECODER) += nidec_h264.o nicodec.o nidec.o
+OBJS-$(CONFIG_H264_NI_QUADRA_ENCODER) += nienc_h264.o nicodec.o nienc.o
OBJS-$(CONFIG_H264_NVENC_ENCODER) += nvenc_h264.o nvenc.o
OBJS-$(CONFIG_H264_OMX_ENCODER) += omx.o
OBJS-$(CONFIG_H264_QSV_DECODER) += qsvdec.o
@@ -455,6 +458,8 @@ OBJS-$(CONFIG_HEVC_D3D12VA_ENCODER) += d3d12va_encode_hevc.o h265_profile_lev
OBJS-$(CONFIG_HEVC_MEDIACODEC_DECODER) += mediacodecdec.o
OBJS-$(CONFIG_HEVC_MEDIACODEC_ENCODER) += mediacodecenc.o
OBJS-$(CONFIG_HEVC_MF_ENCODER) += mfenc.o mf_utils.o
+OBJS-$(CONFIG_H265_NI_QUADRA_DECODER) += nidec_hevc.o nicodec.o nidec.o
+OBJS-$(CONFIG_H265_NI_QUADRA_ENCODER) += nienc_hevc.o nicodec.o nienc.o
OBJS-$(CONFIG_HEVC_NVENC_ENCODER) += nvenc_hevc.o nvenc.o
OBJS-$(CONFIG_HEVC_QSV_DECODER) += qsvdec.o
OBJS-$(CONFIG_HEVC_QSV_ENCODER) += qsvenc_hevc.o hevc/ps_enc.o
@@ -495,6 +500,8 @@ OBJS-$(CONFIG_JPEG2000_DECODER) += jpeg2000dec.o jpeg2000.o jpeg2000dsp.o
jpeg2000dwt.o mqcdec.o mqc.o jpeg2000htdec.o
OBJS-$(CONFIG_JPEGLS_DECODER) += jpeglsdec.o jpegls.o
OBJS-$(CONFIG_JPEGLS_ENCODER) += jpeglsenc.o jpegls.o
+OBJS-$(CONFIG_JPEG_NI_QUADRA_DECODER) += nidec_jpeg.o nicodec.o nidec.o
+OBJS-$(CONFIG_JPEG_NI_QUADRA_ENCODER) += nienc_jpeg.o nicodec.o nienc.o
OBJS-$(CONFIG_JV_DECODER) += jvdec.o
OBJS-$(CONFIG_KGV1_DECODER) += kgv1dec.o
OBJS-$(CONFIG_KMVC_DECODER) += kmvc.o
@@ -806,6 +813,7 @@ OBJS-$(CONFIG_VP9_AMF_DECODER) += amfdec.o
OBJS-$(CONFIG_VP9_CUVID_DECODER) += cuviddec.o
OBJS-$(CONFIG_VP9_MEDIACODEC_DECODER) += mediacodecdec.o
OBJS-$(CONFIG_VP9_MEDIACODEC_ENCODER) += mediacodecenc.o
+OBJS-$(CONFIG_VP9_NI_QUADRA_DECODER) += nidec_vp9.o nicodec.o nidec.o
OBJS-$(CONFIG_VP9_RKMPP_DECODER) += rkmppdec.o
OBJS-$(CONFIG_VP9_VAAPI_ENCODER) += vaapi_encode_vp9.o
OBJS-$(CONFIG_VP9_QSV_ENCODER) += qsvenc_vp9.o
@@ -1311,6 +1319,7 @@ SKIPHEADERS-$(CONFIG_LIBVPX) += libvpx.h
SKIPHEADERS-$(CONFIG_LIBWEBP_ENCODER) += libwebpenc_common.h
SKIPHEADERS-$(CONFIG_MEDIACODEC) += mediacodecdec_common.h mediacodec_surface.h mediacodec_wrapper.h mediacodec_sw_buffer.h
SKIPHEADERS-$(CONFIG_MEDIAFOUNDATION) += mf_utils.h
+SKIPHEADERS-$(CONFIG_NI_QUADRA) += nidec.h nicodec.h nienc.h ni_hevc_rbsp.h
SKIPHEADERS-$(CONFIG_NVDEC) += nvdec.h
SKIPHEADERS-$(CONFIG_NVENC) += nvenc.h
SKIPHEADERS-$(CONFIG_QSV) += qsv.h qsv_internal.h
diff --git a/libavcodec/allcodecs.c b/libavcodec/allcodecs.c
index 9087b16895..91a78ebbc0 100644
--- a/libavcodec/allcodecs.c
+++ b/libavcodec/allcodecs.c
@@ -841,6 +841,14 @@ extern const FFCodec ff_amrwb_mediacodec_decoder;
extern const FFCodec ff_h263_v4l2m2m_encoder;
extern const FFCodec ff_libaom_av1_decoder;
+extern const FFCodec ff_h264_ni_quadra_decoder;
+extern const FFCodec ff_h265_ni_quadra_decoder;
+extern const FFCodec ff_vp9_ni_quadra_decoder;
+extern const FFCodec ff_jpeg_ni_quadra_decoder;
+extern const FFCodec ff_h264_ni_quadra_encoder;
+extern const FFCodec ff_h265_ni_quadra_encoder;
+extern const FFCodec ff_av1_ni_quadra_encoder;
+extern const FFCodec ff_jpeg_ni_quadra_encoder;
/* hwaccel hooks only, so prefer external decoders */
extern const FFCodec ff_av1_decoder;
extern const FFCodec ff_av1_cuvid_decoder;
extern const FFCodec ff_av1_mediacodec_decoder;
diff --git a/libavcodec/nicodec.c b/libavcodec/nicodec.c
new file mode 100644
index 0000000000..af2baea74d
--- /dev/null
+++ b/libavcodec/nicodec.c
@@ -0,0 +1,1392 @@
+/*
+ * XCoder Codec Lib Wrapper
+ * Copyright (c) 2018 NetInt
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+/**
+ * @file
+ * XCoder codec lib wrapper.
+ */
+
+#include "nicodec.h"
+#include "get_bits.h"
+#include "internal.h"
+#include "libavcodec/h264.h"
+#include "libavcodec/h264_sei.h"
+#include "libavcodec/hevc/hevc.h"
+#include "libavcodec/hevc/sei.h"
+
+#include "libavutil/eval.h"
+#include "libavutil/hdr_dynamic_metadata.h"
+#include "libavutil/hwcontext.h"
+#include "libavutil/hwcontext_ni_quad.h"
+#include "libavutil/imgutils.h"
+#include "libavutil/intreadwrite.h"
+#include "libavutil/mastering_display_metadata.h"
+#include "libavutil/mem.h"
+#include "libavutil/pixdesc.h"
+#include "nidec.h"
+
+#include <math.h>
+#include <ni_av_codec.h>
+#include <ni_rsrc_api.h>
+#include <ni_bitstream.h>
+
+#define NAL_264(X) ((X) & (0x1F))
+#define NAL_265(X) (((X) & 0x7E) >> 1)
+#define MAX_HEADERS_SIZE 1000
+
+static const char *const var_names[] = {
+ "in_w", "iw", ///< width of the input video
+ "in_h", "ih", ///< height of the input video
+ "out_w", "ow", ///< width of the cropped video
+ "out_h", "oh", ///< height of the cropped video
+ "x",
+ "y",
+ NULL
+};
+
+enum var_name {
+ VAR_IN_W, VAR_IW,
+ VAR_IN_H, VAR_IH,
+ VAR_OUT_W, VAR_OW,
+ VAR_OUT_H, VAR_OH,
+ VAR_X,
+ VAR_Y,
+ VAR_VARS_NB
+};
+
+static inline void ni_align_free(void *opaque, uint8_t *data)
+{
+ ni_buf_t *buf = (ni_buf_t *)opaque;
+ if (buf) {
+ ni_decoder_frame_buffer_pool_return_buf(buf, (ni_buf_pool_t *)buf->pool);
+ }
+}
+
+static inline void ni_frame_free(void *opaque, uint8_t *data)
+{
+ if (data) {
+ int ret;
+ int num_buffers = opaque ? *((int*)opaque) : 1;
+ for (int i = 0; i < num_buffers; i++) {
+ niFrameSurface1_t* p_data3 = (niFrameSurface1_t*)(data + i * sizeof(niFrameSurface1_t));
+ if (p_data3->ui16FrameIdx != 0) {
+ av_log(NULL, AV_LOG_DEBUG, "Recycle trace ui16FrameIdx = [%d] DevHandle %d\n", p_data3->ui16FrameIdx, p_data3->device_handle);
+ ret = ni_hwframe_buffer_recycle(p_data3, p_data3->device_handle);
+ if (ret != NI_RETCODE_SUCCESS) {
+ av_log(NULL, AV_LOG_ERROR, "ERROR Failed to recycle trace ui16frameidx = [%d] DevHandle %d\n", p_data3->ui16FrameIdx, p_data3->device_handle);
+ }
+ }
+ }
+ ni_aligned_free(data);
+ (void)data; // suppress cppcheck
+ }
+}
+
+static inline void __ni_free(void *opaque, uint8_t *data)
+{
+ free(data); // Free data allocated by libxcoder
+}
+
+static enum AVPixelFormat ni_supported_pixel_formats[] =
+{
+ AV_PIX_FMT_YUV420P, //0
+ AV_PIX_FMT_YUV420P10LE,
+ AV_PIX_FMT_NV12,
+ AV_PIX_FMT_P010LE,
+ AV_PIX_FMT_NONE, // RGB formats not mapped (unused)
+ AV_PIX_FMT_NONE,
+ AV_PIX_FMT_NONE,
+ AV_PIX_FMT_NONE,
+ AV_PIX_FMT_NONE, //8
+ AV_PIX_FMT_NONE,
+ AV_PIX_FMT_NONE,
+ AV_PIX_FMT_NONE,
+ AV_PIX_FMT_NONE, //12
+ AV_PIX_FMT_NI_QUAD_8_TILE_4X4,
+ AV_PIX_FMT_NI_QUAD_10_TILE_4X4,
+ AV_PIX_FMT_NONE, //15
+};
+
+static inline int ni_pix_fmt_2_ff_pix_fmt(ni_pix_fmt_t pix_fmt)
+{
+ return ni_supported_pixel_formats[pix_fmt];
+}
+
+int parse_symbolic_decoder_param(XCoderDecContext *s) {
+ ni_decoder_input_params_t *pdec_param = &s->api_param.dec_input_params;
+ int i, ret;
+ double res;
+ double var_values[VAR_VARS_NB];
+
+ if (pdec_param == NULL) {
+ return AVERROR_INVALIDDATA;
+ }
+
+ for (i = 0; i < NI_MAX_NUM_OF_DECODER_OUTPUTS; i++) {
+ /*Set output width and height*/
+ var_values[VAR_IN_W] = var_values[VAR_IW] = pdec_param->crop_whxy[i][0];
+ var_values[VAR_IN_H] = var_values[VAR_IH] = pdec_param->crop_whxy[i][1];
+ var_values[VAR_OUT_W] = var_values[VAR_OW] = pdec_param->crop_whxy[i][0];
+ var_values[VAR_OUT_H] = var_values[VAR_OH] = pdec_param->crop_whxy[i][1];
+ if (pdec_param->cr_expr[i][0][0] && pdec_param->cr_expr[i][1][0]) {
+ if (av_expr_parse_and_eval(&res, pdec_param->cr_expr[i][0], var_names,
+ var_values, NULL, NULL, NULL, NULL, NULL, 0,
+ s) < 0) {
+ return AVERROR_INVALIDDATA;
+ }
+ var_values[VAR_OUT_W] = var_values[VAR_OW] = (double)floor(res);
+ if (av_expr_parse_and_eval(&res, pdec_param->cr_expr[i][1], var_names,
+ var_values, NULL, NULL, NULL, NULL, NULL, 0,
+ s) < 0) {
+ return AVERROR_INVALIDDATA;
+ }
+ var_values[VAR_OUT_H] = var_values[VAR_OH] = (double)floor(res);
+ /* evaluate again ow as it may depend on oh */
+ ret = av_expr_parse_and_eval(&res, pdec_param->cr_expr[i][0], var_names,
+ var_values, NULL, NULL, NULL, NULL, NULL,
+ 0, s);
+ if (ret < 0) {
+ return AVERROR_INVALIDDATA;
+ }
+ var_values[VAR_OUT_W] = var_values[VAR_OW] = (double)floor(res);
+ pdec_param->crop_whxy[i][0] = (int)var_values[VAR_OUT_W];
+ pdec_param->crop_whxy[i][1] = (int)var_values[VAR_OUT_H];
+ }
+ /*Set output crop offset X,Y*/
+ if (pdec_param->cr_expr[i][2][0]) {
+ ret = av_expr_parse_and_eval(&res, pdec_param->cr_expr[i][2], var_names,
+ var_values, NULL, NULL, NULL, NULL, NULL,
+ 0, s);
+ if (ret < 0) {
+ return AVERROR_INVALIDDATA;
+ }
+ var_values[VAR_X] = res;
+ pdec_param->crop_whxy[i][2] = floor(var_values[VAR_X]);
+ }
+ if (pdec_param->cr_expr[i][3][0]) {
+ ret = av_expr_parse_and_eval(&res, pdec_param->cr_expr[i][3], var_names,
+ var_values, NULL, NULL, NULL, NULL, NULL,
+ 0, s);
+ if (ret < 0) {
+ return AVERROR_INVALIDDATA;
+ }
+ var_values[VAR_Y] = res;
+ pdec_param->crop_whxy[i][3] = floor(var_values[VAR_Y]);
+ }
+ /*Set output Scale*/
+ /*Reset OW and OH to next lower even number*/
+ var_values[VAR_OUT_W] = var_values[VAR_OW] =
+ (double)(pdec_param->crop_whxy[i][0] -
+ (pdec_param->crop_whxy[i][0] % 2));
+ var_values[VAR_OUT_H] = var_values[VAR_OH] =
+ (double)(pdec_param->crop_whxy[i][1] -
+ (pdec_param->crop_whxy[i][1] % 2));
+ if (pdec_param->sc_expr[i][0][0] && pdec_param->sc_expr[i][1][0]) {
+ if (av_expr_parse_and_eval(&res, pdec_param->sc_expr[i][0], var_names,
+ var_values, NULL, NULL, NULL, NULL, NULL, 0,
+ s) < 0) {
+ return AVERROR_INVALIDDATA;
+ }
+ pdec_param->scale_wh[i][0] = ceil(res);
+ ret = av_expr_parse_and_eval(&res, pdec_param->sc_expr[i][1], var_names,
+ var_values, NULL, NULL, NULL, NULL, NULL,
+ 0, s);
+ if (ret < 0) {
+ return AVERROR_INVALIDDATA;
+ }
+ pdec_param->scale_wh[i][1] = ceil(res);
+ }
+ }
+ return 0;
+}
+
+int ff_xcoder_dec_init(AVCodecContext *avctx, XCoderDecContext *s) {
+ int ret = 0;
+ ni_xcoder_params_t *p_param = &s->api_param;
+
+ s->api_ctx.hw_id = s->dev_dec_idx;
+ s->api_ctx.decoder_low_delay = 0;
+ ff_xcoder_strncpy(s->api_ctx.blk_dev_name, s->dev_blk_name,
+ NI_MAX_DEVICE_NAME_LEN);
+ ff_xcoder_strncpy(s->api_ctx.dev_xcoder_name, s->dev_xcoder,
+ MAX_CHAR_IN_DEVICE_NAME);
+
+ ret = ni_device_session_open(&s->api_ctx, NI_DEVICE_TYPE_DECODER);
+ if (ret != 0) {
+ av_log(avctx, AV_LOG_ERROR, "Failed to open decoder (status = %d), "
+ "resource unavailable\n", ret);
+ ret = AVERROR_EXTERNAL;
+ ff_xcoder_dec_close(avctx, s);
+ } else {
+ s->dev_xcoder_name = s->api_ctx.dev_xcoder_name;
+ s->blk_xcoder_name = s->api_ctx.blk_xcoder_name;
+ s->dev_dec_idx = s->api_ctx.hw_id;
+ av_log(avctx, AV_LOG_VERBOSE,
+ "XCoder %s.%d (inst: %d) opened successfully\n",
+ s->dev_xcoder_name, s->dev_dec_idx, s->api_ctx.session_id);
+
+ if (p_param->dec_input_params.hwframes) {
+ if (!avctx->hw_device_ctx) {
+ char buf[64] = {0};
+ av_log(avctx, AV_LOG_DEBUG,
+ "nicodec.c:ff_xcoder_dec_init() hwdevice_ctx_create\n");
+ snprintf(buf, sizeof(buf), "%d", s->dev_dec_idx);
+ ret = av_hwdevice_ctx_create(&avctx->hw_device_ctx, AV_HWDEVICE_TYPE_NI_QUADRA,
+ buf, NULL, 0); // create with null device
+ if (ret < 0) {
+ av_log(NULL, AV_LOG_ERROR, "Error creating a NI HW device\n");
+ return ret;
+ }
+ }
+ if (!avctx->hw_frames_ctx) {
+ avctx->hw_frames_ctx = av_hwframe_ctx_alloc(avctx->hw_device_ctx);
+
+ if (!avctx->hw_frames_ctx) {
+ ret = AVERROR(ENOMEM);
+ return ret;
+ }
+ }
+ s->frames = (AVHWFramesContext *)avctx->hw_frames_ctx->data;
+
+ s->frames->format = AV_PIX_FMT_NI_QUAD;
+ s->frames->width = avctx->width;
+ s->frames->height = avctx->height;
+
+ s->frames->sw_format = avctx->sw_pix_fmt;
+ // Decoder has its own dedicated pool
+ s->frames->initial_pool_size = -1;
+
+ if ((ret = av_hwframe_ctx_init(avctx->hw_frames_ctx)) < 0)
+ return ret;
+ avctx->pix_fmt = AV_PIX_FMT_NI_QUAD;
+ s->api_ctx.hw_action = NI_CODEC_HW_ENABLE;
+ } else {
+ // reassign in case above conditions alter value
+ avctx->pix_fmt = avctx->sw_pix_fmt;
+ s->api_ctx.hw_action = NI_CODEC_HW_NONE;
+ }
+ }
+
+ return ret;
+}
+
+int ff_xcoder_dec_close(AVCodecContext *avctx, XCoderDecContext *s) {
+ ni_session_context_t *p_ctx = &s->api_ctx;
+
+ if (p_ctx) {
+ // dec params in union with enc params struct
+ ni_retcode_t ret;
+ ni_xcoder_params_t *p_param = &s->api_param;
+ int suspended = 0;
+
+ ret = ni_device_session_close(p_ctx, s->eos, NI_DEVICE_TYPE_DECODER);
+ if (NI_RETCODE_SUCCESS != ret) {
+ av_log(avctx, AV_LOG_ERROR,
+ "Failed to close Decode Session (status = %d)\n", ret);
+ }
+ ni_device_session_context_clear(p_ctx);
+
+ if (p_param->dec_input_params.hwframes) {
+ av_log(avctx, AV_LOG_VERBOSE,
+ "File BLK handle %d close suspended to frames Uninit\n",
+ p_ctx->blk_io_handle); // suspended_device_handle
+ if (avctx->hw_frames_ctx) {
+ AVHWFramesContext *ctx =
+ (AVHWFramesContext *)avctx->hw_frames_ctx->data;
+ if (ctx) {
+ AVNIFramesContext *dst_ctx = (AVNIFramesContext*) ctx->hwctx;
+ if (dst_ctx) {
+ dst_ctx->suspended_device_handle = p_ctx->blk_io_handle;
+ suspended = 1;
+ }
+ }
+ }
+ }
+
+ if (suspended) {
+#ifdef __linux__
+ ni_device_close(p_ctx->device_handle);
+#endif
+ } else {
+#ifdef _WIN32
+ ni_device_close(p_ctx->device_handle);
+#elif __linux__
+ ni_device_close(p_ctx->device_handle);
+ ni_device_close(p_ctx->blk_io_handle);
+#endif
+ }
+ p_ctx->device_handle = NI_INVALID_DEVICE_HANDLE;
+ p_ctx->blk_io_handle = NI_INVALID_DEVICE_HANDLE;
+ ni_packet_t *xpkt = &(s->api_pkt.data.packet);
+ ni_packet_buffer_free(xpkt);
+ }
+
+ return 0;
+}
+
+// return 1 if need to prepend saved header to pkt data, 0 otherwise
+int ff_xcoder_add_headers(AVCodecContext *avctx, AVPacket *pkt,
+ uint8_t *extradata, int extradata_size) {
+ XCoderDecContext *s = avctx->priv_data;
+ int ret = 0;
+ int vps_num, sps_num, pps_num;
+
+ // check key frame packet only
+ if (!(pkt->flags & AV_PKT_FLAG_KEY) || !pkt->data || !extradata ||
+ !extradata_size) {
+ return ret;
+ }
+
+ if (s->extradata_size == extradata_size &&
+ memcmp(s->extradata, extradata, extradata_size) == 0) {
+ av_log(avctx, AV_LOG_TRACE, "%s extradata unchanged.\n", __FUNCTION__);
+ return ret;
+ }
+
+ if (AV_CODEC_ID_H264 != avctx->codec_id &&
+ AV_CODEC_ID_HEVC != avctx->codec_id) {
+ av_log(avctx, AV_LOG_DEBUG, "%s not AVC/HEVC codec: %d, skip!\n",
+ __FUNCTION__, avctx->codec_id);
+ return ret;
+ }
+
+ // extradata (headers) non-existing or changed: save/update it in the
+ // session storage
+ av_freep(&s->extradata);
+ s->extradata_size = 0;
+ s->got_first_key_frame = 0;
+ s->extradata = av_malloc(extradata_size);
+ if (!s->extradata) {
+ av_log(avctx, AV_LOG_ERROR, "%s memory allocation failed !\n",
+ __FUNCTION__);
+ return ret;
+ }
+
+ memcpy(s->extradata, extradata, extradata_size);
+ s->extradata_size = extradata_size;
+ // prepend header by default (assuming no header found in the pkt itself)
+ ret = 1;
+ // and we've got the first key frame of this stream
+ s->got_first_key_frame = 1;
+ vps_num = sps_num = pps_num = 0;
+
+ if (s->api_param.dec_input_params.skip_extra_headers &&
+ (s->extradata_size > 0) &&
+ s->extradata) {
+ const uint8_t *ptr = s->extradata;
+ const uint8_t *end = s->extradata + s->extradata_size;
+ uint32_t stc;
+ uint8_t nalu_type;
+
+ while (ptr < end) {
+ stc = -1;
+ ptr = avpriv_find_start_code(ptr, end, &stc);
+ if (ptr == end) {
+ break;
+ }
+
+ if (AV_CODEC_ID_H264 == avctx->codec_id) {
+ nalu_type = stc & 0x1f;
+
+ if (H264_NAL_SPS == nalu_type) {
+ sps_num++;
+ } else if(H264_NAL_PPS == nalu_type) {
+ pps_num++;
+ }
+
+ if (sps_num > H264_MAX_SPS_COUNT ||
+ pps_num > H264_MAX_PPS_COUNT) {
+ ret = 0;
+ av_log(avctx, AV_LOG_WARNING, "Drop extradata because of repeated SPS/PPS\n");
+ break;
+ }
+ } else if (AV_CODEC_ID_HEVC == avctx->codec_id) {
+ nalu_type = (stc >> 1) & 0x3F;
+
+ if (HEVC_NAL_VPS == nalu_type) {
+ vps_num++;
+ } else if (HEVC_NAL_SPS == nalu_type) {
+ sps_num++;
+ } else if (HEVC_NAL_PPS == nalu_type) {
+ pps_num++;
+ }
+
+ if (vps_num > HEVC_MAX_VPS_COUNT ||
+ sps_num > HEVC_MAX_SPS_COUNT ||
+ pps_num > HEVC_MAX_PPS_COUNT) {
+ ret = 0;
+ av_log(avctx, AV_LOG_WARNING, "Drop extradata because of repeated VPS/SPS/PPS\n");
+ break;
+ }
+ }
+ }
+ }
+
+ return ret;
+}
+
int ff_xcoder_dec_send(AVCodecContext *avctx, XCoderDecContext *s, AVPacket *pkt) {
+ /* call ni_decoder_session_write to send compressed video packet to the decoder
+ instance */
+ int need_draining = 0;
+ size_t size;
+ ni_packet_t *xpkt = &(s->api_pkt.data.packet);
+ int ret;
+ int sent;
+ int send_size = 0;
+ int new_packet = 0;
+ int extra_prev_size = 0;
+ int svct_skip_packet = s->svct_skip_next_packet;
+ OpaqueData *opaque_data;
+
+ size = pkt->size;
+
+ if (s->flushing) {
+ av_log(avctx, AV_LOG_ERROR, "Decoder is flushing and cannot accept new "
+ "buffer until all output buffers have been released\n");
+ return AVERROR_EXTERNAL;
+ }
+
+ if (pkt->size == 0) {
+ need_draining = 1;
+ }
+
+ if (s->draining && s->eos) {
+ av_log(avctx, AV_LOG_VERBOSE, "Decoder is draining, eos\n");
+ return AVERROR_EOF;
+ }
+
+ if (xpkt->data_len == 0) {
+ AVBSFContext *bsf = avctx->internal->bsf;
+ uint8_t *extradata = bsf ? bsf->par_out->extradata : avctx->extradata;
+ int extradata_size = bsf ? bsf->par_out->extradata_size : avctx->extradata_size;
+
+ memset(xpkt, 0, sizeof(ni_packet_t));
+ xpkt->pts = pkt->pts;
+ xpkt->dts = pkt->dts;
+ xpkt->flags = pkt->flags;
+ xpkt->video_width = avctx->width;
+ xpkt->video_height = avctx->height;
+ xpkt->p_data = NULL;
+ xpkt->data_len = pkt->size;
+ xpkt->pkt_pos = pkt->pos;
+
+ if (pkt->flags & AV_PKT_FLAG_KEY && extradata_size > 0 &&
+ ff_xcoder_add_headers(avctx, pkt, extradata, extradata_size)) {
+ if (extradata_size > s->api_ctx.max_nvme_io_size * 2) {
+ av_log(avctx, AV_LOG_ERROR,
+ "ff_xcoder_dec_send extradata_size %d "
+ "exceeding max size supported: %d\n",
+ extradata_size, s->api_ctx.max_nvme_io_size * 2);
+ } else {
+ av_log(avctx, AV_LOG_VERBOSE,
+ "ff_xcoder_dec_send extradata_size %d "
+ "copied to pkt start.\n",
+ s->extradata_size);
+
+ s->api_ctx.prev_size = s->extradata_size;
+ memcpy(s->api_ctx.p_leftover, s->extradata, s->extradata_size);
+ }
+ }
+
+ s->svct_skip_next_packet = 0;
+ // A lone custom SEI in the last packet would not be recognized by the
+ // firmware, so pass the custom SEI through with this packet instead.
+ if (s->lone_sei_pkt.size > 0) {
+ // No need to check the return value here because the lone_sei_pkt was
+ // parsed before. Here it is only to extract the SEI data.
+ ni_dec_packet_parse(&s->api_ctx, &s->api_param, s->lone_sei_pkt.data,
+ s->lone_sei_pkt.size, xpkt, s->low_delay,
+ s->api_ctx.codec_format, s->pkt_nal_bitmap, -1,
+ &s->svct_skip_next_packet, &s->is_lone_sei_pkt);
+ }
+
+ ret = ni_dec_packet_parse(&s->api_ctx, &s->api_param, pkt->data,
+ pkt->size, xpkt, s->low_delay,
+ s->api_ctx.codec_format, s->pkt_nal_bitmap,
+ -1, &s->svct_skip_next_packet,
+ &s->is_lone_sei_pkt);
+ if (ret < 0) {
+ goto fail;
+ }
+
+ if (svct_skip_packet) {
+ av_log(avctx, AV_LOG_TRACE, "ff_xcoder_dec_send packet: pts:%" PRIi64 ","
+ " size:%d\n", pkt->pts, pkt->size);
+ xpkt->data_len = 0;
+ return pkt->size;
+ }
+
+ // If the current packet is a lone SEI, save it to be sent with the next
+ // packet. Also check whether this is the first packet containing a key
+ // frame in decoder low-delay mode.
+ if (s->is_lone_sei_pkt) {
+ av_packet_ref(&s->lone_sei_pkt, pkt);
+ xpkt->data_len = 0;
+ ni_memfree(xpkt->p_custom_sei_set);
+ if (s->low_delay && s->got_first_key_frame &&
+ !(s->pkt_nal_bitmap & NI_GENERATE_ALL_NAL_HEADER_BIT)) {
+ // Packets preceding the IDR cannot be decoded, so reset the
+ // packet count to zero here.
+ s->api_ctx.decoder_low_delay = s->low_delay;
+ s->api_ctx.pkt_num = 0;
+ s->pkt_nal_bitmap |= NI_GENERATE_ALL_NAL_HEADER_BIT;
+ av_log(avctx, AV_LOG_TRACE,
+ "ff_xcoder_dec_send got first IDR in decoder low delay "
+ "mode, "
+ "delay time %dms, pkt_nal_bitmap %d\n",
+ s->low_delay, s->pkt_nal_bitmap);
+ }
+ av_log(avctx, AV_LOG_TRACE, "ff_xcoder_dec_send pkt lone SEI, saved, "
+ "and return %d\n", pkt->size);
+ return pkt->size;
+ }
+
+ // Send the previous saved lone SEI packet to the decoder
+ if (s->lone_sei_pkt.size > 0) {
+ av_log(avctx, AV_LOG_TRACE, "ff_xcoder_dec_send copy over lone SEI "
+ "data size: %d\n", s->lone_sei_pkt.size);
+ memcpy(s->api_ctx.p_leftover + s->api_ctx.prev_size,
+ s->lone_sei_pkt.data, s->lone_sei_pkt.size);
+ s->api_ctx.prev_size += s->lone_sei_pkt.size;
+ av_packet_unref(&s->lone_sei_pkt);
+ }
+
+ if (pkt->size + s->api_ctx.prev_size > 0) {
+ ni_packet_buffer_alloc(xpkt, (pkt->size + s->api_ctx.prev_size));
+ if (!xpkt->p_data) {
+ ret = AVERROR(ENOMEM);
+ goto fail;
+ }
+ }
+ new_packet = 1;
+ } else {
+ send_size = xpkt->data_len;
+ }
+
+ av_log(avctx, AV_LOG_VERBOSE, "ff_xcoder_dec_send: pkt->size=%d pkt->buf=%p\n", pkt->size, pkt->buf);
+
+ if (s->started == 0) {
+ xpkt->start_of_stream = 1;
+ s->started = 1;
+ }
+
+ if (need_draining && !s->draining) {
+ av_log(avctx, AV_LOG_VERBOSE, "Sending End Of Stream signal\n");
+ xpkt->end_of_stream = 1;
+ xpkt->data_len = 0;
+
+ av_log(avctx, AV_LOG_TRACE, "ni_packet_copy before: size=%d, s->prev_size=%d, send_size=%d (end of stream)\n", pkt->size, s->api_ctx.prev_size, send_size);
+ if (new_packet) {
+ extra_prev_size = s->api_ctx.prev_size;
+ send_size = ni_packet_copy(xpkt->p_data, pkt->data, pkt->size, s->api_ctx.p_leftover, &s->api_ctx.prev_size);
+ // increment offset of data sent to decoder and save it
+ xpkt->pos = (long long)s->offset;
+ s->offset += pkt->size + extra_prev_size;
+ }
+ av_log(avctx, AV_LOG_TRACE, "ni_packet_copy after: size=%d, s->prev_size=%d, send_size=%d, xpkt->data_len=%d (end of stream)\n", pkt->size, s->api_ctx.prev_size, send_size, xpkt->data_len);
+
+ if (send_size < 0) {
+ av_log(avctx, AV_LOG_ERROR, "Failed to copy pkt (status = %d)\n",
+ send_size);
+ ret = AVERROR_EXTERNAL;
+ goto fail;
+ }
+ xpkt->data_len += extra_prev_size;
+
+ sent = 0;
+ if (xpkt->data_len > 0) {
+ sent = ni_device_session_write(&(s->api_ctx), &(s->api_pkt), NI_DEVICE_TYPE_DECODER);
+ }
+ if (sent < 0) {
+ av_log(avctx, AV_LOG_ERROR, "Failed to send eos signal (status = %d)\n",
+ sent);
+ if (NI_RETCODE_ERROR_VPU_RECOVERY == sent) {
+ ret = xcoder_decode_reset(avctx);
+ if (0 == ret) {
+ ret = AVERROR(EAGAIN);
+ }
+ } else {
+ ret = AVERROR(EIO);
+ }
+ goto fail;
+ }
+ av_log(avctx, AV_LOG_VERBOSE, "Queued eos (status = %d) ts=%" PRIi64 "\n",
+ sent, xpkt->pts);
+ s->draining = 1;
+
+ ni_device_session_flush(&(s->api_ctx), NI_DEVICE_TYPE_DECODER);
+ } else {
+ av_log(avctx, AV_LOG_TRACE, "ni_packet_copy before: size=%d, s->prev_size=%d, send_size=%d\n", pkt->size, s->api_ctx.prev_size, send_size);
+ if (new_packet) {
+ extra_prev_size = s->api_ctx.prev_size;
+ send_size = ni_packet_copy(xpkt->p_data, pkt->data, pkt->size, s->api_ctx.p_leftover, &s->api_ctx.prev_size);
+ // increment offset of data sent to decoder and save it
+ xpkt->pos = (long long)s->offset;
+ s->offset += pkt->size + extra_prev_size;
+ }
+ av_log(avctx, AV_LOG_TRACE, "ni_packet_copy after: size=%d, s->prev_size=%d, send_size=%d, xpkt->data_len=%d\n", pkt->size, s->api_ctx.prev_size, send_size, xpkt->data_len);
+
+ if (send_size < 0) {
+ av_log(avctx, AV_LOG_ERROR, "Failed to copy pkt (status = %d)\n", send_size);
+ ret = AVERROR_EXTERNAL;
+ goto fail;
+ }
+ xpkt->data_len += extra_prev_size;
+
+ sent = 0;
+ if (xpkt->data_len > 0) {
+ sent = ni_device_session_write(&s->api_ctx, &(s->api_pkt), NI_DEVICE_TYPE_DECODER);
+ av_log(avctx, AV_LOG_VERBOSE, "ff_xcoder_dec_send pts=%" PRIi64 ", dts=%" PRIi64 ", pos=%" PRIi64 ", sent=%d\n", pkt->pts, pkt->dts, pkt->pos, sent);
+ }
+ if (sent < 0) {
+ av_log(avctx, AV_LOG_ERROR, "Failed to send compressed pkt (status = %d)\n", sent);
+ if (NI_RETCODE_ERROR_VPU_RECOVERY == sent) {
+ ret = xcoder_decode_reset(avctx);
+ if (0 == ret) {
+ ret = AVERROR(EAGAIN);
+ }
+ } else {
+ ret = AVERROR(EIO);
+ }
+ goto fail;
+ } else if (sent == 0) {
+ av_log(avctx, AV_LOG_VERBOSE, "Queued input buffer size=0\n");
+ } else if (sent < size) { /* partial sent; keep trying */
+ av_log(avctx, AV_LOG_VERBOSE, "Queued input buffer size=%d\n", sent);
+ }
+ }
+
+ if (xpkt->data_len == 0) {
+ /* if this packet is done sending, free any sei buffer. */
+ ni_memfree(xpkt->p_custom_sei_set);
+
+ /* save the opaque pointers from input packet to be copied to corresponding frame later */
+ if (avctx->flags & AV_CODEC_FLAG_COPY_OPAQUE) {
+ opaque_data = &s->opaque_data_array[s->opaque_data_pos];
+ opaque_data->pkt_pos = pkt->pos;
+ opaque_data->opaque = pkt->opaque;
+ av_buffer_replace(&opaque_data->opaque_ref, pkt->opaque_ref);
+ s->opaque_data_pos = (s->opaque_data_pos + 1) % s->opaque_data_nb;
+ }
+ }
+
+ if (sent != 0) {
+ // packet fully consumed; free its buffer and report bytes sent
+ ni_packet_buffer_free(xpkt);
+ return sent;
+ } else {
+ if (s->draining) {
+ av_log(avctx, AV_LOG_WARNING, "%s draining, sent == 0, return 0!\n", __func__);
+ return 0;
+ } else {
+ av_log(avctx, AV_LOG_VERBOSE, "%s NOT draining, sent == 0, return EAGAIN !\n", __func__);
+ return AVERROR(EAGAIN);
+ }
+ }
+
+fail:
+ ni_packet_buffer_free(xpkt);
+ ni_memfree(xpkt->p_custom_sei_set);
+ s->draining = 1;
+ s->eos = 1;
+
+ return ret;
+}
+
+int retrieve_frame(AVCodecContext *avctx, AVFrame *data, int *got_frame,
+ ni_frame_t *xfme) {
+ XCoderDecContext *s = avctx->priv_data;
+ ni_xcoder_params_t *p_param =
+ &s->api_param; // dec params in union with enc params struct
+ int num_extra_outputs = (p_param->dec_input_params.enable_out1 > 0) + (p_param->dec_input_params.enable_out2 > 0);
+ uint32_t buf_size = xfme->data_len[0] + xfme->data_len[1] +
+ xfme->data_len[2] + xfme->data_len[3];
+ uint8_t *buf = xfme->p_data[0];
+ uint8_t *buf1, *buf2;
+ bool is_hw;
+ int frame_planar;
+ int stride = 0;
+ int res = 0;
+ AVHWFramesContext *ctx = NULL;
+ AVNIFramesContext *dst_ctx = NULL;
+ AVFrame *frame = data;
+ ni_aux_data_t *aux_data = NULL;
+ AVFrameSideData *av_side_data = NULL;
+ ni_session_data_io_t session_io_data1;
+ ni_session_data_io_t session_io_data2;
+ ni_session_data_io_t *p_session_data1 = &session_io_data1;
+ ni_session_data_io_t *p_session_data2 = &session_io_data2;
+ niFrameSurface1_t *p_data3;
+ niFrameSurface1_t *p_data3_1;
+ niFrameSurface1_t *p_data3_2;
+ OpaqueData *opaque_data;
+ int i;
+
+ av_log(avctx, AV_LOG_TRACE,
+ "retrieve_frame: buf %p data_len [%d %d %d %d] buf_size %u\n", buf,
+ xfme->data_len[0], xfme->data_len[1], xfme->data_len[2],
+ xfme->data_len[3], buf_size);
+
+ memset(p_session_data1, 0, sizeof(ni_session_data_io_t));
+ memset(p_session_data2, 0, sizeof(ni_session_data_io_t));
+
+ switch (avctx->sw_pix_fmt) {
+ case AV_PIX_FMT_NV12:
+ case AV_PIX_FMT_P010LE:
+ frame_planar = NI_PIXEL_PLANAR_FORMAT_SEMIPLANAR;
+ break;
+ case AV_PIX_FMT_NI_QUAD_8_TILE_4X4:
+ case AV_PIX_FMT_NI_QUAD_10_TILE_4X4:
+ frame_planar = NI_PIXEL_PLANAR_FORMAT_TILED4X4;
+ break;
+ default:
+ frame_planar = NI_PIXEL_PLANAR_FORMAT_PLANAR;
+ break;
+ }
+
+ if (num_extra_outputs) {
+ ni_frame_buffer_alloc(&(p_session_data1->data.frame), 1,
+ 1, // width/height and codec id do not matter;
+ // no metadata
+ 1, 0, 1, 1, frame_planar);
+ buf1 = p_session_data1->data.frame.p_data[0];
+ if (num_extra_outputs > 1) {
+ ni_frame_buffer_alloc(&(p_session_data2->data.frame), 1,
+ 1, // width height does not matter
+ 1, 0, 1, 1, frame_planar);
+ buf2 = p_session_data2->data.frame.p_data[0];
+ }
+ }
+
+ is_hw = xfme->data_len[3] > 0;
+ if (is_hw) {
+ if (frame->hw_frames_ctx) {
+ ctx = (AVHWFramesContext*)frame->hw_frames_ctx->data;
+ dst_ctx = (AVNIFramesContext*) ctx->hwctx;
+ }
+
+ // Note, the real first frame could be dropped due to AV_PKT_FLAG_DISCARD
+ if ((dst_ctx != NULL) &&
+ (dst_ctx->api_ctx.device_handle != s->api_ctx.device_handle)) {
+ if (frame->hw_frames_ctx) {
+ av_log(avctx, AV_LOG_VERBOSE,
+ "First frame, set hw_frame_context to copy decode sessions "
+ "threads\n");
+ res = ni_device_session_copy(&s->api_ctx, &dst_ctx->api_ctx);
+ if (NI_RETCODE_SUCCESS != res) {
+ return res;
+ }
+ av_log(avctx, AV_LOG_VERBOSE,
+ "retrieve_frame: blk_io_handle %d device_handle %d\n",
+ s->api_ctx.blk_io_handle, s->api_ctx.device_handle);
+ }
+ }
+ }
+
+ av_log(avctx, AV_LOG_VERBOSE, "decoding %" PRId64 " frame ...\n", s->api_ctx.frame_num);
+
+ if (avctx->width <= 0) {
+ av_log(avctx, AV_LOG_ERROR, "width is not set\n");
+ return AVERROR_INVALIDDATA;
+ }
+ if (avctx->height <= 0) {
+ av_log(avctx, AV_LOG_ERROR, "height is not set\n");
+ return AVERROR_INVALIDDATA;
+ }
+
+ stride = s->api_ctx.active_video_width;
+
+ av_log(avctx, AV_LOG_VERBOSE, "XFRAME SIZE: %d, STRIDE: %d\n", buf_size, stride);
+
+ if (!is_hw && (stride == 0 || buf_size < stride * avctx->height)) {
+ av_log(avctx, AV_LOG_ERROR, "Packet too small (%d)\n", buf_size);
+ return AVERROR_INVALIDDATA;
+ }
+
+ frame->flags &= ~AV_FRAME_FLAG_KEY;
+    if (xfme->ni_pict_type & 0x10) // key frame marker
+        frame->flags |= AV_FRAME_FLAG_KEY;
+    switch (xfme->ni_pict_type & 0xF) {
+    case DECODER_PIC_TYPE_IDR:
+        frame->flags |= AV_FRAME_FLAG_KEY;
+        /* fall through */
+    case PIC_TYPE_I:
+ frame->pict_type = AV_PICTURE_TYPE_I;
+ if(s->api_param.dec_input_params.enable_follow_iframe) {
+ frame->flags |= AV_FRAME_FLAG_KEY;
+ }
+ break;
+ case PIC_TYPE_P:
+ frame->pict_type = AV_PICTURE_TYPE_P;
+ break;
+ case PIC_TYPE_B:
+ frame->pict_type = AV_PICTURE_TYPE_B;
+ break;
+ default:
+ frame->pict_type = AV_PICTURE_TYPE_NONE;
+ }
+
+ if (AV_CODEC_ID_MJPEG == avctx->codec_id) {
+ frame->flags |= AV_FRAME_FLAG_KEY;
+ }
+
+    // low delay mode must be disabled when a B frame is encountered
+ if (frame->pict_type == AV_PICTURE_TYPE_B &&
+ s->api_ctx.enable_low_delay_check &&
+ s->low_delay) {
+ av_log(avctx, AV_LOG_WARNING,
+ "Warning: session %d decoder lowDelay mode "
+ "is cancelled due to B frames with "
+ "enable_low_delay_check, frame_num %" PRId64 "\n",
+ s->api_ctx.session_id, s->api_ctx.frame_num);
+ s->low_delay = 0;
+ }
+ res = ff_decode_frame_props(avctx, frame);
+ if (res < 0)
+ return res;
+
+ frame->duration = avctx->internal->last_pkt_props->duration;
+
+ if ((res = av_image_check_size(xfme->video_width, xfme->video_height, 0, avctx)) < 0)
+ return res;
+
+ if (is_hw) {
+ frame->buf[0] = av_buffer_create(buf, buf_size, ni_frame_free, NULL, 0);
+ if (num_extra_outputs) {
+ frame->buf[1] =
+ av_buffer_create(buf1, (int)(buf_size / 3), ni_frame_free, NULL, 0);
+ buf1 = frame->buf[1]->data;
+ memcpy(buf1, buf + sizeof(niFrameSurface1_t),
+ sizeof(niFrameSurface1_t)); // copy hwdesc to new buffer
+ if (num_extra_outputs > 1) {
+ frame->buf[2] = av_buffer_create(buf2, (int)(buf_size / 3),
+ ni_frame_free, NULL, 0);
+ buf2 = frame->buf[2]->data;
+ memcpy(buf2, buf + 2 * sizeof(niFrameSurface1_t),
+ sizeof(niFrameSurface1_t));
+ }
+ }
+ } else {
+ frame->buf[0] = av_buffer_create(buf, buf_size, ni_align_free, xfme->dec_buf, 0);
+ }
+ av_log(avctx, AV_LOG_TRACE,
+ "retrieve_frame: is_hw %d frame->buf[0] %p buf %p buf_size %u "
+           "num_extra_outputs %d pkt_duration %" PRId64 "\n",
+ is_hw, frame->buf[0], buf, buf_size, num_extra_outputs,
+ frame->duration);
+
+ buf = frame->buf[0]->data;
+
+ // retrieve side data if available
+ ni_dec_retrieve_aux_data(xfme);
+
+ // update avctx framerate with timing info
+ if (xfme->vui_time_scale && xfme->vui_num_units_in_tick) {
+ av_reduce(&avctx->framerate.den, &avctx->framerate.num,
+ xfme->vui_num_units_in_tick * (AV_CODEC_ID_H264 == avctx->codec_id ? 2 : 1),
+ xfme->vui_time_scale, 1 << 30);
+ }
+
+ if (xfme->vui_len > 0) {
+ enum AVColorRange color_range = xfme->video_full_range_flag ? AVCOL_RANGE_JPEG : AVCOL_RANGE_MPEG;
+ if ((avctx->color_range != color_range) ||
+ (avctx->color_trc != xfme->color_trc) ||
+ (avctx->colorspace != xfme->color_space) ||
+ (avctx->color_primaries != xfme->color_primaries)) {
+ avctx->color_range = frame->color_range = color_range;
+ avctx->color_trc = frame->color_trc = xfme->color_trc;
+ avctx->colorspace = frame->colorspace = xfme->color_space;
+ avctx->color_primaries = frame->color_primaries = xfme->color_primaries;
+ }
+
+ if (avctx->pix_fmt != AV_PIX_FMT_NI_QUAD) {
+ if (frame->format == AV_PIX_FMT_YUVJ420P && color_range == AVCOL_RANGE_MPEG)
+ frame->format = AV_PIX_FMT_YUV420P;
+ else if (frame->format == AV_PIX_FMT_YUV420P && color_range == AVCOL_RANGE_JPEG)
+ frame->format = AV_PIX_FMT_YUVJ420P;
+ }
+ }
+ // User Data Unregistered SEI if available
+ av_log(avctx, AV_LOG_VERBOSE, "#SEI# UDU (offset=%u len=%u)\n",
+ xfme->sei_user_data_unreg_offset, xfme->sei_user_data_unreg_len);
+ if (xfme->sei_user_data_unreg_offset) {
+ if ((aux_data = ni_frame_get_aux_data(xfme, NI_FRAME_AUX_DATA_UDU_SEI))) {
+ av_side_data = av_frame_new_side_data(
+ frame, AV_FRAME_DATA_SEI_UNREGISTERED, aux_data->size);
+ if (!av_side_data) {
+ return AVERROR(ENOMEM);
+ } else {
+ memcpy(av_side_data->data, aux_data->data, aux_data->size);
+ }
+ av_log(avctx, AV_LOG_VERBOSE, "UDU SEI added (len=%d type=5)\n",
+ xfme->sei_user_data_unreg_len);
+ } else {
+ av_log(avctx, AV_LOG_ERROR, "UDU SEI dropped! (len=%d type=5)\n",
+ xfme->sei_user_data_unreg_len);
+ }
+ }
+
+ // close caption data if available
+ av_log(avctx, AV_LOG_VERBOSE, "#SEI# CC (offset=%u len=%u)\n",
+ xfme->sei_cc_offset, xfme->sei_cc_len);
+ if ((aux_data = ni_frame_get_aux_data(xfme, NI_FRAME_AUX_DATA_A53_CC))) {
+ av_side_data =
+ av_frame_new_side_data(frame, AV_FRAME_DATA_A53_CC, aux_data->size);
+
+ if (!av_side_data) {
+ return AVERROR(ENOMEM);
+ } else {
+ memcpy(av_side_data->data, aux_data->data, aux_data->size);
+ }
+ }
+
+ // hdr10 sei data if available
+ av_log(avctx, AV_LOG_VERBOSE, "#SEI# MDCV (offset=%u len=%u)\n",
+ xfme->sei_hdr_mastering_display_color_vol_offset,
+ xfme->sei_hdr_mastering_display_color_vol_len);
+ if ((aux_data = ni_frame_get_aux_data(
+ xfme, NI_FRAME_AUX_DATA_MASTERING_DISPLAY_METADATA))) {
+ AVMasteringDisplayMetadata *mdm =
+ av_mastering_display_metadata_create_side_data(frame);
+ if (!mdm) {
+ return AVERROR(ENOMEM);
+ } else {
+ memcpy(mdm, aux_data->data, aux_data->size);
+ }
+ }
+
+ av_log(avctx, AV_LOG_VERBOSE, "#SEI# CLL (offset=%u len=%u)\n",
+ xfme->sei_hdr_content_light_level_info_offset,
+ xfme->sei_hdr_content_light_level_info_len);
+ if ((aux_data = ni_frame_get_aux_data(
+ xfme, NI_FRAME_AUX_DATA_CONTENT_LIGHT_LEVEL))) {
+ AVContentLightMetadata *clm =
+ av_content_light_metadata_create_side_data(frame);
+ if (!clm) {
+ return AVERROR(ENOMEM);
+ } else {
+ memcpy(clm, aux_data->data, aux_data->size);
+ }
+ }
+
+ // hdr10+ sei data if available
+ av_log(avctx, AV_LOG_VERBOSE, "#SEI# HDR10+ (offset=%u len=%u)\n",
+ xfme->sei_hdr_plus_offset, xfme->sei_hdr_plus_len);
+ if ((aux_data = ni_frame_get_aux_data(xfme, NI_FRAME_AUX_DATA_HDR_PLUS))) {
+ AVDynamicHDRPlus *hdrp = av_dynamic_hdr_plus_create_side_data(frame);
+
+ if (!hdrp) {
+ return AVERROR(ENOMEM);
+ } else {
+ memcpy(hdrp, aux_data->data, aux_data->size);
+ }
+ } // hdr10+ sei
+
+    // clean up the auxiliary data of the ni_frame after use
+ ni_frame_wipe_aux_data(xfme);
+
+    // NI decoders in public FFmpeg do not have the custom SEI feature.
+    // If the feature was not already disabled, free its memory here.
+ if (xfme->p_custom_sei_set) {
+ free(xfme->p_custom_sei_set);
+ xfme->p_custom_sei_set = NULL;
+ }
+
+ frame->pkt_dts = xfme->dts;
+ frame->pts = xfme->pts;
+ if (xfme->pts != NI_NOPTS_VALUE) {
+ s->current_pts = frame->pts;
+ }
+
+ if (is_hw) {
+ p_data3 = (niFrameSurface1_t*)(xfme->p_buffer + xfme->data_len[0] + xfme->data_len[1] + xfme->data_len[2]);
+ frame->data[3] = xfme->p_buffer + xfme->data_len[0] + xfme->data_len[1] + xfme->data_len[2];
+
+ av_log(avctx, AV_LOG_DEBUG, "retrieve_frame: OUT0 data[3] trace ui16FrameIdx = [%d], device_handle=%d bitdep=%d, WxH %d x %d\n",
+ p_data3->ui16FrameIdx,
+ p_data3->device_handle,
+ p_data3->bit_depth,
+ p_data3->ui16width,
+ p_data3->ui16height);
+
+ if (num_extra_outputs) {
+ p_data3_1 = (niFrameSurface1_t*)buf1;
+ av_log(avctx, AV_LOG_DEBUG, "retrieve_frame: OUT1 data[3] trace ui16FrameIdx = [%d], device_handle=%d bitdep=%d, WxH %d x %d\n",
+ p_data3_1->ui16FrameIdx,
+ p_data3_1->device_handle,
+ p_data3_1->bit_depth,
+ p_data3_1->ui16width,
+ p_data3_1->ui16height);
+ if (num_extra_outputs > 1) {
+ p_data3_2 = (niFrameSurface1_t*)buf2;
+ av_log(avctx, AV_LOG_DEBUG, "retrieve_frame: OUT2 data[3] trace ui16FrameIdx = [%d], device_handle=%d bitdep=%d, WxH %d x %d\n",
+ p_data3_2->ui16FrameIdx,
+ p_data3_2->device_handle,
+ p_data3_2->bit_depth,
+ p_data3_2->ui16width,
+ p_data3_2->ui16height);
+ }
+ }
+ }
+ av_log(avctx, AV_LOG_VERBOSE, "retrieve_frame: frame->buf[0]=%p, "
+ "frame->data=%p, frame->pts=%" PRId64 ", frame size=%d, "
+ "s->current_pts=%" PRId64 ", frame->pkt_duration=%" PRId64
+ " sei size %d offset %u\n", frame->buf[0], frame->data, frame->pts,
+ buf_size, s->current_pts, frame->duration, xfme->sei_cc_len,
+ xfme->sei_cc_offset);
+
+ if (!frame->buf[0])
+ return AVERROR(ENOMEM);
+
+ if (!is_hw &&
+ ((res = av_image_fill_arrays(
+ frame->data, frame->linesize, buf, avctx->sw_pix_fmt,
+ (int)(s->api_ctx.active_video_width / s->api_ctx.bit_depth_factor),
+ s->api_ctx.active_video_height, 1)) < 0)) {
+ av_buffer_unref(&frame->buf[0]);
+ return res;
+ }
+
+ av_log(avctx, AV_LOG_VERBOSE, "retrieve_frame: success av_image_fill_arrays "
+ "return %d\n", res);
+
+ if (!is_hw) {
+ frame->linesize[1] = frame->linesize[2] = (((frame->width / ((frame_planar == 0) ? 1 : 2) * s->api_ctx.bit_depth_factor) + 127) / 128) * 128;
+ frame->linesize[2] = (frame_planar == 0) ? 0 : frame->linesize[1];
+ frame->data[2] = (frame_planar == 0) ? 0 : frame->data[1] + (frame->linesize[1] * frame->height / 2);
+ }
+
+ frame->crop_top = xfme->crop_top;
+ frame->crop_bottom = frame->height - xfme->crop_bottom; // ppu auto crop should have cropped out padding, crop_bottom should be 0
+ frame->crop_left = xfme->crop_left;
+ frame->crop_right = frame->width - xfme->crop_right; // ppu auto crop should have cropped out padding, crop_right should be 0
+
+ if (is_hw && frame->hw_frames_ctx && dst_ctx != NULL) {
+ av_log(avctx, AV_LOG_TRACE,
+ "retrieve_frame: hw_frames_ctx av_buffer_get_ref_count=%d\n",
+ av_buffer_get_ref_count(frame->hw_frames_ctx));
+ dst_ctx->split_ctx.enabled = (num_extra_outputs >= 1) ? 1 : 0;
+ dst_ctx->split_ctx.w[0] = p_data3->ui16width;
+ dst_ctx->split_ctx.h[0] = p_data3->ui16height;
+ dst_ctx->split_ctx.f[0] = (int)p_data3->encoding_type;
+ dst_ctx->split_ctx.f8b[0] = (int)p_data3->bit_depth;
+ dst_ctx->split_ctx.w[1] =
+ (num_extra_outputs >= 1) ? p_data3_1->ui16width : 0;
+ dst_ctx->split_ctx.h[1] =
+ (num_extra_outputs >= 1) ? p_data3_1->ui16height : 0;
+ dst_ctx->split_ctx.f[1] =
+ (num_extra_outputs >= 1) ? p_data3_1->encoding_type : 0;
+ dst_ctx->split_ctx.f8b[1] =
+ (num_extra_outputs >= 1) ? p_data3_1->bit_depth : 0;
+ dst_ctx->split_ctx.w[2] =
+ (num_extra_outputs == 2) ? p_data3_2->ui16width : 0;
+ dst_ctx->split_ctx.h[2] =
+ (num_extra_outputs == 2) ? p_data3_2->ui16height : 0;
+ dst_ctx->split_ctx.f[2] =
+ (num_extra_outputs == 2) ? p_data3_2->encoding_type : 0;
+ dst_ctx->split_ctx.f8b[2] =
+ (num_extra_outputs == 2) ? p_data3_2->bit_depth : 0;
+ }
+
+    /* retrieve the opaque pointers saved earlier by matching the pkt_pos of the output
+     * frame and input packet, assuming that the pkt_pos of every input packet is unique */
+ if (avctx->flags & AV_CODEC_FLAG_COPY_OPAQUE) {
+ opaque_data = NULL;
+ for (i = 0; i < s->opaque_data_nb; i++) {
+ if (s->opaque_data_array[i].pkt_pos == (int64_t)xfme->pkt_pos) {
+ opaque_data = &s->opaque_data_array[i];
+ break;
+ }
+ }
+        /* copy the pointers over to the AVFrame if a matching entry is found; otherwise this is unexpected, so do nothing */
+ if (opaque_data) {
+ frame->opaque = opaque_data->opaque;
+ av_buffer_replace(&frame->opaque_ref, opaque_data->opaque_ref);
+ av_buffer_unref(&opaque_data->opaque_ref);
+ opaque_data->pkt_pos = -1;
+ }
+ }
+
+ *got_frame = 1;
+ return buf_size;
+}
+
+int ff_xcoder_dec_receive(AVCodecContext *avctx, XCoderDecContext *s,
+ AVFrame *frame, bool wait)
+{
+ /* call xcode_dec_receive to get a decoded YUV frame from the decoder
+ instance */
+ int ret = 0;
+ int got_frame = 0;
+ ni_session_data_io_t session_io_data;
+ ni_session_data_io_t * p_session_data = &session_io_data;
+ int alloc_mem, height, actual_width, cropped_width, cropped_height;
+    bool bSequenceChange = false;
+ int frame_planar;
+
+ if (s->draining && s->eos) {
+ return AVERROR_EOF;
+ }
+
+read_op:
+ memset(p_session_data, 0, sizeof(ni_session_data_io_t));
+
+ if (s->draining) {
+ s->api_ctx.burst_control = 0;
+    } else if (s->api_ctx.frame_num % 2 == 0) {
+        s->api_ctx.burst_control = (s->api_ctx.burst_control == 0 ? 1 : 0); // toggle
+ }
+ if (s->api_ctx.burst_control) {
+        av_log(avctx, AV_LOG_DEBUG, "ff_xcoder_dec_receive burst return %" PRId64 " frame\n", s->api_ctx.frame_num);
+ return AVERROR(EAGAIN);
+ }
+
+    // If the active video resolution is already known, use it since it is the
+    // exact size of the frame to be returned; otherwise use the size reported
+    // upstream as the initial setting, to be adjusted later.
+ height =
+ (int)(s->api_ctx.active_video_height > 0 ? s->api_ctx.active_video_height
+ : avctx->height);
+ actual_width =
+ (int)(s->api_ctx.actual_video_width > 0 ? s->api_ctx.actual_video_width
+ : avctx->width);
+
+ // allocate memory only after resolution is known (buffer pool set up)
+ alloc_mem = (s->api_ctx.active_video_width > 0 &&
+ s->api_ctx.active_video_height > 0 ? 1 : 0);
+ switch (avctx->sw_pix_fmt) {
+ case AV_PIX_FMT_NV12:
+ case AV_PIX_FMT_P010LE:
+ frame_planar = NI_PIXEL_PLANAR_FORMAT_SEMIPLANAR;
+ break;
+ case AV_PIX_FMT_NI_QUAD_8_TILE_4X4:
+ case AV_PIX_FMT_NI_QUAD_10_TILE_4X4:
+ frame_planar = NI_PIXEL_PLANAR_FORMAT_TILED4X4;
+ break;
+ default:
+ frame_planar = NI_PIXEL_PLANAR_FORMAT_PLANAR;
+ break;
+ }
+
+ if (avctx->pix_fmt != AV_PIX_FMT_NI_QUAD) {
+ ret = ni_decoder_frame_buffer_alloc(
+ s->api_ctx.dec_fme_buf_pool, &(p_session_data->data.frame), alloc_mem,
+ actual_width, height, (avctx->codec_id == AV_CODEC_ID_H264),
+ s->api_ctx.bit_depth_factor, frame_planar);
+ } else {
+ ret = ni_frame_buffer_alloc(&(p_session_data->data.frame), actual_width,
+ height, (avctx->codec_id == AV_CODEC_ID_H264),
+ 1, s->api_ctx.bit_depth_factor, 3,
+ frame_planar);
+ }
+
+ if (NI_RETCODE_SUCCESS != ret) {
+ return AVERROR_EXTERNAL;
+ }
+
+ if (avctx->pix_fmt != AV_PIX_FMT_NI_QUAD) {
+ ret = ni_device_session_read(&s->api_ctx, p_session_data, NI_DEVICE_TYPE_DECODER);
+ } else {
+ ret = ni_device_session_read_hwdesc(&s->api_ctx, p_session_data, NI_DEVICE_TYPE_DECODER);
+ }
+
+ if (ret == 0) {
+ s->eos = p_session_data->data.frame.end_of_stream;
+ if (avctx->pix_fmt != AV_PIX_FMT_NI_QUAD) {
+ ni_decoder_frame_buffer_free(&(p_session_data->data.frame));
+ } else {
+ ni_frame_buffer_free(&(p_session_data->data.frame));
+ }
+
+ if (s->eos) {
+ return AVERROR_EOF;
+ } else if (s->draining) {
+ av_log(avctx, AV_LOG_ERROR, "ERROR: %s draining ret == 0 but not EOS\n", __func__);
+ return AVERROR_EXTERNAL;
+ }
+ return AVERROR(EAGAIN);
+ } else if (ret > 0) {
+ int dec_ff_pix_fmt;
+
+ if (p_session_data->data.frame.flags & AV_PKT_FLAG_DISCARD) {
+ av_log(avctx, AV_LOG_DEBUG,
+ "Current frame is dropped when AV_PKT_FLAG_DISCARD is set\n");
+ if (avctx->pix_fmt != AV_PIX_FMT_NI_QUAD) {
+ ni_decoder_frame_buffer_free(&(p_session_data->data.frame));
+ } else {
+ // recycle frame mem bin buffer of all PPU outputs & free p_buffer
+ int num_outputs = (s->api_param.dec_input_params.enable_out1 > 0) +
+ (s->api_param.dec_input_params.enable_out2 > 0) + 1;
+ ni_frame_free(&num_outputs, p_session_data->data.frame.p_buffer);
+ }
+
+ if (s->draining) {
+ goto read_op;
+ }
+ return AVERROR(EAGAIN);
+ }
+
+        av_log(avctx, AV_LOG_VERBOSE, "Got output buffer pts=%" PRId64 " "
+               "dts=%" PRId64 " eos=%d sos=%d\n",
+ p_session_data->data.frame.pts, p_session_data->data.frame.dts,
+ p_session_data->data.frame.end_of_stream, p_session_data->data.frame.start_of_stream);
+
+ s->eos = p_session_data->data.frame.end_of_stream;
+
+        // update context resolution if a change has been detected
+        frame->width = cropped_width = p_session_data->data.frame.video_width; // PPU auto crop reports width as cropped width
+        frame->height = cropped_height = p_session_data->data.frame.video_height; // PPU auto crop reports height as cropped height
+
+ if (cropped_width != avctx->width || cropped_height != avctx->height) {
+ av_log(avctx, AV_LOG_WARNING, "ff_xcoder_dec_receive: resolution "
+ "changed: %dx%d to %dx%d\n", avctx->width, avctx->height,
+ cropped_width, cropped_height);
+ avctx->width = cropped_width;
+ avctx->height = cropped_height;
+ bSequenceChange = 1;
+ }
+
+ dec_ff_pix_fmt = ni_pix_fmt_2_ff_pix_fmt(s->api_ctx.pixel_format);
+
+        // If the codec is JPEG or the detected color range is full range,
+        // yuv420p from xxx_ni_quadra_dec is full range; change it to yuvj420p
+        // so that FFmpeg can process it as full range.
+ if ((avctx->pix_fmt != AV_PIX_FMT_NI_QUAD) &&
+ (dec_ff_pix_fmt == AV_PIX_FMT_YUV420P) &&
+ ((avctx->codec_id == AV_CODEC_ID_MJPEG) ||
+ (avctx->color_range == AVCOL_RANGE_JPEG))) {
+ avctx->sw_pix_fmt = avctx->pix_fmt = dec_ff_pix_fmt = AV_PIX_FMT_YUVJ420P;
+ avctx->color_range = AVCOL_RANGE_JPEG;
+ }
+
+ if (avctx->sw_pix_fmt != dec_ff_pix_fmt) {
+ av_log(avctx, AV_LOG_VERBOSE, "update sw_pix_fmt from %d to %d\n",
+ avctx->sw_pix_fmt, dec_ff_pix_fmt);
+ avctx->sw_pix_fmt = dec_ff_pix_fmt;
+ if (avctx->pix_fmt != AV_PIX_FMT_NI_QUAD) {
+ avctx->pix_fmt = avctx->sw_pix_fmt;
+ }
+ bSequenceChange = 1;
+ }
+
+ frame->format = avctx->pix_fmt;
+
+ av_log(avctx, AV_LOG_VERBOSE, "ff_xcoder_dec_receive: frame->format %d, sw_pix_fmt = %d\n", frame->format, avctx->sw_pix_fmt);
+
+ if (avctx->pix_fmt == AV_PIX_FMT_NI_QUAD) {
+ AVNIFramesContext *ni_hwf_ctx;
+
+ if (bSequenceChange) {
+ AVHWFramesContext *ctx;
+ AVNIFramesContext *dst_ctx;
+
+ av_buffer_unref(&avctx->hw_frames_ctx);
+ avctx->hw_frames_ctx = av_hwframe_ctx_alloc(avctx->hw_device_ctx);
+ if (!avctx->hw_frames_ctx) {
+ ret = AVERROR(ENOMEM);
+ return ret;
+ }
+
+ s->frames = (AVHWFramesContext*)avctx->hw_frames_ctx->data;
+ s->frames->format = AV_PIX_FMT_NI_QUAD;
+ s->frames->width = avctx->width;
+ s->frames->height = avctx->height;
+ s->frames->sw_format = avctx->sw_pix_fmt;
+                s->frames->initial_pool_size = -1; // decoder has its own dedicated pool
+ ret = av_hwframe_ctx_init(avctx->hw_frames_ctx);
+ if (ret < 0) {
+ return ret;
+ }
+
+ ctx = (AVHWFramesContext*)avctx->hw_frames_ctx->data;
+ dst_ctx = (AVNIFramesContext*) ctx->hwctx;
+ av_log(avctx, AV_LOG_VERBOSE, "ff_xcoder_dec_receive: sequence change, set hw_frame_context to copy decode sessions threads\n");
+ ret = ni_device_session_copy(&s->api_ctx, &dst_ctx->api_ctx);
+ if (NI_RETCODE_SUCCESS != ret) {
+ return ret;
+ }
+ }
+ frame->hw_frames_ctx = av_buffer_ref(avctx->hw_frames_ctx);
+
+ /* Set the hw_id/card number in AVNIFramesContext */
+ ni_hwf_ctx = (AVNIFramesContext*)((AVHWFramesContext*)frame->hw_frames_ctx->data)->hwctx;
+ ni_hwf_ctx->hw_id = s->dev_dec_idx;
+ }
+ if (s->api_ctx.frame_num == 1) {
+ av_log(avctx, AV_LOG_DEBUG, "NI:%s:out\n",
+ (frame_planar == 0) ? "semiplanar"
+ : (frame_planar == 2) ? "tiled"
+ : "planar");
+ }
+        ret = retrieve_frame(avctx, frame, &got_frame, &(p_session_data->data.frame));
+        if (ret < 0)
+            return ret;
+        av_log(avctx, AV_LOG_VERBOSE,
+               "ff_xcoder_dec_receive: got_frame=%d, frame->width=%d, "
+               "frame->height=%d, crop top %" SIZE_SPECIFIER " bottom %" SIZE_SPECIFIER
+               " left %" SIZE_SPECIFIER " right %" SIZE_SPECIFIER ", frame->format=%d, "
+               "frame->linesize=%d/%d/%d\n",
+               got_frame, frame->width, frame->height, frame->crop_top,
+               frame->crop_bottom, frame->crop_left, frame->crop_right,
+               frame->format, frame->linesize[0], frame->linesize[1],
+               frame->linesize[2]);
+
+#if FF_API_PKT_PTS
+ FF_DISABLE_DEPRECATION_WARNINGS
+ frame->pkt_pts = frame->pts;
+ FF_ENABLE_DEPRECATION_WARNINGS
+#endif
+ frame->best_effort_timestamp = frame->pts;
+
+        av_log(avctx, AV_LOG_VERBOSE,
+               "ff_xcoder_dec_receive: pkt_timebase=%d/%d, frame_rate=%d/%d, "
+               "frame->pts=%" PRId64 ", frame->pkt_dts=%" PRId64 "\n",
+               avctx->pkt_timebase.num, avctx->pkt_timebase.den,
+               avctx->framerate.num, avctx->framerate.den, frame->pts,
+               frame->pkt_dts);
+
+ // release buffer ownership and let frame owner return frame buffer to
+ // buffer pool later
+ p_session_data->data.frame.dec_buf = NULL;
+
+ ni_memfree(p_session_data->data.frame.p_custom_sei_set);
+ } else {
+ av_log(avctx, AV_LOG_ERROR, "Failed to get output buffer (status = %d)\n",
+ ret);
+
+ if (NI_RETCODE_ERROR_VPU_RECOVERY == ret) {
+ av_log(avctx, AV_LOG_WARNING, "ff_xcoder_dec_receive VPU recovery, need to reset ..\n");
+ ni_decoder_frame_buffer_free(&(p_session_data->data.frame));
+ return ret;
+ } else if (ret == NI_RETCODE_ERROR_INVALID_SESSION ||
+ ret == NI_RETCODE_ERROR_NVME_CMD_FAILED) {
+ return AVERROR_EOF;
+ }
+ return AVERROR(EIO);
+ }
+
+ ret = 0;
+
+ return ret;
+}
+
+int ff_xcoder_dec_is_flushing(AVCodecContext *avctx, XCoderDecContext *s)
+{
+ return s->flushing;
+}
+
+int ff_xcoder_dec_flush(AVCodecContext *avctx, XCoderDecContext *s)
+{
+ s->draining = 0;
+ s->flushing = 0;
+ s->eos = 0;
+
+    /* Future: for now, always return 1 to indicate the codec has been flushed,
+       has left the flushing state, and can process again; the case of the user
+       retaining frames in HW "surface" usage will be considered later */
+ return 1;
+}
diff --git a/libavcodec/nicodec.h b/libavcodec/nicodec.h
new file mode 100644
index 0000000000..7a0e22faa0
--- /dev/null
+++ b/libavcodec/nicodec.h
@@ -0,0 +1,215 @@
+/*
+ * XCoder Codec Lib Wrapper
+ * Copyright (c) 2018 NetInt
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+/**
+ * @file
+ * XCoder codec lib wrapper header.
+ */
+
+#ifndef AVCODEC_NICODEC_H
+#define AVCODEC_NICODEC_H
+
+#include <stdbool.h>
+#include <time.h>
+#include "avcodec.h"
+#include "startcode.h"
+#include "bsf.h"
+#include "libavutil/fifo.h"
+
+#include <ni_device_api.h>
+#include "libavutil/hwcontext_ni_quad.h"
+#include "libavutil/hwcontext.h"
+
+#define NI_NAL_VPS_BIT (0x01)
+#define NI_NAL_SPS_BIT (0x01 << 1)
+#define NI_NAL_PPS_BIT (0x01 << 2)
+#define NI_GENERATE_ALL_NAL_HEADER_BIT (0x01 << 3)
+
+/* enum for specifying xcoder device/coder index; can be specified in either
+ decoder or encoder options. */
+enum {
+ BEST_DEVICE_INST = -2,
+ BEST_DEVICE_LOAD = -1
+};
+
+enum {
+ HW_FRAMES_OFF = 0,
+ HW_FRAMES_ON = 1
+};
+
+enum {
+ GEN_GLOBAL_HEADERS_AUTO = -1,
+ GEN_GLOBAL_HEADERS_OFF = 0,
+ GEN_GLOBAL_HEADERS_ON = 1
+};
+
+typedef struct OpaqueData {
+ int64_t pkt_pos;
+ void *opaque;
+ AVBufferRef *opaque_ref;
+} OpaqueData;
+
+typedef struct XCoderDecContext {
+ AVClass *avclass;
+
+ /* from the command line, which resource allocation method we use */
+ char *dev_xcoder;
+ char *dev_xcoder_name; /* dev name of the xcoder card to use */
+ char *blk_xcoder_name; /* blk name of the xcoder card to use */
+ int dev_dec_idx; /* user-specified decoder index */
+ char *dev_blk_name; /* user-specified decoder block device name */
+ int keep_alive_timeout; /* keep alive timeout setting */
+ ni_device_context_t *rsrc_ctx; /* resource management context */
+
+ ni_session_context_t api_ctx;
+ ni_xcoder_params_t api_param;
+ ni_session_data_io_t api_pkt;
+
+ AVPacket buffered_pkt;
+ AVPacket lone_sei_pkt;
+
+ // stream header copied/saved from AVCodecContext.extradata
+ int got_first_key_frame;
+ uint8_t *extradata;
+ int extradata_size;
+
+ int64_t current_pts;
+ unsigned long long offset;
+ int svct_skip_next_packet;
+
+ int started;
+ int draining;
+ int flushing;
+ int is_lone_sei_pkt;
+ int eos;
+ AVHWFramesContext *frames;
+
+ /* for temporarily storing the opaque pointers when AV_CODEC_FLAG_COPY_OPAQUE is set */
+ OpaqueData *opaque_data_array;
+ int opaque_data_nb;
+ int opaque_data_pos;
+
+ /* below are all command line options */
+ char *xcoder_opts;
+ int low_delay;
+ int pkt_nal_bitmap;
+} XCoderDecContext;
+
+typedef struct XCoderEncContext {
+ AVClass *avclass;
+
+ /* from the command line, which resource allocation method we use */
+ char *dev_xcoder;
+ char *dev_xcoder_name; /* dev name of the xcoder card to use */
+ char *blk_xcoder_name; /* blk name of the xcoder card to use */
+ int dev_enc_idx; /* user-specified encoder index */
+ char *dev_blk_name; /* user-specified encoder block device name */
+ int nvme_io_size; /* custom nvme io size */
+ int keep_alive_timeout; /* keep alive timeout setting */
+ ni_device_context_t *rsrc_ctx; /* resource management context */
+ uint64_t xcode_load_pixel; /* xcode load in pixels by this encode task */
+
+ AVFifo *fme_fifo;
+ int eos_fme_received;
+ AVFrame buffered_fme; // buffered frame for sequence change handling
+
+ ni_session_data_io_t api_pkt; /* used for receiving bitstream from xcoder */
+ ni_session_data_io_t api_fme; /* used for sending YUV data to xcoder */
+ ni_session_context_t api_ctx;
+ ni_xcoder_params_t api_param;
+
+ int started;
+ uint8_t *p_spsPpsHdr;
+ int spsPpsHdrLen;
+ int spsPpsArrived;
+ int firstPktArrived;
+ int64_t dtsOffset;
+    int gop_offset_count; /* counter used to guess the pts only dtsOffset times */
+ uint64_t total_frames_received;
+ int64_t first_frame_pts;
+ int64_t latest_dts;
+
+ int encoder_flushing;
+ int encoder_eof;
+
+ // ROI
+ int roi_side_data_size;
+ AVRegionOfInterest *av_rois; // last passed in AVRegionOfInterest
+ int nb_rois;
+
+ /* backup copy of original values of -enc command line option */
+ int orig_dev_enc_idx;
+
+ AVFrame *sframe_pool[MAX_NUM_FRAMEPOOL_HWAVFRAME];
+ int aFree_Avframes_list[MAX_NUM_FRAMEPOOL_HWAVFRAME + 1];
+ int freeHead;
+ int freeTail;
+
+ /* below are all command line options */
+ char *xcoder_opts;
+ char *xcoder_gop;
+ int gen_global_headers;
+ int udu_sei;
+
+ int reconfigCount;
+ int seqChangeCount;
+ // actual enc_change_params is in ni_session_context !
+
+} XCoderEncContext;
+
+// copy at most max bytes of a string from src to dst, ensuring the result is
+// null-terminated
+static inline void ff_xcoder_strncpy(char *dst, const char *src, int max) {
+ if (dst && src && max) {
+ *dst = '\0';
+ strncpy(dst, src, max);
+ *(dst + max - 1) = '\0';
+ }
+}
+
+int ff_xcoder_dec_close(AVCodecContext *avctx,
+ XCoderDecContext *s);
+
+int ff_xcoder_dec_init(AVCodecContext *avctx,
+ XCoderDecContext *s);
+
+int ff_xcoder_dec_send(AVCodecContext *avctx,
+ XCoderDecContext *s,
+ AVPacket *pkt);
+
+int ff_xcoder_dec_receive(AVCodecContext *avctx,
+ XCoderDecContext *s,
+ AVFrame *frame,
+ bool wait);
+
+int ff_xcoder_dec_is_flushing(AVCodecContext *avctx,
+ XCoderDecContext *s);
+
+int ff_xcoder_dec_flush(AVCodecContext *avctx,
+ XCoderDecContext *s);
+
+int parse_symbolic_decoder_param(XCoderDecContext *s);
+
+int retrieve_frame(AVCodecContext *avctx, AVFrame *data, int *got_frame,
+ ni_frame_t *xfme);
+int ff_xcoder_add_headers(AVCodecContext *avctx, AVPacket *pkt,
+ uint8_t *extradata, int extradata_size);
+#endif /* AVCODEC_NICODEC_H */
diff --git a/libavcodec/nidec.c b/libavcodec/nidec.c
new file mode 100644
index 0000000000..5624cbe670
--- /dev/null
+++ b/libavcodec/nidec.c
@@ -0,0 +1,539 @@
+/*
+ * NetInt XCoder H.264/HEVC Decoder common code
+ * Copyright (c) 2018-2019 NetInt
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+/**
+ * @file
+ * XCoder decoder.
+ */
+
+#include "nidec.h"
+#include "fftools/ffmpeg.h"
+#include "libavutil/hwcontext.h"
+#include "libavutil/hwcontext_ni_quad.h"
+#include "libavutil/mem.h"
+#include "fftools/ffmpeg_sched.h"
+
+#define USER_DATA_UNREGISTERED_SEI_PAYLOAD_TYPE 5
+#define NETINT_SKIP_PROFILE 0
+
+int xcoder_decode_close(AVCodecContext *avctx) {
+ int i;
+ XCoderDecContext *s = avctx->priv_data;
+ av_log(avctx, AV_LOG_VERBOSE, "XCoder decode close\n");
+
+ /* this call shall release resource based on s->api_ctx */
+ ff_xcoder_dec_close(avctx, s);
+
+ av_packet_unref(&s->buffered_pkt);
+ av_packet_unref(&s->lone_sei_pkt);
+
+ av_freep(&s->extradata);
+ s->extradata_size = 0;
+ s->got_first_key_frame = 0;
+
+ if (s->opaque_data_array) {
+ for (i = 0; i < s->opaque_data_nb; i++)
+ av_buffer_unref(&s->opaque_data_array[i].opaque_ref);
+ av_freep(&s->opaque_data_array);
+ }
+
+ ni_rsrc_free_device_context(s->rsrc_ctx);
+ s->rsrc_ctx = NULL;
+ return 0;
+}
+
+static int xcoder_setup_decoder(AVCodecContext *avctx) {
+ XCoderDecContext *s = avctx->priv_data;
+ ni_xcoder_params_t *p_param =
+ &s->api_param; // dec params in union with enc params struct
+ int min_resolution_width, min_resolution_height;
+
+ av_log(avctx, AV_LOG_VERBOSE, "XCoder setup device decoder\n");
+
+ if (ni_device_session_context_init(&(s->api_ctx)) < 0) {
+ av_log(avctx, AV_LOG_ERROR,
+ "Error XCoder init decoder context failure\n");
+ return AVERROR_EXTERNAL;
+ }
+
+ min_resolution_width = NI_MIN_RESOLUTION_WIDTH;
+ min_resolution_height = NI_MIN_RESOLUTION_HEIGHT;
+
+ // Check codec id or format as well as profile idc.
+ switch (avctx->codec_id) {
+ case AV_CODEC_ID_HEVC:
+ s->api_ctx.codec_format = NI_CODEC_FORMAT_H265;
+ switch (avctx->profile) {
+ case AV_PROFILE_HEVC_MAIN:
+ case AV_PROFILE_HEVC_MAIN_10:
+ case AV_PROFILE_HEVC_MAIN_STILL_PICTURE:
+ case AV_PROFILE_UNKNOWN:
+ break;
+ case NETINT_SKIP_PROFILE:
+ av_log(avctx, AV_LOG_WARNING, "Warning: HEVC profile %d not supported, skip setting it\n", avctx->profile);
+ break;
+ default:
+ av_log(avctx, AV_LOG_ERROR, "Error: profile %d not supported.\n", avctx->profile);
+ return AVERROR_INVALIDDATA;
+ }
+ break;
+ case AV_CODEC_ID_VP9:
+ s->api_ctx.codec_format = NI_CODEC_FORMAT_VP9;
+ switch (avctx->profile) {
+ case AV_PROFILE_VP9_0:
+ case AV_PROFILE_VP9_2:
+ case AV_PROFILE_UNKNOWN:
+ break;
+ default:
+ av_log(avctx, AV_LOG_ERROR, "Error: profile %d not supported.\n", avctx->profile);
+ return AVERROR_INVALIDDATA;
+ }
+ break;
+ case AV_CODEC_ID_MJPEG:
+ s->api_ctx.codec_format = NI_CODEC_FORMAT_JPEG;
+ min_resolution_width = NI_MIN_RESOLUTION_WIDTH_JPEG;
+ min_resolution_height = NI_MIN_RESOLUTION_HEIGHT_JPEG;
+ switch (avctx->profile) {
+ case AV_PROFILE_MJPEG_HUFFMAN_BASELINE_DCT:
+ case AV_PROFILE_UNKNOWN:
+ break;
+ default:
+ av_log(avctx, AV_LOG_ERROR, "Error: profile %d not supported.\n", avctx->profile);
+ return AVERROR_INVALIDDATA;
+ }
+ break;
+ default:
+ s->api_ctx.codec_format = NI_CODEC_FORMAT_H264;
+ switch (avctx->profile) {
+ case AV_PROFILE_H264_BASELINE:
+ case AV_PROFILE_H264_CONSTRAINED_BASELINE:
+ case AV_PROFILE_H264_MAIN:
+ case AV_PROFILE_H264_EXTENDED:
+ case AV_PROFILE_H264_HIGH:
+ case AV_PROFILE_H264_HIGH_10:
+ case AV_PROFILE_UNKNOWN:
+ break;
+ case NETINT_SKIP_PROFILE:
+ av_log(avctx, AV_LOG_WARNING, "Warning: H264 profile %d not supported, skip setting it.\n", avctx->profile);
+ break;
+ default:
+ av_log(avctx, AV_LOG_ERROR, "Error: profile %d not supported.\n", avctx->profile);
+ return AVERROR_INVALIDDATA;
+ }
+ break;
+ }
+
+ if (avctx->width > NI_MAX_RESOLUTION_WIDTH ||
+ avctx->height > NI_MAX_RESOLUTION_HEIGHT ||
+ avctx->width * avctx->height > NI_MAX_RESOLUTION_AREA) {
+ av_log(avctx, AV_LOG_ERROR,
+ "Error: XCoder resolution %dx%d not supported\n", avctx->width,
+ avctx->height);
+ av_log(avctx, AV_LOG_ERROR, "Max supported width: %d, height: %d, area: %d\n",
+ NI_MAX_RESOLUTION_WIDTH, NI_MAX_RESOLUTION_HEIGHT,
+ NI_MAX_RESOLUTION_AREA);
+ return AVERROR_EXTERNAL;
+ } else if (avctx->width < min_resolution_width ||
+ avctx->height < min_resolution_height) {
+ av_log(avctx, AV_LOG_ERROR,
+ "Error: XCoder resolution %dx%d not supported\n", avctx->width,
+ avctx->height);
+ av_log(avctx, AV_LOG_ERROR, "Min supported width: %d, height: %d\n",
+ min_resolution_width, min_resolution_height);
+ return AVERROR_EXTERNAL;
+ }
+
+ s->offset = 0LL;
+
+ s->draining = 0;
+
+ s->api_ctx.pic_reorder_delay = avctx->has_b_frames;
+ s->api_ctx.bit_depth_factor = 1;
+ if (AV_PIX_FMT_YUV420P10BE == avctx->pix_fmt ||
+ AV_PIX_FMT_YUV420P10LE == avctx->pix_fmt ||
+ AV_PIX_FMT_P010LE == avctx->pix_fmt) {
+ s->api_ctx.bit_depth_factor = 2;
+ }
+ av_log(avctx, AV_LOG_VERBOSE, "xcoder_setup_decoder: pix_fmt %u bit_depth_factor %u\n", avctx->pix_fmt, s->api_ctx.bit_depth_factor);
+
+ //Xcoder User Configuration
+ if (ni_decoder_init_default_params(p_param, avctx->framerate.num, avctx->framerate.den, avctx->bit_rate, avctx->width, avctx->height) < 0) {
+ av_log(avctx, AV_LOG_ERROR, "Error setting default decoder params\n");
+ return AVERROR(EINVAL);
+ }
+
+ if (s->xcoder_opts) {
+ AVDictionary *dict = NULL;
+ AVDictionaryEntry *en = NULL;
+
+ if (av_dict_parse_string(&dict, s->xcoder_opts, "=", ":", 0)) {
+ av_log(avctx, AV_LOG_ERROR, "Xcoder options provided contain error(s)\n");
+ av_dict_free(&dict);
+ return AVERROR_EXTERNAL;
+ } else {
+ while ((en = av_dict_get(dict, "", en, AV_DICT_IGNORE_SUFFIX))) {
+ int parse_ret = ni_decoder_params_set_value(p_param, en->key, en->value);
+ if (parse_ret != NI_RETCODE_SUCCESS) {
+ switch (parse_ret) {
+ case NI_RETCODE_PARAM_INVALID_NAME:
+ av_log(avctx, AV_LOG_ERROR, "Unknown option: %s.\n", en->key);
+ av_dict_free(&dict);
+ return AVERROR_EXTERNAL;
+ case NI_RETCODE_PARAM_ERROR_TOO_BIG:
+ av_log(avctx, AV_LOG_ERROR,
+ "Invalid %s: too big, max char len = %d\n", en->key,
+ NI_MAX_PPU_PARAM_EXPR_CHAR);
+ av_dict_free(&dict);
+ return AVERROR_EXTERNAL;
+ case NI_RETCODE_PARAM_ERROR_TOO_SMALL:
+ av_log(avctx, AV_LOG_ERROR, "Invalid %s: too small\n", en->key);
+ av_dict_free(&dict);
+ return AVERROR_EXTERNAL;
+ case NI_RETCODE_PARAM_ERROR_OOR:
+ av_log(avctx, AV_LOG_ERROR, "Invalid %s: out of range\n",
+ en->key);
+ av_dict_free(&dict);
+ return AVERROR_EXTERNAL;
+ case NI_RETCODE_PARAM_ERROR_ZERO:
+ av_log(avctx, AV_LOG_ERROR,
+ "Error setting option %s to value 0\n", en->key);
+ av_dict_free(&dict);
+ return AVERROR_EXTERNAL;
+ case NI_RETCODE_PARAM_INVALID_VALUE:
+ av_log(avctx, AV_LOG_ERROR, "Invalid value for %s: %s.\n",
+ en->key, en->value);
+ av_dict_free(&dict);
+ return AVERROR_EXTERNAL;
+ case NI_RETCODE_PARAM_WARNING_DEPRECATED:
+ av_log(avctx, AV_LOG_WARNING, "Parameter %s is deprecated\n",
+ en->key);
+ break;
+ default:
+ av_log(avctx, AV_LOG_ERROR, "Invalid %s: ret %d\n", en->key,
+ parse_ret);
+ av_dict_free(&dict);
+ return AVERROR_EXTERNAL;
+ }
+ }
+ }
+ av_dict_free(&dict);
+ }
+
+ for (size_t i = 0; i < NI_MAX_NUM_OF_DECODER_OUTPUTS; i++) {
+ if (p_param->dec_input_params.crop_mode[i] != NI_DEC_CROP_MODE_AUTO) {
+ continue;
+ }
+ for (size_t j = 0; j < 4; j++) {
+ if (strlen(p_param->dec_input_params.cr_expr[i][j])) {
+ av_log(avctx, AV_LOG_ERROR, "Error: crop parameters set but crop mode is not manual\n");
+ return AVERROR_EXTERNAL;
+ }
+ }
+ }
+ }
+ parse_symbolic_decoder_param(s);
+ return 0;
+}
+
+int xcoder_decode_init(AVCodecContext *avctx) {
+ int i;
+ int ret = 0;
+ XCoderDecContext *s = avctx->priv_data;
+ const AVPixFmtDescriptor *desc;
+ ni_xcoder_params_t *p_param = &s->api_param;
+ uint32_t xcoder_timeout;
+
+ ni_log_set_level(ff_to_ni_log_level(av_log_get_level()));
+
+ av_log(avctx, AV_LOG_VERBOSE, "XCoder decode init\n");
+
+ avctx->sw_pix_fmt = avctx->pix_fmt;
+
+ desc = av_pix_fmt_desc_get(avctx->sw_pix_fmt);
+ av_log(avctx, AV_LOG_VERBOSE, "width: %d height: %d sw_pix_fmt: %s\n",
+ avctx->width, avctx->height, desc ? desc->name : "NONE");
+
+ if (0 == avctx->width || 0 == avctx->height) {
+ av_log(avctx, AV_LOG_ERROR, "Error probing input stream\n");
+ return AVERROR_INVALIDDATA;
+ }
+
+ switch (avctx->pix_fmt) {
+ case AV_PIX_FMT_YUV420P:
+ case AV_PIX_FMT_YUV420P10BE:
+ case AV_PIX_FMT_YUV420P10LE:
+ case AV_PIX_FMT_YUVJ420P:
+ case AV_PIX_FMT_GRAY8:
+ break;
+ case AV_PIX_FMT_NONE:
+ av_log(avctx, AV_LOG_WARNING, "Warning: pixel format is not specified\n");
+ break;
+ default:
+ av_log(avctx, AV_LOG_ERROR, "Error: pixel format %s not supported.\n",
+ desc ? desc->name : "NONE");
+ return AVERROR_INVALIDDATA;
+ }
+
+ av_log(avctx, AV_LOG_VERBOSE, "(avctx->field_order = %d)\n", avctx->field_order);
+ if (avctx->field_order > AV_FIELD_PROGRESSIVE) { //AVFieldOrder with bottom or top coding order represents interlaced video
+ av_log(avctx, AV_LOG_ERROR, "interlaced video not supported!\n");
+ return AVERROR_INVALIDDATA;
+ }
+
+ if ((ret = xcoder_setup_decoder(avctx)) < 0) {
+ return ret;
+ }
+
+ //--------reassign pix format based on user param------------//
+ if (p_param->dec_input_params.semi_planar[0]) {
+ if (avctx->sw_pix_fmt == AV_PIX_FMT_YUV420P10BE ||
+ avctx->sw_pix_fmt == AV_PIX_FMT_YUV420P10LE ||
+ avctx->sw_pix_fmt == AV_PIX_FMT_YUV420P) {
+ av_log(avctx, AV_LOG_VERBOSE, "XCoder decode init: planar format forced to semi-planar\n");
+ avctx->sw_pix_fmt = (avctx->sw_pix_fmt == AV_PIX_FMT_YUV420P) ? AV_PIX_FMT_NV12 : AV_PIX_FMT_P010LE;
+ }
+ }
+ if (p_param->dec_input_params.force_8_bit[0]) {
+ if (avctx->sw_pix_fmt == AV_PIX_FMT_YUV420P10BE ||
+ avctx->sw_pix_fmt == AV_PIX_FMT_YUV420P10LE ||
+ avctx->sw_pix_fmt == AV_PIX_FMT_P010LE) {
+ av_log(avctx, AV_LOG_VERBOSE, "XCoder decode init: 10Bit input forced to 8bit\n");
+ avctx->sw_pix_fmt = (avctx->sw_pix_fmt == AV_PIX_FMT_P010LE) ? AV_PIX_FMT_NV12 : AV_PIX_FMT_YUV420P;
+ s->api_ctx.bit_depth_factor = 1;
+ }
+ }
+ if (p_param->dec_input_params.hwframes) { //need to set before open decoder
+ s->api_ctx.hw_action = NI_CODEC_HW_ENABLE;
+ } else {
+ s->api_ctx.hw_action = NI_CODEC_HW_NONE;
+ }
+
+ if (p_param->dec_input_params.hwframes && p_param->dec_input_params.max_extra_hwframe_cnt == 255)
+ p_param->dec_input_params.max_extra_hwframe_cnt = 0;
+ if (p_param->dec_input_params.hwframes && (DEFAULT_FRAME_THREAD_QUEUE_SIZE > 1))
+ p_param->dec_input_params.hwframes |= DEFAULT_FRAME_THREAD_QUEUE_SIZE << 4;
+ //------reassign pix format based on user param done--------//
+
+ s->api_ctx.enable_user_data_sei_passthru = 1; // Enable by default
+
+ s->started = 0;
+ memset(&s->api_pkt, 0, sizeof(ni_packet_t));
+ s->pkt_nal_bitmap = 0;
+ s->svct_skip_next_packet = 0;
+ av_log(avctx, AV_LOG_VERBOSE, "XCoder decode init: time_base = %d/%d, frame rate = %d/%d\n", avctx->time_base.num, avctx->time_base.den, avctx->framerate.num, avctx->framerate.den);
+
+ // overwrite the keep alive timeout value with a custom one if it was
+ // provided; the xcoder option, if set, overrides the (legacy) decoder
+ // option
+ xcoder_timeout = s->api_param.dec_input_params.keep_alive_timeout;
+ if (xcoder_timeout != NI_DEFAULT_KEEP_ALIVE_TIMEOUT) {
+ s->api_ctx.keep_alive_timeout = xcoder_timeout;
+ } else {
+ s->api_ctx.keep_alive_timeout = s->keep_alive_timeout;
+ }
+ av_log(avctx, AV_LOG_VERBOSE, "Custom NVME Keep Alive Timeout set to %d\n",
+ s->api_ctx.keep_alive_timeout);
+
+ if (s->api_param.dec_input_params.decoder_low_delay != 0) {
+ s->low_delay = s->api_param.dec_input_params.decoder_low_delay;
+ } else {
+ s->api_param.dec_input_params.decoder_low_delay = s->low_delay;
+ }
+ s->api_ctx.enable_low_delay_check = s->api_param.dec_input_params.enable_low_delay_check;
+ if (avctx->has_b_frames && s->api_ctx.enable_low_delay_check) {
+ // streams with B-frames cannot be decoded in low delay mode
+ av_log(avctx, AV_LOG_WARNING, "Warning: decoder low delay mode "
+ "disabled because the stream has B-frames and low delay check is enabled\n");
+ s->low_delay = s->api_param.dec_input_params.decoder_low_delay = 0;
+ }
+ s->api_ctx.decoder_low_delay = s->low_delay;
+
+ s->api_ctx.p_session_config = &s->api_param;
+
+ if ((ret = ff_xcoder_dec_init(avctx, s)) < 0) {
+ goto done;
+ }
+
+ s->current_pts = NI_NOPTS_VALUE;
+
+ /* The size of the opaque pointer buffer is chosen as the max number of
+ * packets buffered in FW (4) plus the max FW output buffers (24), plus some
+ * extra room to be safe. If any frame is delayed longer than this, we assume
+ * it was dropped, so its buffered opaque pointer may be overwritten when
+ * opaque_data_array wraps around */
+ s->opaque_data_nb = 30;
+ s->opaque_data_pos = 0;
+ if (!s->opaque_data_array) {
+ s->opaque_data_array = av_calloc(s->opaque_data_nb, sizeof(OpaqueData));
+ if (!s->opaque_data_array) {
+ ret = AVERROR(ENOMEM);
+ goto done;
+ }
+ }
+ for (i = 0; i < s->opaque_data_nb; i++) {
+ s->opaque_data_array[i].pkt_pos = -1;
+ }
+
+done:
+ return ret;
+}
+
+// reset and restart when xcoder decoder resets
+int xcoder_decode_reset(AVCodecContext *avctx) {
+ XCoderDecContext *s = avctx->priv_data;
+ int ret = NI_RETCODE_FAILURE;
+ int64_t bcp_current_pts;
+
+ av_log(avctx, AV_LOG_VERBOSE, "XCoder decode reset\n");
+
+ ni_device_session_close(&s->api_ctx, s->eos, NI_DEVICE_TYPE_DECODER);
+
+ ni_device_session_context_clear(&s->api_ctx);
+
+#ifdef _WIN32
+ ni_device_close(s->api_ctx.device_handle);
+#elif __linux__
+ ni_device_close(s->api_ctx.device_handle);
+ ni_device_close(s->api_ctx.blk_io_handle);
+#endif
+ s->api_ctx.device_handle = NI_INVALID_DEVICE_HANDLE;
+ s->api_ctx.blk_io_handle = NI_INVALID_DEVICE_HANDLE;
+
+ ni_packet_buffer_free(&(s->api_pkt.data.packet));
+ bcp_current_pts = s->current_pts;
+ ret = xcoder_decode_init(avctx);
+ s->current_pts = bcp_current_pts;
+ s->api_ctx.session_run_state = SESSION_RUN_STATE_RESETTING;
+ return ret;
+}
+
+static int xcoder_send_receive(AVCodecContext *avctx, XCoderDecContext *s,
+ AVFrame *frame, bool wait) {
+ int ret;
+
+ /* send any pending data from buffered packet */
+ while (s->buffered_pkt.size) {
+ ret = ff_xcoder_dec_send(avctx, s, &s->buffered_pkt);
+ if (ret == AVERROR(EAGAIN))
+ break;
+ else if (ret < 0) {
+ av_packet_unref(&s->buffered_pkt);
+ return ret;
+ }
+ av_packet_unref(&s->buffered_pkt);
+ }
+
+ /* check for new frame */
+ return ff_xcoder_dec_receive(avctx, s, frame, wait);
+}
+
+int xcoder_receive_frame(AVCodecContext *avctx, AVFrame *frame) {
+ XCoderDecContext *s = avctx->priv_data;
+ const AVPixFmtDescriptor *desc;
+ int ret;
+
+ av_log(avctx, AV_LOG_VERBOSE, "XCoder receive frame\n");
+ /*
+ * After we have buffered an input packet, check if the codec is in the
+ * flushing state. If it is, we need to call ff_xcoder_dec_flush.
+ *
+ * ff_xcoder_dec_flush returns 0 if the flush cannot be performed on
+ * the codec (because the user retains frames). The codec stays in the
+ * flushing state.
+ * For now we don't consider this case of user retaining the frame
+ * (connected decoder-encoder case), so the return can only be 1
+ * (flushed successfully), or < 0 (failure)
+ *
+ * ff_xcoder_dec_flush returns 1 if the flush can actually be
+ * performed on the codec. The codec then leaves the flushing state and
+ * can process packets again.
+ *
+ * ff_xcoder_dec_flush returns a negative value if an error has
+ * occurred.
+ */
+ if (ff_xcoder_dec_is_flushing(avctx, s)) {
+ if (!ff_xcoder_dec_flush(avctx, s)) {
+ return AVERROR(EAGAIN);
+ }
+ }
+
+ // give priority to sending data to decoder
+ if (s->buffered_pkt.size == 0) {
+ ret = ff_decode_get_packet(avctx, &s->buffered_pkt);
+ if (ret < 0) {
+ av_log(avctx, AV_LOG_VERBOSE, "ff_decode_get_packet 1 rc: %s\n",
+ av_err2str(ret));
+ } else {
+ av_log(avctx, AV_LOG_DEBUG, "ff_decode_get_packet 1 rc: Success\n");
+ }
+ }
+
+ /* flush buffered packet and check for new frame */
+ ret = xcoder_send_receive(avctx, s, frame, false);
+ if (NI_RETCODE_ERROR_VPU_RECOVERY == ret) {
+ ret = xcoder_decode_reset(avctx);
+ if (0 == ret) {
+ return AVERROR(EAGAIN);
+ } else {
+ return ret;
+ }
+ } else if (ret != AVERROR(EAGAIN)) {
+ return ret;
+ }
+
+ /* skip fetching new packet if we still have one buffered */
+ if (s->buffered_pkt.size > 0) {
+ return xcoder_send_receive(avctx, s, frame, true);
+ }
+
+ /* fetch new packet or eof */
+ ret = ff_decode_get_packet(avctx, &s->buffered_pkt);
+ if (ret < 0) {
+ av_log(avctx, AV_LOG_VERBOSE, "ff_decode_get_packet 2 rc: %s\n",
+ av_err2str(ret));
+ } else {
+ av_log(avctx, AV_LOG_DEBUG, "ff_decode_get_packet 2 rc: Success\n");
+ }
+
+ if (ret == AVERROR_EOF) {
+ AVPacket null_pkt = {0};
+ ret = ff_xcoder_dec_send(avctx, s, &null_pkt);
+ if (ret < 0) {
+ return ret;
+ }
+ } else if (ret < 0) {
+ return ret;
+ } else {
+ av_log(avctx, AV_LOG_VERBOSE, "width: %d height: %d\n", avctx->width, avctx->height);
+ desc = av_pix_fmt_desc_get(avctx->pix_fmt);
+ av_log(avctx, AV_LOG_VERBOSE, "pix_fmt: %s\n", desc ? desc->name : "NONE");
+ }
+
+ /* crank decoder with new packet */
+ return xcoder_send_receive(avctx, s, frame, true);
+}
+
+void xcoder_decode_flush(AVCodecContext *avctx) {
+ XCoderDecContext *s = avctx->priv_data;
+ ni_device_dec_session_flush(&s->api_ctx);
+ s->draining = 0;
+ s->flushing = 0;
+ s->eos = 0;
+}
diff --git a/libavcodec/nidec.h b/libavcodec/nidec.h
new file mode 100644
index 0000000000..7575bc83d0
--- /dev/null
+++ b/libavcodec/nidec.h
@@ -0,0 +1,86 @@
+/*
+ * NetInt XCoder H.264/HEVC Decoder common code header
+ * Copyright (c) 2018-2019 NetInt
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef AVCODEC_NIDEC_H
+#define AVCODEC_NIDEC_H
+
+#include <stdbool.h>
+#include <ni_rsrc_api.h>
+#include <ni_device_api.h>
+#include <ni_util.h>
+
+#include "avcodec.h"
+#include "codec_internal.h"
+#include "decode.h"
+#include "internal.h"
+
+#include "libavutil/internal.h"
+#include "libavutil/frame.h"
+#include "libavutil/buffer.h"
+#include "libavutil/pixdesc.h"
+#include "libavutil/opt.h"
+
+#include "nicodec.h"
+
+#define OFFSETDEC(x) offsetof(XCoderDecContext, x)
+#define VD AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_DECODING_PARAM
+
+// Common Netint decoder options
+#define NI_DEC_OPTIONS\
+ { "xcoder", "Select which XCoder card to use.", OFFSETDEC(dev_xcoder), \
+ AV_OPT_TYPE_STRING, {.str = NI_BEST_MODEL_LOAD_STR}, CHAR_MIN, CHAR_MAX, VD, "xcoder"}, \
+ { "bestmodelload", "Pick the least model load XCoder/decoder available.", 0,\
+ AV_OPT_TYPE_CONST, {.str = NI_BEST_MODEL_LOAD_STR}, 0, 0, VD, "xcoder"},\
+ { "bestload", "Pick the least real load XCoder/decoder available.", 0,\
+ AV_OPT_TYPE_CONST, {.str = NI_BEST_REAL_LOAD_STR}, 0, 0, VD, "xcoder"},\
+ \
+ { "ni_dec_idx", "Select which decoder to use by index. First is 0, second is 1, and so on.", \
+ OFFSETDEC(dev_dec_idx), AV_OPT_TYPE_INT, {.i64 = BEST_DEVICE_LOAD}, -1, INT_MAX, VD, "ni_dec_idx"}, \
+ \
+ { "ni_dec_name", "Select which decoder to use by NVMe block device name, e.g. /dev/nvme0n1.", \
+ OFFSETDEC(dev_blk_name), AV_OPT_TYPE_STRING, {0}, 0, 0, VD, "ni_dec_name"}, \
+ \
+ { "decname", "Select which decoder to use by NVMe block device name, e.g. /dev/nvme0n1.", \
+ OFFSETDEC(dev_blk_name), AV_OPT_TYPE_STRING, {0}, 0, 0, VD, "decname"}, \
+ \
+ { "xcoder-params", "Set the XCoder configuration using a :-separated list of key=value parameters.", \
+ OFFSETDEC(xcoder_opts), AV_OPT_TYPE_STRING, {0}, 0, 0, VD}, \
+ \
+ { "keep_alive_timeout", "Specify a custom session keep alive timeout in seconds.", \
+ OFFSETDEC(keep_alive_timeout), AV_OPT_TYPE_INT, {.i64 = NI_DEFAULT_KEEP_ALIVE_TIMEOUT}, \
+ NI_MIN_KEEP_ALIVE_TIMEOUT, NI_MAX_KEEP_ALIVE_TIMEOUT, VD, "keep_alive_timeout"}
+
+#define NI_DEC_OPTION_LOW_DELAY\
+ { "low_delay", "Enable low delay decoding mode for a 1-in, 1-out decoding sequence. " \
+ "Set to 1 to enable. Should be used only for streams whose packets arrive in decode order.", \
+ OFFSETDEC(low_delay), AV_OPT_TYPE_INT, {.i64 = 0}, 0, 1, VD, "low_delay"}
+
+int xcoder_decode_close(AVCodecContext *avctx);
+
+int xcoder_decode_init(AVCodecContext *avctx);
+
+int xcoder_decode_reset(AVCodecContext *avctx);
+
+int xcoder_receive_frame(AVCodecContext *avctx, AVFrame *frame);
+
+void xcoder_decode_flush(AVCodecContext *avctx);
+
+#endif /* AVCODEC_NIDEC_H */
diff --git a/libavcodec/nidec_h264.c b/libavcodec/nidec_h264.c
new file mode 100644
index 0000000000..f6fa0e149d
--- /dev/null
+++ b/libavcodec/nidec_h264.c
@@ -0,0 +1,73 @@
+/*
+ * XCoder H.264 Decoder
+ * Copyright (c) 2018 NetInt
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+/**
+ * @file
+ * XCoder decoder.
+ */
+
+#include "nidec.h"
+#include "hwconfig.h"
+static const AVCodecHWConfigInternal *ff_ni_quad_hw_configs[] = {
+ &(const AVCodecHWConfigInternal) {
+ .public = {
+ .pix_fmt = AV_PIX_FMT_NI_QUAD,
+ .methods = AV_CODEC_HW_CONFIG_METHOD_HW_FRAMES_CTX |
+ AV_CODEC_HW_CONFIG_METHOD_AD_HOC |
+ AV_CODEC_HW_CONFIG_METHOD_HW_DEVICE_CTX,
+ .device_type = AV_HWDEVICE_TYPE_NI_QUADRA,
+ },
+ .hwaccel = NULL,
+ },
+ NULL
+};
+
+static const AVOption dec_options[] = {
+ NI_DEC_OPTIONS,
+ NI_DEC_OPTION_LOW_DELAY,
+ {NULL}};
+
+static const AVClass h264_xcoderdec_class = {
+ .class_name = "h264_ni_quadra_dec",
+ .item_name = av_default_item_name,
+ .option = dec_options,
+ .version = LIBAVUTIL_VERSION_INT,
+};
+
+const FFCodec ff_h264_ni_quadra_decoder = {
+ .p.name = "h264_ni_quadra_dec",
+ CODEC_LONG_NAME("H.264 NETINT Quadra decoder v" NI_XCODER_REVISION),
+ .p.type = AVMEDIA_TYPE_VIDEO,
+ .p.id = AV_CODEC_ID_H264,
+ .p.priv_class = &h264_xcoderdec_class,
+ .p.capabilities = AV_CODEC_CAP_AVOID_PROBING | AV_CODEC_CAP_DELAY | AV_CODEC_CAP_HARDWARE,
+ .p.pix_fmts = (const enum AVPixelFormat[]){ AV_PIX_FMT_YUV420P, AV_PIX_FMT_NV12,
+ AV_PIX_FMT_YUV420P10LE, AV_PIX_FMT_P010LE,
+ AV_PIX_FMT_NI_QUAD, AV_PIX_FMT_NONE },
+ FF_CODEC_RECEIVE_FRAME_CB(xcoder_receive_frame),
+ .priv_data_size = sizeof(XCoderDecContext),
+ .init = xcoder_decode_init,
+ .close = xcoder_decode_close,
+ .hw_configs = ff_ni_quad_hw_configs,
+ .bsfs = "h264_mp4toannexb",
+ .flush = xcoder_decode_flush,
+};
diff --git a/libavcodec/nidec_hevc.c b/libavcodec/nidec_hevc.c
new file mode 100644
index 0000000000..846450f14f
--- /dev/null
+++ b/libavcodec/nidec_hevc.c
@@ -0,0 +1,73 @@
+/*
+ * XCoder HEVC Decoder
+ * Copyright (c) 2018 NetInt
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+/**
+ * @file
+ * XCoder decoder.
+ */
+
+#include "nidec.h"
+#include "hwconfig.h"
+
+static const AVCodecHWConfigInternal *ff_ni_quad_hw_configs[] = {
+ &(const AVCodecHWConfigInternal) {
+ .public = {
+ .pix_fmt = AV_PIX_FMT_NI_QUAD,
+ .methods = AV_CODEC_HW_CONFIG_METHOD_HW_FRAMES_CTX |
+ AV_CODEC_HW_CONFIG_METHOD_AD_HOC |
+ AV_CODEC_HW_CONFIG_METHOD_HW_DEVICE_CTX,
+ .device_type = AV_HWDEVICE_TYPE_NI_QUADRA,
+ },
+ .hwaccel = NULL,
+ },
+ NULL
+};
+
+static const AVOption dec_options[] = {
+ NI_DEC_OPTIONS,
+ NI_DEC_OPTION_LOW_DELAY,
+ {NULL}};
+
+static const AVClass h265_xcoderdec_class = {
+ .class_name = "h265_ni_quadra_dec",
+ .item_name = av_default_item_name,
+ .option = dec_options,
+ .version = LIBAVUTIL_VERSION_INT,
+};
+
+const FFCodec ff_h265_ni_quadra_decoder = {
+ .p.name = "h265_ni_quadra_dec",
+ CODEC_LONG_NAME("H.265 NETINT Quadra decoder v" NI_XCODER_REVISION),
+ .p.type = AVMEDIA_TYPE_VIDEO,
+ .p.id = AV_CODEC_ID_HEVC,
+ .p.priv_class = &h265_xcoderdec_class,
+ .p.capabilities = AV_CODEC_CAP_AVOID_PROBING | AV_CODEC_CAP_DELAY | AV_CODEC_CAP_HARDWARE,
+ .p.pix_fmts = (const enum AVPixelFormat[]){ AV_PIX_FMT_YUV420P, AV_PIX_FMT_NV12,
+ AV_PIX_FMT_YUV420P10LE, AV_PIX_FMT_P010LE,
+ AV_PIX_FMT_NI_QUAD, AV_PIX_FMT_NONE },
+ FF_CODEC_RECEIVE_FRAME_CB(xcoder_receive_frame),
+ .priv_data_size = sizeof(XCoderDecContext),
+ .init = xcoder_decode_init,
+ .close = xcoder_decode_close,
+ .hw_configs = ff_ni_quad_hw_configs,
+ .bsfs = "hevc_mp4toannexb",
+ .flush = xcoder_decode_flush,
+};
diff --git a/libavcodec/nidec_jpeg.c b/libavcodec/nidec_jpeg.c
new file mode 100644
index 0000000000..62198bcbaf
--- /dev/null
+++ b/libavcodec/nidec_jpeg.c
@@ -0,0 +1,68 @@
+/*
+ * XCoder JPEG Decoder
+ * Copyright (c) 2021 NetInt
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include "nidec.h"
+#include "hwconfig.h"
+#include "profiles.h"
+static const AVCodecHWConfigInternal *ff_ni_quad_hw_configs[] = {
+ &(const AVCodecHWConfigInternal) {
+ .public = {
+ .pix_fmt = AV_PIX_FMT_NI_QUAD,
+ .methods = AV_CODEC_HW_CONFIG_METHOD_HW_FRAMES_CTX |
+ AV_CODEC_HW_CONFIG_METHOD_AD_HOC |
+ AV_CODEC_HW_CONFIG_METHOD_HW_DEVICE_CTX,
+ .device_type = AV_HWDEVICE_TYPE_NI_QUADRA,
+ },
+ .hwaccel = NULL,
+ },
+ NULL
+};
+
+static const AVOption dec_options[] = {
+ NI_DEC_OPTIONS,
+ {NULL},
+};
+
+#define JPEG_NI_QUADRA_DEC "jpeg_ni_quadra_dec"
+
+static const AVClass jpeg_xcoderdec_class = {
+ .class_name = JPEG_NI_QUADRA_DEC,
+ .item_name = av_default_item_name,
+ .option = dec_options,
+ .version = LIBAVUTIL_VERSION_INT,
+};
+
+const FFCodec ff_jpeg_ni_quadra_decoder = {
+ .p.name = JPEG_NI_QUADRA_DEC,
+ CODEC_LONG_NAME("JPEG NETINT Quadra decoder v" NI_XCODER_REVISION),
+ .p.type = AVMEDIA_TYPE_VIDEO,
+ .p.id = AV_CODEC_ID_MJPEG,
+ .p.capabilities = AV_CODEC_CAP_AVOID_PROBING | AV_CODEC_CAP_DELAY | AV_CODEC_CAP_HARDWARE,
+ .p.priv_class = &jpeg_xcoderdec_class,
+ .p.pix_fmts = (const enum AVPixelFormat[]){ AV_PIX_FMT_YUVJ420P, AV_PIX_FMT_NI_QUAD,
+ AV_PIX_FMT_NONE },
+ FF_CODEC_RECEIVE_FRAME_CB(xcoder_receive_frame),
+ .hw_configs = ff_ni_quad_hw_configs,
+ .init = xcoder_decode_init,
+ .close = xcoder_decode_close,
+ .priv_data_size = sizeof(XCoderDecContext),
+ .flush = xcoder_decode_flush,
+};
diff --git a/libavcodec/nidec_vp9.c b/libavcodec/nidec_vp9.c
new file mode 100644
index 0000000000..40d424406b
--- /dev/null
+++ b/libavcodec/nidec_vp9.c
@@ -0,0 +1,72 @@
+/*
+ * XCoder VP9 Decoder
+ * Copyright (c) 2020 NetInt
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+/**
+ * @file
+ * XCoder decoder.
+ */
+
+#include "nidec.h"
+#include "hwconfig.h"
+#include "profiles.h"
+static const AVCodecHWConfigInternal *ff_ni_quad_hw_configs[] = {
+ &(const AVCodecHWConfigInternal) {
+ .public = {
+ .pix_fmt = AV_PIX_FMT_NI_QUAD,
+ .methods = AV_CODEC_HW_CONFIG_METHOD_HW_FRAMES_CTX |
+ AV_CODEC_HW_CONFIG_METHOD_AD_HOC |
+ AV_CODEC_HW_CONFIG_METHOD_HW_DEVICE_CTX,
+ .device_type = AV_HWDEVICE_TYPE_NI_QUADRA,
+ },
+ .hwaccel = NULL,
+ },
+ NULL
+};
+
+static const AVOption dec_options[] = {
+ NI_DEC_OPTIONS,
+ {NULL}};
+
+static const AVClass vp9_xcoderdec_class = {
+ .class_name = "vp9_ni_quadra_dec",
+ .item_name = av_default_item_name,
+ .option = dec_options,
+ .version = LIBAVUTIL_VERSION_INT,
+};
+
+const FFCodec ff_vp9_ni_quadra_decoder = {
+ .p.name = "vp9_ni_quadra_dec",
+ CODEC_LONG_NAME("VP9 NETINT Quadra decoder v" NI_XCODER_REVISION),
+ .p.type = AVMEDIA_TYPE_VIDEO,
+ .p.id = AV_CODEC_ID_VP9,
+ .p.priv_class = &vp9_xcoderdec_class,
+ .p.capabilities = AV_CODEC_CAP_AVOID_PROBING | AV_CODEC_CAP_DELAY | AV_CODEC_CAP_HARDWARE,
+ .p.pix_fmts = (const enum AVPixelFormat[]){ AV_PIX_FMT_YUV420P, AV_PIX_FMT_NV12,
+ AV_PIX_FMT_YUV420P10LE, AV_PIX_FMT_P010LE,
+ AV_PIX_FMT_NI_QUAD, AV_PIX_FMT_NONE },
+ .p.profiles = NULL_IF_CONFIG_SMALL(ff_vp9_profiles),
+ FF_CODEC_RECEIVE_FRAME_CB(xcoder_receive_frame),
+ .priv_data_size = sizeof(XCoderDecContext),
+ .init = xcoder_decode_init,
+ .close = xcoder_decode_close,
+ .hw_configs = ff_ni_quad_hw_configs,
+ .flush = xcoder_decode_flush,
+};
diff --git a/libavcodec/nienc.c b/libavcodec/nienc.c
new file mode 100644
index 0000000000..5843311414
--- /dev/null
+++ b/libavcodec/nienc.c
@@ -0,0 +1,3009 @@
+/*
+ * NetInt XCoder H.264/HEVC Encoder common code
+ * Copyright (c) 2018-2019 NetInt
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+#include "nienc.h"
+#include "bytestream.h"
+#include "libavcodec/h264.h"
+#include "libavcodec/h264_sei.h"
+#include "libavcodec/hevc/hevc.h"
+#include "libavcodec/hevc/sei.h"
+#include "libavutil/mem.h"
+
+#include "libavcodec/put_bits.h"
+#include "libavutil/avstring.h"
+#include "libavutil/hdr_dynamic_metadata.h"
+#include "libavutil/hwcontext.h"
+#include "libavutil/hwcontext_ni_quad.h"
+#include "libavutil/mastering_display_metadata.h"
+#include "ni_av_codec.h"
+#include "ni_util.h"
+#include "put_bits.h"
+#include "packet_internal.h"
+
+#include <unistd.h>
+#include "encode.h"
+
+static bool gop_params_check(AVDictionary *dict, AVCodecContext *avctx)
+{
+ XCoderEncContext *s = avctx->priv_data;
+ AVDictionaryEntry *en = NULL;
+ char *key;
+
+ while ((en = av_dict_get(dict, "", en, AV_DICT_IGNORE_SUFFIX))) {
+ key = en->key;
+ ni_gop_params_check_set(&s->api_param, key);
+ }
+ return ni_gop_params_check(&s->api_param);
+}
+
+static int xcoder_encoder_headers(AVCodecContext *avctx)
+{
+ // use a copy of encoder context, take care to restore original config
+ // cropping setting
+ XCoderEncContext *ctx = NULL;
+ ni_xcoder_params_t *p_param = NULL;
+ ni_packet_t *xpkt = NULL;
+ int orig_conf_win_right;
+ int orig_conf_win_bottom;
+ int linesize_aligned, height_aligned;
+ int ret, recv;
+
+ ctx = av_malloc(sizeof(XCoderEncContext));
+ if (!ctx) {
+ return AVERROR(ENOMEM);
+ }
+
+ memcpy(ctx, (XCoderEncContext *)(avctx->priv_data),
+ sizeof(XCoderEncContext));
+
+ p_param = (ni_xcoder_params_t *)(ctx->api_ctx.p_session_config);
+
+ orig_conf_win_right = p_param->cfg_enc_params.conf_win_right;
+ orig_conf_win_bottom = p_param->cfg_enc_params.conf_win_bottom;
+
+ linesize_aligned = avctx->width;
+ if (linesize_aligned < NI_MIN_WIDTH) {
+ p_param->cfg_enc_params.conf_win_right +=
+ (NI_MIN_WIDTH - avctx->width) / 2 * 2;
+ linesize_aligned = NI_MIN_WIDTH;
+ } else {
+ if (avctx->sw_pix_fmt == AV_PIX_FMT_NI_QUAD_8_TILE_4X4 ||
+ avctx->sw_pix_fmt == AV_PIX_FMT_NI_QUAD_10_TILE_4X4) {
+ linesize_aligned = FFALIGN(avctx->width, 4);
+ p_param->cfg_enc_params.conf_win_right +=
+ (linesize_aligned - avctx->width) / 2 * 2;
+ } else {
+ linesize_aligned = FFALIGN(avctx->width, 2);
+ p_param->cfg_enc_params.conf_win_right +=
+ (linesize_aligned - avctx->width) / 2 * 2;
+ }
+ }
+ p_param->source_width = linesize_aligned;
+
+ height_aligned = avctx->height;
+ if (height_aligned < NI_MIN_HEIGHT) {
+ p_param->cfg_enc_params.conf_win_bottom +=
+ (NI_MIN_HEIGHT - avctx->height) / 2 * 2;
+ height_aligned = NI_MIN_HEIGHT;
+ } else {
+ if (avctx->sw_pix_fmt == AV_PIX_FMT_NI_QUAD_8_TILE_4X4 ||
+ avctx->sw_pix_fmt == AV_PIX_FMT_NI_QUAD_10_TILE_4X4) {
+ height_aligned = FFALIGN(avctx->height, 4);
+ p_param->cfg_enc_params.conf_win_bottom +=
+ (height_aligned - avctx->height) / 4 * 4;
+ } else {
+ height_aligned = FFALIGN(avctx->height, 2);
+ p_param->cfg_enc_params.conf_win_bottom +=
+ (height_aligned - avctx->height) / 2 * 2;
+ }
+ }
+ p_param->source_height = height_aligned;
+ p_param->cfg_enc_params.enable_acq_limit = 1;
+
+ ctx->api_ctx.hw_id = ctx->dev_enc_idx;
+ ff_xcoder_strncpy(ctx->api_ctx.blk_dev_name, ctx->dev_blk_name,
+ NI_MAX_DEVICE_NAME_LEN);
+ ff_xcoder_strncpy(ctx->api_ctx.dev_xcoder_name, ctx->dev_xcoder,
+ MAX_CHAR_IN_DEVICE_NAME);
+
+ ret = ni_device_session_open(&(ctx->api_ctx), NI_DEVICE_TYPE_ENCODER);
+
+ ctx->dev_xcoder_name = ctx->api_ctx.dev_xcoder_name;
+ ctx->blk_xcoder_name = ctx->api_ctx.blk_xcoder_name;
+ ctx->dev_enc_idx = ctx->api_ctx.hw_id;
+
+ switch (ret) {
+ case NI_RETCODE_SUCCESS:
+ av_log(avctx, AV_LOG_VERBOSE,
+ "XCoder %s.%d (inst: %d) opened successfully\n",
+ ctx->dev_xcoder_name, ctx->dev_enc_idx, ctx->api_ctx.session_id);
+ break;
+ case NI_RETCODE_INVALID_PARAM:
+ av_log(avctx, AV_LOG_ERROR,
+ "Failed to open encoder (status = %d), invalid parameter values "
+ "given: %s\n", ret, ctx->api_ctx.param_err_msg);
+ ret = AVERROR_EXTERNAL;
+ goto end;
+ default:
+ av_log(avctx, AV_LOG_ERROR,
+ "Failed to open encoder (status = %d), resource unavailable\n",
+ ret);
+ ret = AVERROR_EXTERNAL;
+ goto end;
+ }
+
+ xpkt = &(ctx->api_pkt.data.packet);
+ ni_packet_buffer_alloc(xpkt, NI_MAX_TX_SZ);
+
+ while (1) {
+ recv = ni_device_session_read(&(ctx->api_ctx), &(ctx->api_pkt),
+ NI_DEVICE_TYPE_ENCODER);
+
+ if (recv > 0) {
+ av_freep(&avctx->extradata);
+ avctx->extradata_size = recv - (int)(ctx->api_ctx.meta_size);
+ avctx->extradata =
+ av_mallocz(avctx->extradata_size + AV_INPUT_BUFFER_PADDING_SIZE);
+ if (!avctx->extradata) {
+ ret = AVERROR(ENOMEM);
+ goto end;
+ }
+ memcpy(avctx->extradata,
+ (uint8_t *)xpkt->p_data + ctx->api_ctx.meta_size,
+ avctx->extradata_size);
+ av_log(avctx, AV_LOG_VERBOSE, "Xcoder encoder headers len: %d\n",
+ avctx->extradata_size);
+ break;
+ }
+ }
+
+end:
+ // close and clean up the temporary session; preserve an earlier error,
+ // otherwise report the close status
+ if (ret != 0) {
+ ni_device_session_close(&(ctx->api_ctx), ctx->encoder_eof,
+ NI_DEVICE_TYPE_ENCODER);
+ } else {
+ ret = ni_device_session_close(&(ctx->api_ctx), ctx->encoder_eof,
+ NI_DEVICE_TYPE_ENCODER);
+ }
+#ifdef _WIN32
+ ni_device_close(ctx->api_ctx.device_handle);
+#elif defined(__linux__)
+ ni_device_close(ctx->api_ctx.device_handle);
+ ni_device_close(ctx->api_ctx.blk_io_handle);
+#endif
+ ctx->api_ctx.device_handle = NI_INVALID_DEVICE_HANDLE;
+ ctx->api_ctx.blk_io_handle = NI_INVALID_DEVICE_HANDLE;
+
+ ni_packet_buffer_free(&(ctx->api_pkt.data.packet));
+
+ ni_rsrc_free_device_context(ctx->rsrc_ctx);
+ ctx->rsrc_ctx = NULL;
+
+ p_param->cfg_enc_params.conf_win_right = orig_conf_win_right;
+ p_param->cfg_enc_params.conf_win_bottom = orig_conf_win_bottom;
+
+ av_freep(&ctx);
+
+ return ret;
+}
+
+static int xcoder_encoder_header_check_set(AVCodecContext *avctx)
+{
+ XCoderEncContext *ctx = avctx->priv_data;
+ ni_xcoder_params_t *p_param;
+ // set color metrics
+ enum AVColorPrimaries color_primaries = avctx->color_primaries;
+ enum AVColorTransferCharacteristic color_trc = avctx->color_trc;
+ enum AVColorSpace color_space = avctx->colorspace;
+
+ p_param = (ni_xcoder_params_t *)ctx->api_ctx.p_session_config;
+
+ if (5 == p_param->dolby_vision_profile) {
+ switch (avctx->codec_id) {
+ case AV_CODEC_ID_HEVC:
+ color_primaries = AVCOL_PRI_UNSPECIFIED;
+ color_trc = AVCOL_TRC_UNSPECIFIED;
+ color_space = AVCOL_SPC_UNSPECIFIED;
+ p_param->cfg_enc_params.hrdEnable =
+ p_param->cfg_enc_params.EnableAUD = 1;
+ p_param->cfg_enc_params.forced_header_enable = 1;
+ p_param->cfg_enc_params.videoFullRange = 1;
+ break;
+ case AV_CODEC_ID_AV1:
+ av_log(avctx, AV_LOG_ERROR,
+ "dolbyVisionProfile is not supported on av1 encoder.\n");
+ return -1;
+ case AV_CODEC_ID_MJPEG:
+ av_log(avctx, AV_LOG_ERROR,
+ "dolbyVisionProfile is not supported on jpeg encoder.\n");
+ return -1;
+ case AV_CODEC_ID_H264:
+ av_log(avctx, AV_LOG_ERROR,
+ "dolbyVisionProfile is not supported on h264 encoder.\n");
+ return -1;
+ default:
+ break;
+ }
+ }
+
+ if (avctx->codec_id != AV_CODEC_ID_MJPEG &&
+ ((5 == p_param->dolby_vision_profile &&
+ AV_CODEC_ID_HEVC == avctx->codec_id) ||
+ color_primaries != AVCOL_PRI_UNSPECIFIED ||
+ color_trc != AVCOL_TRC_UNSPECIFIED ||
+ color_space != AVCOL_SPC_UNSPECIFIED)) {
+ p_param->cfg_enc_params.colorDescPresent = 1;
+ p_param->cfg_enc_params.colorPrimaries = color_primaries;
+ p_param->cfg_enc_params.colorTrc = color_trc;
+ p_param->cfg_enc_params.colorSpace = color_space;
+
+ av_log(avctx, AV_LOG_VERBOSE,
+ "XCoder HDR color info color_primaries: %d "
+ "color_trc: %d color_space %d\n",
+ color_primaries, color_trc, color_space);
+ }
+ if (avctx->color_range == AVCOL_RANGE_JPEG ||
+ AV_PIX_FMT_YUVJ420P == avctx->pix_fmt ||
+ AV_PIX_FMT_YUVJ420P == avctx->sw_pix_fmt) {
+ p_param->cfg_enc_params.videoFullRange = 1;
+ }
+
+ return 0;
+}
+
+static int xcoder_setup_encoder(AVCodecContext *avctx)
+{
+ XCoderEncContext *s = avctx->priv_data;
+ int i, ret = 0;
+ uint32_t xcoder_timeout;
+ ni_xcoder_params_t *p_param = &s->api_param;
+ ni_xcoder_params_t *pparams = NULL;
+ ni_session_run_state_t prev_state = s->api_ctx.session_run_state;
+
+ av_log(avctx, AV_LOG_VERBOSE, "XCoder setup device encoder\n");
+
+ if (ni_device_session_context_init(&(s->api_ctx)) < 0) {
+ av_log(avctx, AV_LOG_ERROR,
+ "Error XCoder init encoder context failure\n");
+ return AVERROR_EXTERNAL;
+ }
+
+ switch (avctx->codec_id) {
+ case AV_CODEC_ID_HEVC:
+ s->api_ctx.codec_format = NI_CODEC_FORMAT_H265;
+ break;
+ case AV_CODEC_ID_AV1:
+ s->api_ctx.codec_format = NI_CODEC_FORMAT_AV1;
+ break;
+ case AV_CODEC_ID_MJPEG:
+ s->api_ctx.codec_format = NI_CODEC_FORMAT_JPEG;
+ break;
+ default:
+ s->api_ctx.codec_format = NI_CODEC_FORMAT_H264;
+ break;
+ }
+
+ s->api_ctx.session_run_state = prev_state;
+ s->av_rois = NULL;
+ s->firstPktArrived = 0;
+ s->spsPpsArrived = 0;
+ s->spsPpsHdrLen = 0;
+ s->p_spsPpsHdr = NULL;
+ s->xcode_load_pixel = 0;
+ s->reconfigCount = 0;
+ s->latest_dts = 0;
+ s->first_frame_pts = INT_MIN;
+
+ if (SESSION_RUN_STATE_SEQ_CHANGE_DRAINING != s->api_ctx.session_run_state) {
+ av_log(avctx, AV_LOG_INFO, "Session state: %d, allocating frame fifo.\n",
+ s->api_ctx.session_run_state);
+ s->fme_fifo = av_fifo_alloc2((size_t) 1, sizeof(AVFrame), 0);
+ } else {
+ av_log(avctx, AV_LOG_INFO, "Session seq change, fifo size: %zu.\n",
+ av_fifo_can_read(s->fme_fifo));
+ }
+
+ if (!s->fme_fifo) {
+ return AVERROR(ENOMEM);
+ }
+ s->eos_fme_received = 0;
+
+ //Xcoder User Configuration
+ ret = ni_encoder_init_default_params(
+ p_param, avctx->framerate.num,
+ avctx->framerate.den, avctx->bit_rate,
+ avctx->width, avctx->height, s->api_ctx.codec_format);
+ switch (ret) {
+ case NI_RETCODE_PARAM_ERROR_WIDTH_TOO_BIG:
+ if (avctx->codec_id == AV_CODEC_ID_AV1 && avctx->width < NI_PARAM_MAX_WIDTH) {
+ // AV1 resolution is checked again at encoder session open
+ // (ni_validate_custom_template) since the cropped size may still meet
+ // the AV1 resolution constraint (e.g. AV1 tile encode)
+ av_log(avctx, AV_LOG_ERROR, "AV1 Picture Width exceeds %d - picture needs to be cropped\n",
+ NI_PARAM_AV1_MAX_WIDTH);
+ ret = NI_RETCODE_SUCCESS;
+ } else {
+ av_log(avctx, AV_LOG_ERROR, "Invalid Picture Width: too big\n");
+ return AVERROR_EXTERNAL;
+ }
+ break;
+ case NI_RETCODE_PARAM_ERROR_WIDTH_TOO_SMALL:
+ av_log(avctx, AV_LOG_ERROR, "Invalid Picture Width: too small\n");
+ return AVERROR_EXTERNAL;
+ case NI_RETCODE_PARAM_ERROR_HEIGHT_TOO_BIG:
+ if (avctx->codec_id == AV_CODEC_ID_AV1) {
+ // AV1 resolution is checked again at encoder session open
+ // (ni_validate_custom_template) since the cropped size may still meet
+ // the AV1 resolution constraint (e.g. AV1 tile encode)
+ av_log(avctx, AV_LOG_ERROR, "AV1 Picture Height exceeds %d - picture needs to be cropped\n",
+ NI_PARAM_AV1_MAX_HEIGHT);
+ ret = NI_RETCODE_SUCCESS;
+ } else {
+ av_log(avctx, AV_LOG_ERROR, "Invalid Picture Height: too big\n");
+ return AVERROR_EXTERNAL;
+ }
+ break;
+ case NI_RETCODE_PARAM_ERROR_HEIGHT_TOO_SMALL:
+ av_log(avctx, AV_LOG_ERROR, "Invalid Picture Height: too small\n");
+ return AVERROR_EXTERNAL;
+ case NI_RETCODE_PARAM_ERROR_AREA_TOO_BIG:
+ if (avctx->codec_id == AV_CODEC_ID_AV1) {
+ // AV1 resolution is checked again at encoder session open
+ // (ni_validate_custom_template) since the cropped size may still meet
+ // the AV1 resolution constraint (e.g. AV1 tile encode)
+ av_log(avctx, AV_LOG_ERROR, "AV1 Picture Width x Height exceeds %d - picture needs to be cropped\n",
+ NI_PARAM_AV1_MAX_AREA);
+ ret = NI_RETCODE_SUCCESS;
+ } else {
+ av_log(avctx, AV_LOG_ERROR,
+ "Invalid Picture Width x Height: exceeds %d\n",
+ NI_MAX_RESOLUTION_AREA);
+ return AVERROR_EXTERNAL;
+ }
+ break;
+ case NI_RETCODE_PARAM_ERROR_PIC_WIDTH:
+ av_log(avctx, AV_LOG_ERROR, "Invalid Picture Width\n");
+ return AVERROR_EXTERNAL;
+ case NI_RETCODE_PARAM_ERROR_PIC_HEIGHT:
+ av_log(avctx, AV_LOG_ERROR, "Invalid Picture Height\n");
+ return AVERROR_EXTERNAL;
+ default:
+ if (ret < 0) {
+ av_log(avctx, AV_LOG_ERROR, "Error setting preset or log.\n");
+ av_log(avctx, AV_LOG_INFO, "Possible presets:");
+ for (i = 0; g_xcoder_preset_names[i]; i++)
+ av_log(avctx, AV_LOG_INFO, " %s", g_xcoder_preset_names[i]);
+ av_log(avctx, AV_LOG_INFO, "\n");
+
+ av_log(avctx, AV_LOG_INFO, "Possible log:");
+ for (i = 0; g_xcoder_log_names[i]; i++)
+ av_log(avctx, AV_LOG_INFO, " %s", g_xcoder_log_names[i]);
+ av_log(avctx, AV_LOG_INFO, "\n");
+
+ return AVERROR(EINVAL);
+ }
+ break;
+ }
+
+ av_log(avctx, AV_LOG_INFO, "pix_fmt is %d, sw_pix_fmt is %d resolution %dx%d\n",
+ avctx->pix_fmt, avctx->sw_pix_fmt, avctx->width, avctx->height);
+ if (avctx->pix_fmt != AV_PIX_FMT_NI_QUAD) {
+ av_log(avctx, AV_LOG_INFO, "sw_pix_fmt was %d, assigned from pix_fmt %d\n",
+ avctx->sw_pix_fmt, avctx->pix_fmt);
+ avctx->sw_pix_fmt = avctx->pix_fmt;
+ } else {
+ if ((avctx->height >= NI_MIN_HEIGHT) && (avctx->width >= NI_MIN_WIDTH)) {
+ p_param->hwframes = 1;
+ } else if (avctx->sw_pix_fmt == AV_PIX_FMT_NI_QUAD_8_TILE_4X4 ||
+ avctx->sw_pix_fmt == AV_PIX_FMT_NI_QUAD_10_TILE_4X4) {
+ av_log(avctx, AV_LOG_ERROR, "Invalid Picture Height or Width: too small\n");
+ return AVERROR_EXTERNAL;
+ }
+
+ if (avctx->codec_id == AV_CODEC_ID_MJPEG) {
+ if (avctx->sw_pix_fmt == AV_PIX_FMT_YUVJ420P) {
+ av_log(avctx, AV_LOG_DEBUG, "Pixfmt %s supported in %s encoder\n",
+ av_get_pix_fmt_name(avctx->sw_pix_fmt), avctx->codec->name);
+ } else if ((avctx->color_range == AVCOL_RANGE_JPEG || avctx->color_range == AVCOL_RANGE_UNSPECIFIED) &&
+ (avctx->sw_pix_fmt == AV_PIX_FMT_YUV420P || avctx->sw_pix_fmt == AV_PIX_FMT_YUV420P10LE ||
+ avctx->sw_pix_fmt == AV_PIX_FMT_NV12 || avctx->sw_pix_fmt == AV_PIX_FMT_P010LE)) {
+ av_log(avctx, AV_LOG_DEBUG, "Pixfmt %s supported in %s encoder when color_range is AVCOL_RANGE_JPEG\n",
+ av_get_pix_fmt_name(avctx->sw_pix_fmt), avctx->codec->name);
+ } else {
+ av_log(avctx, AV_LOG_ERROR, "Pixfmt %s not supported in %s encoder when color_range is %d\n",
+ av_get_pix_fmt_name(avctx->sw_pix_fmt), avctx->codec->name, avctx->color_range);
+ return AVERROR_INVALIDDATA;
+ }
+ }
+ }
+
+ switch (avctx->sw_pix_fmt) {
+ case AV_PIX_FMT_YUV420P:
+ case AV_PIX_FMT_YUVJ420P:
+ s->api_ctx.pixel_format = NI_PIX_FMT_YUV420P;
+ break;
+ case AV_PIX_FMT_YUV420P10LE:
+ s->api_ctx.pixel_format = NI_PIX_FMT_YUV420P10LE;
+ break;
+ case AV_PIX_FMT_NV12:
+ s->api_ctx.pixel_format = NI_PIX_FMT_NV12;
+ break;
+ case AV_PIX_FMT_P010LE:
+ s->api_ctx.pixel_format = NI_PIX_FMT_P010LE;
+ break;
+ case AV_PIX_FMT_NI_QUAD_8_TILE_4X4:
+ s->api_ctx.pixel_format = NI_PIX_FMT_8_TILED4X4;
+ break;
+ case AV_PIX_FMT_NI_QUAD_10_TILE_4X4:
+ s->api_ctx.pixel_format = NI_PIX_FMT_10_TILED4X4;
+ break;
+ case AV_PIX_FMT_ARGB:
+ s->api_ctx.pixel_format = NI_PIX_FMT_ARGB;
+ break;
+ case AV_PIX_FMT_ABGR:
+ s->api_ctx.pixel_format = NI_PIX_FMT_ABGR;
+ break;
+ case AV_PIX_FMT_RGBA:
+ s->api_ctx.pixel_format = NI_PIX_FMT_RGBA;
+ break;
+ case AV_PIX_FMT_BGRA:
+ s->api_ctx.pixel_format = NI_PIX_FMT_BGRA;
+ break;
+ default:
+ av_log(avctx, AV_LOG_ERROR, "Pixfmt %s not supported in Quadra encoder\n",
+ av_get_pix_fmt_name(avctx->sw_pix_fmt));
+ return AVERROR_INVALIDDATA;
+ }
+
+ if (s->xcoder_opts) {
+ AVDictionary *dict = NULL;
+ AVDictionaryEntry *en = NULL;
+
+ if (!av_dict_parse_string(&dict, s->xcoder_opts, "=", ":", 0)) {
+ while ((en = av_dict_get(dict, "", en, AV_DICT_IGNORE_SUFFIX))) {
+ int parse_ret = ni_encoder_params_set_value(p_param, en->key, en->value);
+ if (parse_ret != NI_RETCODE_SUCCESS) {
+ switch (parse_ret) {
+ case NI_RETCODE_PARAM_INVALID_NAME:
+ av_log(avctx, AV_LOG_ERROR, "Unknown option: %s.\n", en->key);
+ av_dict_free(&dict);
+ return AVERROR_EXTERNAL;
+ case NI_RETCODE_PARAM_ERROR_TOO_BIG:
+ av_log(avctx, AV_LOG_ERROR, "Invalid %s: too big\n", en->key);
+ av_dict_free(&dict);
+ return AVERROR_EXTERNAL;
+ case NI_RETCODE_PARAM_ERROR_TOO_SMALL:
+ av_log(avctx, AV_LOG_ERROR, "Invalid %s: too small\n", en->key);
+ av_dict_free(&dict);
+ return AVERROR_EXTERNAL;
+ case NI_RETCODE_PARAM_ERROR_OOR:
+ av_log(avctx, AV_LOG_ERROR, "Invalid %s: out of range\n",
+ en->key);
+ av_dict_free(&dict);
+ return AVERROR_EXTERNAL;
+ case NI_RETCODE_PARAM_ERROR_ZERO:
+ av_log(avctx, AV_LOG_ERROR,
+ "Error setting option %s to value 0\n", en->key);
+ av_dict_free(&dict);
+ return AVERROR_EXTERNAL;
+ case NI_RETCODE_PARAM_INVALID_VALUE:
+ av_log(avctx, AV_LOG_ERROR, "Invalid value for %s: %s.\n",
+ en->key, en->value);
+ av_dict_free(&dict);
+ return AVERROR_EXTERNAL;
+ case NI_RETCODE_PARAM_WARNING_DEPRECATED:
+ av_log(avctx, AV_LOG_WARNING, "Parameter %s is deprecated\n",
+ en->key);
+ break;
+ default:
+ av_log(avctx, AV_LOG_ERROR, "Invalid %s: ret %d\n", en->key,
+ parse_ret);
+ av_dict_free(&dict);
+ return AVERROR_EXTERNAL;
+ }
+ }
+ }
+ av_dict_free(&dict);
+ }
+ }
+
+ if (p_param->enable_vfr) {
+ // in VFR mode, if the initial framerate is outside [5, 120],
+ // treat it as incorrect and fall back to the default 30 fps
+ if (p_param->cfg_enc_params.frame_rate < 5 ||
+ p_param->cfg_enc_params.frame_rate > 120) {
+ p_param->cfg_enc_params.frame_rate = 30;
+ s->api_ctx.prev_fps = 30;
+ } else {
+ s->api_ctx.prev_fps = p_param->cfg_enc_params.frame_rate;
+ }
+ s->api_ctx.last_change_framenum = 0;
+ s->api_ctx.fps_change_detect_count = 0;
+ }
+
+ av_log(avctx, AV_LOG_DEBUG, "p_param->hwframes = %d\n", p_param->hwframes);
+ if (s->xcoder_gop) {
+ AVDictionary *dict = NULL;
+ AVDictionaryEntry *en = NULL;
+
+ if (!av_dict_parse_string(&dict, s->xcoder_gop, "=", ":", 0)) {
+ if (!gop_params_check(dict, avctx)) {
+ av_dict_free(&dict);
+ return AVERROR_EXTERNAL;
+ }
+
+ while ((en = av_dict_get(dict, "", en, AV_DICT_IGNORE_SUFFIX))) {
+ int parse_ret = ni_encoder_gop_params_set_value(p_param, en->key, en->value);
+ if (parse_ret != NI_RETCODE_SUCCESS) {
+ switch (parse_ret) {
+ case NI_RETCODE_PARAM_INVALID_NAME:
+ av_log(avctx, AV_LOG_ERROR, "Unknown option: %s.\n", en->key);
+ av_dict_free(&dict);
+ return AVERROR_EXTERNAL;
+ case NI_RETCODE_PARAM_ERROR_TOO_BIG:
+ av_log(avctx, AV_LOG_ERROR,
+ "Invalid custom GOP parameters: %s too big\n", en->key);
+ av_dict_free(&dict);
+ return AVERROR_EXTERNAL;
+ case NI_RETCODE_PARAM_ERROR_TOO_SMALL:
+ av_log(avctx, AV_LOG_ERROR,
+ "Invalid custom GOP parameters: %s too small\n",
+ en->key);
+ av_dict_free(&dict);
+ return AVERROR_EXTERNAL;
+ case NI_RETCODE_PARAM_ERROR_OOR:
+ av_log(avctx, AV_LOG_ERROR,
+ "Invalid custom GOP parameters: %s out of range\n",
+ en->key);
+ av_dict_free(&dict);
+ return AVERROR_EXTERNAL;
+ case NI_RETCODE_PARAM_ERROR_ZERO:
+ av_log(avctx, AV_LOG_ERROR,
+ "Invalid custom GOP parameters: Error setting option %s "
+ "to value 0\n",
+ en->key);
+ av_dict_free(&dict);
+ return AVERROR_EXTERNAL;
+ case NI_RETCODE_PARAM_INVALID_VALUE:
+ av_log(avctx, AV_LOG_ERROR,
+ "Invalid value for GOP param %s: %s.\n", en->key,
+ en->value);
+ av_dict_free(&dict);
+ return AVERROR_EXTERNAL;
+ case NI_RETCODE_PARAM_WARNING_DEPRECATED:
+ av_log(avctx, AV_LOG_WARNING, "Parameter %s is deprecated\n",
+ en->key);
+ break;
+ default:
+ av_log(avctx, AV_LOG_ERROR, "Invalid %s: ret %d\n", en->key,
+ parse_ret);
+ av_dict_free(&dict);
+ return AVERROR_EXTERNAL;
+ }
+ }
+ }
+ av_dict_free(&dict);
+ }
+ }
+ if (s->nvme_io_size > 0 && s->nvme_io_size % 4096 != 0) {
+ av_log(avctx, AV_LOG_ERROR, "Error XCoder iosize is not 4KB aligned!\n");
+ return AVERROR_EXTERNAL;
+ }
+
+ s->api_ctx.p_session_config = &s->api_param;
+ pparams = (ni_xcoder_params_t *)s->api_ctx.p_session_config;
+ switch (pparams->cfg_enc_params.gop_preset_index) {
+ /* dtsOffset is the max number of non-reference frames in a GOP
+ * (derived from the x264/x265 algorithm). For IBBBP the first dts of
+ * the I frame should be input_pts - (3 * ticks_per_frame); for IBP it
+ * should be input_pts - (1 * ticks_per_frame). This ensures pts > dts
+ * in all cases. */
+ case 1:
+ case 9:
+ case 10:
+ s->dtsOffset = 0;
+ break;
+ /* ts requires dts/pts of I frame not same when there are B frames in
+ streams */
+ case 3:
+ case 4:
+ case 7:
+ s->dtsOffset = 1;
+ break;
+ case 5:
+ s->dtsOffset = 2;
+ break;
+ case -1: // adaptive GOP
+ case 8:
+ s->dtsOffset = 3;
+ break;
+ default:
+ s->dtsOffset = 7;
+ break;
+ }
+
+ if (pparams->cfg_enc_params.custom_gop_params.custom_gop_size) {
+ int dts_offset = 0;
+ bool has_b_frame = false;
+ s->dtsOffset = 0;
+ for (int idx = 0;
+ idx < pparams->cfg_enc_params.custom_gop_params.custom_gop_size;
+ idx++) {
+ if (pparams->cfg_enc_params.custom_gop_params.pic_param[idx].poc_offset <
+ idx + 1) {
+ dts_offset = (idx + 1) -
+ pparams->cfg_enc_params.custom_gop_params.pic_param[idx].
+ poc_offset;
+ if (s->dtsOffset < dts_offset) {
+ s->dtsOffset = dts_offset;
+ }
+ }
+
+ if (!has_b_frame &&
+ (pparams->cfg_enc_params.custom_gop_params.pic_param[idx].pic_type ==
+ PIC_TYPE_B)) {
+ has_b_frame = true;
+ }
+ }
+
+ if (has_b_frame && !s->dtsOffset) {
+ s->dtsOffset = 1;
+ }
+ }
+ av_log(avctx, AV_LOG_VERBOSE, "dts offset set to %ld\n", s->dtsOffset);
+
+ s->total_frames_received = 0;
+ s->gop_offset_count = 0;
+ av_log(avctx, AV_LOG_INFO, "dts offset: %ld, gop_offset_count: %d\n",
+ s->dtsOffset, s->gop_offset_count);
+
+ //overwrite the nvme io size here with a custom value if it was provided
+ if (s->nvme_io_size > 0) {
+ s->api_ctx.max_nvme_io_size = s->nvme_io_size;
+ av_log(avctx, AV_LOG_VERBOSE, "Custom NVMe IO Size set to %u\n",
+ s->api_ctx.max_nvme_io_size);
+ }
+
+ // overwrite the keep alive timeout value here with a custom value if it
+ // was provided; the xcoder option, if set, overrides the (legacy)
+ // top-level option
+ xcoder_timeout = s->api_param.cfg_enc_params.keep_alive_timeout;
+ if (xcoder_timeout != NI_DEFAULT_KEEP_ALIVE_TIMEOUT) {
+ s->api_ctx.keep_alive_timeout = xcoder_timeout;
+ } else {
+ s->api_ctx.keep_alive_timeout = s->keep_alive_timeout;
+ }
+ av_log(avctx, AV_LOG_VERBOSE, "Custom NVMe Keep Alive Timeout set to %d\n",
+ s->api_ctx.keep_alive_timeout);
+
+ s->encoder_eof = 0;
+ avctx->bit_rate = pparams->bitrate;
+
+ s->api_ctx.src_bit_depth = 8;
+ s->api_ctx.src_endian = NI_FRAME_LITTLE_ENDIAN;
+ s->api_ctx.roi_len = 0;
+ s->api_ctx.roi_avg_qp = 0;
+ s->api_ctx.bit_depth_factor = 1;
+ if (AV_PIX_FMT_YUV420P10BE == avctx->sw_pix_fmt ||
+ AV_PIX_FMT_YUV420P10LE == avctx->sw_pix_fmt ||
+ AV_PIX_FMT_P010LE == avctx->sw_pix_fmt ||
+ AV_PIX_FMT_NI_QUAD_10_TILE_4X4 == avctx->sw_pix_fmt) {
+ s->api_ctx.bit_depth_factor = 2;
+ s->api_ctx.src_bit_depth = 10;
+ if (AV_PIX_FMT_YUV420P10BE == avctx->sw_pix_fmt) {
+ s->api_ctx.src_endian = NI_FRAME_BIG_ENDIAN;
+ }
+ }
+ switch (avctx->sw_pix_fmt) {
+ case AV_PIX_FMT_NV12:
+ case AV_PIX_FMT_P010LE:
+ pparams->cfg_enc_params.planar = NI_PIXEL_PLANAR_FORMAT_SEMIPLANAR;
+ break;
+ case AV_PIX_FMT_NI_QUAD_8_TILE_4X4:
+ case AV_PIX_FMT_NI_QUAD_10_TILE_4X4:
+ pparams->cfg_enc_params.planar = NI_PIXEL_PLANAR_FORMAT_TILED4X4;
+ break;
+ default:
+ pparams->cfg_enc_params.planar = NI_PIXEL_PLANAR_FORMAT_PLANAR;
+ break;
+ }
+
+ s->freeHead = 0;
+ s->freeTail = 0;
+ for (i = 0; i < MAX_NUM_FRAMEPOOL_HWAVFRAME; i++) {
+ s->sframe_pool[i] = av_frame_alloc();
+ if (!s->sframe_pool[i]) {
+ return AVERROR(ENOMEM);
+ }
+ s->aFree_Avframes_list[i] = i;
+ s->freeTail++;
+ }
+ s->aFree_Avframes_list[i] = -1;
+
+ // init HDR SEI stuff
+ s->api_ctx.sei_hdr_content_light_level_info_len =
+ s->api_ctx.light_level_data_len =
+ s->api_ctx.sei_hdr_mastering_display_color_vol_len =
+ s->api_ctx.mdcv_max_min_lum_data_len = 0;
+ s->api_ctx.p_master_display_meta_data = NULL;
+
+ memset( &(s->api_fme), 0, sizeof(ni_session_data_io_t) );
+ memset( &(s->api_pkt), 0, sizeof(ni_session_data_io_t) );
+
+ s->api_pkt.data.packet.av1_buffer_index = 0;
+
+ //validate encoded bitstream headers struct for encoder open
+ if (xcoder_encoder_header_check_set(avctx) < 0) {
+ return AVERROR_EXTERNAL;
+ }
+
+ // aspect ratio
+ // use the value passed in from FFmpeg if the aspect ratio in xcoder-params has default values
+ if ((p_param->cfg_enc_params.aspectRatioWidth == 0) && (p_param->cfg_enc_params.aspectRatioHeight == 1)) {
+ p_param->cfg_enc_params.aspectRatioWidth = avctx->sample_aspect_ratio.num;
+ p_param->cfg_enc_params.aspectRatioHeight = avctx->sample_aspect_ratio.den;
+ }
+
+ // generate encoded bitstream headers in advance if configured to do so
+ if ((avctx->codec_id != AV_CODEC_ID_MJPEG) &&
+ (s->gen_global_headers == 1 ||
+ ((avctx->flags & AV_CODEC_FLAG_GLOBAL_HEADER) &&
+ (s->gen_global_headers == GEN_GLOBAL_HEADERS_AUTO)))) {
+ ret = xcoder_encoder_headers(avctx);
+ }
+
+ // original resolution this stream started with; used by encoder sequence change
+ s->api_ctx.ori_width = avctx->width;
+ s->api_ctx.ori_height = avctx->height;
+ s->api_ctx.ori_bit_depth_factor = s->api_ctx.bit_depth_factor;
+ s->api_ctx.ori_pix_fmt = s->api_ctx.pixel_format;
+
+ av_log(avctx, AV_LOG_INFO, "xcoder_setup_encoder "
+ "sw_pix_fmt %d ori_pix_fmt %d\n",
+ avctx->sw_pix_fmt, s->api_ctx.ori_pix_fmt);
+
+ s->api_ctx.ori_luma_linesize = 0;
+ s->api_ctx.ori_chroma_linesize = 0;
+
+ return ret;
+}
+
+av_cold int xcoder_encode_init(AVCodecContext *avctx)
+{
+ XCoderEncContext *ctx = avctx->priv_data;
+ AVHWFramesContext *avhwf_ctx;
+ int ret;
+ ni_log_set_level(ff_to_ni_log_level(av_log_get_level()));
+
+ av_log(avctx, AV_LOG_VERBOSE, "XCoder encode init\n");
+
+ if (ctx->api_ctx.session_run_state == SESSION_RUN_STATE_SEQ_CHANGE_DRAINING) {
+ ctx->dev_enc_idx = ctx->orig_dev_enc_idx;
+ } else {
+ ctx->orig_dev_enc_idx = ctx->dev_enc_idx;
+ }
+
+ if ((ret = xcoder_setup_encoder(avctx)) < 0) {
+ xcoder_encode_close(avctx);
+ return ret;
+ }
+
+ if (!avctx->hw_device_ctx) {
+ if (avctx->hw_frames_ctx) {
+ avhwf_ctx = (AVHWFramesContext *)avctx->hw_frames_ctx->data;
+ avctx->hw_device_ctx = av_buffer_ref(avhwf_ctx->device_ref);
+ }
+ }
+
+ return 0;
+}
+
+int xcoder_encode_close(AVCodecContext *avctx)
+{
+ XCoderEncContext *ctx = avctx->priv_data;
+ ni_retcode_t ret = NI_RETCODE_FAILURE;
+ int i;
+
+ for (i = 0; i < MAX_NUM_FRAMEPOOL_HWAVFRAME; i++) {
+ // any stored AVFrames that have not been unreferenced are freed here;
+ // av_frame_free() also resets the pointer to NULL
+ av_frame_free(&ctx->sframe_pool[i]);
+ }
+
+ ret = ni_device_session_close(&ctx->api_ctx, ctx->encoder_eof,
+ NI_DEVICE_TYPE_ENCODER);
+ if (NI_RETCODE_SUCCESS != ret) {
+ av_log(avctx, AV_LOG_ERROR, "Failed to close Encoder Session (status = %d)\n", ret);
+ }
+
+ av_log(avctx, AV_LOG_VERBOSE, "XCoder encode close: session_run_state %d\n", ctx->api_ctx.session_run_state);
+ if (ctx->api_ctx.session_run_state != SESSION_RUN_STATE_SEQ_CHANGE_DRAINING) {
+ av_log(avctx, AV_LOG_VERBOSE, "XCoder encode close: close blk_io_handle %d device_handle %d\n", ctx->api_ctx.blk_io_handle, ctx->api_ctx.device_handle);
+#ifdef _WIN32
+ ni_device_close(ctx->api_ctx.device_handle);
+#elif defined(__linux__)
+ ni_device_close(ctx->api_ctx.device_handle);
+ ni_device_close(ctx->api_ctx.blk_io_handle);
+#endif
+ ctx->api_ctx.device_handle = NI_INVALID_DEVICE_HANDLE;
+ ctx->api_ctx.blk_io_handle = NI_INVALID_DEVICE_HANDLE;
+ ctx->api_ctx.auto_dl_handle = NI_INVALID_DEVICE_HANDLE;
+ ctx->api_ctx.sender_handle = NI_INVALID_DEVICE_HANDLE;
+ }
+
+ av_log(avctx, AV_LOG_VERBOSE, "XCoder encode close (status = %d)\n", ret);
+
+ if (ctx->api_fme.data.frame.buffer_size
+ || ctx->api_fme.data.frame.metadata_buffer_size
+ || ctx->api_fme.data.frame.start_buffer_size) {
+ ni_frame_buffer_free(&(ctx->api_fme.data.frame));
+ }
+ ni_packet_buffer_free(&(ctx->api_pkt.data.packet));
+ if (AV_CODEC_ID_AV1 == avctx->codec_id &&
+ ctx->api_pkt.data.packet.av1_buffer_index)
+ ni_packet_buffer_free_av1(&(ctx->api_pkt.data.packet));
+
+ av_log(avctx, AV_LOG_DEBUG, "fifo num frames: %zu\n",
+ av_fifo_can_read(ctx->fme_fifo));
+ if (ctx->api_ctx.session_run_state != SESSION_RUN_STATE_SEQ_CHANGE_DRAINING) {
+ av_fifo_freep2(&ctx->fme_fifo);
+ av_log(avctx, AV_LOG_DEBUG, " , freed.\n");
+ } else {
+ av_log(avctx, AV_LOG_DEBUG, " , kept.\n");
+ }
+
+ ni_device_session_context_clear(&ctx->api_ctx);
+
+ ni_rsrc_free_device_context(ctx->rsrc_ctx);
+ ctx->rsrc_ctx = NULL;
+
+ ni_memfree(ctx->av_rois);
+ av_freep(&ctx->p_spsPpsHdr);
+
+ if (avctx->hw_device_ctx) {
+ av_buffer_unref(&avctx->hw_device_ctx);
+ }
+ ctx->started = 0;
+
+ return 0;
+}
+
+int xcoder_encode_sequence_change(AVCodecContext *avctx, int width, int height, int bit_depth_factor)
+{
+ XCoderEncContext *ctx = avctx->priv_data;
+ ni_retcode_t ret = NI_RETCODE_FAILURE;
+ ni_xcoder_params_t *p_param = &ctx->api_param;
+ ni_xcoder_params_t *pparams = (ni_xcoder_params_t *)ctx->api_ctx.p_session_config;
+
+ av_log(avctx, AV_LOG_VERBOSE, "XCoder encode sequence change: session_run_state %d\n", ctx->api_ctx.session_run_state);
+
+ ret = ni_device_session_sequence_change(&ctx->api_ctx, width, height, bit_depth_factor, NI_DEVICE_TYPE_ENCODER);
+
+ if (NI_RETCODE_SUCCESS != ret) {
+ av_log(avctx, AV_LOG_ERROR, "Failed to send Sequence Change to Encoder Session (status = %d)\n", ret);
+ return ret;
+ }
+
+ // update AvCodecContext
+ if (avctx->pix_fmt != AV_PIX_FMT_NI_QUAD) {
+ av_log(avctx, AV_LOG_INFO, "sw_pix_fmt was %d, assigned from pix_fmt %d\n",
+ avctx->sw_pix_fmt, avctx->pix_fmt);
+ avctx->sw_pix_fmt = avctx->pix_fmt;
+ } else {
+ if ((avctx->height >= NI_MIN_HEIGHT) && (avctx->width >= NI_MIN_WIDTH)) {
+ p_param->hwframes = 1;
+ }
+ }
+
+ switch (avctx->sw_pix_fmt) {
+ case AV_PIX_FMT_YUV420P:
+ case AV_PIX_FMT_YUVJ420P:
+ case AV_PIX_FMT_YUV420P10LE:
+ case AV_PIX_FMT_NV12:
+ case AV_PIX_FMT_P010LE:
+ case AV_PIX_FMT_NI_QUAD_8_TILE_4X4:
+ case AV_PIX_FMT_NI_QUAD_10_TILE_4X4:
+ case AV_PIX_FMT_ARGB:
+ case AV_PIX_FMT_ABGR:
+ case AV_PIX_FMT_RGBA:
+ case AV_PIX_FMT_BGRA:
+ break;
+ case AV_PIX_FMT_YUV420P12:
+ case AV_PIX_FMT_YUV422P:
+ case AV_PIX_FMT_YUV422P10:
+ case AV_PIX_FMT_YUV422P12:
+ case AV_PIX_FMT_GBRP:
+ case AV_PIX_FMT_GBRP10:
+ case AV_PIX_FMT_GBRP12:
+ case AV_PIX_FMT_YUV444P:
+ case AV_PIX_FMT_YUV444P10:
+ case AV_PIX_FMT_YUV444P12:
+ case AV_PIX_FMT_GRAY8:
+ case AV_PIX_FMT_GRAY10:
+ case AV_PIX_FMT_GRAY12:
+ default:
+ return AVERROR_INVALIDDATA;
+ }
+
+ // update session context
+ ctx->api_ctx.bit_depth_factor = bit_depth_factor;
+ ctx->api_ctx.src_bit_depth = (bit_depth_factor == 1) ? 8 : 10;
+ ctx->api_ctx.src_endian = (AV_PIX_FMT_YUV420P10BE == avctx->sw_pix_fmt) ? NI_FRAME_BIG_ENDIAN : NI_FRAME_LITTLE_ENDIAN;
+ ctx->api_ctx.ready_to_close = 0;
+ // reset frame_num and pkt_num: pkt_num is set to 1 when the header is
+ // received after a sequence change, and low delay mode compares frame_num
+ // and pkt_num; pkt_num > frame_num before the header is received would
+ // also make low delay mode stuck
+ ctx->api_ctx.frame_num = 0;
+ ctx->api_ctx.pkt_num = 0;
+ ctx->api_pkt.data.packet.end_of_stream = 0;
+
+ switch (avctx->sw_pix_fmt) {
+ case AV_PIX_FMT_NV12:
+ case AV_PIX_FMT_P010LE:
+ pparams->cfg_enc_params.planar = NI_PIXEL_PLANAR_FORMAT_SEMIPLANAR;
+ break;
+ case AV_PIX_FMT_NI_QUAD_8_TILE_4X4:
+ case AV_PIX_FMT_NI_QUAD_10_TILE_4X4:
+ pparams->cfg_enc_params.planar = NI_PIXEL_PLANAR_FORMAT_TILED4X4;
+ break;
+ default:
+ pparams->cfg_enc_params.planar = NI_PIXEL_PLANAR_FORMAT_PLANAR;
+ break;
+ }
+ return ret;
+}
+
+static int xcoder_encode_reset(AVCodecContext *avctx)
+{
+ av_log(avctx, AV_LOG_WARNING, "XCoder encode reset\n");
+ xcoder_encode_close(avctx);
+ return xcoder_encode_init(avctx);
+}
+
+// frame fifo operations
+static int is_input_fifo_empty(XCoderEncContext *s)
+{
+ if (!s->fme_fifo) {
+ return 1;
+ }
+ return av_fifo_can_read(s->fme_fifo) ? 0 : 1;
+}
+
+static int enqueue_frame(AVCodecContext *avctx, const AVFrame *inframe)
+{
+ XCoderEncContext *ctx = avctx->priv_data;
+ size_t nb_elems;
+ int ret = 0;
+
+ // expand frame buffer fifo if not enough space
+ if (!av_fifo_can_write(ctx->fme_fifo)) {
+ if (av_fifo_can_read(ctx->fme_fifo) >= NI_MAX_FIFO_CAPACITY) {
+ av_log(avctx, AV_LOG_ERROR, "Encoder frame buffer fifo capacity (%zu) reached maximum (%d)\n",
+ av_fifo_can_read(ctx->fme_fifo), NI_MAX_FIFO_CAPACITY);
+ return AVERROR_EXTERNAL;
+ }
+
+ ret = av_fifo_grow2(ctx->fme_fifo, (size_t) 1);
+ if (ret < 0) {
+ av_log(avctx, AV_LOG_ERROR, "Cannot grow FIFO: out of memory\n");
+ return ret;
+ }
+
+ nb_elems = av_fifo_can_read(ctx->fme_fifo) + av_fifo_can_write(ctx->fme_fifo);
+ if ((nb_elems % 100) == 0) {
+ av_log(avctx, AV_LOG_INFO, "Enc fifo being extended to: %zu\n", nb_elems);
+ }
+ }
+
+ if (inframe == &ctx->buffered_fme) {
+ av_fifo_write(ctx->fme_fifo, (void *)inframe, (size_t) 1);
+ } else {
+ AVFrame temp_frame;
+ memset(&temp_frame, 0, sizeof(AVFrame));
+ // ref the frame to avoid a double free between the external input
+ // frame and our buffered frame
+ ret = av_frame_ref(&temp_frame, inframe);
+ if (ret < 0) {
+ av_log(avctx, AV_LOG_ERROR, "Cannot ref input frame\n");
+ return ret;
+ }
+ av_fifo_write(ctx->fme_fifo, &temp_frame, (size_t) 1);
+ }
+
+ av_log(avctx, AV_LOG_DEBUG, "fme queued, fifo num frames: %zu\n",
+ av_fifo_can_read(ctx->fme_fifo));
+ return ret;
+}
+
+int xcoder_send_frame(AVCodecContext *avctx, const AVFrame *frame)
+{
+ XCoderEncContext *ctx = avctx->priv_data;
+ bool ishwframe;
+ bool isnv12frame;
+ bool alignment_2pass_wa;
+ int format_in_use;
+ int ret = 0;
+ int sent;
+ int i, j;
+ int orig_avctx_width = avctx->width;
+ int orig_avctx_height = avctx->height;
+ ni_xcoder_params_t *p_param;
+ int need_to_copy = 1;
+ AVHWFramesContext *avhwf_ctx;
+ AVNIFramesContext *nif_src_ctx;
+ AVFrameSideData *side_data;
+ const AVFrame *first_frame = NULL;
+ // employ a ni_frame_t as a data holder to convert/prepare for side data
+ // of the passed in frame
+ ni_frame_t dec_frame = {0};
+ ni_aux_data_t *aux_data = NULL;
+ // data buffer for various SEI: HDR mastering display color volume, HDR
+ // content light level, close caption, User data unregistered, HDR10+ etc.
+ int send_sei_with_idr;
+ uint8_t mdcv_data[NI_MAX_SEI_DATA];
+ uint8_t cll_data[NI_MAX_SEI_DATA];
+ uint8_t cc_data[NI_MAX_SEI_DATA];
+ uint8_t udu_data[NI_MAX_SEI_DATA];
+ uint8_t hdrp_data[NI_MAX_SEI_DATA];
+
+ av_log(avctx, AV_LOG_VERBOSE, "XCoder send frame\n");
+
+ p_param = (ni_xcoder_params_t *) ctx->api_ctx.p_session_config;
+ alignment_2pass_wa = ((p_param->cfg_enc_params.lookAheadDepth ||
+ p_param->cfg_enc_params.crf >= 0 ||
+ p_param->cfg_enc_params.crfFloat >= 0) &&
+ (avctx->codec_id == AV_CODEC_ID_HEVC ||
+ avctx->codec_id == AV_CODEC_ID_AV1));
+
+ // defer opening the encoder instance until the first frame buffer
+ // arrives so that its stride size is known and handled accordingly
+ if (ctx->started == 0) {
+ if (!is_input_fifo_empty(ctx)) {
+ av_log(avctx, AV_LOG_VERBOSE, "first frame: use fme from fifo peek\n");
+ av_fifo_peek(ctx->fme_fifo, &ctx->buffered_fme, (size_t) 1, NULL);
+ ctx->buffered_fme.extended_data = ctx->buffered_fme.data;
+ first_frame = &ctx->buffered_fme;
+
+ } else if (frame) {
+ av_log(avctx, AV_LOG_VERBOSE, "first frame: use input frame\n");
+ first_frame = frame;
+ } else {
+ av_log(avctx, AV_LOG_ERROR, "first frame: NULL is unexpected!\n");
+ }
+ } else if (ctx->api_ctx.session_run_state == SESSION_RUN_STATE_SEQ_CHANGE_OPENING) {
+ if (!is_input_fifo_empty(ctx)) {
+ av_log(avctx, AV_LOG_VERBOSE, "first frame: use fme from fifo peek\n");
+ av_fifo_peek(ctx->fme_fifo, &ctx->buffered_fme, (size_t) 1, NULL);
+ ctx->buffered_fme.extended_data = ctx->buffered_fme.data;
+ first_frame = &ctx->buffered_fme;
+ } else {
+ av_log(avctx, AV_LOG_ERROR, "No buffered frame - sequence change failed\n");
+ ret = AVERROR_EXTERNAL;
+ return ret;
+ }
+ }
+
+ if (first_frame && ctx->started == 0) {
+ // if frame stride size is not as we expect it,
+ // adjust using xcoder-params conf_win_right
+ int linesize_aligned = first_frame->width;
+ int height_aligned = first_frame->height;
+ ishwframe = first_frame->format == AV_PIX_FMT_NI_QUAD;
+
+ if (linesize_aligned < NI_MIN_WIDTH) {
+ p_param->cfg_enc_params.conf_win_right +=
+ (NI_MIN_WIDTH - first_frame->width) / 2 * 2;
+ linesize_aligned = NI_MIN_WIDTH;
+ } else {
+ if (avctx->sw_pix_fmt == AV_PIX_FMT_NI_QUAD_8_TILE_4X4 ||
+ avctx->sw_pix_fmt == AV_PIX_FMT_NI_QUAD_10_TILE_4X4) {
+ linesize_aligned = FFALIGN(first_frame->width, 4);
+ p_param->cfg_enc_params.conf_win_right +=
+ (linesize_aligned - first_frame->width) / 2 * 2;
+ } else {
+ linesize_aligned = FFALIGN(first_frame->width, 2);
+ p_param->cfg_enc_params.conf_win_right +=
+ (linesize_aligned - first_frame->width) / 2 * 2;
+ }
+ }
+ p_param->source_width = linesize_aligned;
+
+ if (height_aligned < NI_MIN_HEIGHT) {
+ p_param->cfg_enc_params.conf_win_bottom +=
+ (NI_MIN_HEIGHT - first_frame->height) / 2 * 2;
+ height_aligned = NI_MIN_HEIGHT;
+ } else {
+ if (avctx->sw_pix_fmt == AV_PIX_FMT_NI_QUAD_8_TILE_4X4 ||
+ avctx->sw_pix_fmt == AV_PIX_FMT_NI_QUAD_10_TILE_4X4) {
+ height_aligned = FFALIGN(first_frame->height, 4);
+ p_param->cfg_enc_params.conf_win_bottom +=
+ (height_aligned - first_frame->height) / 4 * 4;
+ } else {
+ height_aligned = FFALIGN(first_frame->height, 2);
+ p_param->cfg_enc_params.conf_win_bottom +=
+ (height_aligned - first_frame->height) / 2 * 2;
+ }
+ }
+ p_param->source_height = height_aligned;
+
+ av_log(avctx, AV_LOG_DEBUG,
+ "color primaries (%u %u) colorspace (%u %u) color_range (%u %u)\n",
+ avctx->color_primaries, first_frame->color_primaries,
+ avctx->colorspace, first_frame->colorspace,
+ avctx->color_range, first_frame->color_range);
+
+ if (avctx->color_primaries == AVCOL_PRI_UNSPECIFIED) {
+ avctx->color_primaries = first_frame->color_primaries;
+ }
+ if (avctx->color_trc == AVCOL_TRC_UNSPECIFIED) {
+ avctx->color_trc = first_frame->color_trc;
+ }
+ if (avctx->colorspace == AVCOL_SPC_UNSPECIFIED) {
+ avctx->colorspace = first_frame->colorspace;
+ }
+ avctx->color_range = first_frame->color_range;
+
+ if (xcoder_encoder_header_check_set(avctx) < 0) {
+ return AVERROR_EXTERNAL;
+ }
+
+ av_log(avctx, AV_LOG_VERBOSE,
+ "XCoder frame->linesize: %d/%d/%d frame width/height %dx%d"
+ " conf_win_right %d conf_win_bottom %d , color primaries %u trc %u "
+ "space %u format %d\n",
+ first_frame->linesize[0], first_frame->linesize[1],
+ first_frame->linesize[2], first_frame->width, first_frame->height,
+ p_param->cfg_enc_params.conf_win_right,
+ p_param->cfg_enc_params.conf_win_bottom,
+ first_frame->color_primaries, first_frame->color_trc,
+ first_frame->colorspace, first_frame->format);
+
+ if (SESSION_RUN_STATE_SEQ_CHANGE_OPENING != ctx->api_ctx.session_run_state) {
+ // sequence change backs up / restores the encoder device handles, hw_id
+ // and block device name, so there is no need to overwrite
+ // hw_id/blk_dev_name with user-set values
+ ctx->api_ctx.hw_id = ctx->dev_enc_idx;
+ ff_xcoder_strncpy(ctx->api_ctx.dev_xcoder_name, ctx->dev_xcoder,
+ MAX_CHAR_IN_DEVICE_NAME);
+
+ ff_xcoder_strncpy(ctx->api_ctx.blk_dev_name, ctx->dev_blk_name,
+ NI_MAX_DEVICE_NAME_LEN);
+ }
+
+ p_param->cfg_enc_params.enable_acq_limit = 1;
+ p_param->rootBufId = (ishwframe) ? ((niFrameSurface1_t*)((uint8_t*)first_frame->data[3]))->ui16FrameIdx : 0;
+ if (ishwframe) {
+ ctx->api_ctx.hw_action = NI_CODEC_HW_ENABLE;
+ ctx->api_ctx.sender_handle = (ni_device_handle_t)(
+ (int64_t)(((niFrameSurface1_t *)((uint8_t *)first_frame->data[3]))
+ ->device_handle));
+ }
+
+ if (first_frame->hw_frames_ctx && ctx->api_ctx.hw_id == -1 &&
+ 0 == strcmp(ctx->api_ctx.blk_dev_name, "")) {
+ ctx->api_ctx.hw_id = ni_get_cardno(first_frame);
+ av_log(avctx, AV_LOG_VERBOSE,
+ "xcoder_send_frame: hw_id -1, empty blk_dev_name, collocated "
+ "to %d\n",
+ ctx->api_ctx.hw_id);
+ }
+
+ // AUD insertion has to be handled differently in the firmware
+ // when global headers are in use
+ if (p_param->cfg_enc_params.EnableAUD) {
+ if (avctx->flags & AV_CODEC_FLAG_GLOBAL_HEADER) {
+ p_param->cfg_enc_params.EnableAUD = NI_ENABLE_AUD_FOR_GLOBAL_HEADER;
+ }
+
+ av_log(avctx, AV_LOG_VERBOSE,
+ "%s: EnableAUD %d global header flag %d\n", __FUNCTION__,
+ (p_param->cfg_enc_params.EnableAUD),
+ (avctx->flags & AV_CODEC_FLAG_GLOBAL_HEADER) ? 1 : 0);
+ }
+
+ // config linesize for zero copy (if input resolution is zero copy compatible)
+ ni_encoder_frame_zerocopy_check(&ctx->api_ctx,
+ p_param, first_frame->width, first_frame->height,
+ first_frame->linesize, true);
+
+ ret = ni_device_session_open(&ctx->api_ctx, NI_DEVICE_TYPE_ENCODER);
+
+ // As the file handle may change, we need to assign it back
+ ctx->dev_xcoder_name = ctx->api_ctx.dev_xcoder_name;
+ ctx->blk_xcoder_name = ctx->api_ctx.blk_xcoder_name;
+ ctx->dev_enc_idx = ctx->api_ctx.hw_id;
+
+ switch (ret) {
+ case NI_RETCODE_SUCCESS:
+ av_log(avctx, AV_LOG_VERBOSE,
+ "XCoder %s.%d (inst: %d) opened successfully\n",
+ ctx->dev_xcoder_name, ctx->dev_enc_idx, ctx->api_ctx.session_id);
+ break;
+ case NI_RETCODE_INVALID_PARAM:
+ av_log(avctx, AV_LOG_ERROR,
+ "Failed to open encoder (status = %d), invalid parameter values "
+ "given: %s\n",
+ ret, ctx->api_ctx.param_err_msg);
+ ret = AVERROR_EXTERNAL;
+ return ret;
+ default:
+ av_log(avctx, AV_LOG_ERROR,
+ "Failed to open encoder (status = %d), "
+ "resource unavailable\n",
+ ret);
+ // for FFmpeg >= 6.1, on sequence change session open failure: unlike
+ // previous FFmpeg versions, which terminate streams and codecs right away
+ // by calling exit_program after submit_encode_frame returns an error,
+ // FFmpeg 6.1 calls enc_flush after the error, which enters this function
+ // again, but the buffered frame would have been unref'ed by then; we must
+ // therefore remove the buffered frame upon session open failure to prevent
+ // accessing or unref'ing an invalid frame
+ if (SESSION_RUN_STATE_SEQ_CHANGE_DRAINING !=
+ ctx->api_ctx.session_run_state) {
+ if (! is_input_fifo_empty(ctx)) {
+ av_fifo_drain2(ctx->fme_fifo, (size_t) 1);
+ av_log(avctx, AV_LOG_DEBUG, "fme popped, fifo num frames: %zu\n",
+ av_fifo_can_read(ctx->fme_fifo));
+ }
+ }
+ ret = AVERROR_EXTERNAL;
+ return ret;
+ }
+
+ // set up ROI map if in ROI demo mode
+ // Note: this is for demo purposes, and its direct access to the QP map in
+ // session context is not the usual way to do ROI; the normal way is
+ // through side data of AVFrame in libavcodec, or aux data of ni_frame
+ // in libxcoder
+ if (p_param->cfg_enc_params.roi_enable &&
+ (1 == p_param->roi_demo_mode || 2 == p_param->roi_demo_mode)) {
+ if (ni_set_demo_roi_map(&ctx->api_ctx) < 0) {
+ return AVERROR(ENOMEM);
+ }
+ }
+ } //end if (first_frame && ctx->started == 0)
+
+ if (ctx->encoder_flushing) {
+ if (! frame && is_input_fifo_empty(ctx)) {
+ av_log(avctx, AV_LOG_DEBUG, "XCoder EOF: null frame && fifo empty\n");
+ return AVERROR_EOF;
+ }
+ }
+
+ if (! frame) {
+ if (is_input_fifo_empty(ctx)) {
+ ctx->eos_fme_received = 1;
+ av_log(avctx, AV_LOG_DEBUG, "null frame, eos_fme_received = 1\n");
+ } else {
+ avctx->internal->draining = 0;
+ av_log(avctx, AV_LOG_DEBUG, "null frame, but fifo not empty, clear draining = 0\n");
+ }
+ } else {
+ av_log(avctx, AV_LOG_DEBUG, "XCoder send frame #%"PRIu64"\n",
+ ctx->api_ctx.frame_num);
+
+ // queue up the frame if the fifo is not empty, or if a sequence change is ongoing
+ if (! is_input_fifo_empty(ctx) ||
+ SESSION_RUN_STATE_SEQ_CHANGE_DRAINING == ctx->api_ctx.session_run_state) {
+ ret = enqueue_frame(avctx, frame);
+ if (ret < 0) {
+ return ret;
+ }
+
+ if (SESSION_RUN_STATE_SEQ_CHANGE_DRAINING ==
+ ctx->api_ctx.session_run_state) {
+ av_log(avctx, AV_LOG_TRACE, "XCoder doing sequence change, frame #%"PRIu64" "
+ "queued and return 0 !\n", ctx->api_ctx.frame_num);
+ return 0;
+ }
+ } else if (frame != &ctx->buffered_fme) {
+ ret = av_frame_ref(&ctx->buffered_fme, frame);
+ if (ret < 0) {
+ return ret;
+ }
+ }
+ }
+
+resend:
+
+ if (ctx->started == 0) {
+ ctx->api_fme.data.frame.start_of_stream = 1;
+ ctx->started = 1;
+ } else if (ctx->api_ctx.session_run_state == SESSION_RUN_STATE_SEQ_CHANGE_OPENING) {
+ ctx->api_fme.data.frame.start_of_stream = 1;
+ } else {
+ ctx->api_fme.data.frame.start_of_stream = 0;
+ }
+
+ if (is_input_fifo_empty(ctx)) {
+ av_log(avctx, AV_LOG_DEBUG,
+ "no frame in fifo to send, just send/receive ..\n");
+ if (ctx->eos_fme_received) {
+ av_log(avctx, AV_LOG_DEBUG,
+ "no frame in fifo to send, send eos ..\n");
+ }
+ } else {
+ av_log(avctx, AV_LOG_DEBUG, "fifo peek fme\n");
+ av_fifo_peek(ctx->fme_fifo, &ctx->buffered_fme, (size_t) 1, NULL);
+ ctx->buffered_fme.extended_data = ctx->buffered_fme.data;
+ }
+
+ if (!ctx->eos_fme_received) {
+ int8_t bit_depth = 1;
+ ishwframe = ctx->buffered_fme.format == AV_PIX_FMT_NI_QUAD;
+ if (ishwframe) {
+ // Superframe early cleanup of unused outputs
+ niFrameSurface1_t *pOutExtra;
+ if (ctx->buffered_fme.buf[1]) {
+ // NOLINTNEXTLINE(clang-diagnostic-incompatible-pointer-types)
+ pOutExtra = (niFrameSurface1_t *)ctx->buffered_fme.buf[1]->data;
+ if (pOutExtra->ui16FrameIdx != 0) {
+ av_log(avctx, AV_LOG_DEBUG, "Unref unused index %d\n",
+ pOutExtra->ui16FrameIdx);
+ } else {
+ av_log(avctx, AV_LOG_ERROR,
+ "ERROR: Should not be getting superframe with dead "
+ "outputs\n");
+ }
+ av_buffer_unref(&ctx->buffered_fme.buf[1]);
+ if (ctx->buffered_fme.buf[2]) {
+ // NOLINTNEXTLINE(clang-diagnostic-incompatible-pointer-types)
+ pOutExtra = (niFrameSurface1_t *)ctx->buffered_fme.buf[2]->data;
+ if (pOutExtra->ui16FrameIdx != 0) {
+ av_log(avctx, AV_LOG_DEBUG, "Unref unused index %d\n",
+ pOutExtra->ui16FrameIdx);
+ } else {
+ av_log(avctx, AV_LOG_ERROR,
+ "ERROR: Should not be getting superframe with dead "
+ "outputs\n");
+ }
+ av_buffer_unref(&ctx->buffered_fme.buf[2]);
+ }
+ }
+ pOutExtra = (niFrameSurface1_t *)ctx->buffered_fme.data[3];
+ if (ctx->api_ctx.pixel_format == NI_PIX_FMT_ARGB
+ || ctx->api_ctx.pixel_format == NI_PIX_FMT_ABGR
+ || ctx->api_ctx.pixel_format == NI_PIX_FMT_RGBA
+ || ctx->api_ctx.pixel_format == NI_PIX_FMT_BGRA) {
+ bit_depth = 1;
+ } else {
+ bit_depth = pOutExtra->bit_depth;
+ }
+
+ switch (bit_depth) {
+ case 1:
+ case 2:
+ break;
+ default:
+ av_log(avctx, AV_LOG_ERROR, "ERROR: Unknown bit depth %d!\n", bit_depth);
+ return AVERROR_INVALIDDATA;
+ }
+ } else {
+ if (AV_PIX_FMT_YUV420P10BE == ctx->buffered_fme.format ||
+ AV_PIX_FMT_YUV420P10LE == ctx->buffered_fme.format ||
+ AV_PIX_FMT_P010LE == ctx->buffered_fme.format) {
+ bit_depth = 2;
+ }
+ }
+
+ if ((ctx->buffered_fme.height && ctx->buffered_fme.width &&
+ (ctx->buffered_fme.height != avctx->height ||
+ ctx->buffered_fme.width != avctx->width)) ||
+ bit_depth != ctx->api_ctx.bit_depth_factor) {
+ av_log(avctx, AV_LOG_INFO,
+ "xcoder_send_frame resolution change %dx%d "
+ "-> %dx%d or bit depth change %d -> %d\n",
+ avctx->width, avctx->height, ctx->buffered_fme.width,
+ ctx->buffered_fme.height, ctx->api_ctx.bit_depth_factor,
+ bit_depth);
+
+ ctx->api_ctx.session_run_state =
+ SESSION_RUN_STATE_SEQ_CHANGE_DRAINING;
+ ctx->eos_fme_received = 1;
+
+ // have to queue this frame if not already queued (i.e. the fifo is empty)
+ if (is_input_fifo_empty(ctx)) {
+ av_log(avctx, AV_LOG_TRACE,
+ "resolution change when fifo empty, frame "
+ "#%" PRIu64 " being queued ..\n",
+ ctx->api_ctx.frame_num);
+ // unref buffered frame (this buffered frame is taken from input
+ // AVFrame) because we are going to send EOS (instead of sending
+ // buffered frame)
+ if (frame != &ctx->buffered_fme) {
+ av_frame_unref(&ctx->buffered_fme);
+ }
+ ret = enqueue_frame(avctx, frame);
+ if (ret < 0) {
+ return ret;
+ }
+ }
+ }
+ }
+
+ ctx->api_fme.data.frame.preferred_characteristics_data_len = 0;
+ ctx->api_fme.data.frame.end_of_stream = 0;
+ ctx->api_fme.data.frame.force_key_frame =
+ ctx->api_fme.data.frame.use_cur_src_as_long_term_pic =
+ ctx->api_fme.data.frame.use_long_term_ref = 0;
+
+ ctx->api_fme.data.frame.sei_total_len =
+ ctx->api_fme.data.frame.sei_cc_offset = ctx->api_fme.data.frame
+ .sei_cc_len =
+ ctx->api_fme.data.frame.sei_hdr_mastering_display_color_vol_offset =
+ ctx->api_fme.data.frame
+ .sei_hdr_mastering_display_color_vol_len =
+ ctx->api_fme.data.frame
+ .sei_hdr_content_light_level_info_offset =
+ ctx->api_fme.data.frame
+ .sei_hdr_content_light_level_info_len =
+ ctx->api_fme.data.frame.sei_hdr_plus_offset =
+ ctx->api_fme.data.frame.sei_hdr_plus_len = 0;
+
+ ctx->api_fme.data.frame.roi_len = 0;
+ ctx->api_fme.data.frame.reconf_len = 0;
+ ctx->api_fme.data.frame.force_pic_qp = 0;
+
+ if (SESSION_RUN_STATE_SEQ_CHANGE_DRAINING ==
+ ctx->api_ctx.session_run_state ||
+ (ctx->eos_fme_received && is_input_fifo_empty(ctx))) {
+ av_log(avctx, AV_LOG_VERBOSE, "XCoder start flushing\n");
+ ctx->api_fme.data.frame.end_of_stream = 1;
+ ctx->encoder_flushing = 1;
+ } else {
+ format_in_use = ctx->buffered_fme.format;
+
+ // extra data starts with metadata header, various aux data sizes
+ // have been reset above
+ ctx->api_fme.data.frame.extra_data_len =
+ NI_APP_ENC_FRAME_META_DATA_SIZE;
+
+ ctx->api_fme.data.frame.ni_pict_type = 0;
+
+ ret = ni_enc_prep_reconf_demo_data(&ctx->api_ctx, &dec_frame);
+ if (ret < 0) {
+ return ret;
+ }
+
+ // support VFR
+ if (ctx->api_param.enable_vfr) {
+ int cur_fps = 0, pre_fps = 0;
+
+ pre_fps = ctx->api_ctx.prev_fps;
+
+ if (ctx->buffered_fme.pts > ctx->api_ctx.prev_pts) {
+ ctx->api_ctx.passed_time_in_timebase_unit += ctx->buffered_fme.pts - ctx->api_ctx.prev_pts;
+ ctx->api_ctx.count_frame_num_in_sec++;
+ //change the framerate for VFR:
+ //1. only reconfigure the framerate when the fps changes
+ //2. the interval between two framerate change settings shall be greater
+ // than 1 second, or it is the start of transcoding
+ if (ctx->api_ctx.passed_time_in_timebase_unit >= (avctx->time_base.den / avctx->time_base.num)) {
+ //this is a workaround for small-resolution VFR mode:
+ //when a framerate change is detected, reconfiguring the framerate resets
+ //the bitrate parameters; the cost model for bitrate estimation is tuned
+ //for the downsampled flow, but for small resolutions the lookahead does
+ //not downsample
+ int slow_down_vfr = 0;
+ cur_fps = ctx->api_ctx.count_frame_num_in_sec;
+ if (ctx->buffered_fme.width < 288 || ctx->buffered_fme.height < 256) {
+ slow_down_vfr = 1;
+ }
+ if ((ctx->api_ctx.frame_num != 0) && (pre_fps != cur_fps) && (slow_down_vfr ? (abs(cur_fps - pre_fps) > 2) : 1) &&
+ ((ctx->api_ctx.frame_num < ctx->api_param.cfg_enc_params.frame_rate) ||
+ (ctx->api_ctx.frame_num - ctx->api_ctx.last_change_framenum >= ctx->api_param.cfg_enc_params.frame_rate))) {
+ aux_data = ni_frame_new_aux_data(&dec_frame, NI_FRAME_AUX_DATA_FRAMERATE, sizeof(ni_framerate_t));
+ if (aux_data) {
+ ni_framerate_t *framerate = (ni_framerate_t *)aux_data->data;
+ framerate->framerate_num = cur_fps;
+ framerate->framerate_denom = 1;
+ }
+
+ ctx->api_ctx.last_change_framenum = ctx->api_ctx.frame_num;
+ ctx->api_ctx.prev_fps = cur_fps;
+ }
+ ctx->api_ctx.count_frame_num_in_sec = 0;
+ ctx->api_ctx.passed_time_in_timebase_unit = 0;
+ }
+ ctx->api_ctx.prev_pts = ctx->buffered_fme.pts;
+ } else if (ctx->buffered_fme.pts < ctx->api_ctx.prev_pts) {
+ //handle the error case where the pts jumps backwards;
+ //this may cause a small error in the bitrate setting, which is acceptable:
+ //as long as subsequent pts values are normal, it recovers quickly
+ ctx->api_ctx.prev_pts = ctx->buffered_fme.pts;
+ } else {
+ //do nothing when the pts of two adjacent frames is the same;
+ //this may cause a small error in the bitrate setting, which is acceptable:
+ //as long as subsequent pts values are normal, it recovers quickly
+ }
+ }
+
+ // force pic qp demo mode: initial QP (200 frames) -> QP value specified by
+ // ForcePicQpDemoMode (100 frames) -> initial QP (remaining frames)
+ if (p_param->force_pic_qp_demo_mode) {
+ if (ctx->api_ctx.frame_num >= 300) {
+ ctx->api_fme.data.frame.force_pic_qp =
+ p_param->cfg_enc_params.rc.intra_qp;
+ } else if (ctx->api_ctx.frame_num >= 200) {
+ ctx->api_fme.data.frame.force_pic_qp = p_param->force_pic_qp_demo_mode;
+ }
+ }
+
+ // supply QP map if ROI is enabled and ROIs are passed in
+ // Note: ROI demo mode takes priority over side data!
+ side_data = av_frame_get_side_data(&ctx->buffered_fme, AV_FRAME_DATA_REGIONS_OF_INTEREST);
+
+ if (!p_param->roi_demo_mode && p_param->cfg_enc_params.roi_enable &&
+ side_data) {
+ aux_data = ni_frame_new_aux_data(
+ &dec_frame, NI_FRAME_AUX_DATA_REGIONS_OF_INTEREST, side_data->size);
+ if (aux_data) {
+ memcpy(aux_data->data, side_data->data, side_data->size);
+ }
+ }
+
+ // Note: when ROI demo mode is enabled, supply the ROI map for the
+ // specified frame range, and a zeroed map for the others
+ if (QUADRA && p_param->roi_demo_mode &&
+ p_param->cfg_enc_params.roi_enable) {
+ if (ctx->api_ctx.frame_num > 90 && ctx->api_ctx.frame_num < 300) {
+ ctx->api_fme.data.frame.roi_len = ctx->api_ctx.roi_len;
+ } else {
+ ctx->api_fme.data.frame.roi_len = 0;
+ }
+ // when ROI enabled, always have a data buffer for ROI
+ // Note: this is handled separately from ROI through side/aux data
+ ctx->api_fme.data.frame.extra_data_len += ctx->api_ctx.roi_len;
+ }
+
+ if (!p_param->cfg_enc_params.enable_all_sei_passthru) {
+ // SEI (HDR)
+ // content light level info
+ if (!(p_param->cfg_enc_params.HDR10CLLEnable)) { // not user set
+ side_data = av_frame_get_side_data(&ctx->buffered_fme, AV_FRAME_DATA_CONTENT_LIGHT_LEVEL);
+
+ if (side_data && side_data->size == sizeof(AVContentLightMetadata)) {
+ aux_data = ni_frame_new_aux_data(
+ &dec_frame, NI_FRAME_AUX_DATA_CONTENT_LIGHT_LEVEL,
+ sizeof(ni_content_light_level_t));
+ if (aux_data) {
+ memcpy(aux_data->data, side_data->data, side_data->size);
+ }
+ }
+ } else if ((AV_CODEC_ID_H264 == avctx->codec_id ||
+ ctx->api_ctx.bit_depth_factor == 1) &&
+ ctx->api_ctx.light_level_data_len == 0) {
+ // User input maxCLL, so create SEIs for H.264; leave (H.265 && HDR10)
+ // untouched since that is conveyed in the config step.
+ // Quadra autosets this only for the HDR10 format with HEVC.
+ aux_data = ni_frame_new_aux_data(&dec_frame,
+ NI_FRAME_AUX_DATA_CONTENT_LIGHT_LEVEL,
+ sizeof(ni_content_light_level_t));
+ if (aux_data) {
+ ni_content_light_level_t *cll =
+ (ni_content_light_level_t *)(aux_data->data);
+ cll->max_cll = p_param->cfg_enc_params.HDR10MaxLight;
+ cll->max_fall = p_param->cfg_enc_params.HDR10AveLight;
+ }
+ }
+
+ // mastering display color volume
+ if (!(p_param->cfg_enc_params.HDR10Enable)) { // not user set
+ side_data = av_frame_get_side_data(&ctx->buffered_fme, AV_FRAME_DATA_MASTERING_DISPLAY_METADATA);
+ if (side_data && side_data->size == sizeof(AVMasteringDisplayMetadata)) {
+ aux_data = ni_frame_new_aux_data(
+ &dec_frame, NI_FRAME_AUX_DATA_MASTERING_DISPLAY_METADATA,
+ sizeof(ni_mastering_display_metadata_t));
+ if (aux_data) {
+ memcpy(aux_data->data, side_data->data, side_data->size);
+ }
+ }
+ } else if ((AV_CODEC_ID_H264 == avctx->codec_id ||
+ ctx->api_ctx.bit_depth_factor == 1) &&
+ ctx->api_ctx.sei_hdr_mastering_display_color_vol_len == 0) {
+ // User input masterDisplay, so create SEIs for H.264; leave (H.265 &&
+ // HDR10) untouched since that is conveyed in the config step.
+ // Quadra autosets this only for the HDR10 format with HEVC.
+ aux_data = ni_frame_new_aux_data(&dec_frame,
+ NI_FRAME_AUX_DATA_MASTERING_DISPLAY_METADATA,
+ sizeof(ni_mastering_display_metadata_t));
+ if (aux_data) {
+ ni_mastering_display_metadata_t *mst_dsp =
+ (ni_mastering_display_metadata_t *)(aux_data->data);
+
+ //X, Y display primaries for RGB channels and white point (WP) in units
+ //of 0.00002, and max, min luminance (L) values in units of 0.0001 nits;
+ //x/y have denom = 50000 and num = HDR10dx0/y
+ mst_dsp->display_primaries[0][0].den = MASTERING_DISP_CHROMA_DEN;
+ mst_dsp->display_primaries[0][1].den = MASTERING_DISP_CHROMA_DEN;
+ mst_dsp->display_primaries[1][0].den = MASTERING_DISP_CHROMA_DEN;
+ mst_dsp->display_primaries[1][1].den = MASTERING_DISP_CHROMA_DEN;
+ mst_dsp->display_primaries[2][0].den = MASTERING_DISP_CHROMA_DEN;
+ mst_dsp->display_primaries[2][1].den = MASTERING_DISP_CHROMA_DEN;
+ mst_dsp->white_point[0].den = MASTERING_DISP_CHROMA_DEN;
+ mst_dsp->white_point[1].den = MASTERING_DISP_CHROMA_DEN;
+ mst_dsp->min_luminance.den = MASTERING_DISP_LUMA_DEN;
+ mst_dsp->max_luminance.den = MASTERING_DISP_LUMA_DEN;
+ // ni_mastering_display_metadata_t has to be filled with R,G,B
+ // values, in that order, while HDR10d is filled in order of G,B,R,
+ // so do the conversion here.
+ mst_dsp->display_primaries[0][0].num = p_param->cfg_enc_params.HDR10dx2;
+ mst_dsp->display_primaries[0][1].num = p_param->cfg_enc_params.HDR10dy2;
+ mst_dsp->display_primaries[1][0].num = p_param->cfg_enc_params.HDR10dx0;
+ mst_dsp->display_primaries[1][1].num = p_param->cfg_enc_params.HDR10dy0;
+ mst_dsp->display_primaries[2][0].num = p_param->cfg_enc_params.HDR10dx1;
+ mst_dsp->display_primaries[2][1].num = p_param->cfg_enc_params.HDR10dy1;
+ mst_dsp->white_point[0].num = p_param->cfg_enc_params.HDR10wx;
+ mst_dsp->white_point[1].num = p_param->cfg_enc_params.HDR10wy;
+ mst_dsp->min_luminance.num = p_param->cfg_enc_params.HDR10minluma;
+ mst_dsp->max_luminance.num = p_param->cfg_enc_params.HDR10maxluma;
+ mst_dsp->has_primaries = 1;
+ mst_dsp->has_luminance = 1;
+ }
+ }
+
+ // SEI (HDR10+)
+ side_data = av_frame_get_side_data(&ctx->buffered_fme, AV_FRAME_DATA_DYNAMIC_HDR_PLUS);
+ if (side_data && side_data->size == sizeof(AVDynamicHDRPlus)) {
+ aux_data = ni_frame_new_aux_data(&dec_frame, NI_FRAME_AUX_DATA_HDR_PLUS,
+ sizeof(ni_dynamic_hdr_plus_t));
+ if (aux_data) {
+ memcpy(aux_data->data, side_data->data, side_data->size);
+ }
+ } // hdr10+
+
+ // SEI (closed caption)
+ side_data = av_frame_get_side_data(&ctx->buffered_fme, AV_FRAME_DATA_A53_CC);
+
+ if (side_data && side_data->size > 0) {
+ aux_data = ni_frame_new_aux_data(&dec_frame, NI_FRAME_AUX_DATA_A53_CC,
+ side_data->size);
+ if (aux_data) {
+ memcpy(aux_data->data, side_data->data, side_data->size);
+ }
+ }
+
+ // User data unregistered SEI
+ side_data = av_frame_get_side_data(&ctx->buffered_fme, AV_FRAME_DATA_SEI_UNREGISTERED);
+ if (ctx->udu_sei && side_data && side_data->size > 0) {
+ aux_data = ni_frame_new_aux_data(&dec_frame, NI_FRAME_AUX_DATA_UDU_SEI,
+ side_data->size);
+ if (aux_data) {
+ memcpy(aux_data->data, (uint8_t *)side_data->data, side_data->size);
+ }
+ }
+ }
+ if (ctx->api_ctx.force_frame_type) {
+ switch (ctx->buffered_fme.pict_type) {
+ case AV_PICTURE_TYPE_I:
+ ctx->api_fme.data.frame.ni_pict_type = PIC_TYPE_IDR;
+ break;
+ case AV_PICTURE_TYPE_P:
+ ctx->api_fme.data.frame.ni_pict_type = PIC_TYPE_P;
+ break;
+ default:
+ ;
+ }
+ } else if (ctx->buffered_fme.pict_type == AV_PICTURE_TYPE_I) {
+ ctx->api_fme.data.frame.force_key_frame = 1;
+ ctx->api_fme.data.frame.ni_pict_type = PIC_TYPE_IDR;
+ }
+
+ av_log(avctx, AV_LOG_TRACE,
+ "xcoder_send_frame: #%" PRIu64 " ni_pict_type %d"
+ " forced_header_enable %d intraPeriod %d\n",
+ ctx->api_ctx.frame_num, ctx->api_fme.data.frame.ni_pict_type,
+ p_param->cfg_enc_params.forced_header_enable,
+ p_param->cfg_enc_params.intra_period);
+
+ // whether SEI should be sent with this frame
+ send_sei_with_idr = ni_should_send_sei_with_frame(
+ &ctx->api_ctx, ctx->api_fme.data.frame.ni_pict_type, p_param);
+
+ // prep for auxiliary data (various SEI, ROI) in encode frame, based on the
+ // data returned in decoded frame
+ ni_enc_prep_aux_data(&ctx->api_ctx, &ctx->api_fme.data.frame, &dec_frame,
+ ctx->api_ctx.codec_format, send_sei_with_idr,
+ mdcv_data, cll_data, cc_data, udu_data, hdrp_data);
+
+ if (ctx->api_fme.data.frame.sei_total_len > NI_ENC_MAX_SEI_BUF_SIZE) {
+ av_log(avctx, AV_LOG_ERROR, "xcoder_send_frame: sei total length %u exceeds maximum sei size %u.\n",
+ ctx->api_fme.data.frame.sei_total_len, NI_ENC_MAX_SEI_BUF_SIZE);
+ ret = AVERROR(EINVAL);
+ return ret;
+ }
+
+ ctx->api_fme.data.frame.extra_data_len += ctx->api_fme.data.frame.sei_total_len;
+
+ // data layout requirement: leave space for reconfig data if at least one
+ // of reconfig, SEI or ROI is present
+ // Note: ROI is present when enabled, so use the encode config flag instead
+ // of the frame's roi_len, as roi_len can be 0, indicating a zeroed ROI map!
+ if (ctx->api_fme.data.frame.reconf_len ||
+ ctx->api_fme.data.frame.sei_total_len ||
+ p_param->cfg_enc_params.roi_enable) {
+ ctx->api_fme.data.frame.extra_data_len +=
+ sizeof(ni_encoder_change_params_t);
+ }
+
+ ctx->api_fme.data.frame.pts = ctx->buffered_fme.pts;
+ ctx->api_fme.data.frame.dts = ctx->buffered_fme.pkt_dts;
+
+ ctx->api_fme.data.frame.video_width = avctx->width;
+ ctx->api_fme.data.frame.video_height = avctx->height;
+
+ ishwframe = ctx->buffered_fme.format == AV_PIX_FMT_NI_QUAD;
+ if (ctx->api_ctx.auto_dl_handle != 0 || (avctx->height < NI_MIN_HEIGHT) ||
+ (avctx->width < NI_MIN_WIDTH)) {
+ format_in_use = avctx->sw_pix_fmt;
+ ctx->api_ctx.hw_action = 0;
+ ishwframe = 0;
+ }
+ isnv12frame = (format_in_use == AV_PIX_FMT_NV12 || format_in_use == AV_PIX_FMT_P010LE);
+
+ if (ishwframe) {
+ ret = sizeof(niFrameSurface1_t);
+ } else {
+ ret = av_image_get_buffer_size(format_in_use,
+ ctx->buffered_fme.width, ctx->buffered_fme.height, 1);
+ }
+
+ #if FF_API_PKT_PTS
+ // NOLINTNEXTLINE(clang-diagnostic-deprecated-declarations)
+ av_log(avctx, AV_LOG_TRACE, "xcoder_send_frame: pts=%" PRId64 ", pkt_dts=%" PRId64 ", pkt_pts=%" PRId64 "\n", ctx->buffered_fme.pts, ctx->buffered_fme.pkt_dts, ctx->buffered_fme.pkt_pts);
+ #endif
+ av_log(avctx, AV_LOG_TRACE, "xcoder_send_frame: frame->format=%d, frame->width=%d, frame->height=%d, frame->pict_type=%d, size=%d\n", format_in_use, ctx->buffered_fme.width, ctx->buffered_fme.height, ctx->buffered_fme.pict_type, ret);
+ if (ret < 0) {
+ return ret;
+ }
+
+ int dst_stride[NI_MAX_NUM_DATA_POINTERS] = {0};
+ int height_aligned[NI_MAX_NUM_DATA_POINTERS] = {0};
+ int src_height[NI_MAX_NUM_DATA_POINTERS] = {0};
+
+ src_height[0] = ctx->buffered_fme.height;
+ src_height[1] = ctx->buffered_fme.height / 2;
+ src_height[2] = (isnv12frame) ? 0 : (ctx->buffered_fme.height / 2);
+ if (avctx->sw_pix_fmt == AV_PIX_FMT_ARGB ||
+ avctx->sw_pix_fmt == AV_PIX_FMT_RGBA ||
+ avctx->sw_pix_fmt == AV_PIX_FMT_ABGR ||
+ avctx->sw_pix_fmt == AV_PIX_FMT_BGRA) {
+ src_height[0] = ctx->buffered_fme.height;
+ src_height[1] = 0;
+ src_height[2] = 0;
+ alignment_2pass_wa = 0;
+ }
+ /* The root cause of the problem is that when using out=sw and noautoscale=0,
+ * the incoming stream initially conforms to zero copy, so the firmware
+ * allocates memory for zero copy. The resolution then changes, but the
+ * software inserts autoscale, so the encoder cannot detect the resolution
+ * change and cannot reopen. However, because FFmpeg aligns the linesize
+ * to 64, the linesize is no longer consistent with the linesize we
+ * initially decoded, so encoding takes the non-zero-copy path. At this
+ * point, because lookahead > 0, the software sends the firmware a size
+ * larger than originally requested, so the send cannot complete. Moreover,
+ * the firmware is unable to perceive the linesize change and respond
+ * accordingly. To fix this, we check the linesize and disable the 2-pass
+ * workaround.
+ */
+ if (ctx->api_param.luma_linesize) {
+ alignment_2pass_wa = false;
+ }
+
+ ni_get_min_frame_dim(ctx->buffered_fme.width,
+ ctx->buffered_fme.height,
+ ctx->api_ctx.pixel_format,
+ dst_stride, height_aligned);
+
+ av_log(avctx, AV_LOG_TRACE,
+ "xcoder_send_frame frame->width %d "
+ "ctx->api_ctx.bit_depth_factor %d dst_stride[0/1/2] %d/%d/%d sw_pix_fmt %d\n",
+ ctx->buffered_fme.width, ctx->api_ctx.bit_depth_factor,
+ dst_stride[0], dst_stride[1], dst_stride[2], avctx->sw_pix_fmt);
+
+ if (alignment_2pass_wa && !ishwframe) {
+ if (isnv12frame) {
+ // for 2-pass encode output mismatch WA, need to extend (and
+ // pad) CbCr plane height, because the 1st pass assumes the
+ // input is 32-aligned
+ height_aligned[1] = FFALIGN(height_aligned[0], 32) / 2;
+ } else {
+ // for 2-pass encode output mismatch WA, need to extend (and
+ // pad) Cr plane height, because the 1st pass assumes the input
+ // is 32-aligned
+ height_aligned[2] = FFALIGN(height_aligned[0], 32) / 2;
+ }
+ }
+
+ // alignment(16) extra padding for H.264 encoding
+ if (ishwframe) {
+ uint8_t *dsthw;
+ const uint8_t *srchw;
+
+ ni_frame_buffer_alloc_hwenc(
+ &(ctx->api_fme.data.frame), ctx->buffered_fme.width,
+ ctx->buffered_fme.height,
+ (int)ctx->api_fme.data.frame.extra_data_len);
+ if (!ctx->api_fme.data.frame.p_data[3]) {
+ return AVERROR(ENOMEM);
+ }
+ dsthw = ctx->api_fme.data.frame.p_data[3];
+ srchw = (const uint8_t *)ctx->buffered_fme.data[3];
+ av_log(avctx, AV_LOG_TRACE, "dst=%p src=%p len=%d\n", dsthw, srchw,
+ ctx->api_fme.data.frame.data_len[3]);
+ memcpy(dsthw, srchw, ctx->api_fme.data.frame.data_len[3]);
+ av_log(avctx, AV_LOG_TRACE,
+ "ctx->buffered_fme.data[3] %p memcpy to %p\n",
+ ctx->buffered_fme.data[3], dsthw);
+ } else { // traditional yuv transfer
+ av_log(avctx, AV_LOG_TRACE, "%s %s %d buffered_fme.data[0] %p data[3] %p wxh %u %u dst_stride[0] %d %d linesize[0] %d %d data[1] %p %p data[2] %p %p data[3] %p buf[0] %p crop(t:b:l:r) %zu:%zu:%zu:%zu avctx(w:h:cw:ch) %u:%u:%u:%u\n",
+ __FILE__, __FUNCTION__, __LINE__,
+ ctx->buffered_fme.data[0], ctx->buffered_fme.data[3],
+ ctx->buffered_fme.width, ctx->buffered_fme.height,
+ dst_stride[0], dst_stride[1],
+ ctx->buffered_fme.linesize[0], ctx->buffered_fme.linesize[1],
+ ctx->buffered_fme.data[1], ctx->buffered_fme.data[0] + dst_stride[0] * ctx->buffered_fme.height,
+ ctx->buffered_fme.data[2], ctx->buffered_fme.data[1] + dst_stride[1] * ctx->buffered_fme.height / 2,
+ ctx->buffered_fme.data[3],
+ ctx->buffered_fme.buf[0],
+ ctx->buffered_fme.crop_top, ctx->buffered_fme.crop_bottom, ctx->buffered_fme.crop_left, ctx->buffered_fme.crop_right,
+ avctx->width, avctx->height,
+ avctx->coded_width, avctx->coded_height);
+
+ // check input resolution zero copy compatible or not
+ if (ni_encoder_frame_zerocopy_check(&ctx->api_ctx,
+ p_param, ctx->buffered_fme.width, ctx->buffered_fme.height,
+ (const int *)ctx->buffered_fme.linesize, false) == NI_RETCODE_SUCCESS) {
+ need_to_copy = 0;
+ // alloc metadata buffer etc. (if needed)
+ ret = ni_encoder_frame_zerocopy_buffer_alloc(
+ &(ctx->api_fme.data.frame), ctx->buffered_fme.width,
+ ctx->buffered_fme.height, (const int *)ctx->buffered_fme.linesize, (const uint8_t **)ctx->buffered_fme.data,
+ (int)ctx->api_fme.data.frame.extra_data_len);
+ if (ret != NI_RETCODE_SUCCESS)
+ return AVERROR(ENOMEM);
+ } else {
+ // if linesize changes (while resolution remains the same), copy to previously configured linesizes
+ if (p_param->luma_linesize && p_param->chroma_linesize) {
+ dst_stride[0] = p_param->luma_linesize;
+ dst_stride[1] = p_param->chroma_linesize;
+ dst_stride[2] = isnv12frame ? 0 : p_param->chroma_linesize;
+ }
+ ni_encoder_sw_frame_buffer_alloc(
+ !isnv12frame, &(ctx->api_fme.data.frame), ctx->buffered_fme.width,
+ height_aligned[0], dst_stride, (avctx->codec_id == AV_CODEC_ID_H264),
+ (int)ctx->api_fme.data.frame.extra_data_len, alignment_2pass_wa);
+ }
+ av_log(avctx, AV_LOG_TRACE, "%p need_to_copy %d! pts = %" PRId64 "\n", ctx->api_fme.data.frame.p_buffer, need_to_copy, ctx->buffered_fme.pts);
+ if (!ctx->api_fme.data.frame.p_data[0]) {
+ return AVERROR(ENOMEM);
+ }
+
+ // if this is indeed a sw frame, do the YUV data layout; otherwise we may
+ // need to do a frame download
+ if (ctx->buffered_fme.format != AV_PIX_FMT_NI_QUAD) {
+ av_log(
+ avctx, AV_LOG_TRACE,
+ "xcoder_send_frame: fme.data_len[0]=%d, "
+ "buf_fme->linesize=%d/%d/%d, dst alloc linesize = %d/%d/%d, "
+ "src height = %d/%d/%d, dst height aligned = %d/%d/%d, "
+ "force_key_frame=%d, extra_data_len=%d sei_size=%d "
+ "(hdr_content_light_level %u hdr_mastering_display_color_vol %u "
+ "hdr10+ %u cc %u udu %u prefC %u) roi_size=%u reconf_size=%u "
+ "force_pic_qp=%u "
+ "use_cur_src_as_long_term_pic %u use_long_term_ref %u\n",
+ ctx->api_fme.data.frame.data_len[0],
+ ctx->buffered_fme.linesize[0], ctx->buffered_fme.linesize[1],
+ ctx->buffered_fme.linesize[2], dst_stride[0], dst_stride[1],
+ dst_stride[2], src_height[0], src_height[1], src_height[2],
+ height_aligned[0], height_aligned[1], height_aligned[2],
+ ctx->api_fme.data.frame.force_key_frame,
+ ctx->api_fme.data.frame.extra_data_len,
+ ctx->api_fme.data.frame.sei_total_len,
+ ctx->api_fme.data.frame.sei_hdr_content_light_level_info_len,
+ ctx->api_fme.data.frame.sei_hdr_mastering_display_color_vol_len,
+ ctx->api_fme.data.frame.sei_hdr_plus_len,
+ ctx->api_fme.data.frame.sei_cc_len,
+ ctx->api_fme.data.frame.sei_user_data_unreg_len,
+ ctx->api_fme.data.frame.preferred_characteristics_data_len,
+ (p_param->cfg_enc_params.roi_enable ? ctx->api_ctx.roi_len : 0),
+ ctx->api_fme.data.frame.reconf_len,
+ ctx->api_fme.data.frame.force_pic_qp,
+ ctx->api_fme.data.frame.use_cur_src_as_long_term_pic,
+ ctx->api_fme.data.frame.use_long_term_ref);
+
+ // YUV part of the encoder input data layout
+ if (need_to_copy) {
+ ni_copy_frame_data(
+ (uint8_t **)(ctx->api_fme.data.frame.p_data),
+ ctx->buffered_fme.data, ctx->buffered_fme.width,
+ ctx->buffered_fme.height, ctx->api_ctx.bit_depth_factor,
+ ctx->api_ctx.pixel_format, p_param->cfg_enc_params.conf_win_right, dst_stride,
+ height_aligned, ctx->buffered_fme.linesize, src_height);
+ }
+ } else {
+ ni_session_data_io_t *p_session_data;
+ ni_session_data_io_t niframe;
+ niFrameSurface1_t *src_surf;
+
+ av_log(avctx, AV_LOG_DEBUG,
+ "xcoder_send_frame:Autodownload to be run: hdl: %d w: %d h: %d\n",
+ ctx->api_ctx.auto_dl_handle, avctx->width, avctx->height);
+ avhwf_ctx =
+ (AVHWFramesContext *)ctx->buffered_fme.hw_frames_ctx->data;
+ nif_src_ctx = (AVNIFramesContext*) avhwf_ctx->hwctx;
+
+ src_surf = (niFrameSurface1_t *)ctx->buffered_fme.data[3];
+
+ if (avctx->height < NI_MIN_HEIGHT || avctx->width < NI_MIN_WIDTH) {
+ int bit_depth;
+ int is_planar;
+
+ p_session_data = &niframe;
+ memset(&niframe, 0, sizeof(niframe));
+ bit_depth = ((avctx->sw_pix_fmt == AV_PIX_FMT_YUV420P10LE) ||
+ (avctx->sw_pix_fmt == AV_PIX_FMT_P010LE))
+ ? 2
+ : 1;
+ is_planar = (avctx->sw_pix_fmt == AV_PIX_FMT_YUV420P) ||
+ (avctx->sw_pix_fmt == AV_PIX_FMT_YUV420P10LE);
+
+ /* Allocate a minimal frame */
+ ni_enc_frame_buffer_alloc(&niframe.data.frame, avctx->width,
+ avctx->height, 0, /* alignment */
+ 1, /* metadata */
+ bit_depth, 0, /* hw_frame_count */
+ is_planar, ctx->api_ctx.pixel_format);
+ } else {
+ p_session_data = &(ctx->api_fme);
+ }
+
+ nif_src_ctx->api_ctx.is_auto_dl = true;
+ ret = ni_device_session_hwdl(&nif_src_ctx->api_ctx, p_session_data,
+ src_surf);
+ ishwframe = false;
+ if (ret <= 0) {
+ av_log(avctx, AV_LOG_ERROR,
+ "nienc.c:ni_hwdl_frame() failed to retrieve frame\n");
+ return AVERROR_EXTERNAL;
+ }
+
+ if ((avctx->height < NI_MIN_HEIGHT) ||
+ (avctx->width < NI_MIN_WIDTH)) {
+ int nb_planes = av_pix_fmt_count_planes(avctx->sw_pix_fmt);
+ int ni_fmt = ctx->api_ctx.pixel_format;
+ ni_expand_frame(&ctx->api_fme.data.frame,
+ &p_session_data->data.frame, dst_stride,
+ avctx->width, avctx->height, ni_fmt, nb_planes);
+
+ ni_frame_buffer_free(&niframe.data.frame);
+ }
+ }
+ } // end if hwframe else
+
+ // auxiliary data part of the encoder input data layout
+ ni_enc_copy_aux_data(&ctx->api_ctx, &ctx->api_fme.data.frame, &dec_frame,
+ ctx->api_ctx.codec_format, mdcv_data, cll_data,
+ cc_data, udu_data, hdrp_data, ishwframe, isnv12frame);
+
+ ni_frame_buffer_free(&dec_frame);
+ } // end non seq change
+
+ sent = ni_device_session_write(&ctx->api_ctx, &ctx->api_fme, NI_DEVICE_TYPE_ENCODER);
+
+ av_log(avctx, AV_LOG_DEBUG, "xcoder_send_frame: size %d sent to xcoder\n", sent);
+
+ // return EIO on error
+ if (NI_RETCODE_ERROR_VPU_RECOVERY == sent) {
+ sent = xcoder_encode_reset(avctx);
+ if (sent < 0) {
+ av_log(avctx, AV_LOG_ERROR, "xcoder_send_frame(): VPU recovery failed:%d, returning EIO\n", sent);
+ ret = AVERROR(EIO);
+ }
+ } else if (sent < 0) {
+ av_log(avctx, AV_LOG_ERROR, "xcoder_send_frame(): failed to send (%d), "
+ "returning EIO\n", sent);
+ ret = AVERROR(EIO);
+
+ // if rejected due to a sequence change in progress, revert the resolution
+ // settings; they will be applied again next time.
+ if (ctx->api_fme.data.frame.start_of_stream &&
+ (avctx->width != orig_avctx_width ||
+ avctx->height != orig_avctx_height)) {
+ avctx->width = orig_avctx_width;
+ avctx->height = orig_avctx_height;
+ }
+ return ret;
+ } else {
+ av_log(avctx, AV_LOG_DEBUG, "xcoder_send_frame(): sent (%d)\n", sent);
+ if (sent == 0) {
+ // case of sequence change in progress
+ if (ctx->api_fme.data.frame.start_of_stream &&
+ (avctx->width != orig_avctx_width ||
+ avctx->height != orig_avctx_height)) {
+ avctx->width = orig_avctx_width;
+ avctx->height = orig_avctx_height;
+ }
+
+ // When the write buffer is full, drop the frame and return EAGAIN in
+ // strict timeout mode; otherwise buffer the frame to be sent out later
+ // via the encode2 API. Queue the frame only if it has not been queued
+ // yet, i.e. the queue is empty *and* the frame is valid.
+ if (ctx->api_ctx.status == NI_RETCODE_NVME_SC_WRITE_BUFFER_FULL) {
+ ishwframe = ctx->buffered_fme.format == AV_PIX_FMT_NI_QUAD;
+ if (ishwframe) {
+ // Do not queue HW frames: multiple HW frames queued up in nienc can
+ // leave the decoder unable to acquire buffers, stalling FFmpeg
+ av_log(avctx, AV_LOG_ERROR, "xcoder_send_frame(): device WRITE_BUFFER_FULL caused HW frame drop! (approx. frame num #%" PRIu64 ")\n", ctx->api_ctx.frame_num);
+ av_frame_unref(&ctx->buffered_fme);
+ ret = 1;
+ } else {
+ av_log(avctx, AV_LOG_DEBUG, "xcoder_send_frame(): Write buffer full, enqueue frame and return 0\n");
+ ret = 0;
+
+ if (frame && is_input_fifo_empty(ctx)) {
+ ret = enqueue_frame(avctx, frame);
+ if (ret < 0) {
+ return ret;
+ }
+ }
+ }
+ }
+ } else {
+ ishwframe = (ctx->buffered_fme.format == AV_PIX_FMT_NI_QUAD) &&
+ (ctx->api_ctx.auto_dl_handle == 0) &&
+ (avctx->height >= NI_MIN_HEIGHT) &&
+ (avctx->width >= NI_MIN_WIDTH);
+
+ if (!ctx->eos_fme_received && ishwframe) {
+ av_log(avctx, AV_LOG_TRACE, "AVframe_index = %d at head %d\n",
+ ctx->aFree_Avframes_list[ctx->freeHead], ctx->freeHead);
+ av_frame_ref(
+ ctx->sframe_pool[ctx->aFree_Avframes_list[ctx->freeHead]],
+ &ctx->buffered_fme);
+ av_log(avctx, AV_LOG_TRACE,
+ "AVframe_index = %d popped from free head %d\n",
+ ctx->aFree_Avframes_list[ctx->freeHead], ctx->freeHead);
+ av_log(avctx, AV_LOG_TRACE,
+ "ctx->buffered_fme.data[3] %p sframe_pool[%d]->data[3] %p\n",
+ ctx->buffered_fme.data[3],
+ ctx->aFree_Avframes_list[ctx->freeHead],
+ ctx->sframe_pool[ctx->aFree_Avframes_list[ctx->freeHead]]
+ ->data[3]);
+ if (ctx->sframe_pool[ctx->aFree_Avframes_list[ctx->freeHead]]
+ ->data[3]) {
+ av_log(avctx, AV_LOG_DEBUG,
+ "nienc.c sframe_pool[%d] trace ui16FrameIdx = [%u] sent\n",
+ ctx->aFree_Avframes_list[ctx->freeHead],
+ ((niFrameSurface1_t
+ *)((uint8_t *)ctx
+ ->sframe_pool
+ [ctx->aFree_Avframes_list[ctx->freeHead]]
+ ->data[3]))
+ ->ui16FrameIdx);
+ av_log(
+ avctx, AV_LOG_TRACE,
+ "xcoder_send_frame: after ref sframe_pool, hw frame "
+ "av_buffer_get_ref_count=%d, data[3]=%p\n",
+ av_buffer_get_ref_count(
+ ctx->sframe_pool[ctx->aFree_Avframes_list[ctx->freeHead]]
+ ->buf[0]),
+ ctx->sframe_pool[ctx->aFree_Avframes_list[ctx->freeHead]]
+ ->data[3]);
+ }
+ if (deq_free_frames(ctx) != 0) {
+ ret = AVERROR_EXTERNAL;
+ return ret;
+ }
+ }
+
+ // remove the frame from the fifo only after it was sent successfully,
+ // and only if this is NOT sequence-change flushing (in which case only
+ // the eos was sent, not the first sequence-change packet)
+ if (SESSION_RUN_STATE_SEQ_CHANGE_DRAINING !=
+ ctx->api_ctx.session_run_state) {
+ if (!is_input_fifo_empty(ctx)) {
+ av_fifo_drain2(ctx->fme_fifo, (size_t) 1);
+ av_log(avctx, AV_LOG_DEBUG, "fme popped, fifo num frames: %zu\n",
+ av_fifo_can_read(ctx->fme_fifo));
+ }
+ // determine the HW-frame status before av_frame_unref() resets format
+ ishwframe = (ctx->buffered_fme.format == AV_PIX_FMT_NI_QUAD) &&
+ (ctx->api_ctx.auto_dl_handle == 0);
+ av_frame_unref(&ctx->buffered_fme);
+ if (ishwframe) {
+ if (ctx->buffered_fme.buf[0])
+ av_log(avctx, AV_LOG_TRACE, "xcoder_send_frame: after unref buffered_fme, hw frame av_buffer_get_ref_count=%d\n", av_buffer_get_ref_count(ctx->buffered_fme.buf[0]));
+ else
+ av_log(avctx, AV_LOG_TRACE, "xcoder_send_frame: after unref buffered_fme, hw frame av_buffer_get_ref_count=0 (buf[0] is NULL)\n");
+ }
+ } else {
+ av_log(avctx, AV_LOG_TRACE, "XCoder frame (eos) sent, sequence changing!"
+ " No fifo pop!\n");
+ }
+
+ // push the input pts into the circular FIFO
+ ctx->api_ctx.enc_pts_list[ctx->api_ctx.enc_pts_w_idx % NI_FIFO_SZ] = ctx->api_fme.data.frame.pts;
+ ctx->api_ctx.enc_pts_w_idx++;
+
+ // check again before returning: if there are no more frames in the fifo
+ // to send and the eos (NULL) frame was received from upstream, flag for flushing
+ if (ctx->eos_fme_received && is_input_fifo_empty(ctx)) {
+ av_log(avctx, AV_LOG_DEBUG, "Upper stream EOS frame received, fifo "
+ "empty, start flushing ..\n");
+ ctx->encoder_flushing = 1;
+ }
+
+ ret = 0;
+ }
+ }
+
+ // Try to flush the encoder input fifo unless a sequence change is still
+ // draining: sending a frame before the sequence change completes may stall
+ // the encoder, because the new frame's resolution can differ from that of
+ // the last sequence. The fifo must be flushed since its size grows with
+ // each sequence change.
+ if (ret == 0 && frame && !is_input_fifo_empty(ctx) &&
+ SESSION_RUN_STATE_SEQ_CHANGE_DRAINING != ctx->api_ctx.session_run_state) {
+ av_log(avctx, AV_LOG_DEBUG, "try to flush encoder input fifo. Fifo num frames: %zu\n",
+ av_fifo_can_read(ctx->fme_fifo));
+ goto resend;
+ }
+
+ if (ctx->encoder_flushing) {
+ av_log(avctx, AV_LOG_DEBUG, "xcoder_send_frame flushing ..\n");
+ ret = ni_device_session_flush(&ctx->api_ctx, NI_DEVICE_TYPE_ENCODER);
+ }
+
+ av_log(avctx, AV_LOG_VERBOSE, "XCoder send frame return %d\n", ret);
+ return ret;
+}
+
+static int xcoder_encode_reinit(AVCodecContext *avctx)
+{
+ int ret = 0;
+ XCoderEncContext *ctx = avctx->priv_data;
+ bool ishwframe;
+ ni_device_handle_t device_handle = ctx->api_ctx.device_handle;
+ ni_device_handle_t blk_io_handle = ctx->api_ctx.blk_io_handle;
+ int hw_id = ctx->api_ctx.hw_id;
+ char tmp_blk_dev_name[NI_MAX_DEVICE_NAME_LEN];
+ int bit_depth = 1;
+ int pix_fmt = AV_PIX_FMT_YUV420P;
+ int stride, ori_stride;
+ bool bIsSmallPicture = false;
+ AVFrame temp_frame;
+ ni_xcoder_params_t *p_param = &ctx->api_param;
+
+ ff_xcoder_strncpy(tmp_blk_dev_name, ctx->api_ctx.blk_dev_name,
+ NI_MAX_DEVICE_NAME_LEN);
+
+ // re-init avctx's resolution to the changed one that is
+ // stored in the first frame of the fifo
+ av_fifo_peek(ctx->fme_fifo, &temp_frame, (size_t) 1, NULL);
+ temp_frame.extended_data = temp_frame.data;
+
+ ishwframe = temp_frame.format == AV_PIX_FMT_NI_QUAD;
+
+ if (ishwframe) {
+ bit_depth = (uint8_t)((niFrameSurface1_t*)((uint8_t*)temp_frame.data[3]))->bit_depth;
+ av_log(avctx, AV_LOG_INFO, "xcoder_receive_packet hw frame bit depth "
+ "changing %d -> %d\n",
+ ctx->api_ctx.bit_depth_factor, bit_depth);
+
+ switch (avctx->sw_pix_fmt) {
+ case AV_PIX_FMT_YUV420P:
+ case AV_PIX_FMT_YUVJ420P:
+ if (bit_depth == 2) {
+ avctx->sw_pix_fmt = AV_PIX_FMT_YUV420P10LE;
+ pix_fmt = NI_PIX_FMT_YUV420P10LE;
+ } else {
+ pix_fmt = NI_PIX_FMT_YUV420P;
+ }
+ break;
+ case AV_PIX_FMT_YUV420P10LE:
+ if (bit_depth == 1) {
+ avctx->sw_pix_fmt = AV_PIX_FMT_YUV420P;
+ pix_fmt = NI_PIX_FMT_YUV420P;
+ } else {
+ pix_fmt = NI_PIX_FMT_YUV420P10LE;
+ }
+ break;
+ case AV_PIX_FMT_NV12:
+ if (bit_depth == 2) {
+ avctx->sw_pix_fmt = AV_PIX_FMT_P010LE;
+ pix_fmt = NI_PIX_FMT_P010LE;
+ } else {
+ pix_fmt = NI_PIX_FMT_NV12;
+ }
+ break;
+ case AV_PIX_FMT_P010LE:
+ if (bit_depth == 1) {
+ avctx->sw_pix_fmt = AV_PIX_FMT_NV12;
+ pix_fmt = NI_PIX_FMT_NV12;
+ } else {
+ pix_fmt = NI_PIX_FMT_P010LE;
+ }
+ break;
+ case AV_PIX_FMT_NI_QUAD_10_TILE_4X4:
+ if (bit_depth == 1) {
+ avctx->sw_pix_fmt = AV_PIX_FMT_NI_QUAD_8_TILE_4X4;
+ pix_fmt = NI_PIX_FMT_8_TILED4X4;
+ } else {
+ pix_fmt = NI_PIX_FMT_10_TILED4X4;
+ }
+ break;
+ case AV_PIX_FMT_NI_QUAD_8_TILE_4X4:
+ if (bit_depth == 2) {
+ avctx->sw_pix_fmt = AV_PIX_FMT_NI_QUAD_10_TILE_4X4;
+ pix_fmt = NI_PIX_FMT_10_TILED4X4;
+ } else {
+ pix_fmt = NI_PIX_FMT_8_TILED4X4;
+ }
+ break;
+ case AV_PIX_FMT_ARGB:
+ pix_fmt = NI_PIX_FMT_ARGB;
+ break;
+ case AV_PIX_FMT_ABGR:
+ pix_fmt = NI_PIX_FMT_ABGR;
+ break;
+ case AV_PIX_FMT_RGBA:
+ pix_fmt = NI_PIX_FMT_RGBA;
+ break;
+ case AV_PIX_FMT_BGRA:
+ pix_fmt = NI_PIX_FMT_BGRA;
+ break;
+ default:
+ pix_fmt = NI_PIX_FMT_NONE;
+ break;
+ }
+ } else {
+ switch (temp_frame.format) {
+ case AV_PIX_FMT_YUV420P:
+ case AV_PIX_FMT_YUVJ420P:
+ pix_fmt = NI_PIX_FMT_YUV420P;
+ bit_depth = 1;
+ break;
+ case AV_PIX_FMT_NV12:
+ pix_fmt = NI_PIX_FMT_NV12;
+ bit_depth = 1;
+ break;
+ case AV_PIX_FMT_YUV420P10LE:
+ pix_fmt = NI_PIX_FMT_YUV420P10LE;
+ bit_depth = 2;
+ break;
+ case AV_PIX_FMT_P010LE:
+ pix_fmt = NI_PIX_FMT_P010LE;
+ bit_depth = 2;
+ break;
+ default:
+ pix_fmt = NI_PIX_FMT_NONE;
+ break;
+ }
+ }
+
+ ctx->eos_fme_received = 0;
+ ctx->encoder_eof = 0;
+ ctx->encoder_flushing = 0;
+ ctx->firstPktArrived = 0;
+ ctx->spsPpsArrived = 0;
+ ctx->spsPpsHdrLen = 0;
+ av_freep(&ctx->p_spsPpsHdr);
+ ctx->seqChangeCount++;
+
+ // check if resolution is zero copy compatible and set linesize according to new resolution
+ if (ni_encoder_frame_zerocopy_check(&ctx->api_ctx,
+ p_param, temp_frame.width, temp_frame.height,
+ (const int *)temp_frame.linesize, true) == NI_RETCODE_SUCCESS) {
+ stride = p_param->luma_linesize; // new sequence is zero copy compatible
+ } else {
+ stride = FFALIGN(temp_frame.width*bit_depth, 128);
+ }
+
+ if (ctx->api_ctx.ori_luma_linesize && ctx->api_ctx.ori_chroma_linesize) {
+ ori_stride = ctx->api_ctx.ori_luma_linesize; // previous sequence was zero copy compatible
+ } else {
+ ori_stride = FFALIGN(ctx->api_ctx.ori_width*bit_depth, 128);
+ }
+
+ if (pix_fmt == NI_PIX_FMT_ARGB
+ || pix_fmt == NI_PIX_FMT_ABGR
+ || pix_fmt == NI_PIX_FMT_RGBA
+ || pix_fmt == NI_PIX_FMT_BGRA) {
+ stride = temp_frame.width;
+ ori_stride = ctx->api_ctx.ori_width;
+ }
+
+ if (ctx->api_param.cfg_enc_params.lookAheadDepth
+ || ctx->api_param.cfg_enc_params.crf >= 0
+ || ctx->api_param.cfg_enc_params.crfFloat >= 0) {
+ av_log(avctx, AV_LOG_DEBUG, "xcoder_encode_reinit 2-pass "
+ "lookaheadDepth %d and/or CRF %d and/or CRFFloat %f\n",
+ ctx->api_param.cfg_enc_params.lookAheadDepth,
+ ctx->api_param.cfg_enc_params.crf,
+ ctx->api_param.cfg_enc_params.crfFloat);
+ if ((temp_frame.width < NI_2PASS_ENCODE_MIN_WIDTH) ||
+ (temp_frame.height < NI_2PASS_ENCODE_MIN_HEIGHT)) {
+ bIsSmallPicture = true;
+ }
+ } else {
+ if ((temp_frame.width < NI_MIN_WIDTH) ||
+ (temp_frame.height < NI_MIN_HEIGHT)) {
+ bIsSmallPicture = true;
+ }
+ }
+
+ if (ctx->api_param.cfg_enc_params.multicoreJointMode) {
+ av_log(avctx, AV_LOG_DEBUG, "xcoder_encode_reinit multicore "
+ "joint mode\n");
+ if ((temp_frame.width < 256) ||
+ (temp_frame.height < 256)) {
+ bIsSmallPicture = true;
+ }
+ }
+
+ if (ctx->api_param.cfg_enc_params.crop_width || ctx->api_param.cfg_enc_params.crop_height) {
+ av_log(avctx, AV_LOG_DEBUG, "xcoder_encode_reinit needs to close and re-open "
+ "due to crop width x height\n");
+ bIsSmallPicture = true;
+ }
+
+ av_log(avctx, AV_LOG_INFO, "%s resolution "
+ "changing %dx%d -> %dx%d "
+ "format %d -> %d "
+ "original stride %d height %d pix fmt %d "
+ "new stride %d height %d pix fmt %d \n",
+ __func__, avctx->width, avctx->height,
+ temp_frame.width, temp_frame.height,
+ avctx->pix_fmt, temp_frame.format,
+ ori_stride, ctx->api_ctx.ori_height, ctx->api_ctx.ori_pix_fmt,
+ stride, temp_frame.height, pix_fmt);
+
+ avctx->width = temp_frame.width;
+ avctx->height = temp_frame.height;
+ avctx->pix_fmt = temp_frame.format;
+
+ // fast sequence change without close / open only if new resolution < original resolution
+ if ((ori_stride*ctx->api_ctx.ori_height < stride*temp_frame.height) ||
+ (ctx->api_ctx.ori_pix_fmt != pix_fmt) ||
+ bIsSmallPicture ||
+ (avctx->codec_id == AV_CODEC_ID_MJPEG) ||
+ ctx->api_param.cfg_enc_params.disable_adaptive_buffers) {
+ xcoder_encode_close(avctx);
+ ret = xcoder_encode_init(avctx);
+ // clear crop parameters upon sequence change because the cropping values may
+ // not be compatible with the new resolution
+ // (except for Motion Constrained mode 2, for which we crop to 64x64 alignment)
+ if (ctx->api_param.cfg_enc_params.motionConstrainedMode == MOTION_CONSTRAINED_QUALITY_MODE && avctx->codec_id == AV_CODEC_ID_HEVC) {
+ ctx->api_param.cfg_enc_params.crop_width = (temp_frame.width / 64 * 64);
+ ctx->api_param.cfg_enc_params.crop_height = (temp_frame.height / 64 * 64);
+ ctx->api_param.cfg_enc_params.hor_offset = ctx->api_param.cfg_enc_params.ver_offset = 0;
+ av_log(avctx, AV_LOG_DEBUG, "xcoder_encode_reinit sets "
+ "crop width x height to %d x %d for Motion Constrained mode 2\n",
+ ctx->api_param.cfg_enc_params.crop_width,
+ ctx->api_param.cfg_enc_params.crop_height);
+ } else {
+ ctx->api_param.cfg_enc_params.crop_width = ctx->api_param.cfg_enc_params.crop_height = 0;
+ ctx->api_param.cfg_enc_params.hor_offset = ctx->api_param.cfg_enc_params.ver_offset = 0;
+ }
+ } else {
+ if (avctx->codec_id == AV_CODEC_ID_AV1) {
+ // AV1 8x8 alignment HW limitation is now worked around by FW cropping input resolution
+ if (temp_frame.width % NI_PARAM_AV1_ALIGN_WIDTH_HEIGHT)
+ av_log(avctx, AV_LOG_ERROR,
+ "resolution change: AV1 Picture Width not aligned to %d - picture will be cropped\n",
+ NI_PARAM_AV1_ALIGN_WIDTH_HEIGHT);
+
+ if (temp_frame.height % NI_PARAM_AV1_ALIGN_WIDTH_HEIGHT)
+ av_log(avctx, AV_LOG_ERROR,
+ "resolution change: AV1 Picture Height not aligned to %d - picture will be cropped\n",
+ NI_PARAM_AV1_ALIGN_WIDTH_HEIGHT);
+ }
+ ret = xcoder_encode_sequence_change(avctx, temp_frame.width, temp_frame.height, bit_depth);
+ }
+
+ // keep device handle(s) open during sequence change to fix mem bin buffer not recycled
+ ctx->api_ctx.device_handle = device_handle;
+ ctx->api_ctx.blk_io_handle = blk_io_handle;
+ ctx->api_ctx.hw_id = hw_id;
+ ff_xcoder_strncpy(ctx->api_ctx.blk_dev_name, tmp_blk_dev_name,
+ NI_MAX_DEVICE_NAME_LEN);
+ ctx->api_ctx.session_run_state = SESSION_RUN_STATE_SEQ_CHANGE_OPENING; // this state is referenced when sending first frame after sequence change
+
+ return ret;
+}
+
+int xcoder_receive_packet(AVCodecContext *avctx, AVPacket *pkt)
+{
+ XCoderEncContext *ctx = avctx->priv_data;
+ int i, ret = 0;
+ int recv;
+ AVFrame *frame = NULL;
+ ni_packet_t *xpkt = &ctx->api_pkt.data.packet;
+ bool av1_output_frame = 0;
+
+ av_log(avctx, AV_LOG_VERBOSE, "XCoder receive packet\n");
+
+ if (ctx->encoder_eof) {
+ av_log(avctx, AV_LOG_VERBOSE, "xcoder_receive_packet: EOS\n");
+ return AVERROR_EOF;
+ }
+
+ if (ni_packet_buffer_alloc(xpkt, NI_MAX_TX_SZ)) {
+ av_log(avctx, AV_LOG_ERROR,
+ "xcoder_receive_packet: packet buffer size %d allocation failed\n",
+ NI_MAX_TX_SZ);
+ return AVERROR(ENOMEM);
+ }
+
+ if (avctx->codec_id == AV_CODEC_ID_MJPEG && (!ctx->spsPpsArrived)) {
+ ctx->spsPpsArrived = 1;
+ // for JPEG, start the pkt_num counter from 1: unlike video codecs
+ // (whose 1st packet is the header), JPEG has no header packet
+ ctx->api_ctx.pkt_num = 1;
+ }
+
+ while (1) {
+ xpkt->recycle_index = -1;
+ recv = ni_device_session_read(&ctx->api_ctx, &(ctx->api_pkt), NI_DEVICE_TYPE_ENCODER);
+
+ av_log(avctx, AV_LOG_TRACE,
+ "XCoder receive packet: xpkt.end_of_stream=%d, xpkt.data_len=%d, "
+ "xpkt.frame_type=%d, recv=%d, encoder_flushing=%d, encoder_eof=%d\n",
+ xpkt->end_of_stream, xpkt->data_len, xpkt->frame_type, recv,
+ ctx->encoder_flushing, ctx->encoder_eof);
+
+ if (recv <= 0) {
+ ctx->encoder_eof = xpkt->end_of_stream;
+ if (ctx->encoder_eof || xpkt->end_of_stream) {
+ if (SESSION_RUN_STATE_SEQ_CHANGE_DRAINING ==
+ ctx->api_ctx.session_run_state) {
+ // after sequence change completes, reset codec state
+ av_log(avctx, AV_LOG_INFO, "xcoder_receive_packet 1: sequence "
+ "change completed, return AVERROR(EAGAIN) and will reopen "
+ "codec!\n");
+
+ ret = xcoder_encode_reinit(avctx);
+ av_log(avctx, AV_LOG_DEBUG, "xcoder_receive_packet: xcoder_encode_reinit ret %d\n", ret);
+ if (ret >= 0) {
+ ret = AVERROR(EAGAIN);
+
+ xcoder_send_frame(avctx, NULL);
+
+ ctx->api_ctx.session_run_state = SESSION_RUN_STATE_NORMAL;
+ }
+ break;
+ }
+
+ ret = AVERROR_EOF;
+ av_log(avctx, AV_LOG_VERBOSE, "xcoder_receive_packet: got encoder_eof, return AVERROR_EOF\n");
+ break;
+ } else {
+ bool bIsReset = false;
+ if (NI_RETCODE_ERROR_VPU_RECOVERY == recv) {
+ xcoder_encode_reset(avctx);
+ bIsReset = true;
+ } else if (NI_RETCODE_ERROR_INVALID_SESSION == recv) {
+ av_log(ctx, AV_LOG_ERROR, "encoder read retval %d\n", recv);
+ ret = AVERROR(EIO);
+ break;
+ }
+ ret = AVERROR(EAGAIN);
+ if ((!ctx->encoder_flushing && !ctx->eos_fme_received) || bIsReset) { // if encode session was reset, can't read again with invalid session, must break out first
+ av_log(avctx, AV_LOG_TRACE, "xcoder_receive_packet: NOT encoder_"
+ "flushing, NOT eos_fme_received, return AVERROR(EAGAIN)\n");
+ break;
+ }
+ }
+ } else {
+ /* got encoded data back */
+ uint8_t *p_src, *p_end;
+ int64_t local_pts;
+ ni_custom_sei_set_t *p_custom_sei_set;
+ int meta_size = ctx->api_ctx.meta_size;
+ uint32_t copy_len = 0;
+ uint32_t data_len = 0;
+ int total_custom_sei_size = 0;
+ int custom_sei_count = 0;
+
+ if (avctx->pix_fmt == AV_PIX_FMT_NI_QUAD && xpkt->recycle_index >= 0 &&
+ avctx->height >= NI_MIN_HEIGHT && avctx->width >= NI_MIN_WIDTH &&
+ xpkt->recycle_index < NI_GET_MAX_HWDESC_P2P_BUF_ID(ctx->api_ctx.ddr_config)) {
+ int avframe_index =
+ recycle_index_2_avframe_index(ctx, xpkt->recycle_index);
+ av_log(avctx, AV_LOG_VERBOSE, "UNREF trace ui16FrameIdx = [%d].\n",
+ xpkt->recycle_index);
+ if (avframe_index >= 0 && ctx->sframe_pool[avframe_index]) {
+ av_frame_unref(ctx->sframe_pool[avframe_index]);
+ av_log(avctx, AV_LOG_DEBUG,
+ "AVframe_index = %d pushed to free tail %d\n",
+ avframe_index, ctx->freeTail);
+ enq_free_frames(ctx, avframe_index);
+ // enqueue the index back to free
+ xpkt->recycle_index = -1;
+ } else {
+ av_log(avctx, AV_LOG_DEBUG,
+ "can't push to tail - avframe_index %d sframe_pool %p\n",
+ avframe_index,
+ (avframe_index >= 0) ? ctx->sframe_pool[avframe_index] : NULL);
+ }
+
+ if (!ctx->spsPpsArrived) {
+ ret = AVERROR(EAGAIN);
+ ctx->spsPpsArrived = 1;
+ ctx->spsPpsHdrLen = recv - meta_size;
+ ctx->p_spsPpsHdr = av_malloc(ctx->spsPpsHdrLen);
+ if (!ctx->p_spsPpsHdr) {
+ ret = AVERROR(ENOMEM);
+ break;
+ }
+
+ memcpy(ctx->p_spsPpsHdr, (uint8_t *)xpkt->p_data + meta_size,
+ xpkt->data_len - meta_size);
+
+ // start pkt_num counter from 1 to get the real first frame
+ ctx->api_ctx.pkt_num = 1;
+ // for low-latency mode, keep reading until the first frame is back
+ if (ctx->api_param.low_delay_mode) {
+ av_log(avctx, AV_LOG_TRACE, "XCoder receive packet: low delay mode,"
+ " keep reading until 1st pkt arrives\n");
+ continue;
+ }
+ break;
+ }
+
+ // handle pic skip
+ if (xpkt->frame_type == 3) { // 0=I, 1=P, 2=B, 3=not coded / skip
+ ret = AVERROR(EAGAIN);
+ if (ctx->first_frame_pts == INT_MIN)
+ ctx->first_frame_pts = xpkt->pts;
+ if (AV_CODEC_ID_AV1 == avctx->codec_id) {
+ ctx->latest_dts = xpkt->pts;
+ } else if (ctx->total_frames_received < ctx->dtsOffset) {
+ // guess dts
+ ctx->latest_dts = ctx->first_frame_pts +
+ ctx->gop_offset_count - ctx->dtsOffset;
+ ctx->gop_offset_count++;
+ } else {
+ // get dts from pts FIFO
+ ctx->latest_dts =
+ ctx->api_ctx
+ .enc_pts_list[ctx->api_ctx.enc_pts_r_idx % NI_FIFO_SZ];
+ ctx->api_ctx.enc_pts_r_idx++;
+ }
+ if (ctx->latest_dts > xpkt->pts) {
+ ctx->latest_dts = xpkt->pts;
+ }
+ ctx->total_frames_received++;
+
+ if (!ctx->encoder_flushing && !ctx->eos_fme_received) {
+ av_log(avctx, AV_LOG_TRACE, "xcoder_receive_packet: skip"
+ " picture output, return AVERROR(EAGAIN)\n");
+ break;
+ } else {
+ continue;
+ }
+ }
+
+ // store av1 packets to be merged & sent along with future packet
+ if (avctx->codec_id == AV_CODEC_ID_AV1) {
+ av_log(
+ avctx, AV_LOG_TRACE,
+ "xcoder_receive_packet: AV1 xpkt buf %p size %d show_frame %d\n",
+ xpkt->p_data, xpkt->data_len, xpkt->av1_show_frame);
+ if (!xpkt->av1_show_frame) {
+ // store AV1 packets
+ xpkt->av1_p_buffer[xpkt->av1_buffer_index] = xpkt->p_buffer;
+ xpkt->av1_p_data[xpkt->av1_buffer_index] = xpkt->p_data;
+ xpkt->av1_buffer_size[xpkt->av1_buffer_index] = xpkt->buffer_size;
+ xpkt->av1_data_len[xpkt->av1_buffer_index] = xpkt->data_len;
+ xpkt->av1_buffer_index++;
+ xpkt->p_buffer = NULL;
+ xpkt->p_data = NULL;
+ xpkt->buffer_size = 0;
+ xpkt->data_len = 0;
+ if (xpkt->av1_buffer_index >= MAX_AV1_ENCODER_GOP_NUM) {
+ av_log(avctx, AV_LOG_ERROR,
+ "xcoder_receive_packet: recv AV1 not shown frame "
+ "number %d >= %d, return AVERROR_EXTERNAL\n",
+ xpkt->av1_buffer_index, MAX_AV1_ENCODER_GOP_NUM);
+ ret = AVERROR_EXTERNAL;
+ break;
+ } else if (!ctx->encoder_flushing && !ctx->eos_fme_received) {
+ av_log(avctx, AV_LOG_TRACE,
+ "xcoder_receive_packet: recv AV1 not shown frame, "
+ "return AVERROR(EAGAIN)\n");
+ ret = AVERROR(EAGAIN);
+ break;
+ } else {
+ if (ni_packet_buffer_alloc(xpkt, NI_MAX_TX_SZ)) {
+ av_log(avctx, AV_LOG_ERROR,
+ "xcoder_receive_packet: AV1 packet buffer size %d "
+ "allocation failed during flush\n",
+ NI_MAX_TX_SZ);
+ ret = AVERROR(ENOMEM);
+ break;
+ }
+ av_log(avctx, AV_LOG_TRACE,
+ "xcoder_receive_packet: recv AV1 not shown frame "
+ "during flush, continue..\n");
+ continue;
+ }
+ } else {
+ // calculate length of previously received AV1 packets pending for merge
+ av1_output_frame = 1;
+ for (i = 0; i < xpkt->av1_buffer_index; i++) {
+ data_len += xpkt->av1_data_len[i] - meta_size;
+ }
+ }
+ }
+
+ p_src = (uint8_t*)xpkt->p_data + meta_size;
+ p_end = p_src + (xpkt->data_len - meta_size);
+ local_pts = xpkt->pts;
+
+ p_custom_sei_set = ctx->api_ctx.pkt_custom_sei_set[local_pts % NI_FIFO_SZ];
+ if (p_custom_sei_set != NULL) {
+ custom_sei_count = p_custom_sei_set->count;
+ for (i = 0; i < p_custom_sei_set->count; i++) {
+ total_custom_sei_size += p_custom_sei_set->custom_sei[i].size;
+ }
+ }
+
+ if (custom_sei_count) {
+ // if HRD or custom sei enabled, search for pic_timing or custom SEI insertion point by
+ // skipping non-VCL until video data is found.
+ uint32_t nalu_type = 0;
+ const uint8_t *p_start_code = p_src;
+ uint32_t stc = -1;
+ if (AV_CODEC_ID_HEVC == avctx->codec_id) {
+ do {
+ stc = -1;
+ p_start_code = avpriv_find_start_code(p_start_code, p_end, &stc);
+ nalu_type = (stc >> 1) & 0x3F;
+ } while (nalu_type > HEVC_NAL_RSV_VCL31);
+
+ // calc. length to copy
+ copy_len = p_start_code - 5 - p_src;
+ } else if (AV_CODEC_ID_H264 == avctx->codec_id) {
+ do {
+ stc = -1;
+ p_start_code = avpriv_find_start_code(p_start_code, p_end, &stc);
+ nalu_type = stc & 0x1F;
+ } while (nalu_type > H264_NAL_IDR_SLICE);
+
+ // calc. length to copy
+ copy_len = p_start_code - 5 - p_src;
+ } else {
+ av_log(avctx, AV_LOG_ERROR, "xcoder_receive_packet: codec %d not "
+ "supported for SEI!\n", avctx->codec_id);
+ }
+ }
+
+ if (avctx->codec_id == AV_CODEC_ID_MJPEG && !ctx->firstPktArrived) {
+ // there is no header for Jpeg, so skip header copy
+ ctx->firstPktArrived = 1;
+ if (ctx->first_frame_pts == INT_MIN) {
+ ctx->first_frame_pts = xpkt->pts;
+ }
+ }
+
+ if (!ctx->firstPktArrived) {
+ int sizeof_spspps_attached_to_idr = ctx->spsPpsHdrLen;
+ if ((avctx->flags & AV_CODEC_FLAG_GLOBAL_HEADER) &&
+ (avctx->codec_id != AV_CODEC_ID_AV1) &&
+ (ctx->seqChangeCount == 0)) {
+ sizeof_spspps_attached_to_idr = 0;
+ }
+ ctx->firstPktArrived = 1;
+ if (ctx->first_frame_pts == INT_MIN) {
+ ctx->first_frame_pts = xpkt->pts;
+ }
+
+ data_len += xpkt->data_len - meta_size + sizeof_spspps_attached_to_idr + total_custom_sei_size;
+ if (avctx->codec_id == AV_CODEC_ID_AV1)
+ av_log(avctx, AV_LOG_TRACE, "xcoder_receive_packet: AV1 first output pkt size %d\n", data_len);
+
+ ret = ff_get_encode_buffer(avctx, pkt, data_len, 0);
+
+ if (!ret) {
+ uint8_t *p_dst, *p_side_data;
+
+ // fill in AVC/HEVC sidedata
+ if ((avctx->flags & AV_CODEC_FLAG_GLOBAL_HEADER) &&
+ (avctx->extradata_size != ctx->spsPpsHdrLen ||
+ (memcmp(avctx->extradata, ctx->p_spsPpsHdr, ctx->spsPpsHdrLen) !=
+ 0))) {
+ avctx->extradata_size = ctx->spsPpsHdrLen;
+ av_freep(&avctx->extradata);
+ avctx->extradata = av_mallocz(avctx->extradata_size +
+ AV_INPUT_BUFFER_PADDING_SIZE);
+ if (!avctx->extradata) {
+ av_log(avctx, AV_LOG_ERROR,
+ "Cannot allocate AVC/HEVC header of size %d.\n",
+ avctx->extradata_size);
+ return AVERROR(ENOMEM);
+ }
+ memcpy(avctx->extradata, ctx->p_spsPpsHdr, avctx->extradata_size);
+ }
+
+ p_side_data = av_packet_new_side_data(
+ pkt, AV_PKT_DATA_NEW_EXTRADATA, ctx->spsPpsHdrLen);
+ if (p_side_data) {
+ memcpy(p_side_data, ctx->p_spsPpsHdr, ctx->spsPpsHdrLen);
+ }
+
+ p_dst = pkt->data;
+ if (sizeof_spspps_attached_to_idr) {
+ memcpy(p_dst, ctx->p_spsPpsHdr, ctx->spsPpsHdrLen);
+ p_dst += ctx->spsPpsHdrLen;
+ }
+
+ if (custom_sei_count && avctx->codec_id != AV_CODEC_ID_AV1) {
+ // copy buf_period
+ memcpy(p_dst, p_src, copy_len);
+ p_dst += copy_len;
+
+ for (i = 0; i < custom_sei_count; i++) {
+ // copy custom sei
+ ni_custom_sei_t *p_custom_sei = &p_custom_sei_set->custom_sei[i];
+ if (p_custom_sei->location == NI_CUSTOM_SEI_LOC_AFTER_VCL) {
+ break;
+ }
+ memcpy(p_dst, &p_custom_sei->data[0], p_custom_sei->size);
+ p_dst += p_custom_sei->size;
+ }
+
+ // copy the IDR data
+ memcpy(p_dst, p_src + copy_len,
+ xpkt->data_len - meta_size - copy_len);
+ p_dst += xpkt->data_len - meta_size - copy_len;
+
+ // copy custom sei after slice
+ for (; i < custom_sei_count; i++) {
+ ni_custom_sei_t *p_custom_sei = &p_custom_sei_set->custom_sei[i];
+ memcpy(p_dst, &p_custom_sei->data[0], p_custom_sei->size);
+ p_dst += p_custom_sei->size;
+ }
+ } else {
+ // merge AV1 packets
+ if (avctx->codec_id == AV_CODEC_ID_AV1) {
+ for (i = 0; i < xpkt->av1_buffer_index; i++) {
+ memcpy(p_dst, (uint8_t *)xpkt->av1_p_data[i] + meta_size,
+ xpkt->av1_data_len[i] - meta_size);
+ p_dst += (xpkt->av1_data_len[i] - meta_size);
+ }
+ }
+
+ memcpy(p_dst, (uint8_t*)xpkt->p_data + meta_size,
+ xpkt->data_len - meta_size);
+ }
+ }
+ } else {
+ data_len += xpkt->data_len - meta_size + total_custom_sei_size;
+ if (avctx->codec_id == AV_CODEC_ID_AV1)
+ av_log(avctx, AV_LOG_TRACE, "xcoder_receive_packet: AV1 output pkt size %d\n", data_len);
+
+ ret = ff_get_encode_buffer(avctx, pkt, data_len, 0);
+
+ if (!ret) {
+ uint8_t *p_dst = pkt->data;
+
+ if (custom_sei_count && avctx->codec_id != AV_CODEC_ID_AV1) {
+ // copy buf_period
+ memcpy(p_dst, p_src, copy_len);
+ p_dst += copy_len;
+
+ for (i = 0; i < custom_sei_count; i++) {
+ // copy custom sei
+ ni_custom_sei_t *p_custom_sei = &p_custom_sei_set->custom_sei[i];
+ if (p_custom_sei->location == NI_CUSTOM_SEI_LOC_AFTER_VCL) {
+ break;
+ }
+ memcpy(p_dst, &p_custom_sei->data[0], p_custom_sei->size);
+ p_dst += p_custom_sei->size;
+ }
+
+ // copy the packet data
+ memcpy(p_dst, p_src + copy_len,
+ xpkt->data_len - meta_size - copy_len);
+ p_dst += xpkt->data_len - meta_size - copy_len;
+
+ // copy custom sei after slice
+ for (; i < custom_sei_count; i++) {
+ ni_custom_sei_t *p_custom_sei = &p_custom_sei_set->custom_sei[i];
+ memcpy(p_dst, &p_custom_sei->data[0], p_custom_sei->size);
+ p_dst += p_custom_sei->size;
+ }
+ } else {
+ // merge AV1 packets
+ if (avctx->codec_id == AV_CODEC_ID_AV1) {
+ for (i = 0; i < xpkt->av1_buffer_index; i++) {
+ memcpy(p_dst, (uint8_t *)xpkt->av1_p_data[i] + meta_size,
+ xpkt->av1_data_len[i] - meta_size);
+ p_dst += (xpkt->av1_data_len[i] - meta_size);
+ }
+ }
+
+ memcpy(p_dst, (uint8_t *)xpkt->p_data + meta_size,
+ xpkt->data_len - meta_size);
+ }
+ }
+ }
+
+ // free buffer
+ if (custom_sei_count) {
+ ni_memfree(p_custom_sei_set);
+ ctx->api_ctx.pkt_custom_sei_set[local_pts % NI_FIFO_SZ] = NULL;
+ }
+
+ if (!ret) {
+ if (xpkt->frame_type == 0) {
+ pkt->flags |= AV_PKT_FLAG_KEY;
+ }
+
+ pkt->pts = xpkt->pts;
+ /* To ensure pts > dts for all frames, a guessed dts is assigned to the
+ * first 'dtsOffset' frames; after that, the dts is taken from the input
+ * pts FIFO.
+ * If GOP = IBBBP and PTSs = 0 1 2 3 4 5 .. then output DTSs = -3 -2 -1 0 1 ... where -3 -2 -1 are guessed values.
+ * If GOP = IBPBP and PTSs = 0 1 2 3 4 5 .. then output DTSs = -1 0 1 2 3 ... where -1 is the guessed value.
+ * The number of guessed values equals dtsOffset.
+ */
+ if (AV_CODEC_ID_AV1 == avctx->codec_id) {
+ pkt->dts = pkt->pts;
+ av_log(avctx, AV_LOG_TRACE, "Packet dts (av1): %" PRId64 "\n", pkt->dts);
+ } else if (ctx->total_frames_received < ctx->dtsOffset) {
+ // guess dts
+ pkt->dts = ctx->first_frame_pts + ctx->gop_offset_count - ctx->dtsOffset;
+ ctx->gop_offset_count++;
+ av_log(avctx, AV_LOG_TRACE, "Packet dts (guessed): %" PRId64 "\n",
+ pkt->dts);
+ } else {
+ // get dts from pts FIFO
+ pkt->dts =
+ ctx->api_ctx
+ .enc_pts_list[ctx->api_ctx.enc_pts_r_idx % NI_FIFO_SZ];
+ ctx->api_ctx.enc_pts_r_idx++;
+            av_log(avctx, AV_LOG_TRACE, "Packet dts: %" PRId64 "\n", pkt->dts);
+ }
+ if (ctx->total_frames_received >= 1) {
+            if (pkt->dts < ctx->latest_dts) {
+                av_log(avctx, AV_LOG_WARNING, "dts: %" PRId64 " < latest_dts: %" PRId64 ".\n",
+                       pkt->dts, ctx->latest_dts);
+            }
+ }
+        if (pkt->pts < ctx->first_frame_pts) {
+            av_log(avctx, AV_LOG_WARNING, "pts %" PRId64 " less than first frame pts %" PRId64 ". Forcing it to first frame pts\n",
+                   pkt->pts, ctx->first_frame_pts);
+            pkt->pts = ctx->first_frame_pts;
+        }
+        if (pkt->dts > pkt->pts) {
+            av_log(avctx, AV_LOG_WARNING, "dts: %" PRId64 ", pts: %" PRId64 ". Forcing dts = pts\n",
+                   pkt->dts, pkt->pts);
+            pkt->dts = pkt->pts;
+            av_log(avctx, AV_LOG_TRACE, "Forced dts to: %" PRId64 "\n", pkt->dts);
+        }
+ ctx->total_frames_received++;
+ ctx->latest_dts = pkt->dts;
+ av_log(avctx, AV_LOG_DEBUG, "XCoder recv pkt #%" PRId64 ""
+ " pts %" PRId64 " dts %" PRId64 " size %d st_index %d frame_type %u avg qp %u\n",
+ ctx->api_ctx.pkt_num - 1, pkt->pts, pkt->dts, pkt->size,
+ pkt->stream_index, xpkt->frame_type, xpkt->avg_frame_qp);
+
+ enum AVPictureType pict_type = AV_PICTURE_TYPE_NONE;
+ switch (xpkt->frame_type) {
+ case 0:
+ pict_type = AV_PICTURE_TYPE_I;
+ break;
+ case 1:
+ pict_type = AV_PICTURE_TYPE_P;
+ break;
+ case 2:
+ pict_type = AV_PICTURE_TYPE_B;
+ break;
+ default:
+ break;
+ }
+
+ int frame_qp = 0;
+ switch (avctx->codec_id) {
+ case AV_CODEC_ID_H264:
+ case AV_CODEC_ID_HEVC:
+ frame_qp = xpkt->avg_frame_qp;
+ break;
+ default:
+ break;
+ }
+
+ ff_side_data_set_encoder_stats(pkt, frame_qp * FF_QP2LAMBDA, NULL, 0, pict_type);
+ }
+ ctx->encoder_eof = xpkt->end_of_stream;
+ if (ctx->encoder_eof &&
+ SESSION_RUN_STATE_SEQ_CHANGE_DRAINING ==
+ ctx->api_ctx.session_run_state) {
+ // after sequence change completes, reset codec state
+ av_log(avctx, AV_LOG_DEBUG, "xcoder_receive_packet 2: sequence change "
+ "completed, return 0 and will reopen codec !\n");
+ ret = xcoder_encode_reinit(avctx);
+ av_log(avctx, AV_LOG_DEBUG, "xcoder_receive_packet: xcoder_encode_reinit ret %d\n", ret);
+ if (ret >= 0) {
+ xcoder_send_frame(avctx, NULL);
+ ctx->api_ctx.session_run_state = SESSION_RUN_STATE_NORMAL;
+ }
+ }
+ break;
+ }
+ }
+
+ if ((AV_CODEC_ID_AV1 == avctx->codec_id) && xpkt->av1_buffer_index &&
+ av1_output_frame) {
+        av_log(avctx, AV_LOG_TRACE,
+               "xcoder_receive_packet: ni_packet_buffer_free_av1 %d packets\n",
+               xpkt->av1_buffer_index);
+ ni_packet_buffer_free_av1(xpkt);
+ }
+
+ av_log(avctx, AV_LOG_VERBOSE, "xcoder_receive_packet: return %d\n", ret);
+ return ret;
+}
+
+// for FFmpeg 4.4+
+int ff_xcoder_receive_packet(AVCodecContext *avctx, AVPacket *pkt)
+{
+ XCoderEncContext *ctx = avctx->priv_data;
+ AVFrame *frame = &ctx->buffered_fme;
+ int ret;
+
+ ret = ff_encode_get_frame(avctx, frame);
+    if ((!ctx->encoder_flushing && ret >= 0) || ret == AVERROR_EOF) {
+ ret = xcoder_send_frame(avctx, (ret == AVERROR_EOF ? NULL : frame));
+ if (ret < 0 && ret != AVERROR_EOF) {
+ av_frame_unref(frame);
+ return ret;
+ }
+ }
+ // Once send_frame returns EOF go on receiving packets until EOS is met.
+ return xcoder_receive_packet(avctx, pkt);
+}
+
+bool free_frames_isempty(XCoderEncContext *ctx)
+{
+ return (ctx->freeHead == ctx->freeTail);
+}
+
+bool free_frames_isfull(XCoderEncContext *ctx)
+{
+ return (ctx->freeHead == ((ctx->freeTail == MAX_NUM_FRAMEPOOL_HWAVFRAME) ? 0 : ctx->freeTail + 1));
+}
+
+int deq_free_frames(XCoderEncContext *ctx)
+{
+ if (free_frames_isempty(ctx)) {
+ return -1;
+ }
+ ctx->aFree_Avframes_list[ctx->freeHead] = -1;
+ ctx->freeHead = (ctx->freeHead == MAX_NUM_FRAMEPOOL_HWAVFRAME) ? 0 : ctx->freeHead + 1;
+ return 0;
+}
+
+int enq_free_frames(XCoderEncContext *ctx, int idx)
+{
+ if (free_frames_isfull(ctx)) {
+ return -1;
+ }
+ ctx->aFree_Avframes_list[ctx->freeTail] = idx;
+ ctx->freeTail = (ctx->freeTail == MAX_NUM_FRAMEPOOL_HWAVFRAME) ? 0 : ctx->freeTail + 1;
+ return 0;
+}
+
+int recycle_index_2_avframe_index(XCoderEncContext *ctx, uint32_t recycleIndex)
+{
+ int i;
+ for (i = 0; i < MAX_NUM_FRAMEPOOL_HWAVFRAME; i++) {
+ if (ctx->sframe_pool[i]->data[3] &&
+ ((niFrameSurface1_t *)(ctx->sframe_pool[i]->data[3]))->ui16FrameIdx == recycleIndex) {
+ return i;
+ }
+ }
+ return -1;
+}
+
+const AVCodecHWConfigInternal *ff_ni_enc_hw_configs[] = {
+ HW_CONFIG_ENCODER_FRAMES(NI_QUAD, NI_QUADRA),
+ HW_CONFIG_ENCODER_DEVICE(NV12, NI_QUADRA),
+ HW_CONFIG_ENCODER_DEVICE(P010, NI_QUADRA),
+ HW_CONFIG_ENCODER_DEVICE(YUV420P, NI_QUADRA),
+ HW_CONFIG_ENCODER_DEVICE(YUV420P10, NI_QUADRA),
+ NULL,
+};
diff --git a/libavcodec/nienc.h b/libavcodec/nienc.h
new file mode 100644
index 0000000000..5f43e56995
--- /dev/null
+++ b/libavcodec/nienc.h
@@ -0,0 +1,114 @@
+/*
+ * NetInt XCoder H.264/HEVC Encoder common code header
+ * Copyright (c) 2018-2019 NetInt
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef AVCODEC_NIENC_H
+#define AVCODEC_NIENC_H
+
+#include <ni_rsrc_api.h>
+#include <ni_device_api.h>
+#include <ni_util.h>
+
+#include "libavutil/internal.h"
+
+#include "avcodec.h"
+#include "codec_internal.h"
+#include "internal.h"
+#include "libavutil/opt.h"
+#include "libavutil/imgutils.h"
+
+#include "hwconfig.h"
+#include "nicodec.h"
+
+#define OFFSETENC(x) offsetof(XCoderEncContext, x)
+#define VE AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM
+
+// Common Netint encoder options
+#define NI_ENC_OPTIONS\
+ { "xcoder", "Select which XCoder card to use.", OFFSETENC(dev_xcoder), \
+ AV_OPT_TYPE_STRING, { .str = NI_BEST_MODEL_LOAD_STR }, CHAR_MIN, CHAR_MAX, VE, "xcoder" }, \
+ { "bestmodelload", "Pick the least model load XCoder/encoder available.", 0, AV_OPT_TYPE_CONST, \
+ { .str = NI_BEST_MODEL_LOAD_STR }, 0, 0, VE, "xcoder" }, \
+ { "bestload", "Pick the least real load XCoder/encoder available.", 0, AV_OPT_TYPE_CONST, \
+ { .str = NI_BEST_REAL_LOAD_STR }, 0, 0, VE, "xcoder" }, \
+ \
+ { "ni_enc_idx", "Select which encoder to use by index. First is 0, second is 1, and so on.", \
+ OFFSETENC(dev_enc_idx), AV_OPT_TYPE_INT, { .i64 = BEST_DEVICE_LOAD }, -1, INT_MAX, VE }, \
+ \
+ { "ni_enc_name", "Select which encoder to use by NVMe block device name, e.g. /dev/nvme0n1.", \
+ OFFSETENC(dev_blk_name), AV_OPT_TYPE_STRING, { 0 }, 0, 0, VE }, \
+ \
+ { "encname", "Select which encoder to use by NVMe block device name, e.g. /dev/nvme0n1.", \
+ OFFSETENC(dev_blk_name), AV_OPT_TYPE_STRING, { 0 }, 0, 0, VE }, \
+ \
+ { "iosize", "Specify a custom NVMe IO transfer size (multiples of 4096 only).", \
+ OFFSETENC(nvme_io_size), AV_OPT_TYPE_INT, { .i64 = BEST_DEVICE_LOAD }, -1, INT_MAX, VE }, \
+ \
+ { "xcoder-params", "Set the XCoder configuration using a :-separated list of key=value parameters.", \
+ OFFSETENC(xcoder_opts), AV_OPT_TYPE_STRING, { 0 }, 0, 0, VE }, \
+ \
+ { "xcoder-gop", "Set the XCoder custom gop using a :-separated list of key=value parameters.", \
+ OFFSETENC(xcoder_gop), AV_OPT_TYPE_STRING, { 0 }, 0, 0, VE }, \
+ \
+ { "keep_alive_timeout", "Specify a custom session keep alive timeout in seconds.", \
+ OFFSETENC(keep_alive_timeout), AV_OPT_TYPE_INT, { .i64 = NI_DEFAULT_KEEP_ALIVE_TIMEOUT }, \
+ NI_MIN_KEEP_ALIVE_TIMEOUT, NI_MAX_KEEP_ALIVE_TIMEOUT, VE }
+
+// "gen_global_headers" encoder options
+#define NI_ENC_OPTION_GEN_GLOBAL_HEADERS\
+ { "gen_global_headers", "Generate SPS and PPS headers during codec initialization.", \
+ OFFSETENC(gen_global_headers), AV_OPT_TYPE_INT, { .i64 = GEN_GLOBAL_HEADERS_AUTO }, \
+ GEN_GLOBAL_HEADERS_AUTO, GEN_GLOBAL_HEADERS_ON, VE, "gen_global_headers" }, \
+ { "auto", NULL, 0, AV_OPT_TYPE_CONST, \
+ { .i64 = GEN_GLOBAL_HEADERS_AUTO }, 0, 0, VE, "gen_global_headers" }, \
+ { "off", NULL, 0, AV_OPT_TYPE_CONST, \
+ { .i64 = GEN_GLOBAL_HEADERS_OFF }, 0, 0, VE, "gen_global_headers" }, \
+ { "on", NULL, 0, AV_OPT_TYPE_CONST, \
+ { .i64 = GEN_GLOBAL_HEADERS_ON }, 0, 0, VE, "gen_global_headers" }
+
+#define NI_ENC_OPTION_UDU_SEI \
+ { "udu_sei", "Pass through user data unregistered SEI if available", OFFSETENC(udu_sei), \
+ AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, VE }
+
+int xcoder_encode_init(AVCodecContext *avctx);
+
+int xcoder_encode_close(AVCodecContext *avctx);
+
+int xcoder_encode_sequence_change(AVCodecContext *avctx, int width, int height, int bit_depth_factor);
+
+int xcoder_send_frame(AVCodecContext *avctx, const AVFrame *frame);
+
+int xcoder_receive_packet(AVCodecContext *avctx, AVPacket *pkt);
+
+int ff_xcoder_receive_packet(AVCodecContext *avctx, AVPacket *pkt);
+
+bool free_frames_isempty(XCoderEncContext *ctx);
+
+bool free_frames_isfull(XCoderEncContext *ctx);
+
+int deq_free_frames(XCoderEncContext *ctx);
+
+int enq_free_frames(XCoderEncContext *ctx, int idx);
+
+int recycle_index_2_avframe_index(XCoderEncContext *ctx, uint32_t recycleIndex);
+
+extern const AVCodecHWConfigInternal *ff_ni_enc_hw_configs[];
+
+#endif /* AVCODEC_NIENC_H */
diff --git a/libavcodec/nienc_av1.c b/libavcodec/nienc_av1.c
new file mode 100644
index 0000000000..f81a5921e7
--- /dev/null
+++ b/libavcodec/nienc_av1.c
@@ -0,0 +1,51 @@
+/*
+ * NetInt XCoder AV1 Encoder
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include "nienc.h"
+
+static const AVOption enc_options[] = {
+ NI_ENC_OPTIONS,
+ NI_ENC_OPTION_GEN_GLOBAL_HEADERS,
+ {NULL}
+};
+
+static const AVClass av1_xcoderenc_class = {
+ .class_name = "av1_ni_quadra_enc",
+ .item_name = av_default_item_name,
+ .option = enc_options,
+ .version = LIBAVUTIL_VERSION_INT,
+};
+
+FFCodec ff_av1_ni_quadra_encoder = {
+ .p.name = "av1_ni_quadra_enc",
+ CODEC_LONG_NAME("AV1 NETINT Quadra encoder v" NI_XCODER_REVISION),
+ .p.type = AVMEDIA_TYPE_VIDEO,
+ .p.id = AV_CODEC_ID_AV1,
+ .p.priv_class = &av1_xcoderenc_class,
+ .p.capabilities = AV_CODEC_CAP_DELAY,
+ .p.pix_fmts = (const enum AVPixelFormat[]){ AV_PIX_FMT_YUV420P, AV_PIX_FMT_YUVJ420P,
+ AV_PIX_FMT_YUV420P10LE, AV_PIX_FMT_NV12,
+ AV_PIX_FMT_P010LE, AV_PIX_FMT_NI_QUAD,
+ AV_PIX_FMT_NONE },
+ FF_CODEC_RECEIVE_PACKET_CB(ff_xcoder_receive_packet),
+ .init = xcoder_encode_init,
+ .close = xcoder_encode_close,
+ .priv_data_size = sizeof(XCoderEncContext),
+ .hw_configs = ff_ni_enc_hw_configs,
+};
diff --git a/libavcodec/nienc_h264.c b/libavcodec/nienc_h264.c
new file mode 100644
index 0000000000..ff1ff78e13
--- /dev/null
+++ b/libavcodec/nienc_h264.c
@@ -0,0 +1,52 @@
+/*
+ * NetInt XCoder H.264 Encoder
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include "nienc.h"
+
+static const AVOption enc_options[] = {
+ NI_ENC_OPTIONS,
+ NI_ENC_OPTION_GEN_GLOBAL_HEADERS,
+ NI_ENC_OPTION_UDU_SEI,
+ {NULL}
+};
+
+static const AVClass h264_xcoderenc_class = {
+ .class_name = "h264_ni_quadra_enc",
+ .item_name = av_default_item_name,
+ .option = enc_options,
+ .version = LIBAVUTIL_VERSION_INT,
+};
+
+FFCodec ff_h264_ni_quadra_encoder = {
+ .p.name = "h264_ni_quadra_enc",
+ CODEC_LONG_NAME("H.264 NETINT Quadra encoder v" NI_XCODER_REVISION),
+ .p.type = AVMEDIA_TYPE_VIDEO,
+ .p.id = AV_CODEC_ID_H264,
+ .p.priv_class = &h264_xcoderenc_class,
+ .p.capabilities = AV_CODEC_CAP_DELAY,
+ .p.pix_fmts = (const enum AVPixelFormat[]){ AV_PIX_FMT_YUV420P, AV_PIX_FMT_YUVJ420P,
+ AV_PIX_FMT_YUV420P10LE, AV_PIX_FMT_NV12,
+ AV_PIX_FMT_P010LE, AV_PIX_FMT_NI_QUAD,
+ AV_PIX_FMT_NONE },
+ FF_CODEC_RECEIVE_PACKET_CB(ff_xcoder_receive_packet),
+ .init = xcoder_encode_init,
+ .close = xcoder_encode_close,
+ .priv_data_size = sizeof(XCoderEncContext),
+ .hw_configs = ff_ni_enc_hw_configs,
+};
diff --git a/libavcodec/nienc_hevc.c b/libavcodec/nienc_hevc.c
new file mode 100644
index 0000000000..640ccb039b
--- /dev/null
+++ b/libavcodec/nienc_hevc.c
@@ -0,0 +1,52 @@
+/*
+ * NetInt XCoder HEVC Encoder
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include "nienc.h"
+
+static const AVOption enc_options[] = {
+ NI_ENC_OPTIONS,
+ NI_ENC_OPTION_GEN_GLOBAL_HEADERS,
+ NI_ENC_OPTION_UDU_SEI,
+ {NULL}
+};
+
+static const AVClass h265_xcoderenc_class = {
+ .class_name = "h265_ni_quadra_enc",
+ .item_name = av_default_item_name,
+ .option = enc_options,
+ .version = LIBAVUTIL_VERSION_INT,
+};
+
+FFCodec ff_h265_ni_quadra_encoder = {
+ .p.name = "h265_ni_quadra_enc",
+ CODEC_LONG_NAME("H.265 NETINT Quadra encoder v" NI_XCODER_REVISION),
+ .p.type = AVMEDIA_TYPE_VIDEO,
+    .p.id             = AV_CODEC_ID_HEVC,
+ .p.priv_class = &h265_xcoderenc_class,
+ .p.capabilities = AV_CODEC_CAP_DELAY,
+ .p.pix_fmts = (const enum AVPixelFormat[]){ AV_PIX_FMT_YUV420P, AV_PIX_FMT_YUVJ420P,
+ AV_PIX_FMT_YUV420P10, AV_PIX_FMT_NV12,
+ AV_PIX_FMT_P010LE, AV_PIX_FMT_NI_QUAD,
+ AV_PIX_FMT_NONE },
+ FF_CODEC_RECEIVE_PACKET_CB(ff_xcoder_receive_packet),
+ .init = xcoder_encode_init,
+ .close = xcoder_encode_close,
+ .priv_data_size = sizeof(XCoderEncContext),
+ .hw_configs = ff_ni_enc_hw_configs,
+};
diff --git a/libavcodec/nienc_jpeg.c b/libavcodec/nienc_jpeg.c
new file mode 100644
index 0000000000..007b371da4
--- /dev/null
+++ b/libavcodec/nienc_jpeg.c
@@ -0,0 +1,48 @@
+/*
+ * NetInt XCoder JPEG Encoder
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include "nienc.h"
+
+static const AVOption enc_options[] = {
+ NI_ENC_OPTIONS,
+ {NULL}
+};
+
+static const AVClass jpeg_xcoderenc_class = {
+ .class_name = "jpeg_ni_quadra_enc",
+ .item_name = av_default_item_name,
+ .option = enc_options,
+ .version = LIBAVUTIL_VERSION_INT,
+};
+
+FFCodec ff_jpeg_ni_quadra_encoder = {
+ .p.name = "jpeg_ni_quadra_enc",
+ CODEC_LONG_NAME("JPEG NETINT Quadra encoder v" NI_XCODER_REVISION),
+ .p.type = AVMEDIA_TYPE_VIDEO,
+ .p.id = AV_CODEC_ID_MJPEG,
+ .p.priv_class = &jpeg_xcoderenc_class,
+ .p.capabilities = AV_CODEC_CAP_DELAY,
+ .p.pix_fmts = (const enum AVPixelFormat[]){ AV_PIX_FMT_YUVJ420P, AV_PIX_FMT_NI_QUAD,
+ AV_PIX_FMT_NONE },
+ FF_CODEC_RECEIVE_PACKET_CB(ff_xcoder_receive_packet),
+ .init = xcoder_encode_init,
+ .close = xcoder_encode_close,
+ .priv_data_size = sizeof(XCoderEncContext),
+ .hw_configs = ff_ni_enc_hw_configs,
+};
--
2.25.1
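[Editorial note: the DTS-offset scheme in xcoder_receive_packet() above can be illustrated with a minimal standalone sketch. The struct and function names below (DtsState, dts_for_packet, FIFO_SZ) are illustrative only and do not appear in the patch; this is a simplified model of the logic, assuming first_frame_pts = 0 and a pre-filled PTS FIFO.]

```c
#include <stdint.h>

/* Simplified model of the encoder's DTS assignment: the first dts_offset
 * packets get guessed DTS values below the first PTS (so pts > dts always
 * holds), after which DTS is read from the input-PTS FIFO. */
#define FIFO_SZ 32

typedef struct {
    int64_t first_frame_pts;
    int64_t dts_offset;       /* B-frame reorder delay, e.g. 3 for IBBBP */
    int64_t gop_offset_count; /* how many DTS values have been guessed */
    int64_t frames_received;
    int64_t pts_fifo[FIFO_SZ];
    unsigned r_idx;           /* FIFO read index */
} DtsState;

static int64_t dts_for_packet(DtsState *s)
{
    int64_t dts;
    if (s->frames_received < s->dts_offset) {
        /* guessed region: -dts_offset, ..., -1 relative to first PTS */
        dts = s->first_frame_pts + s->gop_offset_count - s->dts_offset;
        s->gop_offset_count++;
    } else {
        /* steady state: reuse input PTS values, delayed by dts_offset */
        dts = s->pts_fifo[s->r_idx++ % FIFO_SZ];
    }
    s->frames_received++;
    return dts;
}
```

With dts_offset = 3 (GOP = IBBBP) and input PTS 0 1 2 3 4 ..., this yields the DTS sequence -3 -2 -1 0 1 ... described in the code comment.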
_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
To unsubscribe, visit link above, or email
ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".
* Re: [FFmpeg-devel] [PATCH 2/3] libavcodec: add NETINT Quadra HW decoders & encoders
2025-07-02 8:11 [FFmpeg-devel] [PATCH 2/3] libavcodec: add NETINT Quadra HW decoders & encoders Steven Zhou
@ 2025-07-02 14:33 ` Derek Buitenhuis
2025-07-02 16:33 ` Steven Zhou
0 siblings, 1 reply; 3+ messages in thread
From: Derek Buitenhuis @ 2025-07-02 14:33 UTC (permalink / raw)
To: ffmpeg-devel
On 7/2/2025 9:11 AM, Steven Zhou wrote:
> Add NETINT Quadra hardware video decoder and encoder codecs
> h264_ni_quadra_dec, h265_ni_quadra_dec, jpeg_ni_quadra_dec,
> vp9_ni_quadra_dec, h264_ni_quadra_enc, h265_ni_quadra_enc,
> jpeg_ni_quadra_enc, and av1_ni_quadra_enc.
>
> More information:
> https://netint.com/products/quadra-t1a-video-processing-unit/
> https://docs.netint.com/vpu/quadra/
>
> Signed-off-by: Steven Zhou <steven.zhou@netint.ca>
> ---
Hi,
As far as I know, these cards/units are not available to the general public at all.
I do not believe such blobs should be supported in FFmpeg, as per our long standing
history of not allowing these.
- Derek
* Re: [FFmpeg-devel] [PATCH 2/3] libavcodec: add NETINT Quadra HW decoders & encoders
2025-07-02 14:33 ` Derek Buitenhuis
@ 2025-07-02 16:33 ` Steven Zhou
0 siblings, 0 replies; 3+ messages in thread
From: Steven Zhou @ 2025-07-02 16:33 UTC (permalink / raw)
To: FFmpeg development discussions and patches
> -----Original Message-----
> From: ffmpeg-devel <ffmpeg-devel-bounces@ffmpeg.org> On Behalf Of Derek
> Buitenhuis
> Sent: Wednesday, July 2, 2025 7:33 AM
> To: ffmpeg-devel@ffmpeg.org
> Subject: Re: [FFmpeg-devel] [PATCH 2/3] libavcodec: add NETINT Quadra HW
> decoders & encoders
>
> On 7/2/2025 9:11 AM, Steven Zhou wrote:
> > Add NETINT Quadra hardware video decoder and encoder codecs
> > h264_ni_quadra_dec, h265_ni_quadra_dec, jpeg_ni_quadra_dec,
> > vp9_ni_quadra_dec, h264_ni_quadra_enc, h265_ni_quadra_enc,
> > jpeg_ni_quadra_enc, and av1_ni_quadra_enc.
> >
> > More information:
> > https://netint.com/products/quadra-t1a-video-processing-unit/
> > https://docs.netint.com/vpu/quadra/
> >
> > Signed-off-by: Steven Zhou <steven.zhou@netint.ca>
> > ---
>
> Hi,
>
> As far as I know, these cards/units are not available to the general public at all.
>
> I do not believe such blobs should be supported in FFmpeg, as per our long
> standing history of now allowing these.
>
> - Derek
Hi Derek,
The HW is now available on Akamai Cloud for public access: https://techdocs.akamai.com/cloud-computing/docs/accelerated-compute-instances
We hope adding the codecs/filters to FFmpeg will aid public cloud users in getting started with these accelerators.
If anyone is interested in accessing some demo environments with the Quadra HW to test out this patch, please email me directly.
The contents of these patches are also available as a fork on GitHub: https://github.com/netintsteven/NI_FF_upstream/tree/upstream