Git Inbox Mirror of the ffmpeg-devel mailing list - see https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
From: Steven Zhou <steven.zhou@netint.ca>
To: FFmpeg development discussions and patches <ffmpeg-devel@ffmpeg.org>
Subject: Re: [FFmpeg-devel] [PATCH 1/3] libavutil: add hwcontext_ni_quad
Date: Wed, 2 Jul 2025 08:32:09 +0000
Message-ID: <YT2PR01MB4701532F171869619AA9B42DE340A@YT2PR01MB4701.CANPRD01.PROD.OUTLOOK.COM> (raw)
In-Reply-To: <YT2PR01MB470153E4271409C9DBAE923AE340A@YT2PR01MB4701.CANPRD01.PROD.OUTLOOK.COM>

> -----Original Message-----
> From: ffmpeg-devel <ffmpeg-devel-bounces@ffmpeg.org> On Behalf Of
> Steven Zhou
> Sent: Wednesday, July 2, 2025 1:29 AM
> To: FFmpeg development discussions and patches <ffmpeg-devel@ffmpeg.org>
> Subject: Re: [FFmpeg-devel] [PATCH 1/3] libavutil: add hwcontext_ni_quad
> 
> Hi everyone, this set of patches adds hwcontext, codec, and filter support for
> NETINT HW video processing units. NETINT has been offering HW video transcoding
> products since 2017 and has been developing on a fork of FFmpeg since then. But
> this year we have partnered with Akamai to make the NETINT Quadra video
> accelerator HW available on the Akamai Cloud:
> https://techdocs.akamai.com/cloud-computing/docs/accelerated-compute-instances
> 
> We hope our codecs and filters can become part of the main FFmpeg project to
> make accessing them easier for public cloud users.
> 
> If anyone is interested in accessing some demo environments with the Quadra
> HW to test out this patch, please email me directly.

The contents of these patches are also available as a fork on GitHub: https://github.com/netintsteven/NI_FF_upstream/tree/upstream
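For anyone who wants to try the new device type once the series is applied, here is a minimal sketch of opening it through the public hwcontext API (this assumes a libavutil built with this patch; per ni_device_create() below, the device string is the uploader ID as an integer >= -1, where -1 means load balance by pixel rate):

```c
#include <stdio.h>
#include <libavutil/buffer.h>
#include <libavutil/hwcontext.h>

int main(void)
{
    /* Look up the device type registered by this patch; returns
     * AV_HWDEVICE_TYPE_NONE on an unpatched libavutil. */
    enum AVHWDeviceType type = av_hwdevice_find_type_by_name("ni_quadra");
    if (type == AV_HWDEVICE_TYPE_NONE) {
        fprintf(stderr, "ni_quadra is not supported by this libavutil build\n");
        return 1;
    }

    /* Open the device. "-1" asks the driver to load balance by pixel rate;
     * a value >= 0 would select a specific card by module ID. */
    AVBufferRef *device_ref = NULL;
    int ret = av_hwdevice_ctx_create(&device_ref, type, "-1", NULL, 0);
    if (ret < 0) {
        fprintf(stderr, "failed to create ni_quadra device: %d\n", ret);
        return 1;
    }

    av_buffer_unref(&device_ref);
    return 0;
}
```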
> 
> > -----Original Message-----
> > From: ffmpeg-devel <ffmpeg-devel-bounces@ffmpeg.org> On Behalf Of
> > Steven Zhou
> > Sent: Wednesday, July 2, 2025 1:11 AM
> > To: ffmpeg-devel@ffmpeg.org
> > Subject: [FFmpeg-devel] [PATCH 1/3] libavutil: add hwcontext_ni_quad
> >
> > Add support for NETINT Quadra hardware video decoders, filters, and encoders
> > in the libavutil hwcontext framework. This commit enables compile-time
> > configuration for linking to Quadra's driver 'Libxcoder' and adds HW frames
> > support.
> >
> > More information:
> > https://netint.com/products/quadra-t1a-video-processing-unit/
> > https://docs.netint.com/vpu/quadra/
> >
> > Signed-off-by: Steven Zhou <steven.zhou@netint.ca>
> > ---
> >  configure                      |   13 +-
> >  libavutil/Makefile             |    3 +
> >  libavutil/hwcontext.c          |    4 +
> >  libavutil/hwcontext.h          |    1 +
> >  libavutil/hwcontext_internal.h |    1 +
> >  libavutil/hwcontext_ni_quad.c  | 1257 ++++++++++++++++++++++++++++++++
> >  libavutil/hwcontext_ni_quad.h  |   99 +++
> >  libavutil/pixdesc.c            |   15 +
> >  libavutil/pixfmt.h             |    8 +
> >  9 files changed, 1400 insertions(+), 1 deletion(-)
> >  create mode 100644 libavutil/hwcontext_ni_quad.c
> >  create mode 100644 libavutil/hwcontext_ni_quad.h
> >
> > diff --git a/configure b/configure
> > index 976a21b931..ca15d675d4 100755
> > --- a/configure
> > +++ b/configure
> > @@ -353,6 +353,7 @@ External library support:
> >    --enable-libvpl          enable Intel oneVPL code via libvpl if libmfx is not used [no]
> >    --enable-libnpp          enable Nvidia Performance Primitives-based code [no]
> >    --enable-mmal            enable Broadcom Multi-Media Abstraction Layer (Raspberry Pi) via MMAL [no]
> > +  --disable-ni_quadra      disable NetInt Quadra HWaccel codecs/filters [autodetect]
> >    --disable-nvdec          disable Nvidia video decoding acceleration (via hwaccel) [autodetect]
> >    --disable-nvenc          disable Nvidia video encoding code [autodetect]
> >    --enable-omx             enable OpenMAX IL code [no]
> > @@ -2015,6 +2016,7 @@ HWACCEL_AUTODETECT_LIBRARY_LIST="
> >      d3d12va
> >      dxva2
> >      ffnvcodec
> > +    ni_quadra
> >      libdrm
> >      nvdec
> >      nvenc
> > @@ -4105,7 +4107,7 @@ swscale_suggest="libm stdatomic"
> >
> >  avcodec_extralibs="pthreads_extralibs iconv_extralibs dxva2_extralibs liblcevc_dec_extralibs lcms2_extralibs"
> >  avfilter_extralibs="pthreads_extralibs"
> > -avutil_extralibs="d3d11va_extralibs d3d12va_extralibs mediacodec_extralibs nanosleep_extralibs pthreads_extralibs vaapi_drm_extralibs vaapi_x11_extralibs vaapi_win32_extralibs vdpau_x11_extralibs"
> > +avutil_extralibs="d3d11va_extralibs d3d12va_extralibs mediacodec_extralibs nanosleep_extralibs pthreads_extralibs vaapi_drm_extralibs vaapi_x11_extralibs vaapi_win32_extralibs vdpau_x11_extralibs ni_quadra_extralibs"
> >
> >  # programs
> >  ffmpeg_deps="avcodec avfilter avformat threads"
> > @@ -6941,6 +6943,15 @@ for func in $MATH_FUNCS; do
> >      eval check_mathfunc $func \${${func}_args:-1} $libm_extralibs
> >  done
> >
> > +# Auto-detect ni_quadra and check libxcoder API version
> > +if enabled ni_quadra; then
> > +    if ! check_pkg_config ni_quadra xcoder ni_device_api.h ni_device_open; then
> > +        disable_with_reason ni_quadra "libxcoder not found"
> > +    elif ! check_cpp_condition xcoder ni_defs.h "LIBXCODER_API_VERSION_MAJOR == 2 && LIBXCODER_API_VERSION_MINOR >= 77"; then
> > +        disable_with_reason ni_quadra "libxcoder API version must be >= 2.77"
> > +    fi
> > +fi
> > +
> >  # these are off by default, so fail if requested and not available
> >  enabled avisynth          && { require_headers "avisynth/avisynth_c.h avisynth/avs/version.h" &&
> >                                 { test_cpp_condition avisynth/avs/version.h "AVS_MAJOR_VER >= 3 && AVS_MINOR_VER >= 7 && AVS_BUGFIX_VER >= 3 || AVS_MAJOR_VER >= 3 && AVS_MINOR_VER > 7 || AVS_MAJOR_VER > 3" ||
> > diff --git a/libavutil/Makefile b/libavutil/Makefile
> > index 9ef118016b..e5b80e245c 100644
> > --- a/libavutil/Makefile
> > +++ b/libavutil/Makefile
> > @@ -50,6 +50,7 @@ HEADERS = adler32.h                                                     \
> >            hwcontext_amf.h                                               \
> >            hwcontext_qsv.h                                               \
> >            hwcontext_mediacodec.h                                        \
> > +          hwcontext_ni_quad.h                                           \
> >            hwcontext_opencl.h                                            \
> >            hwcontext_vaapi.h                                             \
> >            hwcontext_videotoolbox.h                                      \
> > @@ -208,6 +209,7 @@ OBJS-$(CONFIG_AMF)                      += hwcontext_amf.o
> >  OBJS-$(CONFIG_LIBDRM)                   += hwcontext_drm.o
> >  OBJS-$(CONFIG_MACOS_KPERF)              += macos_kperf.o
> >  OBJS-$(CONFIG_MEDIACODEC)               += hwcontext_mediacodec.o
> > +OBJS-$(CONFIG_NI_QUADRA)                += hwcontext_ni_quad.o
> >  OBJS-$(CONFIG_OPENCL)                   += hwcontext_opencl.o
> >  OBJS-$(CONFIG_QSV)                      += hwcontext_qsv.o
> >  OBJS-$(CONFIG_VAAPI)                    += hwcontext_vaapi.o
> > @@ -239,6 +241,7 @@ SKIPHEADERS-$(CONFIG_DXVA2)            += hwcontext_dxva2.h
> >  SKIPHEADERS-$(CONFIG_AMF)              += hwcontext_amf.h               \
> >                                            hwcontext_amf_internal.h
> >  SKIPHEADERS-$(CONFIG_QSV)              += hwcontext_qsv.h
> > +SKIPHEADERS-$(CONFIG_NI_QUADRA)        += hwcontext_ni_quad.h
> >  SKIPHEADERS-$(CONFIG_OPENCL)           += hwcontext_opencl.h
> >  SKIPHEADERS-$(CONFIG_VAAPI)            += hwcontext_vaapi.h
> >  SKIPHEADERS-$(CONFIG_VIDEOTOOLBOX)     += hwcontext_videotoolbox.h
> > diff --git a/libavutil/hwcontext.c b/libavutil/hwcontext.c
> > index 3111a44651..81cbdce168 100644
> > --- a/libavutil/hwcontext.c
> > +++ b/libavutil/hwcontext.c
> > @@ -54,6 +54,9 @@ static const HWContextType * const hw_table[] = {
> >  #if CONFIG_VAAPI
> >      &ff_hwcontext_type_vaapi,
> >  #endif
> > +#if CONFIG_NI_QUADRA
> > +    &ff_hwcontext_type_ni_quadra,
> > +#endif
> >  #if CONFIG_VDPAU
> >      &ff_hwcontext_type_vdpau,
> >  #endif
> > @@ -80,6 +83,7 @@ static const char *const hw_type_names[] = {
> >      [AV_HWDEVICE_TYPE_D3D12VA] = "d3d12va",
> >      [AV_HWDEVICE_TYPE_OPENCL] = "opencl",
> >      [AV_HWDEVICE_TYPE_QSV]    = "qsv",
> > +    [AV_HWDEVICE_TYPE_NI_QUADRA] = "ni_quadra",
> >      [AV_HWDEVICE_TYPE_VAAPI]  = "vaapi",
> >      [AV_HWDEVICE_TYPE_VDPAU]  = "vdpau",
> >      [AV_HWDEVICE_TYPE_VIDEOTOOLBOX] = "videotoolbox",
> > diff --git a/libavutil/hwcontext.h b/libavutil/hwcontext.h
> > index 96042ba197..7aa0993fe1 100644
> > --- a/libavutil/hwcontext.h
> > +++ b/libavutil/hwcontext.h
> > @@ -30,6 +30,7 @@ enum AVHWDeviceType {
> >      AV_HWDEVICE_TYPE_CUDA,
> >      AV_HWDEVICE_TYPE_VAAPI,
> >      AV_HWDEVICE_TYPE_DXVA2,
> > +    AV_HWDEVICE_TYPE_NI_QUADRA,
> >      AV_HWDEVICE_TYPE_QSV,
> >      AV_HWDEVICE_TYPE_VIDEOTOOLBOX,
> >      AV_HWDEVICE_TYPE_D3D11VA,
> > diff --git a/libavutil/hwcontext_internal.h b/libavutil/hwcontext_internal.h
> > index db23579c9e..9d6825b003 100644
> > --- a/libavutil/hwcontext_internal.h
> > +++ b/libavutil/hwcontext_internal.h
> > @@ -159,6 +159,7 @@ extern const HWContextType ff_hwcontext_type_dxva2;
> >  extern const HWContextType ff_hwcontext_type_opencl;
> >  extern const HWContextType ff_hwcontext_type_qsv;
> >  extern const HWContextType ff_hwcontext_type_vaapi;
> > +extern const HWContextType ff_hwcontext_type_ni_quadra;
> >  extern const HWContextType ff_hwcontext_type_vdpau;
> >  extern const HWContextType ff_hwcontext_type_videotoolbox;
> >  extern const HWContextType ff_hwcontext_type_mediacodec;
> > diff --git a/libavutil/hwcontext_ni_quad.c b/libavutil/hwcontext_ni_quad.c
> > new file mode 100644
> > index 0000000000..903ee0a253
> > --- /dev/null
> > +++ b/libavutil/hwcontext_ni_quad.c
> > @@ -0,0 +1,1257 @@
> > +/*
> > + * This file is part of FFmpeg.
> > + *
> > + * FFmpeg is free software; you can redistribute it and/or
> > + * modify it under the terms of the GNU Lesser General Public
> > + * License as published by the Free Software Foundation; either
> > + * version 2.1 of the License, or (at your option) any later version.
> > + *
> > + * FFmpeg is distributed in the hope that it will be useful,
> > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> > + * Lesser General Public License for more details.
> > + *
> > + * You should have received a copy of the GNU Lesser General Public
> > + * License along with FFmpeg; if not, write to the Free Software
> > + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
> > + */
> > +
> > +#include "config.h"
> > +
> > +#include <fcntl.h>
> > +#include <string.h>
> > +#if HAVE_UNISTD_H
> > +#include <unistd.h>
> > +#endif
> > +
> > +#include "avassert.h"
> > +#include "buffer.h"
> > +#include "common.h"
> > +#include "hwcontext.h"
> > +#include "hwcontext_internal.h"
> > +#include "hwcontext_ni_quad.h"
> > +#include "libavutil/imgutils.h"
> > +#include "mem.h"
> > +#include "pixdesc.h"
> > +#include "pixfmt.h"
> > +
> > +static enum AVPixelFormat supported_pixel_formats[] = {
> > +    AV_PIX_FMT_YUV420P, AV_PIX_FMT_YUYV422, AV_PIX_FMT_UYVY422,
> > +    AV_PIX_FMT_NV12,    AV_PIX_FMT_ARGB,    AV_PIX_FMT_RGBA,
> > +    AV_PIX_FMT_ABGR,    AV_PIX_FMT_BGRA,    AV_PIX_FMT_YUV420P10LE,
> > +    AV_PIX_FMT_NV16,    AV_PIX_FMT_BGR0,    AV_PIX_FMT_P010LE
> > +};
> > +
> > +static inline void ni_frame_free(void *opaque, uint8_t *data)
> > +{
> > +    if (data) {
> > +        niFrameSurface1_t* p_data3 = (niFrameSurface1_t*)data;
> > +        if (p_data3->ui16FrameIdx != 0) {
> > +            ni_hwframe_buffer_recycle(p_data3, p_data3->device_handle);
> > +        }
> > +        ni_aligned_free(p_data3);
> > +    }
> > +}
> > +
> > +static int ni_device_create(AVHWDeviceContext *ctx, const char *device,
> > +                            AVDictionary *opts, int flags)
> > +{
> > +    AVNIDeviceContext *ni_hw_ctx;
> > +    char *blk_name;
> > +    int i, module_id = 0, ret = 0;
> > +    ni_device_handle_t fd;
> > +    uint32_t max_io_size = NI_INVALID_IO_SIZE;
> > +    ni_device_t *p_ni_devices = NULL;
> > +
> > +    p_ni_devices = av_calloc(1, sizeof(ni_device_t));
> > +    if (p_ni_devices == NULL) {
> > +        av_log(ctx, AV_LOG_ERROR, "could not allocate memory for p_ni_devices in %s", __func__);
> > +        return AVERROR_UNKNOWN;
> > +    }
> > +
> > +    ni_hw_ctx = (AVNIDeviceContext *)ctx->hwctx;
> > +    ni_hw_ctx->uploader_handle = NI_INVALID_DEVICE_HANDLE;
> > +    ni_hw_ctx->uploader_ID = -2; // -1 is load balance by pixel rate,
> > +                                 // default -2 invalid
> > +
> > +    if (device) {
> > +        /* parse device string and fail if incorrect */
> > +        av_log(ctx, AV_LOG_VERBOSE, "%s %s\n", __func__, device);
> > +        ni_hw_ctx->uploader_ID = atoi(device);
> > +        av_log(ctx, AV_LOG_DEBUG, "%s: given uploader ID %d\n", __func__,
> > +               ni_hw_ctx->uploader_ID);
> > +        if (ni_hw_ctx->uploader_ID < -1) {
> > +            av_log(ctx, AV_LOG_ERROR, "%s: uploader ID %d must be >= -1.\n",
> > +                   __func__, ni_hw_ctx->uploader_ID);
> > +            ret = AVERROR_UNKNOWN;
> > +            LRETURN;
> > +        }
> > +    }
> > +
> > +    for (i = 0; i < NI_MAX_DEVICE_CNT; i++) {
> > +        ni_hw_ctx->cards[i] = NI_INVALID_DEVICE_HANDLE;
> > +    }
> > +
> > +    /* Scan all cards on the host, only look at NETINT cards */
> > +    if (ni_rsrc_list_all_devices(p_ni_devices) == NI_RETCODE_SUCCESS) {
> > +        // Note: this only checks for Netint encoders
> > +        for (i = 0; i < p_ni_devices->xcoder_cnt[NI_DEVICE_TYPE_ENCODER]; i++) {
> > +            blk_name = &(p_ni_devices->xcoders[NI_DEVICE_TYPE_ENCODER][i].blk_name[0]);
> > +            // one-to-one correspondence between card index and module_id
> > +            module_id = p_ni_devices->xcoders[NI_DEVICE_TYPE_ENCODER][i].module_id;
> > +            av_log(ctx, AV_LOG_DEBUG, "%s blk name %s\n", __func__, blk_name);
> > +            fd = ni_device_open(blk_name, &max_io_size);
> > +            if (fd != NI_INVALID_DEVICE_HANDLE) {
> > +                ni_hw_ctx->cards[module_id] = fd;
> > +            }
> > +        }
> > +    } else {
> > +        ret = AVERROR_UNKNOWN;
> > +    }
> > +END:
> > +    av_freep(&p_ni_devices);
> > +    return ret;
> > +}
> > +
> > +static void ni_device_uninit(AVHWDeviceContext *ctx)
> > +{
> > +    AVNIDeviceContext *ni_hw_ctx;
> > +    int i;
> > +
> > +    ni_hw_ctx = (AVNIDeviceContext *)ctx->hwctx;
> > +
> > +    av_log(ctx, AV_LOG_VERBOSE, "%s\n", __func__);
> > +
> > +    if (ni_hw_ctx->uploader_handle != NI_INVALID_DEVICE_HANDLE) {
> > +        ni_device_close(ni_hw_ctx->uploader_handle);
> > +        ni_hw_ctx->uploader_handle = NI_INVALID_DEVICE_HANDLE;
> > +    }
> > +
> > +    for (i = 0; i < NI_MAX_DEVICE_CNT; i++) {
> > +        ni_device_handle_t fd = ni_hw_ctx->cards[i];
> > +        if (fd != NI_INVALID_DEVICE_HANDLE) {
> > +            ni_hw_ctx->cards[i] = NI_INVALID_DEVICE_HANDLE;
> > +            ni_device_close(fd);
> > +        } else {
> > +            break;
> > +        }
> > +    }
> > +
> > +    return;
> > +}
> > +
> > +static int ni_frames_get_constraints(AVHWDeviceContext *ctx,
> > +                                     const void *hwconfig,
> > +                                     AVHWFramesConstraints *constraints)
> > +{
> > +    int i;
> > +    int num_pix_fmts_supported;
> > +
> > +    num_pix_fmts_supported = FF_ARRAY_ELEMS(supported_pixel_formats);
> > +
> > +    constraints->valid_sw_formats = av_malloc_array(num_pix_fmts_supported + 1,
> > +                                                    sizeof(*constraints->valid_sw_formats));
> > +    if (!constraints->valid_sw_formats) {
> > +        return AVERROR(ENOMEM);
> > +    }
> > +
> > +    for (i = 0; i < num_pix_fmts_supported; i++) {
> > +        constraints->valid_sw_formats[i] = supported_pixel_formats[i];
> > +    }
> > +    constraints->valid_sw_formats[num_pix_fmts_supported] = AV_PIX_FMT_NONE;
> > +
> > +    constraints->valid_hw_formats = av_malloc_array(2, sizeof(*constraints->valid_hw_formats));
> > +    if (!constraints->valid_hw_formats) {
> > +        return AVERROR(ENOMEM);
> > +    }
> > +
> > +    constraints->valid_hw_formats[0] = AV_PIX_FMT_NI_QUAD;
> > +    constraints->valid_hw_formats[1] = AV_PIX_FMT_NONE;
> > +
> > +    return 0;
> > +}
> > +
> > +static int ni_get_buffer(AVHWFramesContext *ctx, AVFrame *frame)
> > +{
> > +    int ret = 0;
> > +    uint8_t *buf;
> > +    uint32_t buf_size;
> > +    ni_frame_t *xfme;
> > +    AVNIFramesContext *f_hwctx = (AVNIFramesContext*) ctx->hwctx;
> > +    ni_session_data_io_t dst_session_io_data;
> > +    ni_session_data_io_t * p_dst_session_data = &dst_session_io_data;
> > +    bool isnv12frame = (ctx->sw_format == AV_PIX_FMT_NV12 ||
> > +                        ctx->sw_format == AV_PIX_FMT_P010LE);
> > +
> > +    av_log(ctx, AV_LOG_TRACE, "hwcontext_ni_quad.c: ni_get_buffer()\n");
> > +
> > +    // alloc dest avframe buff
> > +    memset(p_dst_session_data, 0, sizeof(dst_session_io_data));
> > +    ret = ni_frame_buffer_alloc(&p_dst_session_data->data.frame, ctx->width,
> > +                                ctx->height, 0, 1, // codec type does not matter, metadata exists
> > +                                f_hwctx->api_ctx.bit_depth_factor, 1, !isnv12frame);
> > +    if (ret != 0) {
> > +        return AVERROR(ENOMEM);
> > +    }
> > +
> > +    xfme = &p_dst_session_data->data.frame;
> > +    buf_size = xfme->data_len[0] + xfme->data_len[1] +
> > +               xfme->data_len[2] + xfme->data_len[3];
> > +    buf = xfme->p_data[0];
> > +    memset(buf, 0, buf_size);
> > +    frame->buf[0] = av_buffer_create(buf, buf_size, ni_frame_free, NULL, 0);
> > +    if (!frame->buf[0]) {
> > +        return AVERROR(ENOMEM);
> > +    }
> > +    frame->data[3] = xfme->p_buffer + xfme->data_len[0] + xfme->data_len[1] +
> > +                     xfme->data_len[2];
> > +    frame->format = AV_PIX_FMT_NI_QUAD;
> > +    frame->width = ctx->width;
> > +    frame->height = ctx->height;
> > +
> > +    return 0;
> > +}
> > +
> > +static int ni_transfer_get_formats(AVHWFramesContext *ctx,
> > +                                   enum AVHWFrameTransferDirection dir,
> > +                                   enum AVPixelFormat **formats)
> > +{
> > +    enum AVPixelFormat *fmts;
> > +
> > +    fmts = av_malloc_array(2, sizeof(*fmts));
> > +    if (!fmts) {
> > +        return AVERROR(ENOMEM);
> > +    }
> > +
> > +    fmts[0] = ctx->sw_format;
> > +    fmts[1] = AV_PIX_FMT_NONE;
> > +
> > +    *formats = fmts;
> > +
> > +    return 0;
> > +}
> > +
> > +static void ni_frames_uninit(AVHWFramesContext *ctx)
> > +{
> > +    AVNIFramesContext *f_hwctx = (AVNIFramesContext*) ctx->hwctx;
> > +    int dev_dec_idx = f_hwctx->uploader_device_id; // Supplied by init_hw_device ni=<name>:<id> or ni_hwupload=<id>
> > +
> > +    av_log(ctx, AV_LOG_DEBUG, "%s: only close if upload instance, poolsize=%d "
> > +                              "devid=%d\n",
> > +                              __func__, ctx->initial_pool_size, dev_dec_idx);
> > +
> > +    if (dev_dec_idx != -2 && ctx->initial_pool_size >= 0) {
> > +        if (f_hwctx->src_session_io_data.data.frame.buffer_size
> > +            || f_hwctx->src_session_io_data.data.frame.metadata_buffer_size
> > +            || f_hwctx->src_session_io_data.data.frame.start_buffer_size) {
> > +            av_log(ctx, AV_LOG_DEBUG, "%s:free upload src frame buffer\n",
> > +                 __func__);
> > +            ni_frame_buffer_free(&f_hwctx->src_session_io_data.data.frame);
> > +        }
> > +        av_log(ctx, AV_LOG_VERBOSE, "SessionID = %d!\n", f_hwctx->api_ctx.session_id);
> > +        if (f_hwctx->api_ctx.session_id != NI_INVALID_SESSION_ID) {
> > +            ni_device_session_close(&f_hwctx->api_ctx, 1, NI_DEVICE_TYPE_UPLOAD);
> > +        }
> > +        ni_device_session_context_clear(&f_hwctx->api_ctx);
> > +
> > +        //only upload frames init allocates these ones
> > +        av_freep(&f_hwctx->surface_ptrs);
> > +        av_freep(&f_hwctx->surfaces_internal);
> > +    } else {
> > +        ni_device_session_context_clear(&f_hwctx->api_ctx);
> > +    }
> > +
> > +    if (f_hwctx->suspended_device_handle != NI_INVALID_DEVICE_HANDLE) {
> > +        av_log(ctx, AV_LOG_DEBUG, "%s: close file handle, =%d\n",
> > +               __func__, f_hwctx->suspended_device_handle);
> > +        ni_device_close(f_hwctx->suspended_device_handle);
> > +        f_hwctx->suspended_device_handle = NI_INVALID_DEVICE_HANDLE;
> > +    }
> > +}
> > +
> > +static AVBufferRef *ni_pool_alloc(void *opaque, size_t size)
> > +{
> > +    AVHWFramesContext *ctx = (AVHWFramesContext*)opaque;
> > +    AVNIFramesContext *f_hwctx = (AVNIFramesContext*) ctx->hwctx;
> > +
> > +    if (f_hwctx->nb_surfaces_used < f_hwctx->nb_surfaces) {
> > +        f_hwctx->nb_surfaces_used++;
> > +        return av_buffer_create((uint8_t*) (f_hwctx->surfaces_internal + f_hwctx->nb_surfaces_used - 1),
> > +                                sizeof(*f_hwctx->surfaces), NULL, NULL, 0);
> > +    }
> > +
> > +    return NULL;
> > +}
> > +
> > +static int ni_init_surface(AVHWFramesContext *ctx, niFrameSurface1_t *surf)
> > +{
> > +    /* Fill with dummy values. This data is never used. */
> > +    surf->ui16FrameIdx    = 0;
> > +    surf->ui16session_ID  = 0;
> > +    surf->ui32nodeAddress = 0;
> > +    surf->device_handle   = 0;
> > +    surf->bit_depth       = 0;
> > +    surf->encoding_type   = 0;
> > +    surf->output_idx      = 0;
> > +    surf->src_cpu         = 0;
> > +
> > +    return 0;
> > +}
> > +
> > +static int ni_init_pool(AVHWFramesContext *ctx)
> > +{
> > +    AVNIFramesContext *f_hwctx = (AVNIFramesContext*) ctx->hwctx;
> > +    int i, ret;
> > +
> > +    av_log(ctx, AV_LOG_VERBOSE, "ctx->initial_pool_size = %d\n", ctx->initial_pool_size);
> > +
> > +    if (ctx->initial_pool_size <= 0) {
> > +        av_log(ctx, AV_LOG_ERROR, "NI requires a fixed frame pool size\n");
> > +        return AVERROR(EINVAL);
> > +    }
> > +
> > +    f_hwctx->surfaces_internal = av_calloc(ctx->initial_pool_size,
> > +                                           sizeof(*f_hwctx->surfaces_internal));
> > +    if (!f_hwctx->surfaces_internal) {
> > +        return AVERROR(ENOMEM);
> > +    }
> > +
> > +    for (i = 0; i < ctx->initial_pool_size; i++) {
> > +        ret = ni_init_surface(ctx, &f_hwctx->surfaces_internal[i]);
> > +        if (ret < 0) {
> > +            return ret;
> > +        }
> > +    }
> > +
> > +    ffhwframesctx(ctx)->pool_internal =
> > +        av_buffer_pool_init2(sizeof(niFrameSurface1_t), ctx, ni_pool_alloc, NULL);
> > +    if (!ffhwframesctx(ctx)->pool_internal) {
> > +        return AVERROR(ENOMEM);
> > +    }
> > +
> > +    f_hwctx->surfaces = f_hwctx->surfaces_internal;
> > +    f_hwctx->nb_surfaces = ctx->initial_pool_size;
> > +
> > +    return 0;
> > +}
> > +
> > +static int ni_init_internal_session(AVHWFramesContext *ctx)
> > +{
> > +    AVNIFramesContext *f_hwctx = (AVNIFramesContext*) ctx->hwctx;
> > +    ni_log_set_level(ff_to_ni_log_level(av_log_get_level()));
> > +    av_log(ctx, AV_LOG_INFO, "hwcontext_ni:ni_init_internal_session()\n");
> > +    if (ni_device_session_context_init(&(f_hwctx->api_ctx)) < 0) {
> > +        av_log(ctx, AV_LOG_ERROR, "ni init context failure\n");
> > +        return -1;
> > +    }
> > +
> > +    return 0;
> > +}
> > +
> > +static void init_split_rsrc(AVNIFramesContext *f_hwctx, int w, int h)
> > +{
> > +    int i;
> > +    ni_split_context_t* p_split_ctx = &f_hwctx->split_ctx;
> > +    memset(p_split_ctx, 0, sizeof(ni_split_context_t));
> > +    for (i = 0; i < 3; i++) {
> > +        p_split_ctx->w[i] = w;
> > +        p_split_ctx->h[i] = h;
> > +        p_split_ctx->f[i] = -1;
> > +    }
> > +}
> > +
> > +static int ni_frames_init(AVHWFramesContext *ctx) // hwupload runs this on hwupload_config_output
> > +{
> > +    AVNIFramesContext *f_hwctx = (AVNIFramesContext*) ctx->hwctx;
> > +    AVNIDeviceContext *device_hwctx = (AVNIDeviceContext*) ctx->device_ctx->hwctx;
> > +    int linesize_aligned, height_aligned;
> > +    int pool_size, ret;
> > +
> > +    av_log(ctx, AV_LOG_INFO, "%s: Enter, supplied poolsize = %d, devid=%d\n",
> > +           __func__, ctx->initial_pool_size, device_hwctx->uploader_ID);
> > +
> > +    f_hwctx->suspended_device_handle = NI_INVALID_DEVICE_HANDLE;
> > +    f_hwctx->uploader_device_id = -2; // -1 is load balance by pixel rate,
> > +                                      // default -2 invalid
> > +    pool_size = ctx->initial_pool_size;
> > +    if (device_hwctx->uploader_ID < -1) {
> > +        if (pool_size > -1) { // ffmpeg does not specify init_hw_device for decoder
> > +                              // - so decoder device_hwctx->uploader_ID is always -1
> > +            av_log(ctx, AV_LOG_INFO, "%s no uploader device selected!\n",
> > +                   __func__);
> > +            return AVERROR(EINVAL);
> > +        }
> > +    }
> > +
> > +    ret = ni_init_internal_session(ctx);
> > +    if (ret < 0) {
> > +        return AVERROR(EINVAL);
> > +    }
> > +
> > +    init_split_rsrc(f_hwctx, ctx->width, ctx->height);
> > +    if (pool_size <= -1) { // Non-upload init returns here
> > +        av_log(ctx, AV_LOG_INFO, "%s: poolsize code %d, this code requires no host pool\n",
> > +               __func__, pool_size);
> > +        return ret;
> > +    } else if (pool_size == 0) {
> > +        pool_size = ctx->initial_pool_size = 3;
> > +        av_log(ctx, AV_LOG_INFO, "%s: Pool_size autoset to %d\n", __func__, pool_size);
> > +    }
> > +
> > +    /* Kept in AVNIFramesContext for future reference, the AVNIDeviceContext data member gets overwritten */
> > +    f_hwctx->uploader_device_id = device_hwctx->uploader_ID;
> > +
> > +    if ((ctx->width & 0x1) || (ctx->height & 0x1)) {
> > +        av_log(ctx, AV_LOG_ERROR, "Odd resolution %dx%d not permitted\n",
> > +               ctx->width, ctx->height);
> > +        return AVERROR(EINVAL);
> > +    }
> > +
> > +    linesize_aligned = NI_VPU_CEIL(ctx->width, 2);
> > +    ctx->width = linesize_aligned;
> > +
> > +    height_aligned = ctx->height;
> > +    ctx->height = NI_VPU_CEIL(height_aligned, 2);
> > +
> > +    f_hwctx->api_ctx.active_video_width = ctx->width;
> > +    f_hwctx->api_ctx.active_video_height = ctx->height;
> > +
> > +    switch (ctx->sw_format) {
> > +        case AV_PIX_FMT_YUV420P:
> > +            f_hwctx->api_ctx.bit_depth_factor = 1;
> > +            f_hwctx->api_ctx.src_bit_depth = 8;
> > +            f_hwctx->api_ctx.pixel_format = NI_PIX_FMT_YUV420P;
> > +            break;
> > +        case AV_PIX_FMT_YUV420P10LE:
> > +            f_hwctx->api_ctx.bit_depth_factor = 2;
> > +            f_hwctx->api_ctx.src_bit_depth = 10;
> > +            f_hwctx->api_ctx.src_endian = NI_FRAME_LITTLE_ENDIAN;
> > +            f_hwctx->api_ctx.pixel_format = NI_PIX_FMT_YUV420P10LE;
> > +            break;
> > +        case AV_PIX_FMT_NV12:
> > +            f_hwctx->api_ctx.bit_depth_factor = 1;
> > +            f_hwctx->api_ctx.src_bit_depth = 8;
> > +            f_hwctx->api_ctx.pixel_format = NI_PIX_FMT_NV12;
> > +            break;
> > +        case AV_PIX_FMT_P010LE:
> > +            f_hwctx->api_ctx.bit_depth_factor = 2;
> > +            f_hwctx->api_ctx.src_bit_depth = 10;
> > +            f_hwctx->api_ctx.pixel_format = NI_PIX_FMT_P010LE;
> > +            f_hwctx->api_ctx.src_endian = NI_FRAME_LITTLE_ENDIAN;
> > +            break;
> > +        case AV_PIX_FMT_RGBA:
> > +            f_hwctx->api_ctx.bit_depth_factor = 4;
> > +            f_hwctx->api_ctx.src_bit_depth    = 32;
> > +            f_hwctx->api_ctx.src_endian       = NI_FRAME_LITTLE_ENDIAN;
> > +            f_hwctx->api_ctx.pixel_format     = NI_PIX_FMT_RGBA;
> > +            break;
> > +        case AV_PIX_FMT_BGRA:
> > +            f_hwctx->api_ctx.bit_depth_factor = 4;
> > +            f_hwctx->api_ctx.src_bit_depth    = 32;
> > +            f_hwctx->api_ctx.src_endian       = NI_FRAME_LITTLE_ENDIAN;
> > +            f_hwctx->api_ctx.pixel_format     = NI_PIX_FMT_BGRA;
> > +            break;
> > +        case AV_PIX_FMT_ABGR:
> > +            f_hwctx->api_ctx.bit_depth_factor = 4;
> > +            f_hwctx->api_ctx.src_bit_depth    = 32;
> > +            f_hwctx->api_ctx.src_endian       = NI_FRAME_LITTLE_ENDIAN;
> > +            f_hwctx->api_ctx.pixel_format     = NI_PIX_FMT_ABGR;
> > +            break;
> > +        case AV_PIX_FMT_ARGB:
> > +            f_hwctx->api_ctx.bit_depth_factor = 4;
> > +            f_hwctx->api_ctx.src_bit_depth    = 32;
> > +            f_hwctx->api_ctx.src_endian       = NI_FRAME_LITTLE_ENDIAN;
> > +            f_hwctx->api_ctx.pixel_format     = NI_PIX_FMT_ARGB;
> > +            break;
> > +        case AV_PIX_FMT_BGR0:
> > +            f_hwctx->api_ctx.bit_depth_factor = 4;
> > +            f_hwctx->api_ctx.src_bit_depth    = 32;
> > +            f_hwctx->api_ctx.src_endian       = NI_FRAME_LITTLE_ENDIAN;
> > +            f_hwctx->api_ctx.pixel_format     = NI_PIX_FMT_BGR0;
> > +            break;
> > +        case AV_PIX_FMT_YUYV422:
> > +            f_hwctx->api_ctx.bit_depth_factor = 1;
> > +            f_hwctx->api_ctx.src_bit_depth    = 8;
> > +            f_hwctx->api_ctx.src_endian       = NI_FRAME_LITTLE_ENDIAN;
> > +            f_hwctx->api_ctx.pixel_format     = NI_PIX_FMT_YUYV422;
> > +            break;
> > +        case AV_PIX_FMT_UYVY422:
> > +            f_hwctx->api_ctx.bit_depth_factor = 1;
> > +            f_hwctx->api_ctx.src_bit_depth    = 8;
> > +            f_hwctx->api_ctx.src_endian       = NI_FRAME_LITTLE_ENDIAN;
> > +            f_hwctx->api_ctx.pixel_format     = NI_PIX_FMT_UYVY422;
> > +            break;
> > +        case AV_PIX_FMT_NV16:
> > +            f_hwctx->api_ctx.bit_depth_factor = 1;
> > +            f_hwctx->api_ctx.src_bit_depth    = 8;
> > +            f_hwctx->api_ctx.src_endian       = NI_FRAME_LITTLE_ENDIAN;
> > +            f_hwctx->api_ctx.pixel_format     = NI_PIX_FMT_NV16;
> > +            break;
> > +        default:
> > +            av_log(ctx, AV_LOG_ERROR, "Pixel format not supported by device.\n");
> > +            return AVERROR(EINVAL);
> > +    }
> > +
> > +    if (ctx->width > NI_MAX_RESOLUTION_WIDTH ||
> > +        ctx->height > NI_MAX_RESOLUTION_HEIGHT ||
> > +        ctx->width * ctx->height > NI_MAX_RESOLUTION_AREA) {
> > +        av_log(ctx, AV_LOG_ERROR, "Error XCoder resolution %dx%d not supported\n",
> > +               ctx->width, ctx->height);
> > +        av_log(ctx, AV_LOG_ERROR, "Max Supported Width: %d Height %d Area %d\n",
> > +               NI_MAX_RESOLUTION_WIDTH, NI_MAX_RESOLUTION_HEIGHT, NI_MAX_RESOLUTION_AREA);
> > +        return AVERROR_EXTERNAL;
> > +    } else if (f_hwctx->uploader_device_id >= -1) {
> > +        // leave it to ni_device_session_open to handle uploader session open
> > +        // based on api_ctx.hw_id set to proper value
> > +    } else {
> > +        av_log(ctx, AV_LOG_ERROR, "Error XCoder command line options\n");
> > +        return AVERROR(EINVAL);
> > +    }
> > +
> > +    av_log(ctx, AV_LOG_VERBOSE,
> > +           "pixel sw_format=%d width = %d height = %d outformat=%d "
> > +           "uploader_device_id=%d\n",
> > +           ctx->sw_format, ctx->width, ctx->height, ctx->format,
> > +           f_hwctx->uploader_device_id);
> > +
> > +    f_hwctx->api_ctx.hw_id = f_hwctx->uploader_device_id;
> > +    f_hwctx->api_ctx.keep_alive_timeout = f_hwctx->keep_alive_timeout;
> > +    if (0 == f_hwctx->api_ctx.keep_alive_timeout) {
> > +        f_hwctx->api_ctx.keep_alive_timeout = NI_DEFAULT_KEEP_ALIVE_TIMEOUT;
> > +    }
> > +
> > +    f_hwctx->api_ctx.framerate.framerate_num = f_hwctx->framerate.num;
> > +    f_hwctx->api_ctx.framerate.framerate_denom = f_hwctx->framerate.den;
> > +
> > +    ret = ni_device_session_open(&f_hwctx->api_ctx, NI_DEVICE_TYPE_UPLOAD);
> > +    if (ret != NI_RETCODE_SUCCESS) {
> > +        av_log(ctx, AV_LOG_ERROR, "Error Something wrong in xcoder
> open\n");
> > +        ni_frames_uninit(ctx);
> > +        return AVERROR_EXTERNAL;
> > +    } else {
> > +        av_log(ctx, AV_LOG_VERBOSE,
> > +               "XCoder %s.%d (inst: %d) opened successfully\n",
> > +               f_hwctx->api_ctx.dev_xcoder_name, f_hwctx->api_ctx.hw_id,
> > +               f_hwctx->api_ctx.session_id);
> > +#ifndef _WIN32
> > +        // replace device_handle with blk_io_handle
> > +        ni_device_close(f_hwctx->api_ctx.device_handle);
> > +        f_hwctx->api_ctx.device_handle = f_hwctx->api_ctx.blk_io_handle;
> > +#endif
> > +        // save blk_io_handle for tracking
> > +        device_hwctx->uploader_handle = f_hwctx->api_ctx.blk_io_handle;
> > +    }
> > +    memset(&f_hwctx->src_session_io_data, 0, sizeof(ni_session_data_io_t));
> > +
> > +    ret = ni_device_session_init_framepool(&f_hwctx->api_ctx, pool_size,
> > +                                           NI_UPLOADER_FLAG_LM);
> > +    if (ret < 0) {
> > +        return ret;
> > +    }
> > +
> > +    if (!ctx->pool) {
> > +        ret = ni_init_pool(ctx);
> > +        if (ret < 0) {
> > +            av_log(ctx, AV_LOG_ERROR, "Error creating an internal frame
> pool\n");
> > +            return ret;
> > +        }
> > +    }
> > +    return 0;
> > +}
> > +
> > +static int ni_to_avframe_copy(AVHWFramesContext *hwfc, AVFrame *dst,
> > +                              const ni_frame_t *src)
> > +{
> > +    int src_linesize[4], src_height[4];
> > +    int i, h, nb_planes;
> > +    uint8_t *src_line, *dst_line;
> > +
> > +    nb_planes = av_pix_fmt_count_planes(hwfc->sw_format);
> > +
> > +    switch (hwfc->sw_format) {
> > +    case AV_PIX_FMT_YUV420P:
> > +        src_linesize[0] = FFALIGN(dst->width, 128);
> > +        src_linesize[1] = FFALIGN(dst->width / 2, 128);
> > +        src_linesize[2] = src_linesize[1];
> > +        src_linesize[3] = 0;
> > +
> > +        src_height[0] = dst->height;
> > +        src_height[1] = FFALIGN(dst->height, 2) / 2;
> > +        src_height[2] = src_height[1];
> > +        src_height[3] = 0;
> > +        break;
> > +
> > +    case AV_PIX_FMT_YUV420P10LE:
> > +        src_linesize[0] = FFALIGN(dst->width * 2, 128);
> > +        src_linesize[1] = FFALIGN(dst->width, 128);
> > +        src_linesize[2] = src_linesize[1];
> > +        src_linesize[3] = 0;
> > +
> > +        src_height[0] = dst->height;
> > +        src_height[1] = FFALIGN(dst->height, 2) / 2;
> > +        src_height[2] = src_height[1];
> > +        src_height[3] = 0;
> > +        break;
> > +
> > +    case AV_PIX_FMT_NV12:
> > +        src_linesize[0] = FFALIGN(dst->width, 128);
> > +        src_linesize[1] = FFALIGN(dst->width, 128);
> > +        src_linesize[2] = 0;
> > +        src_linesize[3] = 0;
> > +
> > +        src_height[0] = dst->height;
> > +        src_height[1] = FFALIGN(dst->height, 2) / 2;
> > +        src_height[2] = 0;
> > +        src_height[3] = 0;
> > +        break;
> > +
> > +    case AV_PIX_FMT_NV16:
> > +        src_linesize[0] = FFALIGN(dst->width, 64);
> > +        src_linesize[1] = FFALIGN(dst->width, 64);
> > +        src_linesize[2] = 0;
> > +        src_linesize[3] = 0;
> > +
> > +        src_height[0] = dst->height;
> > +        src_height[1] = dst->height;
> > +        src_height[2] = 0;
> > +        src_height[3] = 0;
> > +        break;
> > +
> > +    case AV_PIX_FMT_YUYV422:
> > +    case AV_PIX_FMT_UYVY422:
> > +        src_linesize[0] = FFALIGN(dst->width, 16) * 2;
> > +        src_linesize[1] = 0;
> > +        src_linesize[2] = 0;
> > +        src_linesize[3] = 0;
> > +
> > +        src_height[0] = dst->height;
> > +        src_height[1] = 0;
> > +        src_height[2] = 0;
> > +        src_height[3] = 0;
> > +        break;
> > +
> > +    case AV_PIX_FMT_P010LE:
> > +        src_linesize[0] = FFALIGN(dst->width * 2, 128);
> > +        src_linesize[1] = FFALIGN(dst->width * 2, 128);
> > +        src_linesize[2] = 0;
> > +        src_linesize[3] = 0;
> > +
> > +        src_height[0] = dst->height;
> > +        src_height[1] = FFALIGN(dst->height, 2) / 2;
> > +        src_height[2] = 0;
> > +        src_height[3] = 0;
> > +        break;
> > +
> > +    case AV_PIX_FMT_RGBA:
> > +    case AV_PIX_FMT_BGRA:
> > +    case AV_PIX_FMT_ABGR:
> > +    case AV_PIX_FMT_ARGB:
> > +    case AV_PIX_FMT_BGR0:
> > +        src_linesize[0] = FFALIGN(dst->width, 16) * 4;
> > +        src_linesize[1] = 0;
> > +        src_linesize[2] = 0;
> > +        src_linesize[3] = 0;
> > +
> > +        src_height[0] = dst->height;
> > +        src_height[1] = 0;
> > +        src_height[2] = 0;
> > +        src_height[3] = 0;
> > +        break;
> > +
> > +    default:
> > +        av_log(hwfc, AV_LOG_ERROR, "Unsupported pixel format %s\n",
> > +               av_get_pix_fmt_name(hwfc->sw_format));
> > +        return AVERROR(EINVAL);
> > +    }
> > +
> > +    for (i = 0; i < nb_planes; i++) {
> > +        dst_line = dst->data[i];
> > +        src_line = src->p_data[i];
> > +
> > +        for (h = 0; h < src_height[i]; h++) {
> > +            memcpy(dst_line, src_line,
> > +                   FFMIN(src_linesize[i], dst->linesize[i]));
> > +            dst_line += dst->linesize[i];
> > +            src_line += src_linesize[i];
> > +        }
> > +    }
> > +
> > +    return 0;
> > +}
> > +
> > +static int av_to_niframe_copy(AVHWFramesContext *hwfc, const int dst_stride[4],
> > +                              ni_frame_t *dst, const AVFrame *src) {
> > +    int src_height[4], hpad[4], vpad[4];
> > +    int i, j, h, nb_planes;
> > +    uint8_t *src_line, *dst_line, YUVsample, *sample, *dest;
> > +    uint16_t lastidx;
> > +    bool tenBit;
> > +
> > +    nb_planes = av_pix_fmt_count_planes(hwfc->sw_format);
> > +
> > +    switch (src->format) {
> > +    case AV_PIX_FMT_YUV420P:
> > +        hpad[0] = FFMAX(dst_stride[0] - src->linesize[0], 0);
> > +        hpad[1] = FFMAX(dst_stride[1] - src->linesize[1], 0);
> > +        hpad[2] = FFMAX(dst_stride[2] - src->linesize[2], 0);
> > +        hpad[3] = 0;
> > +
> > +        src_height[0] = src->height;
> > +        src_height[1] = FFALIGN(src->height, 2) / 2;
> > +        src_height[2] = FFALIGN(src->height, 2) / 2;
> > +        src_height[3] = 0;
> > +
> > +        vpad[0] = FFALIGN(src_height[0], 2) - src_height[0];
> > +        vpad[1] = FFALIGN(src_height[1], 2) - src_height[1];
> > +        vpad[2] = FFALIGN(src_height[2], 2) - src_height[2];
> > +        vpad[3] = 0;
> > +
> > +        tenBit = false;
> > +        break;
> > +
> > +    case AV_PIX_FMT_YUV420P10LE:
> > +        hpad[0] = FFMAX(dst_stride[0] - src->linesize[0], 0);
> > +        hpad[1] = FFMAX(dst_stride[1] - src->linesize[1], 0);
> > +        hpad[2] = FFMAX(dst_stride[2] - src->linesize[2], 0);
> > +        hpad[3] = 0;
> > +
> > +        src_height[0] = src->height;
> > +        src_height[1] = FFALIGN(src->height, 2) / 2;
> > +        src_height[2] = FFALIGN(src->height, 2) / 2;
> > +        src_height[3] = 0;
> > +
> > +        vpad[0] = FFALIGN(src_height[0], 2) - src_height[0];
> > +        vpad[1] = FFALIGN(src_height[1], 2) - src_height[1];
> > +        vpad[2] = FFALIGN(src_height[2], 2) - src_height[2];
> > +        vpad[3] = 0;
> > +
> > +        tenBit = true;
> > +        break;
> > +
> > +    case AV_PIX_FMT_NV12:
> > +        hpad[0] = FFMAX(dst_stride[0] - src->linesize[0], 0);
> > +        hpad[1] = FFMAX(dst_stride[1] - src->linesize[1], 0);
> > +        hpad[2] = 0;
> > +        hpad[3] = 0;
> > +
> > +        src_height[0] = src->height;
> > +        src_height[1] = FFALIGN(src->height, 2) / 2;
> > +        src_height[2] = 0;
> > +        src_height[3] = 0;
> > +
> > +        vpad[0] = FFALIGN(src_height[0], 2) - src_height[0];
> > +        vpad[1] = FFALIGN(src_height[1], 2) - src_height[1];
> > +        vpad[2] = 0;
> > +        vpad[3] = 0;
> > +
> > +        tenBit = false;
> > +        break;
> > +    case AV_PIX_FMT_NV16:
> > +        hpad[0] = 0;
> > +        hpad[1] = 0;
> > +        hpad[2] = 0;
> > +        hpad[3] = 0;
> > +
> > +        src_height[0] = src->height;
> > +        src_height[1] = src->height;
> > +        src_height[2] = 0;
> > +        src_height[3] = 0;
> > +
> > +        vpad[0] = 0;
> > +        vpad[1] = 0;
> > +        vpad[2] = 0;
> > +        vpad[3] = 0;
> > +
> > +        tenBit = false;
> > +        break;
> > +
> > +    case AV_PIX_FMT_P010LE:
> > +        hpad[0] = FFMAX(dst_stride[0] - src->linesize[0], 0);
> > +        hpad[1] = FFMAX(dst_stride[1] - src->linesize[1], 0);
> > +        hpad[2] = 0;
> > +        hpad[3] = 0;
> > +
> > +        src_height[0] = src->height;
> > +        src_height[1] = FFALIGN(src->height, 2) / 2;
> > +        src_height[2] = 0;
> > +        src_height[3] = 0;
> > +
> > +        vpad[0] = FFALIGN(src_height[0], 2) - src_height[0];
> > +        vpad[1] = FFALIGN(src_height[1], 2) - src_height[1];
> > +        vpad[2] = 0;
> > +        vpad[3] = 0;
> > +
> > +        tenBit = true;
> > +        break;
> > +
> > +    case AV_PIX_FMT_RGBA:
> > +    case AV_PIX_FMT_BGRA:
> > +    case AV_PIX_FMT_ABGR:
> > +    case AV_PIX_FMT_ARGB:
> > +    case AV_PIX_FMT_BGR0:
> > +        hpad[0] = FFMAX(dst_stride[0] - src->linesize[0], 0);
> > +        hpad[1] = 0;
> > +        hpad[2] = 0;
> > +        hpad[3] = 0;
> > +
> > +        src_height[0] = src->height;
> > +        src_height[1] = 0;
> > +        src_height[2] = 0;
> > +        src_height[3] = 0;
> > +
> > +        vpad[0] = 0;
> > +        vpad[1] = 0;
> > +        vpad[2] = 0;
> > +        vpad[3] = 0;
> > +
> > +        tenBit = false;
> > +        break;
> > +
> > +    case AV_PIX_FMT_YUYV422:
> > +    case AV_PIX_FMT_UYVY422:
> > +        hpad[0] = FFMAX(dst_stride[0] - src->linesize[0], 0);
> > +        hpad[1] = 0;
> > +        hpad[2] = 0;
> > +        hpad[3] = 0;
> > +
> > +        src_height[0] = src->height;
> > +        src_height[1] = 0;
> > +        src_height[2] = 0;
> > +        src_height[3] = 0;
> > +
> > +        vpad[0] = 0;
> > +        vpad[1] = 0;
> > +        vpad[2] = 0;
> > +        vpad[3] = 0;
> > +
> > +        tenBit = false;
> > +        break;
> > +
> > +    default:
> > +        av_log(hwfc, AV_LOG_ERROR, "Pixel format %s not supported\n",
> > +               av_get_pix_fmt_name(src->format));
> > +        return AVERROR(EINVAL);
> > +    }
> > +
> > +    for (i = 0; i < nb_planes; i++) {
> > +        dst_line = dst->p_data[i];
> > +        src_line = src->data[i];
> > +
> > +        for (h = 0; h < src_height[i]; h++) {
> > +            memcpy(dst_line, src_line, FFMIN(src->linesize[i], dst_stride[i]));
> > +
> > +            if (hpad[i]) {
> > +                lastidx = src->linesize[i];
> > +
> > +                if (tenBit) {
> > +                    sample = &src_line[lastidx - 2];
> > +                    dest   = &dst_line[lastidx];
> > +
> > +                    /* two bytes per sample */
> > +                    for (j = 0; j < hpad[i] / 2; j++) {
> > +                        memcpy(dest, sample, 2);
> > +                        dest += 2;
> > +                    }
> > +                } else {
> > +                    YUVsample = dst_line[lastidx - 1];
> > +                    memset(&dst_line[lastidx], YUVsample, hpad[i]);
> > +                }
> > +            }
> > +
> > +            src_line += src->linesize[i];
> > +            dst_line += dst_stride[i];
> > +        }
> > +
> > +        /* Extend the height by cloning the last line */
> > +        src_line = dst_line - dst_stride[i];
> > +        for (h = 0; h < vpad[i]; h++) {
> > +            memcpy(dst_line, src_line, dst_stride[i]);
> > +            dst_line += dst_stride[i];
> > +        }
> > +    }
> > +
> > +    return 0;
> > +}
> > +
> > +static int ni_hwdl_frame(AVHWFramesContext *hwfc, AVFrame *dst,
> > +                         const AVFrame *src)
> > +{
> > +    AVNIFramesContext *f_hwctx = (AVNIFramesContext*) hwfc->hwctx;
> > +    ni_session_data_io_t session_io_data;
> > +    ni_session_data_io_t *p_session_data = &session_io_data;
> > +    niFrameSurface1_t *src_surf = (niFrameSurface1_t *)src->data[3];
> > +    int ret;
> > +    int pixel_format;
> > +
> > +    memset(&session_io_data, 0, sizeof(ni_session_data_io_t));
> > +
> > +    av_log(hwfc, AV_LOG_VERBOSE,
> > +           "%s handle %d trace ui16FrameIdx = [%d] SID %d\n", __func__,
> > +           src_surf->device_handle, src_surf->ui16FrameIdx,
> > +           src_surf->ui16session_ID);
> > +
> > +    av_log(hwfc, AV_LOG_DEBUG, "%s hwdl processed h/w = %d/%d\n",
> > __func__,
> > +           src->height, src->width);
> > +
> > +    switch (hwfc->sw_format) {
> > +    case AV_PIX_FMT_YUV420P:
> > +        pixel_format = NI_PIX_FMT_YUV420P;
> > +        break;
> > +    case AV_PIX_FMT_YUV420P10LE:
> > +        pixel_format = NI_PIX_FMT_YUV420P10LE;
> > +        break;
> > +    case AV_PIX_FMT_NV12:
> > +        pixel_format = NI_PIX_FMT_NV12;
> > +        break;
> > +    case AV_PIX_FMT_NV16:
> > +        pixel_format = NI_PIX_FMT_NV16;
> > +        break;
> > +    case AV_PIX_FMT_YUYV422:
> > +        pixel_format = NI_PIX_FMT_YUYV422;
> > +        break;
> > +    case AV_PIX_FMT_UYVY422:
> > +        pixel_format = NI_PIX_FMT_UYVY422;
> > +        break;
> > +    case AV_PIX_FMT_P010LE:
> > +        pixel_format = NI_PIX_FMT_P010LE;
> > +        break;
> > +    case AV_PIX_FMT_RGBA:
> > +        pixel_format = NI_PIX_FMT_RGBA;
> > +        break;
> > +    case AV_PIX_FMT_BGRA:
> > +        pixel_format = NI_PIX_FMT_BGRA;
> > +        break;
> > +    case AV_PIX_FMT_ABGR:
> > +        pixel_format = NI_PIX_FMT_ABGR;
> > +        break;
> > +    case AV_PIX_FMT_ARGB:
> > +        pixel_format = NI_PIX_FMT_ARGB;
> > +        break;
> > +    case AV_PIX_FMT_BGR0:
> > +        pixel_format = NI_PIX_FMT_BGR0;
> > +        break;
> > +    default:
> > +        av_log(hwfc, AV_LOG_ERROR, "Pixel format %s not supported\n",
> > +               av_get_pix_fmt_name(hwfc->sw_format));
> > +        return AVERROR(EINVAL);
> > +    }
> > +
> > +    ret = ni_frame_buffer_alloc_dl(&(p_session_data->data.frame), src->width,
> > +                                   src->height, pixel_format);
> > +    if (ret != NI_RETCODE_SUCCESS) {
> > +        av_log(hwfc, AV_LOG_ERROR, "%s Cannot allocate ni_frame\n",
> > __func__);
> > +        return AVERROR(ENOMEM);
> > +    }
> > +
> > +    f_hwctx->api_ctx.is_auto_dl = false;
> > +    ret = ni_device_session_hwdl(&f_hwctx->api_ctx, p_session_data, src_surf);
> > +    if (ret <= 0) {
> > +        av_log(hwfc, AV_LOG_DEBUG, "%s failed to retrieve frame\n",
> __func__);
> > +        ni_frame_buffer_free(&p_session_data->data.frame);
> > +        return AVERROR_EXTERNAL;
> > +    }
> > +
> > +    ret = ni_to_avframe_copy(hwfc, dst, &p_session_data->data.frame);
> > +    if (ret < 0) {
> > +        av_log(hwfc, AV_LOG_ERROR, "Can't copy frame %d\n", ret);
> > +        ni_frame_buffer_free(&p_session_data->data.frame);
> > +        return ret;
> > +    }
> > +
> > +    dst->format = hwfc->sw_format;
> > +
> > +    av_frame_copy_props(dst, src);
> > +    ni_frame_buffer_free(&p_session_data->data.frame);
> > +
> > +    return 0;
> > +}
> > +
> > +static int ni_hwup_frame(AVHWFramesContext *hwfc, AVFrame *dst,
> > +                         const AVFrame *src)
> > +{
> > +    AVNIFramesContext *f_hwctx = (AVNIFramesContext*) hwfc->hwctx;
> > +    ni_session_data_io_t *p_src_session_data;
> > +    niFrameSurface1_t *dst_surf;
> > +    int ret = 0;
> > +    int dst_stride[4];
> > +    int pixel_format;
> > +    bool isSemiPlanar;
> > +    int need_to_copy = 1;
> > +    size_t crop_right = 0, crop_bottom = 0;
> > +
> > +    dst_surf = (niFrameSurface1_t *)dst->data[3];
> > +
> > +    if (dst_surf == NULL || dst->hw_frames_ctx == NULL) {
> > +        av_log(hwfc, AV_LOG_ERROR, "Invalid hw frame\n");
> > +        return AVERROR(EINVAL);
> > +    }
> > +
> > +    p_src_session_data = &f_hwctx->src_session_io_data;
> > +
> > +    switch (src->format) {
> > +    /* 8-bit YUV420 planar */
> > +    case AV_PIX_FMT_YUV420P:
> > +        dst_stride[0] = FFALIGN(src->width, 128);
> > +        dst_stride[1] = FFALIGN((src->width / 2), 128);
> > +        dst_stride[2] = dst_stride[1];
> > +        dst_stride[3] = 0;
> > +
> > +        pixel_format = NI_PIX_FMT_YUV420P;
> > +        isSemiPlanar = false;
> > +        break;
> > +
> > +    /* 10-bit YUV420 planar, little-endian, least significant bits */
> > +    case AV_PIX_FMT_YUV420P10LE:
> > +        dst_stride[0] = FFALIGN(src->width * 2, 128);
> > +        dst_stride[1] = FFALIGN(src->width, 128);
> > +        dst_stride[2] = dst_stride[1];
> > +        dst_stride[3] = 0;
> > +
> > +        pixel_format = NI_PIX_FMT_YUV420P10LE;
> > +        isSemiPlanar = false;
> > +        break;
> > +
> > +    /* 8-bit YUV420 semi-planar */
> > +    case AV_PIX_FMT_NV12:
> > +        dst_stride[0] = FFALIGN(src->width, 128);
> > +        dst_stride[1] = dst_stride[0];
> > +        dst_stride[2] = 0;
> > +        dst_stride[3] = 0;
> > +
> > +        pixel_format = NI_PIX_FMT_NV12;
> > +        isSemiPlanar = true;
> > +        break;
> > +
> > +    /* 8-bit yuv422 semi-planar */
> > +    case AV_PIX_FMT_NV16:
> > +        dst_stride[0] = FFALIGN(src->width, 64);
> > +        dst_stride[1] = dst_stride[0];
> > +        dst_stride[2] = 0;
> > +        dst_stride[3] = 0;
> > +
> > +        pixel_format = NI_PIX_FMT_NV16;
> > +        isSemiPlanar = false;
> > +        break;
> > +
> > +    /* 8-bit YUV422 packed */
> > +    case AV_PIX_FMT_YUYV422:
> > +        dst_stride[0] = FFALIGN(src->width, 16) * 2;
> > +        dst_stride[1] = 0;
> > +        dst_stride[2] = 0;
> > +        dst_stride[3] = 0;
> > +
> > +        pixel_format = NI_PIX_FMT_YUYV422;
> > +        isSemiPlanar = false;
> > +        break;
> > +
> > +    case AV_PIX_FMT_UYVY422:
> > +        dst_stride[0] = FFALIGN(src->width, 16) * 2;
> > +        dst_stride[1] = 0;
> > +        dst_stride[2] = 0;
> > +        dst_stride[3] = 0;
> > +
> > +        pixel_format = NI_PIX_FMT_UYVY422;
> > +        isSemiPlanar = false;
> > +        break;
> > +
> > +    /* 10-bit YUV420 semi-planar, little endian, most significant bits */
> > +    case AV_PIX_FMT_P010LE:
> > +        dst_stride[0] = FFALIGN(src->width * 2, 128);
> > +        dst_stride[1] = dst_stride[0];
> > +        dst_stride[2] = 0;
> > +        dst_stride[3] = 0;
> > +
> > +        pixel_format = NI_PIX_FMT_P010LE;
> > +        isSemiPlanar = true;
> > +        break;
> > +
> > +    /* 32-bit RGBA packed */
> > +    case AV_PIX_FMT_RGBA:
> > +        /* RGBA for the scaler needs 16-pixel width alignment (64-byte stride) */
> > +        dst_stride[0] = FFALIGN(src->width, 16) * 4;
> > +        dst_stride[1] = 0;
> > +        dst_stride[2] = 0;
> > +        dst_stride[3] = 0;
> > +
> > +        pixel_format = NI_PIX_FMT_RGBA;
> > +        isSemiPlanar = false;
> > +        break;
> > +
> > +    case AV_PIX_FMT_BGRA:
> > +        dst_stride[0] = FFALIGN(src->width, 16) * 4;
> > +        dst_stride[1] = 0;
> > +        dst_stride[2] = 0;
> > +        dst_stride[3] = 0;
> > +
> > +        pixel_format = NI_PIX_FMT_BGRA;
> > +        isSemiPlanar = false;
> > +        break;
> > +
> > +    case AV_PIX_FMT_ABGR:
> > +        dst_stride[0] = FFALIGN(src->width, 16) * 4;
> > +        dst_stride[1] = 0;
> > +        dst_stride[2] = 0;
> > +        dst_stride[3] = 0;
> > +
> > +        pixel_format = NI_PIX_FMT_ABGR;
> > +        isSemiPlanar = false;
> > +        break;
> > +
> > +    case AV_PIX_FMT_ARGB:
> > +        dst_stride[0] = FFALIGN(src->width, 16) * 4;
> > +        dst_stride[1] = 0;
> > +        dst_stride[2] = 0;
> > +        dst_stride[3] = 0;
> > +
> > +        pixel_format = NI_PIX_FMT_ARGB;
> > +        isSemiPlanar = false;
> > +        break;
> > +
> > +    case AV_PIX_FMT_BGR0:
> > +        dst_stride[0] = FFALIGN(src->width, 16) * 4;
> > +        dst_stride[1] = 0;
> > +        dst_stride[2] = 0;
> > +        dst_stride[3] = 0;
> > +
> > +        pixel_format = NI_PIX_FMT_BGR0;
> > +        isSemiPlanar = false;
> > +        break;
> > +
> > +    default:
> > +        av_log(hwfc, AV_LOG_ERROR, "Pixel format %s not supported by
> > device %s\n",
> > +               av_get_pix_fmt_name(src->format), ffhwframesctx(hwfc)-
> >hw_type-
> > >name);
> > +        return AVERROR(EINVAL);
> > +    }
> > +
> > +    // check whether the input resolution is zero-copy compatible
> > +    if (ni_uploader_frame_zerocopy_check(&f_hwctx->api_ctx,
> > +        src->width, src->height,
> > +        (const int *)src->linesize, pixel_format) == NI_RETCODE_SUCCESS) {
> > +        need_to_copy = 0;
> > +        p_src_session_data->data.frame.extra_data_len =
> > +            NI_APP_ENC_FRAME_META_DATA_SIZE;
> > +        // alloc metadata buffer etc. (if needed)
> > +        ret = ni_encoder_frame_zerocopy_buffer_alloc(
> > +            &p_src_session_data->data.frame, src->width,
> > +            src->height, (const int *)src->linesize, (const uint8_t **)src->data,
> > +            (int)p_src_session_data->data.frame.extra_data_len);
> > +        if (ret != NI_RETCODE_SUCCESS) {
> > +            return AVERROR(ENOMEM);
> > +        }
> > +    } else {
> > +        // allocate only once per upload Session when we have frame info
> > +        p_src_session_data->data.frame.extra_data_len =
> > +            NI_APP_ENC_FRAME_META_DATA_SIZE;
> > +
> > +        ret = ni_frame_buffer_alloc_pixfmt(&p_src_session_data->data.frame,
> > +                                           pixel_format, src->width,
> > +                                           src->height, dst_stride,
> > +                                           1, // force to av_codec_id_h264 for max compat
> > +                                           (int)p_src_session_data->data.frame.extra_data_len);
> > +        if (ret < 0) {
> > +            av_log(hwfc, AV_LOG_ERROR, "Cannot allocate ni_frame %d\n", ret);
> > +            return ret;
> > +        }
> > +    }
> > +
> > +    if (need_to_copy) {
> > +        ret = av_to_niframe_copy(hwfc, dst_stride,
> > +                                 &p_src_session_data->data.frame, src);
> > +        if (ret < 0) {
> > +            av_log(hwfc, AV_LOG_ERROR, "%s can't copy frame\n", __func__);
> > +            return AVERROR(EINVAL);
> > +        }
> > +    }
> > +
> > +    ret = ni_device_session_hwup(&f_hwctx->api_ctx, p_src_session_data,
> > +                                 dst_surf);
> > +    if (ret < 0) {
> > +        av_log(hwfc, AV_LOG_ERROR, "%s failed to upload frame %d\n",
> > +               __func__, ret);
> > +        return AVERROR_EXTERNAL;
> > +    }
> > +
> > +    dst_surf->ui16width = f_hwctx->split_ctx.w[0] = src->width;
> > +    dst_surf->ui16height = f_hwctx->split_ctx.h[0] = src->height;
> > +    dst_surf->ui32nodeAddress = 0; // always 0 offset for upload
> > +    dst_surf->encoding_type = isSemiPlanar ? NI_PIXEL_PLANAR_FORMAT_SEMIPLANAR
> > +                                           : NI_PIXEL_PLANAR_FORMAT_PLANAR;
> > +
> > +    av_log(hwfc, AV_LOG_VERBOSE, "%s trace ui16FrameIdx = [%u] hdl %d
> > SID%d\n",
> > +           __func__, dst_surf->ui16FrameIdx, dst_surf->device_handle,
> > +           dst_surf->ui16session_ID);
> > +
> > +    // Update frames context
> > +    f_hwctx->split_ctx.f[0] = (int)dst_surf->encoding_type;
> > +
> > +    /* Set the hw_id/card number in AVNIFramesContext */
> > +    ((AVNIFramesContext*)((AVHWFramesContext*)dst->hw_frames_ctx->data)->hwctx)
> > +        ->hw_id = f_hwctx->api_ctx.hw_id;
> > +
> > +    crop_right  = dst->crop_right;
> > +    crop_bottom = dst->crop_bottom;
> > +
> > +    av_frame_copy_props(dst, src); // should get the metadata right
> > +    av_log(hwfc, AV_LOG_DEBUG, "%s Upload frame w/h %d/%d crop
> > r/b %lu/%lu\n",
> > +           __func__, dst->width, dst->height, crop_right, crop_bottom);
> > +
> > +    return ret;
> > +}
> > +
> > +static int ni_transfer_data_to(AVHWFramesContext *hwfc, AVFrame *dst,
> > +                               const AVFrame *src)
> > +{
> > +    int err;
> > +    niFrameSurface1_t *dst_surf;
> > +
> > +    if (src->width > hwfc->width || src->height > hwfc->height) {
> > +        return AVERROR(EINVAL);
> > +    }
> > +
> > +    /* should check against MAX frame size */
> > +    err = ni_hwup_frame(hwfc, dst, src);
> > +    if (err) {
> > +        return err;
> > +    }
> > +
> > +    dst_surf = (niFrameSurface1_t *)(dst->data[3]);
> > +
> > +    av_log(hwfc, AV_LOG_VERBOSE, "%s dst_surf FID %d SID %d\n", __func__,
> > +           dst_surf->ui16FrameIdx, dst_surf->ui16session_ID);
> > +
> > +    return 0;
> > +}
> > +
> > +static int ni_transfer_data_from(AVHWFramesContext *hwfc, AVFrame *dst,
> > +                                 const AVFrame *src)
> > +{
> > +    if (dst->width > hwfc->width || dst->height > hwfc->height) {
> > +        av_log(hwfc, AV_LOG_ERROR, "Invalid frame dimensions\n");
> > +        return AVERROR(EINVAL);
> > +    }
> > +
> > +    return ni_hwdl_frame(hwfc, dst, src);
> > +}
> > +
> > +const HWContextType ff_hwcontext_type_ni_quadra = {
> > +    // QUADRA
> > +    .type = AV_HWDEVICE_TYPE_NI_QUADRA,
> > +    .name = "NI_QUADRA",
> > +
> > +    .device_hwctx_size = sizeof(AVNIDeviceContext),
> > +    .frames_hwctx_size = sizeof(AVNIFramesContext),
> > +
> > +    .device_create = ni_device_create,
> > +    .device_uninit = ni_device_uninit,
> > +
> > +    .frames_get_constraints = ni_frames_get_constraints,
> > +
> > +    .frames_init   = ni_frames_init,
> > +    .frames_uninit = ni_frames_uninit,
> > +
> > +    .frames_get_buffer = ni_get_buffer,
> > +
> > +    .transfer_get_formats = ni_transfer_get_formats,
> > +    .transfer_data_to     = ni_transfer_data_to,
> > +    .transfer_data_from   = ni_transfer_data_from,
> > +
> > +    .pix_fmts =
> > +        (const enum AVPixelFormat[]){AV_PIX_FMT_NI_QUAD, AV_PIX_FMT_NONE},
> > +};
> > diff --git a/libavutil/hwcontext_ni_quad.h b/libavutil/hwcontext_ni_quad.h
> > new file mode 100644
> > index 0000000000..a8795398d7
> > --- /dev/null
> > +++ b/libavutil/hwcontext_ni_quad.h
> > @@ -0,0 +1,99 @@
> > +/*
> > + * This file is part of FFmpeg.
> > + *
> > + * FFmpeg is free software; you can redistribute it and/or
> > + * modify it under the terms of the GNU Lesser General Public
> > + * License as published by the Free Software Foundation; either
> > + * version 2.1 of the License, or (at your option) any later version.
> > + *
> > + * FFmpeg is distributed in the hope that it will be useful,
> > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> > + * Lesser General Public License for more details.
> > + *
> > + * You should have received a copy of the GNU Lesser General Public
> > + * License along with FFmpeg; if not, write to the Free Software
> > + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
> > +
> > +#ifndef AVUTIL_HWCONTEXT_NI_QUAD_H
> > +#define AVUTIL_HWCONTEXT_NI_QUAD_H
> > +
> > +#include "hwcontext.h"
> > +#include <ni_device_api.h>
> > +#include <ni_rsrc_api.h>
> > +#include <ni_util.h>
> > +
> > +enum
> > +{
> > +    NI_MEMTYPE_VIDEO_MEMORY_NONE,
> > +    NI_MEMTYPE_VIDEO_MEMORY_DECODER_TARGET,
> > +    NI_MEMTYPE_VIDEO_MEMORY_HWUPLOAD_TARGET,
> > +};
> > +
> > +typedef enum _ni_filter_poolsize_code {
> > +    NI_DECODER_ID       = -1,
> > +    NI_SCALE_ID         = -2,
> > +    NI_PAD_ID           = -3,
> > +    NI_CROP_ID          = -4,
> > +    NI_OVERLAY_ID       = -5,
> > +    NI_ROI_ID           = -6,
> > +    NI_BG_ID            = -7,
> > +    NI_STACK_ID         = -8,
> > +    NI_ROTATE_ID        = -9,
> > +    NI_DRAWBOX_ID       = -10,
> > +    NI_BGR_ID           = -11,
> > +    NI_DRAWTEXT_ID      = -12,
> > +    NI_AI_PREPROCESS_ID = -13,
> > +    NI_DELOGO_ID        = -14,
> > +    NI_MERGE_ID         = -15,
> > +    NI_FLIP_ID          = -16,
> > +    NI_HVSPLUS_ID       = -17,
> > +} ni_filter_poolsize_code;
> > +
> > +/**
> > +* This struct is allocated as AVHWDeviceContext.hwctx
> > +*/
> > +typedef struct AVNIDeviceContext {
> > +    int uploader_ID;
> > +    ni_device_handle_t uploader_handle;
> > +
> > +    ni_device_handle_t cards[NI_MAX_DEVICE_CNT];
> > +} AVNIDeviceContext;
> > +
> > +/**
> > +* This struct is allocated as AVHWFramesContext.hwctx
> > +*/
> > +typedef struct AVNIFramesContext {
> > +    niFrameSurface1_t *surfaces;
> > +    int               nb_surfaces;
> > +    int               keep_alive_timeout;
> > +    int               frame_type;
> > +    AVRational        framerate;                  /* used for modelling hwupload */
> > +    int               hw_id;
> > +    ni_session_context_t api_ctx; // for down/uploading frames
> > +    ni_split_context_t   split_ctx;
> > +    ni_device_handle_t   suspended_device_handle;
> > +    int                  uploader_device_id; // same one passed to libxcoder session open
> > +
> > +    // Accessed only by hwcontext_ni_quad.c
> > +    niFrameSurface1_t    *surfaces_internal;
> > +    int                  nb_surfaces_used;
> > +    niFrameSurface1_t    **surface_ptrs;
> > +    ni_session_data_io_t src_session_io_data; // for upload frame to be sent up
> > +} AVNIFramesContext;
> > +
> > +static inline int ni_get_cardno(const AVFrame *frame) {
> > +    AVNIFramesContext* ni_hwf_ctx;
> > +    ni_hwf_ctx = (AVNIFramesContext*)
> > +        ((AVHWFramesContext*)frame->hw_frames_ctx->data)->hwctx;
> > +    return ni_hwf_ctx->hw_id;
> > +}
> > +
> > +// copy hwctx specific data from one AVHWFramesContext to another
> > +static inline void ni_cpy_hwframe_ctx(AVHWFramesContext *in_frames_ctx,
> > +                                      AVHWFramesContext *out_frames_ctx)
> > +{
> > +    memcpy(out_frames_ctx->hwctx, in_frames_ctx->hwctx,
> > +           sizeof(AVNIFramesContext));
> > +}
> > +
> > +#endif /* AVUTIL_HWCONTEXT_NI_QUAD_H */
> > diff --git a/libavutil/pixdesc.c b/libavutil/pixdesc.c
> > index 53adde5aba..ad001cec93 100644
> > --- a/libavutil/pixdesc.c
> > +++ b/libavutil/pixdesc.c
> > @@ -2170,6 +2170,21 @@ static const AVPixFmtDescriptor av_pix_fmt_descriptors[AV_PIX_FMT_NB] = {
> >          .name = "qsv",
> >          .flags = AV_PIX_FMT_FLAG_HWACCEL,
> >      },
> > +    // NETINT: AV_PIX_FMT_NI_QUAD pixel format for Quadra HW frame
> > +    [AV_PIX_FMT_NI_QUAD] = {
> > +        .name = "ni_quadra",
> > +        .flags = AV_PIX_FMT_FLAG_HWACCEL,
> > +    },
> > +    // NETINT: AV_PIX_FMT_NI_QUAD_8_TILE_4X4 pixel format for Quadra internally compressed frame
> > +    [AV_PIX_FMT_NI_QUAD_8_TILE_4X4] = {
> > +        .name = "ni_quadra_8_tile4x4",
> > +        .flags = AV_PIX_FMT_FLAG_HWACCEL,
> > +    },
> > +    // NETINT: AV_PIX_FMT_NI_QUAD_10_TILE_4X4 pixel format for Quadra internally compressed frame
> > +    [AV_PIX_FMT_NI_QUAD_10_TILE_4X4] = {
> > +        .name = "ni_quadra_10_tile4x4",
> > +        .flags = AV_PIX_FMT_FLAG_HWACCEL,
> > +    },
> >      [AV_PIX_FMT_MEDIACODEC] = {
> >          .name = "mediacodec",
> >          .flags = AV_PIX_FMT_FLAG_HWACCEL,
> > diff --git a/libavutil/pixfmt.h b/libavutil/pixfmt.h
> > index bf1b8ed008..a7ee2b6b1c 100644
> > --- a/libavutil/pixfmt.h
> > +++ b/libavutil/pixfmt.h
> > @@ -488,6 +488,14 @@ enum AVPixelFormat {
> >      AV_PIX_FMT_GBRAP32BE,   ///< planar GBRA 4:4:4:4 128bpp, big-endian
> >      AV_PIX_FMT_GBRAP32LE,   ///< planar GBRA 4:4:4:4 128bpp, little-
> endian
> >
> > +    /**
> > +     * HW acceleration through NI, data[3] contains a pointer to the
> > +     * niFrameSurface1_t structure, for Netint Quadra.
> > +     */
> > +    AV_PIX_FMT_NI_QUAD,
> > +    AV_PIX_FMT_NI_QUAD_8_TILE_4X4,  ///< 8-bit tiled 4x4 compression format within Quadra
> > +    AV_PIX_FMT_NI_QUAD_10_TILE_4X4, ///< 10-bit tiled 4x4 compression format within Quadra
> > +
> >      AV_PIX_FMT_NB         ///< number of pixel formats, DO NOT USE THIS if you want to link with shared libav* because the number of formats might differ between versions
> >  };
> >
> > --
> > 2.25.1
> >
> > _______________________________________________
> > ffmpeg-devel mailing list
> > ffmpeg-devel@ffmpeg.org
> > https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
> >
> > To unsubscribe, visit link above, or email
> > ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".

Thread overview: 3+ messages
2025-07-02  8:11 Steven Zhou
2025-07-02  8:29 ` Steven Zhou
2025-07-02  8:32   ` Steven Zhou [this message]
