From: "Xiang, Haihao" <haihao.xiang-at-intel.com@ffmpeg.org>
To: "ffmpeg-devel@ffmpeg.org" <ffmpeg-devel@ffmpeg.org>
Cc: "Wu, Tong1" <tong1.wu@intel.com>
Subject: Re: [FFmpeg-devel] [PATCH 3/3] avutil/hwcontext_qsv: fix D3D11VA<->qsv hwmap errors
Date: Mon, 25 Apr 2022 09:37:30 +0000
Message-ID: <f0a3e4abe550f3de325e59c0283596e13ccca70c.camel@intel.com> (raw)
In-Reply-To: <20220401092416.1018-3-tong1.wu@intel.com>
On Fri, 2022-04-01 at 17:24 +0800, Tong Wu wrote:
> For hwmap between QSV and D3D11VA, the mfxHDLPair information should be
> put into texture_infos when deriving from a QSV context. Moreover, when
> uploading from raw video, the textures are created in different ways, so
> bind-flag checks are needed to make sure the right textures are derived
> during the process. With this fix, the pipelines
> d3d_dec->qsv_vpp->qsv_enc, d3d_dec->qsv_vpp->qsv_download->yuv,
> yuv->d3d_upload->qsv_vpp->qsv_download->yuv, and
> qsv_dec->qsv_vpp->d3d_download->yuv all work properly.
>
> For d3d_dec->qsv_vpp->qsv_enc, one sample command line:
> ffmpeg.exe -hwaccel qsv -c:v h264_qsv -i input.264
> -vf
> "hwmap=derive_device=d3d11va,format=d3d11,hwmap=derive_device=qsv,format=qsv"
> -c:v h264_qsv -y ./output.264
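For context, the derive-from fix hinges on how the handle pair is interpreted per surface. A minimal sketch of that index selection, using illustrative stand-in types (FakeHDLPair and FAKE_BIND_RENDER_TARGET are hypothetical stand-ins for mfxHDLPair and D3D11_BIND_RENDER_TARGET from mfxvideo.h and d3d11.h):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins; the real mfxHDLPair and D3D11_BIND_RENDER_TARGET
 * definitions come from mfxvideo.h and d3d11.h. */
typedef struct { void *first; void *second; } FakeHDLPair;
enum { FAKE_BIND_RENDER_TARGET = 0x20 };

/* Render-target surfaces (VPP/encode) are standalone textures, so the
 * subresource index is always 0; decoder surfaces are slices of a single
 * texture array, and pair->second carries the slice index. */
static intptr_t derive_texture_index(const FakeHDLPair *pair,
                                     unsigned bind_flags)
{
    if (bind_flags & FAKE_BIND_RENDER_TARGET)
        return 0;
    return (intptr_t)pair->second;
}
```

This mirrors the loop the patch adds to qsv_frames_derive_from(): the texture always comes from pair->first, while the index depends on the bind flags.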
The default child_device_type is dxva2 when FFmpeg is configured with
--enable-libmfx, and I don't think it makes sense to derive a d3d11va device
from a QSV device whose child device is dxva2. But even when the QSV device
is initialized with a d3d11va child device, the command below still doesn't
work:
$ ffmpeg.exe -y -hwaccel qsv -init_hw_device
qsv=qsv:hw,child_device=0,child_device_type=d3d11va -c:v h264_qsv -i input.h264
-vf "hwmap=derive_device=d3d11va,format=d3d11,hwdownload,format=nv12" -f null -
You may try https://patchwork.ffmpeg.org/project/ffmpeg/list/?series=5304
Thanks
Haihao
>
> Signed-off-by: Tong Wu <tong1.wu@intel.com>
> ---
> libavutil/hwcontext_qsv.c | 48 ++++++++++++++++++++++++++++++++-------
> 1 file changed, 40 insertions(+), 8 deletions(-)
>
> diff --git a/libavutil/hwcontext_qsv.c b/libavutil/hwcontext_qsv.c
> index 95f8071abe..e6a7ac3ef0 100644
> --- a/libavutil/hwcontext_qsv.c
> +++ b/libavutil/hwcontext_qsv.c
> @@ -806,12 +806,23 @@ static int qsv_frames_derive_from(AVHWFramesContext *dst_ctx,
> #if CONFIG_D3D11VA
> case AV_HWDEVICE_TYPE_D3D11VA:
> {
> + dst_ctx->initial_pool_size = src_ctx->initial_pool_size;
> AVD3D11VAFramesContext *dst_hwctx = dst_ctx->hwctx;
> - mfxHDLPair *pair = (mfxHDLPair*)src_hwctx->surfaces[i].Data.MemId;
> - dst_hwctx->texture = (ID3D11Texture2D*)pair->first;
> + dst_hwctx->texture_infos = av_calloc(src_hwctx->nb_surfaces,
> + sizeof(*dst_hwctx->texture_infos));
> if (src_hwctx->frame_type & MFX_MEMTYPE_SHARED_RESOURCE)
> dst_hwctx->MiscFlags = D3D11_RESOURCE_MISC_SHARED;
> dst_hwctx->BindFlags = qsv_get_d3d11va_bind_flags(src_hwctx->frame_type);
> + for (i = 0; i < src_hwctx->nb_surfaces; i++) {
> + mfxHDLPair* pair = (mfxHDLPair*)src_hwctx->surfaces[i].Data.MemId;
> + dst_hwctx->texture_infos[i].texture = (ID3D11Texture2D*)pair->first;
> + if (dst_hwctx->BindFlags & D3D11_BIND_RENDER_TARGET) {
> + dst_hwctx->texture_infos[i].index = 0;
> + }
> + else {
> + dst_hwctx->texture_infos[i].index = (intptr_t)pair->second;
> + }
> + }
> }
> break;
> #endif
> @@ -900,9 +911,16 @@ static int qsv_map_from(AVHWFramesContext *ctx,
> dst->height = src->height;
>
> if (child_frames_ctx->device_ctx->type == AV_HWDEVICE_TYPE_D3D11VA) {
> +#if CONFIG_D3D11VA
> + AVD3D11VAFramesContext* child_frames_hwctx = child_frames_ctx->hwctx;
> mfxHDLPair *pair = (mfxHDLPair*)surf->Data.MemId;
> dst->data[0] = pair->first;
> - dst->data[1] = pair->second;
> + if (child_frames_hwctx->BindFlags & D3D11_BIND_RENDER_TARGET) {
> + dst->data[1] = 0;
> + } else {
> + dst->data[1] = pair->second;
> + }
> +#endif
> } else {
> dst->data[3] = child_data;
> }
> @@ -930,9 +948,16 @@ static int qsv_map_from(AVHWFramesContext *ctx,
> dummy->height = src->height;
>
> if (child_frames_ctx->device_ctx->type == AV_HWDEVICE_TYPE_D3D11VA) {
> +#if CONFIG_D3D11VA
> + AVD3D11VAFramesContext* child_frames_hwctx = child_frames_ctx->hwctx;
> mfxHDLPair *pair = (mfxHDLPair*)surf->Data.MemId;
> dummy->data[0] = pair->first;
> - dummy->data[1] = pair->second;
> + if (child_frames_hwctx->BindFlags & D3D11_BIND_RENDER_TARGET) {
> + dummy->data[1] = 0;
> + } else {
> + dummy->data[1] = pair->second;
> + }
> +#endif
> } else {
> dummy->data[3] = child_data;
> }
> @@ -1287,6 +1312,10 @@ static int qsv_frames_derive_to(AVHWFramesContext *dst_ctx,
> return AVERROR(ENOSYS);
> }
>
> + s->child_frames_ref = av_buffer_ref(dst_ctx->internal->source_frames);
> + if (!s->child_frames_ref) {
> + return AVERROR(ENOMEM);
> + }
> dst_hwctx->surfaces = s->surfaces_internal;
>
> return 0;
> @@ -1314,10 +1343,13 @@ static int qsv_map_to(AVHWFramesContext *dst_ctx,
> case AV_PIX_FMT_D3D11:
> {
> mfxHDLPair *pair = (mfxHDLPair*)hwctx->surfaces[i].Data.MemId;
> - if (pair->first == src->data[0]
> - && pair->second == src->data[1]) {
> - index = i;
> - break;
> + if (pair->first == src->data[0]) {
> + if (hwctx->frame_type & MFX_MEMTYPE_VIDEO_MEMORY_DECODER_TARGET
> + && pair->second == src->data[1]
> + || hwctx->frame_type & MFX_MEMTYPE_VIDEO_MEMORY_PROCESSOR_TARGET) {
> + index = i;
> + break;
> + }
> }
> }
> #endif
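The matching condition in the last hunk above can be read as: the texture pointer must always match, and the array-slice index only matters for decoder-target surfaces. A sketch with illustrative stand-in types (FakeHDLPair and the FAKE_MEMTYPE_* values are hypothetical stand-ins for mfxHDLPair and the MFX_MEMTYPE_* flags from mfxvideo.h):

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins; the real mfxHDLPair and MFX_MEMTYPE_* flags
 * come from the Intel Media SDK headers (mfxvideo.h). */
typedef struct { void *first; void *second; } FakeHDLPair;
enum {
    FAKE_MEMTYPE_DECODER_TARGET   = 0x10, /* slices of one texture array */
    FAKE_MEMTYPE_PROCESSOR_TARGET = 0x20, /* standalone VPP textures     */
};

/* Mirrors the fixed condition in qsv_map_to(): pair->first must match the
 * texture in data[0]; pair->second is only compared against data[1] for
 * decoder-target surfaces, since processor-target surfaces are standalone
 * textures with no meaningful array index. */
static bool surface_matches(const FakeHDLPair *pair, unsigned frame_type,
                            void *data0, void *data1)
{
    if (pair->first != data0)
        return false;
    if (frame_type & FAKE_MEMTYPE_PROCESSOR_TARGET)
        return true;
    return (frame_type & FAKE_MEMTYPE_DECODER_TARGET)
           && pair->second == data1;
}
```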