From: Araz Iusubov
To: ffmpeg-devel@ffmpeg.org
Cc: Araz Iusubov
Date: Mon, 23 Jun 2025 13:17:33 +0200
Message-ID: <20250623111733.104-1-Primeadvice@gmail.com>
X-Mailer: git-send-email 2.49.0.windows.1
Subject: [FFmpeg-devel] [PATCH, v4] avcodec/d3d12va_encode: texture array support for HEVC

This patch adds support for the texture array feature used by AMD boards in the
D3D12 HEVC encoder. In texture array mode, a single texture array is shared for
all reference and reconstructed pictures, using different subresources. The
implementation preserves compatibility and has been successfully tested on AMD,
Intel, and NVIDIA GPUs.

v2 updates:
1. The reference to MaxL1ReferencesForB for the H.264 codec was updated to use
   the corresponding H.264 field instead of the HEVC one.
2. The max_subresource_array_size calculation was adjusted by removing the
   D3D12VA_VIDEO_ENC_ASYNC_DEPTH offset.

v3 updates:
1. Fixed a type mismatch by explicitly casting AVD3D12VAFrame* to (uint8_t*)
   when assigning to data[0].
2. Adjusted the logging format specifier for HRESULT to use `%lx`.

v4 updates:
1. Moved texture array management to hwcontext_d3d12va for proper abstraction.
2. Added `texture_array` and `texture_array_size` fields to AVD3D12VAFramesContext.
3. Implemented shared texture array allocation during `av_hwframe_ctx_init`.
4. Frames now receive unique subresource indices via `d3d12va_pool_alloc_texture_array`.
5. Removed `d3d12va_create_texture_array`; allocation is now handled entirely
   within hwcontext.
6. The encoder now uses subresource indices provided by hwcontext instead of
   managing them manually.
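For reviewers, here is a minimal sketch of how the new hwcontext fields are meant
to be driven from the encoder side. It mirrors what d3d12va_encode_create_recon_frames
does in this patch; the resolution, pool size and error handling below are
illustrative only and not part of the patch:

    #include <libavutil/hwcontext.h>
    #include <libavutil/hwcontext_d3d12va.h>

    /* Illustrative only: request texture-array allocation from hwcontext_d3d12va.
     * texture_array_size and subresource_index are the fields added by this patch;
     * 16 is an arbitrary example array size. */
    AVBufferRef *frames_ref = av_hwframe_ctx_alloc(device_ref);
    AVHWFramesContext      *frames = (AVHWFramesContext *)frames_ref->data;
    AVD3D12VAFramesContext *hwctx  = frames->hwctx;

    frames->format    = AV_PIX_FMT_D3D12;
    frames->sw_format = AV_PIX_FMT_NV12;
    frames->width     = 1920;
    frames->height    = 1080;

    hwctx->flags = D3D12_RESOURCE_FLAG_VIDEO_ENCODE_REFERENCE_ONLY |
                   D3D12_RESOURCE_FLAG_DENY_SHADER_RESOURCE;
    hwctx->texture_array_size = 16;   /* one shared array with 16 subresources */

    if (av_hwframe_ctx_init(frames_ref) < 0)
        return;   /* error handling omitted */

    /* Every frame allocated from this context now shares hwctx->texture_array
     * and carries its own slice index: */
    AVFrame *frame = av_frame_alloc();
    av_hwframe_get_buffer(frames_ref, frame, 0);
    int subresource = ((AVD3D12VAFrame *)frame->data[0])->subresource_index;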
---
 libavcodec/d3d12va_encode.c      | 191 +++++++++++++++++++++++--------
 libavcodec/d3d12va_encode.h      |  12 ++
 libavcodec/d3d12va_encode_hevc.c |   5 +-
 libavutil/hwcontext_d3d12va.c    |  66 ++++++++++-
 libavutil/hwcontext_d3d12va.h    |  18 +++
 5 files changed, 242 insertions(+), 50 deletions(-)

diff --git a/libavcodec/d3d12va_encode.c b/libavcodec/d3d12va_encode.c
index e24a5b8d24..f9f4ca8903 100644
--- a/libavcodec/d3d12va_encode.c
+++ b/libavcodec/d3d12va_encode.c
@@ -191,7 +191,8 @@ static int d3d12va_encode_issue(AVCodecContext *avctx,
     FFHWBaseEncodeContext *base_ctx = avctx->priv_data;
     D3D12VAEncodeContext       *ctx = avctx->priv_data;
     D3D12VAEncodePicture       *pic = base_pic->priv;
-    AVD3D12VAFramesContext *frames_hwctx = base_ctx->input_frames->hwctx;
+    AVD3D12VAFramesContext *frames_hwctx_input = base_ctx->input_frames->hwctx;
+    AVD3D12VAFramesContext *frames_hwctx_recon = ((AVHWFramesContext*)base_pic->recon_image->hw_frames_ctx->data)->hwctx;
     int err, i, j;
     HRESULT hr;
     char data[MAX_PARAM_BUFFER_SIZE];
@@ -221,7 +222,7 @@ static int d3d12va_encode_issue(AVCodecContext *avctx,
     D3D12_VIDEO_ENCODER_RESOLVE_METADATA_INPUT_ARGUMENTS input_metadata = {
         .EncoderCodec = ctx->codec->d3d12_codec,
         .EncoderProfile = ctx->profile->d3d12_profile,
-        .EncoderInputFormat = frames_hwctx->format,
+        .EncoderInputFormat = frames_hwctx_input->format,
         .EncodedPictureEffectiveResolution = ctx->resolution,
     };

@@ -264,6 +265,9 @@ static int d3d12va_encode_issue(AVCodecContext *avctx,
     av_log(avctx, AV_LOG_DEBUG, "Input surface is %p.\n",
            pic->input_surface->texture);

+    if (ctx->is_texture_array)
+        pic->subresource_index = ((AVD3D12VAFrame*)base_pic->recon_image->data[0])->subresource_index;
+
     pic->recon_surface = (AVD3D12VAFrame *)base_pic->recon_image->data[0];
     av_log(avctx, AV_LOG_DEBUG, "Recon surface is %p.\n",
            pic->recon_surface->texture);
@@ -325,11 +329,28 @@ static int d3d12va_encode_issue(AVCodecContext *avctx,
             goto fail;
         }

+        if (ctx->is_texture_array) {
+            d3d12_refs.pSubresources = av_calloc(d3d12_refs.NumTexture2Ds,
+                                                 sizeof(*d3d12_refs.pSubresources));
+            if (!d3d12_refs.pSubresources) {
+                err = AVERROR(ENOMEM);
+                goto fail;
+            }
+        }
+
         i = 0;
-        for (j = 0; j < base_pic->nb_refs[0]; j++)
-            d3d12_refs.ppTexture2Ds[i++] = ((D3D12VAEncodePicture *)base_pic->refs[0][j]->priv)->recon_surface->texture;
-        for (j = 0; j < base_pic->nb_refs[1]; j++)
-            d3d12_refs.ppTexture2Ds[i++] = ((D3D12VAEncodePicture *)base_pic->refs[1][j]->priv)->recon_surface->texture;
+        for (j = 0; j < base_pic->nb_refs[0]; j++) {
+            d3d12_refs.ppTexture2Ds[i] = ((D3D12VAEncodePicture *)base_pic->refs[0][j]->priv)->recon_surface->texture;
+            if (ctx->is_texture_array)
+                d3d12_refs.pSubresources[i] = ((D3D12VAEncodePicture *)base_pic->refs[0][j]->priv)->subresource_index;
+            i++;
+        }
+        for (j = 0; j < base_pic->nb_refs[1]; j++) {
+            d3d12_refs.ppTexture2Ds[i] = ((D3D12VAEncodePicture *)base_pic->refs[1][j]->priv)->recon_surface->texture;
+            if (ctx->is_texture_array)
+                d3d12_refs.pSubresources[i] = ((D3D12VAEncodePicture *)base_pic->refs[1][j]->priv)->subresource_index;
+            i++;
+        }
     }

     input_args.PictureControlDesc.IntraRefreshFrameIndex = 0;
@@ -343,7 +364,10 @@ static int d3d12va_encode_issue(AVCodecContext *avctx,
     output_args.Bitstream.pBuffer = pic->output_buffer;
     output_args.Bitstream.FrameStartOffset = pic->aligned_header_size;
     output_args.ReconstructedPicture.pReconstructedPicture = pic->recon_surface->texture;
-    output_args.ReconstructedPicture.ReconstructedPictureSubresource = 0;
+    if (ctx->is_texture_array)
+        output_args.ReconstructedPicture.ReconstructedPictureSubresource = pic->subresource_index;
+    else
+        output_args.ReconstructedPicture.ReconstructedPictureSubresource = 0;
     output_args.EncoderOutputMetadata.pBuffer = pic->encoded_metadata;
     output_args.EncoderOutputMetadata.Offset = 0;

@@ -369,52 +393,99 @@ static int d3d12va_encode_issue(AVCodecContext *avctx,
         goto fail;
     }

-#define TRANSITION_BARRIER(res, before, after) \
+#define TRANSITION_BARRIER(res, subres, before, after) \
     (D3D12_RESOURCE_BARRIER) { \
         .Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION, \
         .Flags = D3D12_RESOURCE_BARRIER_FLAG_NONE, \
         .Transition = { \
             .pResource = res, \
-            .Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES, \
+            .Subresource = subres, \
             .StateBefore = before, \
             .StateAfter = after, \
         }, \
     }

     barriers[0] = TRANSITION_BARRIER(pic->input_surface->texture,
+                                     D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES,
                                      D3D12_RESOURCE_STATE_COMMON,
                                      D3D12_RESOURCE_STATE_VIDEO_ENCODE_READ);
     barriers[1] = TRANSITION_BARRIER(pic->output_buffer,
+                                     D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES,
                                      D3D12_RESOURCE_STATE_COMMON,
                                      D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE);
-    barriers[2] = TRANSITION_BARRIER(pic->recon_surface->texture,
-                                     D3D12_RESOURCE_STATE_COMMON,
-                                     D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE);
-    barriers[3] = TRANSITION_BARRIER(pic->encoded_metadata,
+    barriers[2] = TRANSITION_BARRIER(pic->encoded_metadata,
+                                     D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES,
                                      D3D12_RESOURCE_STATE_COMMON,
                                      D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE);
-    barriers[4] = TRANSITION_BARRIER(pic->resolved_metadata,
+    barriers[3] = TRANSITION_BARRIER(pic->resolved_metadata,
+                                     D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES,
                                      D3D12_RESOURCE_STATE_COMMON,
                                      D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE);

-    ID3D12VideoEncodeCommandList2_ResourceBarrier(cmd_list, 5, barriers);
+    ID3D12VideoEncodeCommandList2_ResourceBarrier(cmd_list, 4, barriers);

-    if (d3d12_refs.NumTexture2Ds) {
-        D3D12_RESOURCE_BARRIER refs_barriers[3];
-
-        for (i = 0; i < d3d12_refs.NumTexture2Ds; i++)
-            refs_barriers[i] = TRANSITION_BARRIER(d3d12_refs.ppTexture2Ds[i],
-                                                  D3D12_RESOURCE_STATE_COMMON,
-                                                  D3D12_RESOURCE_STATE_VIDEO_ENCODE_READ);
-
-        ID3D12VideoEncodeCommandList2_ResourceBarrier(cmd_list, d3d12_refs.NumTexture2Ds,
-                                                      refs_barriers);
+    //set transit barriers for reference pic and recon pic
+    int barriers_ref_index = 0;
+    D3D12_RESOURCE_BARRIER *barriers_ref = NULL;
+    if(ctx->is_texture_array) {
+        barriers_ref = av_calloc(frames_hwctx_recon->texture_array_size * ctx->plane_count,
+                                 sizeof(D3D12_RESOURCE_BARRIER));
+    } else {
+        barriers_ref = av_calloc(MAX_DPB_SIZE, sizeof(D3D12_RESOURCE_BARRIER));
+    }
+
+    if (ctx->is_texture_array) {
+        // In Texture array mode, the D3D12 uses the same texture array (resource) for all
+        // the reference pics in ppTexture2Ds and also for the pReconstructedPicture,
+        // just different subresources.
+        D3D12_RESOURCE_DESC references_tex_array_desc = { 0 };
+        pic->recon_surface->texture->lpVtbl->GetDesc(pic->recon_surface->texture, &references_tex_array_desc);
+
+        for (uint32_t reference_subresource = 0; reference_subresource < references_tex_array_desc.DepthOrArraySize;
+             reference_subresource++) {
+
+            //D3D12 DecomposeSubresource
+            uint32_t mip_slice, plane_slice, array_slice, array_size;
+            array_size = references_tex_array_desc.DepthOrArraySize;
+            mip_slice = reference_subresource % references_tex_array_desc.MipLevels;
+            array_slice = (reference_subresource / references_tex_array_desc.MipLevels) % array_size;
+
+            for (plane_slice = 0; plane_slice < ctx->plane_count; plane_slice++) {
+                //Calculate the subresource index
+                uint32_t planeOutputSubresource = mip_slice + array_slice * references_tex_array_desc.MipLevels +
+                                                  plane_slice * references_tex_array_desc.MipLevels * array_size;
+                if(reference_subresource == pic->subresource_index) {
+                    barriers_ref[barriers_ref_index++] = TRANSITION_BARRIER(pic->recon_surface->texture, planeOutputSubresource,
+                                                                            D3D12_RESOURCE_STATE_COMMON,
+                                                                            D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE);
+                } else {
+                    barriers_ref[barriers_ref_index++] = TRANSITION_BARRIER(pic->recon_surface->texture, planeOutputSubresource,
+                                                                            D3D12_RESOURCE_STATE_COMMON,
+                                                                            D3D12_RESOURCE_STATE_VIDEO_ENCODE_READ);
+                }
+            }
+        }
+    } else {
+        barriers_ref[barriers_ref_index++] = TRANSITION_BARRIER(pic->recon_surface->texture,
+                                                                D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES,
+                                                                D3D12_RESOURCE_STATE_COMMON,
+                                                                D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE);
+
+        if (d3d12_refs.NumTexture2Ds) {
+            for (i = 0; i < d3d12_refs.NumTexture2Ds; i++)
+                barriers_ref[barriers_ref_index++] = TRANSITION_BARRIER(d3d12_refs.ppTexture2Ds[i],
+                                                                        D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES,
+                                                                        D3D12_RESOURCE_STATE_COMMON,
+                                                                        D3D12_RESOURCE_STATE_VIDEO_ENCODE_READ);
+        }
     }
+    ID3D12VideoEncodeCommandList2_ResourceBarrier(cmd_list, barriers_ref_index, barriers_ref);

     ID3D12VideoEncodeCommandList2_EncodeFrame(cmd_list, ctx->encoder, ctx->encoder_heap,
                                               &input_args, &output_args);

     barriers[3] = TRANSITION_BARRIER(pic->encoded_metadata,
+                                     D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES,
                                      D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE,
                                      D3D12_RESOURCE_STATE_VIDEO_ENCODE_READ);

@@ -422,35 +493,35 @@ static int d3d12va_encode_issue(AVCodecContext *avctx,
     ID3D12VideoEncodeCommandList2_ResolveEncoderOutputMetadata(cmd_list, &input_metadata,
                                                                &output_metadata);

-    if (d3d12_refs.NumTexture2Ds) {
-        D3D12_RESOURCE_BARRIER refs_barriers[3];
-
-        for (i = 0; i < d3d12_refs.NumTexture2Ds; i++)
-            refs_barriers[i] = TRANSITION_BARRIER(d3d12_refs.ppTexture2Ds[i],
-                                                  D3D12_RESOURCE_STATE_VIDEO_ENCODE_READ,
-                                                  D3D12_RESOURCE_STATE_COMMON);
-
-        ID3D12VideoEncodeCommandList2_ResourceBarrier(cmd_list, d3d12_refs.NumTexture2Ds,
-                                                      refs_barriers);
+    //swap the barriers_ref transition state
+    if (barriers_ref_index > 0) {
+        for (i = 0; i < barriers_ref_index; i++) {
+            D3D12_RESOURCE_STATES temp_statue = barriers_ref[i].Transition.StateBefore;
+            barriers_ref[i].Transition.StateBefore = barriers_ref[i].Transition.StateAfter;
+            barriers_ref[i].Transition.StateAfter = temp_statue;
+        }
+        ID3D12VideoEncodeCommandList2_ResourceBarrier(cmd_list, barriers_ref_index,
+                                                      barriers_ref);
     }

     barriers[0] = TRANSITION_BARRIER(pic->input_surface->texture,
+                                     D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES,
                                      D3D12_RESOURCE_STATE_VIDEO_ENCODE_READ,
                                      D3D12_RESOURCE_STATE_COMMON);
     barriers[1] = TRANSITION_BARRIER(pic->output_buffer,
+                                     D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES,
                                      D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE,
                                      D3D12_RESOURCE_STATE_COMMON);
-    barriers[2] = TRANSITION_BARRIER(pic->recon_surface->texture,
-                                     D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE,
-                                     D3D12_RESOURCE_STATE_COMMON);
-    barriers[3] = TRANSITION_BARRIER(pic->encoded_metadata,
+    barriers[2] = TRANSITION_BARRIER(pic->encoded_metadata,
+                                     D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES,
                                      D3D12_RESOURCE_STATE_VIDEO_ENCODE_READ,
                                      D3D12_RESOURCE_STATE_COMMON);
-    barriers[4] = TRANSITION_BARRIER(pic->resolved_metadata,
+    barriers[3] = TRANSITION_BARRIER(pic->resolved_metadata,
+                                     D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES,
                                      D3D12_RESOURCE_STATE_VIDEO_ENCODE_WRITE,
                                      D3D12_RESOURCE_STATE_COMMON);

-    ID3D12VideoEncodeCommandList2_ResourceBarrier(cmd_list, 5, barriers);
+    ID3D12VideoEncodeCommandList2_ResourceBarrier(cmd_list, 4, barriers);

     hr = ID3D12VideoEncodeCommandList2_Close(cmd_list);
     if (FAILED(hr)) {
@@ -489,6 +560,14 @@ static int d3d12va_encode_issue(AVCodecContext *avctx,
     if (d3d12_refs.ppTexture2Ds)
         av_freep(&d3d12_refs.ppTexture2Ds);

+    if (ctx->is_texture_array) {
+        if (d3d12_refs.pSubresources)
+            av_freep(&d3d12_refs.pSubresources);
+    }
+
+    if (barriers_ref)
+        av_freep(&barriers_ref);
+
     return 0;

 fail:
@@ -498,6 +577,14 @@ fail:
     if (d3d12_refs.ppTexture2Ds)
         av_freep(&d3d12_refs.ppTexture2Ds);

+    if (ctx->is_texture_array) {
+        if (d3d12_refs.pSubresources)
+            av_freep(&d3d12_refs.pSubresources);
+    }
+
+    if (barriers_ref)
+        av_freep(&barriers_ref);
+
     if (ctx->codec->free_picture_params)
         ctx->codec->free_picture_params(pic);

@@ -1341,6 +1428,7 @@ fail:
 static int d3d12va_encode_create_recon_frames(AVCodecContext *avctx)
 {
     FFHWBaseEncodeContext *base_ctx = avctx->priv_data;
+    D3D12VAEncodeContext       *ctx = avctx->priv_data;
     AVD3D12VAFramesContext *hwctx;
     enum AVPixelFormat recon_format;
     int err;
@@ -1364,6 +1452,9 @@ static int d3d12va_encode_create_recon_frames(AVCodecContext *avctx)
     hwctx->flags = D3D12_RESOURCE_FLAG_VIDEO_ENCODE_REFERENCE_ONLY |
                    D3D12_RESOURCE_FLAG_DENY_SHADER_RESOURCE;

+    if (ctx->is_texture_array)
+        hwctx->texture_array_size = MAX_DPB_SIZE + 1;
+
     err = av_hwframe_ctx_init(base_ctx->recon_frames_ref);
     if (err < 0) {
         av_log(avctx, AV_LOG_ERROR, "Failed to initialise reconstructed "
@@ -1396,6 +1487,7 @@ int ff_d3d12va_encode_init(AVCodecContext *avctx)
     FFHWBaseEncodeContext *base_ctx = avctx->priv_data;
     D3D12VAEncodeContext       *ctx = avctx->priv_data;
     D3D12_FEATURE_DATA_VIDEO_FEATURE_AREA_SUPPORT support = { 0 };
+    D3D12_FEATURE_DATA_FORMAT_INFO format_info = {0};
     int err;
     HRESULT hr;

@@ -1431,6 +1523,15 @@ int ff_d3d12va_encode_init(AVCodecContext *avctx)
         goto fail;
     }

+    format_info.Format = ((AVD3D12VAFramesContext*)base_ctx->input_frames->hwctx)->format;
+    if (FAILED(ID3D12VideoDevice_CheckFeatureSupport(ctx->hwctx->device, D3D12_FEATURE_FORMAT_INFO,
+                                                     &format_info, sizeof(format_info)))) {
+        av_log(avctx, AV_LOG_ERROR, "Failed to query format plane count: 0x%x\n", hr);
+        err = AVERROR_EXTERNAL;
+        goto fail;
+    }
+    ctx->plane_count = format_info.PlaneCount;
+
     err = d3d12va_encode_set_profile(avctx);
     if (err < 0)
         goto fail;
@@ -1458,10 +1559,6 @@ int ff_d3d12va_encode_init(AVCodecContext *avctx)
     if (err < 0)
         goto fail;

-    err = d3d12va_encode_create_recon_frames(avctx);
-    if (err < 0)
-        goto fail;
-
     err = d3d12va_encode_prepare_output_buffers(avctx);
     if (err < 0)
         goto fail;
@@ -1487,6 +1584,10 @@ int ff_d3d12va_encode_init(AVCodecContext *avctx)
         goto fail;
     }

+    err = d3d12va_encode_create_recon_frames(avctx);
+    if (err < 0)
+        goto fail;
+
     base_ctx->output_delay = base_ctx->b_per_p;
     base_ctx->decode_delay = base_ctx->max_b_depth;

diff --git a/libavcodec/d3d12va_encode.h b/libavcodec/d3d12va_encode.h
index 3b0b8153d5..c8e64ddffd 100644
--- a/libavcodec/d3d12va_encode.h
+++ b/libavcodec/d3d12va_encode.h
@@ -52,6 +52,8 @@ typedef struct D3D12VAEncodePicture {
     ID3D12Resource *encoded_metadata;
     ID3D12Resource *resolved_metadata;

+    int subresource_index;
+
     D3D12_VIDEO_ENCODER_PICTURE_CONTROL_CODEC_DATA pic_ctl;

     int fence_value;
@@ -189,6 +191,16 @@ typedef struct D3D12VAEncodeContext {
      */
     AVBufferPool *output_buffer_pool;

+    /**
+     * Flag indicating that the HW requires texture array mode.
+     */
+    int is_texture_array;
+
+    /**
+     * The number of planes in the input DXGI format.
+     */
+    int plane_count;
+
     /**
      * D3D12 video encoder.
      */
diff --git a/libavcodec/d3d12va_encode_hevc.c b/libavcodec/d3d12va_encode_hevc.c
index 938ba01f54..7e1d973f7e 100644
--- a/libavcodec/d3d12va_encode_hevc.c
+++ b/libavcodec/d3d12va_encode_hevc.c
@@ -280,9 +280,8 @@ static int d3d12va_encode_hevc_init_sequence_params(AVCodecContext *avctx)
     }

     if (support.SupportFlags & D3D12_VIDEO_ENCODER_SUPPORT_FLAG_RECONSTRUCTED_FRAMES_REQUIRE_TEXTURE_ARRAYS) {
-        av_log(avctx, AV_LOG_ERROR, "D3D12 video encode on this device requires texture array support, "
-               "but it's not implemented.\n");
-        return AVERROR_PATCHWELCOME;
+        ctx->is_texture_array = 1;
+        av_log(avctx, AV_LOG_DEBUG, "D3D12 video encode on this device uses texture array mode.\n");
     }

     desc = av_pix_fmt_desc_get(base_ctx->input_frames->sw_format);
diff --git a/libavutil/hwcontext_d3d12va.c b/libavutil/hwcontext_d3d12va.c
index 6507cf69c1..bb741ac6d9 100644
--- a/libavutil/hwcontext_d3d12va.c
+++ b/libavutil/hwcontext_d3d12va.c
@@ -49,6 +49,7 @@ typedef struct D3D12VAFramesContext {
     ID3D12GraphicsCommandList *command_list;
     AVD3D12VASyncContext       sync_ctx;
     UINT                       luma_component_size;
+    int                        nb_surfaces_used;
 } D3D12VAFramesContext;

 typedef struct D3D12VADevicePriv {
@@ -174,7 +175,8 @@ fail:

 static void d3d12va_frames_uninit(AVHWFramesContext *ctx)
 {
-    D3D12VAFramesContext *s = ctx->hwctx;
+    D3D12VAFramesContext   *s     = ctx->hwctx;
+    AVD3D12VAFramesContext *hwctx = ctx->hwctx;

     D3D12_OBJECT_RELEASE(s->sync_ctx.fence);
     if (s->sync_ctx.event)
@@ -185,6 +187,11 @@ static void d3d12va_frames_uninit(AVHWFramesContext *ctx)
     D3D12_OBJECT_RELEASE(s->command_allocator);
     D3D12_OBJECT_RELEASE(s->command_list);
     D3D12_OBJECT_RELEASE(s->command_queue);
+
+    if (hwctx->texture_array) {
+        D3D12_OBJECT_RELEASE(hwctx->texture_array);
+        hwctx->texture_array = NULL;
+    }
 }

 static int d3d12va_frames_get_constraints(AVHWDeviceContext *ctx, const void *hwconfig, AVHWFramesConstraints *constraints)
@@ -228,6 +235,28 @@ static void free_texture(void *opaque, uint8_t *data)
     av_freep(&data);
 }

+static AVBufferRef *d3d12va_pool_alloc_texture_array(AVHWFramesContext *ctx)
+{
+    AVD3D12VAFrame *desc = av_mallocz(sizeof(*desc));
+    D3D12VAFramesContext   *s     = ctx->hwctx;
+    AVD3D12VAFramesContext *hwctx = ctx->hwctx;
+    AVBufferRef *buf;
+
+    // In Texture array mode, the D3D12 uses the same texture address for all the pictures,
+    // just different subresources.
+    desc->subresource_index = s->nb_surfaces_used;
+    desc->texture = hwctx->texture_array;
+
+    buf = av_buffer_create((uint8_t *)desc, sizeof(*desc), NULL, NULL, 0);
+
+    if (!buf) {
+        av_free(desc);
+        return NULL;
+    }
+    s->nb_surfaces_used++;
+    return buf;
+}
+
 static AVBufferRef *d3d12va_pool_alloc(void *opaque, size_t size)
 {
     AVHWFramesContext *ctx = (AVHWFramesContext *)opaque;
@@ -236,6 +265,11 @@ static AVBufferRef *d3d12va_pool_alloc(void *opaque, size_t size)
     AVBufferRef *buf;
     AVD3D12VAFrame *frame;

+
+    //For texture array mode, no need to create texture.
+    if (hwctx->texture_array_size > 0)
+        return d3d12va_pool_alloc_texture_array(ctx);
+
     D3D12_HEAP_PROPERTIES props = { .Type = D3D12_HEAP_TYPE_DEFAULT };
     D3D12_RESOURCE_DESC desc = {
         .Dimension = D3D12_RESOURCE_DIMENSION_TEXTURE2D,
@@ -280,7 +314,10 @@ fail:

 static int d3d12va_frames_init(AVHWFramesContext *ctx)
 {
-    AVD3D12VAFramesContext *hwctx = ctx->hwctx;
+    AVD3D12VAFramesContext *hwctx        = ctx->hwctx;
+    D3D12VAFramesContext   *s            = ctx->hwctx;
+    AVD3D12VADeviceContext *device_hwctx = ctx->device_ctx->hwctx;
+
     int i;

     for (i = 0; i < FF_ARRAY_ELEMS(supported_formats); i++) {
@@ -298,6 +335,31 @@ static int d3d12va_frames_init(AVHWFramesContext *ctx)
         return AVERROR(EINVAL);
     }

+    //For texture array mode, create texture array resource in the init stage.
+    //This texture array will be used for all the pictures, but with different subresources.
+    if (hwctx->texture_array_size > 0){
+        D3D12_HEAP_PROPERTIES props = { .Type = D3D12_HEAP_TYPE_DEFAULT };
+
+        D3D12_RESOURCE_DESC desc = {
+            .Dimension        = D3D12_RESOURCE_DIMENSION_TEXTURE2D,
+            .Alignment        = 0,
+            .Width            = ctx->width,
+            .Height           = ctx->height,
+            .DepthOrArraySize = hwctx->texture_array_size,
+            .MipLevels        = 1,
+            .Format           = hwctx->format,
+            .SampleDesc       = {.Count = 1, .Quality = 0 },
+            .Layout           = D3D12_TEXTURE_LAYOUT_UNKNOWN,
+            .Flags            = hwctx->flags,
+        };
+
+        if (FAILED(ID3D12Device_CreateCommittedResource(device_hwctx->device, &props, D3D12_HEAP_FLAG_NONE, &desc,
+                                                        D3D12_RESOURCE_STATE_COMMON, NULL, &IID_ID3D12Resource, (void **)&hwctx->texture_array))) {
+            av_log(ctx, AV_LOG_ERROR, "Could not create the texture\n");
+            return AVERROR(EINVAL);
+        }
+    }
+
     ffhwframesctx(ctx)->pool_internal = av_buffer_pool_init2(sizeof(AVD3D12VAFrame), ctx,
                                                              d3d12va_pool_alloc, NULL);

diff --git a/libavutil/hwcontext_d3d12va.h b/libavutil/hwcontext_d3d12va.h
index 212a6a6146..d48d847d11 100644
--- a/libavutil/hwcontext_d3d12va.h
+++ b/libavutil/hwcontext_d3d12va.h
@@ -111,6 +111,11 @@ typedef struct AVD3D12VAFrame {
      */
     ID3D12Resource *texture;

+    /**
+     * In texture array mode, the index of the subresource
+     */
+    int subresource_index;
+
     /**
      * The sync context for the texture
      *
@@ -137,6 +142,19 @@ typedef struct AVD3D12VAFramesContext {
      * @see https://learn.microsoft.com/en-us/windows/win32/api/d3d12/ne-d3d12-d3d12_resource_flags
      */
     D3D12_RESOURCE_FLAGS flags;
+
+    /**
+     * In texture array mode, the D3D12 uses the same texture array (resource) for all
+     * pictures.
+     */
+    ID3D12Resource *texture_array;
+
+    /**
+     * In texture array mode, the D3D12 uses the same texture array (resource) for all
+     * pictures, but different subresources to represent each picture.
+     * This is the size of the texture array (in number of subresources).
+     */
+    int texture_array_size;
 } AVD3D12VAFramesContext;

 #endif /* AVUTIL_HWCONTEXT_D3D12VA_H */
-- 
2.45.2.windows.1