From mboxrd@z Thu Jan 1 00:00:00 1970
From: Raja Rathour via ffmpeg-devel <ffmpeg-devel@ffmpeg.org>
To: ffmpeg-devel@ffmpeg.org
Cc: imraja729@gmail.com
Subject: [FFmpeg-devel] [PATCH] avfilter/dnn_backend_torch: enable async execution, memory safety & dynamic shapes
Date: Wed, 14 Jan 2026 13:19:41 +0530
Message-ID: <20260114074941.283607-1-imraja729@gmail.com>
X-Mailer: git-send-email 2.51.0
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Reply-To: FFmpeg development discussions and patches <ffmpeg-devel@ffmpeg.org>

This patch overhauls the LibTorch backend to support modern FFmpeg DNN
features:
1. Async Execution: Implements non-blocking inference using
   ff_dnn_start_inference_async.
2. Memory Safety: Fixes a critical memory leak by introducing a
   persistent input buffer in THInferRequest.
3. Dynamic Shapes: Adds support for changing input resolutions by
   reallocating buffers on the fly.
4. Robustness: Fixes device-selection crashes on parameter-less models.

Signed-off-by: Raja Rathour <imraja729@gmail.com>
---
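For context, a minimal standalone sketch of the buffer-reuse pattern
behind items 2 and 3 above. This is not part of the patch: InferSlot and
wrap_frame are illustrative names only, and it uses the deleter-less
torch::from_blob overload to get the non-owning behaviour the patch is
after.

    #include <torch/torch.h>
    #include <cstdlib>
    #include <new>

    struct InferSlot {
        float  *data = nullptr;  // persistent host buffer, reused across frames
        size_t  size = 0;        // current capacity in bytes
    };

    // Grow the buffer only when a larger frame arrives (dynamic shapes),
    // then wrap it in a tensor that does NOT own the memory, so LibTorch
    // never frees it and the allocation survives into the next frame.
    static torch::Tensor wrap_frame(InferSlot &slot, int64_t c, int64_t h, int64_t w)
    {
        size_t need = static_cast<size_t>(c * h * w) * sizeof(float);
        if (!slot.data || slot.size < need) {
            std::free(slot.data);
            slot.data = static_cast<float *>(std::malloc(need));
            slot.size = slot.data ? need : 0;
            if (!slot.data)
                throw std::bad_alloc();
        }
        // from_blob without a deleter leaves ownership with the caller.
        return torch::from_blob(slot.data, {1, c, h, w}, torch::kFloat32);
    }

One allocation per request replaces the old malloc-per-frame scheme, so
nothing is handed to a tensor deleter and nothing can leak if an
inference call fails midway.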
 libavfilter/dnn/dnn_backend_torch.cpp | 182 ++++++++++++--------
 1 file changed, 84 insertions(+), 98 deletions(-)

diff --git a/libavfilter/dnn/dnn_backend_torch.cpp b/libavfilter/dnn/dnn_backend_torch.cpp
index 2e4326d9d4..a320de1bf4 100644
--- a/libavfilter/dnn/dnn_backend_torch.cpp
+++ b/libavfilter/dnn/dnn_backend_torch.cpp
@@ -47,6 +47,8 @@ typedef struct THModel {
 typedef struct THInferRequest {
     torch::Tensor *output;
     torch::Tensor *input_tensor;
+    float *input_data;       // Persistent buffer to prevent leaks
+    size_t input_data_size;  // Track size for dynamic resizing
 } THInferRequest;
 
 typedef struct THRequestItem {
@@ -95,7 +97,10 @@ static void th_free_request(THInferRequest *request)
         delete(request->input_tensor);
         request->input_tensor = NULL;
     }
-    return;
+    if (request->input_data) {
+        av_freep(&request->input_data);
+        request->input_data_size = 0;
+    }
 }
 
 static inline void destroy_request_item(THRequestItem **arg)
@@ -138,7 +143,8 @@ static void dnn_free_model_th(DNNModel **model)
         av_freep(&item);
     }
     ff_queue_destroy(th_model->task_queue);
-    delete th_model->jit_model;
+    if (th_model->jit_model)
+        delete th_model->jit_model;
     av_freep(&th_model);
     *model = NULL;
 }
@@ -155,10 +161,6 @@ static int get_input_th(DNNModel *model, DNNData *input, const char *input_name)
     return 0;
 }
 
-static void deleter(void *arg)
-{
-    av_freep(&arg);
-}
 
 static int fill_model_input_th(THModel *th_model, THRequestItem *request)
 {
@@ -168,31 +170,43 @@ static int fill_model_input_th(THModel *th_model, THRequestItem *request)
     DNNData input = { 0 };
     DnnContext *ctx = th_model->ctx;
     int ret, width_idx, height_idx, channel_idx;
+    size_t cur_size;
 
     lltask = (LastLevelTaskItem *)ff_queue_pop_front(th_model->lltask_queue);
     if (!lltask) {
-        ret = AVERROR(EINVAL);
-        goto err;
+        return AVERROR(EINVAL);
     }
     request->lltask = lltask;
     task = lltask->task;
     infer_request = request->infer_request;
 
     ret = get_input_th(&th_model->model, &input, NULL);
-    if ( ret != 0) {
-        goto err;
+    if (ret != 0) {
+        return ret;
     }
     width_idx = dnn_get_width_idx_by_layout(input.layout);
     height_idx = dnn_get_height_idx_by_layout(input.layout);
     channel_idx = dnn_get_channel_idx_by_layout(input.layout);
     input.dims[height_idx] = task->in_frame->height;
     input.dims[width_idx] = task->in_frame->width;
-    input.data = av_malloc(input.dims[height_idx] * input.dims[width_idx] *
-                           input.dims[channel_idx] * sizeof(float));
-    if (!input.data)
-        return AVERROR(ENOMEM);
-    infer_request->input_tensor = new torch::Tensor();
-    infer_request->output = new torch::Tensor();
+
+    cur_size = input.dims[height_idx] * input.dims[width_idx] *
+               input.dims[channel_idx] * sizeof(float);
+
+    if (!infer_request->input_data || infer_request->input_data_size < cur_size) {
+        av_freep(&infer_request->input_data);
+        infer_request->input_data = (float *)av_malloc(cur_size);
+        if (!infer_request->input_data)
+            return AVERROR(ENOMEM);
+        infer_request->input_data_size = cur_size;
+    }
+
+    input.data = infer_request->input_data;
+
+    if (!infer_request->input_tensor)
+        infer_request->input_tensor = new torch::Tensor();
+    if (!infer_request->output)
+        infer_request->output = new torch::Tensor();
 
     switch (th_model->model.func_type) {
     case DFT_PROCESS_FRAME:
@@ -206,17 +220,15 @@ static int fill_model_input_th(THModel *th_model, THRequestItem *request)
         }
         break;
     default:
-        avpriv_report_missing_feature(NULL, "model function type %d", th_model->model.func_type);
+        avpriv_report_missing_feature(th_model->ctx, "model function type %d", th_model->model.func_type);
         break;
     }
 
     *infer_request->input_tensor = torch::from_blob(input.data,
         {1, input.dims[channel_idx], input.dims[height_idx], input.dims[width_idx]},
-        deleter, torch::kFloat32);
+        nullptr, torch::kFloat32);
+
     return 0;
-err:
-    th_free_request(infer_request);
-    return ret;
 }
 
 static int th_start_inference(void *args)
@@ -240,22 +252,28 @@ static int th_start_inference(void *args)
     th_model = (THModel *)task->model;
     ctx = th_model->ctx;
 
-    if (ctx->torch_option.optimize)
-        torch::jit::setGraphExecutorOptimize(true);
-    else
-        torch::jit::setGraphExecutorOptimize(false);
+    torch::jit::setGraphExecutorOptimize(!!ctx->torch_option.optimize);
 
     if (!infer_request->input_tensor || !infer_request->output) {
         av_log(ctx, AV_LOG_ERROR, "input or output tensor is NULL\n");
         return DNN_GENERIC_ERROR;
     }
-    // Transfer tensor to the same device as model
-    c10::Device device = (*th_model->jit_model->parameters().begin()).device();
+
+    /* FIX: Use the context device directly instead of querying model parameters */
+    const char *device_name = ctx->device ? ctx->device : "cpu";
+    c10::Device device = c10::Device(device_name);
+
     if (infer_request->input_tensor->device() != device)
         *infer_request->input_tensor = infer_request->input_tensor->to(device);
+
     inputs.push_back(*infer_request->input_tensor);
-
-    *infer_request->output = th_model->jit_model->forward(inputs).toTensor();
+
+    try {
+        *infer_request->output = th_model->jit_model->forward(inputs).toTensor();
+    } catch (const c10::Error& e) {
+        av_log(ctx, AV_LOG_ERROR, "Torch forward pass failed: %s\n", e.what());
+        return DNN_GENERIC_ERROR;
+    }
 
     return 0;
 }
@@ -273,13 +291,12 @@ static void infer_completion_callback(void *args) {
     outputs.order = DCO_RGB;
     outputs.layout = DL_NCHW;
     outputs.dt = DNN_FLOAT;
+
     if (sizes.size() == 4) {
-        // 4 dimensions: [batch_size, channel, height, width]
-        // this format of data is normally used for video frame SR
-        outputs.dims[0] = sizes.at(0); // N
-        outputs.dims[1] = sizes.at(1); // C
-        outputs.dims[2] = sizes.at(2); // H
-        outputs.dims[3] = sizes.at(3); // W
+        outputs.dims[0] = sizes.at(0);
+        outputs.dims[1] = sizes.at(1);
+        outputs.dims[2] = sizes.at(2);
+        outputs.dims[3] = sizes.at(3);
     } else {
         avpriv_report_missing_feature(th_model->ctx, "Support of this kind of model");
         goto err;
     }
@@ -288,7 +305,6 @@ static void infer_completion_callback(void *args) {
     switch (th_model->model.func_type) {
     case DFT_PROCESS_FRAME:
         if (task->do_ioproc) {
-            // Post process can only deal with CPU memory.
             if (output->device() != torch::kCPU)
                 *output = output->to(torch::kCPU);
             outputs.scale = 255;
@@ -307,14 +323,15 @@ static void infer_completion_callback(void *args) {
         avpriv_report_missing_feature(th_model->ctx, "model function type %d", th_model->model.func_type);
         goto err;
     }
+
     task->inference_done++;
     av_freep(&request->lltask);
+
 err:
     th_free_request(infer_request);
     if (ff_safe_queue_push_back(th_model->request_queue, request) < 0) {
         destroy_request_item(&request);
-        av_log(th_model->ctx, AV_LOG_ERROR, "Unable to push back request_queue when failed to start inference.\n");
     }
 }
 
@@ -332,7 +349,6 @@ static int execute_model_th(THRequestItem *request, Queue *lltask_queue)
 
     lltask = (LastLevelTaskItem *)ff_queue_peek_front(lltask_queue);
     if (lltask == NULL) {
-        av_log(NULL, AV_LOG_ERROR, "Failed to get LastLevelTaskItem\n");
         ret = AVERROR(EINVAL);
         goto err;
     }
@@ -340,16 +356,19 @@ static int execute_model_th(THRequestItem *request, Queue *lltask_queue)
     th_model = (THModel *)task->model;
 
     ret = fill_model_input_th(th_model, request);
-    if ( ret != 0) {
+    if (ret != 0) {
         goto err;
     }
+
     if (task->async) {
-        avpriv_report_missing_feature(th_model->ctx, "LibTorch async");
+        ret = ff_dnn_start_inference_async(th_model->ctx, &request->exec_module);
+        if (ret < 0)
+            goto err;
+        return 0;
     } else {
         ret = th_start_inference((void *)(request));
-        if (ret != 0) {
+        if (ret != 0)
             goto err;
-        }
         infer_completion_callback(request);
         return (task->inference_done == task->inference_todo) ? 0 : DNN_GENERIC_ERROR;
     }
@@ -367,7 +386,6 @@ static int get_output_th(DNNModel *model, const char *input_name, int input_widt
 {
     int ret = 0;
     THModel *th_model = (THModel*) model;
-    DnnContext *ctx = th_model->ctx;
     TaskItem task = { 0 };
     THRequestItem *request = NULL;
     DNNExecBaseParams exec_params = {
         .input_name = input_name,
         .output_names = NULL,
         .nb_output = 1,
         .in_frame = NULL,
         .out_frame = NULL,
     };
-    ret = ff_dnn_fill_gettingoutput_task(&task, &exec_params, th_model, input_height, input_width, ctx);
-    if ( ret != 0) {
+
+    ret = ff_dnn_fill_gettingoutput_task(&task, &exec_params, th_model, input_height, input_width, th_model->ctx);
+    if (ret != 0)
         goto err;
-    }
 
     ret = extract_lltask_from_task(&task, th_model->lltask_queue);
-    if ( ret != 0) {
-        av_log(ctx, AV_LOG_ERROR, "unable to extract last level task from task.\n");
+    if (ret != 0)
         goto err;
-    }
 
     request = (THRequestItem*) ff_safe_queue_pop_front(th_model->request_queue);
     if (!request) {
-        av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n");
         ret = AVERROR(EINVAL);
         goto err;
     }
@@ -407,12 +422,9 @@ err:
 
 static THInferRequest *th_create_inference_request(void)
 {
-    THInferRequest *request = (THInferRequest *)av_malloc(sizeof(THInferRequest));
-    if (!request) {
+    THInferRequest *request = (THInferRequest *)av_mallocz(sizeof(THInferRequest));
+    if (!request)
         return NULL;
-    }
-    request->input_tensor = NULL;
-    request->output = NULL;
     return request;
 }
 
@@ -451,38 +463,29 @@ static DNNModel *dnn_load_model_th(DnnContext *ctx, DNNFunctionType func_type, A
     }
 
     th_model->request_queue = ff_safe_queue_create();
-    if (!th_model->request_queue) {
+    if (!th_model->request_queue)
         goto fail;
-    }
 
     item = (THRequestItem *)av_mallocz(sizeof(THRequestItem));
-    if (!item) {
+    if (!item)
         goto fail;
-    }
-    item->lltask = NULL;
+
     item->infer_request = th_create_inference_request();
-    if (!item->infer_request) {
-        av_log(NULL, AV_LOG_ERROR, "Failed to allocate memory for Torch inference request\n");
+    if (!item->infer_request)
         goto fail;
-    }
+
     item->exec_module.start_inference = &th_start_inference;
     item->exec_module.callback = &infer_completion_callback;
     item->exec_module.args = item;
 
-    if (ff_safe_queue_push_back(th_model->request_queue, item) < 0) {
+    if (ff_safe_queue_push_back(th_model->request_queue, item) < 0)
         goto fail;
-    }
     item = NULL;
 
     th_model->task_queue = ff_queue_create();
-    if (!th_model->task_queue) {
-        goto fail;
-    }
-
     th_model->lltask_queue = ff_queue_create();
-    if (!th_model->lltask_queue) {
+    if (!th_model->task_queue || !th_model->lltask_queue)
         goto fail;
-    }
 
     model->get_input = &get_input_th;
     model->get_output = &get_output_th;
@@ -491,10 +494,8 @@ static DNNModel *dnn_load_model_th(DnnContext *ctx, DNNFunctionType func_type, A
     return model;
 
 fail:
-    if (item) {
+    if (item)
         destroy_request_item(&item);
-        av_freep(&item);
-    }
     dnn_free_model_th(&model);
     return NULL;
 }
@@ -502,48 +503,36 @@ fail:
 static int dnn_execute_model_th(const DNNModel *model, DNNExecBaseParams *exec_params)
 {
     THModel *th_model = (THModel *)model;
-    DnnContext *ctx = th_model->ctx;
     TaskItem *task;
     THRequestItem *request;
     int ret = 0;
 
-    ret = ff_check_exec_params(ctx, DNN_TH, model->func_type, exec_params);
-    if (ret != 0) {
-        av_log(ctx, AV_LOG_ERROR, "exec parameter checking fail.\n");
+    ret = ff_check_exec_params(th_model->ctx, DNN_TH, model->func_type, exec_params);
+    if (ret != 0)
         return ret;
-    }
 
     task = (TaskItem *)av_malloc(sizeof(TaskItem));
-    if (!task) {
-        av_log(ctx, AV_LOG_ERROR, "unable to alloc memory for task item.\n");
+    if (!task)
         return AVERROR(ENOMEM);
-    }
 
     ret = ff_dnn_fill_task(task, exec_params, th_model, 0, 1);
     if (ret != 0) {
         av_freep(&task);
-        av_log(ctx, AV_LOG_ERROR, "unable to fill task.\n");
         return ret;
     }
 
-    ret = ff_queue_push_back(th_model->task_queue, task);
-    if (ret < 0) {
+    if (ff_queue_push_back(th_model->task_queue, task) < 0) {
         av_freep(&task);
-        av_log(ctx, AV_LOG_ERROR, "unable to push back task_queue.\n");
-        return ret;
+        return AVERROR(ENOMEM);
    }
 
     ret = extract_lltask_from_task(task, th_model->lltask_queue);
-    if (ret != 0) {
-        av_log(ctx, AV_LOG_ERROR, "unable to extract last level task from task.\n");
+    if (ret != 0)
         return ret;
-    }
 
     request = (THRequestItem *)ff_safe_queue_pop_front(th_model->request_queue);
-    if (!request) {
-        av_log(ctx, AV_LOG_ERROR, "unable to get infer request.\n");
+    if (!request)
         return AVERROR(EINVAL);
-    }
 
     return execute_model_th(request, th_model->lltask_queue);
 }
@@ -560,14 +549,11 @@ static int dnn_flush_th(const DNNModel *model)
     THRequestItem *request;
 
     if (ff_queue_size(th_model->lltask_queue) == 0)
-        // no pending task need to flush
         return 0;
 
     request = (THRequestItem *)ff_safe_queue_pop_front(th_model->request_queue);
-    if (!request) {
-        av_log(th_model->ctx, AV_LOG_ERROR, "unable to get infer request.\n");
+    if (!request)
         return AVERROR(EINVAL);
-    }
 
     return execute_model_th(request, th_model->lltask_queue);
 }
@@ -580,4 +566,4 @@ extern const DNNModule ff_dnn_backend_torch = {
     .get_result = dnn_get_result_th,
     .flush = dnn_flush_th,
     .free_model = dnn_free_model_th,
-};
+};
\ No newline at end of file
-- 
2.51.0

_______________________________________________
ffmpeg-devel mailing list -- ffmpeg-devel@ffmpeg.org
To unsubscribe send an email to ffmpeg-devel-leave@ffmpeg.org