From: wenbin.chen-at-intel.com@ffmpeg.org
To: ffmpeg-devel@ffmpeg.org
Date: Mon, 3 Jun 2024 13:09:35 +0800
Message-Id: <20240603050935.3335800-1-wenbin.chen@intel.com>
Subject: [FFmpeg-devel] [PATCH] libavfilter/dnn: enable LibTorch xpu device option support

From: Wenbin Chen

Add xpu device support to the libtorch backend. To enable xpu support,
add "-Wl,--no-as-needed -lintel-ext-pt-gpu -Wl,--as-needed" to
"--extra-libs" when configuring FFmpeg.
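
For reference, a complete configure step could look roughly like the
following. Only the --extra-libs value above comes from this patch; the
--enable-libtorch switch and the install paths are assumptions for
wherever LibTorch and the Intel extension happen to be installed:

  ./configure \
      --enable-libtorch \
      --extra-cflags="-I/path/to/libtorch/include -I/path/to/libtorch/include/torch/csrc/api/include" \
      --extra-ldflags="-L/path/to/libtorch/lib" \
      --extra-libs="-Wl,--no-as-needed -lintel-ext-pt-gpu -Wl,--as-needed"
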
Signed-off-by: Wenbin Chen
---
 libavfilter/dnn/dnn_backend_torch.cpp | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/libavfilter/dnn/dnn_backend_torch.cpp b/libavfilter/dnn/dnn_backend_torch.cpp
index 2557264713..ea493f5873 100644
--- a/libavfilter/dnn/dnn_backend_torch.cpp
+++ b/libavfilter/dnn/dnn_backend_torch.cpp
@@ -250,6 +250,10 @@ static int th_start_inference(void *args)
         av_log(ctx, AV_LOG_ERROR, "input or output tensor is NULL\n");
         return DNN_GENERIC_ERROR;
     }
+    // Transfer tensor to the same device as model
+    c10::Device device = (*th_model->jit_model->parameters().begin()).device();
+    if (infer_request->input_tensor->device() != device)
+        *infer_request->input_tensor = infer_request->input_tensor->to(device);
     inputs.push_back(*infer_request->input_tensor);
 
     *infer_request->output = th_model->jit_model->forward(inputs).toTensor();
@@ -285,6 +289,9 @@ static void infer_completion_callback(void *args) {
     switch (th_model->model.func_type) {
     case DFT_PROCESS_FRAME:
         if (task->do_ioproc) {
+            // Post process can only deal with CPU memory.
+            if (output->device() != torch::kCPU)
+                *output = output->to(torch::kCPU);
             outputs.scale = 255;
             outputs.data = output->data_ptr();
             if (th_model->model.frame_post_proc != NULL) {
@@ -424,7 +431,13 @@ static DNNModel *dnn_load_model_th(DnnContext *ctx, DNNFunctionType func_type, A
     th_model->ctx = ctx;
 
     c10::Device device = c10::Device(device_name);
-    if (!device.is_cpu()) {
+    if (device.is_xpu()) {
+        if (!at::hasXPU()) {
+            av_log(ctx, AV_LOG_ERROR, "No XPU device found\n");
+            goto fail;
+        }
+        at::detail::getXPUHooks().initXPU();
+    } else if (!device.is_cpu()) {
         av_log(ctx, AV_LOG_ERROR, "Not supported device:\"%s\"\n", device_name);
         goto fail;
     }
@@ -432,6 +445,7 @@ static DNNModel *dnn_load_model_th(DnnContext *ctx, DNNFunctionType func_type, A
     try {
         th_model->jit_model = new torch::jit::Module;
         (*th_model->jit_model) = torch::jit::load(ctx->model_filename);
+        th_model->jit_model->to(device);
     } catch (const c10::Error& e) {
         av_log(ctx, AV_LOG_ERROR, "Failed to load torch model\n");
         goto fail;
-- 
2.34.1
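
Outside the patch itself, the device handling the backend now follows
(move the model to the requested device, move the input tensor to the
device the model's parameters live on, run inference, copy the output
back to CPU memory for post-processing) can be sketched as a small
standalone LibTorch program. This is only an illustrative sketch against
the public LibTorch C++ API, not FFmpeg code; the file name, input
shape, and build line are assumptions:

// sketch_torch_device.cpp -- illustrative only, not part of this patch.
// Mirrors the device handling added to dnn_backend_torch.cpp:
//   1. load a TorchScript model and move it to the requested device,
//   2. transfer the input tensor to the device the model lives on,
//   3. run inference,
//   4. copy the output back to CPU memory for post-processing.
// Running with "xpu" requires a LibTorch build with the Intel extension
// available (the same -lintel-ext-pt-gpu library the commit message names).
//
// Rough build line (paths are placeholders):
//   g++ -std=c++17 sketch_torch_device.cpp \
//       -I/path/to/libtorch/include \
//       -I/path/to/libtorch/include/torch/csrc/api/include \
//       -L/path/to/libtorch/lib -ltorch -ltorch_cpu -lc10

#include <torch/script.h>
#include <iostream>
#include <vector>

int main(int argc, char **argv)
{
    if (argc < 3) {
        std::cerr << "usage: " << argv[0] << " model.pt <cpu|xpu>\n";
        return 1;
    }

    try {
        // Parse the device string, as dnn_load_model_th() does with device_name.
        c10::Device device(argv[2]);

        // Load the TorchScript module and move its parameters to the device,
        // as the patch does with th_model->jit_model->to(device).
        torch::jit::Module module = torch::jit::load(argv[1]);
        module.to(device);

        // Dummy CPU input (shape is an assumption); transfer it to the device
        // the model parameters live on, as th_start_inference() now does.
        torch::Tensor input = torch::rand({1, 3, 64, 64});
        c10::Device model_device = device;
        if (module.parameters().size() > 0)
            model_device = (*module.parameters().begin()).device();
        if (input.device() != model_device)
            input = input.to(model_device);

        std::vector<torch::jit::IValue> inputs;
        inputs.push_back(input);
        torch::Tensor output = module.forward(inputs).toTensor();

        // Post-processing only understands CPU memory, so copy the result
        // back, as infer_completion_callback() now does.
        if (output.device() != torch::kCPU)
            output = output.to(torch::kCPU);

        std::cout << "output sizes: " << output.sizes() << "\n";
    } catch (const c10::Error &e) {
        std::cerr << "LibTorch error: " << e.what() << "\n";
        return 1;
    }
    return 0;
}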