* [FFmpeg-devel] [PATCH] libavfilter/dnn: enable LibTorch xpu device option support
From: wenbin.chen-at-intel.com @ 2024-06-03 5:09 UTC
To: ffmpeg-devel
From: Wenbin Chen <wenbin.chen@intel.com>
Add XPU device support to the LibTorch backend.
To enable XPU support, add
"-Wl,--no-as-needed -lintel-ext-pt-gpu -Wl,--as-needed" to
"--extra-libs" when configuring FFmpeg.
Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
---
libavfilter/dnn/dnn_backend_torch.cpp | 16 +++++++++++++++-
1 file changed, 15 insertions(+), 1 deletion(-)
diff --git a/libavfilter/dnn/dnn_backend_torch.cpp b/libavfilter/dnn/dnn_backend_torch.cpp
index 2557264713..ea493f5873 100644
--- a/libavfilter/dnn/dnn_backend_torch.cpp
+++ b/libavfilter/dnn/dnn_backend_torch.cpp
@@ -250,6 +250,10 @@ static int th_start_inference(void *args)
av_log(ctx, AV_LOG_ERROR, "input or output tensor is NULL\n");
return DNN_GENERIC_ERROR;
}
+ // Transfer tensor to the same device as model
+ c10::Device device = (*th_model->jit_model->parameters().begin()).device();
+ if (infer_request->input_tensor->device() != device)
+ *infer_request->input_tensor = infer_request->input_tensor->to(device);
inputs.push_back(*infer_request->input_tensor);
*infer_request->output = th_model->jit_model->forward(inputs).toTensor();
@@ -285,6 +289,9 @@ static void infer_completion_callback(void *args) {
switch (th_model->model.func_type) {
case DFT_PROCESS_FRAME:
if (task->do_ioproc) {
+ // Post process can only deal with CPU memory.
+ if (output->device() != torch::kCPU)
+ *output = output->to(torch::kCPU);
outputs.scale = 255;
outputs.data = output->data_ptr();
if (th_model->model.frame_post_proc != NULL) {
@@ -424,7 +431,13 @@ static DNNModel *dnn_load_model_th(DnnContext *ctx, DNNFunctionType func_type, A
th_model->ctx = ctx;
c10::Device device = c10::Device(device_name);
- if (!device.is_cpu()) {
+ if (device.is_xpu()) {
+ if (!at::hasXPU()) {
+ av_log(ctx, AV_LOG_ERROR, "No XPU device found\n");
+ goto fail;
+ }
+ at::detail::getXPUHooks().initXPU();
+ } else if (!device.is_cpu()) {
av_log(ctx, AV_LOG_ERROR, "Not supported device:\"%s\"\n", device_name);
goto fail;
}
@@ -432,6 +445,7 @@ static DNNModel *dnn_load_model_th(DnnContext *ctx, DNNFunctionType func_type, A
try {
th_model->jit_model = new torch::jit::Module;
(*th_model->jit_model) = torch::jit::load(ctx->model_filename);
+ th_model->jit_model->to(device);
} catch (const c10::Error& e) {
av_log(ctx, AV_LOG_ERROR, "Failed to load torch model\n");
goto fail;
--
2.34.1
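The device-handling pattern the patch adds (move the input tensor to the model's device before forward, move the output back to CPU before post-processing) is independent of LibTorch. A minimal Python sketch of the same control flow, using stand-in Tensor/Model classes rather than real torch types:

```python
# Toy sketch of the device-sync pattern from the patch above.
# "Tensor" and "Model" are stand-ins, not LibTorch types.

class Tensor:
    def __init__(self, data, device="cpu"):
        self.data = data
        self.device = device

    def to(self, device):
        # Return a copy "moved" to the target device.
        return Tensor(self.data, device)

class Model:
    def __init__(self, device="xpu"):
        self.device = device  # device the model weights live on

    def forward(self, x):
        # Output is produced on the model's device.
        return Tensor([v * 2 for v in x.data], self.device)

def run_inference(model, inp):
    # Transfer input to the same device as the model (first hunk).
    if inp.device != model.device:
        inp = inp.to(model.device)
    out = model.forward(inp)
    # Post-processing can only deal with CPU memory (second hunk).
    if out.device != "cpu":
        out = out.to("cpu")
    return out

result = run_inference(Model("xpu"), Tensor([1, 2, 3], "cpu"))
print(result.device, result.data)  # cpu [2, 4, 6]
```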
_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
To unsubscribe, visit link above, or email
ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".
* Re: [FFmpeg-devel] [PATCH] libavfilter/dnn: enable LibTorch xpu device option support
From: Guo, Yejun @ 2024-06-08 12:36 UTC
To: FFmpeg development discussions and patches
> -----Original Message-----
> From: ffmpeg-devel <ffmpeg-devel-bounces@ffmpeg.org> On Behalf Of
> wenbin.chen-at-intel.com@ffmpeg.org
> Sent: Monday, June 3, 2024 1:10 PM
> To: ffmpeg-devel@ffmpeg.org
> Subject: [FFmpeg-devel] [PATCH] libavfi/dnn: enable LibTorch xpu device option
> support
>
> From: Wenbin Chen <wenbin.chen@intel.com>
>
> Add xpu device support to libtorch backend.
> To enable xpu support you need to add
> "-Wl,--no-as-needed -lintel-ext-pt-gpu -Wl,--as-needed" to "--extra-libs" when
> configure ffmpeg.
>
> Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
LGTM, will push soon.