Git Inbox Mirror of the ffmpeg-devel mailing list - see https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
From: "Guo, Yejun" <yejun.guo-at-intel.com@ffmpeg.org>
To: FFmpeg development discussions and patches <ffmpeg-devel@ffmpeg.org>
Subject: Re: [FFmpeg-devel] [PATCH v5] libavfi/dnn: add LibTorch as one of DNN backend
Date: Thu, 14 Mar 2024 11:38:25 +0000
Message-ID: <PH7PR11MB5957E0938528E4BE5549B19CF1292@PH7PR11MB5957.namprd11.prod.outlook.com> (raw)
In-Reply-To: <20240311050229.1692658-1-wenbin.chen@intel.com>



> -----Original Message-----
> From: ffmpeg-devel <ffmpeg-devel-bounces@ffmpeg.org> On Behalf Of
> wenbin.chen-at-intel.com@ffmpeg.org
> Sent: Monday, March 11, 2024 1:02 PM
> To: ffmpeg-devel@ffmpeg.org
> Subject: [FFmpeg-devel] [PATCH v5] libavfi/dnn: add LibTorch as one of DNN
> backend
> 
> From: Wenbin Chen <wenbin.chen@intel.com>
> 
> PyTorch is an open source machine learning framework that accelerates
> the path from research prototyping to production deployment. Official
> website: https://pytorch.org/. We refer to the C++ library of PyTorch
> as LibTorch below.
> 
> To build FFmpeg with LibTorch, take the following steps as a reference:
> 1. Download the LibTorch C++ library from
> https://pytorch.org/get-started/locally/; select C++/Java as the
> language and the other options as needed. Please download the cxx11
> ABI version (libtorch-cxx11-abi-shared-with-deps-*.zip).
> 2. Unzip the file to a directory of your choice:
> unzip libtorch-shared-with-deps-latest.zip -d your_dir
> 3. Export libtorch_root/libtorch/include and
> libtorch_root/libtorch/include/torch/csrc/api/include to $PATH, and
> export libtorch_root/libtorch/lib/ to $LD_LIBRARY_PATH.
> 4. Configure FFmpeg with:
> ../configure --enable-libtorch \
>     --extra-cflag=-I/libtorch_root/libtorch/include \
>     --extra-cflag=-I/libtorch_root/libtorch/include/torch/csrc/api/include \
>     --extra-ldflags=-L/libtorch_root/libtorch/lib/
> 5. make
> 
> To run FFmpeg DNN inference with the LibTorch backend:
> ./ffmpeg -i input.jpg -vf \
>     dnn_processing=dnn_backend=torch:model=LibTorch_model.pt -y output.jpg
> The LibTorch_model.pt can be generated by Python with the
> torch.jit.script() API. Please note that torch.jit.trace() is not
> recommended, since it does not support variable input sizes.

Can you provide more detail (maybe a link from PyTorch) about the
LibTorch_model.pt generation, so we can have a try?
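For reference, my understanding is that the generation typically looks like the following (a minimal sketch; ToyModel and the file name are only placeholders, not anything from the patch):

```python
import torch
import torch.nn as nn

# A toy single-layer model standing in for a real image-processing network.
class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)

model = ToyModel()
model.eval()

# torch.jit.script() compiles the Python code itself, so the saved module
# is not tied to any example input shape, unlike torch.jit.trace().
scripted = torch.jit.script(model)
scripted.save("LibTorch_model.pt")

# A scripted module accepts any spatial size at inference time.
out = scripted(torch.rand(1, 3, 64, 48))
print(out.shape)  # torch.Size([1, 3, 64, 48])
```

The saved .pt file is what the dnn_processing filter's model= option would point at.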

> 
> Signed-off-by: Ting Fu <ting.fu@intel.com>
> Signed-off-by: Wenbin Chen <wenbin.chen@intel.com>
> ---
>  configure                             |   5 +-
>  libavfilter/dnn/Makefile              |   1 +
>  libavfilter/dnn/dnn_backend_torch.cpp | 597 ++++++++++++++++++++++++++
>  libavfilter/dnn/dnn_interface.c       |   5 +
>  libavfilter/dnn_filter_common.c       |  15 +-
>  libavfilter/dnn_interface.h           |   2 +-
>  libavfilter/vf_dnn_processing.c       |   3 +
>  7 files changed, 624 insertions(+), 4 deletions(-)
>  create mode 100644 libavfilter/dnn/dnn_backend_torch.cpp
> 
> +static int fill_model_input_th(THModel *th_model, THRequestItem *request)
> +{
> +    LastLevelTaskItem *lltask = NULL;
> +    TaskItem *task = NULL;
> +    THInferRequest *infer_request = NULL;
> +    DNNData input = { 0 };
> +    THContext *ctx = &th_model->ctx;
> +    int ret, width_idx, height_idx, channel_idx;
> +
> +    lltask = (LastLevelTaskItem *)ff_queue_pop_front(th_model->lltask_queue);
> +    if (!lltask) {
> +        ret = AVERROR(EINVAL);
> +        goto err;
> +    }
> +    request->lltask = lltask;
> +    task = lltask->task;
> +    infer_request = request->infer_request;
> +
> +    ret = get_input_th(th_model, &input, NULL);
> +    if (ret != 0) {
> +        goto err;
> +    }
> +    width_idx = dnn_get_width_idx_by_layout(input.layout);
> +    height_idx = dnn_get_height_idx_by_layout(input.layout);
> +    channel_idx = dnn_get_channel_idx_by_layout(input.layout);
> +    input.dims[height_idx] = task->in_frame->height;
> +    input.dims[width_idx] = task->in_frame->width;
> +    input.data = av_malloc(input.dims[height_idx] * input.dims[width_idx] *
> +                           input.dims[channel_idx] * sizeof(float));
> +    if (!input.data)
> +        return AVERROR(ENOMEM);
> +    infer_request->input_tensor = new torch::Tensor();
> +    infer_request->output = new torch::Tensor();
> +
> +    switch (th_model->model->func_type) {
> +    case DFT_PROCESS_FRAME:
> +        input.scale = 255;
> +        if (task->do_ioproc) {
> +            if (th_model->model->frame_pre_proc != NULL) {
> +                th_model->model->frame_pre_proc(task->in_frame, &input, th_model->model->filter_ctx);
> +            } else {
> +                ff_proc_from_frame_to_dnn(task->in_frame, &input, ctx);
> +            }
> +        }
> +        break;
> +    default:
> +        avpriv_report_missing_feature(NULL, "model function type %d", th_model->model->func_type);
> +        break;
> +    }
> +    *infer_request->input_tensor = torch::from_blob(input.data,
> +        {1, 1, input.dims[channel_idx], input.dims[height_idx], input.dims[width_idx]},

An extra dimension is added here, besides batch size, channel, height
and width, to support multiple frames for algorithms such as video
super-resolution.

Let's first support the regular NCHW/NHWC dimensions, and then add
support for multiple frames.
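For illustration, the difference between the regular 4-D NCHW input and the patch's 5-D layout can be sketched in Python (this is just the tensor shapes, not the backend code itself; the sizes are arbitrary):

```python
import numpy as np
import torch

h, w, c = 4, 6, 3
data = np.zeros(c * h * w, dtype=np.float32)

# Regular NCHW input as most image models expect:
# (batch, channel, height, width).
nchw = torch.from_numpy(data).reshape(1, c, h, w)

# The patch instead builds a 5-D tensor with an extra frame dimension:
# (batch, frames, channel, height, width), frames == 1 for a single image.
ncthw = torch.from_numpy(data).reshape(1, 1, c, h, w)

print(nchw.shape)   # torch.Size([1, 3, 4, 6])
print(ncthw.shape)  # torch.Size([1, 1, 3, 4, 6])
```

A model scripted for 4-D NCHW input would reject the 5-D tensor, which is why starting with the regular layout seems safer.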


Thread overview: 3+ messages
2024-03-11  5:02 wenbin.chen-at-intel.com
2024-03-14 11:38 ` Guo, Yejun [this message]
2024-03-15  2:01   ` Chen, Wenbin

