Git Inbox Mirror of the ffmpeg-devel mailing list - see https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
From: "Fu, Ting" <ting.fu-at-intel.com@ffmpeg.org>
To: FFmpeg development discussions and patches <ffmpeg-devel@ffmpeg.org>
Subject: Re: [FFmpeg-devel] [PATCH 2/2] libavfi/dnn: add LibTorch as one of DNN backend
Date: Wed, 25 May 2022 03:50:27 +0000
Message-ID: <SN6PR11MB31179D4A8E25233018BDEE86E6D69@SN6PR11MB3117.namprd11.prod.outlook.com> (raw)
In-Reply-To: <8f3540a8-f902-4f6d-a761-9bd59fc094df@www.fastmail.com>



> -----Original Message-----
> From: ffmpeg-devel <ffmpeg-devel-bounces@ffmpeg.org> On Behalf Of
> Jean-Baptiste Kempf
> Sent: Tuesday, May 24, 2022 10:52 PM
> To: ffmpeg-devel <ffmpeg-devel@ffmpeg.org>
> Subject: Re: [FFmpeg-devel] [PATCH 2/2] libavfi/dnn: add LibTorch as one of
> DNN backend
> 
> Hello,
> 
> On Tue, 24 May 2022, at 16:03, Fu, Ting wrote:
> > I am trying to add this backend since we got some users who have
> > interest in doing PyTorch model(BasicVSR model) inference with FFmpeg.
> 
> I think you are missing my point here.
> We already have 3 backends (TF, Native, OpenVino) in FFmpeg.
> Those are not to support different hardware, but different tastes for users,

Hi Jean-Baptiste,

Yes, you are right, we already have three backends in FFmpeg DNN. But for now the native backend is barely usable, because of its weak support for layers and operations.
And we do support different hardware: for example, the OpenVINO backend supports inference on Intel GPUs. Currently the TensorFlow and OpenVINO backends support several models, including super resolution, object detection and object classification models. So I think it is not only a difference of taste for users, but an option they can choose for their own work. AFAIK, some individuals and organizations are already using FFmpeg DNN.
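
Just to illustrate how a user picks a backend today (a rough sketch only; the model file "sr.xml" and the tensor names "x"/"y" below are placeholders I made up, not from any patch), the choice is made per filter instance through the dnn_backend option, e.g. via the libavfilter API:

// Rough C++ sketch: create a dnn_processing instance and choose the
// backend through its dnn_backend option. "sr.xml", "x" and "y" are
// placeholders for a real model file and its tensor names.
// Build: g++ pick_backend.cpp $(pkg-config --cflags --libs libavfilter libavutil)
extern "C" {
#include <libavfilter/avfilter.h>
}
#include <cstdio>

int main() {
    const char *desc =
        "dnn_processing=dnn_backend=openvino:model=sr.xml:input=x:output=y";

    AVFilterGraph *graph   = avfilter_graph_alloc();
    AVFilterInOut *inputs  = nullptr;
    AVFilterInOut *outputs = nullptr;

    // A complete program would also create buffer/buffersink filters, link
    // them to the open pads returned here and call avfilter_graph_config().
    int ret = avfilter_graph_parse_ptr(graph, desc, &inputs, &outputs, nullptr);
    if (ret < 0)
        fprintf(stderr, "could not set up dnn_processing (backend enabled at build time?)\n");

    avfilter_inout_free(&inputs);
    avfilter_inout_free(&outputs);
    avfilter_graph_free(&graph);
    return ret < 0 ? 1 : 0;
}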

> who prefer one API to another one.
> Where does it end? How many of those backends will we get? 10?
> 
> What's the value to do that development inside ffmpeg?
> 

I think your concern is why we need such a backend. The reason is that users want to run inference on BasicVSR and other VSR (video super resolution) models, and those models are mostly implemented in PyTorch. Converting such models to another AI model format can cause several issues. Besides, video codecs are a strength of the FFmpeg framework, which supports various kinds of hardware acceleration; we would like to use this framework to improve the performance of AI inference and the user experience.
What I want to emphasize is that the LibTorch backend is not being added just for the sake of adding a patch; it answers an actual requirement.
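
For reference, what such a backend would wrap is roughly the following LibTorch C++ call sequence; the model path and input shape are placeholders I made up for illustration, this is not the patch code itself:

// Rough C++ sketch of what a LibTorch backend would wrap: load a
// TorchScript model as-is (no conversion to another format) and run a
// forward pass. Model path and tensor shape are placeholders.
// Build against LibTorch (see the PyTorch C++ docs for the CMake setup).
#include <torch/script.h>
#include <iostream>
#include <vector>

int main() {
    try {
        // Load the serialized TorchScript module exported from PyTorch.
        torch::jit::script::Module module = torch::jit::load("basicvsr_scripted.pt");
        module.eval();

        // Dummy NCHW float tensor standing in for one decoded frame; the
        // real layout and size depend on the model.
        std::vector<torch::jit::IValue> inputs;
        inputs.push_back(torch::rand({1, 3, 180, 320}));

        torch::NoGradGuard no_grad;  // inference only
        at::Tensor output = module.forward(inputs).toTensor();
        std::cout << "output shape: " << output.sizes() << std::endl;
    } catch (const c10::Error &e) {
        std::cerr << "failed to load or run the model: " << e.what() << std::endl;
        return 1;
    }
    return 0;
}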

Thank you.
Ting FU

> --
> Jean-Baptiste Kempf -  President
> +33 672 704 734

Thread overview: 9 messages
2022-05-23  9:29 [FFmpeg-devel] [PATCH 1/2] libavfi/dnn: refine enum DNNColorOrder Ting Fu
2022-05-23  9:29 ` [FFmpeg-devel] [PATCH 2/2] libavfi/dnn: add LibTorch as one of DNN backend Ting Fu
2022-05-23  9:51   ` Jean-Baptiste Kempf
2022-05-24 14:03     ` Fu, Ting
2022-05-24 14:23       ` Soft Works
2022-05-25  3:20         ` Fu, Ting
2022-05-24 14:51       ` Jean-Baptiste Kempf
2022-05-24 15:29         ` Soft Works
2022-05-25  3:50         ` Fu, Ting [this message]
