From: "Xiang, Haihao" <haihao.xiang-at-intel.com@ffmpeg.org>
To: "ffmpeg-devel@ffmpeg.org" <ffmpeg-devel@ffmpeg.org>
Subject: Re: [FFmpeg-devel] [PATCH v4 1/1] avutils/hwcontext: When deriving a hwdevice, search for existing device in both directions
Date: Mon, 10 Jan 2022 06:47:33 +0000
Message-ID: <fec3c8a7fba4416cba7fae696c00975da994a416.camel@intel.com>
In-Reply-To: <DM8P223MB036563657A73BD5144916FD2BA509@DM8P223MB0365.NAMP223.PROD.OUTLOOK.COM>
On Mon, 2022-01-10 at 01:40 +0000, Soft Works wrote:
> > -----Original Message-----
> > From: ffmpeg-devel <ffmpeg-devel-bounces@ffmpeg.org> On Behalf Of Mark
> > Thompson
> > Sent: Monday, January 10, 2022 1:57 AM
> > To: ffmpeg-devel@ffmpeg.org
> > Subject: Re: [FFmpeg-devel] [PATCH v4 1/1] avutils/hwcontext: When deriving
> > a hwdevice, search for existing device in both directions
> >
> > On 09/01/2022 23:36, Soft Works wrote:
> > > > -----Original Message-----
> > > > From: ffmpeg-devel <ffmpeg-devel-bounces@ffmpeg.org> On Behalf Of Mark
> > > > Thompson
> > > > Sent: Monday, January 10, 2022 12:13 AM
> > > > To: ffmpeg-devel@ffmpeg.org
> > > > Subject: Re: [FFmpeg-devel] [PATCH v4 1/1] avutils/hwcontext: When
> > > > deriving a hwdevice, search for existing device in both directions
> > > >
> > > > On 09/01/2022 21:15, Soft Works wrote:
> > > > > > -----Original Message-----
> > > > > > From: ffmpeg-devel <ffmpeg-devel-bounces@ffmpeg.org> On Behalf Of
> > > > > > Mark
> > > > > > Thompson
> > > > > > Sent: Sunday, January 9, 2022 7:39 PM
> > > > > > To: ffmpeg-devel@ffmpeg.org
> > > > > > Subject: Re: [FFmpeg-devel] [PATCH v4 1/1] avutils/hwcontext: When
> > > > > > deriving a hwdevice, search for existing device in both directions
> > > > > >
> > > > > > On 05/01/2022 03:38, Xiang, Haihao wrote:
> > > > > > > ... this patch really fixed some issues for me and others.
> > > > > >
> > > > > > Can you explain this in more detail?
> > > > > >
> > > > > > I'd like to understand whether the issues you refer to are something
> > > > > > which would be fixed by the ffmpeg utility allowing selection of
> > > > > > devices for libavfilter, or whether they are something unrelated.
> > > > > >
> > > > > > (For library users the currently-supported way of getting the same
> > > > > > device again is to keep a reference to the device and reuse it. If
> > > > > > there is some case where you can't do that then it would be useful
> > > > > > to hear about it.)
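(As an aside, a minimal sketch of the library-side pattern described above:
create the device once, keep the reference, and hand out new references
wherever the same device is needed. The VAAPI type and device path here are
only illustrative:)

    #include <libavutil/hwcontext.h>

    AVBufferRef *device_ref = NULL;
    int err = av_hwdevice_ctx_create(&device_ref, AV_HWDEVICE_TYPE_VAAPI,
                                     "/dev/dri/renderD128", NULL, 0);
    if (err < 0)
        return err;

    /* Keep device_ref for the lifetime of the pipeline; every component that
     * needs the same device gets its own reference instead of a newly created
     * or derived context. */
    AVBufferRef *dec_device = av_buffer_ref(device_ref);
    AVBufferRef *vf_device  = av_buffer_ref(device_ref);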
> > > > >
> > > > > Hi Mark,
> > > > >
> > > > > they have 3 workaround patches on their staging repo, but I'll let
> > > > > Haihao
> > > > > answer in detail.
> > > > >
> > > > > I have another question. I've been searching high and low, yet I can't
> > > > > find the message. Do you remember that patch discussion from (quite a
> > > > > few) months ago, where it was about another QSV change (something
> > > > > about
> > > > > device creation from the command line, IIRC). There was a command line
> > > > > example with QSV and you correctly remarked something like:
> > > > > "Do you even know that just for this command line, there are 5 device
> > > > > creations happening in the background, implicit and explicit, and in
> > > > > one case (or 2), it's not even creating the specified device but
> > > > > a session for the default device instead"
> > > > > (just roughly from memory)
> > > > >
> > > > > Do you remember - or was it Philip?
> > > >
> > > > <https://lists.ffmpeg.org/pipermail/ffmpeg-devel/2021-March/277731.html>
> > > >
> > > > > Anyway, this is something that the patch will improve. There has been
> > > > > one
> > > > > other commit since that time regarding explicit device creation from
> > > > > Haihao (iirc), which already reduced the device creation and fixed the
> > > > > incorrect default session creation.
> > > >
> > > > Yes, the special ffmpeg utility code to work around the lack of
> > > > AV_CODEC_HW_CONFIG_METHOD_HW_FRAMES_CTX in the libmfx decoders caused
> > > > confusion by working differently to everything else - implementing that
> > > > and getting rid of the workarounds was definitely a good thing.
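(For context, a rough sketch of how a caller can probe a decoder for
AV_CODEC_HW_CONFIG_METHOD_HW_FRAMES_CTX support; the h264_qsv decoder and
AV_PIX_FMT_QSV are just an example:)

    #include <libavcodec/avcodec.h>

    const AVCodec *codec = avcodec_find_decoder_by_name("h264_qsv");
    for (int i = 0;; i++) {
        const AVCodecHWConfig *cfg = avcodec_get_hw_config(codec, i);
        if (!cfg)
            break;
        if ((cfg->methods & AV_CODEC_HW_CONFIG_METHOD_HW_FRAMES_CTX) &&
            cfg->pix_fmt == AV_PIX_FMT_QSV) {
            /* The decoder accepts a user-supplied AVHWFramesContext. */
        }
    }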
> > > >
> > > > > My patch tackles this from another side: at that time, you (or Philip)
> > > > > explained that the secondary context that QSV requires (VAAPI, D3Dx)
> > > > > and that is initially created when setting up the QSV device, does not
> > > > > even get used when subsequently deriving to a context of that kind.
> > > > > Instead, a new device is being created in this case.
> > > > >
> > > > > That's another scenario which is fixed by this patch.
> > > >
> > > > It does sound like you just always want a libmfx device to be derived
> > > > from
> > > > the thing which is really there sitting underneath it.
> > >
> > > "That's another scenario which is fixed by this patch"
> > >
> > > Things stop working as expected as soon as you are working with 3 or more
> > > derived hw devices and neither hwmap nor hwmap's reverse option can get
> > > you to the context you want.
> >
> > And your situation is sufficiently complex that specifying devices
> > explicitly is probably what you want, rather than trying to trick some
> > implicit route into returning the answer you already know.
> >
> > > > If you are a library user then you get the original hw context by
> > > > reusing the reference to it that you made when you created it. This
> > > > includes libavfilter users, who can provide a hw device to each hwmap
> > > > separately.
> > > >
> > > > If you are an ffmpeg utility user then I agree there isn't currently a
> > > > way to do this for filter graphs, hence the solution of providing a way
> > > > in the ffmpeg utility to set hw devices per-filter.
> > >
> > > just setting the context on a filter doesn't make any sense, because you
> > > need the mapping. It only makes sense for the hwmap and hwupload filters.
> >
> > Yes? The filters you need to give it to are the hwmap and hwupload filters,
> > the others indeed don't matter (though they are blindly set at the moment
> > because there is no way to know they don't need it).
> >
> > > > > Anyway I'm wondering whether it can even be logically valid to derive
> > > > > from one device to another and then to another instance of the
> > > > > previous
> > > > > device type.
> > > > > From my understanding, "deriving" or "hw mapping" from one device to
> > > > > another means to establish a different way or accessor to a common
> > > > > resource/data, which means that you can access the data in one or the
> > > > > other way.
> > > > >
> > > > > Now let's assume a forward device-derivation chain like this:
> > > > >
> > > > > D3D_1 >> OpenCL_1 >> D3D_2
> > > >
> > > > You can't do this because device derivation is unidirectional (and
> > > > acyclic) - you can only derive an OpenCL device from D3D (9 or 11), not
> > > > the other way around.
> > > >
> > > > Similarly, you can only map frames from D3D to OpenCL. That's why the
> > > > hwmap reverse option exists, because of cases where you actually want
> > > > the other direction which doesn't really exist.
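(A small illustration of that direction constraint with the public API,
assuming an existing D3D11 device reference d3d11_ref; error handling
omitted:)

    /* Deriving OpenCL from D3D11 is a supported direction... */
    AVBufferRef *opencl_ref = NULL;
    int err = av_hwdevice_ctx_create_derived(&opencl_ref,
                                             AV_HWDEVICE_TYPE_OPENCL,
                                             d3d11_ref, 0);
    /* ...while deriving a D3D11 device from the OpenCL one is not; for
     * frames, the hwmap filter's "reverse" option covers the cases where
     * mapping is wanted in the other direction. */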
> > >
> > > Yes, all true, but my point is something else: you can't have several
> > > contexts of the same type in a derivation chain.
> >
> > That's a consequence of unidirectionality + acyclicity, yes.
> >
> > > And that's exactly what this patch addresses: it makes sure that you'll
> > > get
> > > an existing context instead of ffmpeg trying to derive to a new hw device
> > > which doesn't work anyway.
> >
> > I'm still only seeing one case where this bizarre operation is wanted: the
> > ffmpeg utility user trying to get devices into the right place in their
> > filter graphs, who I still think would be better served by being able to set
> > the right device directly on hwmap rather than implicitly through searching
> > derivation chains.
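(For reference, the explicit route that exists in the ffmpeg utility today is
a single device for the whole filter graph via -filter_hw_device, roughly like
the sketch below; per-filter device selection would go beyond this:)

    $ ffmpeg -init_hw_device vaapi=va:/dev/dri/renderD128 -filter_hw_device va \
        -i input.mp4 -vf hwupload,sharpness_vaapi,hwdownload,format=nv12 \
        -f null -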
> >
> > > > > We have D3D surfaces, then we share them with OpenCL. Both *_1
> > > > > contexts provide access to the same data.
> > > > > Then we derive again "forward only" and we get a new D3D_2
> > > > > context. It is derived from OpenCL_1, so it must provide
> > > > > access to the same data as OpenCL_1 AND D3D_1.
> > > > >
> > > > > Now we have two different D3D contexts which are supposed to
> > > > > provide access to the same data!
> > > > >
> > > > >
> > > > > 1. This doesn't even work technically
> > > > > - neither from D3D (iirc)
> > > > > - nor from ffmpeg (not cleanly)
> > > > >
> > > > > 2. This doesn't make sense at all. There should always be
> > > > > only a single device context of a device type for the same
> > > > > resource
> > > > >
> > > > > 3. Why would somebody even want this - what kind of use case?
> > > >
> > > > The multiple derivation case is for when a single device doesn't work.
> > > > Typically that involves multiple separate components which don't want to
> > > > interact with the others, for example:
> > > >
> > > > * When something thread-unsafe might happen, so different threads need
> > > > separate instances to work with.
> > >
> > > Derivation means accessing shared resources (computing and memory), and
> > > you can't solve a concurrency problem by having two devices accessing
> > > the same resources - this makes it even worse (assuming a device would
> > > even allow this).
> >
> > Device derivation means making a compatible context of a different type on
> > the same physical device.
> >
> > Now that's probably intended because you are going to want to share some
> > particular resources, but exactly what can be shared and what is possible is
> > dependent on the setup.
> >
> > Similarly, any rules for concurrency are dependent on the setup - maybe you
> > can't do two specific things simultaneously in the same device context and
> > need two separate ones to solve it, but they still both want to share in
> > some
> > way with the different device context they were derived from.
> >
> > > > * When global options have to be set on a device, so a component which
> > > > does that needs its own instance to avoid interfering with anyone else.
> > >
> > > This is NOT derivation. This case is not affected.
> >
> > Suppose I have some DRM frames which come from somewhere (some hardware
> > decoder, say - doesn't matter).
> >
> > I want to do Vulkan things with some of the frames, so I call
> > av_hwdevice_ctx_create_derived_opts() to get a compatible Vulkan context.
> > Then I can map and do Vulkan things, yay!
> >
> > Later on, I want to do some independent Vulkan thing with my DRM frames. I
> > do the same operation again with different options (because my new thing
> > wants some new extensions, say). This returns a new Vulkan context and I
> > can
> > work with it completely independently, yay!
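(Sketching that flow against the API, assuming an existing DRM device
reference drm_ref; av_hwdevice_ctx_create_derived_opts is the options-taking
variant, and error checks are omitted:)

    AVBufferRef *vk_a = NULL, *vk_b = NULL;
    AVDictionary *opts = NULL;

    /* First Vulkan context derived from the DRM device. */
    av_hwdevice_ctx_create_derived(&vk_a, AV_HWDEVICE_TYPE_VULKAN, drm_ref, 0);

    /* Later: a second, independent Vulkan context from the same DRM device,
     * created with different device options (whatever the second consumer
     * needs would go into opts). */
    av_hwdevice_ctx_create_derived_opts(&vk_b, AV_HWDEVICE_TYPE_VULKAN,
                                        drm_ref, opts, 0);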
>
> You are describing the creation of a Vulkan context with other parameters
> with which you can work independently.
>
> That's not my understanding of deriving a context. I don't know the
> implementation in case of Vulkan, but in case of the others, deriving is
> about sharing resources. And when you share resources, you can't "work with
> it independently", so what you're talking about is not a scenario of deriving
> a context.
>
>
> To wrap things up a bit:
>
> - you want an approach which requires even more complicated filter
> command lines.
> I have understood that. It is a possible addition for filter command lines
> and I would even welcome somebody who would implement precise hw context
> selection for hwdownload and also for hwmap for (rare) cases where this
> might be needed. (that somebody won't be me, though)
>
> - but this is not what this patchset is about. It is about having things
> working nicely and automatically, as one would expect, instead of failing.
> This patchset only touches and changes behavior in those cases that are
> currently failing anyway.
>
> - Or can you name any realistic use case that this patchset would break?
> (if yes, let's please go through a specific example with pseudo code)
>
>
> Maybe Haihao's reply will be more convincing.
> It might also be interesting to hear what the Vulkan guys think about it
> (as there has been some feedback from that side earlier).
Hi Mark,
We want to provide a more user-friendly command line for sharing gfx memory
between QSV, VAAPI and other HW methods.

E.g. VAAPI provides sharpness_vaapi but QSV doesn't provide a corresponding
filter, so we want to use the sharpness_vaapi filter on the output from QSV
decoders. Currently the first command line below may work; however, the second
one can't, because the QSV device is not explicitly derived from a VAAPI
device, so ffmpeg fails to derive a VAAPI device from the QSV device (it can
derive the VAAPI device from the QSV device in the first case):
$ ffmpeg -init_hw_device vaapi=intel -init_hw_device qsv=qsv@intel \
    -hwaccel qsv -c:v h264_qsv -i input.mp4 \
    -vf hwmap=derive_device=vaapi,sharpness_vaapi -f null -

$ ffmpeg -hwaccel qsv -c:v h264_qsv -i input.mp4 \
    -vf hwmap=derive_device=vaapi,sharpness_vaapi -f null -
After applying Soft Works' patch, the two command lines above may work well.
In addition, we can use other HW methods on QSV output without copying gfx
memory, e.g.:

$ ffmpeg -hwaccel qsv -c:v h264_qsv -i input.mp4 \
    -vf "hwmap=derive_device=vaapi,format=vaapi,hwmap=derive_device=vulkan,scale_vulkan=w=1920:h=1080" \
    -f null -
Thanks
Haihao