From: Niklas Haas <ffmpeg@haasn.xyz>
To: ffmpeg-devel@ffmpeg.org
Cc: Niklas Haas <git@haasn.dev>
Subject: [FFmpeg-devel] [PATCH 2/2] avfilter/vf_scale2ref: switch to FFFrameSync
Date: Wed, 13 Mar 2024 13:24:25 +0100
Message-ID: <20240313122425.92457-2-ffmpeg@haasn.xyz> (raw)
In-Reply-To: <20240313122425.92457-1-ffmpeg@haasn.xyz>

From: Niklas Haas <git@haasn.dev>

This filter's existing design has a number of issues:

- There is no guarantee whatsoever about the order in which frames are
  pushed onto the main and ref links, since this is entirely dependent
  on the order in which downstream filters decide to request frames
  from their various inputs. As such, there is absolutely no
  synchronization for ref streams with dynamically changing resolutions
  (see e.g. fate/h264/reinit-*).

- For some (likely historical) reason, this filter passes its ref
  stream through as a second output, which is in principle completely
  unnecessary (complex filter graph users can just duplicate the input
  pin), but in practice is required to allow this filter to
  "eventually" see changes to the ref stream (see the first point). In
  particular, this means that if the user uses the "wrong" pin, this
  filter may break completely.

- The default filter activation function is fundamentally incapable of
  handling filters with multiple inputs cleanly, because doing so
  requires knowledge both of how these inputs should be internally
  ordered and of how to handle EOF conditions on either input (or
  downstream). Both of these are best left to filter-specific options.
  (See #10795 for the consequences of using the default activate with
  multiple inputs.)

Switching this filter to framesync fixes all three points (a minimal
sketch of the general framesync pattern follows the list below):

- ff_framesync_activate() correctly handles multiple inputs and EOF
  conditions (and is configurable with the framesync-specific options)
- framesync only supports a single output, so we can (indeed must) drop
  the redundant ref output stream
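
For reference, the dual-input framesync pattern that the diff below
adopts looks roughly like the following. This is a minimal,
illustrative sketch (MyContext and on_event are hypothetical names, not
the actual code from this patch); the ff_framesync_* helpers are the
ones declared in libavfilter/framesync.h:

#include "avfilter.h"
#include "framesync.h"
#include "internal.h"

typedef struct MyContext {
    const AVClass *class;
    FFFrameSync fs;          // owns the input synchronization state
} MyContext;

// Called by framesync whenever a synchronized pair of frames is ready.
static int on_event(FFFrameSync *fs)
{
    AVFilterContext *ctx = fs->parent;
    AVFrame *main_frame, *ref_frame;
    int ret = ff_framesync_dualinput_get(fs, &main_frame, &ref_frame);
    if (ret < 0)
        return ret;
    // main_frame is owned by us; ref_frame (which may be NULL) is only
    // borrowed from framesync and must not be freed here.
    // ... process main_frame, consulting ref_frame's properties ...
    return ff_filter_frame(ctx->outputs[0], main_frame);
}

static int config_output(AVFilterLink *outlink)
{
    AVFilterContext *ctx = outlink->src;
    MyContext *s = ctx->priv;
    int ret = ff_framesync_init_dualinput(&s->fs, ctx);
    if (ret < 0)
        return ret;
    s->fs.on_event = on_event;
    return ff_framesync_configure(&s->fs);
}

// The activate callback simply defers to framesync, which handles input
// ordering and EOF conditions on both inputs.
static int activate(AVFilterContext *ctx)
{
    MyContext *s = ctx->priv;
    return ff_framesync_activate(&s->fs);
}

static void uninit(AVFilterContext *ctx)
{
    MyContext *s = ctx->priv;
    ff_framesync_uninit(&s->fs);
}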

Update the documentation, changelog and tests to match the new usage
pattern.
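
As an illustration of the new usage (hypothetical file names; "shortest"
is one of the generic framesync options that scale2ref now exposes):

    ffmpeg -i main.mp4 -i ref.mp4 \
      -filter_complex "[0:v][1:v]scale2ref=w=iw/4:h=ow/mdar:shortest=1[out]" \
      -map "[out]" scaled.mp4

Note the single output pad: graphs that previously consumed the second
(pass-through) ref output should split the ref stream upstream instead.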

Fixes: https://trac.ffmpeg.org/ticket/10795
---
 Changelog                                |   2 +
 doc/filters.texi                         |  10 +-
 libavfilter/vf_scale.c                   | 130 ++++++++++++-----------
 tests/filtergraphs/scale2ref_keep_aspect |   3 +-
 4 files changed, 76 insertions(+), 69 deletions(-)

diff --git a/Changelog b/Changelog
index 069b8274489..bacda2524ea 100644
--- a/Changelog
+++ b/Changelog
@@ -32,6 +32,8 @@ version <next>:
 - DVD-Video demuxer, powered by libdvdnav and libdvdread
 - ffprobe -show_stream_groups option
 - ffprobe (with -export_side_data film_grain) now prints film grain metadata
+- scale2ref now only has a single output stream, and supports the framesync
+  options
 
 
 version 6.1:
diff --git a/doc/filters.texi b/doc/filters.texi
index e0436a5755c..07e8136adb3 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -21555,9 +21555,9 @@ Deprecated, do not use.
 Scale (resize) the input video, based on a reference video.
 
 See the scale filter for available options, scale2ref supports the same but
-uses the reference video instead of the main input as basis. scale2ref also
-supports the following additional constants for the @option{w} and
-@option{h} options:
+uses the reference video instead of the main input as basis. This filter also
+supports the @ref{framesync} options. In addition, scale2ref also supports the
+following additional constants for the @option{w} and @option{h} options:
 
 @table @var
 @item main_w
@@ -21600,13 +21600,13 @@ Only available with @code{eval=frame}.
 @item
 Scale a subtitle stream (b) to match the main video (a) in size before overlaying
 @example
-'scale2ref[b][a];[a][b]overlay'
+'[b][a]scale2ref[sub];[a][sub]overlay'
 @end example
 
 @item
 Scale a logo to 1/10th the height of a video, while preserving its display aspect ratio.
 @example
-[logo-in][video-in]scale2ref=w=oh*mdar:h=ih/10[logo-out][video-out]
+[logo-in][video]scale2ref=w=oh*mdar:h=ih/10[logo-out]
 @end example
 @end itemize
 
diff --git a/libavfilter/vf_scale.c b/libavfilter/vf_scale.c
index fc3b5a91e60..d4173b63097 100644
--- a/libavfilter/vf_scale.c
+++ b/libavfilter/vf_scale.c
@@ -29,6 +29,7 @@
 
 #include "avfilter.h"
 #include "formats.h"
+#include "framesync.h"
 #include "internal.h"
 #include "scale_eval.h"
 #include "video.h"
@@ -114,6 +115,7 @@ typedef struct ScaleContext {
     struct SwsContext *isws[2]; ///< software scaler context for interlaced material
     // context used for forwarding options to sws
     struct SwsContext *sws_opts;
+    FFFrameSync fs; ///< for scale2ref
 
     /**
      * New dimensions. Special values are:
@@ -288,6 +290,9 @@ static av_cold int preinit(AVFilterContext *ctx)
     if (ret < 0)
         return ret;
 
+    if (ctx->filter == &ff_vf_scale2ref)
+        ff_framesync_preinit(&scale->fs);
+
     return 0;
 }
 
@@ -303,6 +308,8 @@ static const int sws_colorspaces[] = {
     -1
 };
 
+static int do_scale2ref(FFFrameSync *fs);
+
 static av_cold int init(AVFilterContext *ctx)
 {
     ScaleContext *scale = ctx->priv;
@@ -380,6 +387,7 @@ static av_cold int init(AVFilterContext *ctx)
     if (!threads)
         av_opt_set_int(scale->sws_opts, "threads", ff_filter_get_nb_threads(ctx), 0);
 
+    scale->fs.on_event = do_scale2ref;
     return 0;
 }
 
@@ -389,6 +397,7 @@ static av_cold void uninit(AVFilterContext *ctx)
     av_expr_free(scale->w_pexpr);
     av_expr_free(scale->h_pexpr);
     scale->w_pexpr = scale->h_pexpr = NULL;
+    ff_framesync_uninit(&scale->fs);
     sws_freeContext(scale->sws_opts);
     sws_freeContext(scale->sws);
     sws_freeContext(scale->isws[0]);
@@ -678,35 +687,16 @@ static int config_props(AVFilterLink *outlink)
            flags_val);
     av_freep(&flags_val);
 
-    return 0;
-
-fail:
-    return ret;
-}
-
-static int config_props_ref(AVFilterLink *outlink)
-{
-    AVFilterLink *inlink = outlink->src->inputs[1];
-
-    outlink->w = inlink->w;
-    outlink->h = inlink->h;
-    outlink->sample_aspect_ratio = inlink->sample_aspect_ratio;
-    outlink->time_base = inlink->time_base;
-    outlink->frame_rate = inlink->frame_rate;
-    outlink->colorspace = inlink->colorspace;
-    outlink->color_range = inlink->color_range;
+    if (ctx->filter != &ff_vf_scale2ref)
+        return 0;
 
-    return 0;
-}
+    if ((ret = ff_framesync_init_dualinput(&scale->fs, ctx)) < 0)
+        return ret;
 
-static int request_frame(AVFilterLink *outlink)
-{
-    return ff_request_frame(outlink->src->inputs[0]);
-}
+    return ff_framesync_configure(&scale->fs);
 
-static int request_frame_ref(AVFilterLink *outlink)
-{
-    return ff_request_frame(outlink->src->inputs[1]);
+fail:
+    return ret;
 }
 
 static void frame_offset(AVFrame *frame, int dir, int is_pal)
@@ -909,43 +899,49 @@ static int filter_frame(AVFilterLink *link, AVFrame *in)
     return ret;
 }
 
-static int filter_frame_ref(AVFilterLink *link, AVFrame *in)
+static int do_scale2ref(FFFrameSync *fs)
 {
-    ScaleContext *scale = link->dst->priv;
-    AVFilterLink *outlink = link->dst->outputs[1];
-    int frame_changed;
+    AVFilterContext *ctx = fs->parent;
+    ScaleContext *scale = ctx->priv;
+    AVFilterLink *reflink = ctx->inputs[1];
+    AVFilterLink *outlink = ctx->outputs[0];
+    AVFrame *main, *ref, *out;
+    int ret;
 
-    frame_changed = in->width  != link->w ||
-                    in->height != link->h ||
-                    in->format != link->format ||
-                    in->sample_aspect_ratio.den != link->sample_aspect_ratio.den ||
-                    in->sample_aspect_ratio.num != link->sample_aspect_ratio.num ||
-                    in->colorspace != link->colorspace ||
-                    in->color_range != link->color_range;
+    ret = ff_framesync_dualinput_get(fs, &main, &ref);
+    if (ret < 0)
+        return ret;
 
-    if (frame_changed) {
-        link->format = in->format;
-        link->w = in->width;
-        link->h = in->height;
-        link->sample_aspect_ratio.num = in->sample_aspect_ratio.num;
-        link->sample_aspect_ratio.den = in->sample_aspect_ratio.den;
-        link->colorspace = in->colorspace;
-        link->color_range = in->color_range;
+    if (ref) {
+        reflink->format = ref->format;
+        reflink->w = ref->width;
+        reflink->h = ref->height;
+        reflink->sample_aspect_ratio.num = ref->sample_aspect_ratio.num;
+        reflink->sample_aspect_ratio.den = ref->sample_aspect_ratio.den;
+        reflink->colorspace = ref->colorspace;
+        reflink->color_range = ref->color_range;
 
-        config_props_ref(outlink);
-    }
+        ret = config_props(outlink);
+        if (ret < 0)
+            return ret;
 
-    if (scale->eval_mode == EVAL_MODE_FRAME) {
-        scale->var_values[VAR_N] = link->frame_count_out;
-        scale->var_values[VAR_T] = TS2T(in->pts, link->time_base);
+        if (scale->eval_mode == EVAL_MODE_FRAME) {
+            scale->var_values[VAR_N] = reflink->frame_count_out;
+            scale->var_values[VAR_T] = TS2T(ref->pts, reflink->time_base);
 #if FF_API_FRAME_PKT
 FF_DISABLE_DEPRECATION_WARNINGS
-        scale->var_values[VAR_POS] = in->pkt_pos == -1 ? NAN : in->pkt_pos;
+            scale->var_values[VAR_POS] = ref->pkt_pos == -1 ? NAN : ref->pkt_pos;
 FF_ENABLE_DEPRECATION_WARNINGS
 #endif
+        }
+    }
+
+    ret = scale_frame(ctx->inputs[0], main, &out);
+    if (out) {
+        return ff_filter_frame(outlink, out);
     }
 
-    return ff_filter_frame(outlink, in);
+    return ret;
 }
 
 static int process_command(AVFilterContext *ctx, const char *cmd, const char *args,
@@ -973,9 +969,25 @@ static int process_command(AVFilterContext *ctx, const char *cmd, const char *ar
     return ret;
 }
 
+static int activate(AVFilterContext *ctx)
+{
+    ScaleContext *scale = ctx->priv;
+    return ff_framesync_activate(&scale->fs);
+}
+
 static const AVClass *child_class_iterate(void **iter)
 {
-    const AVClass *c = *iter ? NULL : sws_get_class();
+    void *tmp = NULL;
+    const AVClass *sws = sws_get_class();
+    const AVClass *fs = ff_framesync_child_class_iterate(tmp);
+    const AVClass *c;
+    if (!*iter) {
+        c = sws;
+    } else if (*iter == (void*)(uintptr_t)sws) {
+        c = fs;
+    } else {
+        c = NULL;
+    }
     *iter = (void*)(uintptr_t)c;
     return c;
 }
@@ -985,6 +997,8 @@ static void *child_next(void *obj, void *prev)
     ScaleContext *s = obj;
     if (!prev)
         return s->sws_opts;
+    if (prev == s->sws_opts)
+        return &s->fs;
     return NULL;
 }
 
@@ -1082,12 +1096,10 @@ static const AVFilterPad avfilter_vf_scale2ref_inputs[] = {
     {
         .name         = "default",
         .type         = AVMEDIA_TYPE_VIDEO,
-        .filter_frame = filter_frame,
     },
     {
         .name         = "ref",
         .type         = AVMEDIA_TYPE_VIDEO,
-        .filter_frame = filter_frame_ref,
     },
 };
 
@@ -1096,13 +1108,6 @@ static const AVFilterPad avfilter_vf_scale2ref_outputs[] = {
         .name         = "default",
         .type         = AVMEDIA_TYPE_VIDEO,
         .config_props = config_props,
-        .request_frame= request_frame,
-    },
-    {
-        .name         = "ref",
-        .type         = AVMEDIA_TYPE_VIDEO,
-        .config_props = config_props_ref,
-        .request_frame= request_frame_ref,
     },
 };
 
@@ -1117,5 +1122,6 @@ const AVFilter ff_vf_scale2ref = {
     FILTER_INPUTS(avfilter_vf_scale2ref_inputs),
     FILTER_OUTPUTS(avfilter_vf_scale2ref_outputs),
     FILTER_QUERY_FUNC(query_formats),
+    .activate        = activate,
     .process_command = process_command,
 };
diff --git a/tests/filtergraphs/scale2ref_keep_aspect b/tests/filtergraphs/scale2ref_keep_aspect
index f407460ec7c..d63968666a8 100644
--- a/tests/filtergraphs/scale2ref_keep_aspect
+++ b/tests/filtergraphs/scale2ref_keep_aspect
@@ -1,5 +1,4 @@
 sws_flags=+accurate_rnd+bitexact;
 testsrc=size=320x240 [main];
 testsrc=size=640x360 [ref];
-[main][ref] scale2ref=iw/4:ow/mdar [main][ref];
-[ref] nullsink
+[main][ref] scale2ref=iw/4:ow/mdar [main]
-- 
2.44.0

