* [FFmpeg-devel] [PATCH v2] ffmpeg CLI multithreading
@ 2023-11-23 19:14 Anton Khirnov
2023-11-23 19:14 ` [FFmpeg-devel] [PATCH 01/13] lavfi/buffersink: avoid leaking peeked_frame on uninit Anton Khirnov
` (12 more replies)
0 siblings, 13 replies; 49+ messages in thread
From: Anton Khirnov @ 2023-11-23 19:14 UTC (permalink / raw)
To: ffmpeg-devel
Hi,
this is the updated version of the CLI multithreading set. All issues
reported in the previous version should be fixed.
The -fix_sub_duration_heartbeat option is enabled again, thanks to JEEB
for testing.
I've now merged the actual conversion patches into the last one, so
every patch of the set should work in isolation. If you want to review
conversions of individual components, see branch 'ffmpeg_threading' in
git://git.khirnov.net/libav
Cheers,
--
Anton Khirnov
_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
To unsubscribe, visit link above, or email
ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".
^ permalink raw reply [flat|nested] 49+ messages in thread
* [FFmpeg-devel] [PATCH 01/13] lavfi/buffersink: avoid leaking peeked_frame on uninit
2023-11-23 19:14 [FFmpeg-devel] [PATCH v2] ffmpeg CLI multithreading Anton Khirnov
@ 2023-11-23 19:14 ` Anton Khirnov
2023-11-23 22:16 ` Paul B Mahol
2023-11-27 9:45 ` Nicolas George
2023-11-23 19:14 ` [FFmpeg-devel] [PATCH 02/13] fftools/ffmpeg_filter: make sub2video heartbeat more robust Anton Khirnov
` (11 subsequent siblings)
12 siblings, 2 replies; 49+ messages in thread
From: Anton Khirnov @ 2023-11-23 19:14 UTC (permalink / raw)
To: ffmpeg-devel
---
libavfilter/buffersink.c | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/libavfilter/buffersink.c b/libavfilter/buffersink.c
index ca2af1bc07..3da3331159 100644
--- a/libavfilter/buffersink.c
+++ b/libavfilter/buffersink.c
@@ -164,6 +164,13 @@ static av_cold int common_init(AVFilterContext *ctx)
return 0;
}
+static void uninit(AVFilterContext *ctx)
+{
+ BufferSinkContext *buf = ctx->priv;
+
+ av_frame_free(&buf->peeked_frame);
+}
+
static int activate(AVFilterContext *ctx)
{
BufferSinkContext *buf = ctx->priv;
@@ -385,6 +392,7 @@ const AVFilter ff_vsink_buffer = {
.priv_size = sizeof(BufferSinkContext),
.priv_class = &buffersink_class,
.init = common_init,
+ .uninit = uninit,
.activate = activate,
FILTER_INPUTS(ff_video_default_filterpad),
.outputs = NULL,
@@ -397,6 +405,7 @@ const AVFilter ff_asink_abuffer = {
.priv_class = &abuffersink_class,
.priv_size = sizeof(BufferSinkContext),
.init = common_init,
+ .uninit = uninit,
.activate = activate,
FILTER_INPUTS(ff_audio_default_filterpad),
.outputs = NULL,
--
2.42.0
* [FFmpeg-devel] [PATCH 02/13] fftools/ffmpeg_filter: make sub2video heartbeat more robust
2023-11-23 19:14 [FFmpeg-devel] [PATCH v2] ffmpeg CLI multithreading Anton Khirnov
2023-11-23 19:14 ` [FFmpeg-devel] [PATCH 01/13] lavfi/buffersink: avoid leaking peeked_frame on uninit Anton Khirnov
@ 2023-11-23 19:14 ` Anton Khirnov
2023-11-27 9:40 ` Nicolas George
2023-11-23 19:14 ` [FFmpeg-devel] [PATCH 03/13] fftools/ffmpeg_filter: track input/output index in {Input, Output}FilterPriv Anton Khirnov
` (10 subsequent siblings)
12 siblings, 1 reply; 49+ messages in thread
From: Anton Khirnov @ 2023-11-23 19:14 UTC (permalink / raw)
To: ffmpeg-devel
Avoid making decisions based on current graph input state, which makes
the output dependent on the order in which the frames from different
inputs are interleaved.
Makes the output of fate-filter-overlay-dvdsub-2397 more correct - the
subtitle appears two frames later, which is closer to its PTS as stored
in the file.
---
fftools/ffmpeg_filter.c | 3 +--
tests/ref/fate/filter-overlay-dvdsub-2397 | 4 ++--
tests/ref/fate/sub2video | 8 +++++---
3 files changed, 8 insertions(+), 7 deletions(-)
diff --git a/fftools/ffmpeg_filter.c b/fftools/ffmpeg_filter.c
index b7da105141..b6fbc5b195 100644
--- a/fftools/ffmpeg_filter.c
+++ b/fftools/ffmpeg_filter.c
@@ -2274,8 +2274,7 @@ void ifilter_sub2video_heartbeat(InputFilter *ifilter, int64_t pts, AVRational t
or if we need to initialize the system, update the
overlayed subpicture and its start/end times */
sub2video_update(ifp, pts2 + 1, NULL);
-
- if (av_buffersrc_get_nb_failed_requests(ifp->filter))
+ else
sub2video_push_ref(ifp, pts2);
}
diff --git a/tests/ref/fate/filter-overlay-dvdsub-2397 b/tests/ref/fate/filter-overlay-dvdsub-2397
index 7df4f50776..45c026f540 100644
--- a/tests/ref/fate/filter-overlay-dvdsub-2397
+++ b/tests/ref/fate/filter-overlay-dvdsub-2397
@@ -489,12 +489,12 @@
1, 3877, 3877, 10, 2013, 0x95a39f9c
1, 3887, 3887, 10, 2013, 0x4f7ea123
1, 3897, 3897, 10, 2013, 0x9efb9ba1
-0, 117, 117, 1, 518400, 0xbf8523da
+0, 117, 117, 1, 518400, 0x61e0f688
1, 3907, 3907, 10, 2013, 0xf395b2cd
1, 3917, 3917, 10, 2013, 0x261a881e
1, 3927, 3927, 10, 2013, 0x7f2d9f72
1, 3937, 3937, 10, 2013, 0x0105b38d
-0, 118, 118, 1, 518400, 0x41890ed6
+0, 118, 118, 1, 518400, 0xa47de755
1, 3952, 3952, 10, 2013, 0x0e5db67e
1, 3962, 3962, 10, 2013, 0xfc9baf97
0, 119, 119, 1, 518400, 0x588534fc
diff --git a/tests/ref/fate/sub2video b/tests/ref/fate/sub2video
index 80abe9c905..76347322f3 100644
--- a/tests/ref/fate/sub2video
+++ b/tests/ref/fate/sub2video
@@ -68,7 +68,8 @@
0, 258, 258, 1, 518400, 0x34cdddee
0, 269, 269, 1, 518400, 0xbab197ea
1, 53910000, 53910000, 2696000, 2095, 0x61bb15ed
-0, 270, 270, 1, 518400, 0x4db4ce51
+0, 270, 270, 1, 518400, 0xbab197ea
+0, 271, 271, 1, 518400, 0x4db4ce51
0, 283, 283, 1, 518400, 0xbab197ea
1, 56663000, 56663000, 1262000, 1013, 0xc9ae89b7
0, 284, 284, 1, 518400, 0xe6bc0ea9
@@ -137,7 +138,7 @@
1, 168049000, 168049000, 1900000, 1312, 0x0bf20e8d
0, 850, 850, 1, 518400, 0xbab197ea
1, 170035000, 170035000, 1524000, 1279, 0xb6c2dafe
-0, 851, 851, 1, 518400, 0x8780239e
+0, 851, 851, 1, 518400, 0xbab197ea
0, 858, 858, 1, 518400, 0xbab197ea
0, 861, 861, 1, 518400, 0x6eb72347
1, 172203000, 172203000, 1695000, 1826, 0x9a1ac769
@@ -161,7 +162,8 @@
0, 976, 976, 1, 518400, 0x923d1ce7
0, 981, 981, 1, 518400, 0xbab197ea
1, 196361000, 196361000, 1524000, 1715, 0x695ca41e
-0, 982, 982, 1, 518400, 0x6e652cd2
+0, 982, 982, 1, 518400, 0xbab197ea
+0, 983, 983, 1, 518400, 0x6e652cd2
0, 989, 989, 1, 518400, 0xbab197ea
1, 197946000, 197946000, 1160000, 789, 0xc63a189e
0, 990, 990, 1, 518400, 0x25113966
--
2.42.0
* [FFmpeg-devel] [PATCH 03/13] fftools/ffmpeg_filter: track input/output index in {Input, Output}FilterPriv
2023-11-23 19:14 [FFmpeg-devel] [PATCH v2] ffmpeg CLI multithreading Anton Khirnov
2023-11-23 19:14 ` [FFmpeg-devel] [PATCH 01/13] lavfi/buffersink: avoid leaking peeked_frame on uninit Anton Khirnov
2023-11-23 19:14 ` [FFmpeg-devel] [PATCH 02/13] fftools/ffmpeg_filter: make sub2video heartbeat more robust Anton Khirnov
@ 2023-11-23 19:14 ` Anton Khirnov
2023-11-23 19:14 ` [FFmpeg-devel] [PATCH 04/13] fftools/ffmpeg: make sure FrameData is writable when we modify it Anton Khirnov
` (9 subsequent siblings)
12 siblings, 0 replies; 49+ messages in thread
From: Anton Khirnov @ 2023-11-23 19:14 UTC (permalink / raw)
To: ffmpeg-devel
Will be useful in following commits.
---
fftools/ffmpeg_filter.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/fftools/ffmpeg_filter.c b/fftools/ffmpeg_filter.c
index b6fbc5b195..f942c97c40 100644
--- a/fftools/ffmpeg_filter.c
+++ b/fftools/ffmpeg_filter.c
@@ -74,6 +74,8 @@ static const FilterGraphPriv *cfgp_from_cfg(const FilterGraph *fg)
typedef struct InputFilterPriv {
InputFilter ifilter;
+ int index;
+
AVFilterContext *filter;
InputStream *ist;
@@ -162,6 +164,8 @@ typedef struct FPSConvContext {
typedef struct OutputFilterPriv {
OutputFilter ofilter;
+ int index;
+
AVFilterContext *filter;
/* desired output stream properties */
@@ -594,6 +598,7 @@ static OutputFilter *ofilter_alloc(FilterGraph *fg)
ofilter = &ofp->ofilter;
ofilter->graph = fg;
ofp->format = -1;
+ ofp->index = fg->nb_outputs - 1;
ofilter->last_pts = AV_NOPTS_VALUE;
return ofilter;
@@ -787,6 +792,7 @@ static InputFilter *ifilter_alloc(FilterGraph *fg)
if (!ifp->frame)
return NULL;
+ ifp->index = fg->nb_inputs - 1;
ifp->format = -1;
ifp->fallback.format = -1;
--
2.42.0
* [FFmpeg-devel] [PATCH 04/13] fftools/ffmpeg: make sure FrameData is writable when we modify it
2023-11-23 19:14 [FFmpeg-devel] [PATCH v2] ffmpeg CLI multithreading Anton Khirnov
` (2 preceding siblings ...)
2023-11-23 19:14 ` [FFmpeg-devel] [PATCH 03/13] fftools/ffmpeg_filter: track input/output index in {Input, Output}FilterPriv Anton Khirnov
@ 2023-11-23 19:14 ` Anton Khirnov
2023-11-23 19:15 ` [FFmpeg-devel] [PATCH 05/13] fftools/ffmpeg_filter: move filtering to a separate thread Anton Khirnov
` (8 subsequent siblings)
12 siblings, 0 replies; 49+ messages in thread
From: Anton Khirnov @ 2023-11-23 19:14 UTC (permalink / raw)
To: ffmpeg-devel
Also, add a function that returns const FrameData* for cases that only
read from it.
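[Editor's note: the change relies on av_buffer_make_writable()'s copy-on-write semantics. A minimal standalone sketch of the same ensure-writable idea, using a hypothetical Buf type rather than FFmpeg's actual AVBufferRef API; names are illustrative only:]

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative refcounted buffer, loosely modeled on AVBufferRef semantics. */
typedef struct Buf {
    int            refcount;
    size_t         size;
    unsigned char *data;
} Buf;

static Buf *buf_alloc(size_t size)
{
    Buf *b = malloc(sizeof(*b));
    if (!b)
        return NULL;
    b->data = calloc(1, size);
    if (!b->data) {
        free(b);
        return NULL;
    }
    b->refcount = 1;
    b->size     = size;
    return b;
}

static Buf *buf_ref(Buf *b)
{
    b->refcount++;
    return b;
}

static void buf_unref(Buf **pb)
{
    Buf *b = *pb;
    *pb = NULL;
    if (b && --b->refcount == 0) {
        free(b->data);
        free(b);
    }
}

/* Analogous to av_buffer_make_writable(): if the buffer is shared,
 * replace *pb with a private copy so the caller may modify it without
 * affecting other holders of a reference. */
static int buf_make_writable(Buf **pb)
{
    Buf *b = *pb, *copy;

    if (b->refcount == 1)
        return 0;              /* already exclusively owned, nothing to do */

    copy = buf_alloc(b->size);
    if (!copy)
        return -1;
    memcpy(copy->data, b->data, b->size);

    buf_unref(pb);             /* drop our reference to the shared buffer */
    *pb = copy;
    return 0;
}
```

With threads involved, a frame's opaque_ref may be referenced from both sides of a queue; ensuring writability before mutation is what keeps those references from observing each other's changes.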
---
fftools/ffmpeg.c | 21 +++++++++++++++++----
fftools/ffmpeg.h | 2 ++
fftools/ffmpeg_filter.c | 4 ++--
3 files changed, 21 insertions(+), 6 deletions(-)
diff --git a/fftools/ffmpeg.c b/fftools/ffmpeg.c
index cdb16ef90b..61fcda2526 100644
--- a/fftools/ffmpeg.c
+++ b/fftools/ffmpeg.c
@@ -427,21 +427,34 @@ InputStream *ist_iter(InputStream *prev)
return NULL;
}
-FrameData *frame_data(AVFrame *frame)
+static int frame_data_ensure(AVFrame *frame, int writable)
{
if (!frame->opaque_ref) {
FrameData *fd;
frame->opaque_ref = av_buffer_allocz(sizeof(*fd));
if (!frame->opaque_ref)
- return NULL;
+ return AVERROR(ENOMEM);
fd = (FrameData*)frame->opaque_ref->data;
fd->dec.frame_num = UINT64_MAX;
fd->dec.pts = AV_NOPTS_VALUE;
- }
+ } else if (writable)
+ return av_buffer_make_writable(&frame->opaque_ref);
- return (FrameData*)frame->opaque_ref->data;
+ return 0;
+}
+
+FrameData *frame_data(AVFrame *frame)
+{
+ int ret = frame_data_ensure(frame, 1);
+ return ret < 0 ? NULL : (FrameData*)frame->opaque_ref->data;
+}
+
+const FrameData *frame_data_c(AVFrame *frame)
+{
+ int ret = frame_data_ensure(frame, 0);
+ return ret < 0 ? NULL : (const FrameData*)frame->opaque_ref->data;
}
void remove_avoptions(AVDictionary **a, AVDictionary *b)
diff --git a/fftools/ffmpeg.h b/fftools/ffmpeg.h
index 41935d39d5..1f11a2f002 100644
--- a/fftools/ffmpeg.h
+++ b/fftools/ffmpeg.h
@@ -726,6 +726,8 @@ int subtitle_wrap_frame(AVFrame *frame, AVSubtitle *subtitle, int copy);
*/
FrameData *frame_data(AVFrame *frame);
+const FrameData *frame_data_c(AVFrame *frame);
+
int ifilter_send_frame(InputFilter *ifilter, AVFrame *frame, int keep_reference);
int ifilter_send_eof(InputFilter *ifilter, int64_t pts, AVRational tb);
int ifilter_sub2video(InputFilter *ifilter, const AVFrame *frame);
diff --git a/fftools/ffmpeg_filter.c b/fftools/ffmpeg_filter.c
index f942c97c40..69c28a6b2b 100644
--- a/fftools/ffmpeg_filter.c
+++ b/fftools/ffmpeg_filter.c
@@ -1859,9 +1859,9 @@ static int choose_out_timebase(OutputFilterPriv *ofp, AVFrame *frame)
FPSConvContext *fps = &ofp->fps;
AVRational tb = (AVRational){ 0, 0 };
AVRational fr;
- FrameData *fd;
+ const FrameData *fd;
- fd = frame_data(frame);
+ fd = frame_data_c(frame);
// apply -enc_time_base
if (ofp->enc_timebase.num == ENC_TIME_BASE_DEMUX &&
--
2.42.0
* [FFmpeg-devel] [PATCH 05/13] fftools/ffmpeg_filter: move filtering to a separate thread
2023-11-23 19:14 [FFmpeg-devel] [PATCH v2] ffmpeg CLI multithreading Anton Khirnov
` (3 preceding siblings ...)
2023-11-23 19:14 ` [FFmpeg-devel] [PATCH 04/13] fftools/ffmpeg: make sure FrameData is writable when we modify it Anton Khirnov
@ 2023-11-23 19:15 ` Anton Khirnov
2023-11-24 22:56 ` Michael Niedermayer
2023-11-23 19:15 ` [FFmpeg-devel] [PATCH 06/13] fftools/ffmpeg_filter: buffer sub2video heartbeat frames like other frames Anton Khirnov
` (7 subsequent siblings)
12 siblings, 1 reply; 49+ messages in thread
From: Anton Khirnov @ 2023-11-23 19:15 UTC (permalink / raw)
To: ffmpeg-devel
As previously for decoding, this is merely "scaffolding" for moving to a
fully threaded architecture and does not yet make filtering truly
parallel - the main thread will currently wait for the filtering thread
to finish its work before continuing. That will change in future commits
after encoders are also moved to threads and a thread-aware scheduler is
added.
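[Editor's note: the handoff between the main thread and the filtering thread goes through blocking thread queues. A minimal single-slot sketch of that synchronization shape in plain pthreads; FFmpeg's actual ThreadQueue is more general (multiple streams, object pools, send/receive-finish per stream), and all names below are illustrative:]

```c
#include <pthread.h>

/* One-slot blocking queue: the producer waits until the consumer has
 * taken the previous item, mirroring "main thread waits for the
 * filtering thread to finish its work before continuing". */
typedef struct Slot {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    int has_item;
    int finished;
    int item;          /* stands in for an AVFrame */
} Slot;

static void slot_init(Slot *s)
{
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->cond, NULL);
    s->has_item = 0;
    s->finished = 0;
}

/* producer side: blocks while the slot is full; -1 once closed */
static int slot_send(Slot *s, int item)
{
    pthread_mutex_lock(&s->lock);
    while (s->has_item && !s->finished)
        pthread_cond_wait(&s->cond, &s->lock);
    if (s->finished) {
        pthread_mutex_unlock(&s->lock);
        return -1;
    }
    s->item     = item;
    s->has_item = 1;
    pthread_cond_broadcast(&s->cond);
    pthread_mutex_unlock(&s->lock);
    return 0;
}

/* consumer side: blocks until an item arrives; -1 once closed and drained */
static int slot_receive(Slot *s, int *item)
{
    pthread_mutex_lock(&s->lock);
    while (!s->has_item && !s->finished)
        pthread_cond_wait(&s->cond, &s->lock);
    if (!s->has_item) {
        pthread_mutex_unlock(&s->lock);
        return -1;
    }
    *item = s->item;
    s->has_item = 0;
    pthread_cond_broadcast(&s->cond);
    pthread_mutex_unlock(&s->lock);
    return 0;
}

static void slot_finish(Slot *s)
{
    pthread_mutex_lock(&s->lock);
    s->finished = 1;
    pthread_cond_broadcast(&s->cond);
    pthread_mutex_unlock(&s->lock);
}

/* worker thread: "filters" items (here, by doubling) and sends results back */
typedef struct Ctx { Slot in, out; } Ctx;

static void *worker(void *arg)
{
    Ctx *c = arg;
    int v;

    while (slot_receive(&c->in, &v) == 0)
        slot_send(&c->out, v * 2);
    slot_finish(&c->out);  /* propagate EOF downstream */
    return NULL;
}
```

Closing the input queue is how the main thread signals EOF; the worker then closes its output queue, which is the same shutdown ordering fg_thread_stop() follows in the patch.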
---
fftools/ffmpeg.h | 9 +-
fftools/ffmpeg_dec.c | 39 +-
fftools/ffmpeg_filter.c | 825 ++++++++++++++++++++++++++++++++++------
3 files changed, 730 insertions(+), 143 deletions(-)
diff --git a/fftools/ffmpeg.h b/fftools/ffmpeg.h
index 1f11a2f002..f50222472c 100644
--- a/fftools/ffmpeg.h
+++ b/fftools/ffmpeg.h
@@ -80,6 +80,14 @@ enum HWAccelID {
HWACCEL_GENERIC,
};
+enum FrameOpaque {
+ FRAME_OPAQUE_REAP_FILTERS = 1,
+ FRAME_OPAQUE_CHOOSE_INPUT,
+ FRAME_OPAQUE_SUB_HEARTBEAT,
+ FRAME_OPAQUE_EOF,
+ FRAME_OPAQUE_SEND_COMMAND,
+};
+
typedef struct HWDevice {
const char *name;
enum AVHWDeviceType type;
@@ -730,7 +738,6 @@ const FrameData *frame_data_c(AVFrame *frame);
int ifilter_send_frame(InputFilter *ifilter, AVFrame *frame, int keep_reference);
int ifilter_send_eof(InputFilter *ifilter, int64_t pts, AVRational tb);
-int ifilter_sub2video(InputFilter *ifilter, const AVFrame *frame);
void ifilter_sub2video_heartbeat(InputFilter *ifilter, int64_t pts, AVRational tb);
/**
diff --git a/fftools/ffmpeg_dec.c b/fftools/ffmpeg_dec.c
index 517d6b3ced..b60bad1220 100644
--- a/fftools/ffmpeg_dec.c
+++ b/fftools/ffmpeg_dec.c
@@ -147,11 +147,12 @@ fail:
static int send_frame_to_filters(InputStream *ist, AVFrame *decoded_frame)
{
- int i, ret;
+ int i, ret = 0;
- av_assert1(ist->nb_filters > 0); /* ensure ret is initialized */
for (i = 0; i < ist->nb_filters; i++) {
- ret = ifilter_send_frame(ist->filters[i], decoded_frame, i < ist->nb_filters - 1);
+ ret = ifilter_send_frame(ist->filters[i], decoded_frame,
+ i < ist->nb_filters - 1 ||
+ ist->dec->type == AVMEDIA_TYPE_SUBTITLE);
if (ret == AVERROR_EOF)
ret = 0; /* ignore */
if (ret < 0) {
@@ -380,15 +381,6 @@ static int video_frame_process(InputStream *ist, AVFrame *frame)
return 0;
}
-static void sub2video_flush(InputStream *ist)
-{
- for (int i = 0; i < ist->nb_filters; i++) {
- int ret = ifilter_sub2video(ist->filters[i], NULL);
- if (ret != AVERROR_EOF && ret < 0)
- av_log(NULL, AV_LOG_WARNING, "Flush the frame error.\n");
- }
-}
-
static int process_subtitle(InputStream *ist, AVFrame *frame)
{
Decoder *d = ist->decoder;
@@ -426,14 +418,9 @@ static int process_subtitle(InputStream *ist, AVFrame *frame)
if (!subtitle)
return 0;
- for (int i = 0; i < ist->nb_filters; i++) {
- ret = ifilter_sub2video(ist->filters[i], frame);
- if (ret < 0) {
- av_log(ist, AV_LOG_ERROR, "Error sending a subtitle for filtering: %s\n",
- av_err2str(ret));
- return ret;
- }
- }
+ ret = send_frame_to_filters(ist, frame);
+ if (ret < 0)
+ return ret;
subtitle = (AVSubtitle*)frame->buf[0]->data;
if (!subtitle->num_rects)
@@ -824,14 +811,10 @@ finish:
return ret;
// signal EOF to our downstreams
- if (ist->dec->type == AVMEDIA_TYPE_SUBTITLE)
- sub2video_flush(ist);
- else {
- ret = send_filter_eof(ist);
- if (ret < 0) {
- av_log(NULL, AV_LOG_FATAL, "Error marking filters as finished\n");
- return ret;
- }
+ ret = send_filter_eof(ist);
+ if (ret < 0) {
+ av_log(NULL, AV_LOG_FATAL, "Error marking filters as finished\n");
+ return ret;
}
return AVERROR_EOF;
diff --git a/fftools/ffmpeg_filter.c b/fftools/ffmpeg_filter.c
index 69c28a6b2b..1b964fc53f 100644
--- a/fftools/ffmpeg_filter.c
+++ b/fftools/ffmpeg_filter.c
@@ -21,6 +21,8 @@
#include <stdint.h>
#include "ffmpeg.h"
+#include "ffmpeg_utils.h"
+#include "thread_queue.h"
#include "libavfilter/avfilter.h"
#include "libavfilter/buffersink.h"
@@ -53,12 +55,50 @@ typedef struct FilterGraphPriv {
int is_meta;
int disable_conversions;
+ int nb_inputs_bound;
+ int nb_outputs_bound;
+
const char *graph_desc;
// frame for temporarily holding output from the filtergraph
AVFrame *frame;
// frame for sending output to the encoder
AVFrame *frame_enc;
+
+ pthread_t thread;
+ /**
+ * Queue for sending frames from the main thread to the filtergraph. Has
+ * nb_inputs+1 streams - the first nb_inputs streams correspond to
+ * filtergraph inputs. Frames on those streams may have their opaque set to
+ * - FRAME_OPAQUE_EOF: frame contains no data, but pts+timebase of the
+ * EOF event for the corresponding stream. Will be immediately followed by
+ * this stream being send-closed.
+ * - FRAME_OPAQUE_SUB_HEARTBEAT: frame contains no data, but pts+timebase of
+ * a subtitle heartbeat event. Will only be sent for sub2video streams.
+ *
+ * The last stream is "control" - the main thread sends empty AVFrames with
+ * opaque set to
+ * - FRAME_OPAQUE_REAP_FILTERS: a request to retrieve all frames available
+ * from filtergraph outputs. These frames are sent to corresponding
+ * streams in queue_out. Finally an empty frame is sent to the control
+ * stream in queue_out.
+ * - FRAME_OPAQUE_CHOOSE_INPUT: same as above, but in case no frames are
+ * available, the terminating empty frame's opaque will contain the index+1
+ * of the filtergraph input to which more input frames should be supplied.
+ */
+ ThreadQueue *queue_in;
+ /**
+ * Queue for sending frames from the filtergraph back to the main thread.
+ * Has nb_outputs+1 streams - the first nb_outputs streams correspond to
+ * filtergraph outputs.
+ *
+ * The last stream is "control" - see documentation for queue_in for more
+ * details.
+ */
+ ThreadQueue *queue_out;
+ // submitting frames to filter thread returned EOF
+ // this only happens on thread exit, so is not per-input
+ int eof_in;
} FilterGraphPriv;
static FilterGraphPriv *fgp_from_fg(FilterGraph *fg)
@@ -71,6 +111,22 @@ static const FilterGraphPriv *cfgp_from_cfg(const FilterGraph *fg)
return (const FilterGraphPriv*)fg;
}
+// data that is local to the filter thread and not visible outside of it
+typedef struct FilterGraphThread {
+ AVFrame *frame;
+
+ // Temporary buffer for output frames, since on filtergraph reset
+ // we cannot send them to encoders immediately.
+ // The output index is stored in frame opaque.
+ AVFifo *frame_queue_out;
+
+ int got_frame;
+
+ // EOF status of each input/output, as received by the thread
+ uint8_t *eof_in;
+ uint8_t *eof_out;
+} FilterGraphThread;
+
typedef struct InputFilterPriv {
InputFilter ifilter;
@@ -204,7 +260,25 @@ static OutputFilterPriv *ofp_from_ofilter(OutputFilter *ofilter)
return (OutputFilterPriv*)ofilter;
}
-static int configure_filtergraph(FilterGraph *fg);
+typedef struct FilterCommand {
+ char *target;
+ char *command;
+ char *arg;
+
+ double time;
+ int all_filters;
+} FilterCommand;
+
+static void filter_command_free(void *opaque, uint8_t *data)
+{
+ FilterCommand *fc = (FilterCommand*)data;
+
+ av_freep(&fc->target);
+ av_freep(&fc->command);
+ av_freep(&fc->arg);
+
+ av_free(data);
+}
static int sub2video_get_blank_frame(InputFilterPriv *ifp)
{
@@ -574,6 +648,59 @@ static int ifilter_has_all_input_formats(FilterGraph *fg)
return 1;
}
+static void *filter_thread(void *arg);
+
+// start the filtering thread once all inputs and outputs are bound
+static int fg_thread_try_start(FilterGraphPriv *fgp)
+{
+ FilterGraph *fg = &fgp->fg;
+ ObjPool *op;
+ int ret = 0;
+
+ if (fgp->nb_inputs_bound < fg->nb_inputs ||
+ fgp->nb_outputs_bound < fg->nb_outputs)
+ return 0;
+
+ op = objpool_alloc_frames();
+ if (!op)
+ return AVERROR(ENOMEM);
+
+ fgp->queue_in = tq_alloc(fg->nb_inputs + 1, 1, op, frame_move);
+ if (!fgp->queue_in) {
+ objpool_free(&op);
+ return AVERROR(ENOMEM);
+ }
+
+ // at least one output is mandatory
+ op = objpool_alloc_frames();
+ if (!op)
+ goto fail;
+
+ fgp->queue_out = tq_alloc(fg->nb_outputs + 1, 1, op, frame_move);
+ if (!fgp->queue_out) {
+ objpool_free(&op);
+ goto fail;
+ }
+
+ ret = pthread_create(&fgp->thread, NULL, filter_thread, fgp);
+ if (ret) {
+ ret = AVERROR(ret);
+ av_log(NULL, AV_LOG_ERROR, "pthread_create() for filtergraph %d failed: %s\n",
+ fg->index, av_err2str(ret));
+ goto fail;
+ }
+
+ return 0;
+fail:
+ if (ret >= 0)
+ ret = AVERROR(ENOMEM);
+
+ tq_free(&fgp->queue_in);
+ tq_free(&fgp->queue_out);
+
+ return ret;
+}
+
static char *describe_filter_link(FilterGraph *fg, AVFilterInOut *inout, int in)
{
AVFilterContext *ctx = inout->filter_ctx;
@@ -607,6 +734,7 @@ static OutputFilter *ofilter_alloc(FilterGraph *fg)
static int ifilter_bind_ist(InputFilter *ifilter, InputStream *ist)
{
InputFilterPriv *ifp = ifp_from_ifilter(ifilter);
+ FilterGraphPriv *fgp = fgp_from_fg(ifilter->graph);
int ret;
av_assert0(!ifp->ist);
@@ -624,7 +752,10 @@ static int ifilter_bind_ist(InputFilter *ifilter, InputStream *ist)
return AVERROR(ENOMEM);
}
- return 0;
+ fgp->nb_inputs_bound++;
+ av_assert0(fgp->nb_inputs_bound <= ifilter->graph->nb_inputs);
+
+ return fg_thread_try_start(fgp);
}
static int set_channel_layout(OutputFilterPriv *f, OutputStream *ost)
@@ -756,24 +887,10 @@ int ofilter_bind_ost(OutputFilter *ofilter, OutputStream *ost)
break;
}
- // if we have all input parameters and all outputs are bound,
- // the graph can now be configured
- if (ifilter_has_all_input_formats(fg)) {
- int ret;
+ fgp->nb_outputs_bound++;
+ av_assert0(fgp->nb_outputs_bound <= fg->nb_outputs);
- for (int i = 0; i < fg->nb_outputs; i++)
- if (!fg->outputs[i]->ost)
- return 0;
-
- ret = configure_filtergraph(fg);
- if (ret < 0) {
- av_log(fg, AV_LOG_ERROR, "Error configuring filter graph: %s\n",
- av_err2str(ret));
- return ret;
- }
- }
-
- return 0;
+ return fg_thread_try_start(fgp);
}
static InputFilter *ifilter_alloc(FilterGraph *fg)
@@ -803,6 +920,34 @@ static InputFilter *ifilter_alloc(FilterGraph *fg)
return ifilter;
}
+static int fg_thread_stop(FilterGraphPriv *fgp)
+{
+ void *ret;
+
+ if (!fgp->queue_in)
+ return 0;
+
+ for (int i = 0; i <= fgp->fg.nb_inputs; i++) {
+ InputFilterPriv *ifp = i < fgp->fg.nb_inputs ?
+ ifp_from_ifilter(fgp->fg.inputs[i]) : NULL;
+
+ if (ifp)
+ ifp->eof = 1;
+
+ tq_send_finish(fgp->queue_in, i);
+ }
+
+ for (int i = 0; i <= fgp->fg.nb_outputs; i++)
+ tq_receive_finish(fgp->queue_out, i);
+
+ pthread_join(fgp->thread, &ret);
+
+ tq_free(&fgp->queue_in);
+ tq_free(&fgp->queue_out);
+
+ return (int)(intptr_t)ret;
+}
+
void fg_free(FilterGraph **pfg)
{
FilterGraph *fg = *pfg;
@@ -812,6 +957,8 @@ void fg_free(FilterGraph **pfg)
return;
fgp = fgp_from_fg(fg);
+ fg_thread_stop(fgp);
+
avfilter_graph_free(&fg->graph);
for (int j = 0; j < fg->nb_inputs; j++) {
InputFilter *ifilter = fg->inputs[j];
@@ -1622,7 +1769,7 @@ static int graph_is_meta(AVFilterGraph *graph)
return 1;
}
-static int configure_filtergraph(FilterGraph *fg)
+static int configure_filtergraph(FilterGraph *fg, const FilterGraphThread *fgt)
{
FilterGraphPriv *fgp = fgp_from_fg(fg);
AVBufferRef *hw_device;
@@ -1746,7 +1893,7 @@ static int configure_filtergraph(FilterGraph *fg)
/* send the EOFs for the finished inputs */
for (i = 0; i < fg->nb_inputs; i++) {
InputFilterPriv *ifp = ifp_from_ifilter(fg->inputs[i]);
- if (ifp->eof) {
+ if (fgt->eof_in[i]) {
ret = av_buffersrc_add_frame(ifp->filter, NULL);
if (ret < 0)
goto fail;
@@ -1829,8 +1976,8 @@ int filtergraph_is_simple(const FilterGraph *fg)
return fgp->is_simple;
}
-void fg_send_command(FilterGraph *fg, double time, const char *target,
- const char *command, const char *arg, int all_filters)
+static void send_command(FilterGraph *fg, double time, const char *target,
+ const char *command, const char *arg, int all_filters)
{
int ret;
@@ -1853,6 +2000,29 @@ void fg_send_command(FilterGraph *fg, double time, const char *target,
}
}
+static int choose_input(const FilterGraph *fg, const FilterGraphThread *fgt)
+{
+ int nb_requests, nb_requests_max = 0;
+ int best_input = -1;
+
+ for (int i = 0; i < fg->nb_inputs; i++) {
+ InputFilter *ifilter = fg->inputs[i];
+ InputFilterPriv *ifp = ifp_from_ifilter(ifilter);
+ InputStream *ist = ifp->ist;
+
+ if (input_files[ist->file_index]->eagain || fgt->eof_in[i])
+ continue;
+
+ nb_requests = av_buffersrc_get_nb_failed_requests(ifp->filter);
+ if (nb_requests > nb_requests_max) {
+ nb_requests_max = nb_requests;
+ best_input = i;
+ }
+ }
+
+ return best_input;
+}
+
static int choose_out_timebase(OutputFilterPriv *ofp, AVFrame *frame)
{
OutputFilter *ofilter = &ofp->ofilter;
@@ -2088,16 +2258,16 @@ finish:
fps->dropped_keyframe |= fps->last_dropped && (frame->flags & AV_FRAME_FLAG_KEY);
}
-static int fg_output_frame(OutputFilterPriv *ofp, AVFrame *frame)
+static int fg_output_frame(OutputFilterPriv *ofp, FilterGraphThread *fgt,
+ AVFrame *frame, int buffer)
{
FilterGraphPriv *fgp = fgp_from_fg(ofp->ofilter.graph);
- OutputStream *ost = ofp->ofilter.ost;
AVFrame *frame_prev = ofp->fps.last_frame;
enum AVMediaType type = ofp->ofilter.type;
- int64_t nb_frames = 1, nb_frames_prev = 0;
+ int64_t nb_frames = !!frame, nb_frames_prev = 0;
- if (type == AVMEDIA_TYPE_VIDEO)
+ if (type == AVMEDIA_TYPE_VIDEO && (frame || fgt->got_frame))
video_sync_process(ofp, frame, &nb_frames, &nb_frames_prev);
for (int64_t i = 0; i < nb_frames; i++) {
@@ -2136,10 +2306,31 @@ static int fg_output_frame(OutputFilterPriv *ofp, AVFrame *frame)
frame_out = frame;
}
- ret = enc_frame(ost, frame_out);
- av_frame_unref(frame_out);
- if (ret < 0)
- return ret;
+ if (buffer) {
+ AVFrame *f = av_frame_alloc();
+
+ if (!f) {
+ av_frame_unref(frame_out);
+ return AVERROR(ENOMEM);
+ }
+
+ av_frame_move_ref(f, frame_out);
+ f->opaque = (void*)(intptr_t)ofp->index;
+
+ ret = av_fifo_write(fgt->frame_queue_out, &f, 1);
+ if (ret < 0) {
+ av_frame_free(&f);
+ return AVERROR(ENOMEM);
+ }
+ } else {
+ // return the frame to the main thread
+ ret = tq_send(fgp->queue_out, ofp->index, frame_out);
+ if (ret < 0) {
+ av_frame_unref(frame_out);
+ fgt->eof_out[ofp->index] = 1;
+ return ret == AVERROR_EOF ? 0 : ret;
+ }
+ }
if (type == AVMEDIA_TYPE_VIDEO) {
ofp->fps.frame_number++;
@@ -2149,7 +2340,7 @@ static int fg_output_frame(OutputFilterPriv *ofp, AVFrame *frame)
frame->flags &= ~AV_FRAME_FLAG_KEY;
}
- ofp->got_frame = 1;
+ fgt->got_frame = 1;
}
if (frame && frame_prev) {
@@ -2157,23 +2348,27 @@ static int fg_output_frame(OutputFilterPriv *ofp, AVFrame *frame)
av_frame_move_ref(frame_prev, frame);
}
+ if (!frame) {
+ tq_send_finish(fgp->queue_out, ofp->index);
+ fgt->eof_out[ofp->index] = 1;
+ }
+
return 0;
}
-static int fg_output_step(OutputFilterPriv *ofp, int flush)
+static int fg_output_step(OutputFilterPriv *ofp, FilterGraphThread *fgt,
+ AVFrame *frame, int buffer)
{
FilterGraphPriv *fgp = fgp_from_fg(ofp->ofilter.graph);
OutputStream *ost = ofp->ofilter.ost;
- AVFrame *frame = fgp->frame;
AVFilterContext *filter = ofp->filter;
FrameData *fd;
int ret;
ret = av_buffersink_get_frame_flags(filter, frame,
AV_BUFFERSINK_FLAG_NO_REQUEST);
- if (flush && ret == AVERROR_EOF && ofp->got_frame &&
- ost->type == AVMEDIA_TYPE_VIDEO) {
- ret = fg_output_frame(ofp, NULL);
+ if (ret == AVERROR_EOF && !buffer && !fgt->eof_out[ofp->index]) {
+ ret = fg_output_frame(ofp, fgt, NULL, buffer);
return (ret < 0) ? ret : 1;
} else if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
return 1;
@@ -2183,22 +2378,18 @@ static int fg_output_step(OutputFilterPriv *ofp, int flush)
av_err2str(ret));
return ret;
}
- if (ost->finished) {
+
+ if (fgt->eof_out[ofp->index]) {
av_frame_unref(frame);
return 0;
}
frame->time_base = av_buffersink_get_time_base(filter);
- if (frame->pts != AV_NOPTS_VALUE) {
- ost->filter->last_pts = av_rescale_q(frame->pts, frame->time_base,
- AV_TIME_BASE_Q);
-
- if (debug_ts)
- av_log(fgp, AV_LOG_INFO, "filter_raw -> pts:%s pts_time:%s time_base:%d/%d\n",
- av_ts2str(frame->pts), av_ts2timestr(frame->pts, &frame->time_base),
- frame->time_base.num, frame->time_base.den);
- }
+ if (debug_ts)
+ av_log(fgp, AV_LOG_INFO, "filter_raw -> pts:%s pts_time:%s time_base:%d/%d\n",
+ av_ts2str(frame->pts), av_ts2timestr(frame->pts, &frame->time_base),
+ frame->time_base.num, frame->time_base.den);
// Choose the output timebase the first time we get a frame.
if (!ofp->tb_out_locked) {
@@ -2231,7 +2422,7 @@ static int fg_output_step(OutputFilterPriv *ofp, int flush)
fd->frame_rate_filter = ofp->fps.framerate;
}
- ret = fg_output_frame(ofp, frame);
+ ret = fg_output_frame(ofp, fgt, frame, buffer);
av_frame_unref(frame);
if (ret < 0)
return ret;
@@ -2239,18 +2430,38 @@ static int fg_output_step(OutputFilterPriv *ofp, int flush)
return 0;
}
-int reap_filters(FilterGraph *fg, int flush)
+/* retrieve all frames available at filtergraph outputs and either send them to
+ * the main thread (buffer=0) or buffer them for later (buffer=1) */
+static int read_frames(FilterGraph *fg, FilterGraphThread *fgt,
+ AVFrame *frame, int buffer)
{
+ FilterGraphPriv *fgp = fgp_from_fg(fg);
+ int ret = 0;
+
if (!fg->graph)
return 0;
+ // process buffered frames
+ if (!buffer) {
+ AVFrame *f;
+
+ while (av_fifo_read(fgt->frame_queue_out, &f, 1) >= 0) {
+ int out_idx = (intptr_t)f->opaque;
+ f->opaque = NULL;
+ ret = tq_send(fgp->queue_out, out_idx, f);
+ av_frame_free(&f);
+ if (ret < 0 && ret != AVERROR_EOF)
+ return ret;
+ }
+ }
+
/* Reap all buffers present in the buffer sinks */
for (int i = 0; i < fg->nb_outputs; i++) {
OutputFilterPriv *ofp = ofp_from_ofilter(fg->outputs[i]);
int ret = 0;
while (!ret) {
- ret = fg_output_step(ofp, flush);
+ ret = fg_output_step(ofp, fgt, frame, buffer);
if (ret < 0)
return ret;
}
@@ -2259,7 +2470,7 @@ int reap_filters(FilterGraph *fg, int flush)
return 0;
}
-void ifilter_sub2video_heartbeat(InputFilter *ifilter, int64_t pts, AVRational tb)
+static void sub2video_heartbeat(InputFilter *ifilter, int64_t pts, AVRational tb)
{
InputFilterPriv *ifp = ifp_from_ifilter(ifilter);
int64_t pts2;
@@ -2284,11 +2495,17 @@ void ifilter_sub2video_heartbeat(InputFilter *ifilter, int64_t pts, AVRational t
sub2video_push_ref(ifp, pts2);
}
-int ifilter_sub2video(InputFilter *ifilter, const AVFrame *frame)
+static int sub2video_frame(InputFilter *ifilter, const AVFrame *frame)
{
InputFilterPriv *ifp = ifp_from_ifilter(ifilter);
int ret;
+ // heartbeat frame
+ if (frame && !frame->buf[0]) {
+ sub2video_heartbeat(ifilter, frame->pts, frame->time_base);
+ return 0;
+ }
+
if (ifilter->graph->graph) {
if (!frame) {
if (ifp->sub2video.end_pts < INT64_MAX)
@@ -2317,12 +2534,13 @@ int ifilter_sub2video(InputFilter *ifilter, const AVFrame *frame)
return 0;
}
-int ifilter_send_eof(InputFilter *ifilter, int64_t pts, AVRational tb)
+static int send_eof(FilterGraphThread *fgt, InputFilter *ifilter,
+ int64_t pts, AVRational tb)
{
InputFilterPriv *ifp = ifp_from_ifilter(ifilter);
int ret;
- ifp->eof = 1;
+ fgt->eof_in[ifp->index] = 1;
if (ifp->filter) {
pts = av_rescale_q_rnd(pts, tb, ifp->time_base,
@@ -2346,7 +2564,7 @@ int ifilter_send_eof(InputFilter *ifilter, int64_t pts, AVRational tb)
return ret;
if (ifilter_has_all_input_formats(ifilter->graph)) {
- ret = configure_filtergraph(ifilter->graph);
+ ret = configure_filtergraph(ifilter->graph, fgt);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Error initializing filters!\n");
return ret;
@@ -2365,10 +2583,10 @@ int ifilter_send_eof(InputFilter *ifilter, int64_t pts, AVRational tb)
return 0;
}
-int ifilter_send_frame(InputFilter *ifilter, AVFrame *frame, int keep_reference)
+static int send_frame(FilterGraph *fg, FilterGraphThread *fgt,
+ InputFilter *ifilter, AVFrame *frame)
{
InputFilterPriv *ifp = ifp_from_ifilter(ifilter);
- FilterGraph *fg = ifilter->graph;
AVFrameSideData *sd;
int need_reinit, ret;
@@ -2408,10 +2626,13 @@ int ifilter_send_frame(InputFilter *ifilter, AVFrame *frame, int keep_reference)
/* (re)init the graph if possible, otherwise buffer the frame and return */
if (need_reinit || !fg->graph) {
+ AVFrame *tmp = av_frame_alloc();
+
+ if (!tmp)
+ return AVERROR(ENOMEM);
+
if (!ifilter_has_all_input_formats(fg)) {
- AVFrame *tmp = av_frame_clone(frame);
- if (!tmp)
- return AVERROR(ENOMEM);
+ av_frame_move_ref(tmp, frame);
ret = av_fifo_write(ifp->frame_queue, &tmp, 1);
if (ret < 0)
@@ -2420,27 +2641,18 @@ int ifilter_send_frame(InputFilter *ifilter, AVFrame *frame, int keep_reference)
return ret;
}
- ret = reap_filters(fg, 0);
- if (ret < 0 && ret != AVERROR_EOF) {
- av_log(fg, AV_LOG_ERROR, "Error while filtering: %s\n", av_err2str(ret));
+ ret = fg->graph ? read_frames(fg, fgt, tmp, 1) : 0;
+ av_frame_free(&tmp);
+ if (ret < 0)
return ret;
- }
- ret = configure_filtergraph(fg);
+ ret = configure_filtergraph(fg, fgt);
if (ret < 0) {
av_log(fg, AV_LOG_ERROR, "Error reinitializing filters!\n");
return ret;
}
}
- if (keep_reference) {
- ret = av_frame_ref(ifp->frame, frame);
- if (ret < 0)
- return ret;
- } else
- av_frame_move_ref(ifp->frame, frame);
- frame = ifp->frame;
-
frame->pts = av_rescale_q(frame->pts, frame->time_base, ifp->time_base);
frame->duration = av_rescale_q(frame->duration, frame->time_base, ifp->time_base);
frame->time_base = ifp->time_base;
@@ -2462,20 +2674,30 @@ int ifilter_send_frame(InputFilter *ifilter, AVFrame *frame, int keep_reference)
return 0;
}
-int fg_transcode_step(FilterGraph *graph, InputStream **best_ist)
+static int msg_process(FilterGraphPriv *fgp, FilterGraphThread *fgt,
+ AVFrame *frame)
{
- FilterGraphPriv *fgp = fgp_from_fg(graph);
- int i, ret;
- int nb_requests, nb_requests_max = 0;
- InputStream *ist;
+ const enum FrameOpaque msg = (intptr_t)frame->opaque;
+ FilterGraph *fg = &fgp->fg;
+ int graph_eof = 0;
+ int ret;
- if (!graph->graph) {
- for (int i = 0; i < graph->nb_inputs; i++) {
- InputFilter *ifilter = graph->inputs[i];
+ frame->opaque = NULL;
+ av_assert0(msg > 0);
+ av_assert0(msg == FRAME_OPAQUE_SEND_COMMAND || !frame->buf[0]);
+
+ if (!fg->graph) {
+ // graph not configured yet, ignore all messages other than choosing
+ // the input to read from
+ if (msg != FRAME_OPAQUE_CHOOSE_INPUT)
+ goto done;
+
+ for (int i = 0; i < fg->nb_inputs; i++) {
+ InputFilter *ifilter = fg->inputs[i];
InputFilterPriv *ifp = ifp_from_ifilter(ifilter);
- if (ifp->format < 0 && !ifp->eof) {
- *best_ist = ifp->ist;
- return 0;
+ if (ifp->format < 0 && !fgt->eof_in[i]) {
+ frame->opaque = (void*)(intptr_t)(i + 1);
+ goto done;
}
}
@@ -2486,16 +2708,310 @@ int fg_transcode_step(FilterGraph *graph, InputStream **best_ist)
return AVERROR_BUG;
}
- *best_ist = NULL;
- ret = avfilter_graph_request_oldest(graph->graph);
- if (ret >= 0)
- return reap_filters(graph, 0);
+ if (msg == FRAME_OPAQUE_SEND_COMMAND) {
+ FilterCommand *fc = (FilterCommand*)frame->buf[0]->data;
+ send_command(fg, fc->time, fc->target, fc->command, fc->arg, fc->all_filters);
+ av_frame_unref(frame);
+ goto done;
+ }
- if (ret == AVERROR_EOF) {
- reap_filters(graph, 1);
- for (int i = 0; i < graph->nb_outputs; i++) {
- OutputFilter *ofilter = graph->outputs[i];
- OutputFilterPriv *ofp = ofp_from_ofilter(ofilter);
+ if (msg == FRAME_OPAQUE_CHOOSE_INPUT) {
+ ret = avfilter_graph_request_oldest(fg->graph);
+
+ graph_eof = ret == AVERROR_EOF;
+
+ if (ret == AVERROR(EAGAIN)) {
+ frame->opaque = (void*)(intptr_t)(choose_input(fg, fgt) + 1);
+ goto done;
+ } else if (ret < 0 && !graph_eof)
+ return ret;
+ }
+
+ ret = read_frames(fg, fgt, frame, 0);
+ if (ret < 0) {
+ av_log(fg, AV_LOG_ERROR, "Error sending filtered frames for encoding\n");
+ return ret;
+ }
+
+ if (graph_eof)
+ return AVERROR_EOF;
+
+ // signal to the main thread that we are done processing the message
+done:
+ ret = tq_send(fgp->queue_out, fg->nb_outputs, frame);
+ if (ret < 0) {
+ if (ret != AVERROR_EOF)
+ av_log(fg, AV_LOG_ERROR, "Error communicating with the main thread\n");
+ return ret;
+ }
+
+ return 0;
+}
+
+static void fg_thread_set_name(const FilterGraph *fg)
+{
+ char name[16];
+ if (filtergraph_is_simple(fg)) {
+ OutputStream *ost = fg->outputs[0]->ost;
+ snprintf(name, sizeof(name), "%cf#%d:%d",
+ av_get_media_type_string(ost->type)[0],
+ ost->file_index, ost->index);
+ } else {
+ snprintf(name, sizeof(name), "fc%d", fg->index);
+ }
+
+ ff_thread_setname(name);
+}
+
+static void fg_thread_uninit(FilterGraphThread *fgt)
+{
+ if (fgt->frame_queue_out) {
+ AVFrame *frame;
+ while (av_fifo_read(fgt->frame_queue_out, &frame, 1) >= 0)
+ av_frame_free(&frame);
+ av_fifo_freep2(&fgt->frame_queue_out);
+ }
+
+ av_frame_free(&fgt->frame);
+ av_freep(&fgt->eof_in);
+ av_freep(&fgt->eof_out);
+
+ memset(fgt, 0, sizeof(*fgt));
+}
+
+static int fg_thread_init(FilterGraphThread *fgt, const FilterGraph *fg)
+{
+ memset(fgt, 0, sizeof(*fgt));
+
+ fgt->frame = av_frame_alloc();
+ if (!fgt->frame)
+ goto fail;
+
+ fgt->eof_in = av_calloc(fg->nb_inputs, sizeof(*fgt->eof_in));
+ if (!fgt->eof_in)
+ goto fail;
+
+ fgt->eof_out = av_calloc(fg->nb_outputs, sizeof(*fgt->eof_out));
+ if (!fgt->eof_out)
+ goto fail;
+
+ fgt->frame_queue_out = av_fifo_alloc2(1, sizeof(AVFrame*), AV_FIFO_FLAG_AUTO_GROW);
+ if (!fgt->frame_queue_out)
+ goto fail;
+
+ return 0;
+
+fail:
+ fg_thread_uninit(fgt);
+ return AVERROR(ENOMEM);
+}
+
+static void *filter_thread(void *arg)
+{
+ FilterGraphPriv *fgp = arg;
+ FilterGraph *fg = &fgp->fg;
+
+ FilterGraphThread fgt;
+ int ret = 0, input_status = 0;
+
+ ret = fg_thread_init(&fgt, fg);
+ if (ret < 0)
+ goto finish;
+
+ fg_thread_set_name(fg);
+
+ // if we have all input parameters the graph can now be configured
+ if (ifilter_has_all_input_formats(fg)) {
+ ret = configure_filtergraph(fg, &fgt);
+ if (ret < 0) {
+ av_log(fg, AV_LOG_ERROR, "Error configuring filter graph: %s\n",
+ av_err2str(ret));
+ goto finish;
+ }
+ }
+
+ while (1) {
+ InputFilter *ifilter;
+ InputFilterPriv *ifp;
+ enum FrameOpaque o;
+ int input_idx, eof_frame;
+
+ input_status = tq_receive(fgp->queue_in, &input_idx, fgt.frame);
+ if (input_idx < 0 ||
+ (input_idx == fg->nb_inputs && input_status < 0)) {
+ av_log(fg, AV_LOG_VERBOSE, "Filtering thread received EOF\n");
+ break;
+ }
+
+ o = (intptr_t)fgt.frame->opaque;
+
+ // message on the control stream
+ if (input_idx == fg->nb_inputs) {
+ ret = msg_process(fgp, &fgt, fgt.frame);
+ if (ret < 0)
+ goto finish;
+
+ continue;
+ }
+
+ // we received an input frame or EOF
+ ifilter = fg->inputs[input_idx];
+ ifp = ifp_from_ifilter(ifilter);
+ eof_frame = input_status >= 0 && o == FRAME_OPAQUE_EOF;
+ if (ifp->type_src == AVMEDIA_TYPE_SUBTITLE) {
+ int hb_frame = input_status >= 0 && o == FRAME_OPAQUE_SUB_HEARTBEAT;
+ ret = sub2video_frame(ifilter, (fgt.frame->buf[0] || hb_frame) ? fgt.frame : NULL);
+ } else if (input_status >= 0 && fgt.frame->buf[0]) {
+ ret = send_frame(fg, &fgt, ifilter, fgt.frame);
+ } else {
+ int64_t pts = input_status >= 0 ? fgt.frame->pts : AV_NOPTS_VALUE;
+ AVRational tb = input_status >= 0 ? fgt.frame->time_base : (AVRational){ 1, 1 };
+ ret = send_eof(&fgt, ifilter, pts, tb);
+ }
+ av_frame_unref(fgt.frame);
+ if (ret < 0)
+ break;
+
+ if (eof_frame) {
+ // an EOF frame is immediately followed by sender closing
+ // the corresponding stream, so retrieve that event
+ input_status = tq_receive(fgp->queue_in, &input_idx, fgt.frame);
+ av_assert0(input_status == AVERROR_EOF && input_idx == ifp->index);
+ }
+
+ // signal to the main thread that we are done
+ ret = tq_send(fgp->queue_out, fg->nb_outputs, fgt.frame);
+ if (ret < 0) {
+ if (ret == AVERROR_EOF)
+ break;
+
+ av_log(fg, AV_LOG_ERROR, "Error communicating with the main thread\n");
+ goto finish;
+ }
+ }
+
+finish:
+ // EOF is normal termination
+ if (ret == AVERROR_EOF)
+ ret = 0;
+
+ for (int i = 0; i <= fg->nb_inputs; i++)
+ tq_receive_finish(fgp->queue_in, i);
+ for (int i = 0; i <= fg->nb_outputs; i++)
+ tq_send_finish(fgp->queue_out, i);
+
+ fg_thread_uninit(&fgt);
+
+ av_log(fg, AV_LOG_VERBOSE, "Terminating filtering thread\n");
+
+ return (void*)(intptr_t)ret;
+}
+
+static int thread_send_frame(FilterGraphPriv *fgp, InputFilter *ifilter,
+ AVFrame *frame, enum FrameOpaque type)
+{
+ InputFilterPriv *ifp = ifp_from_ifilter(ifilter);
+ int output_idx, ret;
+
+ if (ifp->eof) {
+ av_frame_unref(frame);
+ return AVERROR_EOF;
+ }
+
+ frame->opaque = (void*)(intptr_t)type;
+
+ ret = tq_send(fgp->queue_in, ifp->index, frame);
+ if (ret < 0) {
+ ifp->eof = 1;
+ av_frame_unref(frame);
+ return ret;
+ }
+
+ if (type == FRAME_OPAQUE_EOF)
+ tq_send_finish(fgp->queue_in, ifp->index);
+
+ // wait for the frame to be processed
+ ret = tq_receive(fgp->queue_out, &output_idx, frame);
+ av_assert0(output_idx == fgp->fg.nb_outputs || ret == AVERROR_EOF);
+
+ return ret;
+}
+
+int ifilter_send_frame(InputFilter *ifilter, AVFrame *frame, int keep_reference)
+{
+ FilterGraphPriv *fgp = fgp_from_fg(ifilter->graph);
+ int ret;
+
+ if (keep_reference) {
+ ret = av_frame_ref(fgp->frame, frame);
+ if (ret < 0)
+ return ret;
+ } else
+ av_frame_move_ref(fgp->frame, frame);
+
+ return thread_send_frame(fgp, ifilter, fgp->frame, 0);
+}
+
+int ifilter_send_eof(InputFilter *ifilter, int64_t pts, AVRational tb)
+{
+ FilterGraphPriv *fgp = fgp_from_fg(ifilter->graph);
+ int ret;
+
+ fgp->frame->pts = pts;
+ fgp->frame->time_base = tb;
+
+ ret = thread_send_frame(fgp, ifilter, fgp->frame, FRAME_OPAQUE_EOF);
+
+ return ret == AVERROR_EOF ? 0 : ret;
+}
+
+void ifilter_sub2video_heartbeat(InputFilter *ifilter, int64_t pts, AVRational tb)
+{
+ FilterGraphPriv *fgp = fgp_from_fg(ifilter->graph);
+
+ fgp->frame->pts = pts;
+ fgp->frame->time_base = tb;
+
+ thread_send_frame(fgp, ifilter, fgp->frame, FRAME_OPAQUE_SUB_HEARTBEAT);
+}
+
+int fg_transcode_step(FilterGraph *graph, InputStream **best_ist)
+{
+ FilterGraphPriv *fgp = fgp_from_fg(graph);
+ int ret, got_frames = 0;
+
+ if (fgp->eof_in)
+ return AVERROR_EOF;
+
+ // signal to the filtering thread to return all frames it can
+ av_assert0(!fgp->frame->buf[0]);
+ fgp->frame->opaque = (void*)(intptr_t)(best_ist ?
+ FRAME_OPAQUE_CHOOSE_INPUT :
+ FRAME_OPAQUE_REAP_FILTERS);
+
+ ret = tq_send(fgp->queue_in, graph->nb_inputs, fgp->frame);
+ if (ret < 0) {
+ fgp->eof_in = 1;
+ goto finish;
+ }
+
+ while (1) {
+ OutputFilter *ofilter;
+ OutputFilterPriv *ofp;
+ OutputStream *ost;
+ int output_idx;
+
+ ret = tq_receive(fgp->queue_out, &output_idx, fgp->frame);
+
+ // EOF on the whole queue or the control stream
+ if (output_idx < 0 ||
+ (ret < 0 && output_idx == graph->nb_outputs))
+ goto finish;
+
+ // EOF for a specific stream
+ if (ret < 0) {
+ ofilter = graph->outputs[output_idx];
+ ofp = ofp_from_ofilter(ofilter);
// we are finished and no frames were ever seen at this output,
// at least initialize the encoder with a dummy frame
@@ -2533,30 +3049,111 @@ int fg_transcode_step(FilterGraph *graph, InputStream **best_ist)
av_frame_unref(frame);
}
- close_output_stream(ofilter->ost);
- }
- return 0;
- }
- if (ret != AVERROR(EAGAIN))
- return ret;
-
- for (i = 0; i < graph->nb_inputs; i++) {
- InputFilter *ifilter = graph->inputs[i];
- InputFilterPriv *ifp = ifp_from_ifilter(ifilter);
-
- ist = ifp->ist;
- if (input_files[ist->file_index]->eagain || ifp->eof)
+ close_output_stream(graph->outputs[output_idx]->ost);
continue;
- nb_requests = av_buffersrc_get_nb_failed_requests(ifp->filter);
- if (nb_requests > nb_requests_max) {
- nb_requests_max = nb_requests;
- *best_ist = ist;
}
+
+ // request was fully processed by the filtering thread,
+ // return the input stream to read from, if needed
+ if (output_idx == graph->nb_outputs) {
+ int input_idx = (intptr_t)fgp->frame->opaque - 1;
+ av_assert0(input_idx <= graph->nb_inputs);
+
+ if (best_ist) {
+ *best_ist = (input_idx >= 0 && input_idx < graph->nb_inputs) ?
+ ifp_from_ifilter(graph->inputs[input_idx])->ist : NULL;
+
+ if (input_idx < 0 && !got_frames) {
+ for (int i = 0; i < graph->nb_outputs; i++)
+ graph->outputs[i]->ost->unavailable = 1;
+ }
+ }
+ break;
+ }
+
+ // got a frame from the filtering thread, send it for encoding
+ ofilter = graph->outputs[output_idx];
+ ost = ofilter->ost;
+ ofp = ofp_from_ofilter(ofilter);
+
+ if (ost->finished) {
+ av_frame_unref(fgp->frame);
+ tq_receive_finish(fgp->queue_out, output_idx);
+ continue;
+ }
+
+ if (fgp->frame->pts != AV_NOPTS_VALUE) {
+ ofilter->last_pts = av_rescale_q(fgp->frame->pts,
+ fgp->frame->time_base,
+ AV_TIME_BASE_Q);
+ }
+
+ ret = enc_frame(ost, fgp->frame);
+ av_frame_unref(fgp->frame);
+ if (ret < 0)
+ goto finish;
+
+ ofp->got_frame = 1;
+ got_frames = 1;
}
- if (!*best_ist)
- for (i = 0; i < graph->nb_outputs; i++)
- graph->outputs[i]->ost->unavailable = 1;
+finish:
+ if (ret < 0) {
+ fgp->eof_in = 1;
+ for (int i = 0; i < graph->nb_outputs; i++)
+ close_output_stream(graph->outputs[i]->ost);
+ }
- return 0;
+ return ret;
+}
+
+int reap_filters(FilterGraph *fg, int flush)
+{
+ return fg_transcode_step(fg, NULL);
+}
+
+void fg_send_command(FilterGraph *fg, double time, const char *target,
+ const char *command, const char *arg, int all_filters)
+{
+ FilterGraphPriv *fgp = fgp_from_fg(fg);
+ AVBufferRef *buf;
+ FilterCommand *fc;
+ int output_idx, ret;
+
+ if (!fgp->queue_in)
+ return;
+
+ fc = av_mallocz(sizeof(*fc));
+ if (!fc)
+ return;
+
+ buf = av_buffer_create((uint8_t*)fc, sizeof(*fc), filter_command_free, NULL, 0);
+ if (!buf) {
+ av_freep(&fc);
+ return;
+ }
+
+ fc->target = av_strdup(target);
+ fc->command = av_strdup(command);
+ fc->arg = av_strdup(arg);
+ if (!fc->target || !fc->command || !fc->arg) {
+ av_buffer_unref(&buf);
+ return;
+ }
+
+ fc->time = time;
+ fc->all_filters = all_filters;
+
+ fgp->frame->buf[0] = buf;
+ fgp->frame->opaque = (void*)(intptr_t)FRAME_OPAQUE_SEND_COMMAND;
+
+ ret = tq_send(fgp->queue_in, fg->nb_inputs, fgp->frame);
+ if (ret < 0) {
+ av_frame_unref(fgp->frame);
+ return;
+ }
+
+ // wait for the frame to be processed
+ ret = tq_receive(fgp->queue_out, &output_idx, fgp->frame);
+ av_assert0(output_idx == fgp->fg.nb_outputs || ret == AVERROR_EOF);
}
--
2.42.0
* [FFmpeg-devel] [PATCH 06/13] fftools/ffmpeg_filter: buffer sub2video heartbeat frames like other frames
2023-11-23 19:14 [FFmpeg-devel] [PATCH v2] ffmpeg CLI multithreading Anton Khirnov
` (4 preceding siblings ...)
2023-11-23 19:15 ` [FFmpeg-devel] [PATCH 05/13] fftools/ffmpeg_filter: move filtering to a separate thread Anton Khirnov
@ 2023-11-23 19:15 ` Anton Khirnov
2023-11-23 19:15 ` [FFmpeg-devel] [PATCH 07/13] fftools/ffmpeg_filter: reindent Anton Khirnov
` (6 subsequent siblings)
12 siblings, 0 replies; 49+ messages in thread
From: Anton Khirnov @ 2023-11-23 19:15 UTC (permalink / raw)
To: ffmpeg-devel
Otherwise they'd be silently ignored if received by the filtering thread
before the filtergraph can be initialized, which would make the output
dependent on the order in which frames from different inputs arrive.
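[Editor's sketch] The ordering hazard this commit message describes can be modelled in a few lines (hypothetical names, not the actual fftools code): frames that arrive before the graph is configured go into a per-input FIFO and are replayed in arrival order once the graph exists, instead of being dropped.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical minimal model of the buffering rule in this patch: while
 * the graph is not yet configured, every incoming frame (including
 * sub2video heartbeats) is queued; once configured, queued frames are
 * replayed in arrival order, so the output no longer depends on the
 * order in which frames from different inputs arrive. */
typedef struct Frame { long pts; } Frame;

typedef struct Input {
    int    graph_configured;
    Frame  queue[16];
    size_t nb_queued;
    long   last_filtered_pts; /* stands in for "sent to the filtergraph" */
} Input;

static int input_send_frame(Input *in, Frame f)
{
    if (!in->graph_configured) {
        if (in->nb_queued == 16)
            return -1;                  /* queue full */
        in->queue[in->nb_queued++] = f; /* buffer, do not drop */
        return 0;
    }
    in->last_filtered_pts = f.pts; /* would be av_buffersrc_add_frame() */
    return 0;
}

static void input_configure(Input *in)
{
    in->graph_configured = 1;
    for (size_t i = 0; i < in->nb_queued; i++)
        in->last_filtered_pts = in->queue[i].pts; /* replay in order */
    in->nb_queued = 0;
}
```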
---
fftools/ffmpeg_filter.c | 43 ++++++++++++++++++++++++-----------------
1 file changed, 25 insertions(+), 18 deletions(-)
diff --git a/fftools/ffmpeg_filter.c b/fftools/ffmpeg_filter.c
index 1b964fc53f..3bafef9717 100644
--- a/fftools/ffmpeg_filter.c
+++ b/fftools/ffmpeg_filter.c
@@ -1769,6 +1769,8 @@ static int graph_is_meta(AVFilterGraph *graph)
return 1;
}
+static int sub2video_frame(InputFilter *ifilter, AVFrame *frame);
+
static int configure_filtergraph(FilterGraph *fg, const FilterGraphThread *fgt)
{
FilterGraphPriv *fgp = fgp_from_fg(fg);
@@ -1880,7 +1882,7 @@ static int configure_filtergraph(FilterGraph *fg, const FilterGraphThread *fgt)
AVFrame *tmp;
while (av_fifo_read(ifp->frame_queue, &tmp, 1) >= 0) {
if (ifp->type_src == AVMEDIA_TYPE_SUBTITLE) {
- sub2video_update(ifp, INT64_MIN, (const AVSubtitle*)tmp->buf[0]->data);
+ sub2video_frame(&ifp->ifilter, tmp);
} else {
ret = av_buffersrc_add_frame(ifp->filter, tmp);
}
@@ -2475,9 +2477,6 @@ static void sub2video_heartbeat(InputFilter *ifilter, int64_t pts, AVRational tb
InputFilterPriv *ifp = ifp_from_ifilter(ifilter);
int64_t pts2;
- if (!ifilter->graph->graph)
- return;
-
/* subtitles seem to be usually muxed ahead of other streams;
if not, subtracting a larger time here is necessary */
pts2 = av_rescale_q(pts, tb, ifp->time_base) - 1;
@@ -2495,18 +2494,38 @@ static void sub2video_heartbeat(InputFilter *ifilter, int64_t pts, AVRational tb
sub2video_push_ref(ifp, pts2);
}
-static int sub2video_frame(InputFilter *ifilter, const AVFrame *frame)
+static int sub2video_frame(InputFilter *ifilter, AVFrame *frame)
{
InputFilterPriv *ifp = ifp_from_ifilter(ifilter);
int ret;
+ if (!ifilter->graph->graph) {
+ AVFrame *tmp;
+
+ if (!frame)
+ return 0;
+
+ tmp = av_frame_alloc();
+ if (!tmp)
+ return AVERROR(ENOMEM);
+
+ av_frame_move_ref(tmp, frame);
+
+ ret = av_fifo_write(ifp->frame_queue, &tmp, 1);
+ if (ret < 0) {
+ av_frame_free(&tmp);
+ return ret;
+ }
+
+ return 0;
+ }
+
// heartbeat frame
if (frame && !frame->buf[0]) {
sub2video_heartbeat(ifilter, frame->pts, frame->time_base);
return 0;
}
- if (ifilter->graph->graph) {
if (!frame) {
if (ifp->sub2video.end_pts < INT64_MAX)
sub2video_update(ifp, INT64_MAX, NULL);
@@ -2518,18 +2537,6 @@ static int sub2video_frame(InputFilter *ifilter, const AVFrame *frame)
ifp->height = frame->height ? frame->height : ifp->height;
sub2video_update(ifp, INT64_MIN, (const AVSubtitle*)frame->buf[0]->data);
- } else if (frame) {
- AVFrame *tmp = av_frame_clone(frame);
-
- if (!tmp)
- return AVERROR(ENOMEM);
-
- ret = av_fifo_write(ifp->frame_queue, &tmp, 1);
- if (ret < 0) {
- av_frame_free(&tmp);
- return ret;
- }
- }
return 0;
}
--
2.42.0
* [FFmpeg-devel] [PATCH 07/13] fftools/ffmpeg_filter: reindent
2023-11-23 19:14 [FFmpeg-devel] [PATCH v2] ffmpeg CLI multithreading Anton Khirnov
` (5 preceding siblings ...)
2023-11-23 19:15 ` [FFmpeg-devel] [PATCH 06/13] fftools/ffmpeg_filter: buffer sub2video heartbeat frames like other frames Anton Khirnov
@ 2023-11-23 19:15 ` Anton Khirnov
2023-11-23 19:15 ` [FFmpeg-devel] [PATCH 08/13] fftools/ffmpeg_mux: add muxing thread private data Anton Khirnov
` (5 subsequent siblings)
12 siblings, 0 replies; 49+ messages in thread
From: Anton Khirnov @ 2023-11-23 19:15 UTC (permalink / raw)
To: ffmpeg-devel
---
fftools/ffmpeg_filter.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/fftools/ffmpeg_filter.c b/fftools/ffmpeg_filter.c
index 3bafef9717..cd4b55dd40 100644
--- a/fftools/ffmpeg_filter.c
+++ b/fftools/ffmpeg_filter.c
@@ -2526,17 +2526,17 @@ static int sub2video_frame(InputFilter *ifilter, AVFrame *frame)
return 0;
}
- if (!frame) {
- if (ifp->sub2video.end_pts < INT64_MAX)
- sub2video_update(ifp, INT64_MAX, NULL);
+ if (!frame) {
+ if (ifp->sub2video.end_pts < INT64_MAX)
+ sub2video_update(ifp, INT64_MAX, NULL);
- return av_buffersrc_add_frame(ifp->filter, NULL);
- }
+ return av_buffersrc_add_frame(ifp->filter, NULL);
+ }
- ifp->width = frame->width ? frame->width : ifp->width;
- ifp->height = frame->height ? frame->height : ifp->height;
+ ifp->width = frame->width ? frame->width : ifp->width;
+ ifp->height = frame->height ? frame->height : ifp->height;
- sub2video_update(ifp, INT64_MIN, (const AVSubtitle*)frame->buf[0]->data);
+ sub2video_update(ifp, INT64_MIN, (const AVSubtitle*)frame->buf[0]->data);
return 0;
}
--
2.42.0
* [FFmpeg-devel] [PATCH 08/13] fftools/ffmpeg_mux: add muxing thread private data
2023-11-23 19:14 [FFmpeg-devel] [PATCH v2] ffmpeg CLI multithreading Anton Khirnov
` (6 preceding siblings ...)
2023-11-23 19:15 ` [FFmpeg-devel] [PATCH 07/13] fftools/ffmpeg_filter: reindent Anton Khirnov
@ 2023-11-23 19:15 ` Anton Khirnov
2023-11-23 19:15 ` [FFmpeg-devel] [PATCH 09/13] fftools/ffmpeg_mux: move bitstream filtering to the muxer thread Anton Khirnov
` (4 subsequent siblings)
12 siblings, 0 replies; 49+ messages in thread
From: Anton Khirnov @ 2023-11-23 19:15 UTC (permalink / raw)
To: ffmpeg-devel
To be used for data that never needs to be visible outside of the muxer
thread. Start by moving the muxed AVPacket in there.
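[Editor's sketch] The init/uninit pair this patch adds follows a common C idiom: zero the context, allocate members one by one, and on any failure jump to a single label that calls the uninit function, so partially initialized state is released in one place. A generic sketch of that pattern (hypothetical names, not the patch itself):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Sketch of the thread-private-context pattern used by this patch:
 * uninit() frees everything and re-zeroes the struct, so init() can
 * jump to a single failure path no matter which allocation failed,
 * and uninit() is safe to call more than once. */
typedef struct ThreadContext {
    void *pkt; /* stands in for the muxed AVPacket */
} ThreadContext;

static void thread_ctx_uninit(ThreadContext *tc)
{
    free(tc->pkt);
    memset(tc, 0, sizeof(*tc)); /* idempotent: double-uninit is a no-op */
}

static int thread_ctx_init(ThreadContext *tc)
{
    memset(tc, 0, sizeof(*tc));

    tc->pkt = malloc(64);
    if (!tc->pkt)
        goto fail;

    return 0;

fail:
    thread_ctx_uninit(tc); /* releases whatever was allocated so far */
    return -1;             /* AVERROR(ENOMEM) in the real code */
}
```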
---
fftools/ffmpeg_mux.c | 44 +++++++++++++++++++++++++++++++++++---------
1 file changed, 35 insertions(+), 9 deletions(-)
diff --git a/fftools/ffmpeg_mux.c b/fftools/ffmpeg_mux.c
index 30c033036d..82352b7981 100644
--- a/fftools/ffmpeg_mux.c
+++ b/fftools/ffmpeg_mux.c
@@ -39,6 +39,10 @@
#include "libavformat/avformat.h"
#include "libavformat/avio.h"
+typedef struct MuxThreadContext {
+ AVPacket *pkt;
+} MuxThreadContext;
+
int want_sdp = 1;
static Muxer *mux_from_of(OutputFile *of)
@@ -210,18 +214,40 @@ static void thread_set_name(OutputFile *of)
ff_thread_setname(name);
}
+static void mux_thread_uninit(MuxThreadContext *mt)
+{
+ av_packet_free(&mt->pkt);
+
+ memset(mt, 0, sizeof(*mt));
+}
+
+static int mux_thread_init(MuxThreadContext *mt)
+{
+ memset(mt, 0, sizeof(*mt));
+
+ mt->pkt = av_packet_alloc();
+ if (!mt->pkt)
+ goto fail;
+
+ return 0;
+
+fail:
+ mux_thread_uninit(mt);
+ return AVERROR(ENOMEM);
+}
+
static void *muxer_thread(void *arg)
{
Muxer *mux = arg;
OutputFile *of = &mux->of;
- AVPacket *pkt = NULL;
+
+ MuxThreadContext mt;
+
int ret = 0;
- pkt = av_packet_alloc();
- if (!pkt) {
- ret = AVERROR(ENOMEM);
+ ret = mux_thread_init(&mt);
+ if (ret < 0)
goto finish;
- }
thread_set_name(of);
@@ -229,7 +255,7 @@ static void *muxer_thread(void *arg)
OutputStream *ost;
int stream_idx, stream_eof = 0;
- ret = tq_receive(mux->tq, &stream_idx, pkt);
+ ret = tq_receive(mux->tq, &stream_idx, mt.pkt);
if (stream_idx < 0) {
av_log(mux, AV_LOG_VERBOSE, "All streams finished\n");
ret = 0;
@@ -237,8 +263,8 @@ static void *muxer_thread(void *arg)
}
ost = of->streams[stream_idx];
- ret = sync_queue_process(mux, ost, ret < 0 ? NULL : pkt, &stream_eof);
- av_packet_unref(pkt);
+ ret = sync_queue_process(mux, ost, ret < 0 ? NULL : mt.pkt, &stream_eof);
+ av_packet_unref(mt.pkt);
if (ret == AVERROR_EOF) {
if (stream_eof) {
tq_receive_finish(mux->tq, stream_idx);
@@ -254,7 +280,7 @@ static void *muxer_thread(void *arg)
}
finish:
- av_packet_free(&pkt);
+ mux_thread_uninit(&mt);
for (unsigned int i = 0; i < mux->fc->nb_streams; i++)
tq_receive_finish(mux->tq, i);
--
2.42.0
* [FFmpeg-devel] [PATCH 09/13] fftools/ffmpeg_mux: move bitstream filtering to the muxer thread
2023-11-23 19:14 [FFmpeg-devel] [PATCH v2] ffmpeg CLI multithreading Anton Khirnov
` (7 preceding siblings ...)
2023-11-23 19:15 ` [FFmpeg-devel] [PATCH 08/13] fftools/ffmpeg_mux: add muxing thread private data Anton Khirnov
@ 2023-11-23 19:15 ` Anton Khirnov
2023-11-23 19:15 ` [FFmpeg-devel] [PATCH 10/13] fftools/ffmpeg_demux: switch from AVThreadMessageQueue to ThreadQueue Anton Khirnov
` (3 subsequent siblings)
12 siblings, 0 replies; 49+ messages in thread
From: Anton Khirnov @ 2023-11-23 19:15 UTC (permalink / raw)
To: ffmpeg-devel
This will be the appropriate place for it after the rest of transcoding
is switched to a threaded architecture.
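[Editor's sketch] The send/receive drain loop being moved here has the shape common to libavcodec-style filter APIs: push one input, then pull outputs until the filter signals it needs more input (EAGAIN) or is fully drained (EOF after a NULL flush). A toy model of those semantics, with made-up names and error codes rather than the real BSF API:

```c
#include <assert.h>

/* Toy model of the send/receive drain loop this patch moves into the
 * muxer thread (shapes only; not the libavcodec BSF API itself).
 * A "filter" buffers sent packets; receive() yields them one by one,
 * returns AGAIN when more input is needed and EOF_ once flushed dry. */
enum { OK = 0, AGAIN = -11, EOF_ = -541 };

typedef struct Filter {
    int buf[8];
    int head, tail;
    int flushed;
} Filter;

static int filter_send(Filter *f, const int *pkt)
{
    if (!pkt) {            /* NULL packet == flush request */
        f->flushed = 1;
        return OK;
    }
    f->buf[f->tail++ % 8] = *pkt;
    return OK;
}

static int filter_receive(Filter *f, int *pkt)
{
    if (f->head == f->tail)
        return f->flushed ? EOF_ : AGAIN;
    *pkt = f->buf[f->head++ % 8];
    return OK;
}

/* The drain loop: after each send, pull packets until AGAIN or EOF_. */
static int drain(Filter *f, int *out, int max)
{
    int n = 0, pkt;
    for (;;) {
        int ret = filter_receive(f, &pkt);
        if (ret == AGAIN || ret == EOF_)
            return n;
        if (n < max)
            out[n++] = pkt;
    }
}
```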
---
fftools/ffmpeg_mux.c | 112 ++++++++++++++++++++++++++-----------------
1 file changed, 67 insertions(+), 45 deletions(-)
diff --git a/fftools/ffmpeg_mux.c b/fftools/ffmpeg_mux.c
index 82352b7981..57fb8a8413 100644
--- a/fftools/ffmpeg_mux.c
+++ b/fftools/ffmpeg_mux.c
@@ -207,6 +207,67 @@ static int sync_queue_process(Muxer *mux, OutputStream *ost, AVPacket *pkt, int
return 0;
}
+/* apply the output bitstream filters */
+static int mux_packet_filter(Muxer *mux, OutputStream *ost,
+ AVPacket *pkt, int *stream_eof)
+{
+ MuxStream *ms = ms_from_ost(ost);
+ const char *err_msg;
+ int ret = 0;
+
+ if (ms->bsf_ctx) {
+ int bsf_eof = 0;
+
+ if (pkt)
+ av_packet_rescale_ts(pkt, pkt->time_base, ms->bsf_ctx->time_base_in);
+
+ ret = av_bsf_send_packet(ms->bsf_ctx, pkt);
+ if (ret < 0) {
+ err_msg = "submitting a packet for bitstream filtering";
+ goto fail;
+ }
+
+ while (!bsf_eof) {
+ ret = av_bsf_receive_packet(ms->bsf_ctx, ms->bsf_pkt);
+ if (ret == AVERROR(EAGAIN))
+ return 0;
+ else if (ret == AVERROR_EOF)
+ bsf_eof = 1;
+ else if (ret < 0) {
+ av_log(ost, AV_LOG_ERROR,
+ "Error applying bitstream filters to a packet: %s",
+ av_err2str(ret));
+ if (exit_on_error)
+ return ret;
+ continue;
+ }
+
+ if (!bsf_eof)
+ ms->bsf_pkt->time_base = ms->bsf_ctx->time_base_out;
+
+ ret = sync_queue_process(mux, ost, bsf_eof ? NULL : ms->bsf_pkt, stream_eof);
+ if (ret < 0)
+ goto mux_fail;
+ }
+ *stream_eof = 1;
+ return AVERROR_EOF;
+ } else {
+ ret = sync_queue_process(mux, ost, pkt, stream_eof);
+ if (ret < 0)
+ goto mux_fail;
+ }
+
+ return 0;
+
+mux_fail:
+ err_msg = "submitting a packet to the muxer";
+
+fail:
+ if (ret != AVERROR_EOF)
+ av_log(ost, AV_LOG_ERROR, "Error %s: %s\n", err_msg, av_err2str(ret));
+ return ret;
+}
+
static void thread_set_name(OutputFile *of)
{
char name[16];
@@ -263,7 +324,7 @@ static void *muxer_thread(void *arg)
}
ost = of->streams[stream_idx];
- ret = sync_queue_process(mux, ost, ret < 0 ? NULL : mt.pkt, &stream_eof);
+ ret = mux_packet_filter(mux, ost, ret < 0 ? NULL : mt.pkt, &stream_eof);
av_packet_unref(mt.pkt);
if (ret == AVERROR_EOF) {
if (stream_eof) {
@@ -376,58 +437,19 @@ static int submit_packet(Muxer *mux, AVPacket *pkt, OutputStream *ost)
int of_output_packet(OutputFile *of, OutputStream *ost, AVPacket *pkt)
{
Muxer *mux = mux_from_of(of);
- MuxStream *ms = ms_from_ost(ost);
- const char *err_msg;
int ret = 0;
if (pkt && pkt->dts != AV_NOPTS_VALUE)
ost->last_mux_dts = av_rescale_q(pkt->dts, pkt->time_base, AV_TIME_BASE_Q);
- /* apply the output bitstream filters */
- if (ms->bsf_ctx) {
- int bsf_eof = 0;
-
- if (pkt)
- av_packet_rescale_ts(pkt, pkt->time_base, ms->bsf_ctx->time_base_in);
-
- ret = av_bsf_send_packet(ms->bsf_ctx, pkt);
- if (ret < 0) {
- err_msg = "submitting a packet for bitstream filtering";
- goto fail;
- }
-
- while (!bsf_eof) {
- ret = av_bsf_receive_packet(ms->bsf_ctx, ms->bsf_pkt);
- if (ret == AVERROR(EAGAIN))
- return 0;
- else if (ret == AVERROR_EOF)
- bsf_eof = 1;
- else if (ret < 0) {
- err_msg = "applying bitstream filters to a packet";
- goto fail;
- }
-
- if (!bsf_eof)
- ms->bsf_pkt->time_base = ms->bsf_ctx->time_base_out;
-
- ret = submit_packet(mux, bsf_eof ? NULL : ms->bsf_pkt, ost);
- if (ret < 0)
- goto mux_fail;
- }
- } else {
- ret = submit_packet(mux, pkt, ost);
- if (ret < 0)
- goto mux_fail;
+ ret = submit_packet(mux, pkt, ost);
+ if (ret < 0) {
+ av_log(ost, AV_LOG_ERROR, "Error submitting a packet to the muxer: %s",
+ av_err2str(ret));
+ return ret;
}
return 0;
-
-mux_fail:
- err_msg = "submitting a packet to the muxer";
-
-fail:
- av_log(ost, AV_LOG_ERROR, "Error %s\n", err_msg);
- return exit_on_error ? ret : 0;
}
int of_streamcopy(OutputStream *ost, const AVPacket *pkt, int64_t dts)
--
2.42.0
* [FFmpeg-devel] [PATCH 10/13] fftools/ffmpeg_demux: switch from AVThreadMessageQueue to ThreadQueue
2023-11-23 19:14 [FFmpeg-devel] [PATCH v2] ffmpeg CLI multithreading Anton Khirnov
` (8 preceding siblings ...)
2023-11-23 19:15 ` [FFmpeg-devel] [PATCH 09/13] fftools/ffmpeg_mux: move bitstream filtering to the muxer thread Anton Khirnov
@ 2023-11-23 19:15 ` Anton Khirnov
2023-11-23 19:15 ` [FFmpeg-devel] [PATCH 11/13] fftools/ffmpeg_enc: move encoding to a separate thread Anton Khirnov
` (2 subsequent siblings)
12 siblings, 0 replies; 49+ messages in thread
From: Anton Khirnov @ 2023-11-23 19:15 UTC (permalink / raw)
To: ffmpeg-devel
* the code is made shorter and simpler
* avoids constantly allocating and freeing AVPackets, thanks to
ThreadQueue integration with ObjPool
* is consistent with decoding/filtering/muxing
* reduces the diff in the future switch to thread-aware scheduling
This makes ifile_get_packet() always block. Any potential issues caused
by this will be resolved by the switch to thread-aware scheduling in
future commits.
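[Editor's sketch] The second bullet above, avoiding per-packet allocation, comes from ThreadQueue's integration with an object pool. The idea can be sketched generically (hypothetical names, not the fftools ObjPool API): released objects go onto a free list and are handed out again, so a steady-state producer/consumer pair stops hitting the allocator.

```c
#include <assert.h>
#include <stdlib.h>

/* Generic object-pool sketch: get() reuses a previously released
 * object when one is available and only allocates on a cold pool;
 * release() pushes the object back instead of freeing it. */
typedef struct PoolItem {
    struct PoolItem *next;
    int payload;
} PoolItem;

typedef struct ObjPool {
    PoolItem *free_list;
    int nb_allocs; /* observable in tests: how often malloc is hit */
} ObjPool;

static PoolItem *pool_get(ObjPool *p)
{
    PoolItem *it = p->free_list;
    if (it) {
        p->free_list = it->next; /* reuse a released object */
        return it;
    }
    p->nb_allocs++;
    return calloc(1, sizeof(*it));
}

static void pool_release(ObjPool *p, PoolItem *it)
{
    it->next = p->free_list; /* push back for reuse, no free() */
    p->free_list = it;
}
```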
---
fftools/ffmpeg.c | 32 ++++++------
fftools/ffmpeg.h | 3 +-
fftools/ffmpeg_demux.c | 108 ++++++++++++++--------------------------
fftools/ffmpeg_filter.c | 5 +-
4 files changed, 58 insertions(+), 90 deletions(-)
diff --git a/fftools/ffmpeg.c b/fftools/ffmpeg.c
index 61fcda2526..cf8a50bffc 100644
--- a/fftools/ffmpeg.c
+++ b/fftools/ffmpeg.c
@@ -1043,9 +1043,6 @@ static int check_keyboard_interaction(int64_t cur_time)
static void reset_eagain(void)
{
- int i;
- for (i = 0; i < nb_input_files; i++)
- input_files[i]->eagain = 0;
for (OutputStream *ost = ost_iter(NULL); ost; ost = ost_iter(ost))
ost->unavailable = 0;
}
@@ -1069,19 +1066,14 @@ static void decode_flush(InputFile *ifile)
* this function should be called again
* - AVERROR_EOF -- this function should not be called again
*/
-static int process_input(int file_index)
+static int process_input(int file_index, AVPacket *pkt)
{
InputFile *ifile = input_files[file_index];
InputStream *ist;
- AVPacket *pkt;
int ret, i;
- ret = ifile_get_packet(ifile, &pkt);
+ ret = ifile_get_packet(ifile, pkt);
- if (ret == AVERROR(EAGAIN)) {
- ifile->eagain = 1;
- return ret;
- }
if (ret == 1) {
/* the input file is looped: flush the decoders */
decode_flush(ifile);
@@ -1128,7 +1120,7 @@ static int process_input(int file_index)
ret = process_input_packet(ist, pkt, 0);
- av_packet_free(&pkt);
+ av_packet_unref(pkt);
return ret < 0 ? ret : 0;
}
@@ -1138,7 +1130,7 @@ static int process_input(int file_index)
*
* @return 0 for success, <0 for error
*/
-static int transcode_step(OutputStream *ost)
+static int transcode_step(OutputStream *ost, AVPacket *demux_pkt)
{
InputStream *ist = NULL;
int ret;
@@ -1153,10 +1145,8 @@ static int transcode_step(OutputStream *ost)
av_assert0(ist);
}
- ret = process_input(ist->file_index);
+ ret = process_input(ist->file_index, demux_pkt);
if (ret == AVERROR(EAGAIN)) {
- if (input_files[ist->file_index]->eagain)
- ost->unavailable = 1;
return 0;
}
@@ -1182,12 +1172,19 @@ static int transcode(int *err_rate_exceeded)
int ret = 0, i;
InputStream *ist;
int64_t timer_start;
+ AVPacket *demux_pkt = NULL;
print_stream_maps();
*err_rate_exceeded = 0;
atomic_store(&transcode_init_done, 1);
+ demux_pkt = av_packet_alloc();
+ if (!demux_pkt) {
+ ret = AVERROR(ENOMEM);
+ goto fail;
+ }
+
if (stdin_interaction) {
av_log(NULL, AV_LOG_INFO, "Press [q] to stop, [?] for help\n");
}
@@ -1215,7 +1212,7 @@ static int transcode(int *err_rate_exceeded)
break;
}
- ret = transcode_step(ost);
+ ret = transcode_step(ost, demux_pkt);
if (ret < 0 && ret != AVERROR_EOF) {
av_log(NULL, AV_LOG_ERROR, "Error while filtering: %s\n", av_err2str(ret));
break;
@@ -1256,6 +1253,9 @@ static int transcode(int *err_rate_exceeded)
/* dump report by using the first video and audio streams */
print_report(1, timer_start, av_gettime_relative());
+fail:
+ av_packet_free(&demux_pkt);
+
return ret;
}
diff --git a/fftools/ffmpeg.h b/fftools/ffmpeg.h
index f50222472c..3c153021f8 100644
--- a/fftools/ffmpeg.h
+++ b/fftools/ffmpeg.h
@@ -407,7 +407,6 @@ typedef struct InputFile {
AVFormatContext *ctx;
int eof_reached; /* true if eof reached */
- int eagain; /* true if last read attempt returned EAGAIN */
int64_t input_ts_offset;
int input_sync_ref;
/**
@@ -859,7 +858,7 @@ void ifile_close(InputFile **f);
* caller should flush decoders and read from this demuxer again
* - a negative error code on failure
*/
-int ifile_get_packet(InputFile *f, AVPacket **pkt);
+int ifile_get_packet(InputFile *f, AVPacket *pkt);
int ist_output_add(InputStream *ist, OutputStream *ost);
int ist_filter_add(InputStream *ist, InputFilter *ifilter, int is_simple);
diff --git a/fftools/ffmpeg_demux.c b/fftools/ffmpeg_demux.c
index 791952f120..65a5e08ca5 100644
--- a/fftools/ffmpeg_demux.c
+++ b/fftools/ffmpeg_demux.c
@@ -21,6 +21,8 @@
#include "ffmpeg.h"
#include "ffmpeg_utils.h"
+#include "objpool.h"
+#include "thread_queue.h"
#include "libavutil/avassert.h"
#include "libavutil/avstring.h"
@@ -33,7 +35,6 @@
#include "libavutil/time.h"
#include "libavutil/timestamp.h"
#include "libavutil/thread.h"
-#include "libavutil/threadmessage.h"
#include "libavcodec/packet.h"
@@ -107,19 +108,13 @@ typedef struct Demuxer {
double readrate_initial_burst;
- AVThreadMessageQueue *in_thread_queue;
+ ThreadQueue *thread_queue;
int thread_queue_size;
pthread_t thread;
- int non_blocking;
int read_started;
} Demuxer;
-typedef struct DemuxMsg {
- AVPacket *pkt;
- int looping;
-} DemuxMsg;
-
static DemuxStream *ds_from_ist(InputStream *ist)
{
return (DemuxStream*)ist;
@@ -440,26 +435,16 @@ static int ts_fixup(Demuxer *d, AVPacket *pkt)
return 0;
}
-// process an input packet into a message to send to the consumer thread
-// src is always cleared by this function
-static int input_packet_process(Demuxer *d, DemuxMsg *msg, AVPacket *src)
+static int input_packet_process(Demuxer *d, AVPacket *pkt)
{
InputFile *f = &d->f;
- InputStream *ist = f->streams[src->stream_index];
+ InputStream *ist = f->streams[pkt->stream_index];
DemuxStream *ds = ds_from_ist(ist);
- AVPacket *pkt;
int ret = 0;
- pkt = av_packet_alloc();
- if (!pkt) {
- av_packet_unref(src);
- return AVERROR(ENOMEM);
- }
- av_packet_move_ref(pkt, src);
-
ret = ts_fixup(d, pkt);
if (ret < 0)
- goto fail;
+ return ret;
ds->data_size += pkt->size;
ds->nb_packets++;
@@ -475,13 +460,7 @@ static int input_packet_process(Demuxer *d, DemuxMsg *msg, AVPacket *src)
av_ts2timestr(input_files[ist->file_index]->ts_offset, &AV_TIME_BASE_Q));
}
- msg->pkt = pkt;
- pkt = NULL;
-
-fail:
- av_packet_free(&pkt);
-
- return ret;
+ return 0;
}
static void readrate_sleep(Demuxer *d)
@@ -531,7 +510,6 @@ static void *input_thread(void *arg)
Demuxer *d = arg;
InputFile *f = &d->f;
AVPacket *pkt;
- unsigned flags = d->non_blocking ? AV_THREAD_MESSAGE_NONBLOCK : 0;
int ret = 0;
pkt = av_packet_alloc();
@@ -547,8 +525,6 @@ static void *input_thread(void *arg)
d->wallclock_start = av_gettime_relative();
while (1) {
- DemuxMsg msg = { NULL };
-
ret = av_read_frame(f->ctx, pkt);
if (ret == AVERROR(EAGAIN)) {
@@ -558,8 +534,8 @@ static void *input_thread(void *arg)
if (ret < 0) {
if (d->loop) {
/* signal looping to the consumer thread */
- msg.looping = 1;
- ret = av_thread_message_queue_send(d->in_thread_queue, &msg, 0);
+ pkt->stream_index = -1;
+ ret = tq_send(d->thread_queue, 0, pkt);
if (ret >= 0)
ret = seek_to_start(d);
if (ret >= 0)
@@ -602,35 +578,26 @@ static void *input_thread(void *arg)
}
}
- ret = input_packet_process(d, &msg, pkt);
+ ret = input_packet_process(d, pkt);
if (ret < 0)
break;
if (f->readrate)
readrate_sleep(d);
- ret = av_thread_message_queue_send(d->in_thread_queue, &msg, flags);
- if (flags && ret == AVERROR(EAGAIN)) {
- flags = 0;
- ret = av_thread_message_queue_send(d->in_thread_queue, &msg, flags);
- av_log(f, AV_LOG_WARNING,
- "Thread message queue blocking; consider raising the "
- "thread_queue_size option (current value: %d)\n",
- d->thread_queue_size);
- }
+ ret = tq_send(d->thread_queue, 0, pkt);
if (ret < 0) {
if (ret != AVERROR_EOF)
av_log(f, AV_LOG_ERROR,
"Unable to send packet to main thread: %s\n",
av_err2str(ret));
- av_packet_free(&msg.pkt);
break;
}
}
finish:
av_assert0(ret < 0);
- av_thread_message_queue_set_err_recv(d->in_thread_queue, ret);
+ tq_send_finish(d->thread_queue, 0);
av_packet_free(&pkt);
@@ -642,16 +609,16 @@ finish:
static void thread_stop(Demuxer *d)
{
InputFile *f = &d->f;
- DemuxMsg msg;
- if (!d->in_thread_queue)
+ if (!d->thread_queue)
return;
- av_thread_message_queue_set_err_send(d->in_thread_queue, AVERROR_EOF);
- while (av_thread_message_queue_recv(d->in_thread_queue, &msg, 0) >= 0)
- av_packet_free(&msg.pkt);
+
+ tq_receive_finish(d->thread_queue, 0);
pthread_join(d->thread, NULL);
- av_thread_message_queue_free(&d->in_thread_queue);
+
+ tq_free(&d->thread_queue);
+
av_thread_message_queue_free(&f->audio_ts_queue);
}
@@ -659,18 +626,20 @@ static int thread_start(Demuxer *d)
{
int ret;
InputFile *f = &d->f;
+ ObjPool *op;
if (d->thread_queue_size <= 0)
d->thread_queue_size = (nb_input_files > 1 ? 8 : 1);
- if (nb_input_files > 1 &&
- (f->ctx->pb ? !f->ctx->pb->seekable :
- strcmp(f->ctx->iformat->name, "lavfi")))
- d->non_blocking = 1;
- ret = av_thread_message_queue_alloc(&d->in_thread_queue,
- d->thread_queue_size, sizeof(DemuxMsg));
- if (ret < 0)
- return ret;
+ op = objpool_alloc_packets();
+ if (!op)
+ return AVERROR(ENOMEM);
+
+ d->thread_queue = tq_alloc(1, d->thread_queue_size, op, pkt_move);
+ if (!d->thread_queue) {
+ objpool_free(&op);
+ return AVERROR(ENOMEM);
+ }
if (d->loop) {
int nb_audio_dec = 0;
@@ -700,31 +669,30 @@ static int thread_start(Demuxer *d)
return 0;
fail:
- av_thread_message_queue_free(&d->in_thread_queue);
+ tq_free(&d->thread_queue);
return ret;
}
-int ifile_get_packet(InputFile *f, AVPacket **pkt)
+int ifile_get_packet(InputFile *f, AVPacket *pkt)
{
Demuxer *d = demuxer_from_ifile(f);
- DemuxMsg msg;
- int ret;
+ int ret, dummy;
- if (!d->in_thread_queue) {
+ if (!d->thread_queue) {
ret = thread_start(d);
if (ret < 0)
return ret;
}
- ret = av_thread_message_queue_recv(d->in_thread_queue, &msg,
- d->non_blocking ?
- AV_THREAD_MESSAGE_NONBLOCK : 0);
+ ret = tq_receive(d->thread_queue, &dummy, pkt);
if (ret < 0)
return ret;
- if (msg.looping)
- return 1;
- *pkt = msg.pkt;
+ if (pkt->stream_index == -1) {
+ av_assert0(!pkt->data && !pkt->side_data_elems);
+ return 1;
+ }
+
return 0;
}
diff --git a/fftools/ffmpeg_filter.c b/fftools/ffmpeg_filter.c
index cd4b55dd40..d8320b7526 100644
--- a/fftools/ffmpeg_filter.c
+++ b/fftools/ffmpeg_filter.c
@@ -2010,9 +2010,8 @@ static int choose_input(const FilterGraph *fg, const FilterGraphThread *fgt)
for (int i = 0; i < fg->nb_inputs; i++) {
InputFilter *ifilter = fg->inputs[i];
InputFilterPriv *ifp = ifp_from_ifilter(ifilter);
- InputStream *ist = ifp->ist;
- if (input_files[ist->file_index]->eagain || fgt->eof_in[i])
+ if (fgt->eof_in[i])
continue;
nb_requests = av_buffersrc_get_nb_failed_requests(ifp->filter);
@@ -2022,6 +2021,8 @@ static int choose_input(const FilterGraph *fg, const FilterGraphThread *fgt)
}
}
+ av_assert0(best_input >= 0);
+
return best_input;
}
--
2.42.0
* [FFmpeg-devel] [PATCH 11/13] fftools/ffmpeg_enc: move encoding to a separate thread
2023-11-23 19:14 [FFmpeg-devel] [PATCH v2] ffmpeg CLI multithreading Anton Khirnov
` (9 preceding siblings ...)
2023-11-23 19:15 ` [FFmpeg-devel] [PATCH 10/13] fftools/ffmpeg_demux: switch from AVThreadMessageQueue to ThreadQueue Anton Khirnov
@ 2023-11-23 19:15 ` Anton Khirnov
2023-11-23 19:15 ` [FFmpeg-devel] [PATCH 12/13] fftools/ffmpeg: add thread-aware transcode scheduling infrastructure Anton Khirnov
2023-11-23 19:15 ` [FFmpeg-devel] [PATCH 13/13] fftools/ffmpeg: convert to a threaded architecture Anton Khirnov
12 siblings, 0 replies; 49+ messages in thread
From: Anton Khirnov @ 2023-11-23 19:15 UTC (permalink / raw)
To: ffmpeg-devel
As with the analogous decoding change, this is only a preparatory step to
a fully threaded architecture and does not yet make encoding truly
parallel. The main thread will currently submit a frame and wait until
it has been fully processed by the encoder before moving on. That will
change in future commits after filters are moved to threads and a
thread-aware scheduler is added.
This code suffers from a known issue: if an encoder with a sync queue
receives EOF it will terminate after processing everything it currently
has, even though the sync queue might still be triggered by other
threads. That will be fixed in following commits.
---
fftools/ffmpeg_enc.c | 360 ++++++++++++++++++++++++++++++++++++++-----
1 file changed, 320 insertions(+), 40 deletions(-)
diff --git a/fftools/ffmpeg_enc.c b/fftools/ffmpeg_enc.c
index fa4539664f..46c21fc0e4 100644
--- a/fftools/ffmpeg_enc.c
+++ b/fftools/ffmpeg_enc.c
@@ -20,6 +20,8 @@
#include <stdint.h>
#include "ffmpeg.h"
+#include "ffmpeg_utils.h"
+#include "thread_queue.h"
#include "libavutil/avassert.h"
#include "libavutil/avstring.h"
@@ -43,6 +45,7 @@ struct Encoder {
// packet for receiving encoded output
AVPacket *pkt;
+ AVFrame *sub_frame;
// combined size of all the packets received from the encoder
uint64_t data_size;
@@ -51,8 +54,48 @@ struct Encoder {
uint64_t packets_encoded;
int opened;
+ int finished;
+
+ pthread_t thread;
+ /**
+ * Queue for sending frames from the main thread to
+ * the encoder thread.
+ */
+ ThreadQueue *queue_in;
+ /**
+ * Queue for sending encoded packets from the encoder thread
+ * to the main thread.
+ *
+ * An empty packet is sent to signal that a previously sent
+ * frame has been fully processed.
+ */
+ ThreadQueue *queue_out;
};
+// data that is local to the encoder thread and not visible outside of it
+typedef struct EncoderThread {
+ AVFrame *frame;
+ AVPacket *pkt;
+} EncoderThread;
+
+static int enc_thread_stop(Encoder *e)
+{
+ void *ret;
+
+ if (!e->queue_in)
+ return 0;
+
+ tq_send_finish(e->queue_in, 0);
+ tq_receive_finish(e->queue_out, 0);
+
+ pthread_join(e->thread, &ret);
+
+ tq_free(&e->queue_in);
+ tq_free(&e->queue_out);
+
+ return (int)(intptr_t)ret;
+}
+
void enc_free(Encoder **penc)
{
Encoder *enc = *penc;
@@ -60,7 +103,10 @@ void enc_free(Encoder **penc)
if (!enc)
return;
+ enc_thread_stop(enc);
+
av_frame_free(&enc->sq_frame);
+ av_frame_free(&enc->sub_frame);
av_packet_free(&enc->pkt);
@@ -77,6 +123,12 @@ int enc_alloc(Encoder **penc, const AVCodec *codec)
if (!enc)
return AVERROR(ENOMEM);
+ if (codec->type == AVMEDIA_TYPE_SUBTITLE) {
+ enc->sub_frame = av_frame_alloc();
+ if (!enc->sub_frame)
+ goto fail;
+ }
+
enc->pkt = av_packet_alloc();
if (!enc->pkt)
goto fail;
@@ -165,6 +217,52 @@ static int set_encoder_id(OutputFile *of, OutputStream *ost)
return 0;
}
+static void *encoder_thread(void *arg);
+
+static int enc_thread_start(OutputStream *ost)
+{
+ Encoder *e = ost->enc;
+ ObjPool *op;
+ int ret = 0;
+
+ op = objpool_alloc_frames();
+ if (!op)
+ return AVERROR(ENOMEM);
+
+ e->queue_in = tq_alloc(1, 1, op, frame_move);
+ if (!e->queue_in) {
+ objpool_free(&op);
+ return AVERROR(ENOMEM);
+ }
+
+ op = objpool_alloc_packets();
+ if (!op)
+ goto fail;
+
+ e->queue_out = tq_alloc(1, 4, op, pkt_move);
+ if (!e->queue_out) {
+ objpool_free(&op);
+ goto fail;
+ }
+
+ ret = pthread_create(&e->thread, NULL, encoder_thread, ost);
+ if (ret) {
+ ret = AVERROR(ret);
+ av_log(ost, AV_LOG_ERROR, "pthread_create() failed: %s\n",
+ av_err2str(ret));
+ goto fail;
+ }
+
+ return 0;
+fail:
+ if (ret >= 0)
+ ret = AVERROR(ENOMEM);
+
+ tq_free(&e->queue_in);
+ tq_free(&e->queue_out);
+ return ret;
+}
+
int enc_open(OutputStream *ost, const AVFrame *frame)
{
InputStream *ist = ost->ist;
@@ -373,6 +471,13 @@ int enc_open(OutputStream *ost, const AVFrame *frame)
if (ost->st->time_base.num <= 0 || ost->st->time_base.den <= 0)
ost->st->time_base = av_add_q(ost->enc_ctx->time_base, (AVRational){0, 1});
+ ret = enc_thread_start(ost);
+ if (ret < 0) {
+ av_log(ost, AV_LOG_ERROR, "Error starting encoder thread: %s\n",
+ av_err2str(ret));
+ return ret;
+ }
+
ret = of_stream_init(of, ost);
if (ret < 0)
return ret;
@@ -386,19 +491,18 @@ static int check_recording_time(OutputStream *ost, int64_t ts, AVRational tb)
if (of->recording_time != INT64_MAX &&
av_compare_ts(ts, tb, of->recording_time, AV_TIME_BASE_Q) >= 0) {
- close_output_stream(ost);
return 0;
}
return 1;
}
-int enc_subtitle(OutputFile *of, OutputStream *ost, const AVSubtitle *sub)
+static int do_subtitle_out(OutputFile *of, OutputStream *ost, const AVSubtitle *sub,
+ AVPacket *pkt)
{
Encoder *e = ost->enc;
int subtitle_out_max_size = 1024 * 1024;
int subtitle_out_size, nb, i, ret;
AVCodecContext *enc;
- AVPacket *pkt = e->pkt;
int64_t pts;
if (sub->pts == AV_NOPTS_VALUE) {
@@ -429,7 +533,7 @@ int enc_subtitle(OutputFile *of, OutputStream *ost, const AVSubtitle *sub)
AVSubtitle local_sub = *sub;
if (!check_recording_time(ost, pts, AV_TIME_BASE_Q))
- return 0;
+ return AVERROR_EOF;
ret = av_new_packet(pkt, subtitle_out_max_size);
if (ret < 0)
@@ -470,9 +574,11 @@ int enc_subtitle(OutputFile *of, OutputStream *ost, const AVSubtitle *sub)
}
pkt->dts = pkt->pts;
- ret = of_output_packet(of, ost, pkt);
- if (ret < 0)
+ ret = tq_send(e->queue_out, 0, pkt);
+ if (ret < 0) {
+ av_packet_unref(pkt);
return ret;
+ }
}
return 0;
@@ -610,11 +716,11 @@ static int update_video_stats(OutputStream *ost, const AVPacket *pkt, int write_
return 0;
}
-static int encode_frame(OutputFile *of, OutputStream *ost, AVFrame *frame)
+static int encode_frame(OutputFile *of, OutputStream *ost, AVFrame *frame,
+ AVPacket *pkt)
{
Encoder *e = ost->enc;
AVCodecContext *enc = ost->enc_ctx;
- AVPacket *pkt = e->pkt;
const char *type_desc = av_get_media_type_string(enc->codec_type);
const char *action = frame ? "encode" : "flush";
int ret;
@@ -664,11 +770,9 @@ static int encode_frame(OutputFile *of, OutputStream *ost, AVFrame *frame)
if (ret == AVERROR(EAGAIN)) {
av_assert0(frame); // should never happen during flushing
return 0;
- } else if (ret == AVERROR_EOF) {
- ret = of_output_packet(of, ost, NULL);
- return ret < 0 ? ret : AVERROR_EOF;
} else if (ret < 0) {
- av_log(ost, AV_LOG_ERROR, "%s encoding failed\n", type_desc);
+ if (ret != AVERROR_EOF)
+ av_log(ost, AV_LOG_ERROR, "%s encoding failed\n", type_desc);
return ret;
}
@@ -703,22 +807,24 @@ static int encode_frame(OutputFile *of, OutputStream *ost, AVFrame *frame)
e->packets_encoded++;
- ret = of_output_packet(of, ost, pkt);
- if (ret < 0)
+ ret = tq_send(e->queue_out, 0, pkt);
+ if (ret < 0) {
+ av_packet_unref(pkt);
return ret;
+ }
}
av_assert0(0);
}
static int submit_encode_frame(OutputFile *of, OutputStream *ost,
- AVFrame *frame)
+ AVFrame *frame, AVPacket *pkt)
{
Encoder *e = ost->enc;
int ret;
if (ost->sq_idx_encode < 0)
- return encode_frame(of, ost, frame);
+ return encode_frame(of, ost, frame, pkt);
if (frame) {
ret = av_frame_ref(e->sq_frame, frame);
@@ -747,22 +853,18 @@ static int submit_encode_frame(OutputFile *of, OutputStream *ost,
return (ret == AVERROR(EAGAIN)) ? 0 : ret;
}
- ret = encode_frame(of, ost, enc_frame);
+ ret = encode_frame(of, ost, enc_frame, pkt);
if (enc_frame)
av_frame_unref(enc_frame);
- if (ret < 0) {
- if (ret == AVERROR_EOF)
- close_output_stream(ost);
+ if (ret < 0)
return ret;
- }
}
}
static int do_audio_out(OutputFile *of, OutputStream *ost,
- AVFrame *frame)
+ AVFrame *frame, AVPacket *pkt)
{
AVCodecContext *enc = ost->enc_ctx;
- int ret;
if (!(enc->codec->capabilities & AV_CODEC_CAP_PARAM_CHANGE) &&
enc->ch_layout.nb_channels != frame->ch_layout.nb_channels) {
@@ -772,10 +874,9 @@ static int do_audio_out(OutputFile *of, OutputStream *ost,
}
if (!check_recording_time(ost, frame->pts, frame->time_base))
- return 0;
+ return AVERROR_EOF;
- ret = submit_encode_frame(of, ost, frame);
- return (ret < 0 && ret != AVERROR_EOF) ? ret : 0;
+ return submit_encode_frame(of, ost, frame, pkt);
}
static enum AVPictureType forced_kf_apply(void *logctx, KeyframeForceCtx *kf,
@@ -825,13 +926,13 @@ force_keyframe:
}
/* May modify/reset frame */
-static int do_video_out(OutputFile *of, OutputStream *ost, AVFrame *in_picture)
+static int do_video_out(OutputFile *of, OutputStream *ost,
+ AVFrame *in_picture, AVPacket *pkt)
{
- int ret;
AVCodecContext *enc = ost->enc_ctx;
if (!check_recording_time(ost, in_picture->pts, ost->enc_ctx->time_base))
- return 0;
+ return AVERROR_EOF;
in_picture->quality = enc->global_quality;
in_picture->pict_type = forced_kf_apply(ost, &ost->kf, enc->time_base, in_picture);
@@ -843,26 +944,203 @@ static int do_video_out(OutputFile *of, OutputStream *ost, AVFrame *in_picture)
}
#endif
- ret = submit_encode_frame(of, ost, in_picture);
- return (ret == AVERROR_EOF) ? 0 : ret;
+ return submit_encode_frame(of, ost, in_picture, pkt);
+}
+
+static int frame_encode(OutputStream *ost, AVFrame *frame, AVPacket *pkt)
+{
+ OutputFile *of = output_files[ost->file_index];
+ enum AVMediaType type = ost->type;
+
+ if (type == AVMEDIA_TYPE_SUBTITLE) {
+ // no flushing for subtitles
+ return frame ?
+ do_subtitle_out(of, ost, (AVSubtitle*)frame->buf[0]->data, pkt) : 0;
+ }
+
+ if (frame) {
+ return (type == AVMEDIA_TYPE_VIDEO) ? do_video_out(of, ost, frame, pkt) :
+ do_audio_out(of, ost, frame, pkt);
+ }
+
+ return submit_encode_frame(of, ost, NULL, pkt);
+}
+
+static void enc_thread_set_name(const OutputStream *ost)
+{
+ char name[16];
+ snprintf(name, sizeof(name), "enc%d:%d:%s", ost->file_index, ost->index,
+ ost->enc_ctx->codec->name);
+ ff_thread_setname(name);
+}
+
+static void enc_thread_uninit(EncoderThread *et)
+{
+ av_packet_free(&et->pkt);
+ av_frame_free(&et->frame);
+
+ memset(et, 0, sizeof(*et));
+}
+
+static int enc_thread_init(EncoderThread *et)
+{
+ memset(et, 0, sizeof(*et));
+
+ et->frame = av_frame_alloc();
+ if (!et->frame)
+ goto fail;
+
+ et->pkt = av_packet_alloc();
+ if (!et->pkt)
+ goto fail;
+
+ return 0;
+
+fail:
+ enc_thread_uninit(et);
+ return AVERROR(ENOMEM);
+}
+
+static void *encoder_thread(void *arg)
+{
+ OutputStream *ost = arg;
+ OutputFile *of = output_files[ost->file_index];
+ Encoder *e = ost->enc;
+ EncoderThread et;
+ int ret = 0, input_status = 0;
+
+ ret = enc_thread_init(&et);
+ if (ret < 0)
+ goto finish;
+
+ enc_thread_set_name(ost);
+
+ while (!input_status) {
+ int dummy;
+
+ input_status = tq_receive(e->queue_in, &dummy, et.frame);
+ if (input_status < 0)
+ av_log(ost, AV_LOG_VERBOSE, "Encoder thread received EOF\n");
+
+ ret = frame_encode(ost, input_status >= 0 ? et.frame : NULL, et.pkt);
+
+ av_packet_unref(et.pkt);
+ av_frame_unref(et.frame);
+
+ if (ret < 0) {
+ if (ret == AVERROR_EOF)
+ av_log(ost, AV_LOG_VERBOSE, "Encoder returned EOF, finishing\n");
+ else
+ av_log(ost, AV_LOG_ERROR, "Error encoding a frame: %s\n",
+ av_err2str(ret));
+ break;
+ }
+
+ // signal to the consumer thread that the frame was encoded
+ ret = tq_send(e->queue_out, 0, et.pkt);
+ if (ret < 0) {
+ if (ret != AVERROR_EOF)
+ av_log(ost, AV_LOG_ERROR,
+ "Error communicating with the main thread\n");
+ break;
+ }
+ }
+
+ // EOF is normal thread termination
+ if (ret == AVERROR_EOF)
+ ret = 0;
+
+finish:
+ if (ost->sq_idx_encode >= 0)
+ sq_send(of->sq_encode, ost->sq_idx_encode, SQFRAME(NULL));
+
+ tq_receive_finish(e->queue_in, 0);
+ tq_send_finish (e->queue_out, 0);
+
+ enc_thread_uninit(&et);
+
+ av_log(ost, AV_LOG_VERBOSE, "Terminating encoder thread\n");
+
+ return (void*)(intptr_t)ret;
}
int enc_frame(OutputStream *ost, AVFrame *frame)
{
OutputFile *of = output_files[ost->file_index];
- int ret;
+ Encoder *e = ost->enc;
+ int ret, thread_ret;
ret = enc_open(ost, frame);
if (ret < 0)
return ret;
- return ost->enc_ctx->codec_type == AVMEDIA_TYPE_VIDEO ?
- do_video_out(of, ost, frame) : do_audio_out(of, ost, frame);
+ if (!e->queue_in)
+ return AVERROR_EOF;
+
+ // send the frame/EOF to the encoder thread
+ if (frame) {
+ ret = tq_send(e->queue_in, 0, frame);
+ if (ret < 0)
+ goto finish;
+ } else
+ tq_send_finish(e->queue_in, 0);
+
+ // retrieve all encoded data for the frame
+ while (1) {
+ int dummy;
+
+ ret = tq_receive(e->queue_out, &dummy, e->pkt);
+ if (ret < 0)
+ break;
+
+ // frame fully encoded
+ if (!e->pkt->data && !e->pkt->side_data_elems)
+ return 0;
+
+ // process the encoded packet
+ ret = of_output_packet(of, ost, e->pkt);
+ if (ret < 0)
+ goto finish;
+ }
+
+finish:
+ thread_ret = enc_thread_stop(e);
+ if (thread_ret < 0) {
+ av_log(ost, AV_LOG_ERROR, "Encoder thread returned error: %s\n",
+ av_err2str(thread_ret));
+ ret = err_merge(ret, thread_ret);
+ }
+
+ if (ret < 0 && ret != AVERROR_EOF)
+ return ret;
+
+ // signal EOF to the muxer
+ return of_output_packet(of, ost, NULL);
+}
+
+int enc_subtitle(OutputFile *of, OutputStream *ost, const AVSubtitle *sub)
+{
+ Encoder *e = ost->enc;
+ AVFrame *f = e->sub_frame;
+ int ret;
+
+ // XXX the queue for transferring data to the encoder thread runs
+ // on AVFrames, so we wrap AVSubtitle in an AVBufferRef and put
+ // that inside the frame
+ // eventually, subtitles should be switched to use AVFrames natively
+ ret = subtitle_wrap_frame(f, sub, 1);
+ if (ret < 0)
+ return ret;
+
+ ret = enc_frame(ost, f);
+ av_frame_unref(f);
+
+ return ret;
}
int enc_flush(void)
{
- int ret;
+ int ret = 0;
for (OutputStream *ost = ost_iter(NULL); ost; ost = ost_iter(ost)) {
OutputFile *of = output_files[ost->file_index];
@@ -873,16 +1151,18 @@ int enc_flush(void)
for (OutputStream *ost = ost_iter(NULL); ost; ost = ost_iter(ost)) {
Encoder *e = ost->enc;
AVCodecContext *enc = ost->enc_ctx;
- OutputFile *of = output_files[ost->file_index];
+ int err;
if (!enc || !e->opened ||
(enc->codec_type != AVMEDIA_TYPE_VIDEO && enc->codec_type != AVMEDIA_TYPE_AUDIO))
continue;
- ret = submit_encode_frame(of, ost, NULL);
- if (ret != AVERROR_EOF)
- return ret;
+ err = enc_frame(ost, NULL);
+ if (err != AVERROR_EOF)
+ ret = err_merge(ret, err);
+
+ av_assert0(!e->queue_in);
}
- return 0;
+ return ret;
}
--
2.42.0
* [FFmpeg-devel] [PATCH 12/13] fftools/ffmpeg: add thread-aware transcode scheduling infrastructure
2023-11-23 19:14 [FFmpeg-devel] [PATCH v2] ffmpeg CLI multithreading Anton Khirnov
` (10 preceding siblings ...)
2023-11-23 19:15 ` [FFmpeg-devel] [PATCH 11/13] fftools/ffmpeg_enc: move encoding to a separate thread Anton Khirnov
@ 2023-11-23 19:15 ` Anton Khirnov
2023-11-23 19:15 ` [FFmpeg-devel] [PATCH 13/13] fftools/ffmpeg: convert to a threaded architecture Anton Khirnov
12 siblings, 0 replies; 49+ messages in thread
From: Anton Khirnov @ 2023-11-23 19:15 UTC (permalink / raw)
To: ffmpeg-devel
See the comment block at the top of fftools/ffmpeg_sched.h for more
details on what this scheduler is for.
This commit adds the scheduling code itself, along with minimal
integration with the rest of the program:
* allocating and freeing the scheduler
* passing it throughout the call stack in order to register the
individual components (demuxers/decoders/filtergraphs/encoders/muxers)
with the scheduler
The scheduler is not actually used as of this commit, so it should not
result in any change in behavior. That will change in future commits.
---
fftools/Makefile | 1 +
fftools/ffmpeg.c | 18 +-
fftools/ffmpeg.h | 24 +-
fftools/ffmpeg_dec.c | 10 +-
fftools/ffmpeg_demux.c | 46 +-
fftools/ffmpeg_enc.c | 13 +-
fftools/ffmpeg_filter.c | 37 +-
fftools/ffmpeg_mux.c | 17 +-
fftools/ffmpeg_mux.h | 12 +
fftools/ffmpeg_mux_init.c | 106 +-
fftools/ffmpeg_opt.c | 22 +-
fftools/ffmpeg_sched.c | 2174 +++++++++++++++++++++++++++++++++++++
fftools/ffmpeg_sched.h | 468 ++++++++
13 files changed, 2892 insertions(+), 56 deletions(-)
create mode 100644 fftools/ffmpeg_sched.c
create mode 100644 fftools/ffmpeg_sched.h
diff --git a/fftools/Makefile b/fftools/Makefile
index 3c763e3db9..083a1368ce 100644
--- a/fftools/Makefile
+++ b/fftools/Makefile
@@ -18,6 +18,7 @@ OBJS-ffmpeg += \
fftools/ffmpeg_mux.o \
fftools/ffmpeg_mux_init.o \
fftools/ffmpeg_opt.o \
+ fftools/ffmpeg_sched.o \
fftools/objpool.o \
fftools/sync_queue.o \
fftools/thread_queue.o \
diff --git a/fftools/ffmpeg.c b/fftools/ffmpeg.c
index cf8a50bffc..b8a97258a0 100644
--- a/fftools/ffmpeg.c
+++ b/fftools/ffmpeg.c
@@ -99,6 +99,7 @@
#include "cmdutils.h"
#include "ffmpeg.h"
+#include "ffmpeg_sched.h"
#include "ffmpeg_utils.h"
#include "sync_queue.h"
@@ -1167,7 +1168,7 @@ static int transcode_step(OutputStream *ost, AVPacket *demux_pkt)
/*
* The following code is the main loop of the file converter
*/
-static int transcode(int *err_rate_exceeded)
+static int transcode(Scheduler *sch, int *err_rate_exceeded)
{
int ret = 0, i;
InputStream *ist;
@@ -1305,6 +1306,8 @@ static int64_t getmaxrss(void)
int main(int argc, char **argv)
{
+ Scheduler *sch = NULL;
+
int ret, err_rate_exceeded;
BenchmarkTimeStamps ti;
@@ -1322,8 +1325,14 @@ int main(int argc, char **argv)
show_banner(argc, argv, options);
+ sch = sch_alloc();
+ if (!sch) {
+ ret = AVERROR(ENOMEM);
+ goto finish;
+ }
+
/* parse options and open all input/output files */
- ret = ffmpeg_parse_options(argc, argv);
+ ret = ffmpeg_parse_options(argc, argv, sch);
if (ret < 0)
goto finish;
@@ -1341,7 +1350,7 @@ int main(int argc, char **argv)
}
current_time = ti = get_benchmark_time_stamps();
- ret = transcode(&err_rate_exceeded);
+ ret = transcode(sch, &err_rate_exceeded);
if (ret >= 0 && do_benchmark) {
int64_t utime, stime, rtime;
current_time = get_benchmark_time_stamps();
@@ -1361,5 +1370,8 @@ finish:
ret = 0;
ffmpeg_cleanup(ret);
+
+ sch_free(&sch);
+
return ret;
}
diff --git a/fftools/ffmpeg.h b/fftools/ffmpeg.h
index 3c153021f8..a89038b765 100644
--- a/fftools/ffmpeg.h
+++ b/fftools/ffmpeg.h
@@ -27,6 +27,7 @@
#include <signal.h>
#include "cmdutils.h"
+#include "ffmpeg_sched.h"
#include "sync_queue.h"
#include "libavformat/avformat.h"
@@ -721,7 +722,8 @@ int parse_and_set_vsync(const char *arg, int *vsync_var, int file_idx, int st_id
int check_filter_outputs(void);
int filtergraph_is_simple(const FilterGraph *fg);
int init_simple_filtergraph(InputStream *ist, OutputStream *ost,
- char *graph_desc);
+ char *graph_desc,
+ Scheduler *sch, unsigned sch_idx_enc);
int init_complex_filtergraph(FilterGraph *fg);
int copy_av_subtitle(AVSubtitle *dst, const AVSubtitle *src);
@@ -746,7 +748,8 @@ void ifilter_sub2video_heartbeat(InputFilter *ifilter, int64_t pts, AVRational t
*/
int ifilter_parameters_from_dec(InputFilter *ifilter, const AVCodecContext *dec);
-int ofilter_bind_ost(OutputFilter *ofilter, OutputStream *ost);
+int ofilter_bind_ost(OutputFilter *ofilter, OutputStream *ost,
+ unsigned sched_idx_enc);
/**
* Create a new filtergraph in the global filtergraph list.
@@ -754,7 +757,7 @@ int ofilter_bind_ost(OutputFilter *ofilter, OutputStream *ost);
* @param graph_desc Graph description; an av_malloc()ed string, filtergraph
* takes ownership of it.
*/
-int fg_create(FilterGraph **pfg, char *graph_desc);
+int fg_create(FilterGraph **pfg, char *graph_desc, Scheduler *sch);
void fg_free(FilterGraph **pfg);
@@ -778,7 +781,7 @@ void fg_send_command(FilterGraph *fg, double time, const char *target,
*/
int reap_filters(FilterGraph *fg, int flush);
-int ffmpeg_parse_options(int argc, char **argv);
+int ffmpeg_parse_options(int argc, char **argv, Scheduler *sch);
void enc_stats_write(OutputStream *ost, EncStats *es,
const AVFrame *frame, const AVPacket *pkt,
@@ -801,7 +804,7 @@ AVBufferRef *hw_device_for_filter(void);
int hwaccel_retrieve_data(AVCodecContext *avctx, AVFrame *input);
-int dec_open(InputStream *ist);
+int dec_open(InputStream *ist, Scheduler *sch, unsigned sch_idx);
void dec_free(Decoder **pdec);
/**
@@ -815,7 +818,8 @@ void dec_free(Decoder **pdec);
*/
int dec_packet(InputStream *ist, const AVPacket *pkt, int no_eof);
-int enc_alloc(Encoder **penc, const AVCodec *codec);
+int enc_alloc(Encoder **penc, const AVCodec *codec,
+ Scheduler *sch, unsigned sch_idx);
void enc_free(Encoder **penc);
int enc_open(OutputStream *ost, const AVFrame *frame);
@@ -831,7 +835,7 @@ int enc_flush(void);
*/
int of_stream_init(OutputFile *of, OutputStream *ost);
int of_write_trailer(OutputFile *of);
-int of_open(const OptionsContext *o, const char *filename);
+int of_open(const OptionsContext *o, const char *filename, Scheduler *sch);
void of_free(OutputFile **pof);
void of_enc_stats_close(void);
@@ -845,7 +849,7 @@ int of_streamcopy(OutputStream *ost, const AVPacket *pkt, int64_t dts);
int64_t of_filesize(OutputFile *of);
-int ifile_open(const OptionsContext *o, const char *filename);
+int ifile_open(const OptionsContext *o, const char *filename, Scheduler *sch);
void ifile_close(InputFile **f);
/**
@@ -932,4 +936,8 @@ extern const char * const opt_name_frame_rates[];
extern const char * const opt_name_top_field_first[];
#endif
+void *muxer_thread(void *arg);
+void *decoder_thread(void *arg);
+void *encoder_thread(void *arg);
+
#endif /* FFTOOLS_FFMPEG_H */
diff --git a/fftools/ffmpeg_dec.c b/fftools/ffmpeg_dec.c
index b60bad1220..90ea0d6d93 100644
--- a/fftools/ffmpeg_dec.c
+++ b/fftools/ffmpeg_dec.c
@@ -52,6 +52,9 @@ struct Decoder {
AVFrame *sub_prev[2];
AVFrame *sub_heartbeat;
+ Scheduler *sch;
+ unsigned sch_idx;
+
pthread_t thread;
/**
* Queue for sending coded packets from the main thread to
@@ -673,7 +676,7 @@ fail:
return AVERROR(ENOMEM);
}
-static void *decoder_thread(void *arg)
+void *decoder_thread(void *arg)
{
InputStream *ist = arg;
InputFile *ifile = input_files[ist->file_index];
@@ -1045,7 +1048,7 @@ static int hw_device_setup_for_decode(InputStream *ist)
return 0;
}
-int dec_open(InputStream *ist)
+int dec_open(InputStream *ist, Scheduler *sch, unsigned sch_idx)
{
Decoder *d;
const AVCodec *codec = ist->dec;
@@ -1063,6 +1066,9 @@ int dec_open(InputStream *ist)
return ret;
d = ist->decoder;
+ d->sch = sch;
+ d->sch_idx = sch_idx;
+
if (codec->type == AVMEDIA_TYPE_SUBTITLE && ist->fix_sub_duration) {
for (int i = 0; i < FF_ARRAY_ELEMS(d->sub_prev); i++) {
d->sub_prev[i] = av_frame_alloc();
diff --git a/fftools/ffmpeg_demux.c b/fftools/ffmpeg_demux.c
index 65a5e08ca5..2234dbe076 100644
--- a/fftools/ffmpeg_demux.c
+++ b/fftools/ffmpeg_demux.c
@@ -20,6 +20,7 @@
#include <stdint.h>
#include "ffmpeg.h"
+#include "ffmpeg_sched.h"
#include "ffmpeg_utils.h"
#include "objpool.h"
#include "thread_queue.h"
@@ -60,6 +61,9 @@ typedef struct DemuxStream {
// name used for logging
char log_name[32];
+ int sch_idx_stream;
+ int sch_idx_dec;
+
double ts_scale;
int streamcopy_needed;
@@ -108,6 +112,7 @@ typedef struct Demuxer {
double readrate_initial_burst;
+ Scheduler *sch;
ThreadQueue *thread_queue;
int thread_queue_size;
pthread_t thread;
@@ -780,7 +785,9 @@ void ifile_close(InputFile **pf)
static int ist_use(InputStream *ist, int decoding_needed)
{
+ Demuxer *d = demuxer_from_ifile(input_files[ist->file_index]);
DemuxStream *ds = ds_from_ist(ist);
+ int ret;
if (ist->user_set_discard == AVDISCARD_ALL) {
av_log(ist, AV_LOG_ERROR, "Cannot %s a disabled input stream\n",
@@ -788,13 +795,32 @@ static int ist_use(InputStream *ist, int decoding_needed)
return AVERROR(EINVAL);
}
+ if (ds->sch_idx_stream < 0) {
+ ret = sch_add_demux_stream(d->sch, d->f.index);
+ if (ret < 0)
+ return ret;
+ ds->sch_idx_stream = ret;
+ }
+
ist->discard = 0;
ist->st->discard = ist->user_set_discard;
ist->decoding_needed |= decoding_needed;
ds->streamcopy_needed |= !decoding_needed;
- if (decoding_needed && !avcodec_is_open(ist->dec_ctx)) {
- int ret = dec_open(ist);
+ if (decoding_needed && ds->sch_idx_dec < 0) {
+ int is_audio = ist->st->codecpar->codec_type == AVMEDIA_TYPE_AUDIO;
+
+ ret = sch_add_dec(d->sch, decoder_thread, ist, d->loop && is_audio);
+ if (ret < 0)
+ return ret;
+ ds->sch_idx_dec = ret;
+
+ ret = sch_connect(d->sch, SCH_DSTREAM(d->f.index, ds->sch_idx_stream),
+ SCH_DEC(ds->sch_idx_dec));
+ if (ret < 0)
+ return ret;
+
+ ret = dec_open(ist, d->sch, ds->sch_idx_dec);
if (ret < 0)
return ret;
}
@@ -804,6 +830,7 @@ static int ist_use(InputStream *ist, int decoding_needed)
int ist_output_add(InputStream *ist, OutputStream *ost)
{
+ DemuxStream *ds = ds_from_ist(ist);
int ret;
ret = ist_use(ist, ost->enc ? DECODING_FOR_OST : 0);
@@ -816,11 +843,12 @@ int ist_output_add(InputStream *ist, OutputStream *ost)
ist->outputs[ist->nb_outputs - 1] = ost;
- return 0;
+ return ost->enc ? ds->sch_idx_dec : ds->sch_idx_stream;
}
int ist_filter_add(InputStream *ist, InputFilter *ifilter, int is_simple)
{
+ DemuxStream *ds = ds_from_ist(ist);
int ret;
ret = ist_use(ist, is_simple ? DECODING_FOR_OST : DECODING_FOR_FILTER);
@@ -838,7 +866,7 @@ int ist_filter_add(InputStream *ist, InputFilter *ifilter, int is_simple)
if (ret < 0)
return ret;
- return 0;
+ return ds->sch_idx_dec;
}
static int choose_decoder(const OptionsContext *o, AVFormatContext *s, AVStream *st,
@@ -970,6 +998,9 @@ static DemuxStream *demux_stream_alloc(Demuxer *d, AVStream *st)
if (!ds)
return NULL;
+ ds->sch_idx_stream = -1;
+ ds->sch_idx_dec = -1;
+
ds->ist.st = st;
ds->ist.file_index = f->index;
ds->ist.index = st->index;
@@ -1295,7 +1326,7 @@ static Demuxer *demux_alloc(void)
return d;
}
-int ifile_open(const OptionsContext *o, const char *filename)
+int ifile_open(const OptionsContext *o, const char *filename, Scheduler *sch)
{
Demuxer *d;
InputFile *f;
@@ -1322,6 +1353,11 @@ int ifile_open(const OptionsContext *o, const char *filename)
f = &d->f;
+ ret = sch_add_demux(sch, input_thread, d);
+ if (ret < 0)
+ return ret;
+ d->sch = sch;
+
if (stop_time != INT64_MAX && recording_time != INT64_MAX) {
stop_time = INT64_MAX;
av_log(d, AV_LOG_WARNING, "-t and -to cannot be used together; using -t.\n");
diff --git a/fftools/ffmpeg_enc.c b/fftools/ffmpeg_enc.c
index 46c21fc0e4..9871381c0e 100644
--- a/fftools/ffmpeg_enc.c
+++ b/fftools/ffmpeg_enc.c
@@ -56,6 +56,9 @@ struct Encoder {
int opened;
int finished;
+ Scheduler *sch;
+ unsigned sch_idx;
+
pthread_t thread;
/**
* Queue for sending frames from the main thread to
@@ -113,7 +116,8 @@ void enc_free(Encoder **penc)
av_freep(penc);
}
-int enc_alloc(Encoder **penc, const AVCodec *codec)
+int enc_alloc(Encoder **penc, const AVCodec *codec,
+ Scheduler *sch, unsigned sch_idx)
{
Encoder *enc;
@@ -133,6 +137,9 @@ int enc_alloc(Encoder **penc, const AVCodec *codec)
if (!enc->pkt)
goto fail;
+ enc->sch = sch;
+ enc->sch_idx = sch_idx;
+
*penc = enc;
return 0;
@@ -217,8 +224,6 @@ static int set_encoder_id(OutputFile *of, OutputStream *ost)
return 0;
}
-static void *encoder_thread(void *arg);
-
static int enc_thread_start(OutputStream *ost)
{
Encoder *e = ost->enc;
@@ -1001,7 +1006,7 @@ fail:
return AVERROR(ENOMEM);
}
-static void *encoder_thread(void *arg)
+void *encoder_thread(void *arg)
{
OutputStream *ost = arg;
OutputFile *of = output_files[ost->file_index];
diff --git a/fftools/ffmpeg_filter.c b/fftools/ffmpeg_filter.c
index d8320b7526..1b41d32540 100644
--- a/fftools/ffmpeg_filter.c
+++ b/fftools/ffmpeg_filter.c
@@ -65,6 +65,9 @@ typedef struct FilterGraphPriv {
// frame for sending output to the encoder
AVFrame *frame_enc;
+ Scheduler *sch;
+ unsigned sch_idx;
+
pthread_t thread;
/**
* Queue for sending frames from the main thread to the filtergraph. Has
@@ -735,14 +738,19 @@ static int ifilter_bind_ist(InputFilter *ifilter, InputStream *ist)
{
InputFilterPriv *ifp = ifp_from_ifilter(ifilter);
FilterGraphPriv *fgp = fgp_from_fg(ifilter->graph);
- int ret;
+ int ret, dec_idx;
av_assert0(!ifp->ist);
ifp->ist = ist;
ifp->type_src = ist->st->codecpar->codec_type;
- ret = ist_filter_add(ist, ifilter, filtergraph_is_simple(ifilter->graph));
+ dec_idx = ist_filter_add(ist, ifilter, filtergraph_is_simple(ifilter->graph));
+ if (dec_idx < 0)
+ return dec_idx;
+
+ ret = sch_connect(fgp->sch, SCH_DEC(dec_idx),
+ SCH_FILTER_IN(fgp->sch_idx, ifp->index));
if (ret < 0)
return ret;
@@ -798,13 +806,15 @@ static int set_channel_layout(OutputFilterPriv *f, OutputStream *ost)
return 0;
}
-int ofilter_bind_ost(OutputFilter *ofilter, OutputStream *ost)
+int ofilter_bind_ost(OutputFilter *ofilter, OutputStream *ost,
+ unsigned sched_idx_enc)
{
const OutputFile *of = output_files[ost->file_index];
OutputFilterPriv *ofp = ofp_from_ofilter(ofilter);
FilterGraph *fg = ofilter->graph;
FilterGraphPriv *fgp = fgp_from_fg(fg);
const AVCodec *c = ost->enc_ctx->codec;
+ int ret;
av_assert0(!ofilter->ost);
@@ -887,6 +897,11 @@ int ofilter_bind_ost(OutputFilter *ofilter, OutputStream *ost)
break;
}
+ ret = sch_connect(fgp->sch, SCH_FILTER_OUT(fgp->sch_idx, ofp->index),
+ SCH_ENC(sched_idx_enc));
+ if (ret < 0)
+ return ret;
+
fgp->nb_outputs_bound++;
av_assert0(fgp->nb_outputs_bound <= fg->nb_outputs);
@@ -1016,7 +1031,7 @@ static const AVClass fg_class = {
.category = AV_CLASS_CATEGORY_FILTER,
};
-int fg_create(FilterGraph **pfg, char *graph_desc)
+int fg_create(FilterGraph **pfg, char *graph_desc, Scheduler *sch)
{
FilterGraphPriv *fgp;
FilterGraph *fg;
@@ -1037,6 +1052,7 @@ int fg_create(FilterGraph **pfg, char *graph_desc)
fg->index = nb_filtergraphs - 1;
fgp->graph_desc = graph_desc;
fgp->disable_conversions = !auto_conversion_filters;
+ fgp->sch = sch;
snprintf(fgp->log_name, sizeof(fgp->log_name), "fc#%d", fg->index);
@@ -1104,6 +1120,12 @@ int fg_create(FilterGraph **pfg, char *graph_desc)
goto fail;
}
+ ret = sch_add_filtergraph(sch, fg->nb_inputs, fg->nb_outputs,
+ filter_thread, fgp);
+ if (ret < 0)
+ goto fail;
+ fgp->sch_idx = ret;
+
fail:
avfilter_inout_free(&inputs);
avfilter_inout_free(&outputs);
@@ -1116,13 +1138,14 @@ fail:
}
int init_simple_filtergraph(InputStream *ist, OutputStream *ost,
- char *graph_desc)
+ char *graph_desc,
+ Scheduler *sch, unsigned sched_idx_enc)
{
FilterGraph *fg;
FilterGraphPriv *fgp;
int ret;
- ret = fg_create(&fg, graph_desc);
+ ret = fg_create(&fg, graph_desc, sch);
if (ret < 0)
return ret;
fgp = fgp_from_fg(fg);
@@ -1148,7 +1171,7 @@ int init_simple_filtergraph(InputStream *ist, OutputStream *ost,
if (ret < 0)
return ret;
- ret = ofilter_bind_ost(fg->outputs[0], ost);
+ ret = ofilter_bind_ost(fg->outputs[0], ost, sched_idx_enc);
if (ret < 0)
return ret;
diff --git a/fftools/ffmpeg_mux.c b/fftools/ffmpeg_mux.c
index 57fb8a8413..ef5c2f60e0 100644
--- a/fftools/ffmpeg_mux.c
+++ b/fftools/ffmpeg_mux.c
@@ -297,7 +297,7 @@ fail:
return AVERROR(ENOMEM);
}
-static void *muxer_thread(void *arg)
+void *muxer_thread(void *arg)
{
Muxer *mux = arg;
OutputFile *of = &mux->of;
@@ -580,7 +580,9 @@ static int thread_start(Muxer *mux)
return 0;
}
-static int print_sdp(void)
+int print_sdp(const char *filename);
+
+int print_sdp(const char *filename)
{
char sdp[16384];
int i;
@@ -613,19 +615,18 @@ static int print_sdp(void)
if (ret < 0)
goto fail;
- if (!sdp_filename) {
+ if (!filename) {
printf("SDP:\n%s\n", sdp);
fflush(stdout);
} else {
- ret = avio_open2(&sdp_pb, sdp_filename, AVIO_FLAG_WRITE, &int_cb, NULL);
+ ret = avio_open2(&sdp_pb, filename, AVIO_FLAG_WRITE, &int_cb, NULL);
if (ret < 0) {
- av_log(NULL, AV_LOG_ERROR, "Failed to open sdp file '%s'\n", sdp_filename);
+ av_log(NULL, AV_LOG_ERROR, "Failed to open sdp file '%s'\n", filename);
goto fail;
}
avio_print(sdp_pb, sdp);
avio_closep(&sdp_pb);
- av_freep(&sdp_filename);
}
// SDP successfully written, allow muxer threads to start
@@ -661,7 +662,7 @@ int mux_check_init(Muxer *mux)
nb_output_dumped++;
if (sdp_filename || want_sdp) {
- ret = print_sdp();
+ ret = print_sdp(sdp_filename);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Error writing the SDP.\n");
return ret;
@@ -984,6 +985,8 @@ void of_free(OutputFile **pof)
ost_free(&of->streams[i]);
av_freep(&of->streams);
+ av_freep(&mux->sch_stream_idx);
+
av_dict_free(&mux->opts);
av_packet_free(&mux->sq_pkt);
diff --git a/fftools/ffmpeg_mux.h b/fftools/ffmpeg_mux.h
index a2bb4dfc7d..eee2b2cb07 100644
--- a/fftools/ffmpeg_mux.h
+++ b/fftools/ffmpeg_mux.h
@@ -24,6 +24,7 @@
#include <stdatomic.h>
#include <stdint.h>
+#include "ffmpeg_sched.h"
#include "thread_queue.h"
#include "libavformat/avformat.h"
@@ -50,6 +51,10 @@ typedef struct MuxStream {
EncStats stats;
+ int sch_idx;
+ int sch_idx_enc;
+ int sch_idx_src;
+
int64_t max_frames;
/*
@@ -94,6 +99,13 @@ typedef struct Muxer {
AVFormatContext *fc;
+ Scheduler *sch;
+ unsigned sch_idx;
+
+ // OutputStream indices indexed by scheduler stream indices
+ int *sch_stream_idx;
+ int nb_sch_stream_idx;
+
pthread_t thread;
ThreadQueue *tq;
diff --git a/fftools/ffmpeg_mux_init.c b/fftools/ffmpeg_mux_init.c
index 63a25a350f..534b4379c7 100644
--- a/fftools/ffmpeg_mux_init.c
+++ b/fftools/ffmpeg_mux_init.c
@@ -23,6 +23,7 @@
#include "cmdutils.h"
#include "ffmpeg.h"
#include "ffmpeg_mux.h"
+#include "ffmpeg_sched.h"
#include "fopen_utf8.h"
#include "libavformat/avformat.h"
@@ -436,6 +437,9 @@ static MuxStream *mux_stream_alloc(Muxer *mux, enum AVMediaType type)
ms->ost.class = &output_stream_class;
+ ms->sch_idx = -1;
+ ms->sch_idx_enc = -1;
+
snprintf(ms->log_name, sizeof(ms->log_name), "%cost#%d:%d",
type_str ? *type_str : '?', mux->of.index, ms->ost.index);
@@ -1127,6 +1131,22 @@ static int ost_add(Muxer *mux, const OptionsContext *o, enum AVMediaType type,
if (!ms)
return AVERROR(ENOMEM);
+ // only streams with sources (i.e. not attachments)
+ // are handled by the scheduler
+ if (ist || ofilter) {
+ ret = GROW_ARRAY(mux->sch_stream_idx, mux->nb_sch_stream_idx);
+ if (ret < 0)
+ return ret;
+
+ ret = sch_add_mux_stream(mux->sch, mux->sch_idx);
+ if (ret < 0)
+ return ret;
+
+ av_assert0(ret == mux->nb_sch_stream_idx - 1);
+ mux->sch_stream_idx[ret] = ms->ost.index;
+ ms->sch_idx = ret;
+ }
+
ost = &ms->ost;
if (o->streamid) {
@@ -1170,7 +1190,12 @@ static int ost_add(Muxer *mux, const OptionsContext *o, enum AVMediaType type,
if (!ost->enc_ctx)
return AVERROR(ENOMEM);
- ret = enc_alloc(&ost->enc, enc);
+ ret = sch_add_enc(mux->sch, encoder_thread, ost, NULL);
+ if (ret < 0)
+ return ret;
+ ms->sch_idx_enc = ret;
+
+ ret = enc_alloc(&ost->enc, enc, mux->sch, ms->sch_idx_enc);
if (ret < 0)
return ret;
@@ -1380,11 +1405,19 @@ static int ost_add(Muxer *mux, const OptionsContext *o, enum AVMediaType type,
ost->enc_ctx->global_quality = FF_QP2LAMBDA * qscale;
}
- ms->max_muxing_queue_size = 128;
- MATCH_PER_STREAM_OPT(max_muxing_queue_size, i, ms->max_muxing_queue_size, oc, st);
+ if (ms->sch_idx >= 0) {
+ int max_muxing_queue_size = 128;
+ int muxing_queue_data_threshold = 50 * 1024 * 1024;
- ms->muxing_queue_data_threshold = 50*1024*1024;
- MATCH_PER_STREAM_OPT(muxing_queue_data_threshold, i, ms->muxing_queue_data_threshold, oc, st);
+ MATCH_PER_STREAM_OPT(max_muxing_queue_size, i, max_muxing_queue_size, oc, st);
+ MATCH_PER_STREAM_OPT(muxing_queue_data_threshold, i, muxing_queue_data_threshold, oc, st);
+
+ sch_mux_stream_buffering(mux->sch, mux->sch_idx, ms->sch_idx,
+ max_muxing_queue_size, muxing_queue_data_threshold);
+
+ ms->max_muxing_queue_size = max_muxing_queue_size;
+ ms->muxing_queue_data_threshold = muxing_queue_data_threshold;
+ }
MATCH_PER_STREAM_OPT(bits_per_raw_sample, i, ost->bits_per_raw_sample,
oc, st);
@@ -1425,23 +1458,47 @@ static int ost_add(Muxer *mux, const OptionsContext *o, enum AVMediaType type,
(type == AVMEDIA_TYPE_VIDEO || type == AVMEDIA_TYPE_AUDIO)) {
if (ofilter) {
ost->filter = ofilter;
- ret = ofilter_bind_ost(ofilter, ost);
+ ret = ofilter_bind_ost(ofilter, ost, ms->sch_idx_enc);
if (ret < 0)
return ret;
} else {
- ret = init_simple_filtergraph(ost->ist, ost, filters);
+ ret = init_simple_filtergraph(ost->ist, ost, filters,
+ mux->sch, ms->sch_idx_enc);
if (ret < 0) {
av_log(ost, AV_LOG_ERROR,
"Error initializing a simple filtergraph\n");
return ret;
}
}
+
+ ret = sch_connect(mux->sch, SCH_ENC(ms->sch_idx_enc),
+ SCH_MSTREAM(mux->sch_idx, ms->sch_idx));
+ if (ret < 0)
+ return ret;
} else if (ost->ist) {
- ret = ist_output_add(ost->ist, ost);
- if (ret < 0) {
+ int sched_idx = ist_output_add(ost->ist, ost);
+ if (sched_idx < 0) {
av_log(ost, AV_LOG_ERROR,
"Error binding an input stream\n");
- return ret;
+ return sched_idx;
+ }
+ ms->sch_idx_src = sched_idx;
+
+ if (ost->enc) {
+ ret = sch_connect(mux->sch, SCH_DEC(sched_idx),
+ SCH_ENC(ms->sch_idx_enc));
+ if (ret < 0)
+ return ret;
+
+ ret = sch_connect(mux->sch, SCH_ENC(ms->sch_idx_enc),
+ SCH_MSTREAM(mux->sch_idx, ms->sch_idx));
+ if (ret < 0)
+ return ret;
+ } else {
+ ret = sch_connect(mux->sch, SCH_DSTREAM(ost->ist->file_index, sched_idx),
+ SCH_MSTREAM(ost->file_index, ms->sch_idx));
+ if (ret < 0)
+ return ret;
}
}
@@ -1837,6 +1894,26 @@ static int create_streams(Muxer *mux, const OptionsContext *o)
if (ret < 0)
return ret;
+ // setup fix_sub_duration_heartbeat mappings
+ for (unsigned i = 0; i < oc->nb_streams; i++) {
+ MuxStream *src = ms_from_ost(mux->of.streams[i]);
+
+ if (!src->ost.fix_sub_duration_heartbeat)
+ continue;
+
+ for (unsigned j = 0; j < oc->nb_streams; j++) {
+ MuxStream *dst = ms_from_ost(mux->of.streams[j]);
+
+ if (src == dst || dst->ost.type != AVMEDIA_TYPE_SUBTITLE ||
+ !dst->ost.enc || !dst->ost.ist || !dst->ost.ist->fix_sub_duration)
+ continue;
+
+ ret = sch_mux_sub_heartbeat_add(mux->sch, mux->sch_idx, src->sch_idx,
+ dst->sch_idx_src);
+ if (ret < 0)
+ return ret;
+ }
+ }
+
if (!oc->nb_streams && !(oc->oformat->flags & AVFMT_NOSTREAMS)) {
av_dump_format(oc, nb_output_files - 1, oc->url, 1);
av_log(mux, AV_LOG_ERROR, "Output file does not contain any stream\n");
@@ -2621,7 +2698,7 @@ static Muxer *mux_alloc(void)
return mux;
}
-int of_open(const OptionsContext *o, const char *filename)
+int of_open(const OptionsContext *o, const char *filename, Scheduler *sch)
{
Muxer *mux;
AVFormatContext *oc;
@@ -2691,6 +2768,13 @@ int of_open(const OptionsContext *o, const char *filename)
AVFMT_FLAG_BITEXACT);
}
+ err = sch_add_mux(sch, muxer_thread, NULL, mux,
+ !strcmp(oc->oformat->name, "rtp"));
+ if (err < 0)
+ return err;
+ mux->sch = sch;
+ mux->sch_idx = err;
+
/* create all output streams for this file */
err = create_streams(mux, o);
if (err < 0)
diff --git a/fftools/ffmpeg_opt.c b/fftools/ffmpeg_opt.c
index 304471dd03..d463306546 100644
--- a/fftools/ffmpeg_opt.c
+++ b/fftools/ffmpeg_opt.c
@@ -28,6 +28,7 @@
#endif
#include "ffmpeg.h"
+#include "ffmpeg_sched.h"
#include "cmdutils.h"
#include "opt_common.h"
#include "sync_queue.h"
@@ -1157,20 +1158,22 @@ static int opt_audio_qscale(void *optctx, const char *opt, const char *arg)
static int opt_filter_complex(void *optctx, const char *opt, const char *arg)
{
+ Scheduler *sch = optctx;
char *graph_desc = av_strdup(arg);
if (!graph_desc)
return AVERROR(ENOMEM);
- return fg_create(NULL, graph_desc);
+ return fg_create(NULL, graph_desc, sch);
}
static int opt_filter_complex_script(void *optctx, const char *opt, const char *arg)
{
+ Scheduler *sch = optctx;
char *graph_desc = file_read(arg);
if (!graph_desc)
return AVERROR(EINVAL);
- return fg_create(NULL, graph_desc);
+ return fg_create(NULL, graph_desc, sch);
}
void show_help_default(const char *opt, const char *arg)
@@ -1262,8 +1265,9 @@ static const OptionGroupDef groups[] = {
[GROUP_INFILE] = { "input url", "i", OPT_INPUT },
};
-static int open_files(OptionGroupList *l, const char *inout,
- int (*open_file)(const OptionsContext*, const char*))
+static int open_files(OptionGroupList *l, const char *inout, Scheduler *sch,
+ int (*open_file)(const OptionsContext*, const char*,
+ Scheduler*))
{
int i, ret;
@@ -1283,7 +1287,7 @@ static int open_files(OptionGroupList *l, const char *inout,
}
av_log(NULL, AV_LOG_DEBUG, "Opening an %s file: %s.\n", inout, g->arg);
- ret = open_file(&o, g->arg);
+ ret = open_file(&o, g->arg, sch);
uninit_options(&o);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Error opening %s file %s.\n",
@@ -1296,7 +1300,7 @@ static int open_files(OptionGroupList *l, const char *inout,
return 0;
}
-int ffmpeg_parse_options(int argc, char **argv)
+int ffmpeg_parse_options(int argc, char **argv, Scheduler *sch)
{
OptionParseContext octx;
const char *errmsg = NULL;
@@ -1313,7 +1317,7 @@ int ffmpeg_parse_options(int argc, char **argv)
}
/* apply global options */
- ret = parse_optgroup(NULL, &octx.global_opts);
+ ret = parse_optgroup(sch, &octx.global_opts);
if (ret < 0) {
errmsg = "parsing global options";
goto fail;
@@ -1323,7 +1327,7 @@ int ffmpeg_parse_options(int argc, char **argv)
term_init();
/* open input files */
- ret = open_files(&octx.groups[GROUP_INFILE], "input", ifile_open);
+ ret = open_files(&octx.groups[GROUP_INFILE], "input", sch, ifile_open);
if (ret < 0) {
errmsg = "opening input files";
goto fail;
@@ -1337,7 +1341,7 @@ int ffmpeg_parse_options(int argc, char **argv)
}
/* open output files */
- ret = open_files(&octx.groups[GROUP_OUTFILE], "output", of_open);
+ ret = open_files(&octx.groups[GROUP_OUTFILE], "output", sch, of_open);
if (ret < 0) {
errmsg = "opening output files";
goto fail;
diff --git a/fftools/ffmpeg_sched.c b/fftools/ffmpeg_sched.c
new file mode 100644
index 0000000000..51144a5d3f
--- /dev/null
+++ b/fftools/ffmpeg_sched.c
@@ -0,0 +1,2174 @@
+/*
+ * Inter-thread scheduling/synchronization.
+ * Copyright (c) 2023 Anton Khirnov
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include <stdatomic.h>
+#include <stddef.h>
+#include <stdint.h>
+
+#include "cmdutils.h"
+#include "ffmpeg_sched.h"
+#include "ffmpeg_utils.h"
+#include "sync_queue.h"
+#include "thread_queue.h"
+
+#include "libavcodec/packet.h"
+
+#include "libavutil/avassert.h"
+#include "libavutil/error.h"
+#include "libavutil/fifo.h"
+#include "libavutil/frame.h"
+#include "libavutil/mem.h"
+#include "libavutil/thread.h"
+#include "libavutil/threadmessage.h"
+#include "libavutil/time.h"
+
+// 100 ms
+// FIXME: some other value? make this dynamic?
+#define SCHEDULE_TOLERANCE (100 * 1000)
+
+enum QueueType {
+ QUEUE_PACKETS,
+ QUEUE_FRAMES,
+};
+
+typedef struct SchWaiter {
+ pthread_mutex_t lock;
+ pthread_cond_t cond;
+ atomic_int choked;
+
+ // the following are internal state of schedule_update_locked() and must not
+ // be accessed outside of it
+ int choked_prev;
+ int choked_next;
+} SchWaiter;
+
+typedef struct SchTask {
+ Scheduler *parent;
+ SchedulerNode node;
+
+ SchThreadFunc func;
+ void *func_arg;
+
+ pthread_t thread;
+ int thread_running;
+} SchTask;
+
+typedef struct SchDec {
+ const AVClass *class;
+
+ SchedulerNode src;
+ SchedulerNode *dst;
+ uint8_t *dst_finished;
+ unsigned nb_dst;
+
+ SchTask task;
+ // Queue for receiving input packets, one stream.
+ ThreadQueue *queue;
+
+ // Queue for sending post-flush end timestamps back to the source
+ AVThreadMessageQueue *queue_end_ts;
+ int expect_end_ts;
+
+ // temporary storage used by sch_dec_send()
+ AVFrame *send_frame;
+} SchDec;
+
+typedef struct SchSyncQueue {
+ SyncQueue *sq;
+ AVFrame *frame;
+ pthread_mutex_t lock;
+
+ unsigned *enc_idx;
+ unsigned nb_enc_idx;
+} SchSyncQueue;
+
+typedef struct SchEnc {
+ const AVClass *class;
+
+ SchedulerNode src;
+ SchedulerNode dst;
+
+ // [0] - index of the sync queue in Scheduler.sq_enc,
+ // [1] - index of this encoder in the sq
+ int sq_idx[2];
+
+ /* Opening encoders is somewhat nontrivial due to their interaction with
+ * sync queues, which are (among other things) responsible for maintaining
+ * constant audio frame size, when it is required by the encoder.
+ *
+ * Opening the encoder requires stream parameters, obtained from the first
+ * frame. However, that frame cannot be properly chunked by the sync queue
+ * without knowing the required frame size, which is only available after
+ * opening the encoder.
+ *
+ * This apparent circular dependency is resolved in the following way:
+ * - the caller creating the encoder gives us a callback which opens the
+ * encoder and returns the required frame size (if any)
+ * - when the first frame is sent to the encoder, the sending thread
+ * - calls this callback, opening the encoder
+ * - passes the returned frame size to the sync queue
+ */
+ int (*open_cb)(void *opaque, const AVFrame *frame);
+ int opened;
+
+ SchTask task;
+ // Queue for receiving input frames, one stream.
+ ThreadQueue *queue;
+ // tq_send() to queue returned EOF
+ int in_finished;
+} SchEnc;
+
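The comment block in SchEnc above describes how the encoder/sync-queue circular dependency is broken: the encoder is opened lazily via a callback when the first frame arrives, and the frame size it reports is then fed to the sync queue. A minimal compilable model of that deferred-open pattern, with an int standing in for the AVFrame parameters (all names here are illustrative, not the actual fftools API):

```c
#include <stddef.h>

// Toy model of the open_cb mechanism: the callback "opens" the encoder from
// the first frame's parameters and returns the required audio frame size.
typedef struct ToyEnc {
    int (*open_cb)(void *opaque, int sample_rate); // stand-in for the AVFrame arg
    void *opaque;
    int opened;
    int frame_size; // filled in on the first send
} ToyEnc;

static int toy_enc_send(ToyEnc *e, int sample_rate)
{
    if (!e->opened) {
        int ret = e->open_cb(e->opaque, sample_rate);
        if (ret < 0)
            return ret;
        e->frame_size = ret; // >= 0: frame size the sync queue should chunk to
        e->opened     = 1;
    }
    return 0; // subsequent frames skip the open path entirely
}

static int toy_open(void *opaque, int sample_rate)
{
    (void)opaque;
    return sample_rate >= 44100 ? 1024 : 256; // pretend encoder frame size
}
```

The point of the pattern is that the sync queue never needs the frame size before the encoder has seen real stream parameters.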
+typedef struct SchDemuxStream {
+ SchedulerNode *dst;
+ uint8_t *dst_finished;
+ unsigned nb_dst;
+} SchDemuxStream;
+
+typedef struct SchDemux {
+ const AVClass *class;
+
+ SchDemuxStream *streams;
+ unsigned nb_streams;
+
+ SchTask task;
+ SchWaiter waiter;
+
+ // temporary storage used by sch_demux_send()
+ AVPacket *send_pkt;
+} SchDemux;
+
+typedef struct PreMuxQueue {
+ /**
+ * Queue for buffering the packets before the muxer task can be started.
+ */
+ AVFifo *fifo;
+ /**
+ * Maximum number of packets in fifo.
+ */
+ int max_packets;
+ /*
+ * The size of the AVPackets' buffers in queue.
+ * Updated when a packet is either pushed or pulled from the queue.
+ */
+ size_t data_size;
+ /* Threshold after which max_packets will be in effect */
+ size_t data_threshold;
+} PreMuxQueue;
+
+typedef struct SchMuxStream {
+ SchedulerNode src;
+ SchedulerNode src_sched;
+
+ unsigned *sub_heartbeat_dst;
+ unsigned nb_sub_heartbeat_dst;
+
+ PreMuxQueue pre_mux_queue;
+
+ ////////////////////////////////////////////////////////////
+ // The following are protected by Scheduler.schedule_lock //
+
+ /* dts of the last packet sent to this stream
+ in AV_TIME_BASE_Q */
+ int64_t last_dts;
+ // this stream no longer accepts input
+ int source_finished;
+ ////////////////////////////////////////////////////////////
+} SchMuxStream;
+
+typedef struct SchMux {
+ const AVClass *class;
+
+ SchMuxStream *streams;
+ unsigned nb_streams;
+ unsigned nb_streams_ready;
+
+ int (*init)(void *arg);
+
+ SchTask task;
+ /**
+ * Set to 1 after starting the muxer task and flushing the
+ * pre-muxing queues.
+ * Set either before any tasks have started, or with
+ * Scheduler.mux_ready_lock held.
+ */
+ atomic_int mux_started;
+ ThreadQueue *queue;
+
+ AVPacket *sub_heartbeat_pkt;
+} SchMux;
+
+typedef struct SchFilterIn {
+ SchedulerNode src;
+ SchedulerNode src_sched;
+ int send_finished;
+} SchFilterIn;
+
+typedef struct SchFilterOut {
+ SchedulerNode dst;
+} SchFilterOut;
+
+typedef struct SchFilterGraph {
+ const AVClass *class;
+
+ SchFilterIn *inputs;
+ unsigned nb_inputs;
+ atomic_uint nb_inputs_finished;
+
+ SchFilterOut *outputs;
+ unsigned nb_outputs;
+
+ SchTask task;
+ // input queue, nb_inputs+1 streams
+ // last stream is control
+ ThreadQueue *queue;
+ SchWaiter waiter;
+
+ // protected by schedule_lock
+ unsigned best_input;
+} SchFilterGraph;
+
+struct Scheduler {
+ const AVClass *class;
+
+ SchDemux *demux;
+ unsigned nb_demux;
+
+ SchMux *mux;
+ unsigned nb_mux;
+
+ unsigned nb_mux_ready;
+ pthread_mutex_t mux_ready_lock;
+
+ unsigned nb_mux_done;
+ pthread_mutex_t mux_done_lock;
+ pthread_cond_t mux_done_cond;
+
+
+ SchDec *dec;
+ unsigned nb_dec;
+
+ SchEnc *enc;
+ unsigned nb_enc;
+
+ SchSyncQueue *sq_enc;
+ unsigned nb_sq_enc;
+
+ SchFilterGraph *filters;
+ unsigned nb_filters;
+
+ char *sdp_filename;
+ int sdp_auto;
+
+ int transcode_started;
+ atomic_int terminate;
+ atomic_int task_failed;
+
+ pthread_mutex_t schedule_lock;
+
+ atomic_int_least64_t last_dts;
+};
+
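The sch_connect() calls seen earlier address graph endpoints with macros like SCH_DEC(idx), SCH_ENC(idx) and SCH_DSTREAM(file, stream); the SchedulerNode struct used in the tasks and stream descriptors above holds the underlying (type, index) pair. A sketch of that addressing scheme, assuming definitions along these lines in ffmpeg_sched.h (the stream-level macros presumably carry a second, per-stream index, omitted here):

```c
// Hypothetical reconstruction of the node addressing used by sch_connect().
enum SchedulerNodeType {
    SCH_NODE_TYPE_DEMUX,
    SCH_NODE_TYPE_MUX,
    SCH_NODE_TYPE_DEC,
    SCH_NODE_TYPE_ENC,
    SCH_NODE_TYPE_FILTER,
};

typedef struct SchedulerNode {
    enum SchedulerNodeType type; // which component array the index refers to
    unsigned               idx;  // index within that array, as returned by sch_add_*()
} SchedulerNode;

// illustrative stand-in for constructor macros like SCH_DEC()/SCH_ENC()
static SchedulerNode sch_node(enum SchedulerNodeType type, unsigned idx)
{
    return (SchedulerNode){ .type = type, .idx = idx };
}
```

Connections are thus described purely by indices handed back from the registration functions, which is why sch_add_dec(), sch_add_enc() and friends all return the new element's index on success.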
+/**
+ * Wait until this task is allowed to proceed.
+ *
+ * @retval 0 the caller should proceed
+ * @retval 1 the caller should terminate
+ */
+static int waiter_wait(Scheduler *sch, SchWaiter *w)
+{
+ int terminate;
+
+ if (!atomic_load(&w->choked))
+ return 0;
+
+ pthread_mutex_lock(&w->lock);
+
+ while (atomic_load(&w->choked) && !atomic_load(&sch->terminate))
+ pthread_cond_wait(&w->cond, &w->lock);
+
+ terminate = atomic_load(&sch->terminate);
+
+ pthread_mutex_unlock(&w->lock);
+
+ return terminate;
+}
+
+static void waiter_set(SchWaiter *w, int choked)
+{
+ pthread_mutex_lock(&w->lock);
+
+ atomic_store(&w->choked, choked);
+ pthread_cond_signal(&w->cond);
+
+ pthread_mutex_unlock(&w->lock);
+}
+
+static int waiter_init(SchWaiter *w)
+{
+ int ret;
+
+ atomic_init(&w->choked, 0);
+
+ ret = pthread_mutex_init(&w->lock, NULL);
+ if (ret)
+ return AVERROR(ret);
+
+ ret = pthread_cond_init(&w->cond, NULL);
+ if (ret)
+ return AVERROR(ret);
+
+ return 0;
+}
+
+static void waiter_uninit(SchWaiter *w)
+{
+ pthread_mutex_destroy(&w->lock);
+ pthread_cond_destroy(&w->cond);
+}
+
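The SchWaiter functions above pair an atomic flag with a mutex/condvar: readers take a lock-free fast path on the atomic, while writers flip it under the lock before signaling, so a thread already sleeping in pthread_cond_wait() cannot miss the update. A self-contained sketch of just that pattern (the real waiter_wait() additionally checks sch->terminate):

```c
#include <pthread.h>
#include <stdatomic.h>

typedef struct Waiter {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    atomic_int      choked;
} Waiter;

static void waiter_set(Waiter *w, int choked)
{
    // store under the lock so a waiter blocked on the condvar observes the
    // new value once it is woken; a plain atomic store could race the wait
    pthread_mutex_lock(&w->lock);
    atomic_store(&w->choked, choked);
    pthread_cond_signal(&w->cond);
    pthread_mutex_unlock(&w->lock);
}

static int waiter_blocked(Waiter *w)
{
    // lock-free fast path: only fall into the mutex/condvar slow path
    // when the flag is actually set
    return atomic_load(&w->choked);
}
```

This is why waiter_wait() re-checks the flag in a loop while holding the lock: the atomic alone is only a hint that the slow path may be needed.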
+static int queue_alloc(ThreadQueue **ptq, unsigned nb_streams, unsigned queue_size,
+ enum QueueType type)
+{
+ ThreadQueue *tq;
+ ObjPool *op;
+
+ op = (type == QUEUE_PACKETS) ? objpool_alloc_packets() :
+ objpool_alloc_frames();
+ if (!op)
+ return AVERROR(ENOMEM);
+
+ tq = tq_alloc(nb_streams, queue_size, op,
+ (type == QUEUE_PACKETS) ? pkt_move : frame_move);
+ if (!tq) {
+ objpool_free(&op);
+ return AVERROR(ENOMEM);
+ }
+
+ *ptq = tq;
+ return 0;
+}
+
+static void *task_wrapper(void *arg);
+
+static int task_stop(SchTask *task)
+{
+ int ret;
+ void *thread_ret;
+
+ if (!task->thread_running)
+ return 0;
+
+ ret = pthread_join(task->thread, &thread_ret);
+ av_assert0(ret == 0);
+
+ task->thread_running = 0;
+
+ return (intptr_t)thread_ret;
+}
+
+static int task_start(SchTask *task)
+{
+ int ret;
+
+ av_log(task->func_arg, AV_LOG_VERBOSE, "Starting thread...\n");
+
+ av_assert0(!task->thread_running);
+
+ ret = pthread_create(&task->thread, NULL, task_wrapper, task);
+ if (ret) {
+ av_log(task->func_arg, AV_LOG_ERROR, "pthread_create() failed: %s\n",
+ strerror(ret));
+ return AVERROR(ret);
+ }
+
+ task->thread_running = 1;
+ return 0;
+}
+
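task_stop() above recovers the task's int status from pthread_join() by casting the void* result through intptr_t, which implies each task function packs its return code the same way. A minimal round-trip of that convention (names are illustrative):

```c
#include <pthread.h>
#include <stdint.h>

// Thread returns an int status packed into the void* result, the same
// convention task_stop() decodes with (intptr_t)thread_ret.
static void *worker(void *arg)
{
    int status = *(int *)arg;
    return (void *)(intptr_t)status; // pack the int into the pointer
}

static int run_and_join(int status)
{
    pthread_t th;
    void *thread_ret;
    int ret = pthread_create(&th, NULL, worker, &status);
    if (ret)
        return -ret;                  // pthreads report positive errno codes
    pthread_join(th, &thread_ret);
    return (int)(intptr_t)thread_ret; // unpack, as task_stop() does
}
```

Since negative values are FFmpeg error codes, this lets task_stop() propagate a task's failure directly as its own return value.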
+static void task_init(Scheduler *sch, SchTask *task, enum SchedulerNodeType type, unsigned idx,
+ SchThreadFunc func, void *func_arg)
+{
+ task->parent = sch;
+
+ task->node.type = type;
+ task->node.idx = idx;
+
+ task->func = func;
+ task->func_arg = func_arg;
+}
+
+int sch_stop(Scheduler *sch)
+{
+ int ret = 0, err;
+
+ atomic_store(&sch->terminate, 1);
+
+ for (unsigned type = 0; type < 2; type++)
+ for (unsigned i = 0; i < (type ? sch->nb_demux : sch->nb_filters); i++) {
+ SchWaiter *w = type ? &sch->demux[i].waiter : &sch->filters[i].waiter;
+ waiter_set(w, 1);
+ }
+
+ for (unsigned i = 0; i < sch->nb_demux; i++) {
+ SchDemux *d = &sch->demux[i];
+
+ err = task_stop(&d->task);
+ ret = err_merge(ret, err);
+ }
+
+ for (unsigned i = 0; i < sch->nb_dec; i++) {
+ SchDec *dec = &sch->dec[i];
+
+ err = task_stop(&dec->task);
+ ret = err_merge(ret, err);
+ }
+
+ for (unsigned i = 0; i < sch->nb_filters; i++) {
+ SchFilterGraph *fg = &sch->filters[i];
+
+ err = task_stop(&fg->task);
+ ret = err_merge(ret, err);
+ }
+
+ for (unsigned i = 0; i < sch->nb_enc; i++) {
+ SchEnc *enc = &sch->enc[i];
+
+ err = task_stop(&enc->task);
+ ret = err_merge(ret, err);
+ }
+
+ for (unsigned i = 0; i < sch->nb_mux; i++) {
+ SchMux *mux = &sch->mux[i];
+
+ err = task_stop(&mux->task);
+ ret = err_merge(ret, err);
+ }
+
+ return ret;
+}
+
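sch_stop() joins every task but must not let a later success overwrite an earlier failure, hence the err_merge() accumulation. The helper is defined in fftools/ffmpeg_utils.h; a sketch consistent with how it is used here:

```c
// Keep the first error encountered; otherwise take the newer status.
// Mirrors the accumulation pattern "ret = err_merge(ret, err)" in sch_stop().
static int err_merge(int err0, int err1)
{
    return err0 < 0 ? err0 : err1;
}
```

With this, a loop of `ret = err_merge(ret, err);` calls returns 0 only when every task stopped cleanly, and the first failing task's code otherwise.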
+void sch_free(Scheduler **psch)
+{
+ Scheduler *sch = *psch;
+
+ if (!sch)
+ return;
+
+ sch_stop(sch);
+
+ for (unsigned i = 0; i < sch->nb_demux; i++) {
+ SchDemux *d = &sch->demux[i];
+
+ for (unsigned j = 0; j < d->nb_streams; j++) {
+ SchDemuxStream *ds = &d->streams[j];
+ av_freep(&ds->dst);
+ av_freep(&ds->dst_finished);
+ }
+ av_freep(&d->streams);
+
+ av_packet_free(&d->send_pkt);
+
+ waiter_uninit(&d->waiter);
+ }
+ av_freep(&sch->demux);
+
+ for (unsigned i = 0; i < sch->nb_mux; i++) {
+ SchMux *mux = &sch->mux[i];
+
+ for (unsigned j = 0; j < mux->nb_streams; j++) {
+ SchMuxStream *ms = &mux->streams[j];
+
+ if (ms->pre_mux_queue.fifo) {
+ AVPacket *pkt;
+ while (av_fifo_read(ms->pre_mux_queue.fifo, &pkt, 1) >= 0)
+ av_packet_free(&pkt);
+ av_fifo_freep2(&ms->pre_mux_queue.fifo);
+ }
+
+ av_freep(&ms->sub_heartbeat_dst);
+ }
+ av_freep(&mux->streams);
+
+ av_packet_free(&mux->sub_heartbeat_pkt);
+
+ tq_free(&mux->queue);
+ }
+ av_freep(&sch->mux);
+
+ for (unsigned i = 0; i < sch->nb_dec; i++) {
+ SchDec *dec = &sch->dec[i];
+
+ tq_free(&dec->queue);
+
+ av_thread_message_queue_free(&dec->queue_end_ts);
+
+ av_freep(&dec->dst);
+ av_freep(&dec->dst_finished);
+
+ av_frame_free(&dec->send_frame);
+ }
+ av_freep(&sch->dec);
+
+ for (unsigned i = 0; i < sch->nb_enc; i++) {
+ SchEnc *enc = &sch->enc[i];
+
+ tq_free(&enc->queue);
+ }
+ av_freep(&sch->enc);
+
+ for (unsigned i = 0; i < sch->nb_sq_enc; i++) {
+ SchSyncQueue *sq = &sch->sq_enc[i];
+ sq_free(&sq->sq);
+ av_frame_free(&sq->frame);
+ pthread_mutex_destroy(&sq->lock);
+ av_freep(&sq->enc_idx);
+ }
+ av_freep(&sch->sq_enc);
+
+ for (unsigned i = 0; i < sch->nb_filters; i++) {
+ SchFilterGraph *fg = &sch->filters[i];
+
+ tq_free(&fg->queue);
+
+ av_freep(&fg->inputs);
+ av_freep(&fg->outputs);
+
+ waiter_uninit(&fg->waiter);
+ }
+ av_freep(&sch->filters);
+
+ av_freep(&sch->sdp_filename);
+
+ pthread_mutex_destroy(&sch->mux_ready_lock);
+
+ pthread_mutex_destroy(&sch->mux_done_lock);
+ pthread_cond_destroy(&sch->mux_done_cond);
+
+ av_freep(psch);
+}
+
+static const AVClass scheduler_class = {
+ .class_name = "Scheduler",
+ .version = LIBAVUTIL_VERSION_INT,
+};
+
+Scheduler *sch_alloc(void)
+{
+ Scheduler *sch;
+ int ret;
+
+ sch = av_mallocz(sizeof(*sch));
+ if (!sch)
+ return NULL;
+
+ sch->class = &scheduler_class;
+ sch->sdp_auto = 1;
+
+ ret = pthread_mutex_init(&sch->mux_ready_lock, NULL);
+ if (ret)
+ goto fail;
+
+ ret = pthread_mutex_init(&sch->mux_done_lock, NULL);
+ if (ret)
+ goto fail;
+
+ ret = pthread_cond_init(&sch->mux_done_cond, NULL);
+ if (ret)
+ goto fail;
+
+ return sch;
+fail:
+ sch_free(&sch);
+ return NULL;
+}
+
+int sch_sdp_filename(Scheduler *sch, const char *sdp_filename)
+{
+ av_freep(&sch->sdp_filename);
+ sch->sdp_filename = av_strdup(sdp_filename);
+ return sch->sdp_filename ? 0 : AVERROR(ENOMEM);
+}
+
+static const AVClass sch_mux_class = {
+ .class_name = "SchMux",
+ .version = LIBAVUTIL_VERSION_INT,
+ .parent_log_context_offset = offsetof(SchMux, task.func_arg),
+};
+
+int sch_add_mux(Scheduler *sch, SchThreadFunc func, int (*init)(void *),
+ void *arg, int sdp_auto)
+{
+ const unsigned idx = sch->nb_mux;
+
+ SchMux *mux;
+ int ret;
+
+ ret = GROW_ARRAY(sch->mux, sch->nb_mux);
+ if (ret < 0)
+ return ret;
+
+ mux = &sch->mux[idx];
+ mux->class = &sch_mux_class;
+ mux->init = init;
+
+ task_init(sch, &mux->task, SCH_NODE_TYPE_MUX, idx, func, arg);
+
+ sch->sdp_auto &= sdp_auto;
+
+ return idx;
+}
+
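sch_add_mux() and the other sch_add_*() registration functions share one idiom: grow a component array by one and return the new element's index, which the caller then stores as its scheduler handle (mux->sch_idx, ds->sch_idx_dec, and so on). A minimal version of the idiom — GROW_ARRAY itself is a cmdutils.h macro, modeled here as a plain function:

```c
#include <stdlib.h>

typedef struct Node { int dummy; } Node;

// Grow the array by one zeroed element and return the new element's index,
// or a negative value on allocation failure (AVERROR(ENOMEM) in the real code).
static int add_node(Node **nodes, unsigned *nb_nodes)
{
    Node *tmp = realloc(*nodes, (*nb_nodes + 1) * sizeof(**nodes));
    if (!tmp)
        return -1;
    *nodes = tmp;
    tmp[*nb_nodes] = (Node){ 0 };  // zero-init, matching GROW_ARRAY behavior
    return (int)(*nb_nodes)++;     // index of the newly added element
}
```

Returning the index from the same call that allocates the slot is what lets negative return values double as error codes throughout the scheduler API.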
+int sch_add_mux_stream(Scheduler *sch, unsigned mux_idx)
+{
+ SchMux *mux;
+ SchMuxStream *ms;
+ unsigned stream_idx;
+ int ret;
+
+ av_assert0(mux_idx < sch->nb_mux);
+ mux = &sch->mux[mux_idx];
+
+ ret = GROW_ARRAY(mux->streams, mux->nb_streams);
+ if (ret < 0)
+ return ret;
+ stream_idx = mux->nb_streams - 1;
+
+ ms = &mux->streams[stream_idx];
+
+ ms->pre_mux_queue.fifo = av_fifo_alloc2(8, sizeof(AVPacket*), 0);
+ if (!ms->pre_mux_queue.fifo)
+ return AVERROR(ENOMEM);
+
+ ms->last_dts = AV_NOPTS_VALUE;
+
+ return stream_idx;
+}
+
+static const AVClass sch_demux_class = {
+ .class_name = "SchDemux",
+ .version = LIBAVUTIL_VERSION_INT,
+ .parent_log_context_offset = offsetof(SchDemux, task.func_arg),
+};
+
+int sch_add_demux(Scheduler *sch, SchThreadFunc func, void *ctx)
+{
+ const unsigned idx = sch->nb_demux;
+
+ SchDemux *d;
+ int ret;
+
+ ret = GROW_ARRAY(sch->demux, sch->nb_demux);
+ if (ret < 0)
+ return ret;
+
+ d = &sch->demux[idx];
+
+ task_init(sch, &d->task, SCH_NODE_TYPE_DEMUX, idx, func, ctx);
+
+ d->class = &sch_demux_class;
+ d->send_pkt = av_packet_alloc();
+ if (!d->send_pkt)
+ return AVERROR(ENOMEM);
+
+ ret = waiter_init(&d->waiter);
+ if (ret < 0)
+ return ret;
+
+ return idx;
+}
+
+int sch_add_demux_stream(Scheduler *sch, unsigned demux_idx)
+{
+ SchDemux *d;
+ int ret;
+
+ av_assert0(demux_idx < sch->nb_demux);
+ d = &sch->demux[demux_idx];
+
+ ret = GROW_ARRAY(d->streams, d->nb_streams);
+ return ret < 0 ? ret : d->nb_streams - 1;
+}
+
+static const AVClass sch_dec_class = {
+ .class_name = "SchDec",
+ .version = LIBAVUTIL_VERSION_INT,
+ .parent_log_context_offset = offsetof(SchDec, task.func_arg),
+};
+
+int sch_add_dec(Scheduler *sch, SchThreadFunc func, void *ctx,
+ int send_end_ts)
+{
+ const unsigned idx = sch->nb_dec;
+
+ SchDec *dec;
+ int ret;
+
+ ret = GROW_ARRAY(sch->dec, sch->nb_dec);
+ if (ret < 0)
+ return ret;
+
+ dec = &sch->dec[idx];
+
+ task_init(sch, &dec->task, SCH_NODE_TYPE_DEC, idx, func, ctx);
+
+ dec->class = &sch_dec_class;
+ dec->send_frame = av_frame_alloc();
+ if (!dec->send_frame)
+ return AVERROR(ENOMEM);
+
+ ret = queue_alloc(&dec->queue, 1, 1, QUEUE_PACKETS);
+ if (ret < 0)
+ return ret;
+
+ if (send_end_ts) {
+ ret = av_thread_message_queue_alloc(&dec->queue_end_ts, 1, sizeof(Timestamp));
+ if (ret < 0)
+ return ret;
+ }
+
+ return idx;
+}
+
+static const AVClass sch_enc_class = {
+ .class_name = "SchEnc",
+ .version = LIBAVUTIL_VERSION_INT,
+ .parent_log_context_offset = offsetof(SchEnc, task.func_arg),
+};
+
+int sch_add_enc(Scheduler *sch, SchThreadFunc func, void *ctx,
+ int (*open_cb)(void *opaque, const AVFrame *frame))
+{
+ const unsigned idx = sch->nb_enc;
+
+ SchEnc *enc;
+ int ret;
+
+ ret = GROW_ARRAY(sch->enc, sch->nb_enc);
+ if (ret < 0)
+ return ret;
+
+ enc = &sch->enc[idx];
+
+ enc->class = &sch_enc_class;
+ enc->open_cb = open_cb;
+ enc->sq_idx[0] = -1;
+ enc->sq_idx[1] = -1;
+
+ task_init(sch, &enc->task, SCH_NODE_TYPE_ENC, idx, func, ctx);
+
+ ret = queue_alloc(&enc->queue, 1, 1, QUEUE_FRAMES);
+ if (ret < 0)
+ return ret;
+
+ return idx;
+}
+
+static const AVClass sch_fg_class = {
+ .class_name = "SchFilterGraph",
+ .version = LIBAVUTIL_VERSION_INT,
+ .parent_log_context_offset = offsetof(SchFilterGraph, task.func_arg),
+};
+
+int sch_add_filtergraph(Scheduler *sch, unsigned nb_inputs, unsigned nb_outputs,
+ SchThreadFunc func, void *ctx)
+{
+ const unsigned idx = sch->nb_filters;
+
+ SchFilterGraph *fg;
+ int ret;
+
+ ret = GROW_ARRAY(sch->filters, sch->nb_filters);
+ if (ret < 0)
+ return ret;
+ fg = &sch->filters[idx];
+
+ fg->class = &sch_fg_class;
+
+ task_init(sch, &fg->task, SCH_NODE_TYPE_FILTER_IN, idx, func, ctx);
+
+ if (nb_inputs) {
+ fg->inputs = av_calloc(nb_inputs, sizeof(*fg->inputs));
+ if (!fg->inputs)
+ return AVERROR(ENOMEM);
+ fg->nb_inputs = nb_inputs;
+ }
+
+ if (nb_outputs) {
+ fg->outputs = av_calloc(nb_outputs, sizeof(*fg->outputs));
+ if (!fg->outputs)
+ return AVERROR(ENOMEM);
+ fg->nb_outputs = nb_outputs;
+ }
+
+ ret = waiter_init(&fg->waiter);
+ if (ret < 0)
+ return ret;
+
+ ret = queue_alloc(&fg->queue, fg->nb_inputs + 1, 1, QUEUE_FRAMES);
+ if (ret < 0)
+ return ret;
+
+ return idx;
+}
+
+int sch_add_sq_enc(Scheduler *sch, uint64_t buf_size_us, void *logctx)
+{
+ SchSyncQueue *sq;
+ int ret;
+
+ ret = GROW_ARRAY(sch->sq_enc, sch->nb_sq_enc);
+ if (ret < 0)
+ return ret;
+ sq = &sch->sq_enc[sch->nb_sq_enc - 1];
+
+ sq->sq = sq_alloc(SYNC_QUEUE_FRAMES, buf_size_us, logctx);
+ if (!sq->sq)
+ return AVERROR(ENOMEM);
+
+ sq->frame = av_frame_alloc();
+ if (!sq->frame)
+ return AVERROR(ENOMEM);
+
+ ret = pthread_mutex_init(&sq->lock, NULL);
+ if (ret)
+ return AVERROR(ret);
+
+ return sq - sch->sq_enc;
+}
+
+int sch_sq_add_enc(Scheduler *sch, unsigned sq_idx, unsigned enc_idx,
+ int limiting, uint64_t max_frames)
+{
+ SchSyncQueue *sq;
+ SchEnc *enc;
+ int ret;
+
+ av_assert0(sq_idx < sch->nb_sq_enc);
+ sq = &sch->sq_enc[sq_idx];
+
+ av_assert0(enc_idx < sch->nb_enc);
+ enc = &sch->enc[enc_idx];
+
+ ret = GROW_ARRAY(sq->enc_idx, sq->nb_enc_idx);
+ if (ret < 0)
+ return ret;
+ sq->enc_idx[sq->nb_enc_idx - 1] = enc_idx;
+
+ ret = sq_add_stream(sq->sq, limiting);
+ if (ret < 0)
+ return ret;
+
+ enc->sq_idx[0] = sq_idx;
+ enc->sq_idx[1] = ret;
+
+ if (max_frames != INT64_MAX)
+ sq_limit_frames(sq->sq, enc->sq_idx[1], max_frames);
+
+ return 0;
+}
+
+int sch_connect(Scheduler *sch, SchedulerNode src, SchedulerNode dst)
+{
+ int ret;
+
+ switch (src.type) {
+ case SCH_NODE_TYPE_DEMUX: {
+ SchDemuxStream *ds;
+
+ av_assert0(src.idx < sch->nb_demux &&
+ src.idx_stream < sch->demux[src.idx].nb_streams);
+ ds = &sch->demux[src.idx].streams[src.idx_stream];
+
+ ret = GROW_ARRAY(ds->dst, ds->nb_dst);
+ if (ret < 0)
+ return ret;
+
+ ds->dst[ds->nb_dst - 1] = dst;
+
+ // demuxed packets go to decoding or streamcopy
+ switch (dst.type) {
+ case SCH_NODE_TYPE_DEC: {
+ SchDec *dec;
+
+ av_assert0(dst.idx < sch->nb_dec);
+ dec = &sch->dec[dst.idx];
+
+ av_assert0(!dec->src.type);
+ dec->src = src;
+ break;
+ }
+ case SCH_NODE_TYPE_MUX: {
+ SchMuxStream *ms;
+
+ av_assert0(dst.idx < sch->nb_mux &&
+ dst.idx_stream < sch->mux[dst.idx].nb_streams);
+ ms = &sch->mux[dst.idx].streams[dst.idx_stream];
+
+ av_assert0(!ms->src.type);
+ ms->src = src;
+
+ break;
+ }
+ default: av_assert0(0);
+ }
+
+ break;
+ }
+ case SCH_NODE_TYPE_DEC: {
+ SchDec *dec;
+
+ av_assert0(src.idx < sch->nb_dec);
+ dec = &sch->dec[src.idx];
+
+ ret = GROW_ARRAY(dec->dst, dec->nb_dst);
+ if (ret < 0)
+ return ret;
+
+ dec->dst[dec->nb_dst - 1] = dst;
+
+ // decoded frames go to filters or encoding
+ switch (dst.type) {
+ case SCH_NODE_TYPE_FILTER_IN: {
+ SchFilterIn *fi;
+
+ av_assert0(dst.idx < sch->nb_filters &&
+ dst.idx_stream < sch->filters[dst.idx].nb_inputs);
+ fi = &sch->filters[dst.idx].inputs[dst.idx_stream];
+
+ av_assert0(!fi->src.type);
+ fi->src = src;
+ break;
+ }
+ case SCH_NODE_TYPE_ENC: {
+ SchEnc *enc;
+
+ av_assert0(dst.idx < sch->nb_enc);
+ enc = &sch->enc[dst.idx];
+
+ av_assert0(!enc->src.type);
+ enc->src = src;
+ break;
+ }
+ default: av_assert0(0);
+ }
+
+ break;
+ }
+ case SCH_NODE_TYPE_FILTER_OUT: {
+ SchFilterOut *fo;
+ SchEnc *enc;
+
+ av_assert0(src.idx < sch->nb_filters &&
+ src.idx_stream < sch->filters[src.idx].nb_outputs);
+ // filtered frames go to encoding
+ av_assert0(dst.type == SCH_NODE_TYPE_ENC &&
+ dst.idx < sch->nb_enc);
+
+ fo = &sch->filters[src.idx].outputs[src.idx_stream];
+ enc = &sch->enc[dst.idx];
+
+ av_assert0(!fo->dst.type && !enc->src.type);
+ fo->dst = dst;
+ enc->src = src;
+
+ break;
+ }
+ case SCH_NODE_TYPE_ENC: {
+ SchEnc *enc;
+ SchMuxStream *ms;
+
+ av_assert0(src.idx < sch->nb_enc);
+ // encoding packets go to muxing
+ av_assert0(dst.type == SCH_NODE_TYPE_MUX &&
+ dst.idx < sch->nb_mux &&
+ dst.idx_stream < sch->mux[dst.idx].nb_streams);
+ enc = &sch->enc[src.idx];
+ ms = &sch->mux[dst.idx].streams[dst.idx_stream];
+
+ av_assert0(!enc->dst.type && !ms->src.type);
+ enc->dst = dst;
+ ms->src = src;
+
+ break;
+ }
+ default: av_assert0(0);
+ }
+
+ return 0;
+}
+
+static int mux_task_start(SchMux *mux)
+{
+ int ret = 0;
+
+ ret = task_start(&mux->task);
+ if (ret < 0)
+ return ret;
+
+ /* flush the pre-muxing queues */
+ for (unsigned i = 0; i < mux->nb_streams; i++) {
+ SchMuxStream *ms = &mux->streams[i];
+ AVPacket *pkt;
+ int finished = 0;
+
+ while (av_fifo_read(ms->pre_mux_queue.fifo, &pkt, 1) >= 0) {
+ if (pkt) {
+ if (!finished)
+ ret = tq_send(mux->queue, i, pkt);
+ av_packet_free(&pkt);
+ if (ret == AVERROR_EOF)
+ finished = 1;
+ else if (ret < 0)
+ return ret;
+ } else
+ tq_send_finish(mux->queue, i);
+ }
+ }
+
+ atomic_store(&mux->mux_started, 1);
+
+ return 0;
+}
+
+int print_sdp(const char *filename);
+
+static int mux_init(Scheduler *sch, SchMux *mux)
+{
+ int ret;
+
+ ret = mux->init(mux->task.func_arg);
+ if (ret < 0)
+ return ret;
+
+ sch->nb_mux_ready++;
+
+ if (sch->sdp_filename || sch->sdp_auto) {
+ if (sch->nb_mux_ready < sch->nb_mux)
+ return 0;
+
+ ret = print_sdp(sch->sdp_filename);
+ if (ret < 0) {
+ av_log(sch, AV_LOG_ERROR, "Error writing the SDP.\n");
+ return ret;
+ }
+
+ /* SDP is written only after all the muxers are ready, so now we
+ * start ALL the threads */
+ for (unsigned i = 0; i < sch->nb_mux; i++) {
+ ret = mux_task_start(&sch->mux[i]);
+ if (ret < 0)
+ return ret;
+ }
+ } else {
+ ret = mux_task_start(mux);
+ if (ret < 0)
+ return ret;
+ }
+
+ return 0;
+}
+
+void sch_mux_stream_buffering(Scheduler *sch, unsigned mux_idx, unsigned stream_idx,
+ size_t data_threshold, int max_packets)
+{
+ SchMux *mux;
+ SchMuxStream *ms;
+
+ av_assert0(mux_idx < sch->nb_mux);
+ mux = &sch->mux[mux_idx];
+
+ av_assert0(stream_idx < mux->nb_streams);
+ ms = &mux->streams[stream_idx];
+
+ ms->pre_mux_queue.max_packets = max_packets;
+ ms->pre_mux_queue.data_threshold = data_threshold;
+}
+
+int sch_mux_stream_ready(Scheduler *sch, unsigned mux_idx, unsigned stream_idx)
+{
+ SchMux *mux;
+ int ret = 0;
+
+ av_assert0(mux_idx < sch->nb_mux);
+ mux = &sch->mux[mux_idx];
+
+ av_assert0(stream_idx < mux->nb_streams);
+
+ pthread_mutex_lock(&sch->mux_ready_lock);
+
+ av_assert0(mux->nb_streams_ready < mux->nb_streams);
+
+ // this may be called during initialization - do not start
+ // threads before sch_start() is called
+ if (++mux->nb_streams_ready == mux->nb_streams && sch->transcode_started)
+ ret = mux_init(sch, mux);
+
+ pthread_mutex_unlock(&sch->mux_ready_lock);
+
+ return ret;
+}
+
+int sch_mux_sub_heartbeat_add(Scheduler *sch, unsigned mux_idx, unsigned stream_idx,
+ unsigned dec_idx)
+{
+ SchMux *mux;
+ SchMuxStream *ms;
+ int ret = 0;
+
+ av_assert0(mux_idx < sch->nb_mux);
+ mux = &sch->mux[mux_idx];
+
+ av_assert0(stream_idx < mux->nb_streams);
+ ms = &mux->streams[stream_idx];
+
+ ret = GROW_ARRAY(ms->sub_heartbeat_dst, ms->nb_sub_heartbeat_dst);
+ if (ret < 0)
+ return ret;
+
+ av_assert0(dec_idx < sch->nb_dec);
+ ms->sub_heartbeat_dst[ms->nb_sub_heartbeat_dst - 1] = dec_idx;
+
+ if (!mux->sub_heartbeat_pkt) {
+ mux->sub_heartbeat_pkt = av_packet_alloc();
+ if (!mux->sub_heartbeat_pkt)
+ return AVERROR(ENOMEM);
+ }
+
+ return 0;
+}
+
+static int64_t trailing_dts(const Scheduler *sch)
+{
+ int64_t min_dts = INT64_MAX;
+
+ for (unsigned i = 0; i < sch->nb_mux; i++) {
+ const SchMux *mux = &sch->mux[i];
+
+ for (unsigned j = 0; j < mux->nb_streams; j++) {
+ const SchMuxStream *ms = &mux->streams[j];
+
+ if (ms->source_finished)
+ continue;
+ if (ms->last_dts == AV_NOPTS_VALUE)
+ return AV_NOPTS_VALUE;
+
+ min_dts = FFMIN(min_dts, ms->last_dts);
+ }
+ }
+
+ return min_dts == INT64_MAX ? AV_NOPTS_VALUE : min_dts;
+}
+
+static void schedule_update_locked(Scheduler *sch)
+{
+ int64_t dts;
+
+    // on termination request all waiters are choked;
+    // we must not unchoke them
+ if (atomic_load(&sch->terminate))
+ return;
+
+ dts = trailing_dts(sch);
+
+ atomic_store(&sch->last_dts, dts);
+
+ // initialize our internal state
+ for (unsigned type = 0; type < 2; type++)
+ for (unsigned i = 0; i < (type ? sch->nb_demux : sch->nb_filters); i++) {
+ SchWaiter *w = type ? &sch->demux[i].waiter : &sch->filters[i].waiter;
+ w->choked_prev = atomic_load(&w->choked);
+ w->choked_next = 1;
+ }
+
+ // figure out the sources that are allowed to proceed
+ for (unsigned i = 0; i < sch->nb_mux; i++) {
+ SchMux *mux = &sch->mux[i];
+
+ for (unsigned j = 0; j < mux->nb_streams; j++) {
+ SchMuxStream *ms = &mux->streams[j];
+ SchDemux *d;
+
+ // unblock sources for output streams that are not finished
+ // and not too far ahead of the trailing stream
+ if (ms->source_finished)
+ continue;
+ if (dts == AV_NOPTS_VALUE && ms->last_dts != AV_NOPTS_VALUE)
+ continue;
+ if (dts != AV_NOPTS_VALUE && ms->last_dts - dts >= SCHEDULE_TOLERANCE)
+ continue;
+
+ // for outputs fed from filtergraphs, consider that filtergraph's
+ // best_input information, in other cases there is a well-defined
+ // source demuxer
+ if (ms->src_sched.type == SCH_NODE_TYPE_FILTER_OUT) {
+ SchFilterGraph *fg = &sch->filters[ms->src_sched.idx];
+ SchFilterIn *fi;
+
+                // the filtergraph contains internal sources and has
+                // requested to be scheduled directly
+ if (fg->best_input == fg->nb_inputs) {
+ fg->waiter.choked_next = 0;
+ continue;
+ }
+
+ fi = &fg->inputs[fg->best_input];
+ d = &sch->demux[fi->src_sched.idx];
+ } else
+ d = &sch->demux[ms->src_sched.idx];
+
+ d->waiter.choked_next = 0;
+ }
+ }
+
+ for (unsigned type = 0; type < 2; type++)
+ for (unsigned i = 0; i < (type ? sch->nb_demux : sch->nb_filters); i++) {
+ SchWaiter *w = type ? &sch->demux[i].waiter : &sch->filters[i].waiter;
+ if (w->choked_prev != w->choked_next)
+ waiter_set(w, w->choked_next);
+ }
+}
+
+int sch_start(Scheduler *sch)
+{
+ int ret;
+
+ sch->transcode_started = 1;
+
+ for (unsigned i = 0; i < sch->nb_mux; i++) {
+ SchMux *mux = &sch->mux[i];
+
+ for (unsigned j = 0; j < mux->nb_streams; j++) {
+ SchMuxStream *ms = &mux->streams[j];
+
+ switch (ms->src.type) {
+ case SCH_NODE_TYPE_ENC: {
+ SchEnc *enc = &sch->enc[ms->src.idx];
+ if (enc->src.type == SCH_NODE_TYPE_DEC) {
+ ms->src_sched = sch->dec[enc->src.idx].src;
+ av_assert0(ms->src_sched.type == SCH_NODE_TYPE_DEMUX);
+ } else {
+ ms->src_sched = enc->src;
+ av_assert0(ms->src_sched.type == SCH_NODE_TYPE_FILTER_OUT);
+ }
+ break;
+ }
+ case SCH_NODE_TYPE_DEMUX:
+ ms->src_sched = ms->src;
+ break;
+ default:
+ av_log(mux, AV_LOG_ERROR,
+ "Muxer stream #%u not connected to a source\n", j);
+ return AVERROR(EINVAL);
+ }
+ }
+
+ ret = queue_alloc(&mux->queue, mux->nb_streams, 1, QUEUE_PACKETS);
+ if (ret < 0)
+ return ret;
+
+ if (mux->nb_streams_ready == mux->nb_streams) {
+ ret = mux_init(sch, mux);
+ if (ret < 0)
+ return ret;
+ }
+ }
+
+ for (unsigned i = 0; i < sch->nb_enc; i++) {
+ SchEnc *enc = &sch->enc[i];
+
+ if (!enc->src.type) {
+ av_log(enc, AV_LOG_ERROR,
+ "Encoder not connected to a source\n");
+ return AVERROR(EINVAL);
+ }
+ if (!enc->dst.type) {
+ av_log(enc, AV_LOG_ERROR,
+ "Encoder not connected to a sink\n");
+ return AVERROR(EINVAL);
+ }
+
+ ret = task_start(&enc->task);
+ if (ret < 0)
+ return ret;
+ }
+
+ for (unsigned i = 0; i < sch->nb_filters; i++) {
+ SchFilterGraph *fg = &sch->filters[i];
+
+ for (unsigned j = 0; j < fg->nb_inputs; j++) {
+ SchFilterIn *fi = &fg->inputs[j];
+
+ if (!fi->src.type) {
+ av_log(fg, AV_LOG_ERROR,
+ "Filtergraph input %u not connected to a source\n", j);
+ return AVERROR(EINVAL);
+ }
+
+ fi->src_sched = sch->dec[fi->src.idx].src;
+ }
+
+ for (unsigned j = 0; j < fg->nb_outputs; j++) {
+ SchFilterOut *fo = &fg->outputs[j];
+
+ if (!fo->dst.type) {
+ av_log(fg, AV_LOG_ERROR,
+ "Filtergraph %u output %u not connected to a sink\n", i, j);
+ return AVERROR(EINVAL);
+ }
+ }
+
+ ret = task_start(&fg->task);
+ if (ret < 0)
+ return ret;
+ }
+
+ for (unsigned i = 0; i < sch->nb_dec; i++) {
+ SchDec *dec = &sch->dec[i];
+
+ if (!dec->src.type) {
+ av_log(dec, AV_LOG_ERROR,
+ "Decoder not connected to a source\n");
+ return AVERROR(EINVAL);
+ }
+ if (!dec->nb_dst) {
+ av_log(dec, AV_LOG_ERROR,
+ "Decoder not connected to any sink\n");
+ return AVERROR(EINVAL);
+ }
+
+ dec->dst_finished = av_calloc(dec->nb_dst, sizeof(*dec->dst_finished));
+ if (!dec->dst_finished)
+ return AVERROR(ENOMEM);
+
+ ret = task_start(&dec->task);
+ if (ret < 0)
+ return ret;
+ }
+
+ for (unsigned i = 0; i < sch->nb_demux; i++) {
+ SchDemux *d = &sch->demux[i];
+
+ if (!d->nb_streams)
+ continue;
+
+ for (unsigned j = 0; j < d->nb_streams; j++) {
+ SchDemuxStream *ds = &d->streams[j];
+
+ if (!ds->nb_dst) {
+ av_log(d, AV_LOG_ERROR,
+ "Demuxer stream %u not connected to any sink\n", j);
+ return AVERROR(EINVAL);
+ }
+
+ ds->dst_finished = av_calloc(ds->nb_dst, sizeof(*ds->dst_finished));
+ if (!ds->dst_finished)
+ return AVERROR(ENOMEM);
+ }
+
+ ret = task_start(&d->task);
+ if (ret < 0)
+ return ret;
+ }
+
+ pthread_mutex_lock(&sch->schedule_lock);
+ schedule_update_locked(sch);
+ pthread_mutex_unlock(&sch->schedule_lock);
+
+ return 0;
+}
+
+int sch_wait(Scheduler *sch, uint64_t timeout_us, int64_t *transcode_ts)
+{
+ int ret, err;
+
+ // convert delay to absolute timestamp
+ timeout_us += av_gettime();
+
+ pthread_mutex_lock(&sch->mux_done_lock);
+
+ if (sch->nb_mux_done < sch->nb_mux) {
+ struct timespec tv = { .tv_sec = timeout_us / 1000000,
+ .tv_nsec = (timeout_us % 1000000) * 1000 };
+ pthread_cond_timedwait(&sch->mux_done_cond, &sch->mux_done_lock, &tv);
+ }
+
+ ret = sch->nb_mux_done == sch->nb_mux;
+
+ pthread_mutex_unlock(&sch->mux_done_lock);
+
+ *transcode_ts = atomic_load(&sch->last_dts);
+
+ // abort transcoding if any task failed
+ err = atomic_load(&sch->task_failed);
+ if (err < 0)
+ return err;
+
+ return ret;
+}
+
+static int enc_open(Scheduler *sch, SchEnc *enc, const AVFrame *frame)
+{
+ int ret;
+
+ ret = enc->open_cb(enc->task.func_arg, frame);
+ if (ret < 0)
+ return ret;
+
+    // ret > 0 signals the audio frame size, which means a sync queue must
+    // have been enabled during encoder creation
+ if (ret > 0) {
+ SchSyncQueue *sq;
+
+ av_assert0(enc->sq_idx[0] >= 0);
+ sq = &sch->sq_enc[enc->sq_idx[0]];
+
+ pthread_mutex_lock(&sq->lock);
+
+ sq_frame_samples(sq->sq, enc->sq_idx[1], ret);
+
+ pthread_mutex_unlock(&sq->lock);
+ }
+
+ return 0;
+}
+
+static int send_to_enc_thread(Scheduler *sch, SchEnc *enc, AVFrame *frame)
+{
+ int ret;
+
+ if (!frame) {
+ tq_send_finish(enc->queue, 0);
+ return 0;
+ }
+
+ if (enc->in_finished)
+ return AVERROR_EOF;
+
+ ret = tq_send(enc->queue, 0, frame);
+ if (ret < 0)
+ enc->in_finished = 1;
+
+ return ret;
+}
+
+static int send_to_enc_sq(Scheduler *sch, SchEnc *enc, AVFrame *frame)
+{
+ SchSyncQueue *sq = &sch->sq_enc[enc->sq_idx[0]];
+ int ret = 0;
+
+ // inform the scheduling code that no more input will arrive along this path;
+ // this is necessary because the sync queue may not send an EOF downstream
+ // until other streams finish
+ // TODO: consider a cleaner way of passing this information through
+ // the pipeline
+ if (!frame) {
+ SchMux *mux = &sch->mux[enc->dst.idx];
+ SchMuxStream *ms = &mux->streams[enc->dst.idx_stream];
+
+ pthread_mutex_lock(&sch->schedule_lock);
+
+ ms->source_finished = 1;
+ schedule_update_locked(sch);
+
+ pthread_mutex_unlock(&sch->schedule_lock);
+ }
+
+ pthread_mutex_lock(&sq->lock);
+
+ ret = sq_send(sq->sq, enc->sq_idx[1], SQFRAME(frame));
+ if (ret < 0)
+ goto finish;
+
+ while (1) {
+ SchEnc *enc;
+
+ // TODO: the SQ API should be extended to allow returning EOF
+ // for individual streams
+ ret = sq_receive(sq->sq, -1, SQFRAME(sq->frame));
+ if (ret == AVERROR(EAGAIN)) {
+ ret = 0;
+ goto finish;
+ } else if (ret < 0) {
+ // close all encoders fed from this sync queue
+ for (unsigned i = 0; i < sq->nb_enc_idx; i++) {
+ int err = send_to_enc_thread(sch, &sch->enc[sq->enc_idx[i]], NULL);
+
+ // if the sync queue error is EOF and closing the encoder
+ // produces a more serious error, make sure to pick the latter
+ ret = err_merge((ret == AVERROR_EOF && err < 0) ? 0 : ret, err);
+ }
+ goto finish;
+ }
+
+ enc = &sch->enc[sq->enc_idx[ret]];
+ ret = send_to_enc_thread(sch, enc, sq->frame);
+ if (ret < 0) {
+ av_assert0(ret == AVERROR_EOF);
+ av_frame_unref(sq->frame);
+ sq_send(sq->sq, enc->sq_idx[1], SQFRAME(NULL));
+ continue;
+ }
+ }
+
+finish:
+ pthread_mutex_unlock(&sq->lock);
+
+ return ret;
+}
+
+static int send_to_enc(Scheduler *sch, SchEnc *enc, AVFrame *frame)
+{
+ if (enc->open_cb && frame && !enc->opened) {
+ int ret = enc_open(sch, enc, frame);
+ if (ret < 0)
+ return ret;
+ enc->opened = 1;
+
+ // discard empty frames that only carry encoder init parameters
+ if (!frame->buf[0]) {
+ av_frame_unref(frame);
+ return 0;
+ }
+ }
+
+ return (enc->sq_idx[0] >= 0) ?
+ send_to_enc_sq (sch, enc, frame) :
+ send_to_enc_thread(sch, enc, frame);
+}
+
+static int mux_queue_packet(SchMux *mux, SchMuxStream *ms, AVPacket *pkt)
+{
+ PreMuxQueue *q = &ms->pre_mux_queue;
+ AVPacket *tmp_pkt = NULL;
+ int ret;
+
+ if (!av_fifo_can_write(q->fifo)) {
+ size_t packets = av_fifo_can_read(q->fifo);
+ size_t pkt_size = pkt ? pkt->size : 0;
+ int thresh_reached = (q->data_size + pkt_size) > q->data_threshold;
+ size_t max_packets = thresh_reached ? q->max_packets : SIZE_MAX;
+ size_t new_size = FFMIN(2 * packets, max_packets);
+
+ if (new_size <= packets) {
+ av_log(mux, AV_LOG_ERROR,
+ "Too many packets buffered for output stream.\n");
+ return AVERROR(ENOSPC);
+ }
+ ret = av_fifo_grow2(q->fifo, new_size - packets);
+ if (ret < 0)
+ return ret;
+ }
+
+ if (pkt) {
+ tmp_pkt = av_packet_alloc();
+ if (!tmp_pkt)
+ return AVERROR(ENOMEM);
+
+ av_packet_move_ref(tmp_pkt, pkt);
+ q->data_size += tmp_pkt->size;
+ }
+ av_fifo_write(q->fifo, &tmp_pkt, 1);
+
+ return 0;
+}
+
+static int send_to_mux(Scheduler *sch, SchMux *mux, unsigned stream_idx,
+ AVPacket *pkt)
+{
+ SchMuxStream *ms = &mux->streams[stream_idx];
+ int64_t dts = (pkt && pkt->dts != AV_NOPTS_VALUE) ?
+ av_rescale_q(pkt->dts, pkt->time_base, AV_TIME_BASE_Q) :
+ AV_NOPTS_VALUE;
+
+ // queue the packet if the muxer cannot be started yet
+ if (!atomic_load(&mux->mux_started)) {
+ int queued = 0;
+
+        // the muxer could have started between the above atomic check and
+        // locking the mutex; in that case fall through to the normal send path
+ pthread_mutex_lock(&sch->mux_ready_lock);
+
+ if (!atomic_load(&mux->mux_started)) {
+ int ret = mux_queue_packet(mux, ms, pkt);
+ queued = ret < 0 ? ret : 1;
+ }
+
+ pthread_mutex_unlock(&sch->mux_ready_lock);
+
+ if (queued < 0)
+ return queued;
+ else if (queued)
+ goto update_schedule;
+ }
+
+ if (pkt) {
+ int ret = tq_send(mux->queue, stream_idx, pkt);
+ if (ret < 0)
+ return ret;
+ } else
+ tq_send_finish(mux->queue, stream_idx);
+
+update_schedule:
+ // TODO: use atomics to check whether this changes trailing dts
+    // to avoid locking unnecessarily
+ if (dts != AV_NOPTS_VALUE || !pkt) {
+ pthread_mutex_lock(&sch->schedule_lock);
+
+ if (pkt) ms->last_dts = dts;
+ else ms->source_finished = 1;
+
+ schedule_update_locked(sch);
+
+ pthread_mutex_unlock(&sch->schedule_lock);
+ }
+
+ return 0;
+}
+
+static int
+demux_stream_send_to_dst(Scheduler *sch, const SchedulerNode dst,
+ uint8_t *dst_finished, AVPacket *pkt, unsigned flags)
+{
+ int ret;
+
+ if (*dst_finished)
+ return AVERROR_EOF;
+
+ if (pkt && dst.type == SCH_NODE_TYPE_MUX &&
+ (flags & DEMUX_SEND_STREAMCOPY_EOF)) {
+ av_packet_unref(pkt);
+ pkt = NULL;
+ }
+
+ if (!pkt)
+ goto finish;
+
+ ret = (dst.type == SCH_NODE_TYPE_MUX) ?
+ send_to_mux(sch, &sch->mux[dst.idx], dst.idx_stream, pkt) :
+ tq_send(sch->dec[dst.idx].queue, 0, pkt);
+ if (ret == AVERROR_EOF)
+ goto finish;
+
+ return ret;
+
+finish:
+ if (dst.type == SCH_NODE_TYPE_MUX)
+ send_to_mux(sch, &sch->mux[dst.idx], dst.idx_stream, NULL);
+ else
+ tq_send_finish(sch->dec[dst.idx].queue, 0);
+
+ *dst_finished = 1;
+ return AVERROR_EOF;
+}
+
+static int demux_send_for_stream(Scheduler *sch, SchDemux *d, SchDemuxStream *ds,
+ AVPacket *pkt, unsigned flags)
+{
+ unsigned nb_done = 0;
+
+ for (unsigned i = 0; i < ds->nb_dst; i++) {
+ AVPacket *to_send = pkt;
+ uint8_t *finished = &ds->dst_finished[i];
+
+ int ret;
+
+ // sending a packet consumes it, so make a temporary reference if needed
+ if (pkt && i < ds->nb_dst - 1) {
+ to_send = d->send_pkt;
+
+ ret = av_packet_ref(to_send, pkt);
+ if (ret < 0)
+ return ret;
+ }
+
+ ret = demux_stream_send_to_dst(sch, ds->dst[i], finished, to_send, flags);
+ if (to_send)
+ av_packet_unref(to_send);
+ if (ret == AVERROR_EOF)
+ nb_done++;
+ else if (ret < 0)
+ return ret;
+ }
+
+ return (nb_done == ds->nb_dst) ? AVERROR_EOF : 0;
+}
+
+static int demux_flush(Scheduler *sch, SchDemux *d, AVPacket *pkt)
+{
+ Timestamp max_end_ts = (Timestamp){ .ts = AV_NOPTS_VALUE };
+
+ av_assert0(!pkt->buf && !pkt->data && !pkt->side_data_elems);
+
+ for (unsigned i = 0; i < d->nb_streams; i++) {
+ SchDemuxStream *ds = &d->streams[i];
+
+ for (unsigned j = 0; j < ds->nb_dst; j++) {
+ const SchedulerNode *dst = &ds->dst[j];
+ SchDec *dec;
+ int ret;
+
+ if (ds->dst_finished[j] || dst->type != SCH_NODE_TYPE_DEC)
+ continue;
+
+ dec = &sch->dec[dst->idx];
+
+ ret = tq_send(dec->queue, 0, pkt);
+ if (ret < 0)
+ return ret;
+
+ if (dec->queue_end_ts) {
+ Timestamp ts;
+ ret = av_thread_message_queue_recv(dec->queue_end_ts, &ts, 0);
+ if (ret < 0)
+ return ret;
+
+ if (max_end_ts.ts == AV_NOPTS_VALUE ||
+ (ts.ts != AV_NOPTS_VALUE &&
+ av_compare_ts(max_end_ts.ts, max_end_ts.tb, ts.ts, ts.tb) < 0))
+ max_end_ts = ts;
+            }
+ }
+ }
+
+ pkt->pts = max_end_ts.ts;
+ pkt->time_base = max_end_ts.tb;
+
+ return 0;
+}
+
+int sch_demux_send(Scheduler *sch, unsigned demux_idx, AVPacket *pkt,
+ unsigned flags)
+{
+ SchDemux *d;
+ int terminate;
+
+ av_assert0(demux_idx < sch->nb_demux);
+ d = &sch->demux[demux_idx];
+
+ terminate = waiter_wait(sch, &d->waiter);
+ if (terminate)
+ return AVERROR_EXIT;
+
+ // flush the downstreams after seek
+ if (pkt->stream_index == -1)
+ return demux_flush(sch, d, pkt);
+
+ av_assert0(pkt->stream_index < d->nb_streams);
+
+ return demux_send_for_stream(sch, d, &d->streams[pkt->stream_index], pkt, flags);
+}
+
+static int demux_done(Scheduler *sch, unsigned demux_idx)
+{
+ SchDemux *d = &sch->demux[demux_idx];
+ int ret = 0;
+
+ for (unsigned i = 0; i < d->nb_streams; i++) {
+ int err = demux_send_for_stream(sch, d, &d->streams[i], NULL, 0);
+ if (err != AVERROR_EOF)
+ ret = err_merge(ret, err);
+ }
+
+ pthread_mutex_lock(&sch->schedule_lock);
+
+ schedule_update_locked(sch);
+
+ pthread_mutex_unlock(&sch->schedule_lock);
+
+ return ret;
+}
+
+int sch_mux_receive(Scheduler *sch, unsigned mux_idx, AVPacket *pkt)
+{
+ SchMux *mux;
+ int ret, stream_idx;
+
+ av_assert0(mux_idx < sch->nb_mux);
+ mux = &sch->mux[mux_idx];
+
+ ret = tq_receive(mux->queue, &stream_idx, pkt);
+ pkt->stream_index = stream_idx;
+ return ret;
+}
+
+void sch_mux_receive_finish(Scheduler *sch, unsigned mux_idx, unsigned stream_idx)
+{
+ SchMux *mux;
+
+ av_assert0(mux_idx < sch->nb_mux);
+ mux = &sch->mux[mux_idx];
+
+ av_assert0(stream_idx < mux->nb_streams);
+ tq_receive_finish(mux->queue, stream_idx);
+
+ pthread_mutex_lock(&sch->schedule_lock);
+ mux->streams[stream_idx].source_finished = 1;
+
+ schedule_update_locked(sch);
+
+ pthread_mutex_unlock(&sch->schedule_lock);
+}
+
+int sch_mux_sub_heartbeat(Scheduler *sch, unsigned mux_idx, unsigned stream_idx,
+ const AVPacket *pkt)
+{
+ SchMux *mux;
+ SchMuxStream *ms;
+
+ av_assert0(mux_idx < sch->nb_mux);
+ mux = &sch->mux[mux_idx];
+
+ av_assert0(stream_idx < mux->nb_streams);
+ ms = &mux->streams[stream_idx];
+
+ for (unsigned i = 0; i < ms->nb_sub_heartbeat_dst; i++) {
+ SchDec *dst = &sch->dec[ms->sub_heartbeat_dst[i]];
+ int ret;
+
+ ret = av_packet_copy_props(mux->sub_heartbeat_pkt, pkt);
+ if (ret < 0)
+ return ret;
+
+ tq_send(dst->queue, 0, mux->sub_heartbeat_pkt);
+ }
+
+ return 0;
+}
+
+static int mux_done(Scheduler *sch, unsigned mux_idx)
+{
+ SchMux *mux = &sch->mux[mux_idx];
+
+ pthread_mutex_lock(&sch->schedule_lock);
+
+ for (unsigned i = 0; i < mux->nb_streams; i++) {
+ tq_receive_finish(mux->queue, i);
+ mux->streams[i].source_finished = 1;
+ }
+
+ schedule_update_locked(sch);
+
+ pthread_mutex_unlock(&sch->schedule_lock);
+
+ pthread_mutex_lock(&sch->mux_done_lock);
+
+ av_assert0(sch->nb_mux_done < sch->nb_mux);
+ sch->nb_mux_done++;
+
+ pthread_cond_signal(&sch->mux_done_cond);
+
+ pthread_mutex_unlock(&sch->mux_done_lock);
+
+ return 0;
+}
+
+int sch_dec_receive(Scheduler *sch, unsigned dec_idx, AVPacket *pkt)
+{
+ SchDec *dec;
+ int ret, dummy;
+
+ av_assert0(dec_idx < sch->nb_dec);
+ dec = &sch->dec[dec_idx];
+
+    // the decoder should have given us the post-flush end timestamp in pkt
+ if (dec->expect_end_ts) {
+ Timestamp ts = (Timestamp){ .ts = pkt->pts, .tb = pkt->time_base };
+ ret = av_thread_message_queue_send(dec->queue_end_ts, &ts, 0);
+ if (ret < 0)
+ return ret;
+
+ dec->expect_end_ts = 0;
+ }
+
+ ret = tq_receive(dec->queue, &dummy, pkt);
+ av_assert0(dummy <= 0);
+
+    // got a flush packet; on the next call to this function the decoder
+    // will give us the post-flush end timestamp
+ if (ret >= 0 && !pkt->data && !pkt->side_data_elems && dec->queue_end_ts)
+ dec->expect_end_ts = 1;
+
+ return ret;
+}
+
+static int send_to_filter(Scheduler *sch, SchFilterGraph *fg,
+ unsigned in_idx, AVFrame *frame)
+{
+ if (frame)
+ return tq_send(fg->queue, in_idx, frame);
+
+ if (!fg->inputs[in_idx].send_finished) {
+ fg->inputs[in_idx].send_finished = 1;
+ tq_send_finish(fg->queue, in_idx);
+
+ // close the control stream when all actual inputs are done
+ if (atomic_fetch_add(&fg->nb_inputs_finished, 1) == fg->nb_inputs - 1)
+ tq_send_finish(fg->queue, fg->nb_inputs);
+ }
+ return 0;
+}
+
+static int dec_send_to_dst(Scheduler *sch, const SchedulerNode dst,
+ uint8_t *dst_finished, AVFrame *frame)
+{
+ int ret;
+
+ if (*dst_finished)
+ return AVERROR_EOF;
+
+ if (!frame)
+ goto finish;
+
+ ret = (dst.type == SCH_NODE_TYPE_FILTER_IN) ?
+ send_to_filter(sch, &sch->filters[dst.idx], dst.idx_stream, frame) :
+ send_to_enc(sch, &sch->enc[dst.idx], frame);
+ if (ret == AVERROR_EOF)
+ goto finish;
+
+ return ret;
+
+finish:
+ if (dst.type == SCH_NODE_TYPE_FILTER_IN)
+ send_to_filter(sch, &sch->filters[dst.idx], dst.idx_stream, NULL);
+ else
+ send_to_enc(sch, &sch->enc[dst.idx], NULL);
+
+ *dst_finished = 1;
+
+ return AVERROR_EOF;
+}
+
+int sch_dec_send(Scheduler *sch, unsigned dec_idx, AVFrame *frame)
+{
+ SchDec *dec;
+ int ret = 0;
+ unsigned nb_done = 0;
+
+ av_assert0(dec_idx < sch->nb_dec);
+ dec = &sch->dec[dec_idx];
+
+ for (unsigned i = 0; i < dec->nb_dst; i++) {
+ uint8_t *finished = &dec->dst_finished[i];
+ AVFrame *to_send = frame;
+
+ // sending a frame consumes it, so make a temporary reference if needed
+ if (i < dec->nb_dst - 1) {
+ to_send = dec->send_frame;
+
+ // frame may sometimes contain props only,
+ // e.g. to signal EOF timestamp
+ ret = frame->buf[0] ? av_frame_ref(to_send, frame) :
+ av_frame_copy_props(to_send, frame);
+ if (ret < 0)
+ return ret;
+ }
+
+ ret = dec_send_to_dst(sch, dec->dst[i], finished, to_send);
+ if (ret < 0) {
+ av_frame_unref(to_send);
+ if (ret == AVERROR_EOF) {
+ nb_done++;
+ ret = 0;
+ continue;
+ }
+ goto finish;
+ }
+ }
+
+finish:
+ return ret < 0 ? ret :
+ (nb_done == dec->nb_dst) ? AVERROR_EOF : 0;
+}
+
+static int dec_done(Scheduler *sch, unsigned dec_idx)
+{
+ SchDec *dec = &sch->dec[dec_idx];
+ int ret = 0;
+
+ tq_receive_finish(dec->queue, 0);
+
+ // make sure our source does not get stuck waiting for end timestamps
+ // that will never arrive
+ if (dec->queue_end_ts)
+ av_thread_message_queue_set_err_recv(dec->queue_end_ts, AVERROR_EOF);
+
+ for (unsigned i = 0; i < dec->nb_dst; i++) {
+ int err = dec_send_to_dst(sch, dec->dst[i], &dec->dst_finished[i], NULL);
+ if (err < 0 && err != AVERROR_EOF)
+ ret = err_merge(ret, err);
+ }
+
+ return ret;
+}
+
+int sch_enc_receive(Scheduler *sch, unsigned enc_idx, AVFrame *frame)
+{
+ SchEnc *enc;
+ int ret, dummy;
+
+ av_assert0(enc_idx < sch->nb_enc);
+ enc = &sch->enc[enc_idx];
+
+ ret = tq_receive(enc->queue, &dummy, frame);
+ av_assert0(dummy <= 0);
+
+ return ret;
+}
+
+int sch_enc_send(Scheduler *sch, unsigned enc_idx, AVPacket *pkt)
+{
+ SchEnc *enc;
+
+ av_assert0(enc_idx < sch->nb_enc);
+ enc = &sch->enc[enc_idx];
+
+ return send_to_mux(sch, &sch->mux[enc->dst.idx], enc->dst.idx_stream, pkt);
+}
+
+static int enc_done(Scheduler *sch, unsigned enc_idx)
+{
+ SchEnc *enc = &sch->enc[enc_idx];
+
+ tq_receive_finish(enc->queue, 0);
+
+ return send_to_mux(sch, &sch->mux[enc->dst.idx], enc->dst.idx_stream, NULL);
+}
+
+int sch_filter_receive(Scheduler *sch, unsigned fg_idx,
+ unsigned *in_idx, AVFrame *frame)
+{
+ SchFilterGraph *fg;
+
+ av_assert0(fg_idx < sch->nb_filters);
+ fg = &sch->filters[fg_idx];
+
+ av_assert0(*in_idx <= fg->nb_inputs);
+
+ // update scheduling to account for desired input stream, if it changed
+ //
+ // this check needs no locking because only the filtering thread
+ // updates this value
+ if (*in_idx != fg->best_input) {
+ pthread_mutex_lock(&sch->schedule_lock);
+
+ fg->best_input = *in_idx;
+ schedule_update_locked(sch);
+
+ pthread_mutex_unlock(&sch->schedule_lock);
+ }
+
+ if (*in_idx == fg->nb_inputs) {
+ int terminate = waiter_wait(sch, &fg->waiter);
+ return terminate ? AVERROR_EOF : AVERROR(EAGAIN);
+ }
+
+ while (1) {
+ int ret, idx;
+
+ ret = tq_receive(fg->queue, &idx, frame);
+ if (idx < 0)
+ return AVERROR_EOF;
+ else if (ret >= 0) {
+ *in_idx = idx;
+ return 0;
+ }
+
+ // disregard EOFs for specific streams - they should always be
+ // preceded by an EOF frame
+ }
+}
+
+int sch_filter_send(Scheduler *sch, unsigned fg_idx, unsigned out_idx, AVFrame *frame)
+{
+ SchFilterGraph *fg;
+
+ av_assert0(fg_idx < sch->nb_filters);
+ fg = &sch->filters[fg_idx];
+
+ av_assert0(out_idx < fg->nb_outputs);
+ return send_to_enc(sch, &sch->enc[fg->outputs[out_idx].dst.idx], frame);
+}
+
+static int filter_done(Scheduler *sch, unsigned fg_idx)
+{
+ SchFilterGraph *fg = &sch->filters[fg_idx];
+ int ret = 0;
+
+ for (unsigned i = 0; i <= fg->nb_inputs; i++)
+ tq_receive_finish(fg->queue, i);
+
+ for (unsigned i = 0; i < fg->nb_outputs; i++) {
+ SchEnc *enc = &sch->enc[fg->outputs[i].dst.idx];
+ int err = send_to_enc(sch, enc, NULL);
+ if (err < 0 && err != AVERROR_EOF)
+ ret = err_merge(ret, err);
+ }
+
+ return ret;
+}
+
+int sch_filter_command(Scheduler *sch, unsigned fg_idx, AVFrame *frame)
+{
+ SchFilterGraph *fg;
+
+ av_assert0(fg_idx < sch->nb_filters);
+ fg = &sch->filters[fg_idx];
+
+ return send_to_filter(sch, fg, fg->nb_inputs, frame);
+}
+
+static void *task_wrapper(void *arg)
+{
+ SchTask *task = arg;
+ Scheduler *sch = task->parent;
+ int ret;
+ int err = 0;
+
+ ret = (intptr_t)task->func(task->func_arg);
+ if (ret < 0)
+ av_log(task->func_arg, AV_LOG_ERROR,
+ "Task finished with error code: %d (%s)\n", ret, av_err2str(ret));
+
+ switch (task->node.type) {
+ case SCH_NODE_TYPE_DEMUX: err = demux_done (sch, task->node.idx); break;
+ case SCH_NODE_TYPE_MUX: err = mux_done (sch, task->node.idx); break;
+ case SCH_NODE_TYPE_DEC: err = dec_done (sch, task->node.idx); break;
+ case SCH_NODE_TYPE_ENC: err = enc_done (sch, task->node.idx); break;
+ case SCH_NODE_TYPE_FILTER_IN: err = filter_done(sch, task->node.idx); break;
+ default: av_assert0(0);
+ }
+
+ ret = err_merge(ret, err);
+
+ // EOF is considered normal termination
+ if (ret == AVERROR_EOF)
+ ret = 0;
+ if (ret < 0)
+ atomic_store(&sch->task_failed, 1);
+
+ av_log(task->func_arg, ret < 0 ? AV_LOG_ERROR : AV_LOG_VERBOSE,
+ "Terminating thread with return code %d (%s)\n", ret,
+ ret < 0 ? av_err2str(ret) : "success");
+
+ return (void*)(intptr_t)ret;
+}
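task_wrapper() folds the task's own return code and the cleanup error with err_merge(), then treats AVERROR_EOF as normal termination. A plausible sketch of that convention follows; err_merge() here is an assumption modeled on ffmpeg_utils.h (first real error wins), and the error values are local stand-ins rather than lavu's AVERROR codes:

```c
#include <assert.h>

/* Stand-in error codes; the real code uses negative AVERROR values. */
enum { SIM_EOF = -1, SIM_EINVAL = -22 };

/* Assumed shape of err_merge(): keep the first error, otherwise pass
 * the second code through. */
static int err_merge(int err0, int err1)
{
    return err0 < 0 ? err0 : err1;
}

/* The task-exit rule from task_wrapper(): EOF counts as clean exit. */
static int task_exit_code(int task_ret, int cleanup_ret)
{
    int ret = err_merge(task_ret, cleanup_ret);
    return ret == SIM_EOF ? 0 : ret;
}
```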
diff --git a/fftools/ffmpeg_sched.h b/fftools/ffmpeg_sched.h
new file mode 100644
index 0000000000..94bbd30e98
--- /dev/null
+++ b/fftools/ffmpeg_sched.h
@@ -0,0 +1,468 @@
+/*
+ * Inter-thread scheduling/synchronization.
+ * Copyright (c) 2023 Anton Khirnov
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef FFTOOLS_FFMPEG_SCHED_H
+#define FFTOOLS_FFMPEG_SCHED_H
+
+#include <stddef.h>
+#include <stdint.h>
+
+#include "ffmpeg_utils.h"
+
+/*
+ * This file contains the API for the transcode scheduler.
+ *
+ * Overall architecture of the transcoding process involves instances of the
+ * following components:
+ * - demuxers, each containing any number of demuxed streams; demuxed packets
+ * belonging to some stream are sent to any number of decoders (transcoding)
+ * and/or muxers (streamcopy);
+ * - decoders, which receive encoded packets from some demuxed stream, decode
+ * them, and send decoded frames to any number of filtergraph inputs
+ * (audio/video) or encoders (subtitles);
+ * - filtergraphs, each containing zero or more inputs (0 in case the
+ * filtergraph contains a lavfi source filter), and one or more outputs; the
+ * inputs and outputs need not have matching media types;
+ * each filtergraph input receives decoded frames from some decoder;
+ * filtered frames from each output are sent to some encoder;
+ * - encoders, which receive decoded frames from some decoder (subtitles) or
+ * some filtergraph output (audio/video), encode them, and send encoded
+ * packets to some muxed stream;
+ * - muxers, each containing any number of muxed streams; each muxed stream
+ * receives encoded packets from some demuxed stream (streamcopy) or some
+ * encoder (transcoding); those packets are interleaved and written out by the
+ * muxer.
+ *
+ * There must be at least one muxer instance, otherwise the transcode produces
+ * no output and is meaningless. Beyond that, a generic transcoding scenario
+ * may contain an arbitrary number of instances of any of the above components,
+ * interconnected in various ways.
+ *
+ * The code tries to keep all the output streams across all the muxers in sync
+ * (i.e. at the same DTS), which is accomplished by varying the rates at which
+ * packets are read from different demuxers and lavfi sources. Note that the
+ * degree of control we have over synchronization is fundamentally limited - if
+ * some demuxed streams in the same input are interleaved at different rates
+ * than that at which they are to be muxed (e.g. because an input file is badly
+ * interleaved, or the user changed their speed by mismatching amounts), then
+ * there will be increasing amounts of buffering followed by eventual
+ * transcoding failure.
+ *
+ * N.B. 1: there are meaningful transcode scenarios with no demuxers, e.g.
+ * - encoding and muxing output from filtergraph(s) that have no inputs;
+ * - creating a file that contains nothing but attachments and/or metadata.
+ *
+ * N.B. 2: a filtergraph output could, in principle, feed multiple encoders, but
+ * this is unnecessary because the (a)split filter provides the same
+ * functionality.
+ *
+ * The scheduler, in the above model, is the master object that oversees and
+ * facilitates the transcoding process. The basic idea is that all instances
+ * of the abovementioned components communicate only with the scheduler and not
+ * with each other. The scheduler is then the single place containing the
+ * knowledge about the whole transcoding pipeline.
+ */
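The component graph described above can be summarized by which node types may legally feed which. The sketch below encodes those rules as a standalone helper; it is a simplification (subtitle special cases folded in) and not the scheduler's actual validation code:

```c
#include <assert.h>

enum NodeType { NODE_DEMUX, NODE_DEC, NODE_FILTER, NODE_ENC, NODE_MUX };

/* Allowed data flow per the overview: demuxed packets go to decoders
 * (transcoding) or muxers (streamcopy); decoded frames go to filtergraph
 * inputs or, for subtitles, straight to encoders; filtered frames go to
 * encoders; encoded packets go to muxed streams. */
static int valid_edge(enum NodeType src, enum NodeType dst)
{
    switch (src) {
    case NODE_DEMUX:  return dst == NODE_DEC || dst == NODE_MUX;
    case NODE_DEC:    return dst == NODE_FILTER || dst == NODE_ENC;
    case NODE_FILTER: return dst == NODE_ENC;
    case NODE_ENC:    return dst == NODE_MUX;
    default:          return 0;
    }
}
```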
+
+struct AVFrame;
+struct AVPacket;
+
+typedef struct Scheduler Scheduler;
+
+enum SchedulerNodeType {
+ SCH_NODE_TYPE_NONE = 0,
+ SCH_NODE_TYPE_DEMUX,
+ SCH_NODE_TYPE_MUX,
+ SCH_NODE_TYPE_DEC,
+ SCH_NODE_TYPE_ENC,
+ SCH_NODE_TYPE_FILTER_IN,
+ SCH_NODE_TYPE_FILTER_OUT,
+};
+
+typedef struct SchedulerNode {
+ enum SchedulerNodeType type;
+ unsigned idx;
+ unsigned idx_stream;
+} SchedulerNode;
+
+typedef void* (*SchThreadFunc)(void *arg);
+
+#define SCH_DSTREAM(file, stream) \
+ (SchedulerNode){ .type = SCH_NODE_TYPE_DEMUX, \
+ .idx = file, .idx_stream = stream }
+#define SCH_MSTREAM(file, stream) \
+ (SchedulerNode){ .type = SCH_NODE_TYPE_MUX, \
+ .idx = file, .idx_stream = stream }
+#define SCH_DEC(decoder) \
+ (SchedulerNode){ .type = SCH_NODE_TYPE_DEC, \
+ .idx = decoder }
+#define SCH_ENC(encoder) \
+ (SchedulerNode){ .type = SCH_NODE_TYPE_ENC, \
+ .idx = encoder }
+#define SCH_FILTER_IN(filter, input) \
+ (SchedulerNode){ .type = SCH_NODE_TYPE_FILTER_IN, \
+ .idx = filter, .idx_stream = input }
+#define SCH_FILTER_OUT(filter, output) \
+ (SchedulerNode){ .type = SCH_NODE_TYPE_FILTER_OUT, \
+ .idx = filter, .idx_stream = output }
+
+Scheduler *sch_alloc(void);
+void sch_free(Scheduler **sch);
+
+int sch_start(Scheduler *sch);
+int sch_stop(Scheduler *sch);
+
+/**
+ * Wait until transcoding terminates or the specified timeout elapses.
+ *
+ * @param timeout_us Amount of time in microseconds after which this function
+ * will timeout.
+ * @param transcode_ts Current transcode timestamp in AV_TIME_BASE_Q, for
+ * informational purposes only.
+ *
+ * @retval 0 waiting timed out, transcoding is not finished
+ * @retval 1 transcoding is finished
+ */
+int sch_wait(Scheduler *sch, uint64_t timeout_us, int64_t *transcode_ts);
+
+/**
+ * Add a demuxer to the scheduler.
+ *
+ * @param func Function executed as the demuxer task.
+ * @param ctx Demuxer state; will be passed to func and used for logging.
+ *
+ * @retval ">=0" Index of the newly-created demuxer.
+ * @retval "<0" Error code.
+ */
+int sch_add_demux(Scheduler *sch, SchThreadFunc func, void *ctx);
+/**
+ * Add a demuxed stream for a previously added demuxer.
+ *
+ * @param demux_idx index previously returned by sch_add_demux()
+ *
+ * @retval ">=0" Index of the newly-created demuxed stream.
+ * @retval "<0" Error code.
+ */
+int sch_add_demux_stream(Scheduler *sch, unsigned demux_idx);
+
+/**
+ * Add a decoder to the scheduler.
+ *
+ * @param func Function executed as the decoder task.
+ * @param ctx Decoder state; will be passed to func and used for logging.
+ * @param send_end_ts When set, the decoder will return an end timestamp after
+ *                    flush packets are delivered to it. See the documentation
+ *                    for sch_dec_receive() for more details.
+ *
+ * @retval ">=0" Index of the newly-created decoder.
+ * @retval "<0" Error code.
+ */
+int sch_add_dec(Scheduler *sch, SchThreadFunc func, void *ctx,
+ int send_end_ts);
+
+/**
+ * Add a filtergraph to the scheduler.
+ *
+ * @param nb_inputs Number of filtergraph inputs.
+ * @param nb_outputs number of filtergraph outputs
+ * @param func Function executed as the filtering task.
+ * @param ctx Filter state; will be passed to func and used for logging.
+ *
+ * @retval ">=0" Index of the newly-created filtergraph.
+ * @retval "<0" Error code.
+ */
+int sch_add_filtergraph(Scheduler *sch, unsigned nb_inputs, unsigned nb_outputs,
+ SchThreadFunc func, void *ctx);
+
+/**
+ * Add a muxer to the scheduler.
+ *
+ * Note that muxer thread startup is more complicated than for other components,
+ * because
+ * - muxer streams fed by audio/video encoders become initialized dynamically at
+ * runtime, after those encoders receive their first frame and initialize
+ * themselves, followed by calling sch_mux_stream_ready()
+ * - the header can be written after all the streams for a muxer are initialized
+ * - we may need to write an SDP, which must happen
+ * - AFTER all the headers are written
+ * - BEFORE any packets are written by any muxer
+ * - with all the muxers quiescent
+ * To avoid complicated muxer-thread synchronization dances, we postpone
+ * starting the muxer threads until after the SDP is written. The sequence of
+ * events is then as follows:
+ * - After sch_mux_stream_ready() is called for all the streams in a given muxer,
+ * the header for that muxer is written (care is taken that headers for
+ * different muxers are not written concurrently, since they write file
+ * information to stderr). If SDP is not wanted, the muxer thread then starts
+ * and muxing begins.
+ * - When SDP _is_ wanted, no muxer threads start until the header for the last
+ * muxer is written. After that, the SDP is written, after which all the muxer
+ * threads are started at once.
+ *
+ * In order for the above to work, the scheduler needs to be able to invoke
+ * just writing the header, which is the reason the init parameter exists.
+ *
+ * @param func Function executed as the muxing task.
+ * @param init Callback that is called to initialize the muxer and write the
+ * header. Called after sch_mux_stream_ready() is called for all the
+ * streams in the muxer.
+ * @param ctx Muxer state; will be passed to func/init and used for logging.
+ * @param sdp_auto Determines automatic SDP writing - see sch_sdp_filename().
+ *
+ * @retval ">=0" Index of the newly-created muxer.
+ * @retval "<0" Error code.
+ */
+int sch_add_mux(Scheduler *sch, SchThreadFunc func, int (*init)(void *),
+ void *ctx, int sdp_auto);
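The start-up sequencing described above reduces to a single predicate: a muxer thread may start only once its own header is written and, when an SDP is wanted, only once the SDP is written too (which in turn waits for all headers). A toy model with hypothetical names, not the scheduler's internals:

```c
#include <assert.h>

/* Toy muxer state for the sequencing described above. */
typedef struct ToyMux { int nb_streams, nb_ready; } ToyMux;

/* The header may be written once every stream has reported ready
 * via the equivalent of sch_mux_stream_ready(). */
static int header_writable(const ToyMux *m)
{
    return m->nb_ready == m->nb_streams;
}

/* Thread start rule: own header written, and if an SDP is wanted,
 * the SDP written as well (the SDP waits for all muxers' headers). */
static int thread_may_start(int header_written, int want_sdp, int sdp_written)
{
    return header_written && (!want_sdp || sdp_written);
}
```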
+/**
+ * Add a muxed stream for a previously added muxer.
+ *
+ * @param mux_idx index previously returned by sch_add_mux()
+ *
+ * @retval ">=0" Index of the newly-created muxed stream.
+ * @retval "<0" Error code.
+ */
+int sch_add_mux_stream(Scheduler *sch, unsigned mux_idx);
+
+/**
+ * Configure limits on packet buffering performed before the muxer task is
+ * started.
+ *
+ * @param mux_idx index previously returned by sch_add_mux()
+ * @param stream_idx index previously returned by sch_add_mux_stream()
+ * @param data_threshold Total size of the buffered packets' data after which
+ * max_packets applies.
+ * @param max_packets Maximum number of buffered packets after
+ * data_threshold is reached.
+ */
+void sch_mux_stream_buffering(Scheduler *sch, unsigned mux_idx, unsigned stream_idx,
+ size_t data_threshold, int max_packets);
+
+/**
+ * Signal to the scheduler that the specified muxed stream is initialized and
+ * ready. Muxing is started once all the streams are ready.
+ */
+int sch_mux_stream_ready(Scheduler *sch, unsigned mux_idx, unsigned stream_idx);
+
+/**
+ * Set the file path for the SDP.
+ *
+ * The SDP is written when either of the following is true:
+ * - this function is called at least once
+ * - sdp_auto=1 is passed to EVERY call of sch_add_mux()
+ */
+int sch_sdp_filename(Scheduler *sch, const char *sdp_filename);
+
+/**
+ * Add an encoder to the scheduler.
+ *
+ * @param func Function executed as the encoding task.
+ * @param ctx Encoder state; will be passed to func and used for logging.
+ * @param open_cb This callback, if specified, will be called when the first
+ * frame is obtained for this encoder. For audio encoders with a
+ * fixed frame size (which use a sync queue in the scheduler to
+ * rechunk frames), it must return that frame size on success.
+ * Otherwise (non-audio, variable frame size) it should return 0.
+ *
+ * @retval ">=0" Index of the newly-created encoder.
+ * @retval "<0" Error code.
+ */
+int sch_add_enc(Scheduler *sch, SchThreadFunc func, void *ctx,
+ int (*open_cb)(void *func_arg, const struct AVFrame *frame));
+
+/**
+ * Add a pre-encoding sync queue to the scheduler.
+ *
+ * @param buf_size_us Sync queue buffering size, passed to sq_alloc().
+ * @param logctx Logging context for the sync queue, passed to sq_alloc().
+ *
+ * @retval ">=0" Index of the newly-created sync queue.
+ * @retval "<0" Error code.
+ */
+int sch_add_sq_enc(Scheduler *sch, uint64_t buf_size_us, void *logctx);
+int sch_sq_add_enc(Scheduler *sch, unsigned sq_idx, unsigned enc_idx,
+ int limiting, uint64_t max_frames);
+
+int sch_connect(Scheduler *sch, SchedulerNode src, SchedulerNode dst);
+
+enum DemuxSendFlags {
+ /**
+ * Treat the packet as an EOF for SCH_NODE_TYPE_MUX destinations;
+ * send normally to destinations of other types.
+ */
+ DEMUX_SEND_STREAMCOPY_EOF = (1 << 0),
+};
+
+/**
+ * Called by demuxer tasks to communicate with their downstreams. The following
+ * may be sent:
+ * - a demuxed packet for the stream identified by pkt->stream_index;
+ * - demuxer discontinuity/reset (e.g. after a seek) - this is signalled by an
+ * empty packet with stream_index=-1.
+ *
+ * @param demux_idx demuxer index
+ * @param pkt A demuxed packet to send.
+ * When flushing (i.e. pkt->stream_index=-1 on entry to this
+ * function), on successful return pkt->pts/pkt->time_base will be
+ * set to the maximum end timestamp of any decoded audio stream, or
+ * AV_NOPTS_VALUE if no decoded audio streams are present.
+ *
+ * @retval "non-negative value" success
+ * @retval AVERROR_EOF all consumers for the stream are done
+ * @retval AVERROR_EXIT all consumers are done, should terminate demuxing
+ * @retval "another negative error code" other failure
+ */
+int sch_demux_send(Scheduler *sch, unsigned demux_idx, struct AVPacket *pkt,
+ unsigned flags);
+
+/**
+ * Called by decoder tasks to receive a packet for decoding.
+ *
+ * @param dec_idx decoder index
+ * @param pkt Input packet will be written here on success.
+ *
+ * An empty packet signals that the decoder should be flushed, but
+ * more packets will follow (e.g. after seeking). When a decoder
+ * created with send_end_ts=1 receives a flush packet, it must write
+ * the end timestamp of the stream after flushing to
+ * pkt->pts/time_base on the next call to this function (if any).
+ *
+ * @retval "non-negative value" success
+ * @retval AVERROR_EOF no more packets will arrive, should terminate decoding
+ * @retval "another negative error code" other failure
+ */
+int sch_dec_receive(Scheduler *sch, unsigned dec_idx, struct AVPacket *pkt);
+
+/**
+ * Called by decoder tasks to send a decoded frame downstream.
+ *
+ * @param dec_idx Decoder index previously returned by sch_add_dec().
+ * @param frame Decoded frame; on success it is consumed and cleared by this
+ * function.
+ *
+ * @retval ">=0" success
+ * @retval AVERROR_EOF all consumers are done, should terminate decoding
+ * @retval "another negative error code" other failure
+ */
+int sch_dec_send(Scheduler *sch, unsigned dec_idx, struct AVFrame *frame);
+
+/**
+ * Called by filtergraph tasks to obtain frames for filtering. Will wait for a
+ * frame to become available and return it in frame.
+ *
+ * Filtergraphs that contain lavfi sources and do not currently require new
+ * input frames should call this function as a means of rate control - then
+ * in_idx should be set equal to nb_inputs on entry to this function.
+ *
+ * @param fg_idx Filtergraph index previously returned by sch_add_filtergraph().
+ * @param[in,out] in_idx On input contains the index of the input on which a frame
+ * is most desired. May be set to nb_inputs to signal that
+ * the filtergraph does not need more input currently.
+ *
+ * On success, will be replaced with the index of the input
+ * to which the returned frame or EOF timestamp belongs.
+ *
+ * @retval ">=0" Frame data or EOF timestamp was delivered into frame, in_idx
+ * contains the index of the input it belongs to.
+ * @retval AVERROR(EAGAIN) No frame was returned, the filtergraph should
+ * resume filtering. May only be returned when
+ * in_idx=nb_inputs on entry to this function.
+ * @retval AVERROR_EOF No more frames will arrive, should terminate filtering.
+ */
+int sch_filter_receive(Scheduler *sch, unsigned fg_idx,
+ unsigned *in_idx, struct AVFrame *frame);
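The contract above is easy to get wrong on the caller side, so here it is in miniature: with in_idx == nb_inputs the call is pure rate control and can only yield "resume" or "terminate". The error codes below are local stand-ins, not lavu's AVERROR values, and the function is a simplified model rather than the real implementation:

```c
#include <assert.h>

enum { SIM_OK = 0, SIM_EAGAIN = -11, SIM_EOF = -32 };

/* Simplified model of sch_filter_receive()'s return contract. */
static int filter_receive_sim(unsigned in_idx, unsigned nb_inputs,
                              int terminate, int frame_available)
{
    if (in_idx == nb_inputs)            /* rate-control-only call */
        return terminate ? SIM_EOF : SIM_EAGAIN;
    if (terminate || !frame_available)  /* queue drained for good */
        return SIM_EOF;
    return SIM_OK;                      /* a frame was delivered */
}
```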
+
+/**
+ * Called by filtergraph tasks to send a filtered frame or EOF to consumers.
+ *
+ * @param fg_idx Filtergraph index previously returned by sch_add_filtergraph().
+ * @param out_idx Index of the output which produced the frame.
+ * @param frame The frame to send to consumers. When NULL, signals that no more
+ * frames will be produced for the specified output. When non-NULL,
+ * the frame is consumed and cleared by this function on success.
+ *
+ * @retval "non-negative value" success
+ * @retval AVERROR_EOF all consumers are done
+ * @retval "another negative error code" other failure
+ */
+int sch_filter_send(Scheduler *sch, unsigned fg_idx, unsigned out_idx,
+ struct AVFrame *frame);
+
+int sch_filter_command(Scheduler *sch, unsigned fg_idx, struct AVFrame *frame);
+
+/**
+ * Called by encoder tasks to obtain frames for encoding. Will wait for a frame
+ * to become available and return it in frame.
+ *
+ * @param enc_idx Encoder index previously returned by sch_add_enc().
+ * @param frame Newly-received frame will be stored here on success. Must be
+ * clean on entry to this function.
+ *
+ * @retval 0 A frame was successfully delivered into frame.
+ * @retval AVERROR_EOF No more frames will be delivered, the encoder should
+ * flush everything and terminate.
+ *
+ */
+int sch_enc_receive(Scheduler *sch, unsigned enc_idx, struct AVFrame *frame);
+
+/**
+ * Called by encoder tasks to send encoded packets downstream.
+ *
+ * @param enc_idx Encoder index previously returned by sch_add_enc().
+ * @param pkt An encoded packet; it will be consumed and cleared by this
+ * function on success.
+ *
+ * @retval 0 success
+ * @retval "<0" Error code.
+ */
+int sch_enc_send (Scheduler *sch, unsigned enc_idx, struct AVPacket *pkt);
+
+/**
+ * Called by muxer tasks to obtain packets for muxing. Will wait for a packet
+ * for any muxed stream to become available and return it in pkt.
+ *
+ * @param mux_idx Muxer index previously returned by sch_add_mux().
+ * @param pkt Newly-received packet will be stored here on success. Must be
+ * clean on entry to this function.
+ *
+ * @retval 0 A packet was successfully delivered into pkt. Its stream_index
+ * corresponds to a stream index previously returned from
+ * sch_add_mux_stream().
+ * @retval AVERROR_EOF When pkt->stream_index is non-negative, this signals that
+ * no more packets will be delivered for this stream index.
+ * Otherwise this indicates that no more packets will be
+ * delivered for any stream and the muxer should therefore
+ * flush everything and terminate.
+ */
+int sch_mux_receive(Scheduler *sch, unsigned mux_idx, struct AVPacket *pkt);
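The dual meaning of AVERROR_EOF above (per-stream vs. whole-muxer) is worth spelling out. A tiny classifier with hypothetical names, using a flag in place of the actual return-code check:

```c
#include <assert.h>

enum MuxEofKind { MUX_EOF_NONE, MUX_EOF_STREAM, MUX_EOF_ALL };

/* On AVERROR_EOF from sch_mux_receive(): a non-negative stream_index ends
 * just that stream; a negative one means no stream will produce more
 * packets, so the muxer should flush and terminate. */
static enum MuxEofKind classify_mux_eof(int got_eof, int stream_index)
{
    if (!got_eof)
        return MUX_EOF_NONE;
    return stream_index >= 0 ? MUX_EOF_STREAM : MUX_EOF_ALL;
}
```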
+
+/**
+ * Called by muxer tasks to signal that a stream will no longer accept input.
+ *
+ * @param stream_idx Stream index previously returned from sch_add_mux_stream().
+ */
+void sch_mux_receive_finish(Scheduler *sch, unsigned mux_idx, unsigned stream_idx);
+
+int sch_mux_sub_heartbeat_add(Scheduler *sch, unsigned mux_idx, unsigned stream_idx,
+ unsigned dec_idx);
+int sch_mux_sub_heartbeat(Scheduler *sch, unsigned mux_idx, unsigned stream_idx,
+ const AVPacket *pkt);
+
+#endif /* FFTOOLS_FFMPEG_SCHED_H */
--
2.42.0
* [FFmpeg-devel] [PATCH 13/13] fftools/ffmpeg: convert to a threaded architecture
2023-11-23 19:14 [FFmpeg-devel] [PATCH v2] ffmpeg CLI multithreading Anton Khirnov
` (11 preceding siblings ...)
2023-11-23 19:15 ` [FFmpeg-devel] [PATCH 12/13] fftools/ffmpeg: add thread-aware transcode scheduling infrastructure Anton Khirnov
@ 2023-11-23 19:15 ` Anton Khirnov
2023-11-24 22:26 ` Michael Niedermayer
12 siblings, 1 reply; 49+ messages in thread
From: Anton Khirnov @ 2023-11-23 19:15 UTC (permalink / raw)
To: ffmpeg-devel
Change the main loop and every component (demuxers, decoders, filters,
encoders, muxers) to use the previously added transcode scheduler. Every
instance of every such component was already running in a separate
thread, but now they can actually run in parallel.
Changes the results of ffmpeg-fix_sub_duration_heartbeat - tested by
JEEB to be more correct and deterministic.
---
fftools/ffmpeg.c | 374 +--------
fftools/ffmpeg.h | 97 +--
fftools/ffmpeg_dec.c | 321 ++------
fftools/ffmpeg_demux.c | 268 ++++---
fftools/ffmpeg_enc.c | 368 ++-------
fftools/ffmpeg_filter.c | 720 +++++-------------
fftools/ffmpeg_mux.c | 324 ++------
fftools/ffmpeg_mux.h | 24 +-
fftools/ffmpeg_mux_init.c | 88 +--
fftools/ffmpeg_opt.c | 6 +-
.../fate/ffmpeg-fix_sub_duration_heartbeat | 36 +-
11 files changed, 598 insertions(+), 2028 deletions(-)
diff --git a/fftools/ffmpeg.c b/fftools/ffmpeg.c
index b8a97258a0..30b594fd97 100644
--- a/fftools/ffmpeg.c
+++ b/fftools/ffmpeg.c
@@ -117,7 +117,7 @@ typedef struct BenchmarkTimeStamps {
static BenchmarkTimeStamps get_benchmark_time_stamps(void);
static int64_t getmaxrss(void);
-unsigned nb_output_dumped = 0;
+atomic_uint nb_output_dumped = 0;
static BenchmarkTimeStamps current_time;
AVIOContext *progress_avio = NULL;
@@ -138,30 +138,6 @@ static struct termios oldtty;
static int restore_tty;
#endif
-/* sub2video hack:
- Convert subtitles to video with alpha to insert them in filter graphs.
- This is a temporary solution until libavfilter gets real subtitles support.
- */
-
-static void sub2video_heartbeat(InputFile *infile, int64_t pts, AVRational tb)
-{
- /* When a frame is read from a file, examine all sub2video streams in
- the same file and send the sub2video frame again. Otherwise, decoded
- video frames could be accumulating in the filter graph while a filter
- (possibly overlay) is desperately waiting for a subtitle frame. */
- for (int i = 0; i < infile->nb_streams; i++) {
- InputStream *ist = infile->streams[i];
-
- if (ist->dec_ctx->codec_type != AVMEDIA_TYPE_SUBTITLE)
- continue;
-
- for (int j = 0; j < ist->nb_filters; j++)
- ifilter_sub2video_heartbeat(ist->filters[j], pts, tb);
- }
-}
-
-/* end of sub2video hack */
-
static void term_exit_sigsafe(void)
{
#if HAVE_TERMIOS_H
@@ -499,23 +475,13 @@ void update_benchmark(const char *fmt, ...)
}
}
-void close_output_stream(OutputStream *ost)
-{
- OutputFile *of = output_files[ost->file_index];
- ost->finished |= ENCODER_FINISHED;
-
- if (ost->sq_idx_encode >= 0)
- sq_send(of->sq_encode, ost->sq_idx_encode, SQFRAME(NULL));
-}
-
-static void print_report(int is_last_report, int64_t timer_start, int64_t cur_time)
+static void print_report(int is_last_report, int64_t timer_start, int64_t cur_time, int64_t pts)
{
AVBPrint buf, buf_script;
int64_t total_size = of_filesize(output_files[0]);
int vid;
double bitrate;
double speed;
- int64_t pts = AV_NOPTS_VALUE;
static int64_t last_time = -1;
static int first_report = 1;
uint64_t nb_frames_dup = 0, nb_frames_drop = 0;
@@ -533,7 +499,7 @@ static void print_report(int is_last_report, int64_t timer_start, int64_t cur_ti
last_time = cur_time;
}
if (((cur_time - last_time) < stats_period && !first_report) ||
- (first_report && nb_output_dumped < nb_output_files))
+ (first_report && atomic_load(&nb_output_dumped) < nb_output_files))
return;
last_time = cur_time;
}
@@ -544,7 +510,7 @@ static void print_report(int is_last_report, int64_t timer_start, int64_t cur_ti
av_bprint_init(&buf, 0, AV_BPRINT_SIZE_AUTOMATIC);
av_bprint_init(&buf_script, 0, AV_BPRINT_SIZE_AUTOMATIC);
for (OutputStream *ost = ost_iter(NULL); ost; ost = ost_iter(ost)) {
- const float q = ost->enc ? ost->quality / (float) FF_QP2LAMBDA : -1;
+ const float q = ost->enc ? atomic_load(&ost->quality) / (float) FF_QP2LAMBDA : -1;
if (vid && ost->type == AVMEDIA_TYPE_VIDEO) {
av_bprintf(&buf, "q=%2.1f ", q);
@@ -565,22 +531,18 @@ static void print_report(int is_last_report, int64_t timer_start, int64_t cur_ti
if (is_last_report)
av_bprintf(&buf, "L");
- nb_frames_dup = ost->filter->nb_frames_dup;
- nb_frames_drop = ost->filter->nb_frames_drop;
+ nb_frames_dup = atomic_load(&ost->filter->nb_frames_dup);
+ nb_frames_drop = atomic_load(&ost->filter->nb_frames_drop);
vid = 1;
}
- /* compute min output value */
- if (ost->last_mux_dts != AV_NOPTS_VALUE) {
- if (pts == AV_NOPTS_VALUE || ost->last_mux_dts > pts)
- pts = ost->last_mux_dts;
- if (copy_ts) {
- if (copy_ts_first_pts == AV_NOPTS_VALUE && pts > 1)
- copy_ts_first_pts = pts;
- if (copy_ts_first_pts != AV_NOPTS_VALUE)
- pts -= copy_ts_first_pts;
- }
- }
+ }
+
+ if (copy_ts) {
+ if (copy_ts_first_pts == AV_NOPTS_VALUE && pts > 1)
+ copy_ts_first_pts = pts;
+ if (copy_ts_first_pts != AV_NOPTS_VALUE)
+ pts -= copy_ts_first_pts;
}
us = FFABS64U(pts) % AV_TIME_BASE;
@@ -783,81 +745,6 @@ int subtitle_wrap_frame(AVFrame *frame, AVSubtitle *subtitle, int copy)
return 0;
}
-int trigger_fix_sub_duration_heartbeat(OutputStream *ost, const AVPacket *pkt)
-{
- OutputFile *of = output_files[ost->file_index];
- int64_t signal_pts = av_rescale_q(pkt->pts, pkt->time_base,
- AV_TIME_BASE_Q);
-
- if (!ost->fix_sub_duration_heartbeat || !(pkt->flags & AV_PKT_FLAG_KEY))
- // we are only interested in heartbeats on streams configured, and
- // only on random access points.
- return 0;
-
- for (int i = 0; i < of->nb_streams; i++) {
- OutputStream *iter_ost = of->streams[i];
- InputStream *ist = iter_ost->ist;
- int ret = AVERROR_BUG;
-
- if (iter_ost == ost || !ist || !ist->decoding_needed ||
- ist->dec_ctx->codec_type != AVMEDIA_TYPE_SUBTITLE)
- // We wish to skip the stream that causes the heartbeat,
- // output streams without an input stream, streams not decoded
- // (as fix_sub_duration is only done for decoded subtitles) as
- // well as non-subtitle streams.
- continue;
-
- if ((ret = fix_sub_duration_heartbeat(ist, signal_pts)) < 0)
- return ret;
- }
-
- return 0;
-}
-
-/* pkt = NULL means EOF (needed to flush decoder buffers) */
-static int process_input_packet(InputStream *ist, const AVPacket *pkt, int no_eof)
-{
- InputFile *f = input_files[ist->file_index];
- int64_t dts_est = AV_NOPTS_VALUE;
- int ret = 0;
- int eof_reached = 0;
-
- if (ist->decoding_needed) {
- ret = dec_packet(ist, pkt, no_eof);
- if (ret < 0 && ret != AVERROR_EOF)
- return ret;
- }
- if (ret == AVERROR_EOF || (!pkt && !ist->decoding_needed))
- eof_reached = 1;
-
- if (pkt && pkt->opaque_ref) {
- DemuxPktData *pd = (DemuxPktData*)pkt->opaque_ref->data;
- dts_est = pd->dts_est;
- }
-
- if (f->recording_time != INT64_MAX) {
- int64_t start_time = 0;
- if (copy_ts) {
- start_time += f->start_time != AV_NOPTS_VALUE ? f->start_time : 0;
- start_time += start_at_zero ? 0 : f->start_time_effective;
- }
- if (dts_est >= f->recording_time + start_time)
- pkt = NULL;
- }
-
- for (int oidx = 0; oidx < ist->nb_outputs; oidx++) {
- OutputStream *ost = ist->outputs[oidx];
- if (ost->enc || (!pkt && no_eof))
- continue;
-
- ret = of_streamcopy(ost, pkt, dts_est);
- if (ret < 0)
- return ret;
- }
-
- return !eof_reached;
-}
-
static void print_stream_maps(void)
{
av_log(NULL, AV_LOG_INFO, "Stream mapping:\n");
@@ -934,43 +821,6 @@ static void print_stream_maps(void)
}
}
-/**
- * Select the output stream to process.
- *
- * @retval 0 an output stream was selected
- * @retval AVERROR(EAGAIN) need to wait until more input is available
- * @retval AVERROR_EOF no more streams need output
- */
-static int choose_output(OutputStream **post)
-{
- int64_t opts_min = INT64_MAX;
- OutputStream *ost_min = NULL;
-
- for (OutputStream *ost = ost_iter(NULL); ost; ost = ost_iter(ost)) {
- int64_t opts;
-
- if (ost->filter && ost->filter->last_pts != AV_NOPTS_VALUE) {
- opts = ost->filter->last_pts;
- } else {
- opts = ost->last_mux_dts == AV_NOPTS_VALUE ?
- INT64_MIN : ost->last_mux_dts;
- }
-
- if (!ost->initialized && !ost->finished) {
- ost_min = ost;
- break;
- }
- if (!ost->finished && opts < opts_min) {
- opts_min = opts;
- ost_min = ost;
- }
- }
- if (!ost_min)
- return AVERROR_EOF;
- *post = ost_min;
- return ost_min->unavailable ? AVERROR(EAGAIN) : 0;
-}
-
static void set_tty_echo(int on)
{
#if HAVE_TERMIOS_H
@@ -1042,149 +892,21 @@ static int check_keyboard_interaction(int64_t cur_time)
return 0;
}
-static void reset_eagain(void)
-{
- for (OutputStream *ost = ost_iter(NULL); ost; ost = ost_iter(ost))
- ost->unavailable = 0;
-}
-
-static void decode_flush(InputFile *ifile)
-{
- for (int i = 0; i < ifile->nb_streams; i++) {
- InputStream *ist = ifile->streams[i];
-
- if (ist->discard || !ist->decoding_needed)
- continue;
-
- dec_packet(ist, NULL, 1);
- }
-}
-
-/*
- * Return
- * - 0 -- one packet was read and processed
- * - AVERROR(EAGAIN) -- no packets were available for selected file,
- * this function should be called again
- * - AVERROR_EOF -- this function should not be called again
- */
-static int process_input(int file_index, AVPacket *pkt)
-{
- InputFile *ifile = input_files[file_index];
- InputStream *ist;
- int ret, i;
-
- ret = ifile_get_packet(ifile, pkt);
-
- if (ret == 1) {
- /* the input file is looped: flush the decoders */
- decode_flush(ifile);
- return AVERROR(EAGAIN);
- }
- if (ret < 0) {
- if (ret != AVERROR_EOF) {
- av_log(ifile, AV_LOG_ERROR,
- "Error retrieving a packet from demuxer: %s\n", av_err2str(ret));
- if (exit_on_error)
- return ret;
- }
-
- for (i = 0; i < ifile->nb_streams; i++) {
- ist = ifile->streams[i];
- if (!ist->discard) {
- ret = process_input_packet(ist, NULL, 0);
- if (ret>0)
- return 0;
- else if (ret < 0)
- return ret;
- }
-
- /* mark all outputs that don't go through lavfi as finished */
- for (int oidx = 0; oidx < ist->nb_outputs; oidx++) {
- OutputStream *ost = ist->outputs[oidx];
- OutputFile *of = output_files[ost->file_index];
-
- ret = of_output_packet(of, ost, NULL);
- if (ret < 0)
- return ret;
- }
- }
-
- ifile->eof_reached = 1;
- return AVERROR(EAGAIN);
- }
-
- reset_eagain();
-
- ist = ifile->streams[pkt->stream_index];
-
- sub2video_heartbeat(ifile, pkt->pts, pkt->time_base);
-
- ret = process_input_packet(ist, pkt, 0);
-
- av_packet_unref(pkt);
-
- return ret < 0 ? ret : 0;
-}
-
-/**
- * Run a single step of transcoding.
- *
- * @return 0 for success, <0 for error
- */
-static int transcode_step(OutputStream *ost, AVPacket *demux_pkt)
-{
- InputStream *ist = NULL;
- int ret;
-
- if (ost->filter) {
- if ((ret = fg_transcode_step(ost->filter->graph, &ist)) < 0)
- return ret;
- if (!ist)
- return 0;
- } else {
- ist = ost->ist;
- av_assert0(ist);
- }
-
- ret = process_input(ist->file_index, demux_pkt);
- if (ret == AVERROR(EAGAIN)) {
- return 0;
- }
-
- if (ret < 0)
- return ret == AVERROR_EOF ? 0 : ret;
-
- // process_input() above might have caused output to become available
- // in multiple filtergraphs, so we process all of them
- for (int i = 0; i < nb_filtergraphs; i++) {
- ret = reap_filters(filtergraphs[i], 0);
- if (ret < 0)
- return ret;
- }
-
- return 0;
-}
-
/*
* The following code is the main loop of the file converter
*/
-static int transcode(Scheduler *sch, int *err_rate_exceeded)
+static int transcode(Scheduler *sch)
{
int ret = 0, i;
- InputStream *ist;
- int64_t timer_start;
- AVPacket *demux_pkt = NULL;
+ int64_t timer_start, transcode_ts = 0;
print_stream_maps();
- *err_rate_exceeded = 0;
atomic_store(&transcode_init_done, 1);
- demux_pkt = av_packet_alloc();
- if (!demux_pkt) {
- ret = AVERROR(ENOMEM);
- goto fail;
- }
+ ret = sch_start(sch);
+ if (ret < 0)
+ return ret;
if (stdin_interaction) {
av_log(NULL, AV_LOG_INFO, "Press [q] to stop, [?] for help\n");
@@ -1192,8 +914,7 @@ static int transcode(Scheduler *sch, int *err_rate_exceeded)
timer_start = av_gettime_relative();
- while (!received_sigterm) {
- OutputStream *ost;
+ while (!sch_wait(sch, stats_period, &transcode_ts)) {
int64_t cur_time= av_gettime_relative();
/* if 'q' pressed, exits */
@@ -1201,49 +922,11 @@ static int transcode(Scheduler *sch, int *err_rate_exceeded)
if (check_keyboard_interaction(cur_time) < 0)
break;
- ret = choose_output(&ost);
- if (ret == AVERROR(EAGAIN)) {
- reset_eagain();
- av_usleep(10000);
- ret = 0;
- continue;
- } else if (ret < 0) {
- av_log(NULL, AV_LOG_VERBOSE, "No more output streams to write to, finishing.\n");
- ret = 0;
- break;
- }
-
- ret = transcode_step(ost, demux_pkt);
- if (ret < 0 && ret != AVERROR_EOF) {
- av_log(NULL, AV_LOG_ERROR, "Error while filtering: %s\n", av_err2str(ret));
- break;
- }
-
/* dump report by using the output first video and audio streams */
- print_report(0, timer_start, cur_time);
+ print_report(0, timer_start, cur_time, transcode_ts);
}
- /* at the end of stream, we must flush the decoder buffers */
- for (ist = ist_iter(NULL); ist; ist = ist_iter(ist)) {
- float err_rate;
-
- if (!input_files[ist->file_index]->eof_reached) {
- int err = process_input_packet(ist, NULL, 0);
- ret = err_merge(ret, err);
- }
-
- err_rate = (ist->frames_decoded || ist->decode_errors) ?
- ist->decode_errors / (ist->frames_decoded + ist->decode_errors) : 0.f;
- if (err_rate > max_error_rate) {
- av_log(ist, AV_LOG_FATAL, "Decode error rate %g exceeds maximum %g\n",
- err_rate, max_error_rate);
- *err_rate_exceeded = 1;
- } else if (err_rate)
- av_log(ist, AV_LOG_VERBOSE, "Decode error rate %g\n", err_rate);
- }
- ret = err_merge(ret, enc_flush());
-
- term_exit();
+ ret = sch_stop(sch);
/* write the trailer if needed */
for (i = 0; i < nb_output_files; i++) {
@@ -1251,11 +934,10 @@ static int transcode(Scheduler *sch, int *err_rate_exceeded)
ret = err_merge(ret, err);
}
- /* dump report by using the first video and audio streams */
- print_report(1, timer_start, av_gettime_relative());
+ term_exit();
-fail:
- av_packet_free(&demux_pkt);
+ /* dump report by using the first video and audio streams */
+ print_report(1, timer_start, av_gettime_relative(), transcode_ts);
return ret;
}
@@ -1308,7 +990,7 @@ int main(int argc, char **argv)
{
Scheduler *sch = NULL;
- int ret, err_rate_exceeded;
+ int ret;
BenchmarkTimeStamps ti;
init_dynload();
@@ -1350,7 +1032,7 @@ int main(int argc, char **argv)
}
current_time = ti = get_benchmark_time_stamps();
- ret = transcode(sch, &err_rate_exceeded);
+ ret = transcode(sch);
if (ret >= 0 && do_benchmark) {
int64_t utime, stime, rtime;
current_time = get_benchmark_time_stamps();
@@ -1362,8 +1044,8 @@ int main(int argc, char **argv)
utime / 1000000.0, stime / 1000000.0, rtime / 1000000.0);
}
- ret = received_nb_signals ? 255 :
- err_rate_exceeded ? 69 : ret;
+ ret = received_nb_signals ? 255 :
+ (ret == FFMPEG_ERROR_RATE_EXCEEDED) ? 69 : ret;
finish:
if (ret == AVERROR_EXIT)
diff --git a/fftools/ffmpeg.h b/fftools/ffmpeg.h
index a89038b765..ba82b7490d 100644
--- a/fftools/ffmpeg.h
+++ b/fftools/ffmpeg.h
@@ -61,6 +61,8 @@
#define FFMPEG_OPT_TOP 1
#define FFMPEG_OPT_FORCE_KF_SOURCE_NO_DROP 1
+#define FFMPEG_ERROR_RATE_EXCEEDED FFERRTAG('E', 'R', 'E', 'D')
+
enum VideoSyncMethod {
VSYNC_AUTO = -1,
VSYNC_PASSTHROUGH,
@@ -82,13 +84,16 @@ enum HWAccelID {
};
enum FrameOpaque {
- FRAME_OPAQUE_REAP_FILTERS = 1,
- FRAME_OPAQUE_CHOOSE_INPUT,
- FRAME_OPAQUE_SUB_HEARTBEAT,
+ FRAME_OPAQUE_SUB_HEARTBEAT = 1,
FRAME_OPAQUE_EOF,
FRAME_OPAQUE_SEND_COMMAND,
};
+enum PacketOpaque {
+ PKT_OPAQUE_SUB_HEARTBEAT = 1,
+ PKT_OPAQUE_FIX_SUB_DURATION,
+};
+
typedef struct HWDevice {
const char *name;
enum AVHWDeviceType type;
@@ -309,11 +314,8 @@ typedef struct OutputFilter {
enum AVMediaType type;
- /* pts of the last frame received from this filter, in AV_TIME_BASE_Q */
- int64_t last_pts;
-
- uint64_t nb_frames_dup;
- uint64_t nb_frames_drop;
+ atomic_uint_least64_t nb_frames_dup;
+ atomic_uint_least64_t nb_frames_drop;
} OutputFilter;
typedef struct FilterGraph {
@@ -426,11 +428,6 @@ typedef struct InputFile {
float readrate;
int accurate_seek;
-
- /* when looping the input file, this queue is used by decoders to report
- * the last frame timestamp back to the demuxer thread */
- AVThreadMessageQueue *audio_ts_queue;
- int audio_ts_queue_size;
} InputFile;
enum forced_keyframes_const {
@@ -532,8 +529,6 @@ typedef struct OutputStream {
InputStream *ist;
AVStream *st; /* stream in the output file */
- /* dts of the last packet sent to the muxing queue, in AV_TIME_BASE_Q */
- int64_t last_mux_dts;
AVRational enc_timebase;
@@ -578,13 +573,6 @@ typedef struct OutputStream {
AVDictionary *sws_dict;
AVDictionary *swr_opts;
char *apad;
- OSTFinished finished; /* no more packets should be written for this stream */
- int unavailable; /* true if the steram is unavailable (possibly temporarily) */
-
- // init_output_stream() has been called for this stream
- // The encoder and the bitstream filters have been initialized and the stream
- // parameters are set in the AVStream.
- int initialized;
const char *attachment_filename;
@@ -598,9 +586,8 @@ typedef struct OutputStream {
uint64_t samples_encoded;
/* packet quality factor */
- int quality;
+ atomic_int quality;
- int sq_idx_encode;
int sq_idx_mux;
EncStats enc_stats_pre;
@@ -658,7 +645,6 @@ extern FilterGraph **filtergraphs;
extern int nb_filtergraphs;
extern char *vstats_filename;
-extern char *sdp_filename;
extern float dts_delta_threshold;
extern float dts_error_threshold;
@@ -691,7 +677,7 @@ extern const AVIOInterruptCB int_cb;
extern const OptionDef options[];
extern HWDevice *filter_hw_device;
-extern unsigned nb_output_dumped;
+extern atomic_uint nb_output_dumped;
extern int ignore_unknown_streams;
extern int copy_unknown_streams;
@@ -737,10 +723,6 @@ FrameData *frame_data(AVFrame *frame);
const FrameData *frame_data_c(AVFrame *frame);
-int ifilter_send_frame(InputFilter *ifilter, AVFrame *frame, int keep_reference);
-int ifilter_send_eof(InputFilter *ifilter, int64_t pts, AVRational tb);
-void ifilter_sub2video_heartbeat(InputFilter *ifilter, int64_t pts, AVRational tb);
-
/**
* Set up fallback filtering parameters from a decoder context. They will only
* be used if no frames are ever sent on this input, otherwise the actual
@@ -761,26 +743,9 @@ int fg_create(FilterGraph **pfg, char *graph_desc, Scheduler *sch);
void fg_free(FilterGraph **pfg);
-/**
- * Perform a step of transcoding for the specified filter graph.
- *
- * @param[in] graph filter graph to consider
- * @param[out] best_ist input stream where a frame would allow to continue
- * @return 0 for success, <0 for error
- */
-int fg_transcode_step(FilterGraph *graph, InputStream **best_ist);
-
void fg_send_command(FilterGraph *fg, double time, const char *target,
const char *command, const char *arg, int all_filters);
-/**
- * Get and encode new output from specified filtergraph, without causing
- * activity.
- *
- * @return 0 for success, <0 for severe errors
- */
-int reap_filters(FilterGraph *fg, int flush);
-
int ffmpeg_parse_options(int argc, char **argv, Scheduler *sch);
void enc_stats_write(OutputStream *ost, EncStats *es,
@@ -807,25 +772,11 @@ int hwaccel_retrieve_data(AVCodecContext *avctx, AVFrame *input);
int dec_open(InputStream *ist, Scheduler *sch, unsigned sch_idx);
void dec_free(Decoder **pdec);
-/**
- * Submit a packet for decoding
- *
- * When pkt==NULL and no_eof=0, there will be no more input. Flush decoders and
- * mark all downstreams as finished.
- *
- * When pkt==NULL and no_eof=1, the stream was reset (e.g. after a seek). Flush
- * decoders and await further input.
- */
-int dec_packet(InputStream *ist, const AVPacket *pkt, int no_eof);
-
int enc_alloc(Encoder **penc, const AVCodec *codec,
Scheduler *sch, unsigned sch_idx);
void enc_free(Encoder **penc);
-int enc_open(OutputStream *ost, const AVFrame *frame);
-int enc_subtitle(OutputFile *of, OutputStream *ost, const AVSubtitle *sub);
-int enc_frame(OutputStream *ost, AVFrame *frame);
-int enc_flush(void);
+int enc_open(void *opaque, const AVFrame *frame);
/*
* Initialize muxing state for the given stream, should be called
@@ -840,30 +791,11 @@ void of_free(OutputFile **pof);
void of_enc_stats_close(void);
-int of_output_packet(OutputFile *of, OutputStream *ost, AVPacket *pkt);
-
-/**
- * @param dts predicted packet dts in AV_TIME_BASE_Q
- */
-int of_streamcopy(OutputStream *ost, const AVPacket *pkt, int64_t dts);
-
int64_t of_filesize(OutputFile *of);
int ifile_open(const OptionsContext *o, const char *filename, Scheduler *sch);
void ifile_close(InputFile **f);
-/**
- * Get next input packet from the demuxer.
- *
- * @param pkt the packet is written here when this function returns 0
- * @return
- * - 0 when a packet has been read successfully
- * - 1 when stream end was reached, but the stream is looped;
- * caller should flush decoders and read from this demuxer again
- * - a negative error code on failure
- */
-int ifile_get_packet(InputFile *f, AVPacket *pkt);
-
int ist_output_add(InputStream *ist, OutputStream *ost);
int ist_filter_add(InputStream *ist, InputFilter *ifilter, int is_simple);
@@ -880,9 +812,6 @@ InputStream *ist_iter(InputStream *prev);
* pass NULL to start iteration */
OutputStream *ost_iter(OutputStream *prev);
-void close_output_stream(OutputStream *ost);
-int trigger_fix_sub_duration_heartbeat(OutputStream *ost, const AVPacket *pkt);
-int fix_sub_duration_heartbeat(InputStream *ist, int64_t signal_pts);
void update_benchmark(const char *fmt, ...);
#define SPECIFIER_OPT_FMT_str "%s"
diff --git a/fftools/ffmpeg_dec.c b/fftools/ffmpeg_dec.c
index 90ea0d6d93..5dde82a276 100644
--- a/fftools/ffmpeg_dec.c
+++ b/fftools/ffmpeg_dec.c
@@ -54,24 +54,6 @@ struct Decoder {
Scheduler *sch;
unsigned sch_idx;
-
- pthread_t thread;
- /**
- * Queue for sending coded packets from the main thread to
- * the decoder thread.
- *
- * An empty packet is sent to flush the decoder without terminating
- * decoding.
- */
- ThreadQueue *queue_in;
- /**
- * Queue for sending decoded frames from the decoder thread
- * to the main thread.
- *
- * An empty frame is sent to signal that a single packet has been fully
- * processed.
- */
- ThreadQueue *queue_out;
};
// data that is local to the decoder thread and not visible outside of it
@@ -80,24 +62,6 @@ typedef struct DecThreadContext {
AVPacket *pkt;
} DecThreadContext;
-static int dec_thread_stop(Decoder *d)
-{
- void *ret;
-
- if (!d->queue_in)
- return 0;
-
- tq_send_finish(d->queue_in, 0);
- tq_receive_finish(d->queue_out, 0);
-
- pthread_join(d->thread, &ret);
-
- tq_free(&d->queue_in);
- tq_free(&d->queue_out);
-
- return (intptr_t)ret;
-}
-
void dec_free(Decoder **pdec)
{
Decoder *dec = *pdec;
@@ -105,8 +69,6 @@ void dec_free(Decoder **pdec)
if (!dec)
return;
- dec_thread_stop(dec);
-
av_frame_free(&dec->frame);
av_packet_free(&dec->pkt);
@@ -148,25 +110,6 @@ fail:
return AVERROR(ENOMEM);
}
-static int send_frame_to_filters(InputStream *ist, AVFrame *decoded_frame)
-{
- int i, ret = 0;
-
- for (i = 0; i < ist->nb_filters; i++) {
- ret = ifilter_send_frame(ist->filters[i], decoded_frame,
- i < ist->nb_filters - 1 ||
- ist->dec->type == AVMEDIA_TYPE_SUBTITLE);
- if (ret == AVERROR_EOF)
- ret = 0; /* ignore */
- if (ret < 0) {
- av_log(NULL, AV_LOG_ERROR,
- "Failed to inject frame into filter network: %s\n", av_err2str(ret));
- break;
- }
- }
- return ret;
-}
-
static AVRational audio_samplerate_update(void *logctx, Decoder *d,
const AVFrame *frame)
{
@@ -421,28 +364,14 @@ static int process_subtitle(InputStream *ist, AVFrame *frame)
if (!subtitle)
return 0;
- ret = send_frame_to_filters(ist, frame);
+ ret = sch_dec_send(d->sch, d->sch_idx, frame);
if (ret < 0)
- return ret;
+ av_frame_unref(frame);
- subtitle = (AVSubtitle*)frame->buf[0]->data;
- if (!subtitle->num_rects)
- return 0;
-
- for (int oidx = 0; oidx < ist->nb_outputs; oidx++) {
- OutputStream *ost = ist->outputs[oidx];
- if (!ost->enc || ost->type != AVMEDIA_TYPE_SUBTITLE)
- continue;
-
- ret = enc_subtitle(output_files[ost->file_index], ost, subtitle);
- if (ret < 0)
- return ret;
- }
-
- return 0;
+ return ret == AVERROR_EOF ? AVERROR_EXIT : ret;
}
-int fix_sub_duration_heartbeat(InputStream *ist, int64_t signal_pts)
+static int fix_sub_duration_heartbeat(InputStream *ist, int64_t signal_pts)
{
Decoder *d = ist->decoder;
int ret = AVERROR_BUG;
@@ -468,12 +397,24 @@ int fix_sub_duration_heartbeat(InputStream *ist, int64_t signal_pts)
static int transcode_subtitles(InputStream *ist, const AVPacket *pkt,
AVFrame *frame)
{
- Decoder *d = ist->decoder;
+ Decoder *d = ist->decoder;
AVPacket *flush_pkt = NULL;
AVSubtitle subtitle;
int got_output;
int ret;
+ if (pkt && (intptr_t)pkt->opaque == PKT_OPAQUE_SUB_HEARTBEAT) {
+ frame->pts = pkt->pts;
+ frame->time_base = pkt->time_base;
+ frame->opaque = (void*)(intptr_t)FRAME_OPAQUE_SUB_HEARTBEAT;
+
+ ret = sch_dec_send(d->sch, d->sch_idx, frame);
+ return ret == AVERROR_EOF ? AVERROR_EXIT : ret;
+ } else if (pkt && (intptr_t)pkt->opaque == PKT_OPAQUE_FIX_SUB_DURATION) {
+ return fix_sub_duration_heartbeat(ist, av_rescale_q(pkt->pts, pkt->time_base,
+ AV_TIME_BASE_Q));
+ }
+
if (!pkt) {
flush_pkt = av_packet_alloc();
if (!flush_pkt)
@@ -496,7 +437,7 @@ static int transcode_subtitles(InputStream *ist, const AVPacket *pkt,
ist->frames_decoded++;
- // XXX the queue for transferring data back to the main thread runs
+ // XXX the queue for transferring data to consumers runs
// on AVFrames, so we wrap AVSubtitle in an AVBufferRef and put that
// inside the frame
// eventually, subtitles should be switched to use AVFrames natively
@@ -509,26 +450,7 @@ static int transcode_subtitles(InputStream *ist, const AVPacket *pkt,
frame->width = ist->dec_ctx->width;
frame->height = ist->dec_ctx->height;
- ret = tq_send(d->queue_out, 0, frame);
- if (ret < 0)
- av_frame_unref(frame);
-
- return ret;
-}
-
-static int send_filter_eof(InputStream *ist)
-{
- Decoder *d = ist->decoder;
- int i, ret;
-
- for (i = 0; i < ist->nb_filters; i++) {
- int64_t end_pts = d->last_frame_pts == AV_NOPTS_VALUE ? AV_NOPTS_VALUE :
- d->last_frame_pts + d->last_frame_duration_est;
- ret = ifilter_send_eof(ist->filters[i], end_pts, d->last_frame_tb);
- if (ret < 0)
- return ret;
- }
- return 0;
+ return process_subtitle(ist, frame);
}
static int packet_decode(InputStream *ist, AVPacket *pkt, AVFrame *frame)
@@ -635,9 +557,11 @@ static int packet_decode(InputStream *ist, AVPacket *pkt, AVFrame *frame)
ist->frames_decoded++;
- ret = tq_send(d->queue_out, 0, frame);
- if (ret < 0)
- return ret;
+ ret = sch_dec_send(d->sch, d->sch_idx, frame);
+ if (ret < 0) {
+ av_frame_unref(frame);
+ return ret == AVERROR_EOF ? AVERROR_EXIT : ret;
+ }
}
}
@@ -679,7 +603,6 @@ fail:
void *decoder_thread(void *arg)
{
InputStream *ist = arg;
- InputFile *ifile = input_files[ist->file_index];
Decoder *d = ist->decoder;
DecThreadContext dt;
int ret = 0, input_status = 0;
@@ -691,19 +614,31 @@ void *decoder_thread(void *arg)
dec_thread_set_name(ist);
while (!input_status) {
- int dummy, flush_buffers;
+ int flush_buffers, have_data;
- input_status = tq_receive(d->queue_in, &dummy, dt.pkt);
- flush_buffers = input_status >= 0 && !dt.pkt->buf;
- if (!dt.pkt->buf)
+ input_status = sch_dec_receive(d->sch, d->sch_idx, dt.pkt);
+ have_data = input_status >= 0 &&
+ (dt.pkt->buf || dt.pkt->side_data_elems ||
+ (intptr_t)dt.pkt->opaque == PKT_OPAQUE_SUB_HEARTBEAT ||
+ (intptr_t)dt.pkt->opaque == PKT_OPAQUE_FIX_SUB_DURATION);
+ flush_buffers = input_status >= 0 && !have_data;
+ if (!have_data)
av_log(ist, AV_LOG_VERBOSE, "Decoder thread received %s packet\n",
flush_buffers ? "flush" : "EOF");
- ret = packet_decode(ist, dt.pkt->buf ? dt.pkt : NULL, dt.frame);
+ ret = packet_decode(ist, have_data ? dt.pkt : NULL, dt.frame);
av_packet_unref(dt.pkt);
av_frame_unref(dt.frame);
+ // AVERROR_EOF - EOF from the decoder
+ // AVERROR_EXIT - EOF from the scheduler
+ // we treat them differently when flushing
+ if (ret == AVERROR_EXIT) {
+ ret = AVERROR_EOF;
+ flush_buffers = 0;
+ }
+
if (ret == AVERROR_EOF) {
av_log(ist, AV_LOG_VERBOSE, "Decoder returned EOF, %s\n",
flush_buffers ? "resetting" : "finishing");
@@ -711,11 +646,10 @@ void *decoder_thread(void *arg)
if (!flush_buffers)
break;
- /* report last frame duration to the demuxer thread */
+ /* report last frame duration to the scheduler */
if (ist->dec->type == AVMEDIA_TYPE_AUDIO) {
- Timestamp ts = { .ts = d->last_frame_pts + d->last_frame_duration_est,
- .tb = d->last_frame_tb };
- av_thread_message_queue_send(ifile->audio_ts_queue, &ts, 0);
+ dt.pkt->pts = d->last_frame_pts + d->last_frame_duration_est;
+ dt.pkt->time_base = d->last_frame_tb;
}
avcodec_flush_buffers(ist->dec_ctx);
@@ -724,149 +658,47 @@ void *decoder_thread(void *arg)
av_err2str(ret));
break;
}
-
- // signal to the consumer thread that the entire packet was processed
- ret = tq_send(d->queue_out, 0, dt.frame);
- if (ret < 0) {
- if (ret != AVERROR_EOF)
- av_log(ist, AV_LOG_ERROR, "Error communicating with the main thread\n");
- break;
- }
}
// EOF is normal thread termination
if (ret == AVERROR_EOF)
ret = 0;
+ // on success send EOF timestamp to our downstreams
+ if (ret >= 0) {
+ float err_rate;
+
+ av_frame_unref(dt.frame);
+
+ dt.frame->opaque = (void*)(intptr_t)FRAME_OPAQUE_EOF;
+ dt.frame->pts = d->last_frame_pts == AV_NOPTS_VALUE ? AV_NOPTS_VALUE :
+ d->last_frame_pts + d->last_frame_duration_est;
+ dt.frame->time_base = d->last_frame_tb;
+
+ ret = sch_dec_send(d->sch, d->sch_idx, dt.frame);
+ if (ret < 0 && ret != AVERROR_EOF) {
+ av_log(NULL, AV_LOG_FATAL,
+ "Error signalling EOF timestamp: %s\n", av_err2str(ret));
+ goto finish;
+ }
+ ret = 0;
+
+ err_rate = (ist->frames_decoded || ist->decode_errors) ?
+ ist->decode_errors / (ist->frames_decoded + ist->decode_errors) : 0.f;
+ if (err_rate > max_error_rate) {
+ av_log(ist, AV_LOG_FATAL, "Decode error rate %g exceeds maximum %g\n",
+ err_rate, max_error_rate);
+ ret = FFMPEG_ERROR_RATE_EXCEEDED;
+ } else if (err_rate)
+ av_log(ist, AV_LOG_VERBOSE, "Decode error rate %g\n", err_rate);
+ }
+
finish:
- tq_receive_finish(d->queue_in, 0);
- tq_send_finish (d->queue_out, 0);
-
- // make sure the demuxer does not get stuck waiting for audio durations
- // that will never arrive
- if (ifile->audio_ts_queue && ist->dec->type == AVMEDIA_TYPE_AUDIO)
- av_thread_message_queue_set_err_recv(ifile->audio_ts_queue, AVERROR_EOF);
-
dec_thread_uninit(&dt);
- av_log(ist, AV_LOG_VERBOSE, "Terminating decoder thread\n");
-
return (void*)(intptr_t)ret;
}
-int dec_packet(InputStream *ist, const AVPacket *pkt, int no_eof)
-{
- Decoder *d = ist->decoder;
- int ret = 0, thread_ret;
-
- // thread already joined
- if (!d->queue_in)
- return AVERROR_EOF;
-
- // send the packet/flush request/EOF to the decoder thread
- if (pkt || no_eof) {
- av_packet_unref(d->pkt);
-
- if (pkt) {
- ret = av_packet_ref(d->pkt, pkt);
- if (ret < 0)
- goto finish;
- }
-
- ret = tq_send(d->queue_in, 0, d->pkt);
- if (ret < 0)
- goto finish;
- } else
- tq_send_finish(d->queue_in, 0);
-
- // retrieve all decoded data for the packet
- while (1) {
- int dummy;
-
- ret = tq_receive(d->queue_out, &dummy, d->frame);
- if (ret < 0)
- goto finish;
-
- // packet fully processed
- if (!d->frame->buf[0])
- return 0;
-
- // process the decoded frame
- if (ist->dec->type == AVMEDIA_TYPE_SUBTITLE) {
- ret = process_subtitle(ist, d->frame);
- } else {
- ret = send_frame_to_filters(ist, d->frame);
- }
- av_frame_unref(d->frame);
- if (ret < 0)
- goto finish;
- }
-
-finish:
- thread_ret = dec_thread_stop(d);
- if (thread_ret < 0) {
- av_log(ist, AV_LOG_ERROR, "Decoder thread returned error: %s\n",
- av_err2str(thread_ret));
- ret = err_merge(ret, thread_ret);
- }
- // non-EOF errors here are all fatal
- if (ret < 0 && ret != AVERROR_EOF)
- return ret;
-
- // signal EOF to our downstreams
- ret = send_filter_eof(ist);
- if (ret < 0) {
- av_log(NULL, AV_LOG_FATAL, "Error marking filters as finished\n");
- return ret;
- }
-
- return AVERROR_EOF;
-}
-
-static int dec_thread_start(InputStream *ist)
-{
- Decoder *d = ist->decoder;
- ObjPool *op;
- int ret = 0;
-
- op = objpool_alloc_packets();
- if (!op)
- return AVERROR(ENOMEM);
-
- d->queue_in = tq_alloc(1, 1, op, pkt_move);
- if (!d->queue_in) {
- objpool_free(&op);
- return AVERROR(ENOMEM);
- }
-
- op = objpool_alloc_frames();
- if (!op)
- goto fail;
-
- d->queue_out = tq_alloc(1, 4, op, frame_move);
- if (!d->queue_out) {
- objpool_free(&op);
- goto fail;
- }
-
- ret = pthread_create(&d->thread, NULL, decoder_thread, ist);
- if (ret) {
- ret = AVERROR(ret);
- av_log(ist, AV_LOG_ERROR, "pthread_create() failed: %s\n",
- av_err2str(ret));
- goto fail;
- }
-
- return 0;
-fail:
- if (ret >= 0)
- ret = AVERROR(ENOMEM);
-
- tq_free(&d->queue_in);
- tq_free(&d->queue_out);
- return ret;
-}
-
static enum AVPixelFormat get_format(AVCodecContext *s, const enum AVPixelFormat *pix_fmts)
{
InputStream *ist = s->opaque;
@@ -1118,12 +950,5 @@ int dec_open(InputStream *ist, Scheduler *sch, unsigned sch_idx)
if (ret < 0)
return ret;
- ret = dec_thread_start(ist);
- if (ret < 0) {
- av_log(ist, AV_LOG_ERROR, "Error starting decoder thread: %s\n",
- av_err2str(ret));
- return ret;
- }
-
return 0;
}
diff --git a/fftools/ffmpeg_demux.c b/fftools/ffmpeg_demux.c
index 2234dbe076..91cd7a1125 100644
--- a/fftools/ffmpeg_demux.c
+++ b/fftools/ffmpeg_demux.c
@@ -22,8 +22,6 @@
#include "ffmpeg.h"
#include "ffmpeg_sched.h"
#include "ffmpeg_utils.h"
-#include "objpool.h"
-#include "thread_queue.h"
#include "libavutil/avassert.h"
#include "libavutil/avstring.h"
@@ -35,7 +33,6 @@
#include "libavutil/pixdesc.h"
#include "libavutil/time.h"
#include "libavutil/timestamp.h"
-#include "libavutil/thread.h"
#include "libavcodec/packet.h"
@@ -66,7 +63,11 @@ typedef struct DemuxStream {
double ts_scale;
+ // scheduler returned EOF for this stream
+ int finished;
+
int streamcopy_needed;
+ int have_sub2video;
int wrap_correction_done;
int saw_first_ts;
@@ -101,6 +102,7 @@ typedef struct Demuxer {
/* number of times input stream should be looped */
int loop;
+ int have_audio_dec;
/* duration of the looped segment of the input file */
Timestamp duration;
/* pts with the smallest/largest values ever seen */
@@ -113,11 +115,12 @@ typedef struct Demuxer {
double readrate_initial_burst;
Scheduler *sch;
- ThreadQueue *thread_queue;
- int thread_queue_size;
- pthread_t thread;
+
+ AVPacket *pkt_heartbeat;
int read_started;
+ int nb_streams_used;
+ int nb_streams_finished;
} Demuxer;
static DemuxStream *ds_from_ist(InputStream *ist)
@@ -153,7 +156,7 @@ static void report_new_stream(Demuxer *d, const AVPacket *pkt)
d->nb_streams_warn = pkt->stream_index + 1;
}
-static int seek_to_start(Demuxer *d)
+static int seek_to_start(Demuxer *d, Timestamp end_pts)
{
InputFile *ifile = &d->f;
AVFormatContext *is = ifile->ctx;
@@ -163,21 +166,10 @@ static int seek_to_start(Demuxer *d)
if (ret < 0)
return ret;
- if (ifile->audio_ts_queue_size) {
- int got_ts = 0;
-
- while (got_ts < ifile->audio_ts_queue_size) {
- Timestamp ts;
- ret = av_thread_message_queue_recv(ifile->audio_ts_queue, &ts, 0);
- if (ret < 0)
- return ret;
- got_ts++;
-
- if (d->max_pts.ts == AV_NOPTS_VALUE ||
- av_compare_ts(d->max_pts.ts, d->max_pts.tb, ts.ts, ts.tb) < 0)
- d->max_pts = ts;
- }
- }
+ if (end_pts.ts != AV_NOPTS_VALUE &&
+ (d->max_pts.ts == AV_NOPTS_VALUE ||
+ av_compare_ts(d->max_pts.ts, d->max_pts.tb, end_pts.ts, end_pts.tb) < 0))
+ d->max_pts = end_pts;
if (d->max_pts.ts != AV_NOPTS_VALUE) {
int64_t min_pts = d->min_pts.ts == AV_NOPTS_VALUE ? 0 : d->min_pts.ts;
@@ -404,7 +396,7 @@ static int ts_fixup(Demuxer *d, AVPacket *pkt)
duration = av_rescale_q(d->duration.ts, d->duration.tb, pkt->time_base);
if (pkt->pts != AV_NOPTS_VALUE) {
// audio decoders take precedence for estimating total file duration
- int64_t pkt_duration = ifile->audio_ts_queue_size ? 0 : pkt->duration;
+ int64_t pkt_duration = d->have_audio_dec ? 0 : pkt->duration;
pkt->pts += duration;
@@ -440,7 +432,7 @@ static int ts_fixup(Demuxer *d, AVPacket *pkt)
return 0;
}
-static int input_packet_process(Demuxer *d, AVPacket *pkt)
+static int input_packet_process(Demuxer *d, AVPacket *pkt, unsigned *send_flags)
{
InputFile *f = &d->f;
InputStream *ist = f->streams[pkt->stream_index];
@@ -451,6 +443,16 @@ static int input_packet_process(Demuxer *d, AVPacket *pkt)
if (ret < 0)
return ret;
+ if (f->recording_time != INT64_MAX) {
+ int64_t start_time = 0;
+ if (copy_ts) {
+ start_time += f->start_time != AV_NOPTS_VALUE ? f->start_time : 0;
+ start_time += start_at_zero ? 0 : f->start_time_effective;
+ }
+ if (ds->dts >= f->recording_time + start_time)
+ *send_flags |= DEMUX_SEND_STREAMCOPY_EOF;
+ }
+
ds->data_size += pkt->size;
ds->nb_packets++;
@@ -465,6 +467,8 @@ static int input_packet_process(Demuxer *d, AVPacket *pkt)
av_ts2timestr(input_files[ist->file_index]->ts_offset, &AV_TIME_BASE_Q));
}
+ pkt->stream_index = ds->sch_idx_stream;
+
return 0;
}
@@ -488,6 +492,65 @@ static void readrate_sleep(Demuxer *d)
}
}
+static int do_send(Demuxer *d, DemuxStream *ds, AVPacket *pkt, unsigned flags,
+ const char *pkt_desc)
+{
+ int ret;
+
+ ret = sch_demux_send(d->sch, d->f.index, pkt, flags);
+ if (ret == AVERROR_EOF) {
+ av_packet_unref(pkt);
+
+ av_log(ds, AV_LOG_VERBOSE, "All consumers of this stream are done\n");
+ ds->finished = 1;
+
+ if (++d->nb_streams_finished == d->nb_streams_used) {
+ av_log(d, AV_LOG_VERBOSE, "All consumers are done\n");
+ return AVERROR_EOF;
+ }
+ } else if (ret < 0) {
+ if (ret != AVERROR_EXIT)
+ av_log(d, AV_LOG_ERROR,
+ "Unable to send %s packet to consumers: %s\n",
+ pkt_desc, av_err2str(ret));
+ return ret;
+ }
+
+ return 0;
+}
+
+static int demux_send(Demuxer *d, DemuxStream *ds, AVPacket *pkt, unsigned flags)
+{
+ InputFile *f = &d->f;
+ int ret;
+
+ // send heartbeat for sub2video streams
+ if (d->pkt_heartbeat && pkt->pts != AV_NOPTS_VALUE) {
+ for (int i = 0; i < f->nb_streams; i++) {
+ DemuxStream *ds1 = ds_from_ist(f->streams[i]);
+
+ if (ds1->finished || !ds1->have_sub2video)
+ continue;
+
+ d->pkt_heartbeat->pts = pkt->pts;
+ d->pkt_heartbeat->time_base = pkt->time_base;
+ d->pkt_heartbeat->stream_index = ds1->sch_idx_stream;
+ d->pkt_heartbeat->opaque = (void*)(intptr_t)PKT_OPAQUE_SUB_HEARTBEAT;
+
+ ret = do_send(d, ds1, d->pkt_heartbeat, 0, "heartbeat");
+ if (ret < 0)
+ return ret;
+ }
+ }
+
+ ret = do_send(d, ds, pkt, flags, "demuxed");
+ if (ret < 0)
+ return ret;
+
+
+ return 0;
+}
+
static void discard_unused_programs(InputFile *ifile)
{
for (int j = 0; j < ifile->ctx->nb_programs; j++) {
@@ -527,9 +590,13 @@ static void *input_thread(void *arg)
discard_unused_programs(f);
+ d->read_started = 1;
d->wallclock_start = av_gettime_relative();
while (1) {
+ DemuxStream *ds;
+ unsigned send_flags = 0;
+
ret = av_read_frame(f->ctx, pkt);
if (ret == AVERROR(EAGAIN)) {
@@ -538,11 +605,13 @@ static void *input_thread(void *arg)
}
if (ret < 0) {
if (d->loop) {
- /* signal looping to the consumer thread */
+ /* signal looping to our consumers */
pkt->stream_index = -1;
- ret = tq_send(d->thread_queue, 0, pkt);
+
+ ret = sch_demux_send(d->sch, f->index, pkt, 0);
if (ret >= 0)
- ret = seek_to_start(d);
+ ret = seek_to_start(d, (Timestamp){ .ts = pkt->pts,
+ .tb = pkt->time_base });
if (ret >= 0)
continue;
@@ -551,9 +620,11 @@ static void *input_thread(void *arg)
if (ret == AVERROR_EOF)
av_log(d, AV_LOG_VERBOSE, "EOF while reading input\n");
- else
+ else {
av_log(d, AV_LOG_ERROR, "Error during demuxing: %s\n",
av_err2str(ret));
+ ret = exit_on_error ? ret : 0;
+ }
break;
}
@@ -565,8 +636,9 @@ static void *input_thread(void *arg)
/* the following test is needed in case new streams appear
dynamically in stream : we ignore them */
- if (pkt->stream_index >= f->nb_streams ||
- f->streams[pkt->stream_index]->discard) {
+ ds = pkt->stream_index < f->nb_streams ?
+ ds_from_ist(f->streams[pkt->stream_index]) : NULL;
+ if (!ds || ds->ist.discard || ds->finished) {
report_new_stream(d, pkt);
av_packet_unref(pkt);
continue;
@@ -583,122 +655,26 @@ static void *input_thread(void *arg)
}
}
- ret = input_packet_process(d, pkt);
+ ret = input_packet_process(d, pkt, &send_flags);
if (ret < 0)
break;
if (f->readrate)
readrate_sleep(d);
- ret = tq_send(d->thread_queue, 0, pkt);
- if (ret < 0) {
- if (ret != AVERROR_EOF)
- av_log(f, AV_LOG_ERROR,
- "Unable to send packet to main thread: %s\n",
- av_err2str(ret));
+ ret = demux_send(d, ds, pkt, send_flags);
+ if (ret < 0)
break;
- }
}
+ // EOF/EXIT is normal termination
+ if (ret == AVERROR_EOF || ret == AVERROR_EXIT)
+ ret = 0;
+
finish:
- av_assert0(ret < 0);
- tq_send_finish(d->thread_queue, 0);
-
av_packet_free(&pkt);
- av_log(d, AV_LOG_VERBOSE, "Terminating demuxer thread\n");
-
- return NULL;
-}
-
-static void thread_stop(Demuxer *d)
-{
- InputFile *f = &d->f;
-
- if (!d->thread_queue)
- return;
-
- tq_receive_finish(d->thread_queue, 0);
-
- pthread_join(d->thread, NULL);
-
- tq_free(&d->thread_queue);
-
- av_thread_message_queue_free(&f->audio_ts_queue);
-}
-
-static int thread_start(Demuxer *d)
-{
- int ret;
- InputFile *f = &d->f;
- ObjPool *op;
-
- if (d->thread_queue_size <= 0)
- d->thread_queue_size = (nb_input_files > 1 ? 8 : 1);
-
- op = objpool_alloc_packets();
- if (!op)
- return AVERROR(ENOMEM);
-
- d->thread_queue = tq_alloc(1, d->thread_queue_size, op, pkt_move);
- if (!d->thread_queue) {
- objpool_free(&op);
- return AVERROR(ENOMEM);
- }
-
- if (d->loop) {
- int nb_audio_dec = 0;
-
- for (int i = 0; i < f->nb_streams; i++) {
- InputStream *ist = f->streams[i];
- nb_audio_dec += !!(ist->decoding_needed &&
- ist->st->codecpar->codec_type == AVMEDIA_TYPE_AUDIO);
- }
-
- if (nb_audio_dec) {
- ret = av_thread_message_queue_alloc(&f->audio_ts_queue,
- nb_audio_dec, sizeof(Timestamp));
- if (ret < 0)
- goto fail;
- f->audio_ts_queue_size = nb_audio_dec;
- }
- }
-
- if ((ret = pthread_create(&d->thread, NULL, input_thread, d))) {
- av_log(d, AV_LOG_ERROR, "pthread_create failed: %s. Try to increase `ulimit -v` or decrease `ulimit -s`.\n", strerror(ret));
- ret = AVERROR(ret);
- goto fail;
- }
-
- d->read_started = 1;
-
- return 0;
-fail:
- tq_free(&d->thread_queue);
- return ret;
-}
-
-int ifile_get_packet(InputFile *f, AVPacket *pkt)
-{
- Demuxer *d = demuxer_from_ifile(f);
- int ret, dummy;
-
- if (!d->thread_queue) {
- ret = thread_start(d);
- if (ret < 0)
- return ret;
- }
-
- ret = tq_receive(d->thread_queue, &dummy, pkt);
- if (ret < 0)
- return ret;
-
- if (pkt->stream_index == -1) {
- av_assert0(!pkt->data && !pkt->side_data_elems);
- return 1;
- }
-
- return 0;
+ return (void*)(intptr_t)ret;
}
static void demux_final_stats(Demuxer *d)
@@ -769,8 +745,6 @@ void ifile_close(InputFile **pf)
if (!f)
return;
- thread_stop(d);
-
if (d->read_started)
demux_final_stats(d);
@@ -780,6 +754,8 @@ void ifile_close(InputFile **pf)
avformat_close_input(&f->ctx);
+ av_packet_free(&d->pkt_heartbeat);
+
av_freep(pf);
}
@@ -802,7 +778,11 @@ static int ist_use(InputStream *ist, int decoding_needed)
ds->sch_idx_stream = ret;
}
- ist->discard = 0;
+ if (ist->discard) {
+ ist->discard = 0;
+ d->nb_streams_used++;
+ }
+
ist->st->discard = ist->user_set_discard;
ist->decoding_needed |= decoding_needed;
ds->streamcopy_needed |= !decoding_needed;
@@ -823,6 +803,8 @@ static int ist_use(InputStream *ist, int decoding_needed)
ret = dec_open(ist, d->sch, ds->sch_idx_dec);
if (ret < 0)
return ret;
+
+ d->have_audio_dec |= is_audio;
}
return 0;
@@ -848,6 +830,7 @@ int ist_output_add(InputStream *ist, OutputStream *ost)
int ist_filter_add(InputStream *ist, InputFilter *ifilter, int is_simple)
{
+ Demuxer *d = demuxer_from_ifile(input_files[ist->file_index]);
DemuxStream *ds = ds_from_ist(ist);
int ret;
@@ -866,6 +849,15 @@ int ist_filter_add(InputStream *ist, InputFilter *ifilter, int is_simple)
if (ret < 0)
return ret;
+ if (ist->dec_ctx->codec_type == AVMEDIA_TYPE_SUBTITLE) {
+ if (!d->pkt_heartbeat) {
+ d->pkt_heartbeat = av_packet_alloc();
+ if (!d->pkt_heartbeat)
+ return AVERROR(ENOMEM);
+ }
+ ds->have_sub2video = 1;
+ }
+
return ds->sch_idx_dec;
}
@@ -1607,8 +1599,6 @@ int ifile_open(const OptionsContext *o, const char *filename, Scheduler *sch)
"since neither -readrate nor -re were given\n");
}
- d->thread_queue_size = o->thread_queue_size;
-
/* Add all the streams from the given input file to the demuxer */
for (int i = 0; i < ic->nb_streams; i++) {
ret = ist_add(o, d, ic->streams[i]);
diff --git a/fftools/ffmpeg_enc.c b/fftools/ffmpeg_enc.c
index 9871381c0e..9383b167f7 100644
--- a/fftools/ffmpeg_enc.c
+++ b/fftools/ffmpeg_enc.c
@@ -41,12 +41,6 @@
#include "libavformat/avformat.h"
struct Encoder {
- AVFrame *sq_frame;
-
- // packet for receiving encoded output
- AVPacket *pkt;
- AVFrame *sub_frame;
-
// combined size of all the packets received from the encoder
uint64_t data_size;
@@ -54,25 +48,9 @@ struct Encoder {
uint64_t packets_encoded;
int opened;
- int finished;
Scheduler *sch;
unsigned sch_idx;
-
- pthread_t thread;
- /**
- * Queue for sending frames from the main thread to
- * the encoder thread.
- */
- ThreadQueue *queue_in;
- /**
- * Queue for sending encoded packets from the encoder thread
- * to the main thread.
- *
- * An empty packet is sent to signal that a previously sent
- * frame has been fully processed.
- */
- ThreadQueue *queue_out;
};
// data that is local to the decoder thread and not visible outside of it
@@ -81,24 +59,6 @@ typedef struct EncoderThread {
AVPacket *pkt;
} EncoderThread;
-static int enc_thread_stop(Encoder *e)
-{
- void *ret;
-
- if (!e->queue_in)
- return 0;
-
- tq_send_finish(e->queue_in, 0);
- tq_receive_finish(e->queue_out, 0);
-
- pthread_join(e->thread, &ret);
-
- tq_free(&e->queue_in);
- tq_free(&e->queue_out);
-
- return (int)(intptr_t)ret;
-}
-
void enc_free(Encoder **penc)
{
Encoder *enc = *penc;
@@ -106,13 +66,6 @@ void enc_free(Encoder **penc)
if (!enc)
return;
- enc_thread_stop(enc);
-
- av_frame_free(&enc->sq_frame);
- av_frame_free(&enc->sub_frame);
-
- av_packet_free(&enc->pkt);
-
av_freep(penc);
}
@@ -127,25 +80,12 @@ int enc_alloc(Encoder **penc, const AVCodec *codec,
if (!enc)
return AVERROR(ENOMEM);
- if (codec->type == AVMEDIA_TYPE_SUBTITLE) {
- enc->sub_frame = av_frame_alloc();
- if (!enc->sub_frame)
- goto fail;
- }
-
- enc->pkt = av_packet_alloc();
- if (!enc->pkt)
- goto fail;
-
enc->sch = sch;
enc->sch_idx = sch_idx;
*penc = enc;
return 0;
-fail:
- enc_free(&enc);
- return AVERROR(ENOMEM);
}
static int hw_device_setup_for_encode(OutputStream *ost, AVBufferRef *frames_ref)
@@ -224,52 +164,9 @@ static int set_encoder_id(OutputFile *of, OutputStream *ost)
return 0;
}
-static int enc_thread_start(OutputStream *ost)
-{
- Encoder *e = ost->enc;
- ObjPool *op;
- int ret = 0;
-
- op = objpool_alloc_frames();
- if (!op)
- return AVERROR(ENOMEM);
-
- e->queue_in = tq_alloc(1, 1, op, frame_move);
- if (!e->queue_in) {
- objpool_free(&op);
- return AVERROR(ENOMEM);
- }
-
- op = objpool_alloc_packets();
- if (!op)
- goto fail;
-
- e->queue_out = tq_alloc(1, 4, op, pkt_move);
- if (!e->queue_out) {
- objpool_free(&op);
- goto fail;
- }
-
- ret = pthread_create(&e->thread, NULL, encoder_thread, ost);
- if (ret) {
- ret = AVERROR(ret);
- av_log(ost, AV_LOG_ERROR, "pthread_create() failed: %s\n",
- av_err2str(ret));
- goto fail;
- }
-
- return 0;
-fail:
- if (ret >= 0)
- ret = AVERROR(ENOMEM);
-
- tq_free(&e->queue_in);
- tq_free(&e->queue_out);
- return ret;
-}
-
-int enc_open(OutputStream *ost, const AVFrame *frame)
+int enc_open(void *opaque, const AVFrame *frame)
{
+ OutputStream *ost = opaque;
InputStream *ist = ost->ist;
Encoder *e = ost->enc;
AVCodecContext *enc_ctx = ost->enc_ctx;
@@ -277,6 +174,7 @@ int enc_open(OutputStream *ost, const AVFrame *frame)
const AVCodec *enc = enc_ctx->codec;
OutputFile *of = output_files[ost->file_index];
FrameData *fd;
+ int frame_samples = 0;
int ret;
if (e->opened)
@@ -420,17 +318,8 @@ int enc_open(OutputStream *ost, const AVFrame *frame)
e->opened = 1;
- if (ost->sq_idx_encode >= 0) {
- e->sq_frame = av_frame_alloc();
- if (!e->sq_frame)
- return AVERROR(ENOMEM);
- }
-
- if (ost->enc_ctx->frame_size) {
- av_assert0(ost->sq_idx_encode >= 0);
- sq_frame_samples(output_files[ost->file_index]->sq_encode,
- ost->sq_idx_encode, ost->enc_ctx->frame_size);
- }
+ if (ost->enc_ctx->frame_size)
+ frame_samples = ost->enc_ctx->frame_size;
ret = check_avoptions(ost->encoder_opts);
if (ret < 0)
@@ -476,18 +365,11 @@ int enc_open(OutputStream *ost, const AVFrame *frame)
if (ost->st->time_base.num <= 0 || ost->st->time_base.den <= 0)
ost->st->time_base = av_add_q(ost->enc_ctx->time_base, (AVRational){0, 1});
- ret = enc_thread_start(ost);
- if (ret < 0) {
- av_log(ost, AV_LOG_ERROR, "Error starting encoder thread: %s\n",
- av_err2str(ret));
- return ret;
- }
-
ret = of_stream_init(of, ost);
if (ret < 0)
return ret;
- return 0;
+ return frame_samples;
}
static int check_recording_time(OutputStream *ost, int64_t ts, AVRational tb)
@@ -514,8 +396,7 @@ static int do_subtitle_out(OutputFile *of, OutputStream *ost, const AVSubtitle *
av_log(ost, AV_LOG_ERROR, "Subtitle packets must have a pts\n");
return exit_on_error ? AVERROR(EINVAL) : 0;
}
- if (ost->finished ||
- (of->start_time != AV_NOPTS_VALUE && sub->pts < of->start_time))
+ if (of->start_time != AV_NOPTS_VALUE && sub->pts < of->start_time)
return 0;
enc = ost->enc_ctx;
@@ -579,7 +460,7 @@ static int do_subtitle_out(OutputFile *of, OutputStream *ost, const AVSubtitle *
}
pkt->dts = pkt->pts;
- ret = tq_send(e->queue_out, 0, pkt);
+ ret = sch_enc_send(e->sch, e->sch_idx, pkt);
if (ret < 0) {
av_packet_unref(pkt);
return ret;
@@ -671,10 +552,13 @@ static int update_video_stats(OutputStream *ost, const AVPacket *pkt, int write_
int64_t frame_number;
double ti1, bitrate, avg_bitrate;
double psnr_val = -1;
+ int quality;
- ost->quality = sd ? AV_RL32(sd) : -1;
+ quality = sd ? AV_RL32(sd) : -1;
pict_type = sd ? sd[4] : AV_PICTURE_TYPE_NONE;
+ atomic_store(&ost->quality, quality);
+
if ((enc->flags & AV_CODEC_FLAG_PSNR) && sd && sd[5]) {
// FIXME the scaling assumes 8bit
double error = AV_RL64(sd + 8) / (enc->width * enc->height * 255.0 * 255.0);
@@ -697,10 +581,10 @@ static int update_video_stats(OutputStream *ost, const AVPacket *pkt, int write_
frame_number = e->packets_encoded;
if (vstats_version <= 1) {
fprintf(vstats_file, "frame= %5"PRId64" q= %2.1f ", frame_number,
- ost->quality / (float)FF_QP2LAMBDA);
+ quality / (float)FF_QP2LAMBDA);
} else {
fprintf(vstats_file, "out= %2d st= %2d frame= %5"PRId64" q= %2.1f ", ost->file_index, ost->index, frame_number,
- ost->quality / (float)FF_QP2LAMBDA);
+ quality / (float)FF_QP2LAMBDA);
}
if (psnr_val >= 0)
@@ -801,18 +685,11 @@ static int encode_frame(OutputFile *of, OutputStream *ost, AVFrame *frame,
av_ts2str(pkt->duration), av_ts2timestr(pkt->duration, &enc->time_base));
}
- if ((ret = trigger_fix_sub_duration_heartbeat(ost, pkt)) < 0) {
- av_log(NULL, AV_LOG_ERROR,
- "Subtitle heartbeat logic failed in %s! (%s)\n",
- __func__, av_err2str(ret));
- return ret;
- }
-
e->data_size += pkt->size;
e->packets_encoded++;
- ret = tq_send(e->queue_out, 0, pkt);
+ ret = sch_enc_send(e->sch, e->sch_idx, pkt);
if (ret < 0) {
av_packet_unref(pkt);
return ret;
@@ -822,50 +699,6 @@ static int encode_frame(OutputFile *of, OutputStream *ost, AVFrame *frame,
av_assert0(0);
}
-static int submit_encode_frame(OutputFile *of, OutputStream *ost,
- AVFrame *frame, AVPacket *pkt)
-{
- Encoder *e = ost->enc;
- int ret;
-
- if (ost->sq_idx_encode < 0)
- return encode_frame(of, ost, frame, pkt);
-
- if (frame) {
- ret = av_frame_ref(e->sq_frame, frame);
- if (ret < 0)
- return ret;
- frame = e->sq_frame;
- }
-
- ret = sq_send(of->sq_encode, ost->sq_idx_encode,
- SQFRAME(frame));
- if (ret < 0) {
- if (frame)
- av_frame_unref(frame);
- if (ret != AVERROR_EOF)
- return ret;
- }
-
- while (1) {
- AVFrame *enc_frame = e->sq_frame;
-
- ret = sq_receive(of->sq_encode, ost->sq_idx_encode,
- SQFRAME(enc_frame));
- if (ret == AVERROR_EOF) {
- enc_frame = NULL;
- } else if (ret < 0) {
- return (ret == AVERROR(EAGAIN)) ? 0 : ret;
- }
-
- ret = encode_frame(of, ost, enc_frame, pkt);
- if (enc_frame)
- av_frame_unref(enc_frame);
- if (ret < 0)
- return ret;
- }
-}
-
static int do_audio_out(OutputFile *of, OutputStream *ost,
AVFrame *frame, AVPacket *pkt)
{
@@ -881,7 +714,7 @@ static int do_audio_out(OutputFile *of, OutputStream *ost,
if (!check_recording_time(ost, frame->pts, frame->time_base))
return AVERROR_EOF;
- return submit_encode_frame(of, ost, frame, pkt);
+ return encode_frame(of, ost, frame, pkt);
}
static enum AVPictureType forced_kf_apply(void *logctx, KeyframeForceCtx *kf,
@@ -949,7 +782,7 @@ static int do_video_out(OutputFile *of, OutputStream *ost,
}
#endif
- return submit_encode_frame(of, ost, in_picture, pkt);
+ return encode_frame(of, ost, in_picture, pkt);
}
static int frame_encode(OutputStream *ost, AVFrame *frame, AVPacket *pkt)
@@ -958,9 +791,12 @@ static int frame_encode(OutputStream *ost, AVFrame *frame, AVPacket *pkt)
enum AVMediaType type = ost->type;
if (type == AVMEDIA_TYPE_SUBTITLE) {
+ const AVSubtitle *subtitle = frame && frame->buf[0] ?
+ (AVSubtitle*)frame->buf[0]->data : NULL;
+
// no flushing for subtitles
- return frame ?
- do_subtitle_out(of, ost, (AVSubtitle*)frame->buf[0]->data, pkt) : 0;
+ return subtitle && subtitle->num_rects ?
+ do_subtitle_out(of, ost, subtitle, pkt) : 0;
}
if (frame) {
@@ -968,7 +804,7 @@ static int frame_encode(OutputStream *ost, AVFrame *frame, AVPacket *pkt)
do_audio_out(of, ost, frame, pkt);
}
- return submit_encode_frame(of, ost, NULL, pkt);
+ return encode_frame(of, ost, NULL, pkt);
}
static void enc_thread_set_name(const OutputStream *ost)
@@ -1009,24 +845,50 @@ fail:
void *encoder_thread(void *arg)
{
OutputStream *ost = arg;
- OutputFile *of = output_files[ost->file_index];
Encoder *e = ost->enc;
EncoderThread et;
int ret = 0, input_status = 0;
+ int name_set = 0;
ret = enc_thread_init(&et);
if (ret < 0)
goto finish;
- enc_thread_set_name(ost);
+ /* Open the subtitle encoders immediately. AVFrame-based encoders
+ * are opened through a callback from the scheduler once they get
+ * their first frame.
+ *
+ * N.B.: because the callback is called from a different thread,
+ * enc_ctx MUST NOT be accessed before sch_enc_receive() returns
+ * for the first time for audio/video. */
+ if (ost->type != AVMEDIA_TYPE_VIDEO && ost->type != AVMEDIA_TYPE_AUDIO) {
+ ret = enc_open(ost, NULL);
+ if (ret < 0)
+ goto finish;
+ }
while (!input_status) {
- int dummy;
-
- input_status = tq_receive(e->queue_in, &dummy, et.frame);
- if (input_status < 0)
+ input_status = sch_enc_receive(e->sch, e->sch_idx, et.frame);
+ if (input_status == AVERROR_EOF) {
av_log(ost, AV_LOG_VERBOSE, "Encoder thread received EOF\n");
+ if (!e->opened) {
+ av_log(ost, AV_LOG_ERROR, "Could not open encoder before EOF\n");
+ ret = AVERROR(EINVAL);
+ goto finish;
+ }
+ } else if (input_status < 0) {
+ ret = input_status;
+ av_log(ost, AV_LOG_ERROR, "Error receiving a frame for encoding: %s\n",
+ av_err2str(ret));
+ goto finish;
+ }
+
+ if (!name_set) {
+ enc_thread_set_name(ost);
+ name_set = 1;
+ }
+
ret = frame_encode(ost, input_status >= 0 ? et.frame : NULL, et.pkt);
av_packet_unref(et.pkt);
@@ -1040,15 +902,6 @@ void *encoder_thread(void *arg)
av_err2str(ret));
break;
}
-
- // signal to the consumer thread that the frame was encoded
- ret = tq_send(e->queue_out, 0, et.pkt);
- if (ret < 0) {
- if (ret != AVERROR_EOF)
- av_log(ost, AV_LOG_ERROR,
- "Error communicating with the main thread\n");
- break;
- }
}
// EOF is normal thread termination
@@ -1056,118 +909,7 @@ void *encoder_thread(void *arg)
ret = 0;
finish:
- if (ost->sq_idx_encode >= 0)
- sq_send(of->sq_encode, ost->sq_idx_encode, SQFRAME(NULL));
-
- tq_receive_finish(e->queue_in, 0);
- tq_send_finish (e->queue_out, 0);
-
enc_thread_uninit(&et);
- av_log(ost, AV_LOG_VERBOSE, "Terminating encoder thread\n");
-
return (void*)(intptr_t)ret;
}
-
-int enc_frame(OutputStream *ost, AVFrame *frame)
-{
- OutputFile *of = output_files[ost->file_index];
- Encoder *e = ost->enc;
- int ret, thread_ret;
-
- ret = enc_open(ost, frame);
- if (ret < 0)
- return ret;
-
- if (!e->queue_in)
- return AVERROR_EOF;
-
- // send the frame/EOF to the encoder thread
- if (frame) {
- ret = tq_send(e->queue_in, 0, frame);
- if (ret < 0)
- goto finish;
- } else
- tq_send_finish(e->queue_in, 0);
-
- // retrieve all encoded data for the frame
- while (1) {
- int dummy;
-
- ret = tq_receive(e->queue_out, &dummy, e->pkt);
- if (ret < 0)
- break;
-
- // frame fully encoded
- if (!e->pkt->data && !e->pkt->side_data_elems)
- return 0;
-
- // process the encoded packet
- ret = of_output_packet(of, ost, e->pkt);
- if (ret < 0)
- goto finish;
- }
-
-finish:
- thread_ret = enc_thread_stop(e);
- if (thread_ret < 0) {
- av_log(ost, AV_LOG_ERROR, "Encoder thread returned error: %s\n",
- av_err2str(thread_ret));
- ret = err_merge(ret, thread_ret);
- }
-
- if (ret < 0 && ret != AVERROR_EOF)
- return ret;
-
- // signal EOF to the muxer
- return of_output_packet(of, ost, NULL);
-}
-
-int enc_subtitle(OutputFile *of, OutputStream *ost, const AVSubtitle *sub)
-{
- Encoder *e = ost->enc;
- AVFrame *f = e->sub_frame;
- int ret;
-
- // XXX the queue for transferring data to the encoder thread runs
- // on AVFrames, so we wrap AVSubtitle in an AVBufferRef and put
- // that inside the frame
- // eventually, subtitles should be switched to use AVFrames natively
- ret = subtitle_wrap_frame(f, sub, 1);
- if (ret < 0)
- return ret;
-
- ret = enc_frame(ost, f);
- av_frame_unref(f);
-
- return ret;
-}
-
-int enc_flush(void)
-{
- int ret = 0;
-
- for (OutputStream *ost = ost_iter(NULL); ost; ost = ost_iter(ost)) {
- OutputFile *of = output_files[ost->file_index];
- if (ost->sq_idx_encode >= 0)
- sq_send(of->sq_encode, ost->sq_idx_encode, SQFRAME(NULL));
- }
-
- for (OutputStream *ost = ost_iter(NULL); ost; ost = ost_iter(ost)) {
- Encoder *e = ost->enc;
- AVCodecContext *enc = ost->enc_ctx;
- int err;
-
- if (!enc || !e->opened ||
- (enc->codec_type != AVMEDIA_TYPE_VIDEO && enc->codec_type != AVMEDIA_TYPE_AUDIO))
- continue;
-
- err = enc_frame(ost, NULL);
- if (err != AVERROR_EOF && ret < 0)
- ret = err_merge(ret, err);
-
- av_assert0(!e->queue_in);
- }
-
- return ret;
-}
diff --git a/fftools/ffmpeg_filter.c b/fftools/ffmpeg_filter.c
index 1b41d32540..fe8787cacb 100644
--- a/fftools/ffmpeg_filter.c
+++ b/fftools/ffmpeg_filter.c
@@ -21,8 +21,6 @@
#include <stdint.h>
#include "ffmpeg.h"
-#include "ffmpeg_utils.h"
-#include "thread_queue.h"
#include "libavfilter/avfilter.h"
#include "libavfilter/buffersink.h"
@@ -53,10 +51,11 @@ typedef struct FilterGraphPriv {
// true when the filtergraph contains only meta filters
// that do not modify the frame data
int is_meta;
+ // source filters are present in the graph
+ int have_sources;
int disable_conversions;
- int nb_inputs_bound;
- int nb_outputs_bound;
+ unsigned nb_outputs_done;
const char *graph_desc;
@@ -67,41 +66,6 @@ typedef struct FilterGraphPriv {
Scheduler *sch;
unsigned sch_idx;
-
- pthread_t thread;
- /**
- * Queue for sending frames from the main thread to the filtergraph. Has
- * nb_inputs+1 streams - the first nb_inputs stream correspond to
- * filtergraph inputs. Frames on those streams may have their opaque set to
- * - FRAME_OPAQUE_EOF: frame contains no data, but pts+timebase of the
- * EOF event for the correspondint stream. Will be immediately followed by
- * this stream being send-closed.
- * - FRAME_OPAQUE_SUB_HEARTBEAT: frame contains no data, but pts+timebase of
- * a subtitle heartbeat event. Will only be sent for sub2video streams.
- *
- * The last stream is "control" - the main thread sends empty AVFrames with
- * opaque set to
- * - FRAME_OPAQUE_REAP_FILTERS: a request to retrieve all frame available
- * from filtergraph outputs. These frames are sent to corresponding
- * streams in queue_out. Finally an empty frame is sent to the control
- * stream in queue_out.
- * - FRAME_OPAQUE_CHOOSE_INPUT: same as above, but in case no frames are
- * available the terminating empty frame's opaque will contain the index+1
- * of the filtergraph input to which more input frames should be supplied.
- */
- ThreadQueue *queue_in;
- /**
- * Queue for sending frames from the filtergraph back to the main thread.
- * Has nb_outputs+1 streams - the first nb_outputs stream correspond to
- * filtergraph outputs.
- *
- * The last stream is "control" - see documentation for queue_in for more
- * details.
- */
- ThreadQueue *queue_out;
- // submitting frames to filter thread returned EOF
- // this only happens on thread exit, so is not per-input
- int eof_in;
} FilterGraphPriv;
static FilterGraphPriv *fgp_from_fg(FilterGraph *fg)
@@ -123,6 +87,9 @@ typedef struct FilterGraphThread {
// The output index is stored in frame opaque.
AVFifo *frame_queue_out;
+ // index of the next input to request from the scheduler
+ unsigned next_in;
+ // set to 1 after at least one frame passed through this output
int got_frame;
// EOF status of each input/output, as received by the thread
@@ -253,9 +220,6 @@ typedef struct OutputFilterPriv {
int64_t ts_offset;
int64_t next_pts;
FPSConvContext fps;
-
- // set to 1 after at least one frame passed through this output
- int got_frame;
} OutputFilterPriv;
static OutputFilterPriv *ofp_from_ofilter(OutputFilter *ofilter)
@@ -653,57 +617,6 @@ static int ifilter_has_all_input_formats(FilterGraph *fg)
static void *filter_thread(void *arg);
-// start the filtering thread once all inputs and outputs are bound
-static int fg_thread_try_start(FilterGraphPriv *fgp)
-{
- FilterGraph *fg = &fgp->fg;
- ObjPool *op;
- int ret = 0;
-
- if (fgp->nb_inputs_bound < fg->nb_inputs ||
- fgp->nb_outputs_bound < fg->nb_outputs)
- return 0;
-
- op = objpool_alloc_frames();
- if (!op)
- return AVERROR(ENOMEM);
-
- fgp->queue_in = tq_alloc(fg->nb_inputs + 1, 1, op, frame_move);
- if (!fgp->queue_in) {
- objpool_free(&op);
- return AVERROR(ENOMEM);
- }
-
- // at least one output is mandatory
- op = objpool_alloc_frames();
- if (!op)
- goto fail;
-
- fgp->queue_out = tq_alloc(fg->nb_outputs + 1, 1, op, frame_move);
- if (!fgp->queue_out) {
- objpool_free(&op);
- goto fail;
- }
-
- ret = pthread_create(&fgp->thread, NULL, filter_thread, fgp);
- if (ret) {
- ret = AVERROR(ret);
- av_log(NULL, AV_LOG_ERROR, "pthread_create() for filtergraph %d failed: %s\n",
- fg->index, av_err2str(ret));
- goto fail;
- }
-
- return 0;
-fail:
- if (ret >= 0)
- ret = AVERROR(ENOMEM);
-
- tq_free(&fgp->queue_in);
- tq_free(&fgp->queue_out);
-
- return ret;
-}
-
static char *describe_filter_link(FilterGraph *fg, AVFilterInOut *inout, int in)
{
AVFilterContext *ctx = inout->filter_ctx;
@@ -729,7 +642,6 @@ static OutputFilter *ofilter_alloc(FilterGraph *fg)
ofilter->graph = fg;
ofp->format = -1;
ofp->index = fg->nb_outputs - 1;
- ofilter->last_pts = AV_NOPTS_VALUE;
return ofilter;
}
@@ -760,10 +672,7 @@ static int ifilter_bind_ist(InputFilter *ifilter, InputStream *ist)
return AVERROR(ENOMEM);
}
- fgp->nb_inputs_bound++;
- av_assert0(fgp->nb_inputs_bound <= ifilter->graph->nb_inputs);
-
- return fg_thread_try_start(fgp);
+ return 0;
}
static int set_channel_layout(OutputFilterPriv *f, OutputStream *ost)
@@ -902,10 +811,7 @@ int ofilter_bind_ost(OutputFilter *ofilter, OutputStream *ost,
if (ret < 0)
return ret;
- fgp->nb_outputs_bound++;
- av_assert0(fgp->nb_outputs_bound <= fg->nb_outputs);
-
- return fg_thread_try_start(fgp);
+ return 0;
}
static InputFilter *ifilter_alloc(FilterGraph *fg)
@@ -935,34 +841,6 @@ static InputFilter *ifilter_alloc(FilterGraph *fg)
return ifilter;
}
-static int fg_thread_stop(FilterGraphPriv *fgp)
-{
- void *ret;
-
- if (!fgp->queue_in)
- return 0;
-
- for (int i = 0; i <= fgp->fg.nb_inputs; i++) {
- InputFilterPriv *ifp = i < fgp->fg.nb_inputs ?
- ifp_from_ifilter(fgp->fg.inputs[i]) : NULL;
-
- if (ifp)
- ifp->eof = 1;
-
- tq_send_finish(fgp->queue_in, i);
- }
-
- for (int i = 0; i <= fgp->fg.nb_outputs; i++)
- tq_receive_finish(fgp->queue_out, i);
-
- pthread_join(fgp->thread, &ret);
-
- tq_free(&fgp->queue_in);
- tq_free(&fgp->queue_out);
-
- return (int)(intptr_t)ret;
-}
-
void fg_free(FilterGraph **pfg)
{
FilterGraph *fg = *pfg;
@@ -972,8 +850,6 @@ void fg_free(FilterGraph **pfg)
return;
fgp = fgp_from_fg(fg);
- fg_thread_stop(fgp);
-
avfilter_graph_free(&fg->graph);
for (int j = 0; j < fg->nb_inputs; j++) {
InputFilter *ifilter = fg->inputs[j];
@@ -1072,6 +948,15 @@ int fg_create(FilterGraph **pfg, char *graph_desc, Scheduler *sch)
if (ret < 0)
goto fail;
+ for (unsigned i = 0; i < graph->nb_filters; i++) {
+ const AVFilter *f = graph->filters[i]->filter;
+ if (!avfilter_filter_pad_count(f, 0) &&
+ !(f->flags & AVFILTER_FLAG_DYNAMIC_INPUTS)) {
+ fgp->have_sources = 1;
+ break;
+ }
+ }
+
for (AVFilterInOut *cur = inputs; cur; cur = cur->next) {
InputFilter *const ifilter = ifilter_alloc(fg);
InputFilterPriv *ifp;
@@ -1800,6 +1685,7 @@ static int configure_filtergraph(FilterGraph *fg, const FilterGraphThread *fgt)
AVBufferRef *hw_device;
AVFilterInOut *inputs, *outputs, *cur;
int ret, i, simple = filtergraph_is_simple(fg);
+ int have_input_eof = 0;
const char *graph_desc = fgp->graph_desc;
cleanup_filtergraph(fg);
@@ -1922,11 +1808,18 @@ static int configure_filtergraph(FilterGraph *fg, const FilterGraphThread *fgt)
ret = av_buffersrc_add_frame(ifp->filter, NULL);
if (ret < 0)
goto fail;
+ have_input_eof = 1;
}
}
- return 0;
+ if (have_input_eof) {
+ // make sure the EOF propagates to the end of the graph
+ ret = avfilter_graph_request_oldest(fg->graph);
+ if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF)
+ goto fail;
+ }
+ return 0;
fail:
cleanup_filtergraph(fg);
return ret;
@@ -2182,7 +2075,7 @@ static void video_sync_process(OutputFilterPriv *ofp, AVFrame *frame,
fps->frames_prev_hist[2]);
if (!*nb_frames && fps->last_dropped) {
- ofilter->nb_frames_drop++;
+ atomic_fetch_add(&ofilter->nb_frames_drop, 1);
fps->last_dropped++;
}
@@ -2260,21 +2153,23 @@ finish:
fps->frames_prev_hist[0] = *nb_frames_prev;
if (*nb_frames_prev == 0 && fps->last_dropped) {
- ofilter->nb_frames_drop++;
+ atomic_fetch_add(&ofilter->nb_frames_drop, 1);
av_log(ost, AV_LOG_VERBOSE,
"*** dropping frame %"PRId64" at ts %"PRId64"\n",
fps->frame_number, fps->last_frame->pts);
}
if (*nb_frames > (*nb_frames_prev && fps->last_dropped) + (*nb_frames > *nb_frames_prev)) {
+ uint64_t nb_frames_dup;
if (*nb_frames > dts_error_threshold * 30) {
av_log(ost, AV_LOG_ERROR, "%"PRId64" frame duplication too large, skipping\n", *nb_frames - 1);
- ofilter->nb_frames_drop++;
+ atomic_fetch_add(&ofilter->nb_frames_drop, 1);
*nb_frames = 0;
return;
}
- ofilter->nb_frames_dup += *nb_frames - (*nb_frames_prev && fps->last_dropped) - (*nb_frames > *nb_frames_prev);
+ nb_frames_dup = atomic_fetch_add(&ofilter->nb_frames_dup,
+ *nb_frames - (*nb_frames_prev && fps->last_dropped) - (*nb_frames > *nb_frames_prev));
av_log(ost, AV_LOG_VERBOSE, "*** %"PRId64" dup!\n", *nb_frames - 1);
- if (ofilter->nb_frames_dup > fps->dup_warning) {
+ if (nb_frames_dup > fps->dup_warning) {
av_log(ost, AV_LOG_WARNING, "More than %"PRIu64" frames duplicated\n", fps->dup_warning);
fps->dup_warning *= 10;
}
@@ -2284,8 +2179,57 @@ finish:
fps->dropped_keyframe |= fps->last_dropped && (frame->flags & AV_FRAME_FLAG_KEY);
}
+static int close_output(OutputFilterPriv *ofp, FilterGraphThread *fgt)
+{
+ FilterGraphPriv *fgp = fgp_from_fg(ofp->ofilter.graph);
+ int ret;
+
+ // we are finished and no frames were ever seen at this output,
+ // at least initialize the encoder with a dummy frame
+ if (!fgt->got_frame) {
+ AVFrame *frame = fgt->frame;
+ FrameData *fd;
+
+ frame->time_base = ofp->tb_out;
+ frame->format = ofp->format;
+
+ frame->width = ofp->width;
+ frame->height = ofp->height;
+ frame->sample_aspect_ratio = ofp->sample_aspect_ratio;
+
+ frame->sample_rate = ofp->sample_rate;
+ if (ofp->ch_layout.nb_channels) {
+ ret = av_channel_layout_copy(&frame->ch_layout, &ofp->ch_layout);
+ if (ret < 0)
+ return ret;
+ }
+
+ fd = frame_data(frame);
+ if (!fd)
+ return AVERROR(ENOMEM);
+
+ fd->frame_rate_filter = ofp->fps.framerate;
+
+ av_assert0(!frame->buf[0]);
+
+ av_log(ofp->ofilter.ost, AV_LOG_WARNING,
+ "No filtered frames for output stream, trying to "
+ "initialize anyway.\n");
+
+ ret = sch_filter_send(fgp->sch, fgp->sch_idx, ofp->index, frame);
+ if (ret < 0) {
+ av_frame_unref(frame);
+ return ret;
+ }
+ }
+
+ fgt->eof_out[ofp->index] = 1;
+
+ return sch_filter_send(fgp->sch, fgp->sch_idx, ofp->index, NULL);
+}
+
static int fg_output_frame(OutputFilterPriv *ofp, FilterGraphThread *fgt,
- AVFrame *frame, int buffer)
+ AVFrame *frame)
{
FilterGraphPriv *fgp = fgp_from_fg(ofp->ofilter.graph);
AVFrame *frame_prev = ofp->fps.last_frame;
@@ -2332,28 +2276,17 @@ static int fg_output_frame(OutputFilterPriv *ofp, FilterGraphThread *fgt,
frame_out = frame;
}
- if (buffer) {
- AVFrame *f = av_frame_alloc();
-
- if (!f) {
- av_frame_unref(frame_out);
- return AVERROR(ENOMEM);
- }
-
- av_frame_move_ref(f, frame_out);
- f->opaque = (void*)(intptr_t)ofp->index;
-
- ret = av_fifo_write(fgt->frame_queue_out, &f, 1);
- if (ret < 0) {
- av_frame_free(&f);
- return AVERROR(ENOMEM);
- }
- } else {
- // return the frame to the main thread
- ret = tq_send(fgp->queue_out, ofp->index, frame_out);
+ {
+ // send the frame to consumers
+ ret = sch_filter_send(fgp->sch, fgp->sch_idx, ofp->index, frame_out);
if (ret < 0) {
av_frame_unref(frame_out);
- fgt->eof_out[ofp->index] = 1;
+
+ if (!fgt->eof_out[ofp->index]) {
+ fgt->eof_out[ofp->index] = 1;
+ fgp->nb_outputs_done++;
+ }
+
return ret == AVERROR_EOF ? 0 : ret;
}
}
@@ -2374,16 +2307,14 @@ static int fg_output_frame(OutputFilterPriv *ofp, FilterGraphThread *fgt,
av_frame_move_ref(frame_prev, frame);
}
- if (!frame) {
- tq_send_finish(fgp->queue_out, ofp->index);
- fgt->eof_out[ofp->index] = 1;
- }
+ if (!frame)
+ return close_output(ofp, fgt);
return 0;
}
static int fg_output_step(OutputFilterPriv *ofp, FilterGraphThread *fgt,
- AVFrame *frame, int buffer)
+ AVFrame *frame)
{
FilterGraphPriv *fgp = fgp_from_fg(ofp->ofilter.graph);
OutputStream *ost = ofp->ofilter.ost;
@@ -2393,8 +2324,8 @@ static int fg_output_step(OutputFilterPriv *ofp, FilterGraphThread *fgt,
ret = av_buffersink_get_frame_flags(filter, frame,
AV_BUFFERSINK_FLAG_NO_REQUEST);
- if (ret == AVERROR_EOF && !buffer && !fgt->eof_out[ofp->index]) {
- ret = fg_output_frame(ofp, fgt, NULL, buffer);
+ if (ret == AVERROR_EOF && !fgt->eof_out[ofp->index]) {
+ ret = fg_output_frame(ofp, fgt, NULL);
return (ret < 0) ? ret : 1;
} else if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
return 1;
@@ -2448,7 +2379,7 @@ static int fg_output_step(OutputFilterPriv *ofp, FilterGraphThread *fgt,
fd->frame_rate_filter = ofp->fps.framerate;
}
- ret = fg_output_frame(ofp, fgt, frame, buffer);
+ ret = fg_output_frame(ofp, fgt, frame);
av_frame_unref(frame);
if (ret < 0)
return ret;
@@ -2456,44 +2387,68 @@ static int fg_output_step(OutputFilterPriv *ofp, FilterGraphThread *fgt,
return 0;
}
-/* retrieve all frames available at filtergraph outputs and either send them to
- * the main thread (buffer=0) or buffer them for later (buffer=1) */
+/* retrieve all frames available at filtergraph outputs
+ * and send them to consumers */
static int read_frames(FilterGraph *fg, FilterGraphThread *fgt,
- AVFrame *frame, int buffer)
+ AVFrame *frame)
{
FilterGraphPriv *fgp = fgp_from_fg(fg);
- int ret = 0;
+ int did_step = 0;
- if (!fg->graph)
- return 0;
-
- // process buffered frames
- if (!buffer) {
- AVFrame *f;
-
- while (av_fifo_read(fgt->frame_queue_out, &f, 1) >= 0) {
- int out_idx = (intptr_t)f->opaque;
- f->opaque = NULL;
- ret = tq_send(fgp->queue_out, out_idx, f);
- av_frame_free(&f);
- if (ret < 0 && ret != AVERROR_EOF)
- return ret;
+ // graph not configured, just select the input to request
+ if (!fg->graph) {
+ for (int i = 0; i < fg->nb_inputs; i++) {
+ InputFilterPriv *ifp = ifp_from_ifilter(fg->inputs[i]);
+ if (ifp->format < 0 && !fgt->eof_in[i]) {
+ fgt->next_in = i;
+ return 0;
+ }
}
+
+ // This state - graph is not configured, but all inputs are either
+ // initialized or EOF - should be unreachable because sending EOF to a
+ // filter without even a fallback format should fail
+ av_assert0(0);
+ return AVERROR_BUG;
}
- /* Reap all buffers present in the buffer sinks */
- for (int i = 0; i < fg->nb_outputs; i++) {
- OutputFilterPriv *ofp = ofp_from_ofilter(fg->outputs[i]);
- int ret = 0;
+ while (1) {
+ int ret;
- while (!ret) {
- ret = fg_output_step(ofp, fgt, frame, buffer);
- if (ret < 0)
- return ret;
+ ret = avfilter_graph_request_oldest(fg->graph);
+ if (ret == AVERROR(EAGAIN)) {
+ fgt->next_in = choose_input(fg, fgt);
+ break;
+ } else if (ret < 0) {
+ if (ret == AVERROR_EOF)
+ av_log(fg, AV_LOG_VERBOSE, "Filtergraph returned EOF, finishing\n");
+ else
+ av_log(fg, AV_LOG_ERROR,
+ "Error requesting a frame from the filtergraph: %s\n",
+ av_err2str(ret));
+ return ret;
}
- }
+ fgt->next_in = fg->nb_inputs;
- return 0;
+ // return after one iteration, so that scheduler can rate-control us
+ if (did_step && fgp->have_sources)
+ return 0;
+
+ /* Reap all buffers present in the buffer sinks */
+ for (int i = 0; i < fg->nb_outputs; i++) {
+ OutputFilterPriv *ofp = ofp_from_ofilter(fg->outputs[i]);
+
+ ret = 0;
+ while (!ret) {
+ ret = fg_output_step(ofp, fgt, frame);
+ if (ret < 0)
+ return ret;
+ }
+ }
+ did_step = 1;
+ }
+
+ return (fgp->nb_outputs_done == fg->nb_outputs) ? AVERROR_EOF : 0;
}
static void sub2video_heartbeat(InputFilter *ifilter, int64_t pts, AVRational tb)
@@ -2571,6 +2526,9 @@ static int send_eof(FilterGraphThread *fgt, InputFilter *ifilter,
InputFilterPriv *ifp = ifp_from_ifilter(ifilter);
int ret;
+ if (fgt->eof_in[ifp->index])
+ return 0;
+
fgt->eof_in[ifp->index] = 1;
if (ifp->filter) {
@@ -2672,7 +2630,7 @@ static int send_frame(FilterGraph *fg, FilterGraphThread *fgt,
return ret;
}
- ret = fg->graph ? read_frames(fg, fgt, tmp, 1) : 0;
+ ret = fg->graph ? read_frames(fg, fgt, tmp) : 0;
av_frame_free(&tmp);
if (ret < 0)
return ret;
@@ -2705,80 +2663,6 @@ static int send_frame(FilterGraph *fg, FilterGraphThread *fgt,
return 0;
}
-static int msg_process(FilterGraphPriv *fgp, FilterGraphThread *fgt,
- AVFrame *frame)
-{
- const enum FrameOpaque msg = (intptr_t)frame->opaque;
- FilterGraph *fg = &fgp->fg;
- int graph_eof = 0;
- int ret;
-
- frame->opaque = NULL;
- av_assert0(msg > 0);
- av_assert0(msg == FRAME_OPAQUE_SEND_COMMAND || !frame->buf[0]);
-
- if (!fg->graph) {
- // graph not configured yet, ignore all messages other than choosing
- // the input to read from
- if (msg != FRAME_OPAQUE_CHOOSE_INPUT)
- goto done;
-
- for (int i = 0; i < fg->nb_inputs; i++) {
- InputFilter *ifilter = fg->inputs[i];
- InputFilterPriv *ifp = ifp_from_ifilter(ifilter);
- if (ifp->format < 0 && !fgt->eof_in[i]) {
- frame->opaque = (void*)(intptr_t)(i + 1);
- goto done;
- }
- }
-
- // This state - graph is not configured, but all inputs are either
- // initialized or EOF - should be unreachable because sending EOF to a
- // filter without even a fallback format should fail
- av_assert0(0);
- return AVERROR_BUG;
- }
-
- if (msg == FRAME_OPAQUE_SEND_COMMAND) {
- FilterCommand *fc = (FilterCommand*)frame->buf[0]->data;
- send_command(fg, fc->time, fc->target, fc->command, fc->arg, fc->all_filters);
- av_frame_unref(frame);
- goto done;
- }
-
- if (msg == FRAME_OPAQUE_CHOOSE_INPUT) {
- ret = avfilter_graph_request_oldest(fg->graph);
-
- graph_eof = ret == AVERROR_EOF;
-
- if (ret == AVERROR(EAGAIN)) {
- frame->opaque = (void*)(intptr_t)(choose_input(fg, fgt) + 1);
- goto done;
- } else if (ret < 0 && !graph_eof)
- return ret;
- }
-
- ret = read_frames(fg, fgt, frame, 0);
- if (ret < 0) {
- av_log(fg, AV_LOG_ERROR, "Error sending filtered frames for encoding\n");
- return ret;
- }
-
- if (graph_eof)
- return AVERROR_EOF;
-
- // signal to the main thread that we are done processing the message
-done:
- ret = tq_send(fgp->queue_out, fg->nb_outputs, frame);
- if (ret < 0) {
- if (ret != AVERROR_EOF)
- av_log(fg, AV_LOG_ERROR, "Error communicating with the main thread\n");
- return ret;
- }
-
- return 0;
-}
-
static void fg_thread_set_name(const FilterGraph *fg)
{
char name[16];
@@ -2865,294 +2749,94 @@ static void *filter_thread(void *arg)
InputFilter *ifilter;
InputFilterPriv *ifp;
enum FrameOpaque o;
- int input_idx, eof_frame;
+ unsigned input_idx = fgt.next_in;
- input_status = tq_receive(fgp->queue_in, &input_idx, fgt.frame);
- if (input_idx < 0 ||
- (input_idx == fg->nb_inputs && input_status < 0)) {
+ input_status = sch_filter_receive(fgp->sch, fgp->sch_idx,
+ &input_idx, fgt.frame);
+ if (input_status == AVERROR_EOF) {
av_log(fg, AV_LOG_VERBOSE, "Filtering thread received EOF\n");
break;
+ } else if (input_status == AVERROR(EAGAIN)) {
+ // should only happen when we didn't request any input
+ av_assert0(input_idx == fg->nb_inputs);
+ goto read_frames;
}
+ av_assert0(input_status >= 0);
+
o = (intptr_t)fgt.frame->opaque;
// message on the control stream
if (input_idx == fg->nb_inputs) {
- ret = msg_process(fgp, &fgt, fgt.frame);
- if (ret < 0)
- goto finish;
+ FilterCommand *fc;
+ av_assert0(o == FRAME_OPAQUE_SEND_COMMAND && fgt.frame->buf[0]);
+
+ fc = (FilterCommand*)fgt.frame->buf[0]->data;
+ send_command(fg, fc->time, fc->target, fc->command, fc->arg,
+ fc->all_filters);
+ av_frame_unref(fgt.frame);
continue;
}
// we received an input frame or EOF
ifilter = fg->inputs[input_idx];
ifp = ifp_from_ifilter(ifilter);
- eof_frame = input_status >= 0 && o == FRAME_OPAQUE_EOF;
+
if (ifp->type_src == AVMEDIA_TYPE_SUBTITLE) {
int hb_frame = input_status >= 0 && o == FRAME_OPAQUE_SUB_HEARTBEAT;
ret = sub2video_frame(ifilter, (fgt.frame->buf[0] || hb_frame) ? fgt.frame : NULL);
- } else if (input_status >= 0 && fgt.frame->buf[0]) {
+ } else if (fgt.frame->buf[0]) {
ret = send_frame(fg, &fgt, ifilter, fgt.frame);
} else {
- int64_t pts = input_status >= 0 ? fgt.frame->pts : AV_NOPTS_VALUE;
- AVRational tb = input_status >= 0 ? fgt.frame->time_base : (AVRational){ 1, 1 };
- ret = send_eof(&fgt, ifilter, pts, tb);
+ av_assert1(o == FRAME_OPAQUE_EOF);
+ ret = send_eof(&fgt, ifilter, fgt.frame->pts, fgt.frame->time_base);
}
av_frame_unref(fgt.frame);
if (ret < 0)
+ goto finish;
+
+read_frames:
+        // retrieve all newly available frames
+ ret = read_frames(fg, &fgt, fgt.frame);
+ if (ret == AVERROR_EOF) {
+ av_log(fg, AV_LOG_VERBOSE, "All consumers returned EOF\n");
break;
-
- if (eof_frame) {
- // an EOF frame is immediately followed by sender closing
- // the corresponding stream, so retrieve that event
- input_status = tq_receive(fgp->queue_in, &input_idx, fgt.frame);
- av_assert0(input_status == AVERROR_EOF && input_idx == ifp->index);
- }
-
- // signal to the main thread that we are done
- ret = tq_send(fgp->queue_out, fg->nb_outputs, fgt.frame);
- if (ret < 0) {
- if (ret == AVERROR_EOF)
- break;
-
- av_log(fg, AV_LOG_ERROR, "Error communicating with the main thread\n");
+ } else if (ret < 0) {
+ av_log(fg, AV_LOG_ERROR, "Error sending frames to consumers: %s\n",
+ av_err2str(ret));
goto finish;
}
}
+ for (unsigned i = 0; i < fg->nb_outputs; i++) {
+ OutputFilterPriv *ofp = ofp_from_ofilter(fg->outputs[i]);
+
+ if (fgt.eof_out[i])
+ continue;
+
+ ret = fg_output_frame(ofp, &fgt, NULL);
+ if (ret < 0)
+ goto finish;
+ }
+
finish:
// EOF is normal termination
if (ret == AVERROR_EOF)
ret = 0;
- for (int i = 0; i <= fg->nb_inputs; i++)
- tq_receive_finish(fgp->queue_in, i);
- for (int i = 0; i <= fg->nb_outputs; i++)
- tq_send_finish(fgp->queue_out, i);
-
fg_thread_uninit(&fgt);
- av_log(fg, AV_LOG_VERBOSE, "Terminating filtering thread\n");
-
return (void*)(intptr_t)ret;
}
-static int thread_send_frame(FilterGraphPriv *fgp, InputFilter *ifilter,
- AVFrame *frame, enum FrameOpaque type)
-{
- InputFilterPriv *ifp = ifp_from_ifilter(ifilter);
- int output_idx, ret;
-
- if (ifp->eof) {
- av_frame_unref(frame);
- return AVERROR_EOF;
- }
-
- frame->opaque = (void*)(intptr_t)type;
-
- ret = tq_send(fgp->queue_in, ifp->index, frame);
- if (ret < 0) {
- ifp->eof = 1;
- av_frame_unref(frame);
- return ret;
- }
-
- if (type == FRAME_OPAQUE_EOF)
- tq_send_finish(fgp->queue_in, ifp->index);
-
- // wait for the frame to be processed
- ret = tq_receive(fgp->queue_out, &output_idx, frame);
- av_assert0(output_idx == fgp->fg.nb_outputs || ret == AVERROR_EOF);
-
- return ret;
-}
-
-int ifilter_send_frame(InputFilter *ifilter, AVFrame *frame, int keep_reference)
-{
- FilterGraphPriv *fgp = fgp_from_fg(ifilter->graph);
- int ret;
-
- if (keep_reference) {
- ret = av_frame_ref(fgp->frame, frame);
- if (ret < 0)
- return ret;
- } else
- av_frame_move_ref(fgp->frame, frame);
-
- return thread_send_frame(fgp, ifilter, fgp->frame, 0);
-}
-
-int ifilter_send_eof(InputFilter *ifilter, int64_t pts, AVRational tb)
-{
- FilterGraphPriv *fgp = fgp_from_fg(ifilter->graph);
- int ret;
-
- fgp->frame->pts = pts;
- fgp->frame->time_base = tb;
-
- ret = thread_send_frame(fgp, ifilter, fgp->frame, FRAME_OPAQUE_EOF);
-
- return ret == AVERROR_EOF ? 0 : ret;
-}
-
-void ifilter_sub2video_heartbeat(InputFilter *ifilter, int64_t pts, AVRational tb)
-{
- FilterGraphPriv *fgp = fgp_from_fg(ifilter->graph);
-
- fgp->frame->pts = pts;
- fgp->frame->time_base = tb;
-
- thread_send_frame(fgp, ifilter, fgp->frame, FRAME_OPAQUE_SUB_HEARTBEAT);
-}
-
-int fg_transcode_step(FilterGraph *graph, InputStream **best_ist)
-{
- FilterGraphPriv *fgp = fgp_from_fg(graph);
- int ret, got_frames = 0;
-
- if (fgp->eof_in)
- return AVERROR_EOF;
-
- // signal to the filtering thread to return all frames it can
- av_assert0(!fgp->frame->buf[0]);
- fgp->frame->opaque = (void*)(intptr_t)(best_ist ?
- FRAME_OPAQUE_CHOOSE_INPUT :
- FRAME_OPAQUE_REAP_FILTERS);
-
- ret = tq_send(fgp->queue_in, graph->nb_inputs, fgp->frame);
- if (ret < 0) {
- fgp->eof_in = 1;
- goto finish;
- }
-
- while (1) {
- OutputFilter *ofilter;
- OutputFilterPriv *ofp;
- OutputStream *ost;
- int output_idx;
-
- ret = tq_receive(fgp->queue_out, &output_idx, fgp->frame);
-
- // EOF on the whole queue or the control stream
- if (output_idx < 0 ||
- (ret < 0 && output_idx == graph->nb_outputs))
- goto finish;
-
- // EOF for a specific stream
- if (ret < 0) {
- ofilter = graph->outputs[output_idx];
- ofp = ofp_from_ofilter(ofilter);
-
- // we are finished and no frames were ever seen at this output,
- // at least initialize the encoder with a dummy frame
- if (!ofp->got_frame) {
- AVFrame *frame = fgp->frame;
- FrameData *fd;
-
- frame->time_base = ofp->tb_out;
- frame->format = ofp->format;
-
- frame->width = ofp->width;
- frame->height = ofp->height;
- frame->sample_aspect_ratio = ofp->sample_aspect_ratio;
-
- frame->sample_rate = ofp->sample_rate;
- if (ofp->ch_layout.nb_channels) {
- ret = av_channel_layout_copy(&frame->ch_layout, &ofp->ch_layout);
- if (ret < 0)
- return ret;
- }
-
- fd = frame_data(frame);
- if (!fd)
- return AVERROR(ENOMEM);
-
- fd->frame_rate_filter = ofp->fps.framerate;
-
- av_assert0(!frame->buf[0]);
-
- av_log(ofilter->ost, AV_LOG_WARNING,
- "No filtered frames for output stream, trying to "
- "initialize anyway.\n");
-
- enc_open(ofilter->ost, frame);
- av_frame_unref(frame);
- }
-
- close_output_stream(graph->outputs[output_idx]->ost);
- continue;
- }
-
- // request was fully processed by the filtering thread,
- // return the input stream to read from, if needed
- if (output_idx == graph->nb_outputs) {
- int input_idx = (intptr_t)fgp->frame->opaque - 1;
- av_assert0(input_idx <= graph->nb_inputs);
-
- if (best_ist) {
- *best_ist = (input_idx >= 0 && input_idx < graph->nb_inputs) ?
- ifp_from_ifilter(graph->inputs[input_idx])->ist : NULL;
-
- if (input_idx < 0 && !got_frames) {
- for (int i = 0; i < graph->nb_outputs; i++)
- graph->outputs[i]->ost->unavailable = 1;
- }
- }
- break;
- }
-
- // got a frame from the filtering thread, send it for encoding
- ofilter = graph->outputs[output_idx];
- ost = ofilter->ost;
- ofp = ofp_from_ofilter(ofilter);
-
- if (ost->finished) {
- av_frame_unref(fgp->frame);
- tq_receive_finish(fgp->queue_out, output_idx);
- continue;
- }
-
- if (fgp->frame->pts != AV_NOPTS_VALUE) {
- ofilter->last_pts = av_rescale_q(fgp->frame->pts,
- fgp->frame->time_base,
- AV_TIME_BASE_Q);
- }
-
- ret = enc_frame(ost, fgp->frame);
- av_frame_unref(fgp->frame);
- if (ret < 0)
- goto finish;
-
- ofp->got_frame = 1;
- got_frames = 1;
- }
-
-finish:
- if (ret < 0) {
- fgp->eof_in = 1;
- for (int i = 0; i < graph->nb_outputs; i++)
- close_output_stream(graph->outputs[i]->ost);
- }
-
- return ret;
-}
-
-int reap_filters(FilterGraph *fg, int flush)
-{
- return fg_transcode_step(fg, NULL);
-}
-
void fg_send_command(FilterGraph *fg, double time, const char *target,
const char *command, const char *arg, int all_filters)
{
FilterGraphPriv *fgp = fgp_from_fg(fg);
AVBufferRef *buf;
FilterCommand *fc;
- int output_idx, ret;
-
- if (!fgp->queue_in)
- return;
fc = av_mallocz(sizeof(*fc));
if (!fc)
@@ -3178,13 +2862,5 @@ void fg_send_command(FilterGraph *fg, double time, const char *target,
fgp->frame->buf[0] = buf;
fgp->frame->opaque = (void*)(intptr_t)FRAME_OPAQUE_SEND_COMMAND;
- ret = tq_send(fgp->queue_in, fg->nb_inputs, fgp->frame);
- if (ret < 0) {
- av_frame_unref(fgp->frame);
- return;
- }
-
- // wait for the frame to be processed
- ret = tq_receive(fgp->queue_out, &output_idx, fgp->frame);
- av_assert0(output_idx == fgp->fg.nb_outputs || ret == AVERROR_EOF);
+ sch_filter_command(fgp->sch, fgp->sch_idx, fgp->frame);
}
diff --git a/fftools/ffmpeg_mux.c b/fftools/ffmpeg_mux.c
index ef5c2f60e0..067dc65d4e 100644
--- a/fftools/ffmpeg_mux.c
+++ b/fftools/ffmpeg_mux.c
@@ -23,16 +23,13 @@
#include "ffmpeg.h"
#include "ffmpeg_mux.h"
#include "ffmpeg_utils.h"
-#include "objpool.h"
#include "sync_queue.h"
-#include "thread_queue.h"
#include "libavutil/fifo.h"
#include "libavutil/intreadwrite.h"
#include "libavutil/log.h"
#include "libavutil/mem.h"
#include "libavutil/timestamp.h"
-#include "libavutil/thread.h"
#include "libavcodec/packet.h"
@@ -41,10 +38,9 @@
typedef struct MuxThreadContext {
AVPacket *pkt;
+ AVPacket *fix_sub_duration_pkt;
} MuxThreadContext;
-int want_sdp = 1;
-
static Muxer *mux_from_of(OutputFile *of)
{
return (Muxer*)of;
@@ -207,14 +203,41 @@ static int sync_queue_process(Muxer *mux, OutputStream *ost, AVPacket *pkt, int
return 0;
}
+static int of_streamcopy(OutputStream *ost, AVPacket *pkt);
+
/* apply the output bitstream filters */
-static int mux_packet_filter(Muxer *mux, OutputStream *ost,
- AVPacket *pkt, int *stream_eof)
+static int mux_packet_filter(Muxer *mux, MuxThreadContext *mt,
+ OutputStream *ost, AVPacket *pkt, int *stream_eof)
{
MuxStream *ms = ms_from_ost(ost);
const char *err_msg;
int ret = 0;
+ if (pkt && !ost->enc) {
+ ret = of_streamcopy(ost, pkt);
+ if (ret == AVERROR(EAGAIN))
+ return 0;
+ else if (ret == AVERROR_EOF) {
+ av_packet_unref(pkt);
+ pkt = NULL;
+ ret = 0;
+ } else if (ret < 0)
+ goto fail;
+ }
+
+ // emit heartbeat for -fix_sub_duration;
+    // we are only interested in heartbeats on random access points.
+ if (pkt && (pkt->flags & AV_PKT_FLAG_KEY)) {
+ mt->fix_sub_duration_pkt->opaque = (void*)(intptr_t)PKT_OPAQUE_FIX_SUB_DURATION;
+ mt->fix_sub_duration_pkt->pts = pkt->pts;
+ mt->fix_sub_duration_pkt->time_base = pkt->time_base;
+
+ ret = sch_mux_sub_heartbeat(mux->sch, mux->sch_idx, ms->sch_idx,
+ mt->fix_sub_duration_pkt);
+ if (ret < 0)
+ goto fail;
+ }
+
if (ms->bsf_ctx) {
int bsf_eof = 0;
@@ -278,6 +301,7 @@ static void thread_set_name(OutputFile *of)
static void mux_thread_uninit(MuxThreadContext *mt)
{
av_packet_free(&mt->pkt);
+ av_packet_free(&mt->fix_sub_duration_pkt);
memset(mt, 0, sizeof(*mt));
}
@@ -290,6 +314,10 @@ static int mux_thread_init(MuxThreadContext *mt)
if (!mt->pkt)
goto fail;
+ mt->fix_sub_duration_pkt = av_packet_alloc();
+ if (!mt->fix_sub_duration_pkt)
+ goto fail;
+
return 0;
fail:
@@ -316,19 +344,22 @@ void *muxer_thread(void *arg)
OutputStream *ost;
int stream_idx, stream_eof = 0;
- ret = tq_receive(mux->tq, &stream_idx, mt.pkt);
+ ret = sch_mux_receive(mux->sch, of->index, mt.pkt);
+ stream_idx = mt.pkt->stream_index;
if (stream_idx < 0) {
av_log(mux, AV_LOG_VERBOSE, "All streams finished\n");
ret = 0;
break;
}
- ost = of->streams[stream_idx];
- ret = mux_packet_filter(mux, ost, ret < 0 ? NULL : mt.pkt, &stream_eof);
+ ost = of->streams[mux->sch_stream_idx[stream_idx]];
+ mt.pkt->stream_index = ost->index;
+
+ ret = mux_packet_filter(mux, &mt, ost, ret < 0 ? NULL : mt.pkt, &stream_eof);
av_packet_unref(mt.pkt);
if (ret == AVERROR_EOF) {
if (stream_eof) {
- tq_receive_finish(mux->tq, stream_idx);
+ sch_mux_receive_finish(mux->sch, of->index, stream_idx);
} else {
av_log(mux, AV_LOG_VERBOSE, "Muxer returned EOF\n");
ret = 0;
@@ -343,243 +374,55 @@ void *muxer_thread(void *arg)
finish:
mux_thread_uninit(&mt);
- for (unsigned int i = 0; i < mux->fc->nb_streams; i++)
- tq_receive_finish(mux->tq, i);
-
- av_log(mux, AV_LOG_VERBOSE, "Terminating muxer thread\n");
-
return (void*)(intptr_t)ret;
}
-static int thread_submit_packet(Muxer *mux, OutputStream *ost, AVPacket *pkt)
-{
- int ret = 0;
-
- if (!pkt || ost->finished & MUXER_FINISHED)
- goto finish;
-
- ret = tq_send(mux->tq, ost->index, pkt);
- if (ret < 0)
- goto finish;
-
- return 0;
-
-finish:
- if (pkt)
- av_packet_unref(pkt);
-
- ost->finished |= MUXER_FINISHED;
- tq_send_finish(mux->tq, ost->index);
- return ret == AVERROR_EOF ? 0 : ret;
-}
-
-static int queue_packet(OutputStream *ost, AVPacket *pkt)
-{
- MuxStream *ms = ms_from_ost(ost);
- AVPacket *tmp_pkt = NULL;
- int ret;
-
- if (!av_fifo_can_write(ms->muxing_queue)) {
- size_t cur_size = av_fifo_can_read(ms->muxing_queue);
- size_t pkt_size = pkt ? pkt->size : 0;
- unsigned int are_we_over_size =
- (ms->muxing_queue_data_size + pkt_size) > ms->muxing_queue_data_threshold;
- size_t limit = are_we_over_size ? ms->max_muxing_queue_size : SIZE_MAX;
- size_t new_size = FFMIN(2 * cur_size, limit);
-
- if (new_size <= cur_size) {
- av_log(ost, AV_LOG_ERROR,
- "Too many packets buffered for output stream %d:%d.\n",
- ost->file_index, ost->st->index);
- return AVERROR(ENOSPC);
- }
- ret = av_fifo_grow2(ms->muxing_queue, new_size - cur_size);
- if (ret < 0)
- return ret;
- }
-
- if (pkt) {
- ret = av_packet_make_refcounted(pkt);
- if (ret < 0)
- return ret;
-
- tmp_pkt = av_packet_alloc();
- if (!tmp_pkt)
- return AVERROR(ENOMEM);
-
- av_packet_move_ref(tmp_pkt, pkt);
- ms->muxing_queue_data_size += tmp_pkt->size;
- }
- av_fifo_write(ms->muxing_queue, &tmp_pkt, 1);
-
- return 0;
-}
-
-static int submit_packet(Muxer *mux, AVPacket *pkt, OutputStream *ost)
-{
- int ret;
-
- if (mux->tq) {
- return thread_submit_packet(mux, ost, pkt);
- } else {
- /* the muxer is not initialized yet, buffer the packet */
- ret = queue_packet(ost, pkt);
- if (ret < 0) {
- if (pkt)
- av_packet_unref(pkt);
- return ret;
- }
- }
-
- return 0;
-}
-
-int of_output_packet(OutputFile *of, OutputStream *ost, AVPacket *pkt)
-{
- Muxer *mux = mux_from_of(of);
- int ret = 0;
-
- if (pkt && pkt->dts != AV_NOPTS_VALUE)
- ost->last_mux_dts = av_rescale_q(pkt->dts, pkt->time_base, AV_TIME_BASE_Q);
-
- ret = submit_packet(mux, pkt, ost);
- if (ret < 0) {
- av_log(ost, AV_LOG_ERROR, "Error submitting a packet to the muxer: %s",
- av_err2str(ret));
- return ret;
- }
-
- return 0;
-}
-
-int of_streamcopy(OutputStream *ost, const AVPacket *pkt, int64_t dts)
+static int of_streamcopy(OutputStream *ost, AVPacket *pkt)
{
OutputFile *of = output_files[ost->file_index];
MuxStream *ms = ms_from_ost(ost);
+ DemuxPktData *pd = pkt->opaque_ref ? (DemuxPktData*)pkt->opaque_ref->data : NULL;
+ int64_t dts = pd ? pd->dts_est : AV_NOPTS_VALUE;
int64_t start_time = (of->start_time == AV_NOPTS_VALUE) ? 0 : of->start_time;
int64_t ts_offset;
- AVPacket *opkt = ms->pkt;
- int ret;
-
- av_packet_unref(opkt);
if (of->recording_time != INT64_MAX &&
dts >= of->recording_time + start_time)
- pkt = NULL;
-
- // EOF: flush output bitstream filters.
- if (!pkt)
- return of_output_packet(of, ost, NULL);
+ return AVERROR_EOF;
if (!ms->streamcopy_started && !(pkt->flags & AV_PKT_FLAG_KEY) &&
!ms->copy_initial_nonkeyframes)
- return 0;
+ return AVERROR(EAGAIN);
if (!ms->streamcopy_started) {
if (!ms->copy_prior_start &&
(pkt->pts == AV_NOPTS_VALUE ?
dts < ms->ts_copy_start :
pkt->pts < av_rescale_q(ms->ts_copy_start, AV_TIME_BASE_Q, pkt->time_base)))
- return 0;
+ return AVERROR(EAGAIN);
if (of->start_time != AV_NOPTS_VALUE && dts < of->start_time)
- return 0;
+ return AVERROR(EAGAIN);
}
- ret = av_packet_ref(opkt, pkt);
- if (ret < 0)
- return ret;
-
- ts_offset = av_rescale_q(start_time, AV_TIME_BASE_Q, opkt->time_base);
+ ts_offset = av_rescale_q(start_time, AV_TIME_BASE_Q, pkt->time_base);
if (pkt->pts != AV_NOPTS_VALUE)
- opkt->pts -= ts_offset;
+ pkt->pts -= ts_offset;
if (pkt->dts == AV_NOPTS_VALUE) {
- opkt->dts = av_rescale_q(dts, AV_TIME_BASE_Q, opkt->time_base);
+ pkt->dts = av_rescale_q(dts, AV_TIME_BASE_Q, pkt->time_base);
} else if (ost->st->codecpar->codec_type == AVMEDIA_TYPE_AUDIO) {
- opkt->pts = opkt->dts - ts_offset;
- }
- opkt->dts -= ts_offset;
-
- {
- int ret = trigger_fix_sub_duration_heartbeat(ost, pkt);
- if (ret < 0) {
- av_log(NULL, AV_LOG_ERROR,
- "Subtitle heartbeat logic failed in %s! (%s)\n",
- __func__, av_err2str(ret));
- return ret;
- }
+ pkt->pts = pkt->dts - ts_offset;
}
- ret = of_output_packet(of, ost, opkt);
- if (ret < 0)
- return ret;
+ pkt->dts -= ts_offset;
ms->streamcopy_started = 1;
return 0;
}
-static int thread_stop(Muxer *mux)
-{
- void *ret;
-
- if (!mux || !mux->tq)
- return 0;
-
- for (unsigned int i = 0; i < mux->fc->nb_streams; i++)
- tq_send_finish(mux->tq, i);
-
- pthread_join(mux->thread, &ret);
-
- tq_free(&mux->tq);
-
- return (int)(intptr_t)ret;
-}
-
-static int thread_start(Muxer *mux)
-{
- AVFormatContext *fc = mux->fc;
- ObjPool *op;
- int ret;
-
- op = objpool_alloc_packets();
- if (!op)
- return AVERROR(ENOMEM);
-
- mux->tq = tq_alloc(fc->nb_streams, mux->thread_queue_size, op, pkt_move);
- if (!mux->tq) {
- objpool_free(&op);
- return AVERROR(ENOMEM);
- }
-
- ret = pthread_create(&mux->thread, NULL, muxer_thread, (void*)mux);
- if (ret) {
- tq_free(&mux->tq);
- return AVERROR(ret);
- }
-
- /* flush the muxing queues */
- for (int i = 0; i < fc->nb_streams; i++) {
- OutputStream *ost = mux->of.streams[i];
- MuxStream *ms = ms_from_ost(ost);
- AVPacket *pkt;
-
- while (av_fifo_read(ms->muxing_queue, &pkt, 1) >= 0) {
- ret = thread_submit_packet(mux, ost, pkt);
- if (pkt) {
- ms->muxing_queue_data_size -= pkt->size;
- av_packet_free(&pkt);
- }
- if (ret < 0)
- return ret;
- }
- }
-
- return 0;
-}
-
int print_sdp(const char *filename);
int print_sdp(const char *filename)
@@ -590,11 +433,6 @@ int print_sdp(const char *filename)
AVIOContext *sdp_pb;
AVFormatContext **avc;
- for (i = 0; i < nb_output_files; i++) {
- if (!mux_from_of(output_files[i])->header_written)
- return 0;
- }
-
avc = av_malloc_array(nb_output_files, sizeof(*avc));
if (!avc)
return AVERROR(ENOMEM);
@@ -629,25 +467,17 @@ int print_sdp(const char *filename)
avio_closep(&sdp_pb);
}
- // SDP successfully written, allow muxer threads to start
- ret = 1;
-
fail:
av_freep(&avc);
return ret;
}
-int mux_check_init(Muxer *mux)
+int mux_check_init(void *arg)
{
+ Muxer *mux = arg;
OutputFile *of = &mux->of;
AVFormatContext *fc = mux->fc;
- int ret, i;
-
- for (i = 0; i < fc->nb_streams; i++) {
- OutputStream *ost = of->streams[i];
- if (!ost->initialized)
- return 0;
- }
+ int ret;
ret = avformat_write_header(fc, &mux->opts);
if (ret < 0) {
@@ -659,27 +489,7 @@ int mux_check_init(Muxer *mux)
mux->header_written = 1;
av_dump_format(fc, of->index, fc->url, 1);
- nb_output_dumped++;
-
- if (sdp_filename || want_sdp) {
- ret = print_sdp(sdp_filename);
- if (ret < 0) {
- av_log(NULL, AV_LOG_ERROR, "Error writing the SDP.\n");
- return ret;
- } else if (ret == 1) {
- /* SDP is written only after all the muxers are ready, so now we
- * start ALL the threads */
- for (i = 0; i < nb_output_files; i++) {
- ret = thread_start(mux_from_of(output_files[i]));
- if (ret < 0)
- return ret;
- }
- }
- } else {
- ret = thread_start(mux_from_of(of));
- if (ret < 0)
- return ret;
- }
+ atomic_fetch_add(&nb_output_dumped, 1);
return 0;
}
@@ -736,9 +546,10 @@ int of_stream_init(OutputFile *of, OutputStream *ost)
ost->st->time_base);
}
- ost->initialized = 1;
+ if (ms->sch_idx >= 0)
+ return sch_mux_stream_ready(mux->sch, of->index, ms->sch_idx);
- return mux_check_init(mux);
+ return 0;
}
static int check_written(OutputFile *of)
@@ -852,15 +663,13 @@ int of_write_trailer(OutputFile *of)
AVFormatContext *fc = mux->fc;
int ret, mux_result = 0;
- if (!mux->tq) {
+ if (!mux->header_written) {
av_log(mux, AV_LOG_ERROR,
"Nothing was written into output file, because "
"at least one of its streams received no packets.\n");
return AVERROR(EINVAL);
}
- mux_result = thread_stop(mux);
-
ret = av_write_trailer(fc);
if (ret < 0) {
av_log(mux, AV_LOG_ERROR, "Error writing trailer: %s\n", av_err2str(ret));
@@ -905,13 +714,6 @@ static void ost_free(OutputStream **post)
ost->logfile = NULL;
}
- if (ms->muxing_queue) {
- AVPacket *pkt;
- while (av_fifo_read(ms->muxing_queue, &pkt, 1) >= 0)
- av_packet_free(&pkt);
- av_fifo_freep2(&ms->muxing_queue);
- }
-
avcodec_parameters_free(&ost->par_in);
av_bsf_free(&ms->bsf_ctx);
@@ -976,8 +778,6 @@ void of_free(OutputFile **pof)
return;
mux = mux_from_of(of);
- thread_stop(mux);
-
sq_free(&of->sq_encode);
sq_free(&mux->sq_mux);
diff --git a/fftools/ffmpeg_mux.h b/fftools/ffmpeg_mux.h
index eee2b2cb07..5d7cf3fa76 100644
--- a/fftools/ffmpeg_mux.h
+++ b/fftools/ffmpeg_mux.h
@@ -25,7 +25,6 @@
#include <stdint.h>
#include "ffmpeg_sched.h"
-#include "thread_queue.h"
#include "libavformat/avformat.h"
@@ -33,7 +32,6 @@
#include "libavutil/dict.h"
#include "libavutil/fifo.h"
-#include "libavutil/thread.h"
typedef struct MuxStream {
OutputStream ost;
@@ -41,9 +39,6 @@ typedef struct MuxStream {
// name used for logging
char log_name[32];
- /* the packets are buffered here until the muxer is ready to be initialized */
- AVFifo *muxing_queue;
-
AVBSFContext *bsf_ctx;
AVPacket *bsf_pkt;
@@ -57,17 +52,6 @@ typedef struct MuxStream {
int64_t max_frames;
- /*
- * The size of the AVPackets' buffers in queue.
- * Updated when a packet is either pushed or pulled from the queue.
- */
- size_t muxing_queue_data_size;
-
- int max_muxing_queue_size;
-
- /* Threshold after which max_muxing_queue_size will be in effect */
- size_t muxing_queue_data_threshold;
-
// timestamp from which the streamcopied streams should start,
// in AV_TIME_BASE_Q;
// everything before it should be discarded
@@ -106,9 +90,6 @@ typedef struct Muxer {
int *sch_stream_idx;
int nb_sch_stream_idx;
- pthread_t thread;
- ThreadQueue *tq;
-
AVDictionary *opts;
int thread_queue_size;
@@ -122,10 +103,7 @@ typedef struct Muxer {
AVPacket *sq_pkt;
} Muxer;
-/* whether we want to print an SDP, set in of_open() */
-extern int want_sdp;
-
-int mux_check_init(Muxer *mux);
+int mux_check_init(void *arg);
static MuxStream *ms_from_ost(OutputStream *ost)
{
diff --git a/fftools/ffmpeg_mux_init.c b/fftools/ffmpeg_mux_init.c
index 534b4379c7..6459296ab0 100644
--- a/fftools/ffmpeg_mux_init.c
+++ b/fftools/ffmpeg_mux_init.c
@@ -924,13 +924,6 @@ static int new_stream_audio(Muxer *mux, const OptionsContext *o,
return 0;
}
-static int new_stream_attachment(Muxer *mux, const OptionsContext *o,
- OutputStream *ost)
-{
- ost->finished = 1;
- return 0;
-}
-
static int new_stream_subtitle(Muxer *mux, const OptionsContext *o,
OutputStream *ost)
{
@@ -1168,9 +1161,6 @@ static int ost_add(Muxer *mux, const OptionsContext *o, enum AVMediaType type,
if (!ost->par_in)
return AVERROR(ENOMEM);
- ms->muxing_queue = av_fifo_alloc2(8, sizeof(AVPacket*), 0);
- if (!ms->muxing_queue)
- return AVERROR(ENOMEM);
ms->last_mux_dts = AV_NOPTS_VALUE;
ost->st = st;
@@ -1190,7 +1180,8 @@ static int ost_add(Muxer *mux, const OptionsContext *o, enum AVMediaType type,
if (!ost->enc_ctx)
return AVERROR(ENOMEM);
- ret = sch_add_enc(mux->sch, encoder_thread, ost, NULL);
+ ret = sch_add_enc(mux->sch, encoder_thread, ost,
+ ost->type == AVMEDIA_TYPE_SUBTITLE ? NULL : enc_open);
if (ret < 0)
return ret;
ms->sch_idx_enc = ret;
@@ -1414,9 +1405,6 @@ static int ost_add(Muxer *mux, const OptionsContext *o, enum AVMediaType type,
sch_mux_stream_buffering(mux->sch, mux->sch_idx, ms->sch_idx,
max_muxing_queue_size, muxing_queue_data_threshold);
-
- ms->max_muxing_queue_size = max_muxing_queue_size;
- ms->muxing_queue_data_threshold = muxing_queue_data_threshold;
}
MATCH_PER_STREAM_OPT(bits_per_raw_sample, i, ost->bits_per_raw_sample,
@@ -1434,8 +1422,6 @@ static int ost_add(Muxer *mux, const OptionsContext *o, enum AVMediaType type,
if (ost->enc_ctx && av_get_exact_bits_per_sample(ost->enc_ctx->codec_id) == 24)
av_dict_set(&ost->swr_opts, "output_sample_bits", "24", 0);
- ost->last_mux_dts = AV_NOPTS_VALUE;
-
MATCH_PER_STREAM_OPT(copy_initial_nonkeyframes, i,
ms->copy_initial_nonkeyframes, oc, st);
@@ -1443,7 +1429,6 @@ static int ost_add(Muxer *mux, const OptionsContext *o, enum AVMediaType type,
case AVMEDIA_TYPE_VIDEO: ret = new_stream_video (mux, o, ost); break;
case AVMEDIA_TYPE_AUDIO: ret = new_stream_audio (mux, o, ost); break;
case AVMEDIA_TYPE_SUBTITLE: ret = new_stream_subtitle (mux, o, ost); break;
- case AVMEDIA_TYPE_ATTACHMENT: ret = new_stream_attachment(mux, o, ost); break;
}
if (ret < 0)
return ret;
@@ -1938,7 +1923,6 @@ static int setup_sync_queues(Muxer *mux, AVFormatContext *oc, int64_t buf_size_u
MuxStream *ms = ms_from_ost(ost);
enum AVMediaType type = ost->type;
- ost->sq_idx_encode = -1;
ost->sq_idx_mux = -1;
nb_interleaved += IS_INTERLEAVED(type);
@@ -1961,11 +1945,17 @@ static int setup_sync_queues(Muxer *mux, AVFormatContext *oc, int64_t buf_size_u
* - at least one encoded audio/video stream is frame-limited, since
* that has similar semantics to 'shortest'
* - at least one audio encoder requires constant frame sizes
+ *
+ * Note that encoding sync queues are handled in the scheduler, because
+ * different encoders run in different threads and need external
+ * synchronization, while muxer sync queues can be handled inside the muxer
*/
if ((of->shortest && nb_av_enc > 1) || limit_frames_av_enc || nb_audio_fs) {
- of->sq_encode = sq_alloc(SYNC_QUEUE_FRAMES, buf_size_us, mux);
- if (!of->sq_encode)
- return AVERROR(ENOMEM);
+ int sq_idx, ret;
+
+ sq_idx = sch_add_sq_enc(mux->sch, buf_size_us, mux);
+ if (sq_idx < 0)
+ return sq_idx;
for (int i = 0; i < oc->nb_streams; i++) {
OutputStream *ost = of->streams[i];
@@ -1975,13 +1965,11 @@ static int setup_sync_queues(Muxer *mux, AVFormatContext *oc, int64_t buf_size_u
if (!IS_AV_ENC(ost, type))
continue;
- ost->sq_idx_encode = sq_add_stream(of->sq_encode,
- of->shortest || ms->max_frames < INT64_MAX);
- if (ost->sq_idx_encode < 0)
- return ost->sq_idx_encode;
-
- if (ms->max_frames != INT64_MAX)
- sq_limit_frames(of->sq_encode, ost->sq_idx_encode, ms->max_frames);
+ ret = sch_sq_add_enc(mux->sch, sq_idx, ms->sch_idx_enc,
+ of->shortest || ms->max_frames < INT64_MAX,
+ ms->max_frames);
+ if (ret < 0)
+ return ret;
}
}
@@ -2652,23 +2640,6 @@ static int validate_enc_avopt(Muxer *mux, const AVDictionary *codec_avopt)
return 0;
}
-static int init_output_stream_nofilter(OutputStream *ost)
-{
- int ret = 0;
-
- if (ost->enc_ctx) {
- ret = enc_open(ost, NULL);
- if (ret < 0)
- return ret;
- } else {
- ret = of_stream_init(output_files[ost->file_index], ost);
- if (ret < 0)
- return ret;
- }
-
- return ret;
-}
-
static const char *output_file_item_name(void *obj)
{
const Muxer *mux = obj;
@@ -2751,8 +2722,6 @@ int of_open(const OptionsContext *o, const char *filename, Scheduler *sch)
av_strlcat(mux->log_name, "/", sizeof(mux->log_name));
av_strlcat(mux->log_name, oc->oformat->name, sizeof(mux->log_name));
- if (strcmp(oc->oformat->name, "rtp"))
- want_sdp = 0;
of->format = oc->oformat;
if (recording_time != INT64_MAX)
@@ -2768,7 +2737,7 @@ int of_open(const OptionsContext *o, const char *filename, Scheduler *sch)
AVFMT_FLAG_BITEXACT);
}
- err = sch_add_mux(sch, muxer_thread, NULL, mux,
+ err = sch_add_mux(sch, muxer_thread, mux_check_init, mux,
!strcmp(oc->oformat->name, "rtp"));
if (err < 0)
return err;
@@ -2854,26 +2823,15 @@ int of_open(const OptionsContext *o, const char *filename, Scheduler *sch)
of->url = filename;
- /* initialize stream copy and subtitle/data streams.
- * Encoded AVFrame based streams will get initialized when the first AVFrame
- * is received in do_video_out
- */
+ /* initialize streamcopy streams. */
for (int i = 0; i < of->nb_streams; i++) {
OutputStream *ost = of->streams[i];
- if (ost->filter)
- continue;
-
- err = init_output_stream_nofilter(ost);
- if (err < 0)
- return err;
- }
-
- /* write the header for files with no streams */
- if (of->format->flags & AVFMT_NOSTREAMS && oc->nb_streams == 0) {
- int ret = mux_check_init(mux);
- if (ret < 0)
- return ret;
+ if (!ost->enc) {
+ err = of_stream_init(of, ost);
+ if (err < 0)
+ return err;
+ }
}
return 0;
diff --git a/fftools/ffmpeg_opt.c b/fftools/ffmpeg_opt.c
index d463306546..6177a96a4e 100644
--- a/fftools/ffmpeg_opt.c
+++ b/fftools/ffmpeg_opt.c
@@ -64,7 +64,6 @@ const char *const opt_name_top_field_first[] = {"top", NULL};
HWDevice *filter_hw_device;
char *vstats_filename;
-char *sdp_filename;
float audio_drift_threshold = 0.1;
float dts_delta_threshold = 10;
@@ -580,9 +579,8 @@ fail:
static int opt_sdp_file(void *optctx, const char *opt, const char *arg)
{
- av_free(sdp_filename);
- sdp_filename = av_strdup(arg);
- return 0;
+ Scheduler *sch = optctx;
+ return sch_sdp_filename(sch, arg);
}
#if CONFIG_VAAPI
diff --git a/tests/ref/fate/ffmpeg-fix_sub_duration_heartbeat b/tests/ref/fate/ffmpeg-fix_sub_duration_heartbeat
index 957a410921..bc9b833799 100644
--- a/tests/ref/fate/ffmpeg-fix_sub_duration_heartbeat
+++ b/tests/ref/fate/ffmpeg-fix_sub_duration_heartbeat
@@ -1,48 +1,40 @@
1
-00:00:00,968 --> 00:00:01,001
+00:00:00,968 --> 00:00:01,168
<font face="Monospace">{\an7}(</font>
2
-00:00:01,001 --> 00:00:01,168
-<font face="Monospace">{\an7}(</font>
-
-3
00:00:01,168 --> 00:00:01,368
<font face="Monospace">{\an7}(<i> inaudibl</i></font>
-4
+3
00:00:01,368 --> 00:00:01,568
<font face="Monospace">{\an7}(<i> inaudible radio chat</i></font>
-5
+4
00:00:01,568 --> 00:00:02,002
<font face="Monospace">{\an7}(<i> inaudible radio chatter</i> )</font>
+5
+00:00:02,002 --> 00:00:03,103
+<font face="Monospace">{\an7}(<i> inaudible radio chatter</i> )</font>
+
6
-00:00:02,002 --> 00:00:03,003
-<font face="Monospace">{\an7}(<i> inaudible radio chatter</i> )</font>
-
-7
-00:00:03,003 --> 00:00:03,103
-<font face="Monospace">{\an7}(<i> inaudible radio chatter</i> )</font>
-
-8
00:00:03,103 --> 00:00:03,303
-<font face="Monospace">{\an7}(<i> inaudible radio chatter</i> )
+<font face="Monospace">{\an7}(<i> inaudible radio chatter</i> )
>></font>
-9
+7
00:00:03,303 --> 00:00:03,503
-<font face="Monospace">{\an7}(<i> inaudible radio chatter</i> )
+<font face="Monospace">{\an7}(<i> inaudible radio chatter</i> )
>> Safety rema</font>
-10
+8
00:00:03,504 --> 00:00:03,704
-<font face="Monospace">{\an7}(<i> inaudible radio chatter</i> )
+<font face="Monospace">{\an7}(<i> inaudible radio chatter</i> )
>> Safety remains our numb</font>
-11
+9
00:00:03,704 --> 00:00:04,004
-<font face="Monospace">{\an7}(<i> inaudible radio chatter</i> )
+<font face="Monospace">{\an7}(<i> inaudible radio chatter</i> )
>> Safety remains our number one</font>
--
2.42.0
* Re: [FFmpeg-devel] [PATCH 01/13] lavfi/buffersink: avoid leaking peeked_frame on uninit
2023-11-23 19:14 ` [FFmpeg-devel] [PATCH 01/13] lavfi/buffersink: avoid leaking peeked_frame on uninit Anton Khirnov
@ 2023-11-23 22:16 ` Paul B Mahol
2023-11-27 9:45 ` Nicolas George
1 sibling, 0 replies; 49+ messages in thread
From: Paul B Mahol @ 2023-11-23 22:16 UTC (permalink / raw)
To: FFmpeg development discussions and patches
LGTM
* Re: [FFmpeg-devel] [PATCH 13/13] fftools/ffmpeg: convert to a threaded architecture
2023-11-23 19:15 ` [FFmpeg-devel] [PATCH 13/13] fftools/ffmpeg: convert to a threaded architecture Anton Khirnov
@ 2023-11-24 22:26 ` Michael Niedermayer
2023-11-25 20:32 ` [FFmpeg-devel] [PATCH 13/13 v2] " Anton Khirnov
0 siblings, 1 reply; 49+ messages in thread
From: Michael Niedermayer @ 2023-11-24 22:26 UTC (permalink / raw)
To: FFmpeg development discussions and patches
On Thu, Nov 23, 2023 at 08:15:08PM +0100, Anton Khirnov wrote:
> Change the main loop and every component (demuxers, decoders, filters,
> encoders, muxers) to use the previously added transcode scheduler. Every
> instance of every such component was already running in a separate
> thread, but now they can actually run in parallel.
>
> Changes the results of ffmpeg-fix_sub_duration_heartbeat - tested by
> JEEB to be more correct and deterministic.
> ---
> fftools/ffmpeg.c | 374 +--------
> fftools/ffmpeg.h | 97 +--
> fftools/ffmpeg_dec.c | 321 ++------
> fftools/ffmpeg_demux.c | 268 ++++---
> fftools/ffmpeg_enc.c | 368 ++-------
> fftools/ffmpeg_filter.c | 720 +++++-------------
> fftools/ffmpeg_mux.c | 324 ++------
> fftools/ffmpeg_mux.h | 24 +-
> fftools/ffmpeg_mux_init.c | 88 +--
> fftools/ffmpeg_opt.c | 6 +-
> .../fate/ffmpeg-fix_sub_duration_heartbeat | 36 +-
> 11 files changed, 598 insertions(+), 2028 deletions(-)
this (and many other invocations) runs into an infinite loop:
./ffmpeg -f lavfi -i testsrc2 -bsf:v noise -bitexact -t 2 -y /tmp/y.y4m
thx
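Hangs like this are easier to catch in a test harness by bounding the run time. A minimal sketch using coreutils `timeout`, with `sleep 10` standing in for the hanging ffmpeg command (the real invocation would go in its place):

```shell
# 'sleep 10' stands in for an ffmpeg command that never terminates;
# timeout kills it after 2 seconds and reports exit status 124.
timeout 2 sleep 10
echo "exit status: $?"
```

An exit status of 124 from `timeout` distinguishes a hang from an ordinary failure, which keeps its own non-zero status.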
[...]
--
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
The worst form of inequality is to try to make unequal things equal.
-- Aristotle
* Re: [FFmpeg-devel] [PATCH 05/13] fftools/ffmpeg_filter: move filtering to a separate thread
2023-11-23 19:15 ` [FFmpeg-devel] [PATCH 05/13] fftools/ffmpeg_filter: move filtering to a separate thread Anton Khirnov
@ 2023-11-24 22:56 ` Michael Niedermayer
2023-11-25 20:18 ` [FFmpeg-devel] [PATCH 05/13 v2] " Anton Khirnov
2023-11-25 20:23 ` [FFmpeg-devel] [PATCH 05/13] " James Almer
0 siblings, 2 replies; 49+ messages in thread
From: Michael Niedermayer @ 2023-11-24 22:56 UTC (permalink / raw)
To: FFmpeg development discussions and patches
On Thu, Nov 23, 2023 at 08:15:00PM +0100, Anton Khirnov wrote:
> As previously for decoding, this is merely "scaffolding" for moving to a
> fully threaded architecture and does not yet make filtering truly
> parallel - the main thread will currently wait for the filtering thread
> to finish its work before continuing. That will change in future commits
> after encoders are also moved to threads and a thread-aware scheduler is
> added.
> ---
> fftools/ffmpeg.h | 9 +-
> fftools/ffmpeg_dec.c | 39 +-
> fftools/ffmpeg_filter.c | 825 ++++++++++++++++++++++++++++++++++------
> 3 files changed, 730 insertions(+), 143 deletions(-)
This seems to cause a new assertion failure with this:
echo 'Call 0 ping' | ./ffmpeg -nostats -i mm-short.mpg -vf nullsink,color=green -bitexact -vframes 1 -f null -
Press [q] to stop, [?] for help
Enter command: <target>|all <time>|-1 <command>[ <argument>]
Assertion !fgp->frame->buf[0] failed at fftools/ffmpeg_filter.c:2987
Aborted (core dumped)
[...]
--
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
Nations do behave wisely once they have exhausted all other alternatives.
-- Abba Eban
* [FFmpeg-devel] [PATCH 05/13 v2] fftools/ffmpeg_filter: move filtering to a separate thread
2023-11-24 22:56 ` Michael Niedermayer
@ 2023-11-25 20:18 ` Anton Khirnov
2023-11-25 20:23 ` [FFmpeg-devel] [PATCH 05/13] " James Almer
1 sibling, 0 replies; 49+ messages in thread
From: Anton Khirnov @ 2023-11-25 20:18 UTC (permalink / raw)
To: ffmpeg-devel
As previously for decoding, this is merely "scaffolding" for moving to a
fully threaded architecture and does not yet make filtering truly
parallel - the main thread will currently wait for the filtering thread
to finish its work before continuing. That will change in future commits
after encoders are also moved to threads and a thread-aware scheduler is
added.
---
fftools/ffmpeg.h | 9 +-
fftools/ffmpeg_dec.c | 39 +-
fftools/ffmpeg_filter.c | 827 ++++++++++++++++++++++++++++++++++------
3 files changed, 732 insertions(+), 143 deletions(-)
diff --git a/fftools/ffmpeg.h b/fftools/ffmpeg.h
index 1f11a2f002..f50222472c 100644
--- a/fftools/ffmpeg.h
+++ b/fftools/ffmpeg.h
@@ -80,6 +80,14 @@ enum HWAccelID {
HWACCEL_GENERIC,
};
+enum FrameOpaque {
+ FRAME_OPAQUE_REAP_FILTERS = 1,
+ FRAME_OPAQUE_CHOOSE_INPUT,
+ FRAME_OPAQUE_SUB_HEARTBEAT,
+ FRAME_OPAQUE_EOF,
+ FRAME_OPAQUE_SEND_COMMAND,
+};
+
typedef struct HWDevice {
const char *name;
enum AVHWDeviceType type;
@@ -730,7 +738,6 @@ const FrameData *frame_data_c(AVFrame *frame);
int ifilter_send_frame(InputFilter *ifilter, AVFrame *frame, int keep_reference);
int ifilter_send_eof(InputFilter *ifilter, int64_t pts, AVRational tb);
-int ifilter_sub2video(InputFilter *ifilter, const AVFrame *frame);
void ifilter_sub2video_heartbeat(InputFilter *ifilter, int64_t pts, AVRational tb);
/**
diff --git a/fftools/ffmpeg_dec.c b/fftools/ffmpeg_dec.c
index 517d6b3ced..b60bad1220 100644
--- a/fftools/ffmpeg_dec.c
+++ b/fftools/ffmpeg_dec.c
@@ -147,11 +147,12 @@ fail:
static int send_frame_to_filters(InputStream *ist, AVFrame *decoded_frame)
{
- int i, ret;
+ int i, ret = 0;
- av_assert1(ist->nb_filters > 0); /* ensure ret is initialized */
for (i = 0; i < ist->nb_filters; i++) {
- ret = ifilter_send_frame(ist->filters[i], decoded_frame, i < ist->nb_filters - 1);
+ ret = ifilter_send_frame(ist->filters[i], decoded_frame,
+ i < ist->nb_filters - 1 ||
+ ist->dec->type == AVMEDIA_TYPE_SUBTITLE);
if (ret == AVERROR_EOF)
ret = 0; /* ignore */
if (ret < 0) {
@@ -380,15 +381,6 @@ static int video_frame_process(InputStream *ist, AVFrame *frame)
return 0;
}
-static void sub2video_flush(InputStream *ist)
-{
- for (int i = 0; i < ist->nb_filters; i++) {
- int ret = ifilter_sub2video(ist->filters[i], NULL);
- if (ret != AVERROR_EOF && ret < 0)
- av_log(NULL, AV_LOG_WARNING, "Flush the frame error.\n");
- }
-}
-
static int process_subtitle(InputStream *ist, AVFrame *frame)
{
Decoder *d = ist->decoder;
@@ -426,14 +418,9 @@ static int process_subtitle(InputStream *ist, AVFrame *frame)
if (!subtitle)
return 0;
- for (int i = 0; i < ist->nb_filters; i++) {
- ret = ifilter_sub2video(ist->filters[i], frame);
- if (ret < 0) {
- av_log(ist, AV_LOG_ERROR, "Error sending a subtitle for filtering: %s\n",
- av_err2str(ret));
- return ret;
- }
- }
+ ret = send_frame_to_filters(ist, frame);
+ if (ret < 0)
+ return ret;
subtitle = (AVSubtitle*)frame->buf[0]->data;
if (!subtitle->num_rects)
@@ -824,14 +811,10 @@ finish:
return ret;
// signal EOF to our downstreams
- if (ist->dec->type == AVMEDIA_TYPE_SUBTITLE)
- sub2video_flush(ist);
- else {
- ret = send_filter_eof(ist);
- if (ret < 0) {
- av_log(NULL, AV_LOG_FATAL, "Error marking filters as finished\n");
- return ret;
- }
+ ret = send_filter_eof(ist);
+ if (ret < 0) {
+ av_log(NULL, AV_LOG_FATAL, "Error marking filters as finished\n");
+ return ret;
}
return AVERROR_EOF;
diff --git a/fftools/ffmpeg_filter.c b/fftools/ffmpeg_filter.c
index 69c28a6b2b..d845448332 100644
--- a/fftools/ffmpeg_filter.c
+++ b/fftools/ffmpeg_filter.c
@@ -21,6 +21,8 @@
#include <stdint.h>
#include "ffmpeg.h"
+#include "ffmpeg_utils.h"
+#include "thread_queue.h"
#include "libavfilter/avfilter.h"
#include "libavfilter/buffersink.h"
@@ -53,12 +55,50 @@ typedef struct FilterGraphPriv {
int is_meta;
int disable_conversions;
+ int nb_inputs_bound;
+ int nb_outputs_bound;
+
const char *graph_desc;
// frame for temporarily holding output from the filtergraph
AVFrame *frame;
// frame for sending output to the encoder
AVFrame *frame_enc;
+
+ pthread_t thread;
+ /**
+ * Queue for sending frames from the main thread to the filtergraph. Has
+ * nb_inputs+1 streams - the first nb_inputs streams correspond to
+ * filtergraph inputs. Frames on those streams may have their opaque set to
+ * - FRAME_OPAQUE_EOF: frame contains no data, but pts+timebase of the
+ * EOF event for the corresponding stream. Will be immediately followed by
+ * this stream being send-closed.
+ * - FRAME_OPAQUE_SUB_HEARTBEAT: frame contains no data, but pts+timebase of
+ * a subtitle heartbeat event. Will only be sent for sub2video streams.
+ *
+ * The last stream is "control" - the main thread sends empty AVFrames with
+ * opaque set to
+ * - FRAME_OPAQUE_REAP_FILTERS: a request to retrieve all frames available
+ * from filtergraph outputs. These frames are sent to corresponding
+ * streams in queue_out. Finally an empty frame is sent to the control
+ * stream in queue_out.
+ * - FRAME_OPAQUE_CHOOSE_INPUT: same as above, but in case no frames are
+ * available, the terminating empty frame's opaque will contain the index+1
+ * of the filtergraph input to which more input frames should be supplied.
+ */
+ ThreadQueue *queue_in;
+ /**
+ * Queue for sending frames from the filtergraph back to the main thread.
+ * Has nb_outputs+1 streams - the first nb_outputs streams correspond to
+ * filtergraph outputs.
+ *
+ * The last stream is "control" - see documentation for queue_in for more
+ * details.
+ */
+ ThreadQueue *queue_out;
+ // submitting frames to filter thread returned EOF
+ // this only happens on thread exit, so is not per-input
+ int eof_in;
} FilterGraphPriv;
static FilterGraphPriv *fgp_from_fg(FilterGraph *fg)
@@ -71,6 +111,22 @@ static const FilterGraphPriv *cfgp_from_cfg(const FilterGraph *fg)
return (const FilterGraphPriv*)fg;
}
+// data that is local to the filter thread and not visible outside of it
+typedef struct FilterGraphThread {
+ AVFrame *frame;
+
+ // Temporary buffer for output frames, since on filtergraph reset
+ // we cannot send them to encoders immediately.
+ // The output index is stored in frame opaque.
+ AVFifo *frame_queue_out;
+
+ int got_frame;
+
+ // EOF status of each input/output, as received by the thread
+ uint8_t *eof_in;
+ uint8_t *eof_out;
+} FilterGraphThread;
+
typedef struct InputFilterPriv {
InputFilter ifilter;
@@ -204,7 +260,25 @@ static OutputFilterPriv *ofp_from_ofilter(OutputFilter *ofilter)
return (OutputFilterPriv*)ofilter;
}
-static int configure_filtergraph(FilterGraph *fg);
+typedef struct FilterCommand {
+ char *target;
+ char *command;
+ char *arg;
+
+ double time;
+ int all_filters;
+} FilterCommand;
+
+static void filter_command_free(void *opaque, uint8_t *data)
+{
+ FilterCommand *fc = (FilterCommand*)data;
+
+ av_freep(&fc->target);
+ av_freep(&fc->command);
+ av_freep(&fc->arg);
+
+ av_free(data);
+}
static int sub2video_get_blank_frame(InputFilterPriv *ifp)
{
@@ -574,6 +648,59 @@ static int ifilter_has_all_input_formats(FilterGraph *fg)
return 1;
}
+static void *filter_thread(void *arg);
+
+// start the filtering thread once all inputs and outputs are bound
+static int fg_thread_try_start(FilterGraphPriv *fgp)
+{
+ FilterGraph *fg = &fgp->fg;
+ ObjPool *op;
+ int ret = 0;
+
+ if (fgp->nb_inputs_bound < fg->nb_inputs ||
+ fgp->nb_outputs_bound < fg->nb_outputs)
+ return 0;
+
+ op = objpool_alloc_frames();
+ if (!op)
+ return AVERROR(ENOMEM);
+
+ fgp->queue_in = tq_alloc(fg->nb_inputs + 1, 1, op, frame_move);
+ if (!fgp->queue_in) {
+ objpool_free(&op);
+ return AVERROR(ENOMEM);
+ }
+
+ // at least one output is mandatory
+ op = objpool_alloc_frames();
+ if (!op)
+ goto fail;
+
+ fgp->queue_out = tq_alloc(fg->nb_outputs + 1, 1, op, frame_move);
+ if (!fgp->queue_out) {
+ objpool_free(&op);
+ goto fail;
+ }
+
+ ret = pthread_create(&fgp->thread, NULL, filter_thread, fgp);
+ if (ret) {
+ ret = AVERROR(ret);
+ av_log(NULL, AV_LOG_ERROR, "pthread_create() for filtergraph %d failed: %s\n",
+ fg->index, av_err2str(ret));
+ goto fail;
+ }
+
+ return 0;
+fail:
+ if (ret >= 0)
+ ret = AVERROR(ENOMEM);
+
+ tq_free(&fgp->queue_in);
+ tq_free(&fgp->queue_out);
+
+ return ret;
+}
+
static char *describe_filter_link(FilterGraph *fg, AVFilterInOut *inout, int in)
{
AVFilterContext *ctx = inout->filter_ctx;
@@ -607,6 +734,7 @@ static OutputFilter *ofilter_alloc(FilterGraph *fg)
static int ifilter_bind_ist(InputFilter *ifilter, InputStream *ist)
{
InputFilterPriv *ifp = ifp_from_ifilter(ifilter);
+ FilterGraphPriv *fgp = fgp_from_fg(ifilter->graph);
int ret;
av_assert0(!ifp->ist);
@@ -624,7 +752,10 @@ static int ifilter_bind_ist(InputFilter *ifilter, InputStream *ist)
return AVERROR(ENOMEM);
}
- return 0;
+ fgp->nb_inputs_bound++;
+ av_assert0(fgp->nb_inputs_bound <= ifilter->graph->nb_inputs);
+
+ return fg_thread_try_start(fgp);
}
static int set_channel_layout(OutputFilterPriv *f, OutputStream *ost)
@@ -756,24 +887,10 @@ int ofilter_bind_ost(OutputFilter *ofilter, OutputStream *ost)
break;
}
- // if we have all input parameters and all outputs are bound,
- // the graph can now be configured
- if (ifilter_has_all_input_formats(fg)) {
- int ret;
+ fgp->nb_outputs_bound++;
+ av_assert0(fgp->nb_outputs_bound <= fg->nb_outputs);
- for (int i = 0; i < fg->nb_outputs; i++)
- if (!fg->outputs[i]->ost)
- return 0;
-
- ret = configure_filtergraph(fg);
- if (ret < 0) {
- av_log(fg, AV_LOG_ERROR, "Error configuring filter graph: %s\n",
- av_err2str(ret));
- return ret;
- }
- }
-
- return 0;
+ return fg_thread_try_start(fgp);
}
static InputFilter *ifilter_alloc(FilterGraph *fg)
@@ -803,6 +920,34 @@ static InputFilter *ifilter_alloc(FilterGraph *fg)
return ifilter;
}
+static int fg_thread_stop(FilterGraphPriv *fgp)
+{
+ void *ret;
+
+ if (!fgp->queue_in)
+ return 0;
+
+ for (int i = 0; i <= fgp->fg.nb_inputs; i++) {
+ InputFilterPriv *ifp = i < fgp->fg.nb_inputs ?
+ ifp_from_ifilter(fgp->fg.inputs[i]) : NULL;
+
+ if (ifp)
+ ifp->eof = 1;
+
+ tq_send_finish(fgp->queue_in, i);
+ }
+
+ for (int i = 0; i <= fgp->fg.nb_outputs; i++)
+ tq_receive_finish(fgp->queue_out, i);
+
+ pthread_join(fgp->thread, &ret);
+
+ tq_free(&fgp->queue_in);
+ tq_free(&fgp->queue_out);
+
+ return (int)(intptr_t)ret;
+}
+
void fg_free(FilterGraph **pfg)
{
FilterGraph *fg = *pfg;
@@ -812,6 +957,8 @@ void fg_free(FilterGraph **pfg)
return;
fgp = fgp_from_fg(fg);
+ fg_thread_stop(fgp);
+
avfilter_graph_free(&fg->graph);
for (int j = 0; j < fg->nb_inputs; j++) {
InputFilter *ifilter = fg->inputs[j];
@@ -1622,7 +1769,7 @@ static int graph_is_meta(AVFilterGraph *graph)
return 1;
}
-static int configure_filtergraph(FilterGraph *fg)
+static int configure_filtergraph(FilterGraph *fg, const FilterGraphThread *fgt)
{
FilterGraphPriv *fgp = fgp_from_fg(fg);
AVBufferRef *hw_device;
@@ -1746,7 +1893,7 @@ static int configure_filtergraph(FilterGraph *fg)
/* send the EOFs for the finished inputs */
for (i = 0; i < fg->nb_inputs; i++) {
InputFilterPriv *ifp = ifp_from_ifilter(fg->inputs[i]);
- if (ifp->eof) {
+ if (fgt->eof_in[i]) {
ret = av_buffersrc_add_frame(ifp->filter, NULL);
if (ret < 0)
goto fail;
@@ -1829,8 +1976,8 @@ int filtergraph_is_simple(const FilterGraph *fg)
return fgp->is_simple;
}
-void fg_send_command(FilterGraph *fg, double time, const char *target,
- const char *command, const char *arg, int all_filters)
+static void send_command(FilterGraph *fg, double time, const char *target,
+ const char *command, const char *arg, int all_filters)
{
int ret;
@@ -1853,6 +2000,29 @@ void fg_send_command(FilterGraph *fg, double time, const char *target,
}
}
+static int choose_input(const FilterGraph *fg, const FilterGraphThread *fgt)
+{
+ int nb_requests, nb_requests_max = 0;
+ int best_input = -1;
+
+ for (int i = 0; i < fg->nb_inputs; i++) {
+ InputFilter *ifilter = fg->inputs[i];
+ InputFilterPriv *ifp = ifp_from_ifilter(ifilter);
+ InputStream *ist = ifp->ist;
+
+ if (input_files[ist->file_index]->eagain || fgt->eof_in[i])
+ continue;
+
+ nb_requests = av_buffersrc_get_nb_failed_requests(ifp->filter);
+ if (nb_requests > nb_requests_max) {
+ nb_requests_max = nb_requests;
+ best_input = i;
+ }
+ }
+
+ return best_input;
+}
+
static int choose_out_timebase(OutputFilterPriv *ofp, AVFrame *frame)
{
OutputFilter *ofilter = &ofp->ofilter;
@@ -2088,16 +2258,16 @@ finish:
fps->dropped_keyframe |= fps->last_dropped && (frame->flags & AV_FRAME_FLAG_KEY);
}
-static int fg_output_frame(OutputFilterPriv *ofp, AVFrame *frame)
+static int fg_output_frame(OutputFilterPriv *ofp, FilterGraphThread *fgt,
+ AVFrame *frame, int buffer)
{
FilterGraphPriv *fgp = fgp_from_fg(ofp->ofilter.graph);
- OutputStream *ost = ofp->ofilter.ost;
AVFrame *frame_prev = ofp->fps.last_frame;
enum AVMediaType type = ofp->ofilter.type;
- int64_t nb_frames = 1, nb_frames_prev = 0;
+ int64_t nb_frames = !!frame, nb_frames_prev = 0;
- if (type == AVMEDIA_TYPE_VIDEO)
+ if (type == AVMEDIA_TYPE_VIDEO && (frame || fgt->got_frame))
video_sync_process(ofp, frame, &nb_frames, &nb_frames_prev);
for (int64_t i = 0; i < nb_frames; i++) {
@@ -2136,10 +2306,31 @@ static int fg_output_frame(OutputFilterPriv *ofp, AVFrame *frame)
frame_out = frame;
}
- ret = enc_frame(ost, frame_out);
- av_frame_unref(frame_out);
- if (ret < 0)
- return ret;
+ if (buffer) {
+ AVFrame *f = av_frame_alloc();
+
+ if (!f) {
+ av_frame_unref(frame_out);
+ return AVERROR(ENOMEM);
+ }
+
+ av_frame_move_ref(f, frame_out);
+ f->opaque = (void*)(intptr_t)ofp->index;
+
+ ret = av_fifo_write(fgt->frame_queue_out, &f, 1);
+ if (ret < 0) {
+ av_frame_free(&f);
+ return AVERROR(ENOMEM);
+ }
+ } else {
+ // return the frame to the main thread
+ ret = tq_send(fgp->queue_out, ofp->index, frame_out);
+ if (ret < 0) {
+ av_frame_unref(frame_out);
+ fgt->eof_out[ofp->index] = 1;
+ return ret == AVERROR_EOF ? 0 : ret;
+ }
+ }
if (type == AVMEDIA_TYPE_VIDEO) {
ofp->fps.frame_number++;
@@ -2149,7 +2340,7 @@ static int fg_output_frame(OutputFilterPriv *ofp, AVFrame *frame)
frame->flags &= ~AV_FRAME_FLAG_KEY;
}
- ofp->got_frame = 1;
+ fgt->got_frame = 1;
}
if (frame && frame_prev) {
@@ -2157,23 +2348,27 @@ static int fg_output_frame(OutputFilterPriv *ofp, AVFrame *frame)
av_frame_move_ref(frame_prev, frame);
}
+ if (!frame) {
+ tq_send_finish(fgp->queue_out, ofp->index);
+ fgt->eof_out[ofp->index] = 1;
+ }
+
return 0;
}
-static int fg_output_step(OutputFilterPriv *ofp, int flush)
+static int fg_output_step(OutputFilterPriv *ofp, FilterGraphThread *fgt,
+ AVFrame *frame, int buffer)
{
FilterGraphPriv *fgp = fgp_from_fg(ofp->ofilter.graph);
OutputStream *ost = ofp->ofilter.ost;
- AVFrame *frame = fgp->frame;
AVFilterContext *filter = ofp->filter;
FrameData *fd;
int ret;
ret = av_buffersink_get_frame_flags(filter, frame,
AV_BUFFERSINK_FLAG_NO_REQUEST);
- if (flush && ret == AVERROR_EOF && ofp->got_frame &&
- ost->type == AVMEDIA_TYPE_VIDEO) {
- ret = fg_output_frame(ofp, NULL);
+ if (ret == AVERROR_EOF && !buffer && !fgt->eof_out[ofp->index]) {
+ ret = fg_output_frame(ofp, fgt, NULL, buffer);
return (ret < 0) ? ret : 1;
} else if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
return 1;
@@ -2183,22 +2378,18 @@ static int fg_output_step(OutputFilterPriv *ofp, int flush)
av_err2str(ret));
return ret;
}
- if (ost->finished) {
+
+ if (fgt->eof_out[ofp->index]) {
av_frame_unref(frame);
return 0;
}
frame->time_base = av_buffersink_get_time_base(filter);
- if (frame->pts != AV_NOPTS_VALUE) {
- ost->filter->last_pts = av_rescale_q(frame->pts, frame->time_base,
- AV_TIME_BASE_Q);
-
- if (debug_ts)
- av_log(fgp, AV_LOG_INFO, "filter_raw -> pts:%s pts_time:%s time_base:%d/%d\n",
- av_ts2str(frame->pts), av_ts2timestr(frame->pts, &frame->time_base),
- frame->time_base.num, frame->time_base.den);
- }
+ if (debug_ts)
+ av_log(fgp, AV_LOG_INFO, "filter_raw -> pts:%s pts_time:%s time_base:%d/%d\n",
+ av_ts2str(frame->pts), av_ts2timestr(frame->pts, &frame->time_base),
+ frame->time_base.num, frame->time_base.den);
// Choose the output timebase the first time we get a frame.
if (!ofp->tb_out_locked) {
@@ -2231,7 +2422,7 @@ static int fg_output_step(OutputFilterPriv *ofp, int flush)
fd->frame_rate_filter = ofp->fps.framerate;
}
- ret = fg_output_frame(ofp, frame);
+ ret = fg_output_frame(ofp, fgt, frame, buffer);
av_frame_unref(frame);
if (ret < 0)
return ret;
@@ -2239,18 +2430,38 @@ static int fg_output_step(OutputFilterPriv *ofp, int flush)
return 0;
}
-int reap_filters(FilterGraph *fg, int flush)
+/* retrieve all frames available at filtergraph outputs and either send them to
+ * the main thread (buffer=0) or buffer them for later (buffer=1) */
+static int read_frames(FilterGraph *fg, FilterGraphThread *fgt,
+ AVFrame *frame, int buffer)
{
+ FilterGraphPriv *fgp = fgp_from_fg(fg);
+ int ret = 0;
+
if (!fg->graph)
return 0;
+ // process buffered frames
+ if (!buffer) {
+ AVFrame *f;
+
+ while (av_fifo_read(fgt->frame_queue_out, &f, 1) >= 0) {
+ int out_idx = (intptr_t)f->opaque;
+ f->opaque = NULL;
+ ret = tq_send(fgp->queue_out, out_idx, f);
+ av_frame_free(&f);
+ if (ret < 0 && ret != AVERROR_EOF)
+ return ret;
+ }
+ }
+
/* Reap all buffers present in the buffer sinks */
for (int i = 0; i < fg->nb_outputs; i++) {
OutputFilterPriv *ofp = ofp_from_ofilter(fg->outputs[i]);
int ret = 0;
while (!ret) {
- ret = fg_output_step(ofp, flush);
+ ret = fg_output_step(ofp, fgt, frame, buffer);
if (ret < 0)
return ret;
}
@@ -2259,7 +2470,7 @@ int reap_filters(FilterGraph *fg, int flush)
return 0;
}
-void ifilter_sub2video_heartbeat(InputFilter *ifilter, int64_t pts, AVRational tb)
+static void sub2video_heartbeat(InputFilter *ifilter, int64_t pts, AVRational tb)
{
InputFilterPriv *ifp = ifp_from_ifilter(ifilter);
int64_t pts2;
@@ -2284,11 +2495,17 @@ void ifilter_sub2video_heartbeat(InputFilter *ifilter, int64_t pts, AVRational t
sub2video_push_ref(ifp, pts2);
}
-int ifilter_sub2video(InputFilter *ifilter, const AVFrame *frame)
+static int sub2video_frame(InputFilter *ifilter, const AVFrame *frame)
{
InputFilterPriv *ifp = ifp_from_ifilter(ifilter);
int ret;
+ // heartbeat frame
+ if (frame && !frame->buf[0]) {
+ sub2video_heartbeat(ifilter, frame->pts, frame->time_base);
+ return 0;
+ }
+
if (ifilter->graph->graph) {
if (!frame) {
if (ifp->sub2video.end_pts < INT64_MAX)
@@ -2317,12 +2534,13 @@ int ifilter_sub2video(InputFilter *ifilter, const AVFrame *frame)
return 0;
}
-int ifilter_send_eof(InputFilter *ifilter, int64_t pts, AVRational tb)
+static int send_eof(FilterGraphThread *fgt, InputFilter *ifilter,
+ int64_t pts, AVRational tb)
{
InputFilterPriv *ifp = ifp_from_ifilter(ifilter);
int ret;
- ifp->eof = 1;
+ fgt->eof_in[ifp->index] = 1;
if (ifp->filter) {
pts = av_rescale_q_rnd(pts, tb, ifp->time_base,
@@ -2346,7 +2564,7 @@ int ifilter_send_eof(InputFilter *ifilter, int64_t pts, AVRational tb)
return ret;
if (ifilter_has_all_input_formats(ifilter->graph)) {
- ret = configure_filtergraph(ifilter->graph);
+ ret = configure_filtergraph(ifilter->graph, fgt);
if (ret < 0) {
av_log(NULL, AV_LOG_ERROR, "Error initializing filters!\n");
return ret;
@@ -2365,10 +2583,10 @@ int ifilter_send_eof(InputFilter *ifilter, int64_t pts, AVRational tb)
return 0;
}
-int ifilter_send_frame(InputFilter *ifilter, AVFrame *frame, int keep_reference)
+static int send_frame(FilterGraph *fg, FilterGraphThread *fgt,
+ InputFilter *ifilter, AVFrame *frame)
{
InputFilterPriv *ifp = ifp_from_ifilter(ifilter);
- FilterGraph *fg = ifilter->graph;
AVFrameSideData *sd;
int need_reinit, ret;
@@ -2408,10 +2626,13 @@ int ifilter_send_frame(InputFilter *ifilter, AVFrame *frame, int keep_reference)
/* (re)init the graph if possible, otherwise buffer the frame and return */
if (need_reinit || !fg->graph) {
+ AVFrame *tmp = av_frame_alloc();
+
+ if (!tmp)
+ return AVERROR(ENOMEM);
+
if (!ifilter_has_all_input_formats(fg)) {
- AVFrame *tmp = av_frame_clone(frame);
- if (!tmp)
- return AVERROR(ENOMEM);
+ av_frame_move_ref(tmp, frame);
ret = av_fifo_write(ifp->frame_queue, &tmp, 1);
if (ret < 0)
@@ -2420,27 +2641,18 @@ int ifilter_send_frame(InputFilter *ifilter, AVFrame *frame, int keep_reference)
return ret;
}
- ret = reap_filters(fg, 0);
- if (ret < 0 && ret != AVERROR_EOF) {
- av_log(fg, AV_LOG_ERROR, "Error while filtering: %s\n", av_err2str(ret));
+ ret = fg->graph ? read_frames(fg, fgt, tmp, 1) : 0;
+ av_frame_free(&tmp);
+ if (ret < 0)
return ret;
- }
- ret = configure_filtergraph(fg);
+ ret = configure_filtergraph(fg, fgt);
if (ret < 0) {
av_log(fg, AV_LOG_ERROR, "Error reinitializing filters!\n");
return ret;
}
}
- if (keep_reference) {
- ret = av_frame_ref(ifp->frame, frame);
- if (ret < 0)
- return ret;
- } else
- av_frame_move_ref(ifp->frame, frame);
- frame = ifp->frame;
-
frame->pts = av_rescale_q(frame->pts, frame->time_base, ifp->time_base);
frame->duration = av_rescale_q(frame->duration, frame->time_base, ifp->time_base);
frame->time_base = ifp->time_base;
@@ -2462,20 +2674,32 @@ int ifilter_send_frame(InputFilter *ifilter, AVFrame *frame, int keep_reference)
return 0;
}
-int fg_transcode_step(FilterGraph *graph, InputStream **best_ist)
+static int msg_process(FilterGraphPriv *fgp, FilterGraphThread *fgt,
+ AVFrame *frame)
{
- FilterGraphPriv *fgp = fgp_from_fg(graph);
- int i, ret;
- int nb_requests, nb_requests_max = 0;
- InputStream *ist;
+ const enum FrameOpaque msg = (intptr_t)frame->opaque;
+ FilterGraph *fg = &fgp->fg;
+ int graph_eof = 0;
+ int ret;
- if (!graph->graph) {
- for (int i = 0; i < graph->nb_inputs; i++) {
- InputFilter *ifilter = graph->inputs[i];
+ frame->opaque = NULL;
+ av_assert0(msg > 0);
+ av_assert0(msg == FRAME_OPAQUE_SEND_COMMAND || !frame->buf[0]);
+
+ if (!fg->graph) {
+ // graph not configured yet, ignore all messages other than choosing
+ // the input to read from
+ if (msg != FRAME_OPAQUE_CHOOSE_INPUT) {
+ av_frame_unref(frame);
+ goto done;
+ }
+
+ for (int i = 0; i < fg->nb_inputs; i++) {
+ InputFilter *ifilter = fg->inputs[i];
InputFilterPriv *ifp = ifp_from_ifilter(ifilter);
- if (ifp->format < 0 && !ifp->eof) {
- *best_ist = ifp->ist;
- return 0;
+ if (ifp->format < 0 && !fgt->eof_in[i]) {
+ frame->opaque = (void*)(intptr_t)(i + 1);
+ goto done;
}
}
@@ -2486,16 +2710,310 @@ int fg_transcode_step(FilterGraph *graph, InputStream **best_ist)
return AVERROR_BUG;
}
- *best_ist = NULL;
- ret = avfilter_graph_request_oldest(graph->graph);
- if (ret >= 0)
- return reap_filters(graph, 0);
+ if (msg == FRAME_OPAQUE_SEND_COMMAND) {
+ FilterCommand *fc = (FilterCommand*)frame->buf[0]->data;
+ send_command(fg, fc->time, fc->target, fc->command, fc->arg, fc->all_filters);
+ av_frame_unref(frame);
+ goto done;
+ }
- if (ret == AVERROR_EOF) {
- reap_filters(graph, 1);
- for (int i = 0; i < graph->nb_outputs; i++) {
- OutputFilter *ofilter = graph->outputs[i];
- OutputFilterPriv *ofp = ofp_from_ofilter(ofilter);
+ if (msg == FRAME_OPAQUE_CHOOSE_INPUT) {
+ ret = avfilter_graph_request_oldest(fg->graph);
+
+ graph_eof = ret == AVERROR_EOF;
+
+ if (ret == AVERROR(EAGAIN)) {
+ frame->opaque = (void*)(intptr_t)(choose_input(fg, fgt) + 1);
+ goto done;
+ } else if (ret < 0 && !graph_eof)
+ return ret;
+ }
+
+ ret = read_frames(fg, fgt, frame, 0);
+ if (ret < 0) {
+ av_log(fg, AV_LOG_ERROR, "Error sending filtered frames for encoding\n");
+ return ret;
+ }
+
+ if (graph_eof)
+ return AVERROR_EOF;
+
+ // signal to the main thread that we are done processing the message
+done:
+ ret = tq_send(fgp->queue_out, fg->nb_outputs, frame);
+ if (ret < 0) {
+ if (ret != AVERROR_EOF)
+ av_log(fg, AV_LOG_ERROR, "Error communicating with the main thread\n");
+ return ret;
+ }
+
+ return 0;
+}
+
+static void fg_thread_set_name(const FilterGraph *fg)
+{
+ char name[16];
+ if (filtergraph_is_simple(fg)) {
+ OutputStream *ost = fg->outputs[0]->ost;
+ snprintf(name, sizeof(name), "%cf#%d:%d",
+ av_get_media_type_string(ost->type)[0],
+ ost->file_index, ost->index);
+ } else {
+ snprintf(name, sizeof(name), "fc%d", fg->index);
+ }
+
+ ff_thread_setname(name);
+}
+
+static void fg_thread_uninit(FilterGraphThread *fgt)
+{
+ if (fgt->frame_queue_out) {
+ AVFrame *frame;
+ while (av_fifo_read(fgt->frame_queue_out, &frame, 1) >= 0)
+ av_frame_free(&frame);
+ av_fifo_freep2(&fgt->frame_queue_out);
+ }
+
+ av_frame_free(&fgt->frame);
+ av_freep(&fgt->eof_in);
+ av_freep(&fgt->eof_out);
+
+ memset(fgt, 0, sizeof(*fgt));
+}
+
+static int fg_thread_init(FilterGraphThread *fgt, const FilterGraph *fg)
+{
+ memset(fgt, 0, sizeof(*fgt));
+
+ fgt->frame = av_frame_alloc();
+ if (!fgt->frame)
+ goto fail;
+
+ fgt->eof_in = av_calloc(fg->nb_inputs, sizeof(*fgt->eof_in));
+ if (!fgt->eof_in)
+ goto fail;
+
+ fgt->eof_out = av_calloc(fg->nb_outputs, sizeof(*fgt->eof_out));
+ if (!fgt->eof_out)
+ goto fail;
+
+ fgt->frame_queue_out = av_fifo_alloc2(1, sizeof(AVFrame*), AV_FIFO_FLAG_AUTO_GROW);
+ if (!fgt->frame_queue_out)
+ goto fail;
+
+ return 0;
+
+fail:
+ fg_thread_uninit(fgt);
+ return AVERROR(ENOMEM);
+}
+
+static void *filter_thread(void *arg)
+{
+ FilterGraphPriv *fgp = arg;
+ FilterGraph *fg = &fgp->fg;
+
+ FilterGraphThread fgt;
+ int ret = 0, input_status = 0;
+
+ ret = fg_thread_init(&fgt, fg);
+ if (ret < 0)
+ goto finish;
+
+ fg_thread_set_name(fg);
+
+ // if we have all input parameters the graph can now be configured
+ if (ifilter_has_all_input_formats(fg)) {
+ ret = configure_filtergraph(fg, &fgt);
+ if (ret < 0) {
+ av_log(fg, AV_LOG_ERROR, "Error configuring filter graph: %s\n",
+ av_err2str(ret));
+ goto finish;
+ }
+ }
+
+ while (1) {
+ InputFilter *ifilter;
+ InputFilterPriv *ifp;
+ enum FrameOpaque o;
+ int input_idx, eof_frame;
+
+ input_status = tq_receive(fgp->queue_in, &input_idx, fgt.frame);
+ if (input_idx < 0 ||
+ (input_idx == fg->nb_inputs && input_status < 0)) {
+ av_log(fg, AV_LOG_VERBOSE, "Filtering thread received EOF\n");
+ break;
+ }
+
+ o = (intptr_t)fgt.frame->opaque;
+
+ // message on the control stream
+ if (input_idx == fg->nb_inputs) {
+ ret = msg_process(fgp, &fgt, fgt.frame);
+ if (ret < 0)
+ goto finish;
+
+ continue;
+ }
+
+ // we received an input frame or EOF
+ ifilter = fg->inputs[input_idx];
+ ifp = ifp_from_ifilter(ifilter);
+ eof_frame = input_status >= 0 && o == FRAME_OPAQUE_EOF;
+ if (ifp->type_src == AVMEDIA_TYPE_SUBTITLE) {
+ int hb_frame = input_status >= 0 && o == FRAME_OPAQUE_SUB_HEARTBEAT;
+ ret = sub2video_frame(ifilter, (fgt.frame->buf[0] || hb_frame) ? fgt.frame : NULL);
+ } else if (input_status >= 0 && fgt.frame->buf[0]) {
+ ret = send_frame(fg, &fgt, ifilter, fgt.frame);
+ } else {
+ int64_t pts = input_status >= 0 ? fgt.frame->pts : AV_NOPTS_VALUE;
+ AVRational tb = input_status >= 0 ? fgt.frame->time_base : (AVRational){ 1, 1 };
+ ret = send_eof(&fgt, ifilter, pts, tb);
+ }
+ av_frame_unref(fgt.frame);
+ if (ret < 0)
+ break;
+
+ if (eof_frame) {
+ // an EOF frame is immediately followed by sender closing
+ // the corresponding stream, so retrieve that event
+ input_status = tq_receive(fgp->queue_in, &input_idx, fgt.frame);
+ av_assert0(input_status == AVERROR_EOF && input_idx == ifp->index);
+ }
+
+ // signal to the main thread that we are done
+ ret = tq_send(fgp->queue_out, fg->nb_outputs, fgt.frame);
+ if (ret < 0) {
+ if (ret == AVERROR_EOF)
+ break;
+
+ av_log(fg, AV_LOG_ERROR, "Error communicating with the main thread\n");
+ goto finish;
+ }
+ }
+
+finish:
+ // EOF is normal termination
+ if (ret == AVERROR_EOF)
+ ret = 0;
+
+ for (int i = 0; i <= fg->nb_inputs; i++)
+ tq_receive_finish(fgp->queue_in, i);
+ for (int i = 0; i <= fg->nb_outputs; i++)
+ tq_send_finish(fgp->queue_out, i);
+
+ fg_thread_uninit(&fgt);
+
+ av_log(fg, AV_LOG_VERBOSE, "Terminating filtering thread\n");
+
+ return (void*)(intptr_t)ret;
+}
+
+static int thread_send_frame(FilterGraphPriv *fgp, InputFilter *ifilter,
+ AVFrame *frame, enum FrameOpaque type)
+{
+ InputFilterPriv *ifp = ifp_from_ifilter(ifilter);
+ int output_idx, ret;
+
+ if (ifp->eof) {
+ av_frame_unref(frame);
+ return AVERROR_EOF;
+ }
+
+ frame->opaque = (void*)(intptr_t)type;
+
+ ret = tq_send(fgp->queue_in, ifp->index, frame);
+ if (ret < 0) {
+ ifp->eof = 1;
+ av_frame_unref(frame);
+ return ret;
+ }
+
+ if (type == FRAME_OPAQUE_EOF)
+ tq_send_finish(fgp->queue_in, ifp->index);
+
+ // wait for the frame to be processed
+ ret = tq_receive(fgp->queue_out, &output_idx, frame);
+ av_assert0(output_idx == fgp->fg.nb_outputs || ret == AVERROR_EOF);
+
+ return ret;
+}
+
+int ifilter_send_frame(InputFilter *ifilter, AVFrame *frame, int keep_reference)
+{
+ FilterGraphPriv *fgp = fgp_from_fg(ifilter->graph);
+ int ret;
+
+ if (keep_reference) {
+ ret = av_frame_ref(fgp->frame, frame);
+ if (ret < 0)
+ return ret;
+ } else
+ av_frame_move_ref(fgp->frame, frame);
+
+ return thread_send_frame(fgp, ifilter, fgp->frame, 0);
+}
+
+int ifilter_send_eof(InputFilter *ifilter, int64_t pts, AVRational tb)
+{
+ FilterGraphPriv *fgp = fgp_from_fg(ifilter->graph);
+ int ret;
+
+ fgp->frame->pts = pts;
+ fgp->frame->time_base = tb;
+
+ ret = thread_send_frame(fgp, ifilter, fgp->frame, FRAME_OPAQUE_EOF);
+
+ return ret == AVERROR_EOF ? 0 : ret;
+}
+
+void ifilter_sub2video_heartbeat(InputFilter *ifilter, int64_t pts, AVRational tb)
+{
+ FilterGraphPriv *fgp = fgp_from_fg(ifilter->graph);
+
+ fgp->frame->pts = pts;
+ fgp->frame->time_base = tb;
+
+ thread_send_frame(fgp, ifilter, fgp->frame, FRAME_OPAQUE_SUB_HEARTBEAT);
+}
+
+int fg_transcode_step(FilterGraph *graph, InputStream **best_ist)
+{
+ FilterGraphPriv *fgp = fgp_from_fg(graph);
+ int ret, got_frames = 0;
+
+ if (fgp->eof_in)
+ return AVERROR_EOF;
+
+ // signal to the filtering thread to return all frames it can
+ av_assert0(!fgp->frame->buf[0]);
+ fgp->frame->opaque = (void*)(intptr_t)(best_ist ?
+ FRAME_OPAQUE_CHOOSE_INPUT :
+ FRAME_OPAQUE_REAP_FILTERS);
+
+ ret = tq_send(fgp->queue_in, graph->nb_inputs, fgp->frame);
+ if (ret < 0) {
+ fgp->eof_in = 1;
+ goto finish;
+ }
+
+ while (1) {
+ OutputFilter *ofilter;
+ OutputFilterPriv *ofp;
+ OutputStream *ost;
+ int output_idx;
+
+ ret = tq_receive(fgp->queue_out, &output_idx, fgp->frame);
+
+ // EOF on the whole queue or the control stream
+ if (output_idx < 0 ||
+ (ret < 0 && output_idx == graph->nb_outputs))
+ goto finish;
+
+ // EOF for a specific stream
+ if (ret < 0) {
+ ofilter = graph->outputs[output_idx];
+ ofp = ofp_from_ofilter(ofilter);
// we are finished and no frames were ever seen at this output,
// at least initialize the encoder with a dummy frame
@@ -2533,30 +3051,111 @@ int fg_transcode_step(FilterGraph *graph, InputStream **best_ist)
av_frame_unref(frame);
}
- close_output_stream(ofilter->ost);
- }
- return 0;
- }
- if (ret != AVERROR(EAGAIN))
- return ret;
-
- for (i = 0; i < graph->nb_inputs; i++) {
- InputFilter *ifilter = graph->inputs[i];
- InputFilterPriv *ifp = ifp_from_ifilter(ifilter);
-
- ist = ifp->ist;
- if (input_files[ist->file_index]->eagain || ifp->eof)
+ close_output_stream(graph->outputs[output_idx]->ost);
continue;
- nb_requests = av_buffersrc_get_nb_failed_requests(ifp->filter);
- if (nb_requests > nb_requests_max) {
- nb_requests_max = nb_requests;
- *best_ist = ist;
}
+
+ // request was fully processed by the filtering thread,
+ // return the input stream to read from, if needed
+ if (output_idx == graph->nb_outputs) {
+ int input_idx = (intptr_t)fgp->frame->opaque - 1;
+ av_assert0(input_idx <= graph->nb_inputs);
+
+ if (best_ist) {
+ *best_ist = (input_idx >= 0 && input_idx < graph->nb_inputs) ?
+ ifp_from_ifilter(graph->inputs[input_idx])->ist : NULL;
+
+ if (input_idx < 0 && !got_frames) {
+ for (int i = 0; i < graph->nb_outputs; i++)
+ graph->outputs[i]->ost->unavailable = 1;
+ }
+ }
+ break;
+ }
+
+ // got a frame from the filtering thread, send it for encoding
+ ofilter = graph->outputs[output_idx];
+ ost = ofilter->ost;
+ ofp = ofp_from_ofilter(ofilter);
+
+ if (ost->finished) {
+ av_frame_unref(fgp->frame);
+ tq_receive_finish(fgp->queue_out, output_idx);
+ continue;
+ }
+
+ if (fgp->frame->pts != AV_NOPTS_VALUE) {
+ ofilter->last_pts = av_rescale_q(fgp->frame->pts,
+ fgp->frame->time_base,
+ AV_TIME_BASE_Q);
+ }
+
+ ret = enc_frame(ost, fgp->frame);
+ av_frame_unref(fgp->frame);
+ if (ret < 0)
+ goto finish;
+
+ ofp->got_frame = 1;
+ got_frames = 1;
}
- if (!*best_ist)
- for (i = 0; i < graph->nb_outputs; i++)
- graph->outputs[i]->ost->unavailable = 1;
+finish:
+ if (ret < 0) {
+ fgp->eof_in = 1;
+ for (int i = 0; i < graph->nb_outputs; i++)
+ close_output_stream(graph->outputs[i]->ost);
+ }
- return 0;
+ return ret;
+}
+
+int reap_filters(FilterGraph *fg, int flush)
+{
+ return fg_transcode_step(fg, NULL);
+}
+
+void fg_send_command(FilterGraph *fg, double time, const char *target,
+ const char *command, const char *arg, int all_filters)
+{
+ FilterGraphPriv *fgp = fgp_from_fg(fg);
+ AVBufferRef *buf;
+ FilterCommand *fc;
+ int output_idx, ret;
+
+ if (!fgp->queue_in)
+ return;
+
+ fc = av_mallocz(sizeof(*fc));
+ if (!fc)
+ return;
+
+ buf = av_buffer_create((uint8_t*)fc, sizeof(*fc), filter_command_free, NULL, 0);
+ if (!buf) {
+ av_freep(&fc);
+ return;
+ }
+
+ fc->target = av_strdup(target);
+ fc->command = av_strdup(command);
+ fc->arg = av_strdup(arg);
+ if (!fc->target || !fc->command || !fc->arg) {
+ av_buffer_unref(&buf);
+ return;
+ }
+
+ fc->time = time;
+ fc->all_filters = all_filters;
+
+ fgp->frame->buf[0] = buf;
+ fgp->frame->opaque = (void*)(intptr_t)FRAME_OPAQUE_SEND_COMMAND;
+
+ ret = tq_send(fgp->queue_in, fg->nb_inputs, fgp->frame);
+ if (ret < 0) {
+ av_frame_unref(fgp->frame);
+ return;
+ }
+
+ // wait for the frame to be processed
+ ret = tq_receive(fgp->queue_out, &output_idx, fgp->frame);
+ av_assert0(output_idx == fgp->fg.nb_outputs || ret == AVERROR_EOF);
}
--
2.42.0
* Re: [FFmpeg-devel] [PATCH 05/13] fftools/ffmpeg_filter: move filtering to a separate thread
2023-11-24 22:56 ` Michael Niedermayer
2023-11-25 20:18 ` [FFmpeg-devel] [PATCH 05/13 v2] " Anton Khirnov
@ 2023-11-25 20:23 ` James Almer
1 sibling, 0 replies; 49+ messages in thread
From: James Almer @ 2023-11-25 20:23 UTC (permalink / raw)
To: ffmpeg-devel
On 11/24/2023 7:56 PM, Michael Niedermayer wrote:
> On Thu, Nov 23, 2023 at 08:15:00PM +0100, Anton Khirnov wrote:
>> As previously for decoding, this is merely "scaffolding" for moving to a
>> fully threaded architecture and does not yet make filtering truly
>> parallel - the main thread will currently wait for the filtering thread
>> to finish its work before continuing. That will change in future commits
>> after encoders are also moved to threads and a thread-aware scheduler is
>> added.
>> ---
>> fftools/ffmpeg.h | 9 +-
>> fftools/ffmpeg_dec.c | 39 +-
>> fftools/ffmpeg_filter.c | 825 ++++++++++++++++++++++++++++++++++------
>> 3 files changed, 730 insertions(+), 143 deletions(-)
>
> This seems to cause a new assertion failure with this:
>
> echo 'Call 0 ping' | ./ffmpeg -nostats -i mm-short.mpg -vf nullsink,color=green -bitexact -vframes 1 -f null -
That's an interesting way to trigger the filter command interrupt.
>
> Press [q] to stop, [?] for help
>
> Enter command: <target>|all <time>|-1 <command>[ <argument>]
>
> Assertion !fgp->frame->buf[0] failed at fftools/ffmpeg_filter.c:2987
> Aborted (core dumped)
>
> [...]
>
>
* [FFmpeg-devel] [PATCH 13/13 v2] fftools/ffmpeg: convert to a threaded architecture
2023-11-24 22:26 ` Michael Niedermayer
@ 2023-11-25 20:32 ` Anton Khirnov
2023-11-30 13:08 ` Michael Niedermayer
0 siblings, 1 reply; 49+ messages in thread
From: Anton Khirnov @ 2023-11-25 20:32 UTC (permalink / raw)
To: ffmpeg-devel
Change the main loop and every component (demuxers, decoders, filters,
encoders, muxers) to use the previously added transcode scheduler. Every
instance of every such component was already running in a separate
thread, but now they can actually run in parallel.
This changes the results of the ffmpeg-fix_sub_duration_heartbeat FATE test;
JEEB has tested the new output to be more correct and deterministic.
---
fftools/ffmpeg.c | 374 +--------
fftools/ffmpeg.h | 97 +--
fftools/ffmpeg_dec.c | 321 ++------
fftools/ffmpeg_demux.c | 268 ++++---
fftools/ffmpeg_enc.c | 368 ++-------
fftools/ffmpeg_filter.c | 722 +++++-------------
fftools/ffmpeg_mux.c | 324 ++------
fftools/ffmpeg_mux.h | 24 +-
fftools/ffmpeg_mux_init.c | 88 +--
fftools/ffmpeg_opt.c | 6 +-
.../fate/ffmpeg-fix_sub_duration_heartbeat | 36 +-
11 files changed, 598 insertions(+), 2030 deletions(-)
diff --git a/fftools/ffmpeg.c b/fftools/ffmpeg.c
index b8a97258a0..30b594fd97 100644
--- a/fftools/ffmpeg.c
+++ b/fftools/ffmpeg.c
@@ -117,7 +117,7 @@ typedef struct BenchmarkTimeStamps {
static BenchmarkTimeStamps get_benchmark_time_stamps(void);
static int64_t getmaxrss(void);
-unsigned nb_output_dumped = 0;
+atomic_uint nb_output_dumped = 0;
static BenchmarkTimeStamps current_time;
AVIOContext *progress_avio = NULL;
@@ -138,30 +138,6 @@ static struct termios oldtty;
static int restore_tty;
#endif
-/* sub2video hack:
- Convert subtitles to video with alpha to insert them in filter graphs.
- This is a temporary solution until libavfilter gets real subtitles support.
- */
-
-static void sub2video_heartbeat(InputFile *infile, int64_t pts, AVRational tb)
-{
- /* When a frame is read from a file, examine all sub2video streams in
- the same file and send the sub2video frame again. Otherwise, decoded
- video frames could be accumulating in the filter graph while a filter
- (possibly overlay) is desperately waiting for a subtitle frame. */
- for (int i = 0; i < infile->nb_streams; i++) {
- InputStream *ist = infile->streams[i];
-
- if (ist->dec_ctx->codec_type != AVMEDIA_TYPE_SUBTITLE)
- continue;
-
- for (int j = 0; j < ist->nb_filters; j++)
- ifilter_sub2video_heartbeat(ist->filters[j], pts, tb);
- }
-}
-
-/* end of sub2video hack */
-
static void term_exit_sigsafe(void)
{
#if HAVE_TERMIOS_H
@@ -499,23 +475,13 @@ void update_benchmark(const char *fmt, ...)
}
}
-void close_output_stream(OutputStream *ost)
-{
- OutputFile *of = output_files[ost->file_index];
- ost->finished |= ENCODER_FINISHED;
-
- if (ost->sq_idx_encode >= 0)
- sq_send(of->sq_encode, ost->sq_idx_encode, SQFRAME(NULL));
-}
-
-static void print_report(int is_last_report, int64_t timer_start, int64_t cur_time)
+static void print_report(int is_last_report, int64_t timer_start, int64_t cur_time, int64_t pts)
{
AVBPrint buf, buf_script;
int64_t total_size = of_filesize(output_files[0]);
int vid;
double bitrate;
double speed;
- int64_t pts = AV_NOPTS_VALUE;
static int64_t last_time = -1;
static int first_report = 1;
uint64_t nb_frames_dup = 0, nb_frames_drop = 0;
@@ -533,7 +499,7 @@ static void print_report(int is_last_report, int64_t timer_start, int64_t cur_ti
last_time = cur_time;
}
if (((cur_time - last_time) < stats_period && !first_report) ||
- (first_report && nb_output_dumped < nb_output_files))
+ (first_report && atomic_load(&nb_output_dumped) < nb_output_files))
return;
last_time = cur_time;
}
@@ -544,7 +510,7 @@ static void print_report(int is_last_report, int64_t timer_start, int64_t cur_ti
av_bprint_init(&buf, 0, AV_BPRINT_SIZE_AUTOMATIC);
av_bprint_init(&buf_script, 0, AV_BPRINT_SIZE_AUTOMATIC);
for (OutputStream *ost = ost_iter(NULL); ost; ost = ost_iter(ost)) {
- const float q = ost->enc ? ost->quality / (float) FF_QP2LAMBDA : -1;
+ const float q = ost->enc ? atomic_load(&ost->quality) / (float) FF_QP2LAMBDA : -1;
if (vid && ost->type == AVMEDIA_TYPE_VIDEO) {
av_bprintf(&buf, "q=%2.1f ", q);
@@ -565,22 +531,18 @@ static void print_report(int is_last_report, int64_t timer_start, int64_t cur_ti
if (is_last_report)
av_bprintf(&buf, "L");
- nb_frames_dup = ost->filter->nb_frames_dup;
- nb_frames_drop = ost->filter->nb_frames_drop;
+ nb_frames_dup = atomic_load(&ost->filter->nb_frames_dup);
+ nb_frames_drop = atomic_load(&ost->filter->nb_frames_drop);
vid = 1;
}
- /* compute min output value */
- if (ost->last_mux_dts != AV_NOPTS_VALUE) {
- if (pts == AV_NOPTS_VALUE || ost->last_mux_dts > pts)
- pts = ost->last_mux_dts;
- if (copy_ts) {
- if (copy_ts_first_pts == AV_NOPTS_VALUE && pts > 1)
- copy_ts_first_pts = pts;
- if (copy_ts_first_pts != AV_NOPTS_VALUE)
- pts -= copy_ts_first_pts;
- }
- }
+ }
+
+ if (copy_ts) {
+ if (copy_ts_first_pts == AV_NOPTS_VALUE && pts > 1)
+ copy_ts_first_pts = pts;
+ if (copy_ts_first_pts != AV_NOPTS_VALUE)
+ pts -= copy_ts_first_pts;
}
us = FFABS64U(pts) % AV_TIME_BASE;
@@ -783,81 +745,6 @@ int subtitle_wrap_frame(AVFrame *frame, AVSubtitle *subtitle, int copy)
return 0;
}
-int trigger_fix_sub_duration_heartbeat(OutputStream *ost, const AVPacket *pkt)
-{
- OutputFile *of = output_files[ost->file_index];
- int64_t signal_pts = av_rescale_q(pkt->pts, pkt->time_base,
- AV_TIME_BASE_Q);
-
- if (!ost->fix_sub_duration_heartbeat || !(pkt->flags & AV_PKT_FLAG_KEY))
- // we are only interested in heartbeats on streams configured, and
- // only on random access points.
- return 0;
-
- for (int i = 0; i < of->nb_streams; i++) {
- OutputStream *iter_ost = of->streams[i];
- InputStream *ist = iter_ost->ist;
- int ret = AVERROR_BUG;
-
- if (iter_ost == ost || !ist || !ist->decoding_needed ||
- ist->dec_ctx->codec_type != AVMEDIA_TYPE_SUBTITLE)
- // We wish to skip the stream that causes the heartbeat,
- // output streams without an input stream, streams not decoded
- // (as fix_sub_duration is only done for decoded subtitles) as
- // well as non-subtitle streams.
- continue;
-
- if ((ret = fix_sub_duration_heartbeat(ist, signal_pts)) < 0)
- return ret;
- }
-
- return 0;
-}
-
-/* pkt = NULL means EOF (needed to flush decoder buffers) */
-static int process_input_packet(InputStream *ist, const AVPacket *pkt, int no_eof)
-{
- InputFile *f = input_files[ist->file_index];
- int64_t dts_est = AV_NOPTS_VALUE;
- int ret = 0;
- int eof_reached = 0;
-
- if (ist->decoding_needed) {
- ret = dec_packet(ist, pkt, no_eof);
- if (ret < 0 && ret != AVERROR_EOF)
- return ret;
- }
- if (ret == AVERROR_EOF || (!pkt && !ist->decoding_needed))
- eof_reached = 1;
-
- if (pkt && pkt->opaque_ref) {
- DemuxPktData *pd = (DemuxPktData*)pkt->opaque_ref->data;
- dts_est = pd->dts_est;
- }
-
- if (f->recording_time != INT64_MAX) {
- int64_t start_time = 0;
- if (copy_ts) {
- start_time += f->start_time != AV_NOPTS_VALUE ? f->start_time : 0;
- start_time += start_at_zero ? 0 : f->start_time_effective;
- }
- if (dts_est >= f->recording_time + start_time)
- pkt = NULL;
- }
-
- for (int oidx = 0; oidx < ist->nb_outputs; oidx++) {
- OutputStream *ost = ist->outputs[oidx];
- if (ost->enc || (!pkt && no_eof))
- continue;
-
- ret = of_streamcopy(ost, pkt, dts_est);
- if (ret < 0)
- return ret;
- }
-
- return !eof_reached;
-}
-
static void print_stream_maps(void)
{
av_log(NULL, AV_LOG_INFO, "Stream mapping:\n");
@@ -934,43 +821,6 @@ static void print_stream_maps(void)
}
}
-/**
- * Select the output stream to process.
- *
- * @retval 0 an output stream was selected
- * @retval AVERROR(EAGAIN) need to wait until more input is available
- * @retval AVERROR_EOF no more streams need output
- */
-static int choose_output(OutputStream **post)
-{
- int64_t opts_min = INT64_MAX;
- OutputStream *ost_min = NULL;
-
- for (OutputStream *ost = ost_iter(NULL); ost; ost = ost_iter(ost)) {
- int64_t opts;
-
- if (ost->filter && ost->filter->last_pts != AV_NOPTS_VALUE) {
- opts = ost->filter->last_pts;
- } else {
- opts = ost->last_mux_dts == AV_NOPTS_VALUE ?
- INT64_MIN : ost->last_mux_dts;
- }
-
- if (!ost->initialized && !ost->finished) {
- ost_min = ost;
- break;
- }
- if (!ost->finished && opts < opts_min) {
- opts_min = opts;
- ost_min = ost;
- }
- }
- if (!ost_min)
- return AVERROR_EOF;
- *post = ost_min;
- return ost_min->unavailable ? AVERROR(EAGAIN) : 0;
-}
-
static void set_tty_echo(int on)
{
#if HAVE_TERMIOS_H
@@ -1042,149 +892,21 @@ static int check_keyboard_interaction(int64_t cur_time)
return 0;
}
-static void reset_eagain(void)
-{
- for (OutputStream *ost = ost_iter(NULL); ost; ost = ost_iter(ost))
- ost->unavailable = 0;
-}
-
-static void decode_flush(InputFile *ifile)
-{
- for (int i = 0; i < ifile->nb_streams; i++) {
- InputStream *ist = ifile->streams[i];
-
- if (ist->discard || !ist->decoding_needed)
- continue;
-
- dec_packet(ist, NULL, 1);
- }
-}
-
-/*
- * Return
- * - 0 -- one packet was read and processed
- * - AVERROR(EAGAIN) -- no packets were available for selected file,
- * this function should be called again
- * - AVERROR_EOF -- this function should not be called again
- */
-static int process_input(int file_index, AVPacket *pkt)
-{
- InputFile *ifile = input_files[file_index];
- InputStream *ist;
- int ret, i;
-
- ret = ifile_get_packet(ifile, pkt);
-
- if (ret == 1) {
- /* the input file is looped: flush the decoders */
- decode_flush(ifile);
- return AVERROR(EAGAIN);
- }
- if (ret < 0) {
- if (ret != AVERROR_EOF) {
- av_log(ifile, AV_LOG_ERROR,
- "Error retrieving a packet from demuxer: %s\n", av_err2str(ret));
- if (exit_on_error)
- return ret;
- }
-
- for (i = 0; i < ifile->nb_streams; i++) {
- ist = ifile->streams[i];
- if (!ist->discard) {
- ret = process_input_packet(ist, NULL, 0);
- if (ret>0)
- return 0;
- else if (ret < 0)
- return ret;
- }
-
- /* mark all outputs that don't go through lavfi as finished */
- for (int oidx = 0; oidx < ist->nb_outputs; oidx++) {
- OutputStream *ost = ist->outputs[oidx];
- OutputFile *of = output_files[ost->file_index];
-
- ret = of_output_packet(of, ost, NULL);
- if (ret < 0)
- return ret;
- }
- }
-
- ifile->eof_reached = 1;
- return AVERROR(EAGAIN);
- }
-
- reset_eagain();
-
- ist = ifile->streams[pkt->stream_index];
-
- sub2video_heartbeat(ifile, pkt->pts, pkt->time_base);
-
- ret = process_input_packet(ist, pkt, 0);
-
- av_packet_unref(pkt);
-
- return ret < 0 ? ret : 0;
-}
-
-/**
- * Run a single step of transcoding.
- *
- * @return 0 for success, <0 for error
- */
-static int transcode_step(OutputStream *ost, AVPacket *demux_pkt)
-{
- InputStream *ist = NULL;
- int ret;
-
- if (ost->filter) {
- if ((ret = fg_transcode_step(ost->filter->graph, &ist)) < 0)
- return ret;
- if (!ist)
- return 0;
- } else {
- ist = ost->ist;
- av_assert0(ist);
- }
-
- ret = process_input(ist->file_index, demux_pkt);
- if (ret == AVERROR(EAGAIN)) {
- return 0;
- }
-
- if (ret < 0)
- return ret == AVERROR_EOF ? 0 : ret;
-
- // process_input() above might have caused output to become available
- // in multiple filtergraphs, so we process all of them
- for (int i = 0; i < nb_filtergraphs; i++) {
- ret = reap_filters(filtergraphs[i], 0);
- if (ret < 0)
- return ret;
- }
-
- return 0;
-}
-
/*
* The following code is the main loop of the file converter
*/
-static int transcode(Scheduler *sch, int *err_rate_exceeded)
+static int transcode(Scheduler *sch)
{
int ret = 0, i;
- InputStream *ist;
- int64_t timer_start;
- AVPacket *demux_pkt = NULL;
+ int64_t timer_start, transcode_ts = 0;
print_stream_maps();
- *err_rate_exceeded = 0;
atomic_store(&transcode_init_done, 1);
- demux_pkt = av_packet_alloc();
- if (!demux_pkt) {
- ret = AVERROR(ENOMEM);
- goto fail;
- }
+ ret = sch_start(sch);
+ if (ret < 0)
+ return ret;
if (stdin_interaction) {
av_log(NULL, AV_LOG_INFO, "Press [q] to stop, [?] for help\n");
@@ -1192,8 +914,7 @@ static int transcode(Scheduler *sch, int *err_rate_exceeded)
timer_start = av_gettime_relative();
- while (!received_sigterm) {
- OutputStream *ost;
+ while (!sch_wait(sch, stats_period, &transcode_ts)) {
int64_t cur_time= av_gettime_relative();
/* if 'q' pressed, exits */
@@ -1201,49 +922,11 @@ static int transcode(Scheduler *sch, int *err_rate_exceeded)
if (check_keyboard_interaction(cur_time) < 0)
break;
- ret = choose_output(&ost);
- if (ret == AVERROR(EAGAIN)) {
- reset_eagain();
- av_usleep(10000);
- ret = 0;
- continue;
- } else if (ret < 0) {
- av_log(NULL, AV_LOG_VERBOSE, "No more output streams to write to, finishing.\n");
- ret = 0;
- break;
- }
-
- ret = transcode_step(ost, demux_pkt);
- if (ret < 0 && ret != AVERROR_EOF) {
- av_log(NULL, AV_LOG_ERROR, "Error while filtering: %s\n", av_err2str(ret));
- break;
- }
-
/* dump report by using the output first video and audio streams */
- print_report(0, timer_start, cur_time);
+ print_report(0, timer_start, cur_time, transcode_ts);
}
- /* at the end of stream, we must flush the decoder buffers */
- for (ist = ist_iter(NULL); ist; ist = ist_iter(ist)) {
- float err_rate;
-
- if (!input_files[ist->file_index]->eof_reached) {
- int err = process_input_packet(ist, NULL, 0);
- ret = err_merge(ret, err);
- }
-
- err_rate = (ist->frames_decoded || ist->decode_errors) ?
- ist->decode_errors / (ist->frames_decoded + ist->decode_errors) : 0.f;
- if (err_rate > max_error_rate) {
- av_log(ist, AV_LOG_FATAL, "Decode error rate %g exceeds maximum %g\n",
- err_rate, max_error_rate);
- *err_rate_exceeded = 1;
- } else if (err_rate)
- av_log(ist, AV_LOG_VERBOSE, "Decode error rate %g\n", err_rate);
- }
- ret = err_merge(ret, enc_flush());
-
- term_exit();
+ ret = sch_stop(sch);
/* write the trailer if needed */
for (i = 0; i < nb_output_files; i++) {
@@ -1251,11 +934,10 @@ static int transcode(Scheduler *sch, int *err_rate_exceeded)
ret = err_merge(ret, err);
}
- /* dump report by using the first video and audio streams */
- print_report(1, timer_start, av_gettime_relative());
+ term_exit();
-fail:
- av_packet_free(&demux_pkt);
+ /* dump report by using the first video and audio streams */
+ print_report(1, timer_start, av_gettime_relative(), transcode_ts);
return ret;
}
@@ -1308,7 +990,7 @@ int main(int argc, char **argv)
{
Scheduler *sch = NULL;
- int ret, err_rate_exceeded;
+ int ret;
BenchmarkTimeStamps ti;
init_dynload();
@@ -1350,7 +1032,7 @@ int main(int argc, char **argv)
}
current_time = ti = get_benchmark_time_stamps();
- ret = transcode(sch, &err_rate_exceeded);
+ ret = transcode(sch);
if (ret >= 0 && do_benchmark) {
int64_t utime, stime, rtime;
current_time = get_benchmark_time_stamps();
@@ -1362,8 +1044,8 @@ int main(int argc, char **argv)
utime / 1000000.0, stime / 1000000.0, rtime / 1000000.0);
}
- ret = received_nb_signals ? 255 :
- err_rate_exceeded ? 69 : ret;
+ ret = received_nb_signals ? 255 :
+ (ret == FFMPEG_ERROR_RATE_EXCEEDED) ? 69 : ret;
finish:
if (ret == AVERROR_EXIT)
diff --git a/fftools/ffmpeg.h b/fftools/ffmpeg.h
index a89038b765..ba82b7490d 100644
--- a/fftools/ffmpeg.h
+++ b/fftools/ffmpeg.h
@@ -61,6 +61,8 @@
#define FFMPEG_OPT_TOP 1
#define FFMPEG_OPT_FORCE_KF_SOURCE_NO_DROP 1
+#define FFMPEG_ERROR_RATE_EXCEEDED FFERRTAG('E', 'R', 'E', 'D')
+
enum VideoSyncMethod {
VSYNC_AUTO = -1,
VSYNC_PASSTHROUGH,
@@ -82,13 +84,16 @@ enum HWAccelID {
};
enum FrameOpaque {
- FRAME_OPAQUE_REAP_FILTERS = 1,
- FRAME_OPAQUE_CHOOSE_INPUT,
- FRAME_OPAQUE_SUB_HEARTBEAT,
+ FRAME_OPAQUE_SUB_HEARTBEAT = 1,
FRAME_OPAQUE_EOF,
FRAME_OPAQUE_SEND_COMMAND,
};
+enum PacketOpaque {
+ PKT_OPAQUE_SUB_HEARTBEAT = 1,
+ PKT_OPAQUE_FIX_SUB_DURATION,
+};
+
typedef struct HWDevice {
const char *name;
enum AVHWDeviceType type;
@@ -309,11 +314,8 @@ typedef struct OutputFilter {
enum AVMediaType type;
- /* pts of the last frame received from this filter, in AV_TIME_BASE_Q */
- int64_t last_pts;
-
- uint64_t nb_frames_dup;
- uint64_t nb_frames_drop;
+ atomic_uint_least64_t nb_frames_dup;
+ atomic_uint_least64_t nb_frames_drop;
} OutputFilter;
typedef struct FilterGraph {
@@ -426,11 +428,6 @@ typedef struct InputFile {
float readrate;
int accurate_seek;
-
- /* when looping the input file, this queue is used by decoders to report
- * the last frame timestamp back to the demuxer thread */
- AVThreadMessageQueue *audio_ts_queue;
- int audio_ts_queue_size;
} InputFile;
enum forced_keyframes_const {
@@ -532,8 +529,6 @@ typedef struct OutputStream {
InputStream *ist;
AVStream *st; /* stream in the output file */
- /* dts of the last packet sent to the muxing queue, in AV_TIME_BASE_Q */
- int64_t last_mux_dts;
AVRational enc_timebase;
@@ -578,13 +573,6 @@ typedef struct OutputStream {
AVDictionary *sws_dict;
AVDictionary *swr_opts;
char *apad;
- OSTFinished finished; /* no more packets should be written for this stream */
- int unavailable; /* true if the steram is unavailable (possibly temporarily) */
-
- // init_output_stream() has been called for this stream
- // The encoder and the bitstream filters have been initialized and the stream
- // parameters are set in the AVStream.
- int initialized;
const char *attachment_filename;
@@ -598,9 +586,8 @@ typedef struct OutputStream {
uint64_t samples_encoded;
/* packet quality factor */
- int quality;
+ atomic_int quality;
- int sq_idx_encode;
int sq_idx_mux;
EncStats enc_stats_pre;
@@ -658,7 +645,6 @@ extern FilterGraph **filtergraphs;
extern int nb_filtergraphs;
extern char *vstats_filename;
-extern char *sdp_filename;
extern float dts_delta_threshold;
extern float dts_error_threshold;
@@ -691,7 +677,7 @@ extern const AVIOInterruptCB int_cb;
extern const OptionDef options[];
extern HWDevice *filter_hw_device;
-extern unsigned nb_output_dumped;
+extern atomic_uint nb_output_dumped;
extern int ignore_unknown_streams;
extern int copy_unknown_streams;
@@ -737,10 +723,6 @@ FrameData *frame_data(AVFrame *frame);
const FrameData *frame_data_c(AVFrame *frame);
-int ifilter_send_frame(InputFilter *ifilter, AVFrame *frame, int keep_reference);
-int ifilter_send_eof(InputFilter *ifilter, int64_t pts, AVRational tb);
-void ifilter_sub2video_heartbeat(InputFilter *ifilter, int64_t pts, AVRational tb);
-
/**
* Set up fallback filtering parameters from a decoder context. They will only
* be used if no frames are ever sent on this input, otherwise the actual
@@ -761,26 +743,9 @@ int fg_create(FilterGraph **pfg, char *graph_desc, Scheduler *sch);
void fg_free(FilterGraph **pfg);
-/**
- * Perform a step of transcoding for the specified filter graph.
- *
- * @param[in] graph filter graph to consider
- * @param[out] best_ist input stream where a frame would allow to continue
- * @return 0 for success, <0 for error
- */
-int fg_transcode_step(FilterGraph *graph, InputStream **best_ist);
-
void fg_send_command(FilterGraph *fg, double time, const char *target,
const char *command, const char *arg, int all_filters);
-/**
- * Get and encode new output from specified filtergraph, without causing
- * activity.
- *
- * @return 0 for success, <0 for severe errors
- */
-int reap_filters(FilterGraph *fg, int flush);
-
int ffmpeg_parse_options(int argc, char **argv, Scheduler *sch);
void enc_stats_write(OutputStream *ost, EncStats *es,
@@ -807,25 +772,11 @@ int hwaccel_retrieve_data(AVCodecContext *avctx, AVFrame *input);
int dec_open(InputStream *ist, Scheduler *sch, unsigned sch_idx);
void dec_free(Decoder **pdec);
-/**
- * Submit a packet for decoding
- *
- * When pkt==NULL and no_eof=0, there will be no more input. Flush decoders and
- * mark all downstreams as finished.
- *
- * When pkt==NULL and no_eof=1, the stream was reset (e.g. after a seek). Flush
- * decoders and await further input.
- */
-int dec_packet(InputStream *ist, const AVPacket *pkt, int no_eof);
-
int enc_alloc(Encoder **penc, const AVCodec *codec,
Scheduler *sch, unsigned sch_idx);
void enc_free(Encoder **penc);
-int enc_open(OutputStream *ost, const AVFrame *frame);
-int enc_subtitle(OutputFile *of, OutputStream *ost, const AVSubtitle *sub);
-int enc_frame(OutputStream *ost, AVFrame *frame);
-int enc_flush(void);
+int enc_open(void *opaque, const AVFrame *frame);
/*
* Initialize muxing state for the given stream, should be called
@@ -840,30 +791,11 @@ void of_free(OutputFile **pof);
void of_enc_stats_close(void);
-int of_output_packet(OutputFile *of, OutputStream *ost, AVPacket *pkt);
-
-/**
- * @param dts predicted packet dts in AV_TIME_BASE_Q
- */
-int of_streamcopy(OutputStream *ost, const AVPacket *pkt, int64_t dts);
-
int64_t of_filesize(OutputFile *of);
int ifile_open(const OptionsContext *o, const char *filename, Scheduler *sch);
void ifile_close(InputFile **f);
-/**
- * Get next input packet from the demuxer.
- *
- * @param pkt the packet is written here when this function returns 0
- * @return
- * - 0 when a packet has been read successfully
- * - 1 when stream end was reached, but the stream is looped;
- * caller should flush decoders and read from this demuxer again
- * - a negative error code on failure
- */
-int ifile_get_packet(InputFile *f, AVPacket *pkt);
-
int ist_output_add(InputStream *ist, OutputStream *ost);
int ist_filter_add(InputStream *ist, InputFilter *ifilter, int is_simple);
@@ -880,9 +812,6 @@ InputStream *ist_iter(InputStream *prev);
* pass NULL to start iteration */
OutputStream *ost_iter(OutputStream *prev);
-void close_output_stream(OutputStream *ost);
-int trigger_fix_sub_duration_heartbeat(OutputStream *ost, const AVPacket *pkt);
-int fix_sub_duration_heartbeat(InputStream *ist, int64_t signal_pts);
void update_benchmark(const char *fmt, ...);
#define SPECIFIER_OPT_FMT_str "%s"
diff --git a/fftools/ffmpeg_dec.c b/fftools/ffmpeg_dec.c
index 90ea0d6d93..5dde82a276 100644
--- a/fftools/ffmpeg_dec.c
+++ b/fftools/ffmpeg_dec.c
@@ -54,24 +54,6 @@ struct Decoder {
Scheduler *sch;
unsigned sch_idx;
-
- pthread_t thread;
- /**
- * Queue for sending coded packets from the main thread to
- * the decoder thread.
- *
- * An empty packet is sent to flush the decoder without terminating
- * decoding.
- */
- ThreadQueue *queue_in;
- /**
- * Queue for sending decoded frames from the decoder thread
- * to the main thread.
- *
- * An empty frame is sent to signal that a single packet has been fully
- * processed.
- */
- ThreadQueue *queue_out;
};
// data that is local to the decoder thread and not visible outside of it
@@ -80,24 +62,6 @@ typedef struct DecThreadContext {
AVPacket *pkt;
} DecThreadContext;
-static int dec_thread_stop(Decoder *d)
-{
- void *ret;
-
- if (!d->queue_in)
- return 0;
-
- tq_send_finish(d->queue_in, 0);
- tq_receive_finish(d->queue_out, 0);
-
- pthread_join(d->thread, &ret);
-
- tq_free(&d->queue_in);
- tq_free(&d->queue_out);
-
- return (intptr_t)ret;
-}
-
void dec_free(Decoder **pdec)
{
Decoder *dec = *pdec;
@@ -105,8 +69,6 @@ void dec_free(Decoder **pdec)
if (!dec)
return;
- dec_thread_stop(dec);
-
av_frame_free(&dec->frame);
av_packet_free(&dec->pkt);
@@ -148,25 +110,6 @@ fail:
return AVERROR(ENOMEM);
}
-static int send_frame_to_filters(InputStream *ist, AVFrame *decoded_frame)
-{
- int i, ret = 0;
-
- for (i = 0; i < ist->nb_filters; i++) {
- ret = ifilter_send_frame(ist->filters[i], decoded_frame,
- i < ist->nb_filters - 1 ||
- ist->dec->type == AVMEDIA_TYPE_SUBTITLE);
- if (ret == AVERROR_EOF)
- ret = 0; /* ignore */
- if (ret < 0) {
- av_log(NULL, AV_LOG_ERROR,
- "Failed to inject frame into filter network: %s\n", av_err2str(ret));
- break;
- }
- }
- return ret;
-}
-
static AVRational audio_samplerate_update(void *logctx, Decoder *d,
const AVFrame *frame)
{
@@ -421,28 +364,14 @@ static int process_subtitle(InputStream *ist, AVFrame *frame)
if (!subtitle)
return 0;
- ret = send_frame_to_filters(ist, frame);
+ ret = sch_dec_send(d->sch, d->sch_idx, frame);
if (ret < 0)
- return ret;
+ av_frame_unref(frame);
- subtitle = (AVSubtitle*)frame->buf[0]->data;
- if (!subtitle->num_rects)
- return 0;
-
- for (int oidx = 0; oidx < ist->nb_outputs; oidx++) {
- OutputStream *ost = ist->outputs[oidx];
- if (!ost->enc || ost->type != AVMEDIA_TYPE_SUBTITLE)
- continue;
-
- ret = enc_subtitle(output_files[ost->file_index], ost, subtitle);
- if (ret < 0)
- return ret;
- }
-
- return 0;
+ return ret == AVERROR_EOF ? AVERROR_EXIT : ret;
}
-int fix_sub_duration_heartbeat(InputStream *ist, int64_t signal_pts)
+static int fix_sub_duration_heartbeat(InputStream *ist, int64_t signal_pts)
{
Decoder *d = ist->decoder;
int ret = AVERROR_BUG;
@@ -468,12 +397,24 @@ int fix_sub_duration_heartbeat(InputStream *ist, int64_t signal_pts)
static int transcode_subtitles(InputStream *ist, const AVPacket *pkt,
AVFrame *frame)
{
- Decoder *d = ist->decoder;
+ Decoder *d = ist->decoder;
AVPacket *flush_pkt = NULL;
AVSubtitle subtitle;
int got_output;
int ret;
+ if (pkt && (intptr_t)pkt->opaque == PKT_OPAQUE_SUB_HEARTBEAT) {
+ frame->pts = pkt->pts;
+ frame->time_base = pkt->time_base;
+ frame->opaque = (void*)(intptr_t)FRAME_OPAQUE_SUB_HEARTBEAT;
+
+ ret = sch_dec_send(d->sch, d->sch_idx, frame);
+ return ret == AVERROR_EOF ? AVERROR_EXIT : ret;
+ } else if (pkt && (intptr_t)pkt->opaque == PKT_OPAQUE_FIX_SUB_DURATION) {
+ return fix_sub_duration_heartbeat(ist, av_rescale_q(pkt->pts, pkt->time_base,
+ AV_TIME_BASE_Q));
+ }
+
if (!pkt) {
flush_pkt = av_packet_alloc();
if (!flush_pkt)
@@ -496,7 +437,7 @@ static int transcode_subtitles(InputStream *ist, const AVPacket *pkt,
ist->frames_decoded++;
- // XXX the queue for transferring data back to the main thread runs
+ // XXX the queue for transferring data to consumers runs
// on AVFrames, so we wrap AVSubtitle in an AVBufferRef and put that
// inside the frame
// eventually, subtitles should be switched to use AVFrames natively
@@ -509,26 +450,7 @@ static int transcode_subtitles(InputStream *ist, const AVPacket *pkt,
frame->width = ist->dec_ctx->width;
frame->height = ist->dec_ctx->height;
- ret = tq_send(d->queue_out, 0, frame);
- if (ret < 0)
- av_frame_unref(frame);
-
- return ret;
-}
-
-static int send_filter_eof(InputStream *ist)
-{
- Decoder *d = ist->decoder;
- int i, ret;
-
- for (i = 0; i < ist->nb_filters; i++) {
- int64_t end_pts = d->last_frame_pts == AV_NOPTS_VALUE ? AV_NOPTS_VALUE :
- d->last_frame_pts + d->last_frame_duration_est;
- ret = ifilter_send_eof(ist->filters[i], end_pts, d->last_frame_tb);
- if (ret < 0)
- return ret;
- }
- return 0;
+ return process_subtitle(ist, frame);
}
static int packet_decode(InputStream *ist, AVPacket *pkt, AVFrame *frame)
@@ -635,9 +557,11 @@ static int packet_decode(InputStream *ist, AVPacket *pkt, AVFrame *frame)
ist->frames_decoded++;
- ret = tq_send(d->queue_out, 0, frame);
- if (ret < 0)
- return ret;
+ ret = sch_dec_send(d->sch, d->sch_idx, frame);
+ if (ret < 0) {
+ av_frame_unref(frame);
+ return ret == AVERROR_EOF ? AVERROR_EXIT : ret;
+ }
}
}
@@ -679,7 +603,6 @@ fail:
void *decoder_thread(void *arg)
{
InputStream *ist = arg;
- InputFile *ifile = input_files[ist->file_index];
Decoder *d = ist->decoder;
DecThreadContext dt;
int ret = 0, input_status = 0;
@@ -691,19 +614,31 @@ void *decoder_thread(void *arg)
dec_thread_set_name(ist);
while (!input_status) {
- int dummy, flush_buffers;
+ int flush_buffers, have_data;
- input_status = tq_receive(d->queue_in, &dummy, dt.pkt);
- flush_buffers = input_status >= 0 && !dt.pkt->buf;
- if (!dt.pkt->buf)
+ input_status = sch_dec_receive(d->sch, d->sch_idx, dt.pkt);
+ have_data = input_status >= 0 &&
+ (dt.pkt->buf || dt.pkt->side_data_elems ||
+ (intptr_t)dt.pkt->opaque == PKT_OPAQUE_SUB_HEARTBEAT ||
+ (intptr_t)dt.pkt->opaque == PKT_OPAQUE_FIX_SUB_DURATION);
+ flush_buffers = input_status >= 0 && !have_data;
+ if (!have_data)
av_log(ist, AV_LOG_VERBOSE, "Decoder thread received %s packet\n",
flush_buffers ? "flush" : "EOF");
- ret = packet_decode(ist, dt.pkt->buf ? dt.pkt : NULL, dt.frame);
+ ret = packet_decode(ist, have_data ? dt.pkt : NULL, dt.frame);
av_packet_unref(dt.pkt);
av_frame_unref(dt.frame);
+ // AVERROR_EOF - EOF from the decoder
+ // AVERROR_EXIT - EOF from the scheduler
+ // we treat them differently when flushing
+ if (ret == AVERROR_EXIT) {
+ ret = AVERROR_EOF;
+ flush_buffers = 0;
+ }
+
if (ret == AVERROR_EOF) {
av_log(ist, AV_LOG_VERBOSE, "Decoder returned EOF, %s\n",
flush_buffers ? "resetting" : "finishing");
@@ -711,11 +646,10 @@ void *decoder_thread(void *arg)
if (!flush_buffers)
break;
- /* report last frame duration to the demuxer thread */
+ /* report last frame duration to the scheduler */
if (ist->dec->type == AVMEDIA_TYPE_AUDIO) {
- Timestamp ts = { .ts = d->last_frame_pts + d->last_frame_duration_est,
- .tb = d->last_frame_tb };
- av_thread_message_queue_send(ifile->audio_ts_queue, &ts, 0);
+ dt.pkt->pts = d->last_frame_pts + d->last_frame_duration_est;
+ dt.pkt->time_base = d->last_frame_tb;
}
avcodec_flush_buffers(ist->dec_ctx);
@@ -724,149 +658,47 @@ void *decoder_thread(void *arg)
av_err2str(ret));
break;
}
-
- // signal to the consumer thread that the entire packet was processed
- ret = tq_send(d->queue_out, 0, dt.frame);
- if (ret < 0) {
- if (ret != AVERROR_EOF)
- av_log(ist, AV_LOG_ERROR, "Error communicating with the main thread\n");
- break;
- }
}
// EOF is normal thread termination
if (ret == AVERROR_EOF)
ret = 0;
+ // on success send EOF timestamp to our downstreams
+ if (ret >= 0) {
+ float err_rate;
+
+ av_frame_unref(dt.frame);
+
+ dt.frame->opaque = (void*)(intptr_t)FRAME_OPAQUE_EOF;
+ dt.frame->pts = d->last_frame_pts == AV_NOPTS_VALUE ? AV_NOPTS_VALUE :
+ d->last_frame_pts + d->last_frame_duration_est;
+ dt.frame->time_base = d->last_frame_tb;
+
+ ret = sch_dec_send(d->sch, d->sch_idx, dt.frame);
+ if (ret < 0 && ret != AVERROR_EOF) {
+ av_log(NULL, AV_LOG_FATAL,
+ "Error signalling EOF timestamp: %s\n", av_err2str(ret));
+ goto finish;
+ }
+ ret = 0;
+
+ err_rate = (ist->frames_decoded || ist->decode_errors) ?
+ ist->decode_errors / (float)(ist->frames_decoded + ist->decode_errors) : 0.f;
+ if (err_rate > max_error_rate) {
+ av_log(ist, AV_LOG_FATAL, "Decode error rate %g exceeds maximum %g\n",
+ err_rate, max_error_rate);
+ ret = FFMPEG_ERROR_RATE_EXCEEDED;
+ } else if (err_rate)
+ av_log(ist, AV_LOG_VERBOSE, "Decode error rate %g\n", err_rate);
+ }
+
finish:
- tq_receive_finish(d->queue_in, 0);
- tq_send_finish (d->queue_out, 0);
-
- // make sure the demuxer does not get stuck waiting for audio durations
- // that will never arrive
- if (ifile->audio_ts_queue && ist->dec->type == AVMEDIA_TYPE_AUDIO)
- av_thread_message_queue_set_err_recv(ifile->audio_ts_queue, AVERROR_EOF);
-
dec_thread_uninit(&dt);
- av_log(ist, AV_LOG_VERBOSE, "Terminating decoder thread\n");
-
return (void*)(intptr_t)ret;
}
-int dec_packet(InputStream *ist, const AVPacket *pkt, int no_eof)
-{
- Decoder *d = ist->decoder;
- int ret = 0, thread_ret;
-
- // thread already joined
- if (!d->queue_in)
- return AVERROR_EOF;
-
- // send the packet/flush request/EOF to the decoder thread
- if (pkt || no_eof) {
- av_packet_unref(d->pkt);
-
- if (pkt) {
- ret = av_packet_ref(d->pkt, pkt);
- if (ret < 0)
- goto finish;
- }
-
- ret = tq_send(d->queue_in, 0, d->pkt);
- if (ret < 0)
- goto finish;
- } else
- tq_send_finish(d->queue_in, 0);
-
- // retrieve all decoded data for the packet
- while (1) {
- int dummy;
-
- ret = tq_receive(d->queue_out, &dummy, d->frame);
- if (ret < 0)
- goto finish;
-
- // packet fully processed
- if (!d->frame->buf[0])
- return 0;
-
- // process the decoded frame
- if (ist->dec->type == AVMEDIA_TYPE_SUBTITLE) {
- ret = process_subtitle(ist, d->frame);
- } else {
- ret = send_frame_to_filters(ist, d->frame);
- }
- av_frame_unref(d->frame);
- if (ret < 0)
- goto finish;
- }
-
-finish:
- thread_ret = dec_thread_stop(d);
- if (thread_ret < 0) {
- av_log(ist, AV_LOG_ERROR, "Decoder thread returned error: %s\n",
- av_err2str(thread_ret));
- ret = err_merge(ret, thread_ret);
- }
- // non-EOF errors here are all fatal
- if (ret < 0 && ret != AVERROR_EOF)
- return ret;
-
- // signal EOF to our downstreams
- ret = send_filter_eof(ist);
- if (ret < 0) {
- av_log(NULL, AV_LOG_FATAL, "Error marking filters as finished\n");
- return ret;
- }
-
- return AVERROR_EOF;
-}
-
-static int dec_thread_start(InputStream *ist)
-{
- Decoder *d = ist->decoder;
- ObjPool *op;
- int ret = 0;
-
- op = objpool_alloc_packets();
- if (!op)
- return AVERROR(ENOMEM);
-
- d->queue_in = tq_alloc(1, 1, op, pkt_move);
- if (!d->queue_in) {
- objpool_free(&op);
- return AVERROR(ENOMEM);
- }
-
- op = objpool_alloc_frames();
- if (!op)
- goto fail;
-
- d->queue_out = tq_alloc(1, 4, op, frame_move);
- if (!d->queue_out) {
- objpool_free(&op);
- goto fail;
- }
-
- ret = pthread_create(&d->thread, NULL, decoder_thread, ist);
- if (ret) {
- ret = AVERROR(ret);
- av_log(ist, AV_LOG_ERROR, "pthread_create() failed: %s\n",
- av_err2str(ret));
- goto fail;
- }
-
- return 0;
-fail:
- if (ret >= 0)
- ret = AVERROR(ENOMEM);
-
- tq_free(&d->queue_in);
- tq_free(&d->queue_out);
- return ret;
-}
-
static enum AVPixelFormat get_format(AVCodecContext *s, const enum AVPixelFormat *pix_fmts)
{
InputStream *ist = s->opaque;
@@ -1118,12 +950,5 @@ int dec_open(InputStream *ist, Scheduler *sch, unsigned sch_idx)
if (ret < 0)
return ret;
- ret = dec_thread_start(ist);
- if (ret < 0) {
- av_log(ist, AV_LOG_ERROR, "Error starting decoder thread: %s\n",
- av_err2str(ret));
- return ret;
- }
-
return 0;
}
diff --git a/fftools/ffmpeg_demux.c b/fftools/ffmpeg_demux.c
index 2234dbe076..91cd7a1125 100644
--- a/fftools/ffmpeg_demux.c
+++ b/fftools/ffmpeg_demux.c
@@ -22,8 +22,6 @@
#include "ffmpeg.h"
#include "ffmpeg_sched.h"
#include "ffmpeg_utils.h"
-#include "objpool.h"
-#include "thread_queue.h"
#include "libavutil/avassert.h"
#include "libavutil/avstring.h"
@@ -35,7 +33,6 @@
#include "libavutil/pixdesc.h"
#include "libavutil/time.h"
#include "libavutil/timestamp.h"
-#include "libavutil/thread.h"
#include "libavcodec/packet.h"
@@ -66,7 +63,11 @@ typedef struct DemuxStream {
double ts_scale;
+ // scheduler returned EOF for this stream
+ int finished;
+
int streamcopy_needed;
+ int have_sub2video;
int wrap_correction_done;
int saw_first_ts;
@@ -101,6 +102,7 @@ typedef struct Demuxer {
/* number of times input stream should be looped */
int loop;
+ int have_audio_dec;
/* duration of the looped segment of the input file */
Timestamp duration;
/* pts with the smallest/largest values ever seen */
@@ -113,11 +115,12 @@ typedef struct Demuxer {
double readrate_initial_burst;
Scheduler *sch;
- ThreadQueue *thread_queue;
- int thread_queue_size;
- pthread_t thread;
+
+ AVPacket *pkt_heartbeat;
int read_started;
+ int nb_streams_used;
+ int nb_streams_finished;
} Demuxer;
static DemuxStream *ds_from_ist(InputStream *ist)
@@ -153,7 +156,7 @@ static void report_new_stream(Demuxer *d, const AVPacket *pkt)
d->nb_streams_warn = pkt->stream_index + 1;
}
-static int seek_to_start(Demuxer *d)
+static int seek_to_start(Demuxer *d, Timestamp end_pts)
{
InputFile *ifile = &d->f;
AVFormatContext *is = ifile->ctx;
@@ -163,21 +166,10 @@ static int seek_to_start(Demuxer *d)
if (ret < 0)
return ret;
- if (ifile->audio_ts_queue_size) {
- int got_ts = 0;
-
- while (got_ts < ifile->audio_ts_queue_size) {
- Timestamp ts;
- ret = av_thread_message_queue_recv(ifile->audio_ts_queue, &ts, 0);
- if (ret < 0)
- return ret;
- got_ts++;
-
- if (d->max_pts.ts == AV_NOPTS_VALUE ||
- av_compare_ts(d->max_pts.ts, d->max_pts.tb, ts.ts, ts.tb) < 0)
- d->max_pts = ts;
- }
- }
+ if (end_pts.ts != AV_NOPTS_VALUE &&
+ (d->max_pts.ts == AV_NOPTS_VALUE ||
+ av_compare_ts(d->max_pts.ts, d->max_pts.tb, end_pts.ts, end_pts.tb) < 0))
+ d->max_pts = end_pts;
if (d->max_pts.ts != AV_NOPTS_VALUE) {
int64_t min_pts = d->min_pts.ts == AV_NOPTS_VALUE ? 0 : d->min_pts.ts;
@@ -404,7 +396,7 @@ static int ts_fixup(Demuxer *d, AVPacket *pkt)
duration = av_rescale_q(d->duration.ts, d->duration.tb, pkt->time_base);
if (pkt->pts != AV_NOPTS_VALUE) {
// audio decoders take precedence for estimating total file duration
- int64_t pkt_duration = ifile->audio_ts_queue_size ? 0 : pkt->duration;
+ int64_t pkt_duration = d->have_audio_dec ? 0 : pkt->duration;
pkt->pts += duration;
@@ -440,7 +432,7 @@ static int ts_fixup(Demuxer *d, AVPacket *pkt)
return 0;
}
-static int input_packet_process(Demuxer *d, AVPacket *pkt)
+static int input_packet_process(Demuxer *d, AVPacket *pkt, unsigned *send_flags)
{
InputFile *f = &d->f;
InputStream *ist = f->streams[pkt->stream_index];
@@ -451,6 +443,16 @@ static int input_packet_process(Demuxer *d, AVPacket *pkt)
if (ret < 0)
return ret;
+ if (f->recording_time != INT64_MAX) {
+ int64_t start_time = 0;
+ if (copy_ts) {
+ start_time += f->start_time != AV_NOPTS_VALUE ? f->start_time : 0;
+ start_time += start_at_zero ? 0 : f->start_time_effective;
+ }
+ if (ds->dts >= f->recording_time + start_time)
+ *send_flags |= DEMUX_SEND_STREAMCOPY_EOF;
+ }
+
ds->data_size += pkt->size;
ds->nb_packets++;
@@ -465,6 +467,8 @@ static int input_packet_process(Demuxer *d, AVPacket *pkt)
av_ts2timestr(input_files[ist->file_index]->ts_offset, &AV_TIME_BASE_Q));
}
+ pkt->stream_index = ds->sch_idx_stream;
+
return 0;
}
@@ -488,6 +492,65 @@ static void readrate_sleep(Demuxer *d)
}
}
+static int do_send(Demuxer *d, DemuxStream *ds, AVPacket *pkt, unsigned flags,
+ const char *pkt_desc)
+{
+ int ret;
+
+ ret = sch_demux_send(d->sch, d->f.index, pkt, flags);
+ if (ret == AVERROR_EOF) {
+ av_packet_unref(pkt);
+
+ av_log(ds, AV_LOG_VERBOSE, "All consumers of this stream are done\n");
+ ds->finished = 1;
+
+ if (++d->nb_streams_finished == d->nb_streams_used) {
+ av_log(d, AV_LOG_VERBOSE, "All consumers are done\n");
+ return AVERROR_EOF;
+ }
+ } else if (ret < 0) {
+ if (ret != AVERROR_EXIT)
+ av_log(d, AV_LOG_ERROR,
+ "Unable to send %s packet to consumers: %s\n",
+ pkt_desc, av_err2str(ret));
+ return ret;
+ }
+
+ return 0;
+}
+
+static int demux_send(Demuxer *d, DemuxStream *ds, AVPacket *pkt, unsigned flags)
+{
+ InputFile *f = &d->f;
+ int ret;
+
+ // send heartbeat for sub2video streams
+ if (d->pkt_heartbeat && pkt->pts != AV_NOPTS_VALUE) {
+ for (int i = 0; i < f->nb_streams; i++) {
+ DemuxStream *ds1 = ds_from_ist(f->streams[i]);
+
+ if (ds1->finished || !ds1->have_sub2video)
+ continue;
+
+ d->pkt_heartbeat->pts = pkt->pts;
+ d->pkt_heartbeat->time_base = pkt->time_base;
+ d->pkt_heartbeat->stream_index = ds1->sch_idx_stream;
+ d->pkt_heartbeat->opaque = (void*)(intptr_t)PKT_OPAQUE_SUB_HEARTBEAT;
+
+ ret = do_send(d, ds1, d->pkt_heartbeat, 0, "heartbeat");
+ if (ret < 0)
+ return ret;
+ }
+ }
+
+ ret = do_send(d, ds, pkt, flags, "demuxed");
+ if (ret < 0)
+ return ret;
+
+ return 0;
+}
+
static void discard_unused_programs(InputFile *ifile)
{
for (int j = 0; j < ifile->ctx->nb_programs; j++) {
@@ -527,9 +590,13 @@ static void *input_thread(void *arg)
discard_unused_programs(f);
+ d->read_started = 1;
d->wallclock_start = av_gettime_relative();
while (1) {
+ DemuxStream *ds;
+ unsigned send_flags = 0;
+
ret = av_read_frame(f->ctx, pkt);
if (ret == AVERROR(EAGAIN)) {
@@ -538,11 +605,13 @@ static void *input_thread(void *arg)
}
if (ret < 0) {
if (d->loop) {
- /* signal looping to the consumer thread */
+ /* signal looping to our consumers */
pkt->stream_index = -1;
- ret = tq_send(d->thread_queue, 0, pkt);
+
+ ret = sch_demux_send(d->sch, f->index, pkt, 0);
if (ret >= 0)
- ret = seek_to_start(d);
+ ret = seek_to_start(d, (Timestamp){ .ts = pkt->pts,
+ .tb = pkt->time_base });
if (ret >= 0)
continue;
@@ -551,9 +620,11 @@ static void *input_thread(void *arg)
if (ret == AVERROR_EOF)
av_log(d, AV_LOG_VERBOSE, "EOF while reading input\n");
- else
+ else {
av_log(d, AV_LOG_ERROR, "Error during demuxing: %s\n",
av_err2str(ret));
+ ret = exit_on_error ? ret : 0;
+ }
break;
}
@@ -565,8 +636,9 @@ static void *input_thread(void *arg)
/* the following test is needed in case new streams appear
dynamically in stream : we ignore them */
- if (pkt->stream_index >= f->nb_streams ||
- f->streams[pkt->stream_index]->discard) {
+ ds = pkt->stream_index < f->nb_streams ?
+ ds_from_ist(f->streams[pkt->stream_index]) : NULL;
+ if (!ds || ds->ist.discard || ds->finished) {
report_new_stream(d, pkt);
av_packet_unref(pkt);
continue;
@@ -583,122 +655,26 @@ static void *input_thread(void *arg)
}
}
- ret = input_packet_process(d, pkt);
+ ret = input_packet_process(d, pkt, &send_flags);
if (ret < 0)
break;
if (f->readrate)
readrate_sleep(d);
- ret = tq_send(d->thread_queue, 0, pkt);
- if (ret < 0) {
- if (ret != AVERROR_EOF)
- av_log(f, AV_LOG_ERROR,
- "Unable to send packet to main thread: %s\n",
- av_err2str(ret));
+ ret = demux_send(d, ds, pkt, send_flags);
+ if (ret < 0)
break;
- }
}
+ // EOF/EXIT is normal termination
+ if (ret == AVERROR_EOF || ret == AVERROR_EXIT)
+ ret = 0;
+
finish:
- av_assert0(ret < 0);
- tq_send_finish(d->thread_queue, 0);
-
av_packet_free(&pkt);
- av_log(d, AV_LOG_VERBOSE, "Terminating demuxer thread\n");
-
- return NULL;
-}
-
-static void thread_stop(Demuxer *d)
-{
- InputFile *f = &d->f;
-
- if (!d->thread_queue)
- return;
-
- tq_receive_finish(d->thread_queue, 0);
-
- pthread_join(d->thread, NULL);
-
- tq_free(&d->thread_queue);
-
- av_thread_message_queue_free(&f->audio_ts_queue);
-}
-
-static int thread_start(Demuxer *d)
-{
- int ret;
- InputFile *f = &d->f;
- ObjPool *op;
-
- if (d->thread_queue_size <= 0)
- d->thread_queue_size = (nb_input_files > 1 ? 8 : 1);
-
- op = objpool_alloc_packets();
- if (!op)
- return AVERROR(ENOMEM);
-
- d->thread_queue = tq_alloc(1, d->thread_queue_size, op, pkt_move);
- if (!d->thread_queue) {
- objpool_free(&op);
- return AVERROR(ENOMEM);
- }
-
- if (d->loop) {
- int nb_audio_dec = 0;
-
- for (int i = 0; i < f->nb_streams; i++) {
- InputStream *ist = f->streams[i];
- nb_audio_dec += !!(ist->decoding_needed &&
- ist->st->codecpar->codec_type == AVMEDIA_TYPE_AUDIO);
- }
-
- if (nb_audio_dec) {
- ret = av_thread_message_queue_alloc(&f->audio_ts_queue,
- nb_audio_dec, sizeof(Timestamp));
- if (ret < 0)
- goto fail;
- f->audio_ts_queue_size = nb_audio_dec;
- }
- }
-
- if ((ret = pthread_create(&d->thread, NULL, input_thread, d))) {
- av_log(d, AV_LOG_ERROR, "pthread_create failed: %s. Try to increase `ulimit -v` or decrease `ulimit -s`.\n", strerror(ret));
- ret = AVERROR(ret);
- goto fail;
- }
-
- d->read_started = 1;
-
- return 0;
-fail:
- tq_free(&d->thread_queue);
- return ret;
-}
-
-int ifile_get_packet(InputFile *f, AVPacket *pkt)
-{
- Demuxer *d = demuxer_from_ifile(f);
- int ret, dummy;
-
- if (!d->thread_queue) {
- ret = thread_start(d);
- if (ret < 0)
- return ret;
- }
-
- ret = tq_receive(d->thread_queue, &dummy, pkt);
- if (ret < 0)
- return ret;
-
- if (pkt->stream_index == -1) {
- av_assert0(!pkt->data && !pkt->side_data_elems);
- return 1;
- }
-
- return 0;
+ return (void*)(intptr_t)ret;
}
static void demux_final_stats(Demuxer *d)
@@ -769,8 +745,6 @@ void ifile_close(InputFile **pf)
if (!f)
return;
- thread_stop(d);
-
if (d->read_started)
demux_final_stats(d);
@@ -780,6 +754,8 @@ void ifile_close(InputFile **pf)
avformat_close_input(&f->ctx);
+ av_packet_free(&d->pkt_heartbeat);
+
av_freep(pf);
}
@@ -802,7 +778,11 @@ static int ist_use(InputStream *ist, int decoding_needed)
ds->sch_idx_stream = ret;
}
- ist->discard = 0;
+ if (ist->discard) {
+ ist->discard = 0;
+ d->nb_streams_used++;
+ }
+
ist->st->discard = ist->user_set_discard;
ist->decoding_needed |= decoding_needed;
ds->streamcopy_needed |= !decoding_needed;
@@ -823,6 +803,8 @@ static int ist_use(InputStream *ist, int decoding_needed)
ret = dec_open(ist, d->sch, ds->sch_idx_dec);
if (ret < 0)
return ret;
+
+ d->have_audio_dec |= is_audio;
}
return 0;
@@ -848,6 +830,7 @@ int ist_output_add(InputStream *ist, OutputStream *ost)
int ist_filter_add(InputStream *ist, InputFilter *ifilter, int is_simple)
{
+ Demuxer *d = demuxer_from_ifile(input_files[ist->file_index]);
DemuxStream *ds = ds_from_ist(ist);
int ret;
@@ -866,6 +849,15 @@ int ist_filter_add(InputStream *ist, InputFilter *ifilter, int is_simple)
if (ret < 0)
return ret;
+ if (ist->dec_ctx->codec_type == AVMEDIA_TYPE_SUBTITLE) {
+ if (!d->pkt_heartbeat) {
+ d->pkt_heartbeat = av_packet_alloc();
+ if (!d->pkt_heartbeat)
+ return AVERROR(ENOMEM);
+ }
+ ds->have_sub2video = 1;
+ }
+
return ds->sch_idx_dec;
}
@@ -1607,8 +1599,6 @@ int ifile_open(const OptionsContext *o, const char *filename, Scheduler *sch)
"since neither -readrate nor -re were given\n");
}
- d->thread_queue_size = o->thread_queue_size;
-
/* Add all the streams from the given input file to the demuxer */
for (int i = 0; i < ic->nb_streams; i++) {
ret = ist_add(o, d, ic->streams[i]);
diff --git a/fftools/ffmpeg_enc.c b/fftools/ffmpeg_enc.c
index 9871381c0e..9383b167f7 100644
--- a/fftools/ffmpeg_enc.c
+++ b/fftools/ffmpeg_enc.c
@@ -41,12 +41,6 @@
#include "libavformat/avformat.h"
struct Encoder {
- AVFrame *sq_frame;
-
- // packet for receiving encoded output
- AVPacket *pkt;
- AVFrame *sub_frame;
-
// combined size of all the packets received from the encoder
uint64_t data_size;
@@ -54,25 +48,9 @@ struct Encoder {
uint64_t packets_encoded;
int opened;
- int finished;
Scheduler *sch;
unsigned sch_idx;
-
- pthread_t thread;
- /**
- * Queue for sending frames from the main thread to
- * the encoder thread.
- */
- ThreadQueue *queue_in;
- /**
- * Queue for sending encoded packets from the encoder thread
- * to the main thread.
- *
- * An empty packet is sent to signal that a previously sent
- * frame has been fully processed.
- */
- ThreadQueue *queue_out;
};
// data that is local to the decoder thread and not visible outside of it
@@ -81,24 +59,6 @@ typedef struct EncoderThread {
AVPacket *pkt;
} EncoderThread;
-static int enc_thread_stop(Encoder *e)
-{
- void *ret;
-
- if (!e->queue_in)
- return 0;
-
- tq_send_finish(e->queue_in, 0);
- tq_receive_finish(e->queue_out, 0);
-
- pthread_join(e->thread, &ret);
-
- tq_free(&e->queue_in);
- tq_free(&e->queue_out);
-
- return (int)(intptr_t)ret;
-}
-
void enc_free(Encoder **penc)
{
Encoder *enc = *penc;
@@ -106,13 +66,6 @@ void enc_free(Encoder **penc)
if (!enc)
return;
- enc_thread_stop(enc);
-
- av_frame_free(&enc->sq_frame);
- av_frame_free(&enc->sub_frame);
-
- av_packet_free(&enc->pkt);
-
av_freep(penc);
}
@@ -127,25 +80,12 @@ int enc_alloc(Encoder **penc, const AVCodec *codec,
if (!enc)
return AVERROR(ENOMEM);
- if (codec->type == AVMEDIA_TYPE_SUBTITLE) {
- enc->sub_frame = av_frame_alloc();
- if (!enc->sub_frame)
- goto fail;
- }
-
- enc->pkt = av_packet_alloc();
- if (!enc->pkt)
- goto fail;
-
enc->sch = sch;
enc->sch_idx = sch_idx;
*penc = enc;
return 0;
-fail:
- enc_free(&enc);
- return AVERROR(ENOMEM);
}
static int hw_device_setup_for_encode(OutputStream *ost, AVBufferRef *frames_ref)
@@ -224,52 +164,9 @@ static int set_encoder_id(OutputFile *of, OutputStream *ost)
return 0;
}
-static int enc_thread_start(OutputStream *ost)
-{
- Encoder *e = ost->enc;
- ObjPool *op;
- int ret = 0;
-
- op = objpool_alloc_frames();
- if (!op)
- return AVERROR(ENOMEM);
-
- e->queue_in = tq_alloc(1, 1, op, frame_move);
- if (!e->queue_in) {
- objpool_free(&op);
- return AVERROR(ENOMEM);
- }
-
- op = objpool_alloc_packets();
- if (!op)
- goto fail;
-
- e->queue_out = tq_alloc(1, 4, op, pkt_move);
- if (!e->queue_out) {
- objpool_free(&op);
- goto fail;
- }
-
- ret = pthread_create(&e->thread, NULL, encoder_thread, ost);
- if (ret) {
- ret = AVERROR(ret);
- av_log(ost, AV_LOG_ERROR, "pthread_create() failed: %s\n",
- av_err2str(ret));
- goto fail;
- }
-
- return 0;
-fail:
- if (ret >= 0)
- ret = AVERROR(ENOMEM);
-
- tq_free(&e->queue_in);
- tq_free(&e->queue_out);
- return ret;
-}
-
-int enc_open(OutputStream *ost, const AVFrame *frame)
+int enc_open(void *opaque, const AVFrame *frame)
{
+ OutputStream *ost = opaque;
InputStream *ist = ost->ist;
Encoder *e = ost->enc;
AVCodecContext *enc_ctx = ost->enc_ctx;
@@ -277,6 +174,7 @@ int enc_open(OutputStream *ost, const AVFrame *frame)
const AVCodec *enc = enc_ctx->codec;
OutputFile *of = output_files[ost->file_index];
FrameData *fd;
+ int frame_samples = 0;
int ret;
if (e->opened)
@@ -420,17 +318,8 @@ int enc_open(OutputStream *ost, const AVFrame *frame)
e->opened = 1;
- if (ost->sq_idx_encode >= 0) {
- e->sq_frame = av_frame_alloc();
- if (!e->sq_frame)
- return AVERROR(ENOMEM);
- }
-
- if (ost->enc_ctx->frame_size) {
- av_assert0(ost->sq_idx_encode >= 0);
- sq_frame_samples(output_files[ost->file_index]->sq_encode,
- ost->sq_idx_encode, ost->enc_ctx->frame_size);
- }
+ if (ost->enc_ctx->frame_size)
+ frame_samples = ost->enc_ctx->frame_size;
ret = check_avoptions(ost->encoder_opts);
if (ret < 0)
@@ -476,18 +365,11 @@ int enc_open(OutputStream *ost, const AVFrame *frame)
if (ost->st->time_base.num <= 0 || ost->st->time_base.den <= 0)
ost->st->time_base = av_add_q(ost->enc_ctx->time_base, (AVRational){0, 1});
- ret = enc_thread_start(ost);
- if (ret < 0) {
- av_log(ost, AV_LOG_ERROR, "Error starting encoder thread: %s\n",
- av_err2str(ret));
- return ret;
- }
-
ret = of_stream_init(of, ost);
if (ret < 0)
return ret;
- return 0;
+ return frame_samples;
}
static int check_recording_time(OutputStream *ost, int64_t ts, AVRational tb)
@@ -514,8 +396,7 @@ static int do_subtitle_out(OutputFile *of, OutputStream *ost, const AVSubtitle *
av_log(ost, AV_LOG_ERROR, "Subtitle packets must have a pts\n");
return exit_on_error ? AVERROR(EINVAL) : 0;
}
- if (ost->finished ||
- (of->start_time != AV_NOPTS_VALUE && sub->pts < of->start_time))
+ if ((of->start_time != AV_NOPTS_VALUE && sub->pts < of->start_time))
return 0;
enc = ost->enc_ctx;
@@ -579,7 +460,7 @@ static int do_subtitle_out(OutputFile *of, OutputStream *ost, const AVSubtitle *
}
pkt->dts = pkt->pts;
- ret = tq_send(e->queue_out, 0, pkt);
+ ret = sch_enc_send(e->sch, e->sch_idx, pkt);
if (ret < 0) {
av_packet_unref(pkt);
return ret;
@@ -671,10 +552,13 @@ static int update_video_stats(OutputStream *ost, const AVPacket *pkt, int write_
int64_t frame_number;
double ti1, bitrate, avg_bitrate;
double psnr_val = -1;
+ int quality;
- ost->quality = sd ? AV_RL32(sd) : -1;
+ quality = sd ? AV_RL32(sd) : -1;
pict_type = sd ? sd[4] : AV_PICTURE_TYPE_NONE;
+ atomic_store(&ost->quality, quality);
+
if ((enc->flags & AV_CODEC_FLAG_PSNR) && sd && sd[5]) {
// FIXME the scaling assumes 8bit
double error = AV_RL64(sd + 8) / (enc->width * enc->height * 255.0 * 255.0);
@@ -697,10 +581,10 @@ static int update_video_stats(OutputStream *ost, const AVPacket *pkt, int write_
frame_number = e->packets_encoded;
if (vstats_version <= 1) {
fprintf(vstats_file, "frame= %5"PRId64" q= %2.1f ", frame_number,
- ost->quality / (float)FF_QP2LAMBDA);
+ quality / (float)FF_QP2LAMBDA);
} else {
fprintf(vstats_file, "out= %2d st= %2d frame= %5"PRId64" q= %2.1f ", ost->file_index, ost->index, frame_number,
- ost->quality / (float)FF_QP2LAMBDA);
+ quality / (float)FF_QP2LAMBDA);
}
if (psnr_val >= 0)
@@ -801,18 +685,11 @@ static int encode_frame(OutputFile *of, OutputStream *ost, AVFrame *frame,
av_ts2str(pkt->duration), av_ts2timestr(pkt->duration, &enc->time_base));
}
- if ((ret = trigger_fix_sub_duration_heartbeat(ost, pkt)) < 0) {
- av_log(NULL, AV_LOG_ERROR,
- "Subtitle heartbeat logic failed in %s! (%s)\n",
- __func__, av_err2str(ret));
- return ret;
- }
-
e->data_size += pkt->size;
e->packets_encoded++;
- ret = tq_send(e->queue_out, 0, pkt);
+ ret = sch_enc_send(e->sch, e->sch_idx, pkt);
if (ret < 0) {
av_packet_unref(pkt);
return ret;
@@ -822,50 +699,6 @@ static int encode_frame(OutputFile *of, OutputStream *ost, AVFrame *frame,
av_assert0(0);
}
-static int submit_encode_frame(OutputFile *of, OutputStream *ost,
- AVFrame *frame, AVPacket *pkt)
-{
- Encoder *e = ost->enc;
- int ret;
-
- if (ost->sq_idx_encode < 0)
- return encode_frame(of, ost, frame, pkt);
-
- if (frame) {
- ret = av_frame_ref(e->sq_frame, frame);
- if (ret < 0)
- return ret;
- frame = e->sq_frame;
- }
-
- ret = sq_send(of->sq_encode, ost->sq_idx_encode,
- SQFRAME(frame));
- if (ret < 0) {
- if (frame)
- av_frame_unref(frame);
- if (ret != AVERROR_EOF)
- return ret;
- }
-
- while (1) {
- AVFrame *enc_frame = e->sq_frame;
-
- ret = sq_receive(of->sq_encode, ost->sq_idx_encode,
- SQFRAME(enc_frame));
- if (ret == AVERROR_EOF) {
- enc_frame = NULL;
- } else if (ret < 0) {
- return (ret == AVERROR(EAGAIN)) ? 0 : ret;
- }
-
- ret = encode_frame(of, ost, enc_frame, pkt);
- if (enc_frame)
- av_frame_unref(enc_frame);
- if (ret < 0)
- return ret;
- }
-}
-
static int do_audio_out(OutputFile *of, OutputStream *ost,
AVFrame *frame, AVPacket *pkt)
{
@@ -881,7 +714,7 @@ static int do_audio_out(OutputFile *of, OutputStream *ost,
if (!check_recording_time(ost, frame->pts, frame->time_base))
return AVERROR_EOF;
- return submit_encode_frame(of, ost, frame, pkt);
+ return encode_frame(of, ost, frame, pkt);
}
static enum AVPictureType forced_kf_apply(void *logctx, KeyframeForceCtx *kf,
@@ -949,7 +782,7 @@ static int do_video_out(OutputFile *of, OutputStream *ost,
}
#endif
- return submit_encode_frame(of, ost, in_picture, pkt);
+ return encode_frame(of, ost, in_picture, pkt);
}
static int frame_encode(OutputStream *ost, AVFrame *frame, AVPacket *pkt)
@@ -958,9 +791,12 @@ static int frame_encode(OutputStream *ost, AVFrame *frame, AVPacket *pkt)
enum AVMediaType type = ost->type;
if (type == AVMEDIA_TYPE_SUBTITLE) {
+ const AVSubtitle *subtitle = frame && frame->buf[0] ?
+ (AVSubtitle*)frame->buf[0]->data : NULL;
+
// no flushing for subtitles
- return frame ?
- do_subtitle_out(of, ost, (AVSubtitle*)frame->buf[0]->data, pkt) : 0;
+ return subtitle && subtitle->num_rects ?
+ do_subtitle_out(of, ost, subtitle, pkt) : 0;
}
if (frame) {
@@ -968,7 +804,7 @@ static int frame_encode(OutputStream *ost, AVFrame *frame, AVPacket *pkt)
do_audio_out(of, ost, frame, pkt);
}
- return submit_encode_frame(of, ost, NULL, pkt);
+ return encode_frame(of, ost, NULL, pkt);
}
static void enc_thread_set_name(const OutputStream *ost)
@@ -1009,24 +845,50 @@ fail:
void *encoder_thread(void *arg)
{
OutputStream *ost = arg;
- OutputFile *of = output_files[ost->file_index];
Encoder *e = ost->enc;
EncoderThread et;
int ret = 0, input_status = 0;
+ int name_set = 0;
ret = enc_thread_init(&et);
if (ret < 0)
goto finish;
- enc_thread_set_name(ost);
+ /* Open the subtitle encoders immediately. AVFrame-based encoders
+ * are opened through a callback from the scheduler once they get
+ * their first frame
+ *
+ * N.B.: because the callback is called from a different thread,
+ * enc_ctx MUST NOT be accessed before sch_enc_receive() returns
+ * for the first time for audio/video. */
+ if (ost->type != AVMEDIA_TYPE_VIDEO && ost->type != AVMEDIA_TYPE_AUDIO) {
+ ret = enc_open(ost, NULL);
+ if (ret < 0)
+ goto finish;
+ }
while (!input_status) {
- int dummy;
-
- input_status = tq_receive(e->queue_in, &dummy, et.frame);
- if (input_status < 0)
+ input_status = sch_enc_receive(e->sch, e->sch_idx, et.frame);
+ if (input_status == AVERROR_EOF) {
av_log(ost, AV_LOG_VERBOSE, "Encoder thread received EOF\n");
+ if (!e->opened) {
+ av_log(ost, AV_LOG_ERROR, "Could not open encoder before EOF\n");
+ ret = AVERROR(EINVAL);
+ goto finish;
+ }
+ } else if (input_status < 0) {
+ ret = input_status;
+ av_log(ost, AV_LOG_ERROR, "Error receiving a frame for encoding: %s\n",
+ av_err2str(ret));
+ goto finish;
+ }
+
+ if (!name_set) {
+ enc_thread_set_name(ost);
+ name_set = 1;
+ }
+
ret = frame_encode(ost, input_status >= 0 ? et.frame : NULL, et.pkt);
av_packet_unref(et.pkt);
@@ -1040,15 +902,6 @@ void *encoder_thread(void *arg)
av_err2str(ret));
break;
}
-
- // signal to the consumer thread that the frame was encoded
- ret = tq_send(e->queue_out, 0, et.pkt);
- if (ret < 0) {
- if (ret != AVERROR_EOF)
- av_log(ost, AV_LOG_ERROR,
- "Error communicating with the main thread\n");
- break;
- }
}
// EOF is normal thread termination
@@ -1056,118 +909,7 @@ void *encoder_thread(void *arg)
ret = 0;
finish:
- if (ost->sq_idx_encode >= 0)
- sq_send(of->sq_encode, ost->sq_idx_encode, SQFRAME(NULL));
-
- tq_receive_finish(e->queue_in, 0);
- tq_send_finish (e->queue_out, 0);
-
enc_thread_uninit(&et);
- av_log(ost, AV_LOG_VERBOSE, "Terminating encoder thread\n");
-
return (void*)(intptr_t)ret;
}
-
-int enc_frame(OutputStream *ost, AVFrame *frame)
-{
- OutputFile *of = output_files[ost->file_index];
- Encoder *e = ost->enc;
- int ret, thread_ret;
-
- ret = enc_open(ost, frame);
- if (ret < 0)
- return ret;
-
- if (!e->queue_in)
- return AVERROR_EOF;
-
- // send the frame/EOF to the encoder thread
- if (frame) {
- ret = tq_send(e->queue_in, 0, frame);
- if (ret < 0)
- goto finish;
- } else
- tq_send_finish(e->queue_in, 0);
-
- // retrieve all encoded data for the frame
- while (1) {
- int dummy;
-
- ret = tq_receive(e->queue_out, &dummy, e->pkt);
- if (ret < 0)
- break;
-
- // frame fully encoded
- if (!e->pkt->data && !e->pkt->side_data_elems)
- return 0;
-
- // process the encoded packet
- ret = of_output_packet(of, ost, e->pkt);
- if (ret < 0)
- goto finish;
- }
-
-finish:
- thread_ret = enc_thread_stop(e);
- if (thread_ret < 0) {
- av_log(ost, AV_LOG_ERROR, "Encoder thread returned error: %s\n",
- av_err2str(thread_ret));
- ret = err_merge(ret, thread_ret);
- }
-
- if (ret < 0 && ret != AVERROR_EOF)
- return ret;
-
- // signal EOF to the muxer
- return of_output_packet(of, ost, NULL);
-}
-
-int enc_subtitle(OutputFile *of, OutputStream *ost, const AVSubtitle *sub)
-{
- Encoder *e = ost->enc;
- AVFrame *f = e->sub_frame;
- int ret;
-
- // XXX the queue for transferring data to the encoder thread runs
- // on AVFrames, so we wrap AVSubtitle in an AVBufferRef and put
- // that inside the frame
- // eventually, subtitles should be switched to use AVFrames natively
- ret = subtitle_wrap_frame(f, sub, 1);
- if (ret < 0)
- return ret;
-
- ret = enc_frame(ost, f);
- av_frame_unref(f);
-
- return ret;
-}
-
-int enc_flush(void)
-{
- int ret = 0;
-
- for (OutputStream *ost = ost_iter(NULL); ost; ost = ost_iter(ost)) {
- OutputFile *of = output_files[ost->file_index];
- if (ost->sq_idx_encode >= 0)
- sq_send(of->sq_encode, ost->sq_idx_encode, SQFRAME(NULL));
- }
-
- for (OutputStream *ost = ost_iter(NULL); ost; ost = ost_iter(ost)) {
- Encoder *e = ost->enc;
- AVCodecContext *enc = ost->enc_ctx;
- int err;
-
- if (!enc || !e->opened ||
- (enc->codec_type != AVMEDIA_TYPE_VIDEO && enc->codec_type != AVMEDIA_TYPE_AUDIO))
- continue;
-
- err = enc_frame(ost, NULL);
- if (err != AVERROR_EOF && ret < 0)
- ret = err_merge(ret, err);
-
- av_assert0(!e->queue_in);
- }
-
- return ret;
-}
diff --git a/fftools/ffmpeg_filter.c b/fftools/ffmpeg_filter.c
index 635b1b0b6e..fe8787cacb 100644
--- a/fftools/ffmpeg_filter.c
+++ b/fftools/ffmpeg_filter.c
@@ -21,8 +21,6 @@
#include <stdint.h>
#include "ffmpeg.h"
-#include "ffmpeg_utils.h"
-#include "thread_queue.h"
#include "libavfilter/avfilter.h"
#include "libavfilter/buffersink.h"
@@ -53,10 +51,11 @@ typedef struct FilterGraphPriv {
// true when the filtergraph contains only meta filters
// that do not modify the frame data
int is_meta;
+ // source filters are present in the graph
+ int have_sources;
int disable_conversions;
- int nb_inputs_bound;
- int nb_outputs_bound;
+ unsigned nb_outputs_done;
const char *graph_desc;
@@ -67,41 +66,6 @@ typedef struct FilterGraphPriv {
Scheduler *sch;
unsigned sch_idx;
-
- pthread_t thread;
- /**
- * Queue for sending frames from the main thread to the filtergraph. Has
- * nb_inputs+1 streams - the first nb_inputs stream correspond to
- * filtergraph inputs. Frames on those streams may have their opaque set to
- * - FRAME_OPAQUE_EOF: frame contains no data, but pts+timebase of the
- * EOF event for the correspondint stream. Will be immediately followed by
- * this stream being send-closed.
- * - FRAME_OPAQUE_SUB_HEARTBEAT: frame contains no data, but pts+timebase of
- * a subtitle heartbeat event. Will only be sent for sub2video streams.
- *
- * The last stream is "control" - the main thread sends empty AVFrames with
- * opaque set to
- * - FRAME_OPAQUE_REAP_FILTERS: a request to retrieve all frame available
- * from filtergraph outputs. These frames are sent to corresponding
- * streams in queue_out. Finally an empty frame is sent to the control
- * stream in queue_out.
- * - FRAME_OPAQUE_CHOOSE_INPUT: same as above, but in case no frames are
- * available the terminating empty frame's opaque will contain the index+1
- * of the filtergraph input to which more input frames should be supplied.
- */
- ThreadQueue *queue_in;
- /**
- * Queue for sending frames from the filtergraph back to the main thread.
- * Has nb_outputs+1 streams - the first nb_outputs stream correspond to
- * filtergraph outputs.
- *
- * The last stream is "control" - see documentation for queue_in for more
- * details.
- */
- ThreadQueue *queue_out;
- // submitting frames to filter thread returned EOF
- // this only happens on thread exit, so is not per-input
- int eof_in;
} FilterGraphPriv;
static FilterGraphPriv *fgp_from_fg(FilterGraph *fg)
@@ -123,6 +87,9 @@ typedef struct FilterGraphThread {
// The output index is stored in frame opaque.
AVFifo *frame_queue_out;
+ // index of the next input to request from the scheduler
+ unsigned next_in;
+ // set to 1 after at least one frame passed through this output
int got_frame;
// EOF status of each input/output, as received by the thread
@@ -253,9 +220,6 @@ typedef struct OutputFilterPriv {
int64_t ts_offset;
int64_t next_pts;
FPSConvContext fps;
-
- // set to 1 after at least one frame passed through this output
- int got_frame;
} OutputFilterPriv;
static OutputFilterPriv *ofp_from_ofilter(OutputFilter *ofilter)
@@ -653,57 +617,6 @@ static int ifilter_has_all_input_formats(FilterGraph *fg)
static void *filter_thread(void *arg);
-// start the filtering thread once all inputs and outputs are bound
-static int fg_thread_try_start(FilterGraphPriv *fgp)
-{
- FilterGraph *fg = &fgp->fg;
- ObjPool *op;
- int ret = 0;
-
- if (fgp->nb_inputs_bound < fg->nb_inputs ||
- fgp->nb_outputs_bound < fg->nb_outputs)
- return 0;
-
- op = objpool_alloc_frames();
- if (!op)
- return AVERROR(ENOMEM);
-
- fgp->queue_in = tq_alloc(fg->nb_inputs + 1, 1, op, frame_move);
- if (!fgp->queue_in) {
- objpool_free(&op);
- return AVERROR(ENOMEM);
- }
-
- // at least one output is mandatory
- op = objpool_alloc_frames();
- if (!op)
- goto fail;
-
- fgp->queue_out = tq_alloc(fg->nb_outputs + 1, 1, op, frame_move);
- if (!fgp->queue_out) {
- objpool_free(&op);
- goto fail;
- }
-
- ret = pthread_create(&fgp->thread, NULL, filter_thread, fgp);
- if (ret) {
- ret = AVERROR(ret);
- av_log(NULL, AV_LOG_ERROR, "pthread_create() for filtergraph %d failed: %s\n",
- fg->index, av_err2str(ret));
- goto fail;
- }
-
- return 0;
-fail:
- if (ret >= 0)
- ret = AVERROR(ENOMEM);
-
- tq_free(&fgp->queue_in);
- tq_free(&fgp->queue_out);
-
- return ret;
-}
-
static char *describe_filter_link(FilterGraph *fg, AVFilterInOut *inout, int in)
{
AVFilterContext *ctx = inout->filter_ctx;
@@ -729,7 +642,6 @@ static OutputFilter *ofilter_alloc(FilterGraph *fg)
ofilter->graph = fg;
ofp->format = -1;
ofp->index = fg->nb_outputs - 1;
- ofilter->last_pts = AV_NOPTS_VALUE;
return ofilter;
}
@@ -760,10 +672,7 @@ static int ifilter_bind_ist(InputFilter *ifilter, InputStream *ist)
return AVERROR(ENOMEM);
}
- fgp->nb_inputs_bound++;
- av_assert0(fgp->nb_inputs_bound <= ifilter->graph->nb_inputs);
-
- return fg_thread_try_start(fgp);
+ return 0;
}
static int set_channel_layout(OutputFilterPriv *f, OutputStream *ost)
@@ -902,10 +811,7 @@ int ofilter_bind_ost(OutputFilter *ofilter, OutputStream *ost,
if (ret < 0)
return ret;
- fgp->nb_outputs_bound++;
- av_assert0(fgp->nb_outputs_bound <= fg->nb_outputs);
-
- return fg_thread_try_start(fgp);
+ return 0;
}
static InputFilter *ifilter_alloc(FilterGraph *fg)
@@ -935,34 +841,6 @@ static InputFilter *ifilter_alloc(FilterGraph *fg)
return ifilter;
}
-static int fg_thread_stop(FilterGraphPriv *fgp)
-{
- void *ret;
-
- if (!fgp->queue_in)
- return 0;
-
- for (int i = 0; i <= fgp->fg.nb_inputs; i++) {
- InputFilterPriv *ifp = i < fgp->fg.nb_inputs ?
- ifp_from_ifilter(fgp->fg.inputs[i]) : NULL;
-
- if (ifp)
- ifp->eof = 1;
-
- tq_send_finish(fgp->queue_in, i);
- }
-
- for (int i = 0; i <= fgp->fg.nb_outputs; i++)
- tq_receive_finish(fgp->queue_out, i);
-
- pthread_join(fgp->thread, &ret);
-
- tq_free(&fgp->queue_in);
- tq_free(&fgp->queue_out);
-
- return (int)(intptr_t)ret;
-}
-
void fg_free(FilterGraph **pfg)
{
FilterGraph *fg = *pfg;
@@ -972,8 +850,6 @@ void fg_free(FilterGraph **pfg)
return;
fgp = fgp_from_fg(fg);
- fg_thread_stop(fgp);
-
avfilter_graph_free(&fg->graph);
for (int j = 0; j < fg->nb_inputs; j++) {
InputFilter *ifilter = fg->inputs[j];
@@ -1072,6 +948,15 @@ int fg_create(FilterGraph **pfg, char *graph_desc, Scheduler *sch)
if (ret < 0)
goto fail;
+ for (unsigned i = 0; i < graph->nb_filters; i++) {
+ const AVFilter *f = graph->filters[i]->filter;
+ if (!avfilter_filter_pad_count(f, 0) &&
+ !(f->flags & AVFILTER_FLAG_DYNAMIC_INPUTS)) {
+ fgp->have_sources = 1;
+ break;
+ }
+ }
+
for (AVFilterInOut *cur = inputs; cur; cur = cur->next) {
InputFilter *const ifilter = ifilter_alloc(fg);
InputFilterPriv *ifp;
@@ -1800,6 +1685,7 @@ static int configure_filtergraph(FilterGraph *fg, const FilterGraphThread *fgt)
AVBufferRef *hw_device;
AVFilterInOut *inputs, *outputs, *cur;
int ret, i, simple = filtergraph_is_simple(fg);
+ int have_input_eof = 0;
const char *graph_desc = fgp->graph_desc;
cleanup_filtergraph(fg);
@@ -1922,11 +1808,18 @@ static int configure_filtergraph(FilterGraph *fg, const FilterGraphThread *fgt)
ret = av_buffersrc_add_frame(ifp->filter, NULL);
if (ret < 0)
goto fail;
+ have_input_eof = 1;
}
}
- return 0;
+ if (have_input_eof) {
+ // make sure the EOF propagates to the end of the graph
+ ret = avfilter_graph_request_oldest(fg->graph);
+ if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF)
+ goto fail;
+ }
+ return 0;
fail:
cleanup_filtergraph(fg);
return ret;
@@ -2182,7 +2075,7 @@ static void video_sync_process(OutputFilterPriv *ofp, AVFrame *frame,
fps->frames_prev_hist[2]);
if (!*nb_frames && fps->last_dropped) {
- ofilter->nb_frames_drop++;
+ atomic_fetch_add(&ofilter->nb_frames_drop, 1);
fps->last_dropped++;
}
@@ -2260,21 +2153,23 @@ finish:
fps->frames_prev_hist[0] = *nb_frames_prev;
if (*nb_frames_prev == 0 && fps->last_dropped) {
- ofilter->nb_frames_drop++;
+ atomic_fetch_add(&ofilter->nb_frames_drop, 1);
av_log(ost, AV_LOG_VERBOSE,
"*** dropping frame %"PRId64" at ts %"PRId64"\n",
fps->frame_number, fps->last_frame->pts);
}
if (*nb_frames > (*nb_frames_prev && fps->last_dropped) + (*nb_frames > *nb_frames_prev)) {
+ uint64_t nb_frames_dup;
if (*nb_frames > dts_error_threshold * 30) {
av_log(ost, AV_LOG_ERROR, "%"PRId64" frame duplication too large, skipping\n", *nb_frames - 1);
- ofilter->nb_frames_drop++;
+ atomic_fetch_add(&ofilter->nb_frames_drop, 1);
*nb_frames = 0;
return;
}
- ofilter->nb_frames_dup += *nb_frames - (*nb_frames_prev && fps->last_dropped) - (*nb_frames > *nb_frames_prev);
+ nb_frames_dup = atomic_fetch_add(&ofilter->nb_frames_dup,
+ *nb_frames - (*nb_frames_prev && fps->last_dropped) - (*nb_frames > *nb_frames_prev));
av_log(ost, AV_LOG_VERBOSE, "*** %"PRId64" dup!\n", *nb_frames - 1);
- if (ofilter->nb_frames_dup > fps->dup_warning) {
+ if (nb_frames_dup > fps->dup_warning) {
av_log(ost, AV_LOG_WARNING, "More than %"PRIu64" frames duplicated\n", fps->dup_warning);
fps->dup_warning *= 10;
}
@@ -2284,8 +2179,57 @@ finish:
fps->dropped_keyframe |= fps->last_dropped && (frame->flags & AV_FRAME_FLAG_KEY);
}
+static int close_output(OutputFilterPriv *ofp, FilterGraphThread *fgt)
+{
+ FilterGraphPriv *fgp = fgp_from_fg(ofp->ofilter.graph);
+ int ret;
+
+ // we are finished and no frames were ever seen at this output,
+ // at least initialize the encoder with a dummy frame
+ if (!fgt->got_frame) {
+ AVFrame *frame = fgt->frame;
+ FrameData *fd;
+
+ frame->time_base = ofp->tb_out;
+ frame->format = ofp->format;
+
+ frame->width = ofp->width;
+ frame->height = ofp->height;
+ frame->sample_aspect_ratio = ofp->sample_aspect_ratio;
+
+ frame->sample_rate = ofp->sample_rate;
+ if (ofp->ch_layout.nb_channels) {
+ ret = av_channel_layout_copy(&frame->ch_layout, &ofp->ch_layout);
+ if (ret < 0)
+ return ret;
+ }
+
+ fd = frame_data(frame);
+ if (!fd)
+ return AVERROR(ENOMEM);
+
+ fd->frame_rate_filter = ofp->fps.framerate;
+
+ av_assert0(!frame->buf[0]);
+
+ av_log(ofp->ofilter.ost, AV_LOG_WARNING,
+ "No filtered frames for output stream, trying to "
+ "initialize anyway.\n");
+
+ ret = sch_filter_send(fgp->sch, fgp->sch_idx, ofp->index, frame);
+ if (ret < 0) {
+ av_frame_unref(frame);
+ return ret;
+ }
+ }
+
+ fgt->eof_out[ofp->index] = 1;
+
+ return sch_filter_send(fgp->sch, fgp->sch_idx, ofp->index, NULL);
+}
+
static int fg_output_frame(OutputFilterPriv *ofp, FilterGraphThread *fgt,
- AVFrame *frame, int buffer)
+ AVFrame *frame)
{
FilterGraphPriv *fgp = fgp_from_fg(ofp->ofilter.graph);
AVFrame *frame_prev = ofp->fps.last_frame;
@@ -2332,28 +2276,17 @@ static int fg_output_frame(OutputFilterPriv *ofp, FilterGraphThread *fgt,
frame_out = frame;
}
- if (buffer) {
- AVFrame *f = av_frame_alloc();
-
- if (!f) {
- av_frame_unref(frame_out);
- return AVERROR(ENOMEM);
- }
-
- av_frame_move_ref(f, frame_out);
- f->opaque = (void*)(intptr_t)ofp->index;
-
- ret = av_fifo_write(fgt->frame_queue_out, &f, 1);
- if (ret < 0) {
- av_frame_free(&f);
- return AVERROR(ENOMEM);
- }
- } else {
- // return the frame to the main thread
- ret = tq_send(fgp->queue_out, ofp->index, frame_out);
+ {
+ // send the frame to consumers
+ ret = sch_filter_send(fgp->sch, fgp->sch_idx, ofp->index, frame_out);
if (ret < 0) {
av_frame_unref(frame_out);
- fgt->eof_out[ofp->index] = 1;
+
+ if (!fgt->eof_out[ofp->index]) {
+ fgt->eof_out[ofp->index] = 1;
+ fgp->nb_outputs_done++;
+ }
+
return ret == AVERROR_EOF ? 0 : ret;
}
}
@@ -2374,16 +2307,14 @@ static int fg_output_frame(OutputFilterPriv *ofp, FilterGraphThread *fgt,
av_frame_move_ref(frame_prev, frame);
}
- if (!frame) {
- tq_send_finish(fgp->queue_out, ofp->index);
- fgt->eof_out[ofp->index] = 1;
- }
+ if (!frame)
+ return close_output(ofp, fgt);
return 0;
}
static int fg_output_step(OutputFilterPriv *ofp, FilterGraphThread *fgt,
- AVFrame *frame, int buffer)
+ AVFrame *frame)
{
FilterGraphPriv *fgp = fgp_from_fg(ofp->ofilter.graph);
OutputStream *ost = ofp->ofilter.ost;
@@ -2393,8 +2324,8 @@ static int fg_output_step(OutputFilterPriv *ofp, FilterGraphThread *fgt,
ret = av_buffersink_get_frame_flags(filter, frame,
AV_BUFFERSINK_FLAG_NO_REQUEST);
- if (ret == AVERROR_EOF && !buffer && !fgt->eof_out[ofp->index]) {
- ret = fg_output_frame(ofp, fgt, NULL, buffer);
+ if (ret == AVERROR_EOF && !fgt->eof_out[ofp->index]) {
+ ret = fg_output_frame(ofp, fgt, NULL);
return (ret < 0) ? ret : 1;
} else if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
return 1;
@@ -2448,7 +2379,7 @@ static int fg_output_step(OutputFilterPriv *ofp, FilterGraphThread *fgt,
fd->frame_rate_filter = ofp->fps.framerate;
}
- ret = fg_output_frame(ofp, fgt, frame, buffer);
+ ret = fg_output_frame(ofp, fgt, frame);
av_frame_unref(frame);
if (ret < 0)
return ret;
@@ -2456,44 +2387,68 @@ static int fg_output_step(OutputFilterPriv *ofp, FilterGraphThread *fgt,
return 0;
}
-/* retrieve all frames available at filtergraph outputs and either send them to
- * the main thread (buffer=0) or buffer them for later (buffer=1) */
+/* retrieve all frames available at filtergraph outputs
+ * and send them to consumers */
static int read_frames(FilterGraph *fg, FilterGraphThread *fgt,
- AVFrame *frame, int buffer)
+ AVFrame *frame)
{
FilterGraphPriv *fgp = fgp_from_fg(fg);
- int ret = 0;
+ int did_step = 0;
- if (!fg->graph)
- return 0;
-
- // process buffered frames
- if (!buffer) {
- AVFrame *f;
-
- while (av_fifo_read(fgt->frame_queue_out, &f, 1) >= 0) {
- int out_idx = (intptr_t)f->opaque;
- f->opaque = NULL;
- ret = tq_send(fgp->queue_out, out_idx, f);
- av_frame_free(&f);
- if (ret < 0 && ret != AVERROR_EOF)
- return ret;
+ // graph not configured, just select the input to request
+ if (!fg->graph) {
+ for (int i = 0; i < fg->nb_inputs; i++) {
+ InputFilterPriv *ifp = ifp_from_ifilter(fg->inputs[i]);
+ if (ifp->format < 0 && !fgt->eof_in[i]) {
+ fgt->next_in = i;
+ return 0;
+ }
}
+
+ // This state - graph is not configured, but all inputs are either
+ // initialized or EOF - should be unreachable because sending EOF to a
+ // filter without even a fallback format should fail
+ av_assert0(0);
+ return AVERROR_BUG;
}
- /* Reap all buffers present in the buffer sinks */
- for (int i = 0; i < fg->nb_outputs; i++) {
- OutputFilterPriv *ofp = ofp_from_ofilter(fg->outputs[i]);
- int ret = 0;
+ while (1) {
+ int ret;
- while (!ret) {
- ret = fg_output_step(ofp, fgt, frame, buffer);
- if (ret < 0)
- return ret;
+ ret = avfilter_graph_request_oldest(fg->graph);
+ if (ret == AVERROR(EAGAIN)) {
+ fgt->next_in = choose_input(fg, fgt);
+ break;
+ } else if (ret < 0) {
+ if (ret == AVERROR_EOF)
+ av_log(fg, AV_LOG_VERBOSE, "Filtergraph returned EOF, finishing\n");
+ else
+ av_log(fg, AV_LOG_ERROR,
+ "Error requesting a frame from the filtergraph: %s\n",
+ av_err2str(ret));
+ return ret;
}
- }
+ fgt->next_in = fg->nb_inputs;
- return 0;
+ // return after one iteration, so that the scheduler can rate-control us
+ if (did_step && fgp->have_sources)
+ return 0;
+
+ /* Reap all buffers present in the buffer sinks */
+ for (int i = 0; i < fg->nb_outputs; i++) {
+ OutputFilterPriv *ofp = ofp_from_ofilter(fg->outputs[i]);
+
+ ret = 0;
+ while (!ret) {
+ ret = fg_output_step(ofp, fgt, frame);
+ if (ret < 0)
+ return ret;
+ }
+ }
+ did_step = 1;
+ }
+
+ return (fgp->nb_outputs_done == fg->nb_outputs) ? AVERROR_EOF : 0;
}
static void sub2video_heartbeat(InputFilter *ifilter, int64_t pts, AVRational tb)
@@ -2571,6 +2526,9 @@ static int send_eof(FilterGraphThread *fgt, InputFilter *ifilter,
InputFilterPriv *ifp = ifp_from_ifilter(ifilter);
int ret;
+ if (fgt->eof_in[ifp->index])
+ return 0;
+
fgt->eof_in[ifp->index] = 1;
if (ifp->filter) {
@@ -2672,7 +2630,7 @@ static int send_frame(FilterGraph *fg, FilterGraphThread *fgt,
return ret;
}
- ret = fg->graph ? read_frames(fg, fgt, tmp, 1) : 0;
+ ret = fg->graph ? read_frames(fg, fgt, tmp) : 0;
av_frame_free(&tmp);
if (ret < 0)
return ret;
@@ -2705,82 +2663,6 @@ static int send_frame(FilterGraph *fg, FilterGraphThread *fgt,
return 0;
}
-static int msg_process(FilterGraphPriv *fgp, FilterGraphThread *fgt,
- AVFrame *frame)
-{
- const enum FrameOpaque msg = (intptr_t)frame->opaque;
- FilterGraph *fg = &fgp->fg;
- int graph_eof = 0;
- int ret;
-
- frame->opaque = NULL;
- av_assert0(msg > 0);
- av_assert0(msg == FRAME_OPAQUE_SEND_COMMAND || !frame->buf[0]);
-
- if (!fg->graph) {
- // graph not configured yet, ignore all messages other than choosing
- // the input to read from
- if (msg != FRAME_OPAQUE_CHOOSE_INPUT) {
- av_frame_unref(frame);
- goto done;
- }
-
- for (int i = 0; i < fg->nb_inputs; i++) {
- InputFilter *ifilter = fg->inputs[i];
- InputFilterPriv *ifp = ifp_from_ifilter(ifilter);
- if (ifp->format < 0 && !fgt->eof_in[i]) {
- frame->opaque = (void*)(intptr_t)(i + 1);
- goto done;
- }
- }
-
- // This state - graph is not configured, but all inputs are either
- // initialized or EOF - should be unreachable because sending EOF to a
- // filter without even a fallback format should fail
- av_assert0(0);
- return AVERROR_BUG;
- }
-
- if (msg == FRAME_OPAQUE_SEND_COMMAND) {
- FilterCommand *fc = (FilterCommand*)frame->buf[0]->data;
- send_command(fg, fc->time, fc->target, fc->command, fc->arg, fc->all_filters);
- av_frame_unref(frame);
- goto done;
- }
-
- if (msg == FRAME_OPAQUE_CHOOSE_INPUT) {
- ret = avfilter_graph_request_oldest(fg->graph);
-
- graph_eof = ret == AVERROR_EOF;
-
- if (ret == AVERROR(EAGAIN)) {
- frame->opaque = (void*)(intptr_t)(choose_input(fg, fgt) + 1);
- goto done;
- } else if (ret < 0 && !graph_eof)
- return ret;
- }
-
- ret = read_frames(fg, fgt, frame, 0);
- if (ret < 0) {
- av_log(fg, AV_LOG_ERROR, "Error sending filtered frames for encoding\n");
- return ret;
- }
-
- if (graph_eof)
- return AVERROR_EOF;
-
- // signal to the main thread that we are done processing the message
-done:
- ret = tq_send(fgp->queue_out, fg->nb_outputs, frame);
- if (ret < 0) {
- if (ret != AVERROR_EOF)
- av_log(fg, AV_LOG_ERROR, "Error communicating with the main thread\n");
- return ret;
- }
-
- return 0;
-}
-
static void fg_thread_set_name(const FilterGraph *fg)
{
char name[16];
@@ -2867,294 +2749,94 @@ static void *filter_thread(void *arg)
InputFilter *ifilter;
InputFilterPriv *ifp;
enum FrameOpaque o;
- int input_idx, eof_frame;
+ unsigned input_idx = fgt.next_in;
- input_status = tq_receive(fgp->queue_in, &input_idx, fgt.frame);
- if (input_idx < 0 ||
- (input_idx == fg->nb_inputs && input_status < 0)) {
+ input_status = sch_filter_receive(fgp->sch, fgp->sch_idx,
+ &input_idx, fgt.frame);
+ if (input_status == AVERROR_EOF) {
av_log(fg, AV_LOG_VERBOSE, "Filtering thread received EOF\n");
break;
+ } else if (input_status == AVERROR(EAGAIN)) {
+ // should only happen when we didn't request any input
+ av_assert0(input_idx == fg->nb_inputs);
+ goto read_frames;
}
+ av_assert0(input_status >= 0);
+
o = (intptr_t)fgt.frame->opaque;
// message on the control stream
if (input_idx == fg->nb_inputs) {
- ret = msg_process(fgp, &fgt, fgt.frame);
- if (ret < 0)
- goto finish;
+ FilterCommand *fc;
+ av_assert0(o == FRAME_OPAQUE_SEND_COMMAND && fgt.frame->buf[0]);
+
+ fc = (FilterCommand*)fgt.frame->buf[0]->data;
+ send_command(fg, fc->time, fc->target, fc->command, fc->arg,
+ fc->all_filters);
+ av_frame_unref(fgt.frame);
continue;
}
// we received an input frame or EOF
ifilter = fg->inputs[input_idx];
ifp = ifp_from_ifilter(ifilter);
- eof_frame = input_status >= 0 && o == FRAME_OPAQUE_EOF;
+
if (ifp->type_src == AVMEDIA_TYPE_SUBTITLE) {
int hb_frame = input_status >= 0 && o == FRAME_OPAQUE_SUB_HEARTBEAT;
ret = sub2video_frame(ifilter, (fgt.frame->buf[0] || hb_frame) ? fgt.frame : NULL);
- } else if (input_status >= 0 && fgt.frame->buf[0]) {
+ } else if (fgt.frame->buf[0]) {
ret = send_frame(fg, &fgt, ifilter, fgt.frame);
} else {
- int64_t pts = input_status >= 0 ? fgt.frame->pts : AV_NOPTS_VALUE;
- AVRational tb = input_status >= 0 ? fgt.frame->time_base : (AVRational){ 1, 1 };
- ret = send_eof(&fgt, ifilter, pts, tb);
+ av_assert1(o == FRAME_OPAQUE_EOF);
+ ret = send_eof(&fgt, ifilter, fgt.frame->pts, fgt.frame->time_base);
}
av_frame_unref(fgt.frame);
if (ret < 0)
+ goto finish;
+
+read_frames:
+ // retrieve all newly available frames
+ ret = read_frames(fg, &fgt, fgt.frame);
+ if (ret == AVERROR_EOF) {
+ av_log(fg, AV_LOG_VERBOSE, "All consumers returned EOF\n");
break;
-
- if (eof_frame) {
- // an EOF frame is immediately followed by sender closing
- // the corresponding stream, so retrieve that event
- input_status = tq_receive(fgp->queue_in, &input_idx, fgt.frame);
- av_assert0(input_status == AVERROR_EOF && input_idx == ifp->index);
- }
-
- // signal to the main thread that we are done
- ret = tq_send(fgp->queue_out, fg->nb_outputs, fgt.frame);
- if (ret < 0) {
- if (ret == AVERROR_EOF)
- break;
-
- av_log(fg, AV_LOG_ERROR, "Error communicating with the main thread\n");
+ } else if (ret < 0) {
+ av_log(fg, AV_LOG_ERROR, "Error sending frames to consumers: %s\n",
+ av_err2str(ret));
goto finish;
}
}
+ for (unsigned i = 0; i < fg->nb_outputs; i++) {
+ OutputFilterPriv *ofp = ofp_from_ofilter(fg->outputs[i]);
+
+ if (fgt.eof_out[i])
+ continue;
+
+ ret = fg_output_frame(ofp, &fgt, NULL);
+ if (ret < 0)
+ goto finish;
+ }
+
finish:
// EOF is normal termination
if (ret == AVERROR_EOF)
ret = 0;
- for (int i = 0; i <= fg->nb_inputs; i++)
- tq_receive_finish(fgp->queue_in, i);
- for (int i = 0; i <= fg->nb_outputs; i++)
- tq_send_finish(fgp->queue_out, i);
-
fg_thread_uninit(&fgt);
- av_log(fg, AV_LOG_VERBOSE, "Terminating filtering thread\n");
-
return (void*)(intptr_t)ret;
}
-static int thread_send_frame(FilterGraphPriv *fgp, InputFilter *ifilter,
- AVFrame *frame, enum FrameOpaque type)
-{
- InputFilterPriv *ifp = ifp_from_ifilter(ifilter);
- int output_idx, ret;
-
- if (ifp->eof) {
- av_frame_unref(frame);
- return AVERROR_EOF;
- }
-
- frame->opaque = (void*)(intptr_t)type;
-
- ret = tq_send(fgp->queue_in, ifp->index, frame);
- if (ret < 0) {
- ifp->eof = 1;
- av_frame_unref(frame);
- return ret;
- }
-
- if (type == FRAME_OPAQUE_EOF)
- tq_send_finish(fgp->queue_in, ifp->index);
-
- // wait for the frame to be processed
- ret = tq_receive(fgp->queue_out, &output_idx, frame);
- av_assert0(output_idx == fgp->fg.nb_outputs || ret == AVERROR_EOF);
-
- return ret;
-}
-
-int ifilter_send_frame(InputFilter *ifilter, AVFrame *frame, int keep_reference)
-{
- FilterGraphPriv *fgp = fgp_from_fg(ifilter->graph);
- int ret;
-
- if (keep_reference) {
- ret = av_frame_ref(fgp->frame, frame);
- if (ret < 0)
- return ret;
- } else
- av_frame_move_ref(fgp->frame, frame);
-
- return thread_send_frame(fgp, ifilter, fgp->frame, 0);
-}
-
-int ifilter_send_eof(InputFilter *ifilter, int64_t pts, AVRational tb)
-{
- FilterGraphPriv *fgp = fgp_from_fg(ifilter->graph);
- int ret;
-
- fgp->frame->pts = pts;
- fgp->frame->time_base = tb;
-
- ret = thread_send_frame(fgp, ifilter, fgp->frame, FRAME_OPAQUE_EOF);
-
- return ret == AVERROR_EOF ? 0 : ret;
-}
-
-void ifilter_sub2video_heartbeat(InputFilter *ifilter, int64_t pts, AVRational tb)
-{
- FilterGraphPriv *fgp = fgp_from_fg(ifilter->graph);
-
- fgp->frame->pts = pts;
- fgp->frame->time_base = tb;
-
- thread_send_frame(fgp, ifilter, fgp->frame, FRAME_OPAQUE_SUB_HEARTBEAT);
-}
-
-int fg_transcode_step(FilterGraph *graph, InputStream **best_ist)
-{
- FilterGraphPriv *fgp = fgp_from_fg(graph);
- int ret, got_frames = 0;
-
- if (fgp->eof_in)
- return AVERROR_EOF;
-
- // signal to the filtering thread to return all frames it can
- av_assert0(!fgp->frame->buf[0]);
- fgp->frame->opaque = (void*)(intptr_t)(best_ist ?
- FRAME_OPAQUE_CHOOSE_INPUT :
- FRAME_OPAQUE_REAP_FILTERS);
-
- ret = tq_send(fgp->queue_in, graph->nb_inputs, fgp->frame);
- if (ret < 0) {
- fgp->eof_in = 1;
- goto finish;
- }
-
- while (1) {
- OutputFilter *ofilter;
- OutputFilterPriv *ofp;
- OutputStream *ost;
- int output_idx;
-
- ret = tq_receive(fgp->queue_out, &output_idx, fgp->frame);
-
- // EOF on the whole queue or the control stream
- if (output_idx < 0 ||
- (ret < 0 && output_idx == graph->nb_outputs))
- goto finish;
-
- // EOF for a specific stream
- if (ret < 0) {
- ofilter = graph->outputs[output_idx];
- ofp = ofp_from_ofilter(ofilter);
-
- // we are finished and no frames were ever seen at this output,
- // at least initialize the encoder with a dummy frame
- if (!ofp->got_frame) {
- AVFrame *frame = fgp->frame;
- FrameData *fd;
-
- frame->time_base = ofp->tb_out;
- frame->format = ofp->format;
-
- frame->width = ofp->width;
- frame->height = ofp->height;
- frame->sample_aspect_ratio = ofp->sample_aspect_ratio;
-
- frame->sample_rate = ofp->sample_rate;
- if (ofp->ch_layout.nb_channels) {
- ret = av_channel_layout_copy(&frame->ch_layout, &ofp->ch_layout);
- if (ret < 0)
- return ret;
- }
-
- fd = frame_data(frame);
- if (!fd)
- return AVERROR(ENOMEM);
-
- fd->frame_rate_filter = ofp->fps.framerate;
-
- av_assert0(!frame->buf[0]);
-
- av_log(ofilter->ost, AV_LOG_WARNING,
- "No filtered frames for output stream, trying to "
- "initialize anyway.\n");
-
- enc_open(ofilter->ost, frame);
- av_frame_unref(frame);
- }
-
- close_output_stream(graph->outputs[output_idx]->ost);
- continue;
- }
-
- // request was fully processed by the filtering thread,
- // return the input stream to read from, if needed
- if (output_idx == graph->nb_outputs) {
- int input_idx = (intptr_t)fgp->frame->opaque - 1;
- av_assert0(input_idx <= graph->nb_inputs);
-
- if (best_ist) {
- *best_ist = (input_idx >= 0 && input_idx < graph->nb_inputs) ?
- ifp_from_ifilter(graph->inputs[input_idx])->ist : NULL;
-
- if (input_idx < 0 && !got_frames) {
- for (int i = 0; i < graph->nb_outputs; i++)
- graph->outputs[i]->ost->unavailable = 1;
- }
- }
- break;
- }
-
- // got a frame from the filtering thread, send it for encoding
- ofilter = graph->outputs[output_idx];
- ost = ofilter->ost;
- ofp = ofp_from_ofilter(ofilter);
-
- if (ost->finished) {
- av_frame_unref(fgp->frame);
- tq_receive_finish(fgp->queue_out, output_idx);
- continue;
- }
-
- if (fgp->frame->pts != AV_NOPTS_VALUE) {
- ofilter->last_pts = av_rescale_q(fgp->frame->pts,
- fgp->frame->time_base,
- AV_TIME_BASE_Q);
- }
-
- ret = enc_frame(ost, fgp->frame);
- av_frame_unref(fgp->frame);
- if (ret < 0)
- goto finish;
-
- ofp->got_frame = 1;
- got_frames = 1;
- }
-
-finish:
- if (ret < 0) {
- fgp->eof_in = 1;
- for (int i = 0; i < graph->nb_outputs; i++)
- close_output_stream(graph->outputs[i]->ost);
- }
-
- return ret;
-}
-
-int reap_filters(FilterGraph *fg, int flush)
-{
- return fg_transcode_step(fg, NULL);
-}
-
void fg_send_command(FilterGraph *fg, double time, const char *target,
const char *command, const char *arg, int all_filters)
{
FilterGraphPriv *fgp = fgp_from_fg(fg);
AVBufferRef *buf;
FilterCommand *fc;
- int output_idx, ret;
-
- if (!fgp->queue_in)
- return;
fc = av_mallocz(sizeof(*fc));
if (!fc)
@@ -3180,13 +2862,5 @@ void fg_send_command(FilterGraph *fg, double time, const char *target,
fgp->frame->buf[0] = buf;
fgp->frame->opaque = (void*)(intptr_t)FRAME_OPAQUE_SEND_COMMAND;
- ret = tq_send(fgp->queue_in, fg->nb_inputs, fgp->frame);
- if (ret < 0) {
- av_frame_unref(fgp->frame);
- return;
- }
-
- // wait for the frame to be processed
- ret = tq_receive(fgp->queue_out, &output_idx, fgp->frame);
- av_assert0(output_idx == fgp->fg.nb_outputs || ret == AVERROR_EOF);
+ sch_filter_command(fgp->sch, fgp->sch_idx, fgp->frame);
}
diff --git a/fftools/ffmpeg_mux.c b/fftools/ffmpeg_mux.c
index ef5c2f60e0..067dc65d4e 100644
--- a/fftools/ffmpeg_mux.c
+++ b/fftools/ffmpeg_mux.c
@@ -23,16 +23,13 @@
#include "ffmpeg.h"
#include "ffmpeg_mux.h"
#include "ffmpeg_utils.h"
-#include "objpool.h"
#include "sync_queue.h"
-#include "thread_queue.h"
#include "libavutil/fifo.h"
#include "libavutil/intreadwrite.h"
#include "libavutil/log.h"
#include "libavutil/mem.h"
#include "libavutil/timestamp.h"
-#include "libavutil/thread.h"
#include "libavcodec/packet.h"
@@ -41,10 +38,9 @@
typedef struct MuxThreadContext {
AVPacket *pkt;
+ AVPacket *fix_sub_duration_pkt;
} MuxThreadContext;
-int want_sdp = 1;
-
static Muxer *mux_from_of(OutputFile *of)
{
return (Muxer*)of;
@@ -207,14 +203,41 @@ static int sync_queue_process(Muxer *mux, OutputStream *ost, AVPacket *pkt, int
return 0;
}
+static int of_streamcopy(OutputStream *ost, AVPacket *pkt);
+
/* apply the output bitstream filters */
-static int mux_packet_filter(Muxer *mux, OutputStream *ost,
- AVPacket *pkt, int *stream_eof)
+static int mux_packet_filter(Muxer *mux, MuxThreadContext *mt,
+ OutputStream *ost, AVPacket *pkt, int *stream_eof)
{
MuxStream *ms = ms_from_ost(ost);
const char *err_msg;
int ret = 0;
+ if (pkt && !ost->enc) {
+ ret = of_streamcopy(ost, pkt);
+ if (ret == AVERROR(EAGAIN))
+ return 0;
+ else if (ret == AVERROR_EOF) {
+ av_packet_unref(pkt);
+ pkt = NULL;
+ ret = 0;
+ } else if (ret < 0)
+ goto fail;
+ }
+
+ // emit heartbeat for -fix_sub_duration;
+ // we are only interested in heartbeats on random access points.
+ if (pkt && (pkt->flags & AV_PKT_FLAG_KEY)) {
+ mt->fix_sub_duration_pkt->opaque = (void*)(intptr_t)PKT_OPAQUE_FIX_SUB_DURATION;
+ mt->fix_sub_duration_pkt->pts = pkt->pts;
+ mt->fix_sub_duration_pkt->time_base = pkt->time_base;
+
+ ret = sch_mux_sub_heartbeat(mux->sch, mux->sch_idx, ms->sch_idx,
+ mt->fix_sub_duration_pkt);
+ if (ret < 0)
+ goto fail;
+ }
+
if (ms->bsf_ctx) {
int bsf_eof = 0;
@@ -278,6 +301,7 @@ static void thread_set_name(OutputFile *of)
static void mux_thread_uninit(MuxThreadContext *mt)
{
av_packet_free(&mt->pkt);
+ av_packet_free(&mt->fix_sub_duration_pkt);
memset(mt, 0, sizeof(*mt));
}
@@ -290,6 +314,10 @@ static int mux_thread_init(MuxThreadContext *mt)
if (!mt->pkt)
goto fail;
+ mt->fix_sub_duration_pkt = av_packet_alloc();
+ if (!mt->fix_sub_duration_pkt)
+ goto fail;
+
return 0;
fail:
@@ -316,19 +344,22 @@ void *muxer_thread(void *arg)
OutputStream *ost;
int stream_idx, stream_eof = 0;
- ret = tq_receive(mux->tq, &stream_idx, mt.pkt);
+ ret = sch_mux_receive(mux->sch, of->index, mt.pkt);
+ stream_idx = mt.pkt->stream_index;
if (stream_idx < 0) {
av_log(mux, AV_LOG_VERBOSE, "All streams finished\n");
ret = 0;
break;
}
- ost = of->streams[stream_idx];
- ret = mux_packet_filter(mux, ost, ret < 0 ? NULL : mt.pkt, &stream_eof);
+ ost = of->streams[mux->sch_stream_idx[stream_idx]];
+ mt.pkt->stream_index = ost->index;
+
+ ret = mux_packet_filter(mux, &mt, ost, ret < 0 ? NULL : mt.pkt, &stream_eof);
av_packet_unref(mt.pkt);
if (ret == AVERROR_EOF) {
if (stream_eof) {
- tq_receive_finish(mux->tq, stream_idx);
+ sch_mux_receive_finish(mux->sch, of->index, stream_idx);
} else {
av_log(mux, AV_LOG_VERBOSE, "Muxer returned EOF\n");
ret = 0;
@@ -343,243 +374,55 @@ void *muxer_thread(void *arg)
finish:
mux_thread_uninit(&mt);
- for (unsigned int i = 0; i < mux->fc->nb_streams; i++)
- tq_receive_finish(mux->tq, i);
-
- av_log(mux, AV_LOG_VERBOSE, "Terminating muxer thread\n");
-
return (void*)(intptr_t)ret;
}
-static int thread_submit_packet(Muxer *mux, OutputStream *ost, AVPacket *pkt)
-{
- int ret = 0;
-
- if (!pkt || ost->finished & MUXER_FINISHED)
- goto finish;
-
- ret = tq_send(mux->tq, ost->index, pkt);
- if (ret < 0)
- goto finish;
-
- return 0;
-
-finish:
- if (pkt)
- av_packet_unref(pkt);
-
- ost->finished |= MUXER_FINISHED;
- tq_send_finish(mux->tq, ost->index);
- return ret == AVERROR_EOF ? 0 : ret;
-}
-
-static int queue_packet(OutputStream *ost, AVPacket *pkt)
-{
- MuxStream *ms = ms_from_ost(ost);
- AVPacket *tmp_pkt = NULL;
- int ret;
-
- if (!av_fifo_can_write(ms->muxing_queue)) {
- size_t cur_size = av_fifo_can_read(ms->muxing_queue);
- size_t pkt_size = pkt ? pkt->size : 0;
- unsigned int are_we_over_size =
- (ms->muxing_queue_data_size + pkt_size) > ms->muxing_queue_data_threshold;
- size_t limit = are_we_over_size ? ms->max_muxing_queue_size : SIZE_MAX;
- size_t new_size = FFMIN(2 * cur_size, limit);
-
- if (new_size <= cur_size) {
- av_log(ost, AV_LOG_ERROR,
- "Too many packets buffered for output stream %d:%d.\n",
- ost->file_index, ost->st->index);
- return AVERROR(ENOSPC);
- }
- ret = av_fifo_grow2(ms->muxing_queue, new_size - cur_size);
- if (ret < 0)
- return ret;
- }
-
- if (pkt) {
- ret = av_packet_make_refcounted(pkt);
- if (ret < 0)
- return ret;
-
- tmp_pkt = av_packet_alloc();
- if (!tmp_pkt)
- return AVERROR(ENOMEM);
-
- av_packet_move_ref(tmp_pkt, pkt);
- ms->muxing_queue_data_size += tmp_pkt->size;
- }
- av_fifo_write(ms->muxing_queue, &tmp_pkt, 1);
-
- return 0;
-}
-
-static int submit_packet(Muxer *mux, AVPacket *pkt, OutputStream *ost)
-{
- int ret;
-
- if (mux->tq) {
- return thread_submit_packet(mux, ost, pkt);
- } else {
- /* the muxer is not initialized yet, buffer the packet */
- ret = queue_packet(ost, pkt);
- if (ret < 0) {
- if (pkt)
- av_packet_unref(pkt);
- return ret;
- }
- }
-
- return 0;
-}
-
-int of_output_packet(OutputFile *of, OutputStream *ost, AVPacket *pkt)
-{
- Muxer *mux = mux_from_of(of);
- int ret = 0;
-
- if (pkt && pkt->dts != AV_NOPTS_VALUE)
- ost->last_mux_dts = av_rescale_q(pkt->dts, pkt->time_base, AV_TIME_BASE_Q);
-
- ret = submit_packet(mux, pkt, ost);
- if (ret < 0) {
- av_log(ost, AV_LOG_ERROR, "Error submitting a packet to the muxer: %s",
- av_err2str(ret));
- return ret;
- }
-
- return 0;
-}
-
-int of_streamcopy(OutputStream *ost, const AVPacket *pkt, int64_t dts)
+static int of_streamcopy(OutputStream *ost, AVPacket *pkt)
{
OutputFile *of = output_files[ost->file_index];
MuxStream *ms = ms_from_ost(ost);
+ DemuxPktData *pd = pkt->opaque_ref ? (DemuxPktData*)pkt->opaque_ref->data : NULL;
+ int64_t dts = pd ? pd->dts_est : AV_NOPTS_VALUE;
int64_t start_time = (of->start_time == AV_NOPTS_VALUE) ? 0 : of->start_time;
int64_t ts_offset;
- AVPacket *opkt = ms->pkt;
- int ret;
-
- av_packet_unref(opkt);
if (of->recording_time != INT64_MAX &&
dts >= of->recording_time + start_time)
- pkt = NULL;
-
- // EOF: flush output bitstream filters.
- if (!pkt)
- return of_output_packet(of, ost, NULL);
+ return AVERROR_EOF;
if (!ms->streamcopy_started && !(pkt->flags & AV_PKT_FLAG_KEY) &&
!ms->copy_initial_nonkeyframes)
- return 0;
+ return AVERROR(EAGAIN);
if (!ms->streamcopy_started) {
if (!ms->copy_prior_start &&
(pkt->pts == AV_NOPTS_VALUE ?
dts < ms->ts_copy_start :
pkt->pts < av_rescale_q(ms->ts_copy_start, AV_TIME_BASE_Q, pkt->time_base)))
- return 0;
+ return AVERROR(EAGAIN);
if (of->start_time != AV_NOPTS_VALUE && dts < of->start_time)
- return 0;
+ return AVERROR(EAGAIN);
}
- ret = av_packet_ref(opkt, pkt);
- if (ret < 0)
- return ret;
-
- ts_offset = av_rescale_q(start_time, AV_TIME_BASE_Q, opkt->time_base);
+ ts_offset = av_rescale_q(start_time, AV_TIME_BASE_Q, pkt->time_base);
if (pkt->pts != AV_NOPTS_VALUE)
- opkt->pts -= ts_offset;
+ pkt->pts -= ts_offset;
if (pkt->dts == AV_NOPTS_VALUE) {
- opkt->dts = av_rescale_q(dts, AV_TIME_BASE_Q, opkt->time_base);
+ pkt->dts = av_rescale_q(dts, AV_TIME_BASE_Q, pkt->time_base);
} else if (ost->st->codecpar->codec_type == AVMEDIA_TYPE_AUDIO) {
- opkt->pts = opkt->dts - ts_offset;
- }
- opkt->dts -= ts_offset;
-
- {
- int ret = trigger_fix_sub_duration_heartbeat(ost, pkt);
- if (ret < 0) {
- av_log(NULL, AV_LOG_ERROR,
- "Subtitle heartbeat logic failed in %s! (%s)\n",
- __func__, av_err2str(ret));
- return ret;
- }
+ pkt->pts = pkt->dts - ts_offset;
}
- ret = of_output_packet(of, ost, opkt);
- if (ret < 0)
- return ret;
+ pkt->dts -= ts_offset;
ms->streamcopy_started = 1;
return 0;
}
-static int thread_stop(Muxer *mux)
-{
- void *ret;
-
- if (!mux || !mux->tq)
- return 0;
-
- for (unsigned int i = 0; i < mux->fc->nb_streams; i++)
- tq_send_finish(mux->tq, i);
-
- pthread_join(mux->thread, &ret);
-
- tq_free(&mux->tq);
-
- return (int)(intptr_t)ret;
-}
-
-static int thread_start(Muxer *mux)
-{
- AVFormatContext *fc = mux->fc;
- ObjPool *op;
- int ret;
-
- op = objpool_alloc_packets();
- if (!op)
- return AVERROR(ENOMEM);
-
- mux->tq = tq_alloc(fc->nb_streams, mux->thread_queue_size, op, pkt_move);
- if (!mux->tq) {
- objpool_free(&op);
- return AVERROR(ENOMEM);
- }
-
- ret = pthread_create(&mux->thread, NULL, muxer_thread, (void*)mux);
- if (ret) {
- tq_free(&mux->tq);
- return AVERROR(ret);
- }
-
- /* flush the muxing queues */
- for (int i = 0; i < fc->nb_streams; i++) {
- OutputStream *ost = mux->of.streams[i];
- MuxStream *ms = ms_from_ost(ost);
- AVPacket *pkt;
-
- while (av_fifo_read(ms->muxing_queue, &pkt, 1) >= 0) {
- ret = thread_submit_packet(mux, ost, pkt);
- if (pkt) {
- ms->muxing_queue_data_size -= pkt->size;
- av_packet_free(&pkt);
- }
- if (ret < 0)
- return ret;
- }
- }
-
- return 0;
-}
-
int print_sdp(const char *filename);
int print_sdp(const char *filename)
@@ -590,11 +433,6 @@ int print_sdp(const char *filename)
AVIOContext *sdp_pb;
AVFormatContext **avc;
- for (i = 0; i < nb_output_files; i++) {
- if (!mux_from_of(output_files[i])->header_written)
- return 0;
- }
-
avc = av_malloc_array(nb_output_files, sizeof(*avc));
if (!avc)
return AVERROR(ENOMEM);
@@ -629,25 +467,17 @@ int print_sdp(const char *filename)
avio_closep(&sdp_pb);
}
- // SDP successfully written, allow muxer threads to start
- ret = 1;
-
fail:
av_freep(&avc);
return ret;
}
-int mux_check_init(Muxer *mux)
+int mux_check_init(void *arg)
{
+ Muxer *mux = arg;
OutputFile *of = &mux->of;
AVFormatContext *fc = mux->fc;
- int ret, i;
-
- for (i = 0; i < fc->nb_streams; i++) {
- OutputStream *ost = of->streams[i];
- if (!ost->initialized)
- return 0;
- }
+ int ret;
ret = avformat_write_header(fc, &mux->opts);
if (ret < 0) {
@@ -659,27 +489,7 @@ int mux_check_init(Muxer *mux)
mux->header_written = 1;
av_dump_format(fc, of->index, fc->url, 1);
- nb_output_dumped++;
-
- if (sdp_filename || want_sdp) {
- ret = print_sdp(sdp_filename);
- if (ret < 0) {
- av_log(NULL, AV_LOG_ERROR, "Error writing the SDP.\n");
- return ret;
- } else if (ret == 1) {
- /* SDP is written only after all the muxers are ready, so now we
- * start ALL the threads */
- for (i = 0; i < nb_output_files; i++) {
- ret = thread_start(mux_from_of(output_files[i]));
- if (ret < 0)
- return ret;
- }
- }
- } else {
- ret = thread_start(mux_from_of(of));
- if (ret < 0)
- return ret;
- }
+ atomic_fetch_add(&nb_output_dumped, 1);
return 0;
}
@@ -736,9 +546,10 @@ int of_stream_init(OutputFile *of, OutputStream *ost)
ost->st->time_base);
}
- ost->initialized = 1;
+ if (ms->sch_idx >= 0)
+ return sch_mux_stream_ready(mux->sch, of->index, ms->sch_idx);
- return mux_check_init(mux);
+ return 0;
}
static int check_written(OutputFile *of)
@@ -852,15 +663,13 @@ int of_write_trailer(OutputFile *of)
AVFormatContext *fc = mux->fc;
int ret, mux_result = 0;
- if (!mux->tq) {
+ if (!mux->header_written) {
av_log(mux, AV_LOG_ERROR,
"Nothing was written into output file, because "
"at least one of its streams received no packets.\n");
return AVERROR(EINVAL);
}
- mux_result = thread_stop(mux);
-
ret = av_write_trailer(fc);
if (ret < 0) {
av_log(mux, AV_LOG_ERROR, "Error writing trailer: %s\n", av_err2str(ret));
@@ -905,13 +714,6 @@ static void ost_free(OutputStream **post)
ost->logfile = NULL;
}
- if (ms->muxing_queue) {
- AVPacket *pkt;
- while (av_fifo_read(ms->muxing_queue, &pkt, 1) >= 0)
- av_packet_free(&pkt);
- av_fifo_freep2(&ms->muxing_queue);
- }
-
avcodec_parameters_free(&ost->par_in);
av_bsf_free(&ms->bsf_ctx);
@@ -976,8 +778,6 @@ void of_free(OutputFile **pof)
return;
mux = mux_from_of(of);
- thread_stop(mux);
-
sq_free(&of->sq_encode);
sq_free(&mux->sq_mux);
diff --git a/fftools/ffmpeg_mux.h b/fftools/ffmpeg_mux.h
index eee2b2cb07..5d7cf3fa76 100644
--- a/fftools/ffmpeg_mux.h
+++ b/fftools/ffmpeg_mux.h
@@ -25,7 +25,6 @@
#include <stdint.h>
#include "ffmpeg_sched.h"
-#include "thread_queue.h"
#include "libavformat/avformat.h"
@@ -33,7 +32,6 @@
#include "libavutil/dict.h"
#include "libavutil/fifo.h"
-#include "libavutil/thread.h"
typedef struct MuxStream {
OutputStream ost;
@@ -41,9 +39,6 @@ typedef struct MuxStream {
// name used for logging
char log_name[32];
- /* the packets are buffered here until the muxer is ready to be initialized */
- AVFifo *muxing_queue;
-
AVBSFContext *bsf_ctx;
AVPacket *bsf_pkt;
@@ -57,17 +52,6 @@ typedef struct MuxStream {
int64_t max_frames;
- /*
- * The size of the AVPackets' buffers in queue.
- * Updated when a packet is either pushed or pulled from the queue.
- */
- size_t muxing_queue_data_size;
-
- int max_muxing_queue_size;
-
- /* Threshold after which max_muxing_queue_size will be in effect */
- size_t muxing_queue_data_threshold;
-
// timestamp from which the streamcopied streams should start,
// in AV_TIME_BASE_Q;
// everything before it should be discarded
@@ -106,9 +90,6 @@ typedef struct Muxer {
int *sch_stream_idx;
int nb_sch_stream_idx;
- pthread_t thread;
- ThreadQueue *tq;
-
AVDictionary *opts;
int thread_queue_size;
@@ -122,10 +103,7 @@ typedef struct Muxer {
AVPacket *sq_pkt;
} Muxer;
-/* whether we want to print an SDP, set in of_open() */
-extern int want_sdp;
-
-int mux_check_init(Muxer *mux);
+int mux_check_init(void *arg);
static MuxStream *ms_from_ost(OutputStream *ost)
{
diff --git a/fftools/ffmpeg_mux_init.c b/fftools/ffmpeg_mux_init.c
index 534b4379c7..6459296ab0 100644
--- a/fftools/ffmpeg_mux_init.c
+++ b/fftools/ffmpeg_mux_init.c
@@ -924,13 +924,6 @@ static int new_stream_audio(Muxer *mux, const OptionsContext *o,
return 0;
}
-static int new_stream_attachment(Muxer *mux, const OptionsContext *o,
- OutputStream *ost)
-{
- ost->finished = 1;
- return 0;
-}
-
static int new_stream_subtitle(Muxer *mux, const OptionsContext *o,
OutputStream *ost)
{
@@ -1168,9 +1161,6 @@ static int ost_add(Muxer *mux, const OptionsContext *o, enum AVMediaType type,
if (!ost->par_in)
return AVERROR(ENOMEM);
- ms->muxing_queue = av_fifo_alloc2(8, sizeof(AVPacket*), 0);
- if (!ms->muxing_queue)
- return AVERROR(ENOMEM);
ms->last_mux_dts = AV_NOPTS_VALUE;
ost->st = st;
@@ -1190,7 +1180,8 @@ static int ost_add(Muxer *mux, const OptionsContext *o, enum AVMediaType type,
if (!ost->enc_ctx)
return AVERROR(ENOMEM);
- ret = sch_add_enc(mux->sch, encoder_thread, ost, NULL);
+ ret = sch_add_enc(mux->sch, encoder_thread, ost,
+ ost->type == AVMEDIA_TYPE_SUBTITLE ? NULL : enc_open);
if (ret < 0)
return ret;
ms->sch_idx_enc = ret;
@@ -1414,9 +1405,6 @@ static int ost_add(Muxer *mux, const OptionsContext *o, enum AVMediaType type,
sch_mux_stream_buffering(mux->sch, mux->sch_idx, ms->sch_idx,
max_muxing_queue_size, muxing_queue_data_threshold);
-
- ms->max_muxing_queue_size = max_muxing_queue_size;
- ms->muxing_queue_data_threshold = muxing_queue_data_threshold;
}
MATCH_PER_STREAM_OPT(bits_per_raw_sample, i, ost->bits_per_raw_sample,
@@ -1434,8 +1422,6 @@ static int ost_add(Muxer *mux, const OptionsContext *o, enum AVMediaType type,
if (ost->enc_ctx && av_get_exact_bits_per_sample(ost->enc_ctx->codec_id) == 24)
av_dict_set(&ost->swr_opts, "output_sample_bits", "24", 0);
- ost->last_mux_dts = AV_NOPTS_VALUE;
-
MATCH_PER_STREAM_OPT(copy_initial_nonkeyframes, i,
ms->copy_initial_nonkeyframes, oc, st);
@@ -1443,7 +1429,6 @@ static int ost_add(Muxer *mux, const OptionsContext *o, enum AVMediaType type,
case AVMEDIA_TYPE_VIDEO: ret = new_stream_video (mux, o, ost); break;
case AVMEDIA_TYPE_AUDIO: ret = new_stream_audio (mux, o, ost); break;
case AVMEDIA_TYPE_SUBTITLE: ret = new_stream_subtitle (mux, o, ost); break;
- case AVMEDIA_TYPE_ATTACHMENT: ret = new_stream_attachment(mux, o, ost); break;
}
if (ret < 0)
return ret;
@@ -1938,7 +1923,6 @@ static int setup_sync_queues(Muxer *mux, AVFormatContext *oc, int64_t buf_size_u
MuxStream *ms = ms_from_ost(ost);
enum AVMediaType type = ost->type;
- ost->sq_idx_encode = -1;
ost->sq_idx_mux = -1;
nb_interleaved += IS_INTERLEAVED(type);
@@ -1961,11 +1945,17 @@ static int setup_sync_queues(Muxer *mux, AVFormatContext *oc, int64_t buf_size_u
* - at least one encoded audio/video stream is frame-limited, since
* that has similar semantics to 'shortest'
* - at least one audio encoder requires constant frame sizes
+ *
+ * Note that encoding sync queues are handled in the scheduler, because
+ * different encoders run in different threads and need external
+ * synchronization, while muxer sync queues can be handled inside the muxer
*/
if ((of->shortest && nb_av_enc > 1) || limit_frames_av_enc || nb_audio_fs) {
- of->sq_encode = sq_alloc(SYNC_QUEUE_FRAMES, buf_size_us, mux);
- if (!of->sq_encode)
- return AVERROR(ENOMEM);
+ int sq_idx, ret;
+
+ sq_idx = sch_add_sq_enc(mux->sch, buf_size_us, mux);
+ if (sq_idx < 0)
+ return sq_idx;
for (int i = 0; i < oc->nb_streams; i++) {
OutputStream *ost = of->streams[i];
@@ -1975,13 +1965,11 @@ static int setup_sync_queues(Muxer *mux, AVFormatContext *oc, int64_t buf_size_u
if (!IS_AV_ENC(ost, type))
continue;
- ost->sq_idx_encode = sq_add_stream(of->sq_encode,
- of->shortest || ms->max_frames < INT64_MAX);
- if (ost->sq_idx_encode < 0)
- return ost->sq_idx_encode;
-
- if (ms->max_frames != INT64_MAX)
- sq_limit_frames(of->sq_encode, ost->sq_idx_encode, ms->max_frames);
+ ret = sch_sq_add_enc(mux->sch, sq_idx, ms->sch_idx_enc,
+ of->shortest || ms->max_frames < INT64_MAX,
+ ms->max_frames);
+ if (ret < 0)
+ return ret;
}
}
@@ -2652,23 +2640,6 @@ static int validate_enc_avopt(Muxer *mux, const AVDictionary *codec_avopt)
return 0;
}
-static int init_output_stream_nofilter(OutputStream *ost)
-{
- int ret = 0;
-
- if (ost->enc_ctx) {
- ret = enc_open(ost, NULL);
- if (ret < 0)
- return ret;
- } else {
- ret = of_stream_init(output_files[ost->file_index], ost);
- if (ret < 0)
- return ret;
- }
-
- return ret;
-}
-
static const char *output_file_item_name(void *obj)
{
const Muxer *mux = obj;
@@ -2751,8 +2722,6 @@ int of_open(const OptionsContext *o, const char *filename, Scheduler *sch)
av_strlcat(mux->log_name, "/", sizeof(mux->log_name));
av_strlcat(mux->log_name, oc->oformat->name, sizeof(mux->log_name));
- if (strcmp(oc->oformat->name, "rtp"))
- want_sdp = 0;
of->format = oc->oformat;
if (recording_time != INT64_MAX)
@@ -2768,7 +2737,7 @@ int of_open(const OptionsContext *o, const char *filename, Scheduler *sch)
AVFMT_FLAG_BITEXACT);
}
- err = sch_add_mux(sch, muxer_thread, NULL, mux,
+ err = sch_add_mux(sch, muxer_thread, mux_check_init, mux,
!strcmp(oc->oformat->name, "rtp"));
if (err < 0)
return err;
@@ -2854,26 +2823,15 @@ int of_open(const OptionsContext *o, const char *filename, Scheduler *sch)
of->url = filename;
- /* initialize stream copy and subtitle/data streams.
- * Encoded AVFrame based streams will get initialized when the first AVFrame
- * is received in do_video_out
- */
+ /* initialize streamcopy streams. */
for (int i = 0; i < of->nb_streams; i++) {
OutputStream *ost = of->streams[i];
- if (ost->filter)
- continue;
-
- err = init_output_stream_nofilter(ost);
- if (err < 0)
- return err;
- }
-
- /* write the header for files with no streams */
- if (of->format->flags & AVFMT_NOSTREAMS && oc->nb_streams == 0) {
- int ret = mux_check_init(mux);
- if (ret < 0)
- return ret;
+ if (!ost->enc) {
+ err = of_stream_init(of, ost);
+ if (err < 0)
+ return err;
+ }
}
return 0;
diff --git a/fftools/ffmpeg_opt.c b/fftools/ffmpeg_opt.c
index d463306546..6177a96a4e 100644
--- a/fftools/ffmpeg_opt.c
+++ b/fftools/ffmpeg_opt.c
@@ -64,7 +64,6 @@ const char *const opt_name_top_field_first[] = {"top", NULL};
HWDevice *filter_hw_device;
char *vstats_filename;
-char *sdp_filename;
float audio_drift_threshold = 0.1;
float dts_delta_threshold = 10;
@@ -580,9 +579,8 @@ fail:
static int opt_sdp_file(void *optctx, const char *opt, const char *arg)
{
- av_free(sdp_filename);
- sdp_filename = av_strdup(arg);
- return 0;
+ Scheduler *sch = optctx;
+ return sch_sdp_filename(sch, arg);
}
#if CONFIG_VAAPI
diff --git a/tests/ref/fate/ffmpeg-fix_sub_duration_heartbeat b/tests/ref/fate/ffmpeg-fix_sub_duration_heartbeat
index 957a410921..bc9b833799 100644
--- a/tests/ref/fate/ffmpeg-fix_sub_duration_heartbeat
+++ b/tests/ref/fate/ffmpeg-fix_sub_duration_heartbeat
@@ -1,48 +1,40 @@
1
-00:00:00,968 --> 00:00:01,001
+00:00:00,968 --> 00:00:01,168
<font face="Monospace">{\an7}(</font>
2
-00:00:01,001 --> 00:00:01,168
-<font face="Monospace">{\an7}(</font>
-
-3
00:00:01,168 --> 00:00:01,368
<font face="Monospace">{\an7}(<i> inaudibl</i></font>
-4
+3
00:00:01,368 --> 00:00:01,568
<font face="Monospace">{\an7}(<i> inaudible radio chat</i></font>
-5
+4
00:00:01,568 --> 00:00:02,002
<font face="Monospace">{\an7}(<i> inaudible radio chatter</i> )</font>
+5
+00:00:02,002 --> 00:00:03,103
+<font face="Monospace">{\an7}(<i> inaudible radio chatter</i> )</font>
+
6
-00:00:02,002 --> 00:00:03,003
-<font face="Monospace">{\an7}(<i> inaudible radio chatter</i> )</font>
-
-7
-00:00:03,003 --> 00:00:03,103
-<font face="Monospace">{\an7}(<i> inaudible radio chatter</i> )</font>
-
-8
00:00:03,103 --> 00:00:03,303
-<font face="Monospace">{\an7}(<i> inaudible radio chatter</i> )
+<font face="Monospace">{\an7}(<i> inaudible radio chatter</i> )
>></font>
-9
+7
00:00:03,303 --> 00:00:03,503
-<font face="Monospace">{\an7}(<i> inaudible radio chatter</i> )
+<font face="Monospace">{\an7}(<i> inaudible radio chatter</i> )
>> Safety rema</font>
-10
+8
00:00:03,504 --> 00:00:03,704
-<font face="Monospace">{\an7}(<i> inaudible radio chatter</i> )
+<font face="Monospace">{\an7}(<i> inaudible radio chatter</i> )
>> Safety remains our numb</font>
-11
+9
00:00:03,704 --> 00:00:04,004
-<font face="Monospace">{\an7}(<i> inaudible radio chatter</i> )
+<font face="Monospace">{\an7}(<i> inaudible radio chatter</i> )
>> Safety remains our number one</font>
--
2.42.0
_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
To unsubscribe, visit link above, or email
ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".
* Re: [FFmpeg-devel] [PATCH 02/13] fftools/ffmpeg_filter: make sub2video heartbeat more robust
2023-11-23 19:14 ` [FFmpeg-devel] [PATCH 02/13] fftools/ffmpeg_filter: make sub2video heartbeat more robust Anton Khirnov
@ 2023-11-27 9:40 ` Nicolas George
2023-11-27 9:42 ` Nicolas George
2023-11-29 10:18 ` Anton Khirnov
0 siblings, 2 replies; 49+ messages in thread
From: Nicolas George @ 2023-11-27 9:40 UTC (permalink / raw)
To: FFmpeg development discussions and patches
Anton Khirnov (12023-11-23):
> Avoid making decisions based on current graph input state, which makes
> the output dependent on the order in which the frames from different
> inputs are interleaved.
>
> Makes the output of fate-filter-overlay-dvdsub-2397 more correct - the
> subtitle appears two frames later, which is closer to its PTS as stored
> in the file.
> ---
> fftools/ffmpeg_filter.c | 3 +--
> tests/ref/fate/filter-overlay-dvdsub-2397 | 4 ++--
> tests/ref/fate/sub2video | 8 +++++---
> 3 files changed, 8 insertions(+), 7 deletions(-)
Just as I warned you, it breaks the test case I suggested:
./ffmpeg_g -xerror -i /tmp/dummy_with_sub.mkv -preset ultrafast -lavfi '[0:s]setpts=PTS+60/TB[s] ; [0:v][s]overlay' -y /tmp/dummy_with_hardsub.mkv
(/tmp/dummy_with_sub.mkv is created like I told a few days ago)
thousands of frames queued, eventually failing on OOM.
Regards,
--
Nicolas George
* Re: [FFmpeg-devel] [PATCH 02/13] fftools/ffmpeg_filter: make sub2video heartbeat more robust
2023-11-27 9:40 ` Nicolas George
@ 2023-11-27 9:42 ` Nicolas George
2023-11-27 13:02 ` Paul B Mahol
2023-11-29 10:18 ` Anton Khirnov
1 sibling, 1 reply; 49+ messages in thread
From: Nicolas George @ 2023-11-27 9:42 UTC (permalink / raw)
To: FFmpeg development discussions and patches
Nicolas George (12023-11-27):
> Just as I warned you, it breaks the test case I suggested:
>
> ./ffmpeg_g -xerror -i /tmp/dummy_with_sub.mkv -preset ultrafast -lavfi '[0:s]setpts=PTS+60/TB[s] ; [0:v][s]overlay' -y /tmp/dummy_with_hardsub.mkv
>
> (/tmp/dummy_with_sub.mkv is created like I told a few days ago)
> thousands of frames queued, eventually failing on OOM.
… And it fails with just this patch but also with the whole patch
series.
--
Nicolas George
* Re: [FFmpeg-devel] [PATCH 01/13] lavfi/buffersink: avoid leaking peeked_frame on uninit
2023-11-23 19:14 ` [FFmpeg-devel] [PATCH 01/13] lavfi/buffersink: avoid leaking peeked_frame on uninit Anton Khirnov
2023-11-23 22:16 ` Paul B Mahol
@ 2023-11-27 9:45 ` Nicolas George
1 sibling, 0 replies; 49+ messages in thread
From: Nicolas George @ 2023-11-27 9:45 UTC (permalink / raw)
To: FFmpeg development discussions and patches
Anton Khirnov (12023-11-23):
> ---
> libavfilter/buffersink.c | 9 +++++++++
> 1 file changed, 9 insertions(+)
LGTM, thanks.
Regards,
--
Nicolas George
* Re: [FFmpeg-devel] [PATCH 02/13] fftools/ffmpeg_filter: make sub2video heartbeat more robust
2023-11-27 9:42 ` Nicolas George
@ 2023-11-27 13:02 ` Paul B Mahol
2023-11-27 13:49 ` Nicolas George
0 siblings, 1 reply; 49+ messages in thread
From: Paul B Mahol @ 2023-11-27 13:02 UTC (permalink / raw)
To: FFmpeg development discussions and patches
On Mon, Nov 27, 2023 at 10:43 AM Nicolas George <george@nsup.org> wrote:
> Nicolas George (12023-11-27):
> > Just as I warned you, it breaks the test case I suggested:
> >
> > ./ffmpeg_g -xerror -i /tmp/dummy_with_sub.mkv -preset ultrafast -lavfi
> '[0:s]setpts=PTS+60/TB[s] ; [0:v][s]overlay' -y /tmp/dummy_with_hardsub.mkv
> >
> > (/tmp/dummy_with_sub.mkv is created like I told a few days ago)
> > thousands of frames queued, eventually failing on OOM.
>
> … And it fails with just this patch but also with the whole patch
> series.
>
Looks unrelated issue I just fixed, and sent patch to ML.
>
> --
> Nicolas George
* Re: [FFmpeg-devel] [PATCH 02/13] fftools/ffmpeg_filter: make sub2video heartbeat more robust
2023-11-27 13:02 ` Paul B Mahol
@ 2023-11-27 13:49 ` Nicolas George
2023-11-27 14:08 ` Paul B Mahol
0 siblings, 1 reply; 49+ messages in thread
From: Nicolas George @ 2023-11-27 13:49 UTC (permalink / raw)
To: FFmpeg development discussions and patches
Paul B Mahol (12023-11-27):
> Looks unrelated issue I just fixed, and sent patch to ML.
No, it does not change anything, still “queued = 1081”. You could have
tested.
--
Nicolas George
* Re: [FFmpeg-devel] [PATCH 02/13] fftools/ffmpeg_filter: make sub2video heartbeat more robust
2023-11-27 13:49 ` Nicolas George
@ 2023-11-27 14:08 ` Paul B Mahol
0 siblings, 0 replies; 49+ messages in thread
From: Paul B Mahol @ 2023-11-27 14:08 UTC (permalink / raw)
To: FFmpeg development discussions and patches
On Mon, Nov 27, 2023 at 2:49 PM Nicolas George <george@nsup.org> wrote:
> Paul B Mahol (12023-11-27):
> > Looks unrelated issue I just fixed, and sent patch to ML.
>
> No, it does not change anything, still “queued = 1081”. You could have
> tested.
>
I tested it, it passed:
-bash: mkvmerge: command not found
>
> --
> Nicolas George
* Re: [FFmpeg-devel] [PATCH 02/13] fftools/ffmpeg_filter: make sub2video heartbeat more robust
2023-11-27 9:40 ` Nicolas George
2023-11-27 9:42 ` Nicolas George
@ 2023-11-29 10:18 ` Anton Khirnov
1 sibling, 0 replies; 49+ messages in thread
From: Anton Khirnov @ 2023-11-29 10:18 UTC (permalink / raw)
To: FFmpeg development discussions and patches
Quoting Nicolas George (2023-11-27 10:40:18)
> Anton Khirnov (12023-11-23):
> > Avoid making decisions based on current graph input state, which makes
> > the output dependent on the order in which the frames from different
> > inputs are interleaved.
> >
> > Makes the output of fate-filter-overlay-dvdsub-2397 more correct - the
> > subtitle appears two frames later, which is closer to its PTS as stored
> > in the file.
> > ---
> > fftools/ffmpeg_filter.c | 3 +--
> > tests/ref/fate/filter-overlay-dvdsub-2397 | 4 ++--
> > tests/ref/fate/sub2video | 8 +++++---
> > 3 files changed, 8 insertions(+), 7 deletions(-)
>
> Just as I warned you, it breaks the test case I suggested:
>
> ./ffmpeg_g -xerror -i /tmp/dummy_with_sub.mkv -preset ultrafast -lavfi '[0:s]setpts=PTS+60/TB[s] ; [0:v][s]overlay' -y /tmp/dummy_with_hardsub.mkv
>
> (/tmp/dummy_with_sub.mkv is created like I told a few days ago)
> thousands of frames queued, eventually failing on OOM.
You're offsetting two streams from the same file by 60 seconds, so you
should expect about 60s of buffering - that is 1500 frames at 25fps.
The maximum number of frames I see buffered is 1602, which roughly
corresponds to the expected number.
So I don't think this demonstrates there actually is a problem.
Also note that the output timestamps look better after the patch than
before.
--
Anton Khirnov
* Re: [FFmpeg-devel] [PATCH 13/13 v2] fftools/ffmpeg: convert to a threaded architecture
2023-11-25 20:32 ` [FFmpeg-devel] [PATCH 13/13 v2] " Anton Khirnov
@ 2023-11-30 13:08 ` Michael Niedermayer
2023-11-30 13:34 ` Anton Khirnov
0 siblings, 1 reply; 49+ messages in thread
From: Michael Niedermayer @ 2023-11-30 13:08 UTC (permalink / raw)
To: FFmpeg development discussions and patches
On Sat, Nov 25, 2023 at 09:32:06PM +0100, Anton Khirnov wrote:
> Change the main loop and every component (demuxers, decoders, filters,
> encoders, muxers) to use the previously added transcode scheduler. Every
> instance of every such component was already running in a separate
> thread, but now they can actually run in parallel.
>
> Changes the results of ffmpeg-fix_sub_duration_heartbeat - tested by
> JEEB to be more correct and deterministic.
> ---
> fftools/ffmpeg.c | 374 +--------
> fftools/ffmpeg.h | 97 +--
> fftools/ffmpeg_dec.c | 321 ++------
> fftools/ffmpeg_demux.c | 268 ++++---
> fftools/ffmpeg_enc.c | 368 ++-------
> fftools/ffmpeg_filter.c | 722 +++++-------------
> fftools/ffmpeg_mux.c | 324 ++------
> fftools/ffmpeg_mux.h | 24 +-
> fftools/ffmpeg_mux_init.c | 88 +--
> fftools/ffmpeg_opt.c | 6 +-
> .../fate/ffmpeg-fix_sub_duration_heartbeat | 36 +-
> 11 files changed, 598 insertions(+), 2030 deletions(-)
I tried
./ffmpeg -f lavfi -i testsrc2 -bsf:v noise -bitexact -t 2 /tmp/.y4m
with merged ffmpeg_threading into master
and it gets stuck
Stream #0:0 -> #0:0 (wrapped_avframe (native) -> wrapped_avframe (native))
Press [q] to stop, [?] for help
[noise @ 0x55e8fbaea340] Wrapped AVFrame noising is unsupported
[vost#0:0/wrapped_avframe @ 0x55e8fbae9840] Error initializing bitstream filter: noise
[vf#0:0 @ 0x55e8fbaea880] Error sending frames to consumers: Not yet implemented in FFmpeg, patches welcome
[vf#0:0 @ 0x55e8fbaea880] Task finished with error code: -1163346256 (Not yet implemented in FFmpeg, patches welcome)
[vf#0:0 @ 0x55e8fbaea880] Terminating thread with return code -1163346256 (Not yet implemented in FFmpeg, patches welcome)
[...]
--
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
The day soldiers stop bringing you their problems is the day you have stopped
leading them. They have either lost confidence that you can help or concluded
you do not care. Either case is a failure of leadership. - Colin Powell
* Re: [FFmpeg-devel] [PATCH 13/13 v2] fftools/ffmpeg: convert to a threaded architecture
2023-11-30 13:08 ` Michael Niedermayer
@ 2023-11-30 13:34 ` Anton Khirnov
2023-11-30 20:48 ` Michael Niedermayer
0 siblings, 1 reply; 49+ messages in thread
From: Anton Khirnov @ 2023-11-30 13:34 UTC (permalink / raw)
To: FFmpeg development discussions and patches
Quoting Michael Niedermayer (2023-11-30 14:08:26)
> On Sat, Nov 25, 2023 at 09:32:06PM +0100, Anton Khirnov wrote:
> > Change the main loop and every component (demuxers, decoders, filters,
> > encoders, muxers) to use the previously added transcode scheduler. Every
> > instance of every such component was already running in a separate
> > thread, but now they can actually run in parallel.
> >
> > Changes the results of ffmpeg-fix_sub_duration_heartbeat - tested by
> > JEEB to be more correct and deterministic.
> > ---
> > fftools/ffmpeg.c | 374 +--------
> > fftools/ffmpeg.h | 97 +--
> > fftools/ffmpeg_dec.c | 321 ++------
> > fftools/ffmpeg_demux.c | 268 ++++---
> > fftools/ffmpeg_enc.c | 368 ++-------
> > fftools/ffmpeg_filter.c | 722 +++++-------------
> > fftools/ffmpeg_mux.c | 324 ++------
> > fftools/ffmpeg_mux.h | 24 +-
> > fftools/ffmpeg_mux_init.c | 88 +--
> > fftools/ffmpeg_opt.c | 6 +-
> > .../fate/ffmpeg-fix_sub_duration_heartbeat | 36 +-
> > 11 files changed, 598 insertions(+), 2030 deletions(-)
>
> I tried
> ./ffmpeg -f lavfi -i testsrc2 -bsf:v noise -bitexact -t 2 /tmp/.y4m
>
> with merged ffmpeg_threading into master
>
> and it gets stuck
>
> Stream #0:0 -> #0:0 (wrapped_avframe (native) -> wrapped_avframe (native))
> Press [q] to stop, [?] for help
> [noise @ 0x55e8fbaea340] Wrapped AVFrame noising is unsupported
> [vost#0:0/wrapped_avframe @ 0x55e8fbae9840] Error initializing bitstream filter: noise
> [vf#0:0 @ 0x55e8fbaea880] Error sending frames to consumers: Not yet implemented in FFmpeg, patches welcome
> [vf#0:0 @ 0x55e8fbaea880] Task finished with error code: -1163346256 (Not yet implemented in FFmpeg, patches welcome)
> [vf#0:0 @ 0x55e8fbaea880] Terminating thread with return code -1163346256 (Not yet implemented in FFmpeg, patches welcome)
Sorry, it seems I forgot to update my branch. Did that now, it should not
get stuck anymore.
--
Anton Khirnov
* Re: [FFmpeg-devel] [PATCH 13/13 v2] fftools/ffmpeg: convert to a threaded architecture
2023-11-30 13:34 ` Anton Khirnov
@ 2023-11-30 20:48 ` Michael Niedermayer
2023-12-01 11:15 ` [FFmpeg-devel] [PATCH 13/13 v3] " Anton Khirnov
0 siblings, 1 reply; 49+ messages in thread
From: Michael Niedermayer @ 2023-11-30 20:48 UTC (permalink / raw)
To: FFmpeg development discussions and patches
On Thu, Nov 30, 2023 at 02:34:59PM +0100, Anton Khirnov wrote:
> Quoting Michael Niedermayer (2023-11-30 14:08:26)
> > On Sat, Nov 25, 2023 at 09:32:06PM +0100, Anton Khirnov wrote:
> > > Change the main loop and every component (demuxers, decoders, filters,
> > > encoders, muxers) to use the previously added transcode scheduler. Every
> > > instance of every such component was already running in a separate
> > > thread, but now they can actually run in parallel.
> > >
> > > Changes the results of ffmpeg-fix_sub_duration_heartbeat - tested by
> > > JEEB to be more correct and deterministic.
> > > ---
> > > fftools/ffmpeg.c | 374 +--------
> > > fftools/ffmpeg.h | 97 +--
> > > fftools/ffmpeg_dec.c | 321 ++------
> > > fftools/ffmpeg_demux.c | 268 ++++---
> > > fftools/ffmpeg_enc.c | 368 ++-------
> > > fftools/ffmpeg_filter.c | 722 +++++-------------
> > > fftools/ffmpeg_mux.c | 324 ++------
> > > fftools/ffmpeg_mux.h | 24 +-
> > > fftools/ffmpeg_mux_init.c | 88 +--
> > > fftools/ffmpeg_opt.c | 6 +-
> > > .../fate/ffmpeg-fix_sub_duration_heartbeat | 36 +-
> > > 11 files changed, 598 insertions(+), 2030 deletions(-)
> >
> > I tried
> > ./ffmpeg -f lavfi -i testsrc2 -bsf:v noise -bitexact -t 2 /tmp/.y4m
> >
> > with merged ffmpeg_threading into master
> >
> > and it gets stuck
> >
> > Stream #0:0 -> #0:0 (wrapped_avframe (native) -> wrapped_avframe (native))
> > Press [q] to stop, [?] for help
> > [noise @ 0x55e8fbaea340] Wrapped AVFrame noising is unsupported
> > [vost#0:0/wrapped_avframe @ 0x55e8fbae9840] Error initializing bitstream filter: noise
> > [vf#0:0 @ 0x55e8fbaea880] Error sending frames to consumers: Not yet implemented in FFmpeg, patches welcome
> > [vf#0:0 @ 0x55e8fbaea880] Task finished with error code: -1163346256 (Not yet implemented in FFmpeg, patches welcome)
> > [vf#0:0 @ 0x55e8fbaea880] Terminating thread with return code -1163346256 (Not yet implemented in FFmpeg, patches welcome)
>
> Sorry, seems I forgot to update my branch. Did that now, it should not
> get suck anymore.
this still gets stuck:
./ffmpeg -y -i mm-short.mpg -af apad -shortest /tmp/.nut
[...]
thx
--
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
I know you won't believe me, but the highest form of Human Excellence is
to question oneself and others. -- Socrates
* [FFmpeg-devel] [PATCH 13/13 v3] fftools/ffmpeg: convert to a threaded architecture
2023-11-30 20:48 ` Michael Niedermayer
@ 2023-12-01 11:15 ` Anton Khirnov
2023-12-01 14:24 ` Nicolas George
0 siblings, 1 reply; 49+ messages in thread
From: Anton Khirnov @ 2023-12-01 11:15 UTC (permalink / raw)
To: ffmpeg-devel
Change the main loop and every component (demuxers, decoders, filters,
encoders, muxers) to use the previously added transcode scheduler. Every
instance of every such component was already running in a separate
thread, but now they can actually run in parallel.
Changes the results of ffmpeg-fix_sub_duration_heartbeat - tested by
JEEB to be more correct and deterministic.
---
Fixed the hang. Also updated the public branch.
---
fftools/ffmpeg.c | 374 +--------
fftools/ffmpeg.h | 97 +--
fftools/ffmpeg_dec.c | 321 ++------
fftools/ffmpeg_demux.c | 268 ++++---
fftools/ffmpeg_enc.c | 368 ++-------
fftools/ffmpeg_filter.c | 722 +++++-------------
fftools/ffmpeg_mux.c | 324 ++------
fftools/ffmpeg_mux.h | 24 +-
fftools/ffmpeg_mux_init.c | 88 +--
fftools/ffmpeg_opt.c | 6 +-
.../fate/ffmpeg-fix_sub_duration_heartbeat | 36 +-
11 files changed, 598 insertions(+), 2030 deletions(-)
diff --git a/fftools/ffmpeg.c b/fftools/ffmpeg.c
index b8a97258a0..30b594fd97 100644
--- a/fftools/ffmpeg.c
+++ b/fftools/ffmpeg.c
@@ -117,7 +117,7 @@ typedef struct BenchmarkTimeStamps {
static BenchmarkTimeStamps get_benchmark_time_stamps(void);
static int64_t getmaxrss(void);
-unsigned nb_output_dumped = 0;
+atomic_uint nb_output_dumped = 0;
static BenchmarkTimeStamps current_time;
AVIOContext *progress_avio = NULL;
@@ -138,30 +138,6 @@ static struct termios oldtty;
static int restore_tty;
#endif
-/* sub2video hack:
- Convert subtitles to video with alpha to insert them in filter graphs.
- This is a temporary solution until libavfilter gets real subtitles support.
- */
-
-static void sub2video_heartbeat(InputFile *infile, int64_t pts, AVRational tb)
-{
- /* When a frame is read from a file, examine all sub2video streams in
- the same file and send the sub2video frame again. Otherwise, decoded
- video frames could be accumulating in the filter graph while a filter
- (possibly overlay) is desperately waiting for a subtitle frame. */
- for (int i = 0; i < infile->nb_streams; i++) {
- InputStream *ist = infile->streams[i];
-
- if (ist->dec_ctx->codec_type != AVMEDIA_TYPE_SUBTITLE)
- continue;
-
- for (int j = 0; j < ist->nb_filters; j++)
- ifilter_sub2video_heartbeat(ist->filters[j], pts, tb);
- }
-}
-
-/* end of sub2video hack */
-
static void term_exit_sigsafe(void)
{
#if HAVE_TERMIOS_H
@@ -499,23 +475,13 @@ void update_benchmark(const char *fmt, ...)
}
}
-void close_output_stream(OutputStream *ost)
-{
- OutputFile *of = output_files[ost->file_index];
- ost->finished |= ENCODER_FINISHED;
-
- if (ost->sq_idx_encode >= 0)
- sq_send(of->sq_encode, ost->sq_idx_encode, SQFRAME(NULL));
-}
-
-static void print_report(int is_last_report, int64_t timer_start, int64_t cur_time)
+static void print_report(int is_last_report, int64_t timer_start, int64_t cur_time, int64_t pts)
{
AVBPrint buf, buf_script;
int64_t total_size = of_filesize(output_files[0]);
int vid;
double bitrate;
double speed;
- int64_t pts = AV_NOPTS_VALUE;
static int64_t last_time = -1;
static int first_report = 1;
uint64_t nb_frames_dup = 0, nb_frames_drop = 0;
@@ -533,7 +499,7 @@ static void print_report(int is_last_report, int64_t timer_start, int64_t cur_ti
last_time = cur_time;
}
if (((cur_time - last_time) < stats_period && !first_report) ||
- (first_report && nb_output_dumped < nb_output_files))
+ (first_report && atomic_load(&nb_output_dumped) < nb_output_files))
return;
last_time = cur_time;
}
@@ -544,7 +510,7 @@ static void print_report(int is_last_report, int64_t timer_start, int64_t cur_ti
av_bprint_init(&buf, 0, AV_BPRINT_SIZE_AUTOMATIC);
av_bprint_init(&buf_script, 0, AV_BPRINT_SIZE_AUTOMATIC);
for (OutputStream *ost = ost_iter(NULL); ost; ost = ost_iter(ost)) {
- const float q = ost->enc ? ost->quality / (float) FF_QP2LAMBDA : -1;
+ const float q = ost->enc ? atomic_load(&ost->quality) / (float) FF_QP2LAMBDA : -1;
if (vid && ost->type == AVMEDIA_TYPE_VIDEO) {
av_bprintf(&buf, "q=%2.1f ", q);
@@ -565,22 +531,18 @@ static void print_report(int is_last_report, int64_t timer_start, int64_t cur_ti
if (is_last_report)
av_bprintf(&buf, "L");
- nb_frames_dup = ost->filter->nb_frames_dup;
- nb_frames_drop = ost->filter->nb_frames_drop;
+ nb_frames_dup = atomic_load(&ost->filter->nb_frames_dup);
+ nb_frames_drop = atomic_load(&ost->filter->nb_frames_drop);
vid = 1;
}
- /* compute min output value */
- if (ost->last_mux_dts != AV_NOPTS_VALUE) {
- if (pts == AV_NOPTS_VALUE || ost->last_mux_dts > pts)
- pts = ost->last_mux_dts;
- if (copy_ts) {
- if (copy_ts_first_pts == AV_NOPTS_VALUE && pts > 1)
- copy_ts_first_pts = pts;
- if (copy_ts_first_pts != AV_NOPTS_VALUE)
- pts -= copy_ts_first_pts;
- }
- }
+ }
+
+ if (copy_ts) {
+ if (copy_ts_first_pts == AV_NOPTS_VALUE && pts > 1)
+ copy_ts_first_pts = pts;
+ if (copy_ts_first_pts != AV_NOPTS_VALUE)
+ pts -= copy_ts_first_pts;
}
us = FFABS64U(pts) % AV_TIME_BASE;
@@ -783,81 +745,6 @@ int subtitle_wrap_frame(AVFrame *frame, AVSubtitle *subtitle, int copy)
return 0;
}
-int trigger_fix_sub_duration_heartbeat(OutputStream *ost, const AVPacket *pkt)
-{
- OutputFile *of = output_files[ost->file_index];
- int64_t signal_pts = av_rescale_q(pkt->pts, pkt->time_base,
- AV_TIME_BASE_Q);
-
- if (!ost->fix_sub_duration_heartbeat || !(pkt->flags & AV_PKT_FLAG_KEY))
- // we are only interested in heartbeats on streams configured, and
- // only on random access points.
- return 0;
-
- for (int i = 0; i < of->nb_streams; i++) {
- OutputStream *iter_ost = of->streams[i];
- InputStream *ist = iter_ost->ist;
- int ret = AVERROR_BUG;
-
- if (iter_ost == ost || !ist || !ist->decoding_needed ||
- ist->dec_ctx->codec_type != AVMEDIA_TYPE_SUBTITLE)
- // We wish to skip the stream that causes the heartbeat,
- // output streams without an input stream, streams not decoded
- // (as fix_sub_duration is only done for decoded subtitles) as
- // well as non-subtitle streams.
- continue;
-
- if ((ret = fix_sub_duration_heartbeat(ist, signal_pts)) < 0)
- return ret;
- }
-
- return 0;
-}
-
-/* pkt = NULL means EOF (needed to flush decoder buffers) */
-static int process_input_packet(InputStream *ist, const AVPacket *pkt, int no_eof)
-{
- InputFile *f = input_files[ist->file_index];
- int64_t dts_est = AV_NOPTS_VALUE;
- int ret = 0;
- int eof_reached = 0;
-
- if (ist->decoding_needed) {
- ret = dec_packet(ist, pkt, no_eof);
- if (ret < 0 && ret != AVERROR_EOF)
- return ret;
- }
- if (ret == AVERROR_EOF || (!pkt && !ist->decoding_needed))
- eof_reached = 1;
-
- if (pkt && pkt->opaque_ref) {
- DemuxPktData *pd = (DemuxPktData*)pkt->opaque_ref->data;
- dts_est = pd->dts_est;
- }
-
- if (f->recording_time != INT64_MAX) {
- int64_t start_time = 0;
- if (copy_ts) {
- start_time += f->start_time != AV_NOPTS_VALUE ? f->start_time : 0;
- start_time += start_at_zero ? 0 : f->start_time_effective;
- }
- if (dts_est >= f->recording_time + start_time)
- pkt = NULL;
- }
-
- for (int oidx = 0; oidx < ist->nb_outputs; oidx++) {
- OutputStream *ost = ist->outputs[oidx];
- if (ost->enc || (!pkt && no_eof))
- continue;
-
- ret = of_streamcopy(ost, pkt, dts_est);
- if (ret < 0)
- return ret;
- }
-
- return !eof_reached;
-}
-
static void print_stream_maps(void)
{
av_log(NULL, AV_LOG_INFO, "Stream mapping:\n");
@@ -934,43 +821,6 @@ static void print_stream_maps(void)
}
}
-/**
- * Select the output stream to process.
- *
- * @retval 0 an output stream was selected
- * @retval AVERROR(EAGAIN) need to wait until more input is available
- * @retval AVERROR_EOF no more streams need output
- */
-static int choose_output(OutputStream **post)
-{
- int64_t opts_min = INT64_MAX;
- OutputStream *ost_min = NULL;
-
- for (OutputStream *ost = ost_iter(NULL); ost; ost = ost_iter(ost)) {
- int64_t opts;
-
- if (ost->filter && ost->filter->last_pts != AV_NOPTS_VALUE) {
- opts = ost->filter->last_pts;
- } else {
- opts = ost->last_mux_dts == AV_NOPTS_VALUE ?
- INT64_MIN : ost->last_mux_dts;
- }
-
- if (!ost->initialized && !ost->finished) {
- ost_min = ost;
- break;
- }
- if (!ost->finished && opts < opts_min) {
- opts_min = opts;
- ost_min = ost;
- }
- }
- if (!ost_min)
- return AVERROR_EOF;
- *post = ost_min;
- return ost_min->unavailable ? AVERROR(EAGAIN) : 0;
-}
-
static void set_tty_echo(int on)
{
#if HAVE_TERMIOS_H
@@ -1042,149 +892,21 @@ static int check_keyboard_interaction(int64_t cur_time)
return 0;
}
-static void reset_eagain(void)
-{
- for (OutputStream *ost = ost_iter(NULL); ost; ost = ost_iter(ost))
- ost->unavailable = 0;
-}
-
-static void decode_flush(InputFile *ifile)
-{
- for (int i = 0; i < ifile->nb_streams; i++) {
- InputStream *ist = ifile->streams[i];
-
- if (ist->discard || !ist->decoding_needed)
- continue;
-
- dec_packet(ist, NULL, 1);
- }
-}
-
-/*
- * Return
- * - 0 -- one packet was read and processed
- * - AVERROR(EAGAIN) -- no packets were available for selected file,
- * this function should be called again
- * - AVERROR_EOF -- this function should not be called again
- */
-static int process_input(int file_index, AVPacket *pkt)
-{
- InputFile *ifile = input_files[file_index];
- InputStream *ist;
- int ret, i;
-
- ret = ifile_get_packet(ifile, pkt);
-
- if (ret == 1) {
- /* the input file is looped: flush the decoders */
- decode_flush(ifile);
- return AVERROR(EAGAIN);
- }
- if (ret < 0) {
- if (ret != AVERROR_EOF) {
- av_log(ifile, AV_LOG_ERROR,
- "Error retrieving a packet from demuxer: %s\n", av_err2str(ret));
- if (exit_on_error)
- return ret;
- }
-
- for (i = 0; i < ifile->nb_streams; i++) {
- ist = ifile->streams[i];
- if (!ist->discard) {
- ret = process_input_packet(ist, NULL, 0);
- if (ret>0)
- return 0;
- else if (ret < 0)
- return ret;
- }
-
- /* mark all outputs that don't go through lavfi as finished */
- for (int oidx = 0; oidx < ist->nb_outputs; oidx++) {
- OutputStream *ost = ist->outputs[oidx];
- OutputFile *of = output_files[ost->file_index];
-
- ret = of_output_packet(of, ost, NULL);
- if (ret < 0)
- return ret;
- }
- }
-
- ifile->eof_reached = 1;
- return AVERROR(EAGAIN);
- }
-
- reset_eagain();
-
- ist = ifile->streams[pkt->stream_index];
-
- sub2video_heartbeat(ifile, pkt->pts, pkt->time_base);
-
- ret = process_input_packet(ist, pkt, 0);
-
- av_packet_unref(pkt);
-
- return ret < 0 ? ret : 0;
-}
-
-/**
- * Run a single step of transcoding.
- *
- * @return 0 for success, <0 for error
- */
-static int transcode_step(OutputStream *ost, AVPacket *demux_pkt)
-{
- InputStream *ist = NULL;
- int ret;
-
- if (ost->filter) {
- if ((ret = fg_transcode_step(ost->filter->graph, &ist)) < 0)
- return ret;
- if (!ist)
- return 0;
- } else {
- ist = ost->ist;
- av_assert0(ist);
- }
-
- ret = process_input(ist->file_index, demux_pkt);
- if (ret == AVERROR(EAGAIN)) {
- return 0;
- }
-
- if (ret < 0)
- return ret == AVERROR_EOF ? 0 : ret;
-
- // process_input() above might have caused output to become available
- // in multiple filtergraphs, so we process all of them
- for (int i = 0; i < nb_filtergraphs; i++) {
- ret = reap_filters(filtergraphs[i], 0);
- if (ret < 0)
- return ret;
- }
-
- return 0;
-}
-
/*
* The following code is the main loop of the file converter
*/
-static int transcode(Scheduler *sch, int *err_rate_exceeded)
+static int transcode(Scheduler *sch)
{
int ret = 0, i;
- InputStream *ist;
- int64_t timer_start;
- AVPacket *demux_pkt = NULL;
+ int64_t timer_start, transcode_ts = 0;
print_stream_maps();
- *err_rate_exceeded = 0;
atomic_store(&transcode_init_done, 1);
- demux_pkt = av_packet_alloc();
- if (!demux_pkt) {
- ret = AVERROR(ENOMEM);
- goto fail;
- }
+ ret = sch_start(sch);
+ if (ret < 0)
+ return ret;
if (stdin_interaction) {
av_log(NULL, AV_LOG_INFO, "Press [q] to stop, [?] for help\n");
@@ -1192,8 +914,7 @@ static int transcode(Scheduler *sch, int *err_rate_exceeded)
timer_start = av_gettime_relative();
- while (!received_sigterm) {
- OutputStream *ost;
+ while (!sch_wait(sch, stats_period, &transcode_ts)) {
int64_t cur_time= av_gettime_relative();
/* if 'q' pressed, exits */
@@ -1201,49 +922,11 @@ static int transcode(Scheduler *sch, int *err_rate_exceeded)
if (check_keyboard_interaction(cur_time) < 0)
break;
- ret = choose_output(&ost);
- if (ret == AVERROR(EAGAIN)) {
- reset_eagain();
- av_usleep(10000);
- ret = 0;
- continue;
- } else if (ret < 0) {
- av_log(NULL, AV_LOG_VERBOSE, "No more output streams to write to, finishing.\n");
- ret = 0;
- break;
- }
-
- ret = transcode_step(ost, demux_pkt);
- if (ret < 0 && ret != AVERROR_EOF) {
- av_log(NULL, AV_LOG_ERROR, "Error while filtering: %s\n", av_err2str(ret));
- break;
- }
-
/* dump report by using the output first video and audio streams */
- print_report(0, timer_start, cur_time);
+ print_report(0, timer_start, cur_time, transcode_ts);
}
- /* at the end of stream, we must flush the decoder buffers */
- for (ist = ist_iter(NULL); ist; ist = ist_iter(ist)) {
- float err_rate;
-
- if (!input_files[ist->file_index]->eof_reached) {
- int err = process_input_packet(ist, NULL, 0);
- ret = err_merge(ret, err);
- }
-
- err_rate = (ist->frames_decoded || ist->decode_errors) ?
- ist->decode_errors / (ist->frames_decoded + ist->decode_errors) : 0.f;
- if (err_rate > max_error_rate) {
- av_log(ist, AV_LOG_FATAL, "Decode error rate %g exceeds maximum %g\n",
- err_rate, max_error_rate);
- *err_rate_exceeded = 1;
- } else if (err_rate)
- av_log(ist, AV_LOG_VERBOSE, "Decode error rate %g\n", err_rate);
- }
- ret = err_merge(ret, enc_flush());
-
- term_exit();
+ ret = sch_stop(sch);
/* write the trailer if needed */
for (i = 0; i < nb_output_files; i++) {
@@ -1251,11 +934,10 @@ static int transcode(Scheduler *sch, int *err_rate_exceeded)
ret = err_merge(ret, err);
}
- /* dump report by using the first video and audio streams */
- print_report(1, timer_start, av_gettime_relative());
+ term_exit();
-fail:
- av_packet_free(&demux_pkt);
+ /* dump report by using the first video and audio streams */
+ print_report(1, timer_start, av_gettime_relative(), transcode_ts);
return ret;
}
@@ -1308,7 +990,7 @@ int main(int argc, char **argv)
{
Scheduler *sch = NULL;
- int ret, err_rate_exceeded;
+ int ret;
BenchmarkTimeStamps ti;
init_dynload();
@@ -1350,7 +1032,7 @@ int main(int argc, char **argv)
}
current_time = ti = get_benchmark_time_stamps();
- ret = transcode(sch, &err_rate_exceeded);
+ ret = transcode(sch);
if (ret >= 0 && do_benchmark) {
int64_t utime, stime, rtime;
current_time = get_benchmark_time_stamps();
@@ -1362,8 +1044,8 @@ int main(int argc, char **argv)
utime / 1000000.0, stime / 1000000.0, rtime / 1000000.0);
}
- ret = received_nb_signals ? 255 :
- err_rate_exceeded ? 69 : ret;
+ ret = received_nb_signals ? 255 :
+ (ret == FFMPEG_ERROR_RATE_EXCEEDED) ? 69 : ret;
finish:
if (ret == AVERROR_EXIT)
diff --git a/fftools/ffmpeg.h b/fftools/ffmpeg.h
index a89038b765..ba82b7490d 100644
--- a/fftools/ffmpeg.h
+++ b/fftools/ffmpeg.h
@@ -61,6 +61,8 @@
#define FFMPEG_OPT_TOP 1
#define FFMPEG_OPT_FORCE_KF_SOURCE_NO_DROP 1
+#define FFMPEG_ERROR_RATE_EXCEEDED FFERRTAG('E', 'R', 'E', 'D')
+
enum VideoSyncMethod {
VSYNC_AUTO = -1,
VSYNC_PASSTHROUGH,
@@ -82,13 +84,16 @@ enum HWAccelID {
};
enum FrameOpaque {
- FRAME_OPAQUE_REAP_FILTERS = 1,
- FRAME_OPAQUE_CHOOSE_INPUT,
- FRAME_OPAQUE_SUB_HEARTBEAT,
+ FRAME_OPAQUE_SUB_HEARTBEAT = 1,
FRAME_OPAQUE_EOF,
FRAME_OPAQUE_SEND_COMMAND,
};
+enum PacketOpaque {
+ PKT_OPAQUE_SUB_HEARTBEAT = 1,
+ PKT_OPAQUE_FIX_SUB_DURATION,
+};
+
typedef struct HWDevice {
const char *name;
enum AVHWDeviceType type;
@@ -309,11 +314,8 @@ typedef struct OutputFilter {
enum AVMediaType type;
- /* pts of the last frame received from this filter, in AV_TIME_BASE_Q */
- int64_t last_pts;
-
- uint64_t nb_frames_dup;
- uint64_t nb_frames_drop;
+ atomic_uint_least64_t nb_frames_dup;
+ atomic_uint_least64_t nb_frames_drop;
} OutputFilter;
typedef struct FilterGraph {
@@ -426,11 +428,6 @@ typedef struct InputFile {
float readrate;
int accurate_seek;
-
- /* when looping the input file, this queue is used by decoders to report
- * the last frame timestamp back to the demuxer thread */
- AVThreadMessageQueue *audio_ts_queue;
- int audio_ts_queue_size;
} InputFile;
enum forced_keyframes_const {
@@ -532,8 +529,6 @@ typedef struct OutputStream {
InputStream *ist;
AVStream *st; /* stream in the output file */
- /* dts of the last packet sent to the muxing queue, in AV_TIME_BASE_Q */
- int64_t last_mux_dts;
AVRational enc_timebase;
@@ -578,13 +573,6 @@ typedef struct OutputStream {
AVDictionary *sws_dict;
AVDictionary *swr_opts;
char *apad;
- OSTFinished finished; /* no more packets should be written for this stream */
- int unavailable; /* true if the steram is unavailable (possibly temporarily) */
-
- // init_output_stream() has been called for this stream
- // The encoder and the bitstream filters have been initialized and the stream
- // parameters are set in the AVStream.
- int initialized;
const char *attachment_filename;
@@ -598,9 +586,8 @@ typedef struct OutputStream {
uint64_t samples_encoded;
/* packet quality factor */
- int quality;
+ atomic_int quality;
- int sq_idx_encode;
int sq_idx_mux;
EncStats enc_stats_pre;
@@ -658,7 +645,6 @@ extern FilterGraph **filtergraphs;
extern int nb_filtergraphs;
extern char *vstats_filename;
-extern char *sdp_filename;
extern float dts_delta_threshold;
extern float dts_error_threshold;
@@ -691,7 +677,7 @@ extern const AVIOInterruptCB int_cb;
extern const OptionDef options[];
extern HWDevice *filter_hw_device;
-extern unsigned nb_output_dumped;
+extern atomic_uint nb_output_dumped;
extern int ignore_unknown_streams;
extern int copy_unknown_streams;
@@ -737,10 +723,6 @@ FrameData *frame_data(AVFrame *frame);
const FrameData *frame_data_c(AVFrame *frame);
-int ifilter_send_frame(InputFilter *ifilter, AVFrame *frame, int keep_reference);
-int ifilter_send_eof(InputFilter *ifilter, int64_t pts, AVRational tb);
-void ifilter_sub2video_heartbeat(InputFilter *ifilter, int64_t pts, AVRational tb);
-
/**
* Set up fallback filtering parameters from a decoder context. They will only
* be used if no frames are ever sent on this input, otherwise the actual
@@ -761,26 +743,9 @@ int fg_create(FilterGraph **pfg, char *graph_desc, Scheduler *sch);
void fg_free(FilterGraph **pfg);
-/**
- * Perform a step of transcoding for the specified filter graph.
- *
- * @param[in] graph filter graph to consider
- * @param[out] best_ist input stream where a frame would allow to continue
- * @return 0 for success, <0 for error
- */
-int fg_transcode_step(FilterGraph *graph, InputStream **best_ist);
-
void fg_send_command(FilterGraph *fg, double time, const char *target,
const char *command, const char *arg, int all_filters);
-/**
- * Get and encode new output from specified filtergraph, without causing
- * activity.
- *
- * @return 0 for success, <0 for severe errors
- */
-int reap_filters(FilterGraph *fg, int flush);
-
int ffmpeg_parse_options(int argc, char **argv, Scheduler *sch);
void enc_stats_write(OutputStream *ost, EncStats *es,
@@ -807,25 +772,11 @@ int hwaccel_retrieve_data(AVCodecContext *avctx, AVFrame *input);
int dec_open(InputStream *ist, Scheduler *sch, unsigned sch_idx);
void dec_free(Decoder **pdec);
-/**
- * Submit a packet for decoding
- *
- * When pkt==NULL and no_eof=0, there will be no more input. Flush decoders and
- * mark all downstreams as finished.
- *
- * When pkt==NULL and no_eof=1, the stream was reset (e.g. after a seek). Flush
- * decoders and await further input.
- */
-int dec_packet(InputStream *ist, const AVPacket *pkt, int no_eof);
-
int enc_alloc(Encoder **penc, const AVCodec *codec,
Scheduler *sch, unsigned sch_idx);
void enc_free(Encoder **penc);
-int enc_open(OutputStream *ost, const AVFrame *frame);
-int enc_subtitle(OutputFile *of, OutputStream *ost, const AVSubtitle *sub);
-int enc_frame(OutputStream *ost, AVFrame *frame);
-int enc_flush(void);
+int enc_open(void *opaque, const AVFrame *frame);
/*
* Initialize muxing state for the given stream, should be called
@@ -840,30 +791,11 @@ void of_free(OutputFile **pof);
void of_enc_stats_close(void);
-int of_output_packet(OutputFile *of, OutputStream *ost, AVPacket *pkt);
-
-/**
- * @param dts predicted packet dts in AV_TIME_BASE_Q
- */
-int of_streamcopy(OutputStream *ost, const AVPacket *pkt, int64_t dts);
-
int64_t of_filesize(OutputFile *of);
int ifile_open(const OptionsContext *o, const char *filename, Scheduler *sch);
void ifile_close(InputFile **f);
-/**
- * Get next input packet from the demuxer.
- *
- * @param pkt the packet is written here when this function returns 0
- * @return
- * - 0 when a packet has been read successfully
- * - 1 when stream end was reached, but the stream is looped;
- * caller should flush decoders and read from this demuxer again
- * - a negative error code on failure
- */
-int ifile_get_packet(InputFile *f, AVPacket *pkt);
-
int ist_output_add(InputStream *ist, OutputStream *ost);
int ist_filter_add(InputStream *ist, InputFilter *ifilter, int is_simple);
@@ -880,9 +812,6 @@ InputStream *ist_iter(InputStream *prev);
* pass NULL to start iteration */
OutputStream *ost_iter(OutputStream *prev);
-void close_output_stream(OutputStream *ost);
-int trigger_fix_sub_duration_heartbeat(OutputStream *ost, const AVPacket *pkt);
-int fix_sub_duration_heartbeat(InputStream *ist, int64_t signal_pts);
void update_benchmark(const char *fmt, ...);
#define SPECIFIER_OPT_FMT_str "%s"
diff --git a/fftools/ffmpeg_dec.c b/fftools/ffmpeg_dec.c
index 90ea0d6d93..5dde82a276 100644
--- a/fftools/ffmpeg_dec.c
+++ b/fftools/ffmpeg_dec.c
@@ -54,24 +54,6 @@ struct Decoder {
Scheduler *sch;
unsigned sch_idx;
-
- pthread_t thread;
- /**
- * Queue for sending coded packets from the main thread to
- * the decoder thread.
- *
- * An empty packet is sent to flush the decoder without terminating
- * decoding.
- */
- ThreadQueue *queue_in;
- /**
- * Queue for sending decoded frames from the decoder thread
- * to the main thread.
- *
- * An empty frame is sent to signal that a single packet has been fully
- * processed.
- */
- ThreadQueue *queue_out;
};
// data that is local to the decoder thread and not visible outside of it
@@ -80,24 +62,6 @@ typedef struct DecThreadContext {
AVPacket *pkt;
} DecThreadContext;
-static int dec_thread_stop(Decoder *d)
-{
- void *ret;
-
- if (!d->queue_in)
- return 0;
-
- tq_send_finish(d->queue_in, 0);
- tq_receive_finish(d->queue_out, 0);
-
- pthread_join(d->thread, &ret);
-
- tq_free(&d->queue_in);
- tq_free(&d->queue_out);
-
- return (intptr_t)ret;
-}
-
void dec_free(Decoder **pdec)
{
Decoder *dec = *pdec;
@@ -105,8 +69,6 @@ void dec_free(Decoder **pdec)
if (!dec)
return;
- dec_thread_stop(dec);
-
av_frame_free(&dec->frame);
av_packet_free(&dec->pkt);
@@ -148,25 +110,6 @@ fail:
return AVERROR(ENOMEM);
}
-static int send_frame_to_filters(InputStream *ist, AVFrame *decoded_frame)
-{
- int i, ret = 0;
-
- for (i = 0; i < ist->nb_filters; i++) {
- ret = ifilter_send_frame(ist->filters[i], decoded_frame,
- i < ist->nb_filters - 1 ||
- ist->dec->type == AVMEDIA_TYPE_SUBTITLE);
- if (ret == AVERROR_EOF)
- ret = 0; /* ignore */
- if (ret < 0) {
- av_log(NULL, AV_LOG_ERROR,
- "Failed to inject frame into filter network: %s\n", av_err2str(ret));
- break;
- }
- }
- return ret;
-}
-
static AVRational audio_samplerate_update(void *logctx, Decoder *d,
const AVFrame *frame)
{
@@ -421,28 +364,14 @@ static int process_subtitle(InputStream *ist, AVFrame *frame)
if (!subtitle)
return 0;
- ret = send_frame_to_filters(ist, frame);
+ ret = sch_dec_send(d->sch, d->sch_idx, frame);
if (ret < 0)
- return ret;
+ av_frame_unref(frame);
- subtitle = (AVSubtitle*)frame->buf[0]->data;
- if (!subtitle->num_rects)
- return 0;
-
- for (int oidx = 0; oidx < ist->nb_outputs; oidx++) {
- OutputStream *ost = ist->outputs[oidx];
- if (!ost->enc || ost->type != AVMEDIA_TYPE_SUBTITLE)
- continue;
-
- ret = enc_subtitle(output_files[ost->file_index], ost, subtitle);
- if (ret < 0)
- return ret;
- }
-
- return 0;
+ return ret == AVERROR_EOF ? AVERROR_EXIT : ret;
}
-int fix_sub_duration_heartbeat(InputStream *ist, int64_t signal_pts)
+static int fix_sub_duration_heartbeat(InputStream *ist, int64_t signal_pts)
{
Decoder *d = ist->decoder;
int ret = AVERROR_BUG;
@@ -468,12 +397,24 @@ int fix_sub_duration_heartbeat(InputStream *ist, int64_t signal_pts)
static int transcode_subtitles(InputStream *ist, const AVPacket *pkt,
AVFrame *frame)
{
- Decoder *d = ist->decoder;
+ Decoder *d = ist->decoder;
AVPacket *flush_pkt = NULL;
AVSubtitle subtitle;
int got_output;
int ret;
+ if (pkt && (intptr_t)pkt->opaque == PKT_OPAQUE_SUB_HEARTBEAT) {
+ frame->pts = pkt->pts;
+ frame->time_base = pkt->time_base;
+ frame->opaque = (void*)(intptr_t)FRAME_OPAQUE_SUB_HEARTBEAT;
+
+ ret = sch_dec_send(d->sch, d->sch_idx, frame);
+ return ret == AVERROR_EOF ? AVERROR_EXIT : ret;
+ } else if (pkt && (intptr_t)pkt->opaque == PKT_OPAQUE_FIX_SUB_DURATION) {
+ return fix_sub_duration_heartbeat(ist, av_rescale_q(pkt->pts, pkt->time_base,
+ AV_TIME_BASE_Q));
+ }
+
if (!pkt) {
flush_pkt = av_packet_alloc();
if (!flush_pkt)
@@ -496,7 +437,7 @@ static int transcode_subtitles(InputStream *ist, const AVPacket *pkt,
ist->frames_decoded++;
- // XXX the queue for transferring data back to the main thread runs
+ // XXX the queue for transferring data to consumers runs
// on AVFrames, so we wrap AVSubtitle in an AVBufferRef and put that
// inside the frame
// eventually, subtitles should be switched to use AVFrames natively
@@ -509,26 +450,7 @@ static int transcode_subtitles(InputStream *ist, const AVPacket *pkt,
frame->width = ist->dec_ctx->width;
frame->height = ist->dec_ctx->height;
- ret = tq_send(d->queue_out, 0, frame);
- if (ret < 0)
- av_frame_unref(frame);
-
- return ret;
-}
-
-static int send_filter_eof(InputStream *ist)
-{
- Decoder *d = ist->decoder;
- int i, ret;
-
- for (i = 0; i < ist->nb_filters; i++) {
- int64_t end_pts = d->last_frame_pts == AV_NOPTS_VALUE ? AV_NOPTS_VALUE :
- d->last_frame_pts + d->last_frame_duration_est;
- ret = ifilter_send_eof(ist->filters[i], end_pts, d->last_frame_tb);
- if (ret < 0)
- return ret;
- }
- return 0;
+ return process_subtitle(ist, frame);
}
static int packet_decode(InputStream *ist, AVPacket *pkt, AVFrame *frame)
@@ -635,9 +557,11 @@ static int packet_decode(InputStream *ist, AVPacket *pkt, AVFrame *frame)
ist->frames_decoded++;
- ret = tq_send(d->queue_out, 0, frame);
- if (ret < 0)
- return ret;
+ ret = sch_dec_send(d->sch, d->sch_idx, frame);
+ if (ret < 0) {
+ av_frame_unref(frame);
+ return ret == AVERROR_EOF ? AVERROR_EXIT : ret;
+ }
}
}
@@ -679,7 +603,6 @@ fail:
void *decoder_thread(void *arg)
{
InputStream *ist = arg;
- InputFile *ifile = input_files[ist->file_index];
Decoder *d = ist->decoder;
DecThreadContext dt;
int ret = 0, input_status = 0;
@@ -691,19 +614,31 @@ void *decoder_thread(void *arg)
dec_thread_set_name(ist);
while (!input_status) {
- int dummy, flush_buffers;
+ int flush_buffers, have_data;
- input_status = tq_receive(d->queue_in, &dummy, dt.pkt);
- flush_buffers = input_status >= 0 && !dt.pkt->buf;
- if (!dt.pkt->buf)
+ input_status = sch_dec_receive(d->sch, d->sch_idx, dt.pkt);
+ have_data = input_status >= 0 &&
+ (dt.pkt->buf || dt.pkt->side_data_elems ||
+ (intptr_t)dt.pkt->opaque == PKT_OPAQUE_SUB_HEARTBEAT ||
+ (intptr_t)dt.pkt->opaque == PKT_OPAQUE_FIX_SUB_DURATION);
+ flush_buffers = input_status >= 0 && !have_data;
+ if (!have_data)
av_log(ist, AV_LOG_VERBOSE, "Decoder thread received %s packet\n",
flush_buffers ? "flush" : "EOF");
- ret = packet_decode(ist, dt.pkt->buf ? dt.pkt : NULL, dt.frame);
+ ret = packet_decode(ist, have_data ? dt.pkt : NULL, dt.frame);
av_packet_unref(dt.pkt);
av_frame_unref(dt.frame);
+ // AVERROR_EOF - EOF from the decoder
+ // AVERROR_EXIT - EOF from the scheduler
+ // we treat them differently when flushing
+ if (ret == AVERROR_EXIT) {
+ ret = AVERROR_EOF;
+ flush_buffers = 0;
+ }
+
if (ret == AVERROR_EOF) {
av_log(ist, AV_LOG_VERBOSE, "Decoder returned EOF, %s\n",
flush_buffers ? "resetting" : "finishing");
@@ -711,11 +646,10 @@ void *decoder_thread(void *arg)
if (!flush_buffers)
break;
- /* report last frame duration to the demuxer thread */
+ /* report last frame duration to the scheduler */
if (ist->dec->type == AVMEDIA_TYPE_AUDIO) {
- Timestamp ts = { .ts = d->last_frame_pts + d->last_frame_duration_est,
- .tb = d->last_frame_tb };
- av_thread_message_queue_send(ifile->audio_ts_queue, &ts, 0);
+ dt.pkt->pts = d->last_frame_pts + d->last_frame_duration_est;
+ dt.pkt->time_base = d->last_frame_tb;
}
avcodec_flush_buffers(ist->dec_ctx);
@@ -724,149 +658,47 @@ void *decoder_thread(void *arg)
av_err2str(ret));
break;
}
-
- // signal to the consumer thread that the entire packet was processed
- ret = tq_send(d->queue_out, 0, dt.frame);
- if (ret < 0) {
- if (ret != AVERROR_EOF)
- av_log(ist, AV_LOG_ERROR, "Error communicating with the main thread\n");
- break;
- }
}
// EOF is normal thread termination
if (ret == AVERROR_EOF)
ret = 0;
+ // on success send EOF timestamp to our downstreams
+ if (ret >= 0) {
+ float err_rate;
+
+ av_frame_unref(dt.frame);
+
+ dt.frame->opaque = (void*)(intptr_t)FRAME_OPAQUE_EOF;
+ dt.frame->pts = d->last_frame_pts == AV_NOPTS_VALUE ? AV_NOPTS_VALUE :
+ d->last_frame_pts + d->last_frame_duration_est;
+ dt.frame->time_base = d->last_frame_tb;
+
+ ret = sch_dec_send(d->sch, d->sch_idx, dt.frame);
+ if (ret < 0 && ret != AVERROR_EOF) {
+ av_log(NULL, AV_LOG_FATAL,
+ "Error signalling EOF timestamp: %s\n", av_err2str(ret));
+ goto finish;
+ }
+ ret = 0;
+
+ err_rate = (ist->frames_decoded || ist->decode_errors) ?
+ ist->decode_errors / (float)(ist->frames_decoded + ist->decode_errors) : 0.f;
+ if (err_rate > max_error_rate) {
+ av_log(ist, AV_LOG_FATAL, "Decode error rate %g exceeds maximum %g\n",
+ err_rate, max_error_rate);
+ ret = FFMPEG_ERROR_RATE_EXCEEDED;
+ } else if (err_rate)
+ av_log(ist, AV_LOG_VERBOSE, "Decode error rate %g\n", err_rate);
+ }
+
finish:
- tq_receive_finish(d->queue_in, 0);
- tq_send_finish (d->queue_out, 0);
-
- // make sure the demuxer does not get stuck waiting for audio durations
- // that will never arrive
- if (ifile->audio_ts_queue && ist->dec->type == AVMEDIA_TYPE_AUDIO)
- av_thread_message_queue_set_err_recv(ifile->audio_ts_queue, AVERROR_EOF);
-
dec_thread_uninit(&dt);
- av_log(ist, AV_LOG_VERBOSE, "Terminating decoder thread\n");
-
return (void*)(intptr_t)ret;
}
-int dec_packet(InputStream *ist, const AVPacket *pkt, int no_eof)
-{
- Decoder *d = ist->decoder;
- int ret = 0, thread_ret;
-
- // thread already joined
- if (!d->queue_in)
- return AVERROR_EOF;
-
- // send the packet/flush request/EOF to the decoder thread
- if (pkt || no_eof) {
- av_packet_unref(d->pkt);
-
- if (pkt) {
- ret = av_packet_ref(d->pkt, pkt);
- if (ret < 0)
- goto finish;
- }
-
- ret = tq_send(d->queue_in, 0, d->pkt);
- if (ret < 0)
- goto finish;
- } else
- tq_send_finish(d->queue_in, 0);
-
- // retrieve all decoded data for the packet
- while (1) {
- int dummy;
-
- ret = tq_receive(d->queue_out, &dummy, d->frame);
- if (ret < 0)
- goto finish;
-
- // packet fully processed
- if (!d->frame->buf[0])
- return 0;
-
- // process the decoded frame
- if (ist->dec->type == AVMEDIA_TYPE_SUBTITLE) {
- ret = process_subtitle(ist, d->frame);
- } else {
- ret = send_frame_to_filters(ist, d->frame);
- }
- av_frame_unref(d->frame);
- if (ret < 0)
- goto finish;
- }
-
-finish:
- thread_ret = dec_thread_stop(d);
- if (thread_ret < 0) {
- av_log(ist, AV_LOG_ERROR, "Decoder thread returned error: %s\n",
- av_err2str(thread_ret));
- ret = err_merge(ret, thread_ret);
- }
- // non-EOF errors here are all fatal
- if (ret < 0 && ret != AVERROR_EOF)
- return ret;
-
- // signal EOF to our downstreams
- ret = send_filter_eof(ist);
- if (ret < 0) {
- av_log(NULL, AV_LOG_FATAL, "Error marking filters as finished\n");
- return ret;
- }
-
- return AVERROR_EOF;
-}
-
-static int dec_thread_start(InputStream *ist)
-{
- Decoder *d = ist->decoder;
- ObjPool *op;
- int ret = 0;
-
- op = objpool_alloc_packets();
- if (!op)
- return AVERROR(ENOMEM);
-
- d->queue_in = tq_alloc(1, 1, op, pkt_move);
- if (!d->queue_in) {
- objpool_free(&op);
- return AVERROR(ENOMEM);
- }
-
- op = objpool_alloc_frames();
- if (!op)
- goto fail;
-
- d->queue_out = tq_alloc(1, 4, op, frame_move);
- if (!d->queue_out) {
- objpool_free(&op);
- goto fail;
- }
-
- ret = pthread_create(&d->thread, NULL, decoder_thread, ist);
- if (ret) {
- ret = AVERROR(ret);
- av_log(ist, AV_LOG_ERROR, "pthread_create() failed: %s\n",
- av_err2str(ret));
- goto fail;
- }
-
- return 0;
-fail:
- if (ret >= 0)
- ret = AVERROR(ENOMEM);
-
- tq_free(&d->queue_in);
- tq_free(&d->queue_out);
- return ret;
-}
-
static enum AVPixelFormat get_format(AVCodecContext *s, const enum AVPixelFormat *pix_fmts)
{
InputStream *ist = s->opaque;
@@ -1118,12 +950,5 @@ int dec_open(InputStream *ist, Scheduler *sch, unsigned sch_idx)
if (ret < 0)
return ret;
- ret = dec_thread_start(ist);
- if (ret < 0) {
- av_log(ist, AV_LOG_ERROR, "Error starting decoder thread: %s\n",
- av_err2str(ret));
- return ret;
- }
-
return 0;
}
diff --git a/fftools/ffmpeg_demux.c b/fftools/ffmpeg_demux.c
index 2234dbe076..91cd7a1125 100644
--- a/fftools/ffmpeg_demux.c
+++ b/fftools/ffmpeg_demux.c
@@ -22,8 +22,6 @@
#include "ffmpeg.h"
#include "ffmpeg_sched.h"
#include "ffmpeg_utils.h"
-#include "objpool.h"
-#include "thread_queue.h"
#include "libavutil/avassert.h"
#include "libavutil/avstring.h"
@@ -35,7 +33,6 @@
#include "libavutil/pixdesc.h"
#include "libavutil/time.h"
#include "libavutil/timestamp.h"
-#include "libavutil/thread.h"
#include "libavcodec/packet.h"
@@ -66,7 +63,11 @@ typedef struct DemuxStream {
double ts_scale;
+ // scheduler returned EOF for this stream
+ int finished;
+
int streamcopy_needed;
+ int have_sub2video;
int wrap_correction_done;
int saw_first_ts;
@@ -101,6 +102,7 @@ typedef struct Demuxer {
/* number of times input stream should be looped */
int loop;
+ int have_audio_dec;
/* duration of the looped segment of the input file */
Timestamp duration;
/* pts with the smallest/largest values ever seen */
@@ -113,11 +115,12 @@ typedef struct Demuxer {
double readrate_initial_burst;
Scheduler *sch;
- ThreadQueue *thread_queue;
- int thread_queue_size;
- pthread_t thread;
+
+ AVPacket *pkt_heartbeat;
int read_started;
+ int nb_streams_used;
+ int nb_streams_finished;
} Demuxer;
static DemuxStream *ds_from_ist(InputStream *ist)
@@ -153,7 +156,7 @@ static void report_new_stream(Demuxer *d, const AVPacket *pkt)
d->nb_streams_warn = pkt->stream_index + 1;
}
-static int seek_to_start(Demuxer *d)
+static int seek_to_start(Demuxer *d, Timestamp end_pts)
{
InputFile *ifile = &d->f;
AVFormatContext *is = ifile->ctx;
@@ -163,21 +166,10 @@ static int seek_to_start(Demuxer *d)
if (ret < 0)
return ret;
- if (ifile->audio_ts_queue_size) {
- int got_ts = 0;
-
- while (got_ts < ifile->audio_ts_queue_size) {
- Timestamp ts;
- ret = av_thread_message_queue_recv(ifile->audio_ts_queue, &ts, 0);
- if (ret < 0)
- return ret;
- got_ts++;
-
- if (d->max_pts.ts == AV_NOPTS_VALUE ||
- av_compare_ts(d->max_pts.ts, d->max_pts.tb, ts.ts, ts.tb) < 0)
- d->max_pts = ts;
- }
- }
+ if (end_pts.ts != AV_NOPTS_VALUE &&
+ (d->max_pts.ts == AV_NOPTS_VALUE ||
+ av_compare_ts(d->max_pts.ts, d->max_pts.tb, end_pts.ts, end_pts.tb) < 0))
+ d->max_pts = end_pts;
if (d->max_pts.ts != AV_NOPTS_VALUE) {
int64_t min_pts = d->min_pts.ts == AV_NOPTS_VALUE ? 0 : d->min_pts.ts;
@@ -404,7 +396,7 @@ static int ts_fixup(Demuxer *d, AVPacket *pkt)
duration = av_rescale_q(d->duration.ts, d->duration.tb, pkt->time_base);
if (pkt->pts != AV_NOPTS_VALUE) {
// audio decoders take precedence for estimating total file duration
- int64_t pkt_duration = ifile->audio_ts_queue_size ? 0 : pkt->duration;
+ int64_t pkt_duration = d->have_audio_dec ? 0 : pkt->duration;
pkt->pts += duration;
@@ -440,7 +432,7 @@ static int ts_fixup(Demuxer *d, AVPacket *pkt)
return 0;
}
-static int input_packet_process(Demuxer *d, AVPacket *pkt)
+static int input_packet_process(Demuxer *d, AVPacket *pkt, unsigned *send_flags)
{
InputFile *f = &d->f;
InputStream *ist = f->streams[pkt->stream_index];
@@ -451,6 +443,16 @@ static int input_packet_process(Demuxer *d, AVPacket *pkt)
if (ret < 0)
return ret;
+ if (f->recording_time != INT64_MAX) {
+ int64_t start_time = 0;
+ if (copy_ts) {
+ start_time += f->start_time != AV_NOPTS_VALUE ? f->start_time : 0;
+ start_time += start_at_zero ? 0 : f->start_time_effective;
+ }
+ if (ds->dts >= f->recording_time + start_time)
+ *send_flags |= DEMUX_SEND_STREAMCOPY_EOF;
+ }
+
ds->data_size += pkt->size;
ds->nb_packets++;
@@ -465,6 +467,8 @@ static int input_packet_process(Demuxer *d, AVPacket *pkt)
av_ts2timestr(input_files[ist->file_index]->ts_offset, &AV_TIME_BASE_Q));
}
+ pkt->stream_index = ds->sch_idx_stream;
+
return 0;
}
@@ -488,6 +492,65 @@ static void readrate_sleep(Demuxer *d)
}
}
+static int do_send(Demuxer *d, DemuxStream *ds, AVPacket *pkt, unsigned flags,
+ const char *pkt_desc)
+{
+ int ret;
+
+ ret = sch_demux_send(d->sch, d->f.index, pkt, flags);
+ if (ret == AVERROR_EOF) {
+ av_packet_unref(pkt);
+
+ av_log(ds, AV_LOG_VERBOSE, "All consumers of this stream are done\n");
+ ds->finished = 1;
+
+ if (++d->nb_streams_finished == d->nb_streams_used) {
+ av_log(d, AV_LOG_VERBOSE, "All consumers are done\n");
+ return AVERROR_EOF;
+ }
+ } else if (ret < 0) {
+ if (ret != AVERROR_EXIT)
+ av_log(d, AV_LOG_ERROR,
+ "Unable to send %s packet to consumers: %s\n",
+ pkt_desc, av_err2str(ret));
+ return ret;
+ }
+
+ return 0;
+}
+
+static int demux_send(Demuxer *d, DemuxStream *ds, AVPacket *pkt, unsigned flags)
+{
+ InputFile *f = &d->f;
+ int ret;
+
+ // send heartbeat for sub2video streams
+ if (d->pkt_heartbeat && pkt->pts != AV_NOPTS_VALUE) {
+ for (int i = 0; i < f->nb_streams; i++) {
+ DemuxStream *ds1 = ds_from_ist(f->streams[i]);
+
+ if (ds1->finished || !ds1->have_sub2video)
+ continue;
+
+ d->pkt_heartbeat->pts = pkt->pts;
+ d->pkt_heartbeat->time_base = pkt->time_base;
+ d->pkt_heartbeat->stream_index = ds1->sch_idx_stream;
+ d->pkt_heartbeat->opaque = (void*)(intptr_t)PKT_OPAQUE_SUB_HEARTBEAT;
+
+ ret = do_send(d, ds1, d->pkt_heartbeat, 0, "heartbeat");
+ if (ret < 0)
+ return ret;
+ }
+ }
+
+ ret = do_send(d, ds, pkt, flags, "demuxed");
+ if (ret < 0)
+ return ret;
+
+ return 0;
+}
+
static void discard_unused_programs(InputFile *ifile)
{
for (int j = 0; j < ifile->ctx->nb_programs; j++) {
@@ -527,9 +590,13 @@ static void *input_thread(void *arg)
discard_unused_programs(f);
+ d->read_started = 1;
d->wallclock_start = av_gettime_relative();
while (1) {
+ DemuxStream *ds;
+ unsigned send_flags = 0;
+
ret = av_read_frame(f->ctx, pkt);
if (ret == AVERROR(EAGAIN)) {
@@ -538,11 +605,13 @@ static void *input_thread(void *arg)
}
if (ret < 0) {
if (d->loop) {
- /* signal looping to the consumer thread */
+ /* signal looping to our consumers */
pkt->stream_index = -1;
- ret = tq_send(d->thread_queue, 0, pkt);
+
+ ret = sch_demux_send(d->sch, f->index, pkt, 0);
if (ret >= 0)
- ret = seek_to_start(d);
+ ret = seek_to_start(d, (Timestamp){ .ts = pkt->pts,
+ .tb = pkt->time_base });
if (ret >= 0)
continue;
@@ -551,9 +620,11 @@ static void *input_thread(void *arg)
if (ret == AVERROR_EOF)
av_log(d, AV_LOG_VERBOSE, "EOF while reading input\n");
- else
+ else {
av_log(d, AV_LOG_ERROR, "Error during demuxing: %s\n",
av_err2str(ret));
+ ret = exit_on_error ? ret : 0;
+ }
break;
}
@@ -565,8 +636,9 @@ static void *input_thread(void *arg)
/* the following test is needed in case new streams appear
dynamically in stream : we ignore them */
- if (pkt->stream_index >= f->nb_streams ||
- f->streams[pkt->stream_index]->discard) {
+ ds = pkt->stream_index < f->nb_streams ?
+ ds_from_ist(f->streams[pkt->stream_index]) : NULL;
+ if (!ds || ds->ist.discard || ds->finished) {
report_new_stream(d, pkt);
av_packet_unref(pkt);
continue;
@@ -583,122 +655,26 @@ static void *input_thread(void *arg)
}
}
- ret = input_packet_process(d, pkt);
+ ret = input_packet_process(d, pkt, &send_flags);
if (ret < 0)
break;
if (f->readrate)
readrate_sleep(d);
- ret = tq_send(d->thread_queue, 0, pkt);
- if (ret < 0) {
- if (ret != AVERROR_EOF)
- av_log(f, AV_LOG_ERROR,
- "Unable to send packet to main thread: %s\n",
- av_err2str(ret));
+ ret = demux_send(d, ds, pkt, send_flags);
+ if (ret < 0)
break;
- }
}
+ // EOF/EXIT is normal termination
+ if (ret == AVERROR_EOF || ret == AVERROR_EXIT)
+ ret = 0;
+
finish:
- av_assert0(ret < 0);
- tq_send_finish(d->thread_queue, 0);
-
av_packet_free(&pkt);
- av_log(d, AV_LOG_VERBOSE, "Terminating demuxer thread\n");
-
- return NULL;
-}
-
-static void thread_stop(Demuxer *d)
-{
- InputFile *f = &d->f;
-
- if (!d->thread_queue)
- return;
-
- tq_receive_finish(d->thread_queue, 0);
-
- pthread_join(d->thread, NULL);
-
- tq_free(&d->thread_queue);
-
- av_thread_message_queue_free(&f->audio_ts_queue);
-}
-
-static int thread_start(Demuxer *d)
-{
- int ret;
- InputFile *f = &d->f;
- ObjPool *op;
-
- if (d->thread_queue_size <= 0)
- d->thread_queue_size = (nb_input_files > 1 ? 8 : 1);
-
- op = objpool_alloc_packets();
- if (!op)
- return AVERROR(ENOMEM);
-
- d->thread_queue = tq_alloc(1, d->thread_queue_size, op, pkt_move);
- if (!d->thread_queue) {
- objpool_free(&op);
- return AVERROR(ENOMEM);
- }
-
- if (d->loop) {
- int nb_audio_dec = 0;
-
- for (int i = 0; i < f->nb_streams; i++) {
- InputStream *ist = f->streams[i];
- nb_audio_dec += !!(ist->decoding_needed &&
- ist->st->codecpar->codec_type == AVMEDIA_TYPE_AUDIO);
- }
-
- if (nb_audio_dec) {
- ret = av_thread_message_queue_alloc(&f->audio_ts_queue,
- nb_audio_dec, sizeof(Timestamp));
- if (ret < 0)
- goto fail;
- f->audio_ts_queue_size = nb_audio_dec;
- }
- }
-
- if ((ret = pthread_create(&d->thread, NULL, input_thread, d))) {
- av_log(d, AV_LOG_ERROR, "pthread_create failed: %s. Try to increase `ulimit -v` or decrease `ulimit -s`.\n", strerror(ret));
- ret = AVERROR(ret);
- goto fail;
- }
-
- d->read_started = 1;
-
- return 0;
-fail:
- tq_free(&d->thread_queue);
- return ret;
-}
-
-int ifile_get_packet(InputFile *f, AVPacket *pkt)
-{
- Demuxer *d = demuxer_from_ifile(f);
- int ret, dummy;
-
- if (!d->thread_queue) {
- ret = thread_start(d);
- if (ret < 0)
- return ret;
- }
-
- ret = tq_receive(d->thread_queue, &dummy, pkt);
- if (ret < 0)
- return ret;
-
- if (pkt->stream_index == -1) {
- av_assert0(!pkt->data && !pkt->side_data_elems);
- return 1;
- }
-
- return 0;
+ return (void*)(intptr_t)ret;
}
static void demux_final_stats(Demuxer *d)
@@ -769,8 +745,6 @@ void ifile_close(InputFile **pf)
if (!f)
return;
- thread_stop(d);
-
if (d->read_started)
demux_final_stats(d);
@@ -780,6 +754,8 @@ void ifile_close(InputFile **pf)
avformat_close_input(&f->ctx);
+ av_packet_free(&d->pkt_heartbeat);
+
av_freep(pf);
}
@@ -802,7 +778,11 @@ static int ist_use(InputStream *ist, int decoding_needed)
ds->sch_idx_stream = ret;
}
- ist->discard = 0;
+ if (ist->discard) {
+ ist->discard = 0;
+ d->nb_streams_used++;
+ }
+
ist->st->discard = ist->user_set_discard;
ist->decoding_needed |= decoding_needed;
ds->streamcopy_needed |= !decoding_needed;
@@ -823,6 +803,8 @@ static int ist_use(InputStream *ist, int decoding_needed)
ret = dec_open(ist, d->sch, ds->sch_idx_dec);
if (ret < 0)
return ret;
+
+ d->have_audio_dec |= is_audio;
}
return 0;
@@ -848,6 +830,7 @@ int ist_output_add(InputStream *ist, OutputStream *ost)
int ist_filter_add(InputStream *ist, InputFilter *ifilter, int is_simple)
{
+ Demuxer *d = demuxer_from_ifile(input_files[ist->file_index]);
DemuxStream *ds = ds_from_ist(ist);
int ret;
@@ -866,6 +849,15 @@ int ist_filter_add(InputStream *ist, InputFilter *ifilter, int is_simple)
if (ret < 0)
return ret;
+ if (ist->dec_ctx->codec_type == AVMEDIA_TYPE_SUBTITLE) {
+ if (!d->pkt_heartbeat) {
+ d->pkt_heartbeat = av_packet_alloc();
+ if (!d->pkt_heartbeat)
+ return AVERROR(ENOMEM);
+ }
+ ds->have_sub2video = 1;
+ }
+
return ds->sch_idx_dec;
}
@@ -1607,8 +1599,6 @@ int ifile_open(const OptionsContext *o, const char *filename, Scheduler *sch)
"since neither -readrate nor -re were given\n");
}
- d->thread_queue_size = o->thread_queue_size;
-
/* Add all the streams from the given input file to the demuxer */
for (int i = 0; i < ic->nb_streams; i++) {
ret = ist_add(o, d, ic->streams[i]);
diff --git a/fftools/ffmpeg_enc.c b/fftools/ffmpeg_enc.c
index 9871381c0e..9383b167f7 100644
--- a/fftools/ffmpeg_enc.c
+++ b/fftools/ffmpeg_enc.c
@@ -41,12 +41,6 @@
#include "libavformat/avformat.h"
struct Encoder {
- AVFrame *sq_frame;
-
- // packet for receiving encoded output
- AVPacket *pkt;
- AVFrame *sub_frame;
-
// combined size of all the packets received from the encoder
uint64_t data_size;
@@ -54,25 +48,9 @@ struct Encoder {
uint64_t packets_encoded;
int opened;
- int finished;
Scheduler *sch;
unsigned sch_idx;
-
- pthread_t thread;
- /**
- * Queue for sending frames from the main thread to
- * the encoder thread.
- */
- ThreadQueue *queue_in;
- /**
- * Queue for sending encoded packets from the encoder thread
- * to the main thread.
- *
- * An empty packet is sent to signal that a previously sent
- * frame has been fully processed.
- */
- ThreadQueue *queue_out;
};
// data that is local to the encoder thread and not visible outside of it
@@ -81,24 +59,6 @@ typedef struct EncoderThread {
AVPacket *pkt;
} EncoderThread;
-static int enc_thread_stop(Encoder *e)
-{
- void *ret;
-
- if (!e->queue_in)
- return 0;
-
- tq_send_finish(e->queue_in, 0);
- tq_receive_finish(e->queue_out, 0);
-
- pthread_join(e->thread, &ret);
-
- tq_free(&e->queue_in);
- tq_free(&e->queue_out);
-
- return (int)(intptr_t)ret;
-}
-
void enc_free(Encoder **penc)
{
Encoder *enc = *penc;
@@ -106,13 +66,6 @@ void enc_free(Encoder **penc)
if (!enc)
return;
- enc_thread_stop(enc);
-
- av_frame_free(&enc->sq_frame);
- av_frame_free(&enc->sub_frame);
-
- av_packet_free(&enc->pkt);
-
av_freep(penc);
}
@@ -127,25 +80,12 @@ int enc_alloc(Encoder **penc, const AVCodec *codec,
if (!enc)
return AVERROR(ENOMEM);
- if (codec->type == AVMEDIA_TYPE_SUBTITLE) {
- enc->sub_frame = av_frame_alloc();
- if (!enc->sub_frame)
- goto fail;
- }
-
- enc->pkt = av_packet_alloc();
- if (!enc->pkt)
- goto fail;
-
enc->sch = sch;
enc->sch_idx = sch_idx;
*penc = enc;
return 0;
-fail:
- enc_free(&enc);
- return AVERROR(ENOMEM);
}
static int hw_device_setup_for_encode(OutputStream *ost, AVBufferRef *frames_ref)
@@ -224,52 +164,9 @@ static int set_encoder_id(OutputFile *of, OutputStream *ost)
return 0;
}
-static int enc_thread_start(OutputStream *ost)
-{
- Encoder *e = ost->enc;
- ObjPool *op;
- int ret = 0;
-
- op = objpool_alloc_frames();
- if (!op)
- return AVERROR(ENOMEM);
-
- e->queue_in = tq_alloc(1, 1, op, frame_move);
- if (!e->queue_in) {
- objpool_free(&op);
- return AVERROR(ENOMEM);
- }
-
- op = objpool_alloc_packets();
- if (!op)
- goto fail;
-
- e->queue_out = tq_alloc(1, 4, op, pkt_move);
- if (!e->queue_out) {
- objpool_free(&op);
- goto fail;
- }
-
- ret = pthread_create(&e->thread, NULL, encoder_thread, ost);
- if (ret) {
- ret = AVERROR(ret);
- av_log(ost, AV_LOG_ERROR, "pthread_create() failed: %s\n",
- av_err2str(ret));
- goto fail;
- }
-
- return 0;
-fail:
- if (ret >= 0)
- ret = AVERROR(ENOMEM);
-
- tq_free(&e->queue_in);
- tq_free(&e->queue_out);
- return ret;
-}
-
-int enc_open(OutputStream *ost, const AVFrame *frame)
+int enc_open(void *opaque, const AVFrame *frame)
{
+ OutputStream *ost = opaque;
InputStream *ist = ost->ist;
Encoder *e = ost->enc;
AVCodecContext *enc_ctx = ost->enc_ctx;
@@ -277,6 +174,7 @@ int enc_open(OutputStream *ost, const AVFrame *frame)
const AVCodec *enc = enc_ctx->codec;
OutputFile *of = output_files[ost->file_index];
FrameData *fd;
+ int frame_samples = 0;
int ret;
if (e->opened)
@@ -420,17 +318,8 @@ int enc_open(OutputStream *ost, const AVFrame *frame)
e->opened = 1;
- if (ost->sq_idx_encode >= 0) {
- e->sq_frame = av_frame_alloc();
- if (!e->sq_frame)
- return AVERROR(ENOMEM);
- }
-
- if (ost->enc_ctx->frame_size) {
- av_assert0(ost->sq_idx_encode >= 0);
- sq_frame_samples(output_files[ost->file_index]->sq_encode,
- ost->sq_idx_encode, ost->enc_ctx->frame_size);
- }
+ if (ost->enc_ctx->frame_size)
+ frame_samples = ost->enc_ctx->frame_size;
ret = check_avoptions(ost->encoder_opts);
if (ret < 0)
@@ -476,18 +365,11 @@ int enc_open(OutputStream *ost, const AVFrame *frame)
if (ost->st->time_base.num <= 0 || ost->st->time_base.den <= 0)
ost->st->time_base = av_add_q(ost->enc_ctx->time_base, (AVRational){0, 1});
- ret = enc_thread_start(ost);
- if (ret < 0) {
- av_log(ost, AV_LOG_ERROR, "Error starting encoder thread: %s\n",
- av_err2str(ret));
- return ret;
- }
-
ret = of_stream_init(of, ost);
if (ret < 0)
return ret;
- return 0;
+ return frame_samples;
}
static int check_recording_time(OutputStream *ost, int64_t ts, AVRational tb)
@@ -514,8 +396,7 @@ static int do_subtitle_out(OutputFile *of, OutputStream *ost, const AVSubtitle *
av_log(ost, AV_LOG_ERROR, "Subtitle packets must have a pts\n");
return exit_on_error ? AVERROR(EINVAL) : 0;
}
- if (ost->finished ||
- (of->start_time != AV_NOPTS_VALUE && sub->pts < of->start_time))
+ if ((of->start_time != AV_NOPTS_VALUE && sub->pts < of->start_time))
return 0;
enc = ost->enc_ctx;
@@ -579,7 +460,7 @@ static int do_subtitle_out(OutputFile *of, OutputStream *ost, const AVSubtitle *
}
pkt->dts = pkt->pts;
- ret = tq_send(e->queue_out, 0, pkt);
+ ret = sch_enc_send(e->sch, e->sch_idx, pkt);
if (ret < 0) {
av_packet_unref(pkt);
return ret;
@@ -671,10 +552,13 @@ static int update_video_stats(OutputStream *ost, const AVPacket *pkt, int write_
int64_t frame_number;
double ti1, bitrate, avg_bitrate;
double psnr_val = -1;
+ int quality;
- ost->quality = sd ? AV_RL32(sd) : -1;
+ quality = sd ? AV_RL32(sd) : -1;
pict_type = sd ? sd[4] : AV_PICTURE_TYPE_NONE;
+ atomic_store(&ost->quality, quality);
+
if ((enc->flags & AV_CODEC_FLAG_PSNR) && sd && sd[5]) {
// FIXME the scaling assumes 8bit
double error = AV_RL64(sd + 8) / (enc->width * enc->height * 255.0 * 255.0);
@@ -697,10 +581,10 @@ static int update_video_stats(OutputStream *ost, const AVPacket *pkt, int write_
frame_number = e->packets_encoded;
if (vstats_version <= 1) {
fprintf(vstats_file, "frame= %5"PRId64" q= %2.1f ", frame_number,
- ost->quality / (float)FF_QP2LAMBDA);
+ quality / (float)FF_QP2LAMBDA);
} else {
fprintf(vstats_file, "out= %2d st= %2d frame= %5"PRId64" q= %2.1f ", ost->file_index, ost->index, frame_number,
- ost->quality / (float)FF_QP2LAMBDA);
+ quality / (float)FF_QP2LAMBDA);
}
if (psnr_val >= 0)
@@ -801,18 +685,11 @@ static int encode_frame(OutputFile *of, OutputStream *ost, AVFrame *frame,
av_ts2str(pkt->duration), av_ts2timestr(pkt->duration, &enc->time_base));
}
- if ((ret = trigger_fix_sub_duration_heartbeat(ost, pkt)) < 0) {
- av_log(NULL, AV_LOG_ERROR,
- "Subtitle heartbeat logic failed in %s! (%s)\n",
- __func__, av_err2str(ret));
- return ret;
- }
-
e->data_size += pkt->size;
e->packets_encoded++;
- ret = tq_send(e->queue_out, 0, pkt);
+ ret = sch_enc_send(e->sch, e->sch_idx, pkt);
if (ret < 0) {
av_packet_unref(pkt);
return ret;
@@ -822,50 +699,6 @@ static int encode_frame(OutputFile *of, OutputStream *ost, AVFrame *frame,
av_assert0(0);
}
-static int submit_encode_frame(OutputFile *of, OutputStream *ost,
- AVFrame *frame, AVPacket *pkt)
-{
- Encoder *e = ost->enc;
- int ret;
-
- if (ost->sq_idx_encode < 0)
- return encode_frame(of, ost, frame, pkt);
-
- if (frame) {
- ret = av_frame_ref(e->sq_frame, frame);
- if (ret < 0)
- return ret;
- frame = e->sq_frame;
- }
-
- ret = sq_send(of->sq_encode, ost->sq_idx_encode,
- SQFRAME(frame));
- if (ret < 0) {
- if (frame)
- av_frame_unref(frame);
- if (ret != AVERROR_EOF)
- return ret;
- }
-
- while (1) {
- AVFrame *enc_frame = e->sq_frame;
-
- ret = sq_receive(of->sq_encode, ost->sq_idx_encode,
- SQFRAME(enc_frame));
- if (ret == AVERROR_EOF) {
- enc_frame = NULL;
- } else if (ret < 0) {
- return (ret == AVERROR(EAGAIN)) ? 0 : ret;
- }
-
- ret = encode_frame(of, ost, enc_frame, pkt);
- if (enc_frame)
- av_frame_unref(enc_frame);
- if (ret < 0)
- return ret;
- }
-}
-
static int do_audio_out(OutputFile *of, OutputStream *ost,
AVFrame *frame, AVPacket *pkt)
{
@@ -881,7 +714,7 @@ static int do_audio_out(OutputFile *of, OutputStream *ost,
if (!check_recording_time(ost, frame->pts, frame->time_base))
return AVERROR_EOF;
- return submit_encode_frame(of, ost, frame, pkt);
+ return encode_frame(of, ost, frame, pkt);
}
static enum AVPictureType forced_kf_apply(void *logctx, KeyframeForceCtx *kf,
@@ -949,7 +782,7 @@ static int do_video_out(OutputFile *of, OutputStream *ost,
}
#endif
- return submit_encode_frame(of, ost, in_picture, pkt);
+ return encode_frame(of, ost, in_picture, pkt);
}
static int frame_encode(OutputStream *ost, AVFrame *frame, AVPacket *pkt)
@@ -958,9 +791,12 @@ static int frame_encode(OutputStream *ost, AVFrame *frame, AVPacket *pkt)
enum AVMediaType type = ost->type;
if (type == AVMEDIA_TYPE_SUBTITLE) {
+ const AVSubtitle *subtitle = frame && frame->buf[0] ?
+ (AVSubtitle*)frame->buf[0]->data : NULL;
+
// no flushing for subtitles
- return frame ?
- do_subtitle_out(of, ost, (AVSubtitle*)frame->buf[0]->data, pkt) : 0;
+ return subtitle && subtitle->num_rects ?
+ do_subtitle_out(of, ost, subtitle, pkt) : 0;
}
if (frame) {
@@ -968,7 +804,7 @@ static int frame_encode(OutputStream *ost, AVFrame *frame, AVPacket *pkt)
do_audio_out(of, ost, frame, pkt);
}
- return submit_encode_frame(of, ost, NULL, pkt);
+ return encode_frame(of, ost, NULL, pkt);
}
static void enc_thread_set_name(const OutputStream *ost)
@@ -1009,24 +845,50 @@ fail:
void *encoder_thread(void *arg)
{
OutputStream *ost = arg;
- OutputFile *of = output_files[ost->file_index];
Encoder *e = ost->enc;
EncoderThread et;
int ret = 0, input_status = 0;
+ int name_set = 0;
ret = enc_thread_init(&et);
if (ret < 0)
goto finish;
- enc_thread_set_name(ost);
+ /* Open the subtitle encoders immediately. AVFrame-based encoders
+ * are opened through a callback from the scheduler once they get
+ * their first frame
+ *
+ * N.B.: because the callback is called from a different thread,
+ * enc_ctx MUST NOT be accessed before sch_enc_receive() returns
+ * for the first time for audio/video. */
+ if (ost->type != AVMEDIA_TYPE_VIDEO && ost->type != AVMEDIA_TYPE_AUDIO) {
+ ret = enc_open(ost, NULL);
+ if (ret < 0)
+ goto finish;
+ }
while (!input_status) {
- int dummy;
-
- input_status = tq_receive(e->queue_in, &dummy, et.frame);
- if (input_status < 0)
+ input_status = sch_enc_receive(e->sch, e->sch_idx, et.frame);
+ if (input_status == AVERROR_EOF) {
av_log(ost, AV_LOG_VERBOSE, "Encoder thread received EOF\n");
+ if (!e->opened) {
+ av_log(ost, AV_LOG_ERROR, "Could not open encoder before EOF\n");
+ ret = AVERROR(EINVAL);
+ goto finish;
+ }
+ } else if (input_status < 0) {
+ ret = input_status;
+ av_log(ost, AV_LOG_ERROR, "Error receiving a frame for encoding: %s\n",
+ av_err2str(ret));
+ goto finish;
+ }
+
+ if (!name_set) {
+ enc_thread_set_name(ost);
+ name_set = 1;
+ }
+
ret = frame_encode(ost, input_status >= 0 ? et.frame : NULL, et.pkt);
av_packet_unref(et.pkt);
@@ -1040,15 +902,6 @@ void *encoder_thread(void *arg)
av_err2str(ret));
break;
}
-
- // signal to the consumer thread that the frame was encoded
- ret = tq_send(e->queue_out, 0, et.pkt);
- if (ret < 0) {
- if (ret != AVERROR_EOF)
- av_log(ost, AV_LOG_ERROR,
- "Error communicating with the main thread\n");
- break;
- }
}
// EOF is normal thread termination
@@ -1056,118 +909,7 @@ void *encoder_thread(void *arg)
ret = 0;
finish:
- if (ost->sq_idx_encode >= 0)
- sq_send(of->sq_encode, ost->sq_idx_encode, SQFRAME(NULL));
-
- tq_receive_finish(e->queue_in, 0);
- tq_send_finish (e->queue_out, 0);
-
enc_thread_uninit(&et);
- av_log(ost, AV_LOG_VERBOSE, "Terminating encoder thread\n");
-
return (void*)(intptr_t)ret;
}
-
-int enc_frame(OutputStream *ost, AVFrame *frame)
-{
- OutputFile *of = output_files[ost->file_index];
- Encoder *e = ost->enc;
- int ret, thread_ret;
-
- ret = enc_open(ost, frame);
- if (ret < 0)
- return ret;
-
- if (!e->queue_in)
- return AVERROR_EOF;
-
- // send the frame/EOF to the encoder thread
- if (frame) {
- ret = tq_send(e->queue_in, 0, frame);
- if (ret < 0)
- goto finish;
- } else
- tq_send_finish(e->queue_in, 0);
-
- // retrieve all encoded data for the frame
- while (1) {
- int dummy;
-
- ret = tq_receive(e->queue_out, &dummy, e->pkt);
- if (ret < 0)
- break;
-
- // frame fully encoded
- if (!e->pkt->data && !e->pkt->side_data_elems)
- return 0;
-
- // process the encoded packet
- ret = of_output_packet(of, ost, e->pkt);
- if (ret < 0)
- goto finish;
- }
-
-finish:
- thread_ret = enc_thread_stop(e);
- if (thread_ret < 0) {
- av_log(ost, AV_LOG_ERROR, "Encoder thread returned error: %s\n",
- av_err2str(thread_ret));
- ret = err_merge(ret, thread_ret);
- }
-
- if (ret < 0 && ret != AVERROR_EOF)
- return ret;
-
- // signal EOF to the muxer
- return of_output_packet(of, ost, NULL);
-}
-
-int enc_subtitle(OutputFile *of, OutputStream *ost, const AVSubtitle *sub)
-{
- Encoder *e = ost->enc;
- AVFrame *f = e->sub_frame;
- int ret;
-
- // XXX the queue for transferring data to the encoder thread runs
- // on AVFrames, so we wrap AVSubtitle in an AVBufferRef and put
- // that inside the frame
- // eventually, subtitles should be switched to use AVFrames natively
- ret = subtitle_wrap_frame(f, sub, 1);
- if (ret < 0)
- return ret;
-
- ret = enc_frame(ost, f);
- av_frame_unref(f);
-
- return ret;
-}
-
-int enc_flush(void)
-{
- int ret = 0;
-
- for (OutputStream *ost = ost_iter(NULL); ost; ost = ost_iter(ost)) {
- OutputFile *of = output_files[ost->file_index];
- if (ost->sq_idx_encode >= 0)
- sq_send(of->sq_encode, ost->sq_idx_encode, SQFRAME(NULL));
- }
-
- for (OutputStream *ost = ost_iter(NULL); ost; ost = ost_iter(ost)) {
- Encoder *e = ost->enc;
- AVCodecContext *enc = ost->enc_ctx;
- int err;
-
- if (!enc || !e->opened ||
- (enc->codec_type != AVMEDIA_TYPE_VIDEO && enc->codec_type != AVMEDIA_TYPE_AUDIO))
- continue;
-
- err = enc_frame(ost, NULL);
- if (err != AVERROR_EOF && ret < 0)
- ret = err_merge(ret, err);
-
- av_assert0(!e->queue_in);
- }
-
- return ret;
-}
diff --git a/fftools/ffmpeg_filter.c b/fftools/ffmpeg_filter.c
index 635b1b0b6e..ada235b084 100644
--- a/fftools/ffmpeg_filter.c
+++ b/fftools/ffmpeg_filter.c
@@ -21,8 +21,6 @@
#include <stdint.h>
#include "ffmpeg.h"
-#include "ffmpeg_utils.h"
-#include "thread_queue.h"
#include "libavfilter/avfilter.h"
#include "libavfilter/buffersink.h"
@@ -53,10 +51,11 @@ typedef struct FilterGraphPriv {
// true when the filtergraph contains only meta filters
// that do not modify the frame data
int is_meta;
+ // source filters are present in the graph
+ int have_sources;
int disable_conversions;
- int nb_inputs_bound;
- int nb_outputs_bound;
+ unsigned nb_outputs_done;
const char *graph_desc;
@@ -67,41 +66,6 @@ typedef struct FilterGraphPriv {
Scheduler *sch;
unsigned sch_idx;
-
- pthread_t thread;
- /**
- * Queue for sending frames from the main thread to the filtergraph. Has
- * nb_inputs+1 streams - the first nb_inputs stream correspond to
- * filtergraph inputs. Frames on those streams may have their opaque set to
- * - FRAME_OPAQUE_EOF: frame contains no data, but pts+timebase of the
- * EOF event for the correspondint stream. Will be immediately followed by
- * this stream being send-closed.
- * - FRAME_OPAQUE_SUB_HEARTBEAT: frame contains no data, but pts+timebase of
- * a subtitle heartbeat event. Will only be sent for sub2video streams.
- *
- * The last stream is "control" - the main thread sends empty AVFrames with
- * opaque set to
- * - FRAME_OPAQUE_REAP_FILTERS: a request to retrieve all frame available
- * from filtergraph outputs. These frames are sent to corresponding
- * streams in queue_out. Finally an empty frame is sent to the control
- * stream in queue_out.
- * - FRAME_OPAQUE_CHOOSE_INPUT: same as above, but in case no frames are
- * available the terminating empty frame's opaque will contain the index+1
- * of the filtergraph input to which more input frames should be supplied.
- */
- ThreadQueue *queue_in;
- /**
- * Queue for sending frames from the filtergraph back to the main thread.
- * Has nb_outputs+1 streams - the first nb_outputs stream correspond to
- * filtergraph outputs.
- *
- * The last stream is "control" - see documentation for queue_in for more
- * details.
- */
- ThreadQueue *queue_out;
- // submitting frames to filter thread returned EOF
- // this only happens on thread exit, so is not per-input
- int eof_in;
} FilterGraphPriv;
static FilterGraphPriv *fgp_from_fg(FilterGraph *fg)
@@ -123,6 +87,9 @@ typedef struct FilterGraphThread {
// The output index is stored in frame opaque.
AVFifo *frame_queue_out;
+ // index of the next input to request from the scheduler
+ unsigned next_in;
+ // set to 1 after at least one frame passed through this output
int got_frame;
// EOF status of each input/output, as received by the thread
@@ -253,9 +220,6 @@ typedef struct OutputFilterPriv {
int64_t ts_offset;
int64_t next_pts;
FPSConvContext fps;
-
- // set to 1 after at least one frame passed through this output
- int got_frame;
} OutputFilterPriv;
static OutputFilterPriv *ofp_from_ofilter(OutputFilter *ofilter)
@@ -653,57 +617,6 @@ static int ifilter_has_all_input_formats(FilterGraph *fg)
static void *filter_thread(void *arg);
-// start the filtering thread once all inputs and outputs are bound
-static int fg_thread_try_start(FilterGraphPriv *fgp)
-{
- FilterGraph *fg = &fgp->fg;
- ObjPool *op;
- int ret = 0;
-
- if (fgp->nb_inputs_bound < fg->nb_inputs ||
- fgp->nb_outputs_bound < fg->nb_outputs)
- return 0;
-
- op = objpool_alloc_frames();
- if (!op)
- return AVERROR(ENOMEM);
-
- fgp->queue_in = tq_alloc(fg->nb_inputs + 1, 1, op, frame_move);
- if (!fgp->queue_in) {
- objpool_free(&op);
- return AVERROR(ENOMEM);
- }
-
- // at least one output is mandatory
- op = objpool_alloc_frames();
- if (!op)
- goto fail;
-
- fgp->queue_out = tq_alloc(fg->nb_outputs + 1, 1, op, frame_move);
- if (!fgp->queue_out) {
- objpool_free(&op);
- goto fail;
- }
-
- ret = pthread_create(&fgp->thread, NULL, filter_thread, fgp);
- if (ret) {
- ret = AVERROR(ret);
- av_log(NULL, AV_LOG_ERROR, "pthread_create() for filtergraph %d failed: %s\n",
- fg->index, av_err2str(ret));
- goto fail;
- }
-
- return 0;
-fail:
- if (ret >= 0)
- ret = AVERROR(ENOMEM);
-
- tq_free(&fgp->queue_in);
- tq_free(&fgp->queue_out);
-
- return ret;
-}
-
static char *describe_filter_link(FilterGraph *fg, AVFilterInOut *inout, int in)
{
AVFilterContext *ctx = inout->filter_ctx;
@@ -729,7 +642,6 @@ static OutputFilter *ofilter_alloc(FilterGraph *fg)
ofilter->graph = fg;
ofp->format = -1;
ofp->index = fg->nb_outputs - 1;
- ofilter->last_pts = AV_NOPTS_VALUE;
return ofilter;
}
@@ -760,10 +672,7 @@ static int ifilter_bind_ist(InputFilter *ifilter, InputStream *ist)
return AVERROR(ENOMEM);
}
- fgp->nb_inputs_bound++;
- av_assert0(fgp->nb_inputs_bound <= ifilter->graph->nb_inputs);
-
- return fg_thread_try_start(fgp);
+ return 0;
}
static int set_channel_layout(OutputFilterPriv *f, OutputStream *ost)
@@ -902,10 +811,7 @@ int ofilter_bind_ost(OutputFilter *ofilter, OutputStream *ost,
if (ret < 0)
return ret;
- fgp->nb_outputs_bound++;
- av_assert0(fgp->nb_outputs_bound <= fg->nb_outputs);
-
- return fg_thread_try_start(fgp);
+ return 0;
}
static InputFilter *ifilter_alloc(FilterGraph *fg)
@@ -935,34 +841,6 @@ static InputFilter *ifilter_alloc(FilterGraph *fg)
return ifilter;
}
-static int fg_thread_stop(FilterGraphPriv *fgp)
-{
- void *ret;
-
- if (!fgp->queue_in)
- return 0;
-
- for (int i = 0; i <= fgp->fg.nb_inputs; i++) {
- InputFilterPriv *ifp = i < fgp->fg.nb_inputs ?
- ifp_from_ifilter(fgp->fg.inputs[i]) : NULL;
-
- if (ifp)
- ifp->eof = 1;
-
- tq_send_finish(fgp->queue_in, i);
- }
-
- for (int i = 0; i <= fgp->fg.nb_outputs; i++)
- tq_receive_finish(fgp->queue_out, i);
-
- pthread_join(fgp->thread, &ret);
-
- tq_free(&fgp->queue_in);
- tq_free(&fgp->queue_out);
-
- return (int)(intptr_t)ret;
-}
-
void fg_free(FilterGraph **pfg)
{
FilterGraph *fg = *pfg;
@@ -972,8 +850,6 @@ void fg_free(FilterGraph **pfg)
return;
fgp = fgp_from_fg(fg);
- fg_thread_stop(fgp);
-
avfilter_graph_free(&fg->graph);
for (int j = 0; j < fg->nb_inputs; j++) {
InputFilter *ifilter = fg->inputs[j];
@@ -1072,6 +948,15 @@ int fg_create(FilterGraph **pfg, char *graph_desc, Scheduler *sch)
if (ret < 0)
goto fail;
+ for (unsigned i = 0; i < graph->nb_filters; i++) {
+ const AVFilter *f = graph->filters[i]->filter;
+ if (!avfilter_filter_pad_count(f, 0) &&
+ !(f->flags & AVFILTER_FLAG_DYNAMIC_INPUTS)) {
+ fgp->have_sources = 1;
+ break;
+ }
+ }
+
for (AVFilterInOut *cur = inputs; cur; cur = cur->next) {
InputFilter *const ifilter = ifilter_alloc(fg);
InputFilterPriv *ifp;
@@ -1800,6 +1685,7 @@ static int configure_filtergraph(FilterGraph *fg, const FilterGraphThread *fgt)
AVBufferRef *hw_device;
AVFilterInOut *inputs, *outputs, *cur;
int ret, i, simple = filtergraph_is_simple(fg);
+ int have_input_eof = 0;
const char *graph_desc = fgp->graph_desc;
cleanup_filtergraph(fg);
@@ -1922,11 +1808,18 @@ static int configure_filtergraph(FilterGraph *fg, const FilterGraphThread *fgt)
ret = av_buffersrc_add_frame(ifp->filter, NULL);
if (ret < 0)
goto fail;
+ have_input_eof = 1;
}
}
- return 0;
+ if (have_input_eof) {
+ // make sure the EOF propagates to the end of the graph
+ ret = avfilter_graph_request_oldest(fg->graph);
+ if (ret < 0 && ret != AVERROR(EAGAIN) && ret != AVERROR_EOF)
+ goto fail;
+ }
+ return 0;
fail:
cleanup_filtergraph(fg);
return ret;
@@ -2182,7 +2075,7 @@ static void video_sync_process(OutputFilterPriv *ofp, AVFrame *frame,
fps->frames_prev_hist[2]);
if (!*nb_frames && fps->last_dropped) {
- ofilter->nb_frames_drop++;
+ atomic_fetch_add(&ofilter->nb_frames_drop, 1);
fps->last_dropped++;
}
@@ -2260,21 +2153,23 @@ finish:
fps->frames_prev_hist[0] = *nb_frames_prev;
if (*nb_frames_prev == 0 && fps->last_dropped) {
- ofilter->nb_frames_drop++;
+ atomic_fetch_add(&ofilter->nb_frames_drop, 1);
av_log(ost, AV_LOG_VERBOSE,
"*** dropping frame %"PRId64" at ts %"PRId64"\n",
fps->frame_number, fps->last_frame->pts);
}
if (*nb_frames > (*nb_frames_prev && fps->last_dropped) + (*nb_frames > *nb_frames_prev)) {
+ uint64_t nb_frames_dup;
if (*nb_frames > dts_error_threshold * 30) {
av_log(ost, AV_LOG_ERROR, "%"PRId64" frame duplication too large, skipping\n", *nb_frames - 1);
- ofilter->nb_frames_drop++;
+ atomic_fetch_add(&ofilter->nb_frames_drop, 1);
*nb_frames = 0;
return;
}
- ofilter->nb_frames_dup += *nb_frames - (*nb_frames_prev && fps->last_dropped) - (*nb_frames > *nb_frames_prev);
+ nb_frames_dup = atomic_fetch_add(&ofilter->nb_frames_dup,
+ *nb_frames - (*nb_frames_prev && fps->last_dropped) - (*nb_frames > *nb_frames_prev));
av_log(ost, AV_LOG_VERBOSE, "*** %"PRId64" dup!\n", *nb_frames - 1);
- if (ofilter->nb_frames_dup > fps->dup_warning) {
+ if (nb_frames_dup > fps->dup_warning) {
av_log(ost, AV_LOG_WARNING, "More than %"PRIu64" frames duplicated\n", fps->dup_warning);
fps->dup_warning *= 10;
}
@@ -2284,8 +2179,57 @@ finish:
fps->dropped_keyframe |= fps->last_dropped && (frame->flags & AV_FRAME_FLAG_KEY);
}
+static int close_output(OutputFilterPriv *ofp, FilterGraphThread *fgt)
+{
+ FilterGraphPriv *fgp = fgp_from_fg(ofp->ofilter.graph);
+ int ret;
+
+ // we are finished and no frames were ever seen at this output,
+ // at least initialize the encoder with a dummy frame
+ if (!fgt->got_frame) {
+ AVFrame *frame = fgt->frame;
+ FrameData *fd;
+
+ frame->time_base = ofp->tb_out;
+ frame->format = ofp->format;
+
+ frame->width = ofp->width;
+ frame->height = ofp->height;
+ frame->sample_aspect_ratio = ofp->sample_aspect_ratio;
+
+ frame->sample_rate = ofp->sample_rate;
+ if (ofp->ch_layout.nb_channels) {
+ ret = av_channel_layout_copy(&frame->ch_layout, &ofp->ch_layout);
+ if (ret < 0)
+ return ret;
+ }
+
+ fd = frame_data(frame);
+ if (!fd)
+ return AVERROR(ENOMEM);
+
+ fd->frame_rate_filter = ofp->fps.framerate;
+
+ av_assert0(!frame->buf[0]);
+
+ av_log(ofp->ofilter.ost, AV_LOG_WARNING,
+ "No filtered frames for output stream, trying to "
+ "initialize anyway.\n");
+
+ ret = sch_filter_send(fgp->sch, fgp->sch_idx, ofp->index, frame);
+ if (ret < 0) {
+ av_frame_unref(frame);
+ return ret;
+ }
+ }
+
+ fgt->eof_out[ofp->index] = 1;
+
+ return sch_filter_send(fgp->sch, fgp->sch_idx, ofp->index, NULL);
+}
+
static int fg_output_frame(OutputFilterPriv *ofp, FilterGraphThread *fgt,
- AVFrame *frame, int buffer)
+ AVFrame *frame)
{
FilterGraphPriv *fgp = fgp_from_fg(ofp->ofilter.graph);
AVFrame *frame_prev = ofp->fps.last_frame;
@@ -2332,28 +2276,17 @@ static int fg_output_frame(OutputFilterPriv *ofp, FilterGraphThread *fgt,
frame_out = frame;
}
- if (buffer) {
- AVFrame *f = av_frame_alloc();
-
- if (!f) {
- av_frame_unref(frame_out);
- return AVERROR(ENOMEM);
- }
-
- av_frame_move_ref(f, frame_out);
- f->opaque = (void*)(intptr_t)ofp->index;
-
- ret = av_fifo_write(fgt->frame_queue_out, &f, 1);
- if (ret < 0) {
- av_frame_free(&f);
- return AVERROR(ENOMEM);
- }
- } else {
- // return the frame to the main thread
- ret = tq_send(fgp->queue_out, ofp->index, frame_out);
+ {
+ // send the frame to consumers
+ ret = sch_filter_send(fgp->sch, fgp->sch_idx, ofp->index, frame_out);
if (ret < 0) {
av_frame_unref(frame_out);
- fgt->eof_out[ofp->index] = 1;
+
+ if (!fgt->eof_out[ofp->index]) {
+ fgt->eof_out[ofp->index] = 1;
+ fgp->nb_outputs_done++;
+ }
+
return ret == AVERROR_EOF ? 0 : ret;
}
}
@@ -2374,16 +2307,14 @@ static int fg_output_frame(OutputFilterPriv *ofp, FilterGraphThread *fgt,
av_frame_move_ref(frame_prev, frame);
}
- if (!frame) {
- tq_send_finish(fgp->queue_out, ofp->index);
- fgt->eof_out[ofp->index] = 1;
- }
+ if (!frame)
+ return close_output(ofp, fgt);
return 0;
}
static int fg_output_step(OutputFilterPriv *ofp, FilterGraphThread *fgt,
- AVFrame *frame, int buffer)
+ AVFrame *frame)
{
FilterGraphPriv *fgp = fgp_from_fg(ofp->ofilter.graph);
OutputStream *ost = ofp->ofilter.ost;
@@ -2393,8 +2324,8 @@ static int fg_output_step(OutputFilterPriv *ofp, FilterGraphThread *fgt,
ret = av_buffersink_get_frame_flags(filter, frame,
AV_BUFFERSINK_FLAG_NO_REQUEST);
- if (ret == AVERROR_EOF && !buffer && !fgt->eof_out[ofp->index]) {
- ret = fg_output_frame(ofp, fgt, NULL, buffer);
+ if (ret == AVERROR_EOF && !fgt->eof_out[ofp->index]) {
+ ret = fg_output_frame(ofp, fgt, NULL);
return (ret < 0) ? ret : 1;
} else if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) {
return 1;
@@ -2448,7 +2379,7 @@ static int fg_output_step(OutputFilterPriv *ofp, FilterGraphThread *fgt,
fd->frame_rate_filter = ofp->fps.framerate;
}
- ret = fg_output_frame(ofp, fgt, frame, buffer);
+ ret = fg_output_frame(ofp, fgt, frame);
av_frame_unref(frame);
if (ret < 0)
return ret;
@@ -2456,44 +2387,68 @@ static int fg_output_step(OutputFilterPriv *ofp, FilterGraphThread *fgt,
return 0;
}
-/* retrieve all frames available at filtergraph outputs and either send them to
- * the main thread (buffer=0) or buffer them for later (buffer=1) */
+/* retrieve all frames available at filtergraph outputs
+ * and send them to consumers */
static int read_frames(FilterGraph *fg, FilterGraphThread *fgt,
- AVFrame *frame, int buffer)
+ AVFrame *frame)
{
FilterGraphPriv *fgp = fgp_from_fg(fg);
- int ret = 0;
+ int did_step = 0;
- if (!fg->graph)
- return 0;
-
- // process buffered frames
- if (!buffer) {
- AVFrame *f;
-
- while (av_fifo_read(fgt->frame_queue_out, &f, 1) >= 0) {
- int out_idx = (intptr_t)f->opaque;
- f->opaque = NULL;
- ret = tq_send(fgp->queue_out, out_idx, f);
- av_frame_free(&f);
- if (ret < 0 && ret != AVERROR_EOF)
- return ret;
+ // graph not configured, just select the input to request
+ if (!fg->graph) {
+ for (int i = 0; i < fg->nb_inputs; i++) {
+ InputFilterPriv *ifp = ifp_from_ifilter(fg->inputs[i]);
+ if (ifp->format < 0 && !fgt->eof_in[i]) {
+ fgt->next_in = i;
+ return 0;
+ }
}
+
+ // This state - graph is not configured, but all inputs are either
+ // initialized or EOF - should be unreachable because sending EOF to a
+ // filter without even a fallback format should fail
+ av_assert0(0);
+ return AVERROR_BUG;
}
- /* Reap all buffers present in the buffer sinks */
- for (int i = 0; i < fg->nb_outputs; i++) {
- OutputFilterPriv *ofp = ofp_from_ofilter(fg->outputs[i]);
- int ret = 0;
+ while (fgp->nb_outputs_done < fg->nb_outputs) {
+ int ret;
- while (!ret) {
- ret = fg_output_step(ofp, fgt, frame, buffer);
- if (ret < 0)
- return ret;
+ ret = avfilter_graph_request_oldest(fg->graph);
+ if (ret == AVERROR(EAGAIN)) {
+ fgt->next_in = choose_input(fg, fgt);
+ break;
+ } else if (ret < 0) {
+ if (ret == AVERROR_EOF)
+ av_log(fg, AV_LOG_VERBOSE, "Filtergraph returned EOF, finishing\n");
+ else
+ av_log(fg, AV_LOG_ERROR,
+ "Error requesting a frame from the filtergraph: %s\n",
+ av_err2str(ret));
+ return ret;
}
- }
+ fgt->next_in = fg->nb_inputs;
- return 0;
+        // return after one iteration, so that the scheduler can rate-control us
+ if (did_step && fgp->have_sources)
+ return 0;
+
+ /* Reap all buffers present in the buffer sinks */
+ for (int i = 0; i < fg->nb_outputs; i++) {
+ OutputFilterPriv *ofp = ofp_from_ofilter(fg->outputs[i]);
+
+ ret = 0;
+ while (!ret) {
+ ret = fg_output_step(ofp, fgt, frame);
+ if (ret < 0)
+ return ret;
+ }
+ }
+ did_step = 1;
+ };
+
+ return (fgp->nb_outputs_done == fg->nb_outputs) ? AVERROR_EOF : 0;
}
static void sub2video_heartbeat(InputFilter *ifilter, int64_t pts, AVRational tb)
@@ -2571,6 +2526,9 @@ static int send_eof(FilterGraphThread *fgt, InputFilter *ifilter,
InputFilterPriv *ifp = ifp_from_ifilter(ifilter);
int ret;
+ if (fgt->eof_in[ifp->index])
+ return 0;
+
fgt->eof_in[ifp->index] = 1;
if (ifp->filter) {
@@ -2672,7 +2630,7 @@ static int send_frame(FilterGraph *fg, FilterGraphThread *fgt,
return ret;
}
- ret = fg->graph ? read_frames(fg, fgt, tmp, 1) : 0;
+ ret = fg->graph ? read_frames(fg, fgt, tmp) : 0;
av_frame_free(&tmp);
if (ret < 0)
return ret;
@@ -2705,82 +2663,6 @@ static int send_frame(FilterGraph *fg, FilterGraphThread *fgt,
return 0;
}
-static int msg_process(FilterGraphPriv *fgp, FilterGraphThread *fgt,
- AVFrame *frame)
-{
- const enum FrameOpaque msg = (intptr_t)frame->opaque;
- FilterGraph *fg = &fgp->fg;
- int graph_eof = 0;
- int ret;
-
- frame->opaque = NULL;
- av_assert0(msg > 0);
- av_assert0(msg == FRAME_OPAQUE_SEND_COMMAND || !frame->buf[0]);
-
- if (!fg->graph) {
- // graph not configured yet, ignore all messages other than choosing
- // the input to read from
- if (msg != FRAME_OPAQUE_CHOOSE_INPUT) {
- av_frame_unref(frame);
- goto done;
- }
-
- for (int i = 0; i < fg->nb_inputs; i++) {
- InputFilter *ifilter = fg->inputs[i];
- InputFilterPriv *ifp = ifp_from_ifilter(ifilter);
- if (ifp->format < 0 && !fgt->eof_in[i]) {
- frame->opaque = (void*)(intptr_t)(i + 1);
- goto done;
- }
- }
-
- // This state - graph is not configured, but all inputs are either
- // initialized or EOF - should be unreachable because sending EOF to a
- // filter without even a fallback format should fail
- av_assert0(0);
- return AVERROR_BUG;
- }
-
- if (msg == FRAME_OPAQUE_SEND_COMMAND) {
- FilterCommand *fc = (FilterCommand*)frame->buf[0]->data;
- send_command(fg, fc->time, fc->target, fc->command, fc->arg, fc->all_filters);
- av_frame_unref(frame);
- goto done;
- }
-
- if (msg == FRAME_OPAQUE_CHOOSE_INPUT) {
- ret = avfilter_graph_request_oldest(fg->graph);
-
- graph_eof = ret == AVERROR_EOF;
-
- if (ret == AVERROR(EAGAIN)) {
- frame->opaque = (void*)(intptr_t)(choose_input(fg, fgt) + 1);
- goto done;
- } else if (ret < 0 && !graph_eof)
- return ret;
- }
-
- ret = read_frames(fg, fgt, frame, 0);
- if (ret < 0) {
- av_log(fg, AV_LOG_ERROR, "Error sending filtered frames for encoding\n");
- return ret;
- }
-
- if (graph_eof)
- return AVERROR_EOF;
-
- // signal to the main thread that we are done processing the message
-done:
- ret = tq_send(fgp->queue_out, fg->nb_outputs, frame);
- if (ret < 0) {
- if (ret != AVERROR_EOF)
- av_log(fg, AV_LOG_ERROR, "Error communicating with the main thread\n");
- return ret;
- }
-
- return 0;
-}
-
static void fg_thread_set_name(const FilterGraph *fg)
{
char name[16];
@@ -2867,294 +2749,94 @@ static void *filter_thread(void *arg)
InputFilter *ifilter;
InputFilterPriv *ifp;
enum FrameOpaque o;
- int input_idx, eof_frame;
+ unsigned input_idx = fgt.next_in;
- input_status = tq_receive(fgp->queue_in, &input_idx, fgt.frame);
- if (input_idx < 0 ||
- (input_idx == fg->nb_inputs && input_status < 0)) {
+ input_status = sch_filter_receive(fgp->sch, fgp->sch_idx,
+ &input_idx, fgt.frame);
+ if (input_status == AVERROR_EOF) {
av_log(fg, AV_LOG_VERBOSE, "Filtering thread received EOF\n");
break;
+ } else if (input_status == AVERROR(EAGAIN)) {
+ // should only happen when we didn't request any input
+ av_assert0(input_idx == fg->nb_inputs);
+ goto read_frames;
}
+ av_assert0(input_status >= 0);
+
o = (intptr_t)fgt.frame->opaque;
// message on the control stream
if (input_idx == fg->nb_inputs) {
- ret = msg_process(fgp, &fgt, fgt.frame);
- if (ret < 0)
- goto finish;
+ FilterCommand *fc;
+ av_assert0(o == FRAME_OPAQUE_SEND_COMMAND && fgt.frame->buf[0]);
+
+ fc = (FilterCommand*)fgt.frame->buf[0]->data;
+ send_command(fg, fc->time, fc->target, fc->command, fc->arg,
+ fc->all_filters);
+ av_frame_unref(fgt.frame);
continue;
}
// we received an input frame or EOF
ifilter = fg->inputs[input_idx];
ifp = ifp_from_ifilter(ifilter);
- eof_frame = input_status >= 0 && o == FRAME_OPAQUE_EOF;
+
if (ifp->type_src == AVMEDIA_TYPE_SUBTITLE) {
int hb_frame = input_status >= 0 && o == FRAME_OPAQUE_SUB_HEARTBEAT;
ret = sub2video_frame(ifilter, (fgt.frame->buf[0] || hb_frame) ? fgt.frame : NULL);
- } else if (input_status >= 0 && fgt.frame->buf[0]) {
+ } else if (fgt.frame->buf[0]) {
ret = send_frame(fg, &fgt, ifilter, fgt.frame);
} else {
- int64_t pts = input_status >= 0 ? fgt.frame->pts : AV_NOPTS_VALUE;
- AVRational tb = input_status >= 0 ? fgt.frame->time_base : (AVRational){ 1, 1 };
- ret = send_eof(&fgt, ifilter, pts, tb);
+ av_assert1(o == FRAME_OPAQUE_EOF);
+ ret = send_eof(&fgt, ifilter, fgt.frame->pts, fgt.frame->time_base);
}
av_frame_unref(fgt.frame);
if (ret < 0)
+ goto finish;
+
+read_frames:
+        // retrieve all newly available frames
+ ret = read_frames(fg, &fgt, fgt.frame);
+ if (ret == AVERROR_EOF) {
+ av_log(fg, AV_LOG_VERBOSE, "All consumers returned EOF\n");
break;
-
- if (eof_frame) {
- // an EOF frame is immediately followed by sender closing
- // the corresponding stream, so retrieve that event
- input_status = tq_receive(fgp->queue_in, &input_idx, fgt.frame);
- av_assert0(input_status == AVERROR_EOF && input_idx == ifp->index);
- }
-
- // signal to the main thread that we are done
- ret = tq_send(fgp->queue_out, fg->nb_outputs, fgt.frame);
- if (ret < 0) {
- if (ret == AVERROR_EOF)
- break;
-
- av_log(fg, AV_LOG_ERROR, "Error communicating with the main thread\n");
+ } else if (ret < 0) {
+ av_log(fg, AV_LOG_ERROR, "Error sending frames to consumers: %s\n",
+ av_err2str(ret));
goto finish;
}
}
+ for (unsigned i = 0; i < fg->nb_outputs; i++) {
+ OutputFilterPriv *ofp = ofp_from_ofilter(fg->outputs[i]);
+
+ if (fgt.eof_out[i])
+ continue;
+
+ ret = fg_output_frame(ofp, &fgt, NULL);
+ if (ret < 0)
+ goto finish;
+ }
+
finish:
// EOF is normal termination
if (ret == AVERROR_EOF)
ret = 0;
- for (int i = 0; i <= fg->nb_inputs; i++)
- tq_receive_finish(fgp->queue_in, i);
- for (int i = 0; i <= fg->nb_outputs; i++)
- tq_send_finish(fgp->queue_out, i);
-
fg_thread_uninit(&fgt);
- av_log(fg, AV_LOG_VERBOSE, "Terminating filtering thread\n");
-
return (void*)(intptr_t)ret;
}
-static int thread_send_frame(FilterGraphPriv *fgp, InputFilter *ifilter,
- AVFrame *frame, enum FrameOpaque type)
-{
- InputFilterPriv *ifp = ifp_from_ifilter(ifilter);
- int output_idx, ret;
-
- if (ifp->eof) {
- av_frame_unref(frame);
- return AVERROR_EOF;
- }
-
- frame->opaque = (void*)(intptr_t)type;
-
- ret = tq_send(fgp->queue_in, ifp->index, frame);
- if (ret < 0) {
- ifp->eof = 1;
- av_frame_unref(frame);
- return ret;
- }
-
- if (type == FRAME_OPAQUE_EOF)
- tq_send_finish(fgp->queue_in, ifp->index);
-
- // wait for the frame to be processed
- ret = tq_receive(fgp->queue_out, &output_idx, frame);
- av_assert0(output_idx == fgp->fg.nb_outputs || ret == AVERROR_EOF);
-
- return ret;
-}
-
-int ifilter_send_frame(InputFilter *ifilter, AVFrame *frame, int keep_reference)
-{
- FilterGraphPriv *fgp = fgp_from_fg(ifilter->graph);
- int ret;
-
- if (keep_reference) {
- ret = av_frame_ref(fgp->frame, frame);
- if (ret < 0)
- return ret;
- } else
- av_frame_move_ref(fgp->frame, frame);
-
- return thread_send_frame(fgp, ifilter, fgp->frame, 0);
-}
-
-int ifilter_send_eof(InputFilter *ifilter, int64_t pts, AVRational tb)
-{
- FilterGraphPriv *fgp = fgp_from_fg(ifilter->graph);
- int ret;
-
- fgp->frame->pts = pts;
- fgp->frame->time_base = tb;
-
- ret = thread_send_frame(fgp, ifilter, fgp->frame, FRAME_OPAQUE_EOF);
-
- return ret == AVERROR_EOF ? 0 : ret;
-}
-
-void ifilter_sub2video_heartbeat(InputFilter *ifilter, int64_t pts, AVRational tb)
-{
- FilterGraphPriv *fgp = fgp_from_fg(ifilter->graph);
-
- fgp->frame->pts = pts;
- fgp->frame->time_base = tb;
-
- thread_send_frame(fgp, ifilter, fgp->frame, FRAME_OPAQUE_SUB_HEARTBEAT);
-}
-
-int fg_transcode_step(FilterGraph *graph, InputStream **best_ist)
-{
- FilterGraphPriv *fgp = fgp_from_fg(graph);
- int ret, got_frames = 0;
-
- if (fgp->eof_in)
- return AVERROR_EOF;
-
- // signal to the filtering thread to return all frames it can
- av_assert0(!fgp->frame->buf[0]);
- fgp->frame->opaque = (void*)(intptr_t)(best_ist ?
- FRAME_OPAQUE_CHOOSE_INPUT :
- FRAME_OPAQUE_REAP_FILTERS);
-
- ret = tq_send(fgp->queue_in, graph->nb_inputs, fgp->frame);
- if (ret < 0) {
- fgp->eof_in = 1;
- goto finish;
- }
-
- while (1) {
- OutputFilter *ofilter;
- OutputFilterPriv *ofp;
- OutputStream *ost;
- int output_idx;
-
- ret = tq_receive(fgp->queue_out, &output_idx, fgp->frame);
-
- // EOF on the whole queue or the control stream
- if (output_idx < 0 ||
- (ret < 0 && output_idx == graph->nb_outputs))
- goto finish;
-
- // EOF for a specific stream
- if (ret < 0) {
- ofilter = graph->outputs[output_idx];
- ofp = ofp_from_ofilter(ofilter);
-
- // we are finished and no frames were ever seen at this output,
- // at least initialize the encoder with a dummy frame
- if (!ofp->got_frame) {
- AVFrame *frame = fgp->frame;
- FrameData *fd;
-
- frame->time_base = ofp->tb_out;
- frame->format = ofp->format;
-
- frame->width = ofp->width;
- frame->height = ofp->height;
- frame->sample_aspect_ratio = ofp->sample_aspect_ratio;
-
- frame->sample_rate = ofp->sample_rate;
- if (ofp->ch_layout.nb_channels) {
- ret = av_channel_layout_copy(&frame->ch_layout, &ofp->ch_layout);
- if (ret < 0)
- return ret;
- }
-
- fd = frame_data(frame);
- if (!fd)
- return AVERROR(ENOMEM);
-
- fd->frame_rate_filter = ofp->fps.framerate;
-
- av_assert0(!frame->buf[0]);
-
- av_log(ofilter->ost, AV_LOG_WARNING,
- "No filtered frames for output stream, trying to "
- "initialize anyway.\n");
-
- enc_open(ofilter->ost, frame);
- av_frame_unref(frame);
- }
-
- close_output_stream(graph->outputs[output_idx]->ost);
- continue;
- }
-
- // request was fully processed by the filtering thread,
- // return the input stream to read from, if needed
- if (output_idx == graph->nb_outputs) {
- int input_idx = (intptr_t)fgp->frame->opaque - 1;
- av_assert0(input_idx <= graph->nb_inputs);
-
- if (best_ist) {
- *best_ist = (input_idx >= 0 && input_idx < graph->nb_inputs) ?
- ifp_from_ifilter(graph->inputs[input_idx])->ist : NULL;
-
- if (input_idx < 0 && !got_frames) {
- for (int i = 0; i < graph->nb_outputs; i++)
- graph->outputs[i]->ost->unavailable = 1;
- }
- }
- break;
- }
-
- // got a frame from the filtering thread, send it for encoding
- ofilter = graph->outputs[output_idx];
- ost = ofilter->ost;
- ofp = ofp_from_ofilter(ofilter);
-
- if (ost->finished) {
- av_frame_unref(fgp->frame);
- tq_receive_finish(fgp->queue_out, output_idx);
- continue;
- }
-
- if (fgp->frame->pts != AV_NOPTS_VALUE) {
- ofilter->last_pts = av_rescale_q(fgp->frame->pts,
- fgp->frame->time_base,
- AV_TIME_BASE_Q);
- }
-
- ret = enc_frame(ost, fgp->frame);
- av_frame_unref(fgp->frame);
- if (ret < 0)
- goto finish;
-
- ofp->got_frame = 1;
- got_frames = 1;
- }
-
-finish:
- if (ret < 0) {
- fgp->eof_in = 1;
- for (int i = 0; i < graph->nb_outputs; i++)
- close_output_stream(graph->outputs[i]->ost);
- }
-
- return ret;
-}
-
-int reap_filters(FilterGraph *fg, int flush)
-{
- return fg_transcode_step(fg, NULL);
-}
-
void fg_send_command(FilterGraph *fg, double time, const char *target,
const char *command, const char *arg, int all_filters)
{
FilterGraphPriv *fgp = fgp_from_fg(fg);
AVBufferRef *buf;
FilterCommand *fc;
- int output_idx, ret;
-
- if (!fgp->queue_in)
- return;
fc = av_mallocz(sizeof(*fc));
if (!fc)
@@ -3180,13 +2862,5 @@ void fg_send_command(FilterGraph *fg, double time, const char *target,
fgp->frame->buf[0] = buf;
fgp->frame->opaque = (void*)(intptr_t)FRAME_OPAQUE_SEND_COMMAND;
- ret = tq_send(fgp->queue_in, fg->nb_inputs, fgp->frame);
- if (ret < 0) {
- av_frame_unref(fgp->frame);
- return;
- }
-
- // wait for the frame to be processed
- ret = tq_receive(fgp->queue_out, &output_idx, fgp->frame);
- av_assert0(output_idx == fgp->fg.nb_outputs || ret == AVERROR_EOF);
+ sch_filter_command(fgp->sch, fgp->sch_idx, fgp->frame);
}
diff --git a/fftools/ffmpeg_mux.c b/fftools/ffmpeg_mux.c
index ef5c2f60e0..067dc65d4e 100644
--- a/fftools/ffmpeg_mux.c
+++ b/fftools/ffmpeg_mux.c
@@ -23,16 +23,13 @@
#include "ffmpeg.h"
#include "ffmpeg_mux.h"
#include "ffmpeg_utils.h"
-#include "objpool.h"
#include "sync_queue.h"
-#include "thread_queue.h"
#include "libavutil/fifo.h"
#include "libavutil/intreadwrite.h"
#include "libavutil/log.h"
#include "libavutil/mem.h"
#include "libavutil/timestamp.h"
-#include "libavutil/thread.h"
#include "libavcodec/packet.h"
@@ -41,10 +38,9 @@
typedef struct MuxThreadContext {
AVPacket *pkt;
+ AVPacket *fix_sub_duration_pkt;
} MuxThreadContext;
-int want_sdp = 1;
-
static Muxer *mux_from_of(OutputFile *of)
{
return (Muxer*)of;
@@ -207,14 +203,41 @@ static int sync_queue_process(Muxer *mux, OutputStream *ost, AVPacket *pkt, int
return 0;
}
+static int of_streamcopy(OutputStream *ost, AVPacket *pkt);
+
/* apply the output bitstream filters */
-static int mux_packet_filter(Muxer *mux, OutputStream *ost,
- AVPacket *pkt, int *stream_eof)
+static int mux_packet_filter(Muxer *mux, MuxThreadContext *mt,
+ OutputStream *ost, AVPacket *pkt, int *stream_eof)
{
MuxStream *ms = ms_from_ost(ost);
const char *err_msg;
int ret = 0;
+ if (pkt && !ost->enc) {
+ ret = of_streamcopy(ost, pkt);
+ if (ret == AVERROR(EAGAIN))
+ return 0;
+ else if (ret == AVERROR_EOF) {
+ av_packet_unref(pkt);
+ pkt = NULL;
+ ret = 0;
+ } else if (ret < 0)
+ goto fail;
+ }
+
+ // emit heartbeat for -fix_sub_duration;
+ // we are only interested in heartbeats on random access points.
+ if (pkt && (pkt->flags & AV_PKT_FLAG_KEY)) {
+ mt->fix_sub_duration_pkt->opaque = (void*)(intptr_t)PKT_OPAQUE_FIX_SUB_DURATION;
+ mt->fix_sub_duration_pkt->pts = pkt->pts;
+ mt->fix_sub_duration_pkt->time_base = pkt->time_base;
+
+ ret = sch_mux_sub_heartbeat(mux->sch, mux->sch_idx, ms->sch_idx,
+ mt->fix_sub_duration_pkt);
+ if (ret < 0)
+ goto fail;
+ }
+
if (ms->bsf_ctx) {
int bsf_eof = 0;
@@ -278,6 +301,7 @@ static void thread_set_name(OutputFile *of)
static void mux_thread_uninit(MuxThreadContext *mt)
{
av_packet_free(&mt->pkt);
+ av_packet_free(&mt->fix_sub_duration_pkt);
memset(mt, 0, sizeof(*mt));
}
@@ -290,6 +314,10 @@ static int mux_thread_init(MuxThreadContext *mt)
if (!mt->pkt)
goto fail;
+ mt->fix_sub_duration_pkt = av_packet_alloc();
+ if (!mt->fix_sub_duration_pkt)
+ goto fail;
+
return 0;
fail:
@@ -316,19 +344,22 @@ void *muxer_thread(void *arg)
OutputStream *ost;
int stream_idx, stream_eof = 0;
- ret = tq_receive(mux->tq, &stream_idx, mt.pkt);
+ ret = sch_mux_receive(mux->sch, of->index, mt.pkt);
+ stream_idx = mt.pkt->stream_index;
if (stream_idx < 0) {
av_log(mux, AV_LOG_VERBOSE, "All streams finished\n");
ret = 0;
break;
}
- ost = of->streams[stream_idx];
- ret = mux_packet_filter(mux, ost, ret < 0 ? NULL : mt.pkt, &stream_eof);
+ ost = of->streams[mux->sch_stream_idx[stream_idx]];
+ mt.pkt->stream_index = ost->index;
+
+ ret = mux_packet_filter(mux, &mt, ost, ret < 0 ? NULL : mt.pkt, &stream_eof);
av_packet_unref(mt.pkt);
if (ret == AVERROR_EOF) {
if (stream_eof) {
- tq_receive_finish(mux->tq, stream_idx);
+ sch_mux_receive_finish(mux->sch, of->index, stream_idx);
} else {
av_log(mux, AV_LOG_VERBOSE, "Muxer returned EOF\n");
ret = 0;
@@ -343,243 +374,55 @@ void *muxer_thread(void *arg)
finish:
mux_thread_uninit(&mt);
- for (unsigned int i = 0; i < mux->fc->nb_streams; i++)
- tq_receive_finish(mux->tq, i);
-
- av_log(mux, AV_LOG_VERBOSE, "Terminating muxer thread\n");
-
return (void*)(intptr_t)ret;
}
-static int thread_submit_packet(Muxer *mux, OutputStream *ost, AVPacket *pkt)
-{
- int ret = 0;
-
- if (!pkt || ost->finished & MUXER_FINISHED)
- goto finish;
-
- ret = tq_send(mux->tq, ost->index, pkt);
- if (ret < 0)
- goto finish;
-
- return 0;
-
-finish:
- if (pkt)
- av_packet_unref(pkt);
-
- ost->finished |= MUXER_FINISHED;
- tq_send_finish(mux->tq, ost->index);
- return ret == AVERROR_EOF ? 0 : ret;
-}
-
-static int queue_packet(OutputStream *ost, AVPacket *pkt)
-{
- MuxStream *ms = ms_from_ost(ost);
- AVPacket *tmp_pkt = NULL;
- int ret;
-
- if (!av_fifo_can_write(ms->muxing_queue)) {
- size_t cur_size = av_fifo_can_read(ms->muxing_queue);
- size_t pkt_size = pkt ? pkt->size : 0;
- unsigned int are_we_over_size =
- (ms->muxing_queue_data_size + pkt_size) > ms->muxing_queue_data_threshold;
- size_t limit = are_we_over_size ? ms->max_muxing_queue_size : SIZE_MAX;
- size_t new_size = FFMIN(2 * cur_size, limit);
-
- if (new_size <= cur_size) {
- av_log(ost, AV_LOG_ERROR,
- "Too many packets buffered for output stream %d:%d.\n",
- ost->file_index, ost->st->index);
- return AVERROR(ENOSPC);
- }
- ret = av_fifo_grow2(ms->muxing_queue, new_size - cur_size);
- if (ret < 0)
- return ret;
- }
-
- if (pkt) {
- ret = av_packet_make_refcounted(pkt);
- if (ret < 0)
- return ret;
-
- tmp_pkt = av_packet_alloc();
- if (!tmp_pkt)
- return AVERROR(ENOMEM);
-
- av_packet_move_ref(tmp_pkt, pkt);
- ms->muxing_queue_data_size += tmp_pkt->size;
- }
- av_fifo_write(ms->muxing_queue, &tmp_pkt, 1);
-
- return 0;
-}
-
-static int submit_packet(Muxer *mux, AVPacket *pkt, OutputStream *ost)
-{
- int ret;
-
- if (mux->tq) {
- return thread_submit_packet(mux, ost, pkt);
- } else {
- /* the muxer is not initialized yet, buffer the packet */
- ret = queue_packet(ost, pkt);
- if (ret < 0) {
- if (pkt)
- av_packet_unref(pkt);
- return ret;
- }
- }
-
- return 0;
-}
-
-int of_output_packet(OutputFile *of, OutputStream *ost, AVPacket *pkt)
-{
- Muxer *mux = mux_from_of(of);
- int ret = 0;
-
- if (pkt && pkt->dts != AV_NOPTS_VALUE)
- ost->last_mux_dts = av_rescale_q(pkt->dts, pkt->time_base, AV_TIME_BASE_Q);
-
- ret = submit_packet(mux, pkt, ost);
- if (ret < 0) {
- av_log(ost, AV_LOG_ERROR, "Error submitting a packet to the muxer: %s",
- av_err2str(ret));
- return ret;
- }
-
- return 0;
-}
-
-int of_streamcopy(OutputStream *ost, const AVPacket *pkt, int64_t dts)
+static int of_streamcopy(OutputStream *ost, AVPacket *pkt)
{
OutputFile *of = output_files[ost->file_index];
MuxStream *ms = ms_from_ost(ost);
+ DemuxPktData *pd = pkt->opaque_ref ? (DemuxPktData*)pkt->opaque_ref->data : NULL;
+ int64_t dts = pd ? pd->dts_est : AV_NOPTS_VALUE;
int64_t start_time = (of->start_time == AV_NOPTS_VALUE) ? 0 : of->start_time;
int64_t ts_offset;
- AVPacket *opkt = ms->pkt;
- int ret;
-
- av_packet_unref(opkt);
if (of->recording_time != INT64_MAX &&
dts >= of->recording_time + start_time)
- pkt = NULL;
-
- // EOF: flush output bitstream filters.
- if (!pkt)
- return of_output_packet(of, ost, NULL);
+ return AVERROR_EOF;
if (!ms->streamcopy_started && !(pkt->flags & AV_PKT_FLAG_KEY) &&
!ms->copy_initial_nonkeyframes)
- return 0;
+ return AVERROR(EAGAIN);
if (!ms->streamcopy_started) {
if (!ms->copy_prior_start &&
(pkt->pts == AV_NOPTS_VALUE ?
dts < ms->ts_copy_start :
pkt->pts < av_rescale_q(ms->ts_copy_start, AV_TIME_BASE_Q, pkt->time_base)))
- return 0;
+ return AVERROR(EAGAIN);
if (of->start_time != AV_NOPTS_VALUE && dts < of->start_time)
- return 0;
+ return AVERROR(EAGAIN);
}
- ret = av_packet_ref(opkt, pkt);
- if (ret < 0)
- return ret;
-
- ts_offset = av_rescale_q(start_time, AV_TIME_BASE_Q, opkt->time_base);
+ ts_offset = av_rescale_q(start_time, AV_TIME_BASE_Q, pkt->time_base);
if (pkt->pts != AV_NOPTS_VALUE)
- opkt->pts -= ts_offset;
+ pkt->pts -= ts_offset;
if (pkt->dts == AV_NOPTS_VALUE) {
- opkt->dts = av_rescale_q(dts, AV_TIME_BASE_Q, opkt->time_base);
+ pkt->dts = av_rescale_q(dts, AV_TIME_BASE_Q, pkt->time_base);
} else if (ost->st->codecpar->codec_type == AVMEDIA_TYPE_AUDIO) {
- opkt->pts = opkt->dts - ts_offset;
- }
- opkt->dts -= ts_offset;
-
- {
- int ret = trigger_fix_sub_duration_heartbeat(ost, pkt);
- if (ret < 0) {
- av_log(NULL, AV_LOG_ERROR,
- "Subtitle heartbeat logic failed in %s! (%s)\n",
- __func__, av_err2str(ret));
- return ret;
- }
+ pkt->pts = pkt->dts - ts_offset;
}
- ret = of_output_packet(of, ost, opkt);
- if (ret < 0)
- return ret;
+ pkt->dts -= ts_offset;
ms->streamcopy_started = 1;
return 0;
}
-static int thread_stop(Muxer *mux)
-{
- void *ret;
-
- if (!mux || !mux->tq)
- return 0;
-
- for (unsigned int i = 0; i < mux->fc->nb_streams; i++)
- tq_send_finish(mux->tq, i);
-
- pthread_join(mux->thread, &ret);
-
- tq_free(&mux->tq);
-
- return (int)(intptr_t)ret;
-}
-
-static int thread_start(Muxer *mux)
-{
- AVFormatContext *fc = mux->fc;
- ObjPool *op;
- int ret;
-
- op = objpool_alloc_packets();
- if (!op)
- return AVERROR(ENOMEM);
-
- mux->tq = tq_alloc(fc->nb_streams, mux->thread_queue_size, op, pkt_move);
- if (!mux->tq) {
- objpool_free(&op);
- return AVERROR(ENOMEM);
- }
-
- ret = pthread_create(&mux->thread, NULL, muxer_thread, (void*)mux);
- if (ret) {
- tq_free(&mux->tq);
- return AVERROR(ret);
- }
-
- /* flush the muxing queues */
- for (int i = 0; i < fc->nb_streams; i++) {
- OutputStream *ost = mux->of.streams[i];
- MuxStream *ms = ms_from_ost(ost);
- AVPacket *pkt;
-
- while (av_fifo_read(ms->muxing_queue, &pkt, 1) >= 0) {
- ret = thread_submit_packet(mux, ost, pkt);
- if (pkt) {
- ms->muxing_queue_data_size -= pkt->size;
- av_packet_free(&pkt);
- }
- if (ret < 0)
- return ret;
- }
- }
-
- return 0;
-}
-
int print_sdp(const char *filename);
int print_sdp(const char *filename)
@@ -590,11 +433,6 @@ int print_sdp(const char *filename)
AVIOContext *sdp_pb;
AVFormatContext **avc;
- for (i = 0; i < nb_output_files; i++) {
- if (!mux_from_of(output_files[i])->header_written)
- return 0;
- }
-
avc = av_malloc_array(nb_output_files, sizeof(*avc));
if (!avc)
return AVERROR(ENOMEM);
@@ -629,25 +467,17 @@ int print_sdp(const char *filename)
avio_closep(&sdp_pb);
}
- // SDP successfully written, allow muxer threads to start
- ret = 1;
-
fail:
av_freep(&avc);
return ret;
}
-int mux_check_init(Muxer *mux)
+int mux_check_init(void *arg)
{
+ Muxer *mux = arg;
OutputFile *of = &mux->of;
AVFormatContext *fc = mux->fc;
- int ret, i;
-
- for (i = 0; i < fc->nb_streams; i++) {
- OutputStream *ost = of->streams[i];
- if (!ost->initialized)
- return 0;
- }
+ int ret;
ret = avformat_write_header(fc, &mux->opts);
if (ret < 0) {
@@ -659,27 +489,7 @@ int mux_check_init(Muxer *mux)
mux->header_written = 1;
av_dump_format(fc, of->index, fc->url, 1);
- nb_output_dumped++;
-
- if (sdp_filename || want_sdp) {
- ret = print_sdp(sdp_filename);
- if (ret < 0) {
- av_log(NULL, AV_LOG_ERROR, "Error writing the SDP.\n");
- return ret;
- } else if (ret == 1) {
- /* SDP is written only after all the muxers are ready, so now we
- * start ALL the threads */
- for (i = 0; i < nb_output_files; i++) {
- ret = thread_start(mux_from_of(output_files[i]));
- if (ret < 0)
- return ret;
- }
- }
- } else {
- ret = thread_start(mux_from_of(of));
- if (ret < 0)
- return ret;
- }
+ atomic_fetch_add(&nb_output_dumped, 1);
return 0;
}
@@ -736,9 +546,10 @@ int of_stream_init(OutputFile *of, OutputStream *ost)
ost->st->time_base);
}
- ost->initialized = 1;
+ if (ms->sch_idx >= 0)
+ return sch_mux_stream_ready(mux->sch, of->index, ms->sch_idx);
- return mux_check_init(mux);
+ return 0;
}
static int check_written(OutputFile *of)
@@ -852,15 +663,13 @@ int of_write_trailer(OutputFile *of)
AVFormatContext *fc = mux->fc;
int ret, mux_result = 0;
- if (!mux->tq) {
+ if (!mux->header_written) {
av_log(mux, AV_LOG_ERROR,
"Nothing was written into output file, because "
"at least one of its streams received no packets.\n");
return AVERROR(EINVAL);
}
- mux_result = thread_stop(mux);
-
ret = av_write_trailer(fc);
if (ret < 0) {
av_log(mux, AV_LOG_ERROR, "Error writing trailer: %s\n", av_err2str(ret));
@@ -905,13 +714,6 @@ static void ost_free(OutputStream **post)
ost->logfile = NULL;
}
- if (ms->muxing_queue) {
- AVPacket *pkt;
- while (av_fifo_read(ms->muxing_queue, &pkt, 1) >= 0)
- av_packet_free(&pkt);
- av_fifo_freep2(&ms->muxing_queue);
- }
-
avcodec_parameters_free(&ost->par_in);
av_bsf_free(&ms->bsf_ctx);
@@ -976,8 +778,6 @@ void of_free(OutputFile **pof)
return;
mux = mux_from_of(of);
- thread_stop(mux);
-
sq_free(&of->sq_encode);
sq_free(&mux->sq_mux);
diff --git a/fftools/ffmpeg_mux.h b/fftools/ffmpeg_mux.h
index eee2b2cb07..5d7cf3fa76 100644
--- a/fftools/ffmpeg_mux.h
+++ b/fftools/ffmpeg_mux.h
@@ -25,7 +25,6 @@
#include <stdint.h>
#include "ffmpeg_sched.h"
-#include "thread_queue.h"
#include "libavformat/avformat.h"
@@ -33,7 +32,6 @@
#include "libavutil/dict.h"
#include "libavutil/fifo.h"
-#include "libavutil/thread.h"
typedef struct MuxStream {
OutputStream ost;
@@ -41,9 +39,6 @@ typedef struct MuxStream {
// name used for logging
char log_name[32];
- /* the packets are buffered here until the muxer is ready to be initialized */
- AVFifo *muxing_queue;
-
AVBSFContext *bsf_ctx;
AVPacket *bsf_pkt;
@@ -57,17 +52,6 @@ typedef struct MuxStream {
int64_t max_frames;
- /*
- * The size of the AVPackets' buffers in queue.
- * Updated when a packet is either pushed or pulled from the queue.
- */
- size_t muxing_queue_data_size;
-
- int max_muxing_queue_size;
-
- /* Threshold after which max_muxing_queue_size will be in effect */
- size_t muxing_queue_data_threshold;
-
// timestamp from which the streamcopied streams should start,
// in AV_TIME_BASE_Q;
// everything before it should be discarded
@@ -106,9 +90,6 @@ typedef struct Muxer {
int *sch_stream_idx;
int nb_sch_stream_idx;
- pthread_t thread;
- ThreadQueue *tq;
-
AVDictionary *opts;
int thread_queue_size;
@@ -122,10 +103,7 @@ typedef struct Muxer {
AVPacket *sq_pkt;
} Muxer;
-/* whether we want to print an SDP, set in of_open() */
-extern int want_sdp;
-
-int mux_check_init(Muxer *mux);
+int mux_check_init(void *arg);
static MuxStream *ms_from_ost(OutputStream *ost)
{
diff --git a/fftools/ffmpeg_mux_init.c b/fftools/ffmpeg_mux_init.c
index 534b4379c7..6459296ab0 100644
--- a/fftools/ffmpeg_mux_init.c
+++ b/fftools/ffmpeg_mux_init.c
@@ -924,13 +924,6 @@ static int new_stream_audio(Muxer *mux, const OptionsContext *o,
return 0;
}
-static int new_stream_attachment(Muxer *mux, const OptionsContext *o,
- OutputStream *ost)
-{
- ost->finished = 1;
- return 0;
-}
-
static int new_stream_subtitle(Muxer *mux, const OptionsContext *o,
OutputStream *ost)
{
@@ -1168,9 +1161,6 @@ static int ost_add(Muxer *mux, const OptionsContext *o, enum AVMediaType type,
if (!ost->par_in)
return AVERROR(ENOMEM);
- ms->muxing_queue = av_fifo_alloc2(8, sizeof(AVPacket*), 0);
- if (!ms->muxing_queue)
- return AVERROR(ENOMEM);
ms->last_mux_dts = AV_NOPTS_VALUE;
ost->st = st;
@@ -1190,7 +1180,8 @@ static int ost_add(Muxer *mux, const OptionsContext *o, enum AVMediaType type,
if (!ost->enc_ctx)
return AVERROR(ENOMEM);
- ret = sch_add_enc(mux->sch, encoder_thread, ost, NULL);
+ ret = sch_add_enc(mux->sch, encoder_thread, ost,
+ ost->type == AVMEDIA_TYPE_SUBTITLE ? NULL : enc_open);
if (ret < 0)
return ret;
ms->sch_idx_enc = ret;
@@ -1414,9 +1405,6 @@ static int ost_add(Muxer *mux, const OptionsContext *o, enum AVMediaType type,
sch_mux_stream_buffering(mux->sch, mux->sch_idx, ms->sch_idx,
max_muxing_queue_size, muxing_queue_data_threshold);
-
- ms->max_muxing_queue_size = max_muxing_queue_size;
- ms->muxing_queue_data_threshold = muxing_queue_data_threshold;
}
MATCH_PER_STREAM_OPT(bits_per_raw_sample, i, ost->bits_per_raw_sample,
@@ -1434,8 +1422,6 @@ static int ost_add(Muxer *mux, const OptionsContext *o, enum AVMediaType type,
if (ost->enc_ctx && av_get_exact_bits_per_sample(ost->enc_ctx->codec_id) == 24)
av_dict_set(&ost->swr_opts, "output_sample_bits", "24", 0);
- ost->last_mux_dts = AV_NOPTS_VALUE;
-
MATCH_PER_STREAM_OPT(copy_initial_nonkeyframes, i,
ms->copy_initial_nonkeyframes, oc, st);
@@ -1443,7 +1429,6 @@ static int ost_add(Muxer *mux, const OptionsContext *o, enum AVMediaType type,
case AVMEDIA_TYPE_VIDEO: ret = new_stream_video (mux, o, ost); break;
case AVMEDIA_TYPE_AUDIO: ret = new_stream_audio (mux, o, ost); break;
case AVMEDIA_TYPE_SUBTITLE: ret = new_stream_subtitle (mux, o, ost); break;
- case AVMEDIA_TYPE_ATTACHMENT: ret = new_stream_attachment(mux, o, ost); break;
}
if (ret < 0)
return ret;
@@ -1938,7 +1923,6 @@ static int setup_sync_queues(Muxer *mux, AVFormatContext *oc, int64_t buf_size_u
MuxStream *ms = ms_from_ost(ost);
enum AVMediaType type = ost->type;
- ost->sq_idx_encode = -1;
ost->sq_idx_mux = -1;
nb_interleaved += IS_INTERLEAVED(type);
@@ -1961,11 +1945,17 @@ static int setup_sync_queues(Muxer *mux, AVFormatContext *oc, int64_t buf_size_u
* - at least one encoded audio/video stream is frame-limited, since
* that has similar semantics to 'shortest'
* - at least one audio encoder requires constant frame sizes
+ *
+ * Note that encoding sync queues are handled in the scheduler, because
+ * different encoders run in different threads and need external
+ * synchronization, while muxer sync queues can be handled inside the muxer
*/
if ((of->shortest && nb_av_enc > 1) || limit_frames_av_enc || nb_audio_fs) {
- of->sq_encode = sq_alloc(SYNC_QUEUE_FRAMES, buf_size_us, mux);
- if (!of->sq_encode)
- return AVERROR(ENOMEM);
+ int sq_idx, ret;
+
+ sq_idx = sch_add_sq_enc(mux->sch, buf_size_us, mux);
+ if (sq_idx < 0)
+ return sq_idx;
for (int i = 0; i < oc->nb_streams; i++) {
OutputStream *ost = of->streams[i];
@@ -1975,13 +1965,11 @@ static int setup_sync_queues(Muxer *mux, AVFormatContext *oc, int64_t buf_size_u
if (!IS_AV_ENC(ost, type))
continue;
- ost->sq_idx_encode = sq_add_stream(of->sq_encode,
- of->shortest || ms->max_frames < INT64_MAX);
- if (ost->sq_idx_encode < 0)
- return ost->sq_idx_encode;
-
- if (ms->max_frames != INT64_MAX)
- sq_limit_frames(of->sq_encode, ost->sq_idx_encode, ms->max_frames);
+ ret = sch_sq_add_enc(mux->sch, sq_idx, ms->sch_idx_enc,
+ of->shortest || ms->max_frames < INT64_MAX,
+ ms->max_frames);
+ if (ret < 0)
+ return ret;
}
}
@@ -2652,23 +2640,6 @@ static int validate_enc_avopt(Muxer *mux, const AVDictionary *codec_avopt)
return 0;
}
-static int init_output_stream_nofilter(OutputStream *ost)
-{
- int ret = 0;
-
- if (ost->enc_ctx) {
- ret = enc_open(ost, NULL);
- if (ret < 0)
- return ret;
- } else {
- ret = of_stream_init(output_files[ost->file_index], ost);
- if (ret < 0)
- return ret;
- }
-
- return ret;
-}
-
static const char *output_file_item_name(void *obj)
{
const Muxer *mux = obj;
@@ -2751,8 +2722,6 @@ int of_open(const OptionsContext *o, const char *filename, Scheduler *sch)
av_strlcat(mux->log_name, "/", sizeof(mux->log_name));
av_strlcat(mux->log_name, oc->oformat->name, sizeof(mux->log_name));
- if (strcmp(oc->oformat->name, "rtp"))
- want_sdp = 0;
of->format = oc->oformat;
if (recording_time != INT64_MAX)
@@ -2768,7 +2737,7 @@ int of_open(const OptionsContext *o, const char *filename, Scheduler *sch)
AVFMT_FLAG_BITEXACT);
}
- err = sch_add_mux(sch, muxer_thread, NULL, mux,
+ err = sch_add_mux(sch, muxer_thread, mux_check_init, mux,
!strcmp(oc->oformat->name, "rtp"));
if (err < 0)
return err;
@@ -2854,26 +2823,15 @@ int of_open(const OptionsContext *o, const char *filename, Scheduler *sch)
of->url = filename;
- /* initialize stream copy and subtitle/data streams.
- * Encoded AVFrame based streams will get initialized when the first AVFrame
- * is received in do_video_out
- */
+ /* initialize streamcopy streams. */
for (int i = 0; i < of->nb_streams; i++) {
OutputStream *ost = of->streams[i];
- if (ost->filter)
- continue;
-
- err = init_output_stream_nofilter(ost);
- if (err < 0)
- return err;
- }
-
- /* write the header for files with no streams */
- if (of->format->flags & AVFMT_NOSTREAMS && oc->nb_streams == 0) {
- int ret = mux_check_init(mux);
- if (ret < 0)
- return ret;
+ if (!ost->enc) {
+ err = of_stream_init(of, ost);
+ if (err < 0)
+ return err;
+ }
}
return 0;
diff --git a/fftools/ffmpeg_opt.c b/fftools/ffmpeg_opt.c
index d463306546..6177a96a4e 100644
--- a/fftools/ffmpeg_opt.c
+++ b/fftools/ffmpeg_opt.c
@@ -64,7 +64,6 @@ const char *const opt_name_top_field_first[] = {"top", NULL};
HWDevice *filter_hw_device;
char *vstats_filename;
-char *sdp_filename;
float audio_drift_threshold = 0.1;
float dts_delta_threshold = 10;
@@ -580,9 +579,8 @@ fail:
static int opt_sdp_file(void *optctx, const char *opt, const char *arg)
{
- av_free(sdp_filename);
- sdp_filename = av_strdup(arg);
- return 0;
+ Scheduler *sch = optctx;
+ return sch_sdp_filename(sch, arg);
}
#if CONFIG_VAAPI
diff --git a/tests/ref/fate/ffmpeg-fix_sub_duration_heartbeat b/tests/ref/fate/ffmpeg-fix_sub_duration_heartbeat
index 957a410921..bc9b833799 100644
--- a/tests/ref/fate/ffmpeg-fix_sub_duration_heartbeat
+++ b/tests/ref/fate/ffmpeg-fix_sub_duration_heartbeat
@@ -1,48 +1,40 @@
1
-00:00:00,968 --> 00:00:01,001
+00:00:00,968 --> 00:00:01,168
<font face="Monospace">{\an7}(</font>
2
-00:00:01,001 --> 00:00:01,168
-<font face="Monospace">{\an7}(</font>
-
-3
00:00:01,168 --> 00:00:01,368
<font face="Monospace">{\an7}(<i> inaudibl</i></font>
-4
+3
00:00:01,368 --> 00:00:01,568
<font face="Monospace">{\an7}(<i> inaudible radio chat</i></font>
-5
+4
00:00:01,568 --> 00:00:02,002
<font face="Monospace">{\an7}(<i> inaudible radio chatter</i> )</font>
+5
+00:00:02,002 --> 00:00:03,103
+<font face="Monospace">{\an7}(<i> inaudible radio chatter</i> )</font>
+
6
-00:00:02,002 --> 00:00:03,003
-<font face="Monospace">{\an7}(<i> inaudible radio chatter</i> )</font>
-
-7
-00:00:03,003 --> 00:00:03,103
-<font face="Monospace">{\an7}(<i> inaudible radio chatter</i> )</font>
-
-8
00:00:03,103 --> 00:00:03,303
-<font face="Monospace">{\an7}(<i> inaudible radio chatter</i> )
+<font face="Monospace">{\an7}(<i> inaudible radio chatter</i> )
>></font>
-9
+7
00:00:03,303 --> 00:00:03,503
-<font face="Monospace">{\an7}(<i> inaudible radio chatter</i> )
+<font face="Monospace">{\an7}(<i> inaudible radio chatter</i> )
>> Safety rema</font>
-10
+8
00:00:03,504 --> 00:00:03,704
-<font face="Monospace">{\an7}(<i> inaudible radio chatter</i> )
+<font face="Monospace">{\an7}(<i> inaudible radio chatter</i> )
>> Safety remains our numb</font>
-11
+9
00:00:03,704 --> 00:00:04,004
-<font face="Monospace">{\an7}(<i> inaudible radio chatter</i> )
+<font face="Monospace">{\an7}(<i> inaudible radio chatter</i> )
>> Safety remains our number one</font>
--
2.42.0
* Re: [FFmpeg-devel] [PATCH 13/13 v3] fftools/ffmpeg: convert to a threaded architecture
2023-12-01 11:15 ` [FFmpeg-devel] [PATCH 13/13 v3] " Anton Khirnov
@ 2023-12-01 14:24 ` Nicolas George
2023-12-01 14:27 ` Anton Khirnov
0 siblings, 1 reply; 49+ messages in thread
From: Nicolas George @ 2023-12-01 14:24 UTC (permalink / raw)
To: FFmpeg development discussions and patches
Anton Khirnov (12023-12-01):
> Change the main loop and every component (demuxers, decoders, filters,
> encoders, muxers) to use the previously added transcode scheduler. Every
> instance of every such component was already running in a separate
> thread, but now they can actually run in parallel.
>
> Changes the results of ffmpeg-fix_sub_duration_heartbeat - tested by
> JEEB to be more correct and deterministic.
> ---
> Fixed the hang. Also updated the public branch.
Still breaking sub2video with time shift; see the test case I already
sent twice.
--
Nicolas George
* Re: [FFmpeg-devel] [PATCH 13/13 v3] fftools/ffmpeg: convert to a threaded architecture
2023-12-01 14:24 ` Nicolas George
@ 2023-12-01 14:27 ` Anton Khirnov
2023-12-01 14:42 ` Nicolas George
0 siblings, 1 reply; 49+ messages in thread
From: Anton Khirnov @ 2023-12-01 14:27 UTC (permalink / raw)
To: FFmpeg development discussions and patches
Quoting Nicolas George (2023-12-01 15:24:52)
> Anton Khirnov (12023-12-01):
> > Change the main loop and every component (demuxers, decoders, filters,
> > encoders, muxers) to use the previously added transcode scheduler. Every
> > instance of every such component was already running in a separate
> > thread, but now they can actually run in parallel.
> >
> > Changes the results of ffmpeg-fix_sub_duration_heartbeat - tested by
> > JEEB to be more correct and deterministic.
> > ---
> > Fixed the hang. Also updated the public branch.
>
> Still breaking sub2video with time shift, see the test case I already
> sent twice.
See my email from Wednesday, it's not actually broken.
--
Anton Khirnov
* Re: [FFmpeg-devel] [PATCH 13/13 v3] fftools/ffmpeg: convert to a threaded architecture
2023-12-01 14:27 ` Anton Khirnov
@ 2023-12-01 14:42 ` Nicolas George
2023-12-01 14:46 ` Anton Khirnov
0 siblings, 1 reply; 49+ messages in thread
From: Nicolas George @ 2023-12-01 14:42 UTC (permalink / raw)
To: FFmpeg development discussions and patches
Anton Khirnov (12023-12-01):
> > See my email from Wednesday, it's not actually broken.
I do not have a mail from you from Wednesday. When something succeeds
with the current code and fails with “Error while add the frame to
buffer source(Cannot allocate memory)”, that is broken.
--
Nicolas George
* Re: [FFmpeg-devel] [PATCH 13/13 v3] fftools/ffmpeg: convert to a threaded architecture
2023-12-01 14:42 ` Nicolas George
@ 2023-12-01 14:46 ` Anton Khirnov
2023-12-01 14:50 ` Nicolas George
0 siblings, 1 reply; 49+ messages in thread
From: Anton Khirnov @ 2023-12-01 14:46 UTC (permalink / raw)
To: FFmpeg development discussions and patches
Quoting Nicolas George (2023-12-01 15:42:38)
> Anton Khirnov (12023-12-01):
> > See my email from Wednesday, it's not actually broken.
>
> I do not have a mail from you from Wednesday.
http://lists.ffmpeg.org/pipermail/ffmpeg-devel/2023-November/317536.html
> When something succeeds with the current code and fails with “Error
> while add the frame to buffer source(Cannot allocate memory)”, that is
> broken.
Not necessarily, when the current code is broken (which you agreed with
in the last thread).
--
Anton Khirnov
* Re: [FFmpeg-devel] [PATCH 13/13 v3] fftools/ffmpeg: convert to a threaded architecture
2023-12-01 14:46 ` Anton Khirnov
@ 2023-12-01 14:50 ` Nicolas George
2023-12-01 14:58 ` Anton Khirnov
0 siblings, 1 reply; 49+ messages in thread
From: Nicolas George @ 2023-12-01 14:50 UTC (permalink / raw)
To: FFmpeg development discussions and patches
Anton Khirnov (12023-12-01):
> > When something succeeds with the current code and fails with “Error
> > while add the frame to buffer source(Cannot allocate memory)”, that is
> > broken.
> Not necessarily, when the current code is broken (which you agreed with
> in the last thread).
I do not know what you are talking about; I agreed to no such thing.
The test case I gave you is correct and with the current code it works.
Your changes break it, removing an important feature for users. Please
include it in your testing routine and only submit again when you have
fixed it.
--
Nicolas George
* Re: [FFmpeg-devel] [PATCH 13/13 v3] fftools/ffmpeg: convert to a threaded architecture
2023-12-01 14:50 ` Nicolas George
@ 2023-12-01 14:58 ` Anton Khirnov
2023-12-01 15:25 ` Nicolas George
0 siblings, 1 reply; 49+ messages in thread
From: Anton Khirnov @ 2023-12-01 14:58 UTC (permalink / raw)
To: FFmpeg development discussions and patches
Quoting Nicolas George (2023-12-01 15:50:39)
> Anton Khirnov (12023-12-01):
> > > When something succeeds with the current code and fails with “Error
> > > while add the frame to buffer source(Cannot allocate memory)”, that is
> > > broken.
> > Not necessarily, when the current code is broken (which you agreed with
> > in the last thread).
>
> I do not know what you are talking about; I agreed to no such thing.
http://lists.ffmpeg.org/pipermail/ffmpeg-devel/2023-November/316787.html
The current code is broken because its output depends on the order in
which the frames from different inputs arrive at the filtergraph. It
just so happens that it is deterministically broken currently. After
this patchset it becomes non-deterministically broken, which forces me
to do something about it.
> The test case I gave you is correct and with the current code it works.
> Your changes break it, removing an important feature for users. Please
> include it in your testing routine and only submit again when you have
> fixed it.
Your testcase offsets two streams by 60 seconds. That implies 60 seconds
of buffering. You would get this same amount of buffering in the muxer if
you did the same offsetting with transcoding or remuxing two streams
from the same source.
One can also avoid this buffering entirely by simply opening the file
twice.
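As a rough sketch, that double-open workaround could look like the command below. The filenames and the 60-second offset are placeholders taken from the test-case description, not a command posted in this thread; it relies on ffmpeg's real -itsoffset option and sub2video overlay behaviour:

```sh
# Open the same file twice: input #0 supplies the video, input #1
# (delayed 60 s via -itsoffset) supplies the subtitle stream, which
# sub2video renders and overlay composites. Each input has its own
# demuxer, so no cross-stream buffering is needed in a single one.
ffmpeg -i input.mkv \
       -itsoffset 60 -i input.mkv \
       -filter_complex '[0:v][1:s]overlay' \
       out.mkv
```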
So I don't think your demand is reasonable, unless you're also
suggesting a specific way of implementing this.
--
Anton Khirnov
* Re: [FFmpeg-devel] [PATCH 13/13 v3] fftools/ffmpeg: convert to a threaded architecture
2023-12-01 14:58 ` Anton Khirnov
@ 2023-12-01 15:25 ` Nicolas George
2023-12-01 19:49 ` Anton Khirnov
0 siblings, 1 reply; 49+ messages in thread
From: Nicolas George @ 2023-12-01 15:25 UTC (permalink / raw)
To: FFmpeg development discussions and patches
Anton Khirnov (12023-12-01):
> http://lists.ffmpeg.org/pipermail/ffmpeg-devel/2023-November/316787.html
So not Wednesday but Thursday three weeks ago.
I did not agree that the current code was broken.
> The current code is broken because its output depends on the order in
> which the frames from different inputs arrive at the filtergraph. It
> just so happens that it is deterministically broken currently. After
> this patchset it becomes non-deterministically broken, which forces me
> to do something about it.
That is not true. The current code works and gives correct results if the
file is properly muxed: it cannot be said to be broken.
> Your testcase offsets two streams by 60 seconds.
Indeed.
> That implies 60 seconds
> of buffering. You would get this same amount of bufering in the muxer if
> you did the same offsetting with transcoding or remuxing two streams
> from the same source.
> One can also avoid this buffering entirely by simply opening the file
> twice.
You are wrong. You would be right if the offset had been in the opposite
direction. But in the case I chose, it is the subtitles stream that is
delayed, and 60 seconds of subtitles means a few dozen frames at most,
not many hundreds.
Your change to the sub2video heartbeat makes it continuous, and turns the
few dozen frames into many hundreds: this is what is breaking.
So I say it again: this test case is useful and currently works, include
it in your test case so that your patch series keeps the feature
working.
I can consider sending a patch to add it to FATE, but not before Monday.
Also, note that in the grand-parent message from the one you quoted
above, I gave you a solution to make it work. You said that it was
already what you did, but obviously it is not, so let us resolve this
misunderstanding.
--
Nicolas George
* Re: [FFmpeg-devel] [PATCH 13/13 v3] fftools/ffmpeg: convert to a threaded architecture
2023-12-01 15:25 ` Nicolas George
@ 2023-12-01 19:49 ` Anton Khirnov
2023-12-04 15:25 ` Nicolas George
0 siblings, 1 reply; 49+ messages in thread
From: Anton Khirnov @ 2023-12-01 19:49 UTC (permalink / raw)
To: FFmpeg development discussions and patches
Quoting Nicolas George (2023-12-01 16:25:04)
> Anton Khirnov (12023-12-01):
> > http://lists.ffmpeg.org/pipermail/ffmpeg-devel/2023-November/316787.html
>
> So not Wednesday but Tursday three weeks ago.
The Wednesday email was the one I linked to two emails ago. Here it is
again:
http://lists.ffmpeg.org/pipermail/ffmpeg-devel/2023-November/317536.html
> I did not agree that the current code was broken.
>
> > The current code is broken because its output depends on the order in
> > which the frames from different inputs arrive at the filtergraph. It
> > just so happens that it is deterministically broken currently. After
> > this patchset it becomes non-deterministically broken, which forces me
> > to do something about it.
>
> That is not true. The current code works and gives correct result if the
> file is properly muxed: it cannot be said to be broken.
I can definitely say it is broken and I already told you why. But if you
want something more specific:
* the output of your example with the current master changes depending
on the number of decoder frame threads; my patch fixes that
* in fate-filter-overlay-dvdsub-2397 subtitles appear two frames too
early; again, my patch fixes that
> [...]
>
> Also, note that in the grand-parent message from the one you quoted
> above, I gave you a solution to make it work. You told that it was
> already what you did, but obviously it is not, so let us resolve this
> misunderstanding.
IIUC your suggestion was to send heartbeat packets from demuxer to
decoder, then have the decoder forward them to filtergraph.
That is EXACTLY what I'm doing in the final patch, see [1]. It also does
not address this problem at all, because it is caused by the heartbeat
processing code making decisions based on
av_buffersrc_get_nb_failed_requests(), which fundamentally depends on
what frames previously arrived on the video input.
[1] https://git.khirnov.net/libav.git/tree/fftools/ffmpeg_demux.c?h=ffmpeg_threading#n527
https://git.khirnov.net/libav.git/tree/fftools/ffmpeg_dec.c?h=ffmpeg_threading#n406
--
Anton Khirnov
* Re: [FFmpeg-devel] [PATCH 13/13 v3] fftools/ffmpeg: convert to a threaded architecture
2023-12-01 19:49 ` Anton Khirnov
@ 2023-12-04 15:25 ` Nicolas George
2023-12-04 16:25 ` Anton Khirnov
0 siblings, 1 reply; 49+ messages in thread
From: Nicolas George @ 2023-12-04 15:25 UTC (permalink / raw)
To: FFmpeg development discussions and patches
Anton Khirnov (12023-12-01):
> I can definitely say it is broken and I already told you why. But if you
> want something more specific:
> * the output of your example with the current master changes depending
> on the number of decoder frame threads; my patch fixes that
> * in fate-filter-overlay-dvdsub-2397 subtitles appear two frames too
> early; again, my patch fixes that
Ok, some cases are broken. Fine, this is a hard task, some cases are
impossible. That does not allow you to break cases that are currently
working.
> IIUC your suggestion was to send heartbeat packets from demuxer to
> decoder, then have the decoder forward them to filtergraph.
>
> That is EXACTLY what I'm doing in the final patch, see [1]. It also does
> not address this problem at all, because it is caused by the heartbeat
> processing code making decisions based on
> av_buffersrc_get_nb_failed_requests(), which fundamentally depends on
> what frames previously arrived on the video input.
Then fix it. I have given you a command that currently works and
produces valid output: make sure it still works with your changes. And
that will require sending heartbeat frames only when they are needed,
keeping the current logic in place.
--
Nicolas George
* Re: [FFmpeg-devel] [PATCH 13/13 v3] fftools/ffmpeg: convert to a threaded architecture
2023-12-04 15:25 ` Nicolas George
@ 2023-12-04 16:25 ` Anton Khirnov
2023-12-04 16:37 ` Nicolas George
0 siblings, 1 reply; 49+ messages in thread
From: Anton Khirnov @ 2023-12-04 16:25 UTC (permalink / raw)
To: FFmpeg development discussions and patches
Quoting Nicolas George (2023-12-04 16:25:52)
> Anton Khirnov (12023-12-01):
> > I can definitely say it is broken and I already told you why. But if you
> > want something more specific:
> > * the output of your example with the current master changes depending
> > on the number of decoder frame threads; my patch fixes that
> > * in fate-filter-overlay-dvdsub-2397 subtitles appear two frames too
> > early; again, my patch fixes that
>
> Ok, some cases are broken. Fine, this is a hard task, some cases are
> impossible. That does not allow you to break cases that are currently
> working.
Nothing is being broken. Your highly contrived and currently broken
testcase buffers a bounded number of extra frames in order to stop being
broken. If that extra buffering is an actual problem for someone, it can
be easily avoided by opening the file twice.
> > IIUC your suggestion was to send heartbeat packets from demuxer to
> > decoder, then have the decoder forward them to filtergraph.
> >
> > That is EXACTLY what I'm doing in the final patch, see [1]. It also does
> > not address this problem at all, because it is caused by the heartbeat
> > processing code making decisions based on
> > av_buffersrc_get_nb_failed_requests(), which fundamentally depends on
> > what frames previously arrived on the video input.
>
> Then fix it. I have given you a command that currently works and
> produces valid output:
As I said before, your command does NOT work. Its output changes
unpredictably depending on unrelated parameters.
> make sure it still works with your changes.
After my changes it actually does work reliably.
I maintain that your demand to "fix" your testcase (i.e. reduce its
memory consumption) is highly unreasonable – unless you specify how
exactly that is supposed to be accomplished while preserving
determinism.
--
Anton Khirnov
* Re: [FFmpeg-devel] [PATCH 13/13 v3] fftools/ffmpeg: convert to a threaded architecture
2023-12-04 16:25 ` Anton Khirnov
@ 2023-12-04 16:37 ` Nicolas George
2023-12-04 17:07 ` Anton Khirnov
0 siblings, 1 reply; 49+ messages in thread
From: Nicolas George @ 2023-12-04 16:37 UTC (permalink / raw)
To: FFmpeg development discussions and patches
Anton Khirnov (12023-12-04):
> broken. If that extra buffering is an actual problem for someone, it can
> be easily avoided by opening the file twice.
Not a solution if the file is streamed or generated.
> As I said before, your command does NOT work. Its output changes
> unpredictably depending on unrelated parameters.
It still produces correct output in most cases, which is what
matters to users.
> I maintain that your demand to "fix" your testcase (i.e. reduce its
> memory consumption) is highly unreasonable –
My demand is not that you REDUCE the memory consumption, my demand is
that you DO NOT INCREASE IT HUNDREDFOLD.
That is a perfectly reasonable demand.
> unless you specify how
> exactly that is supposed to be accomplished while preserving
> determinism.
Fixing the bugs introduced by threading is the job of the person who
wants to introduce threading. I can offer the help of my expertise about
lavfi and subtitles, of course. But your attitude to just pretend the
problem does not exist is unacceptable.
--
Nicolas George
* Re: [FFmpeg-devel] [PATCH 13/13 v3] fftools/ffmpeg: convert to a threaded architecture
2023-12-04 16:37 ` Nicolas George
@ 2023-12-04 17:07 ` Anton Khirnov
2023-12-06 12:55 ` Nicolas George
0 siblings, 1 reply; 49+ messages in thread
From: Anton Khirnov @ 2023-12-04 17:07 UTC (permalink / raw)
To: FFmpeg development discussions and patches
Quoting Nicolas George (2023-12-04 17:37:23)
> Anton Khirnov (12023-12-04):
> > broken. If that extra buffering is an actual problem for someone, it can
> > be easily avoided by opening the file twice.
>
> Not a solution if the file is streamed or generated.
>
> > As I said before, your command does NOT work. Its output changes
> > unpredictably depending on unrelated parameters.
>
> It still produces correct output in most of the cases, which is what
> matters to users.
$ for i in $(seq 1 4 64); do
      echo -ne "$i\t";
      ./ffmpeg -v fatal -threads $i -i sub.mkv -preset ultrafast \
          -lavfi '[0:s]setpts=PTS+60/TB[s] ; [0:v][s]overlay' \
          -bitexact -y -f matroska md5:
  done
1 287e929cba6c67c6ce8e35954548048d
5 7e234d83cca90d4b0ee4d563e21a8bd8
9 abfd093a36b1661db022616d97c45fad
13 f6be9d63a7dce69cd16581895923b196
17 2c7144c01f6294e65e305f602b58a718
21 99564d81199a1f453ccff24ac2db7eac
25 02ec0aadb03a0205ccacb4c873ee9ad9
29 76643205916887fd444525ea8bb610fb
33 6267a2baeb1554bc5f219890ac8b37aa
37 f955daadf62c91ddce5705561e32804f
41 523c61ebad03b6ba297812cb7939b679
45 96c48966f459442707f29f854aa59c3b
49 ad92b3f79c7bb31fdc272b2a81985334
53 362a7d6c0a681a049adf4802e0dc8771
57 1529ffd4f487ea13b9a37f4f1b9879eb
61 53db192ac534d348b3ed63fdcbac945a
Which of these are you saying is correct?
> > I maintain that your demand to "fix" your testcase (i.e. reduce its
> > memory consumption) is highly unreasonable –
>
> My demand is not that you REDUCE the memory consumption, my demand is
> that you DO NOT INCREASE IT HUNDREDFOLD.
>
> That is a perfectly reasonable demand.
>
> > unless you specify how
> > exactly that is supposed to be accomplished while preserving
> > determinism.
>
> Fixing the bugs introduced by threading
The only bug that's been established to exist so far is in your
heartbeat code, which produces random output as per above.
Buffering is by itself not a bug, otherwise you'd have to say the lavf
interleaving queue is a bug.
So for the last time - either suggest a specific and practical way of
reducing memory consumption or stop interfering with my work.
--
Anton Khirnov
* Re: [FFmpeg-devel] [PATCH 13/13 v3] fftools/ffmpeg: convert to a threaded architecture
2023-12-04 17:07 ` Anton Khirnov
@ 2023-12-06 12:55 ` Nicolas George
2023-12-06 13:21 ` James Almer
0 siblings, 1 reply; 49+ messages in thread
From: Nicolas George @ 2023-12-06 12:55 UTC (permalink / raw)
To: FFmpeg development discussions and patches
Anton Khirnov (12023-12-04):
> Which of these are you saying is correct?
I do not know? Do you think I am able to reverse MD5 mentally? I am
flattered, but I am sorry to confess I am not.
Why do you not look at the resulting videos to judge for yourself? But
to do that, you will need to remember (or learn) two things:
First, most people do not have that many CPU threads available, and if
they do they will spend them on encoding more than decoding.
Second, and most important: for subtitles, in many many cases, a few
frames of shift do not matter because the timing in the source material
is not that accurate.
So the answer to your question is: probably most of the ones generated
with a sane number of threads are correct, in the sense that the result
is within the acceptable accuracy of subtitles sync and useful for the
user.
Of course, if the use case is one where perfect accuracy is necessary,
users need to resort to a slower and bulkier procedure (like you
suggested: open the file twice, which might require storing it entirely)
to get it.
So really, what you pretend is not breaking anything actually removes
one of the options currently available to users in the compromise
between speed, latency and accuracy.
So I demand you stop pretending you are not breaking anything, stop
pretending it is currently broken, just so you can move forward without
bothering to search for a solution: that starts to feel like laziness,
and it always felt like rudeness, because I spent a lot of effort in
getting this to work in the cases where it can.
> The only bug that's been established to exist so far is in your
> heartbeat code, which produces random output as per above.
As I explained many times, this is not a bug.
> Buffering is by itself not a bug, otherwise you'd have to say the lavf
> interleaving queue is a bug.
Once again, buffering thousands of frames and crashing from running out
of memory, when the current code succeeds and produces a useful result,
is a regression, and the patch series cannot be applied until that
regression is fixed.
> So for the last time - either suggest a specific and practical way of
> reducing memory consumption or stop interfering with my work.
The specific and practical way is to let the current logic in place.
There might be a few tweaks to make it more accurate, like looking into
this comment:
    /* subtitles seem to be usually muxed ahead of other streams;
       if not, subtracting a larger time here is necessary */
    pts2 = av_rescale_q(pts, tb, ifp->time_base) - 1;
But first, we need you to stop behaving as if my previous efforts did
not matter just because it does not overlap with your narrow use cases.
--
Nicolas George
* Re: [FFmpeg-devel] [PATCH 13/13 v3] fftools/ffmpeg: convert to a threaded architecture
2023-12-06 12:55 ` Nicolas George
@ 2023-12-06 13:21 ` James Almer
2023-12-06 13:38 ` Nicolas George
0 siblings, 1 reply; 49+ messages in thread
From: James Almer @ 2023-12-06 13:21 UTC (permalink / raw)
To: ffmpeg-devel
On 12/6/2023 9:55 AM, Nicolas George wrote:
> Anton Khirnov (12023-12-04):
>> Which of these are you saying is correct?
>
> I do not know? Do you think I am able to reverse MD5 mentally? I am
> flattered, but I am sorry to confess I am not.
>
> Why do you not look at the resulting videos to judge for yourself? But
I honestly can't believe you're arguing this. At this point you're just
being defensive of your position without really taking into account what
you were challenged with.
> to do that, you will need to remember (or learn two things):
And being condescending will not help your case.
>
> First, most people do not have that many CPU threads available, and if
> they do they will spend them on encoding more than decoding.
>
> Second, and most important: for subtitles, in many many cases, a few
> frames of shift do not matter because the timing in the source material
> is not that accurate.
>
> So the answer to your question is: probably most of the ones generated
> with a sane number of threads are correct, in the sense that the result
> is within the acceptable accuracy of subtitles sync and useful for the
> user.
How can you argue it's fine when you request bitexact output and do NOT
get bitexact output? Go ahead and add that command line as a FATE test.
See the runners turn yellow. Will you argue it's fine and not broken?
Number of threads should not matter, the output has to be deterministic.
Saying "Maybe the user likes what he gets. Varying amounts of artifacts
here and there, or a few frames of shift between runs. It's fine!" is
laughable.
>
> Of course, if the use case is one where perfect accuracy is necessary,
> users need to revert to a slower and more bulky procedure (like you
> suggested: open the file twice, which might require storing it entirely)
> to get it.
>
> So really, what you pretend is not breaking anything is really removing
> one of the options currently available to users in the compromise
> between speed, latency and accuracy.
>
> So I demand you stop pretending you are not breaking anything, stop
> pretending it is currently broken, just so you can move forward without
> bothering to search for a solution: that starts to feel like laziness,
> and it always felt like rudeness, because I spent a lot of effort in
> getting this to work in the cases where it can.
>
>> The only bug that's been established to exist so far is in your
>> heartbeat code, which produces random output as per above.
>
> As I explained many times, this is not a bug.
If I request -bitexact, I want bitexact output, regardless of running on
a core i3 or a Threadripper. There's nothing more to it.
>
>> Buffering is by itself not a bug, otherwise you'd have to say the lavf
>> interleaving queue is a bug.
>
> Once again, buffering thousands of frames and crashing because out of
> memory when the current code succeeds and produces a useful result is a
> regression and the patch series cannot be applied until that regression
> is fixed.
Calling random output that happens to be "acceptable" within the
subjective expectations of the user as useful sounds to me like you're
trying to find an excuse to keep buggy code with unpredictable results
around, just because it's been there for a long time.
>
>> So for the last time - either suggest a specific and practical way of
>> reducing memory consumption or stop interfering with my work.
>
> The specific and practical way is to let the current logic in place.
> There might be a few tweaks to make it more accurate, like looking into
> this comment:
>
> /* subtitles seem to be usually muxed ahead of other streams;
> if not, subtracting a larger time here is necessary */
> pts2 = av_rescale_q(pts, tb, ifp->time_base) - 1;
>
> But first, we need you to stop behaving as if my previous efforts did
> not mater just because it does not overlap with your narrow use cases.
Your previous efforts mattered, but evidently did not yield completely
acceptable results, and this overhaul has exposed it.
So, like Anton has asked several times, suggest a way to keep
deterministic and bitexact output without exponentially increasing
memory consumption due to buffering.
* Re: [FFmpeg-devel] [PATCH 13/13 v3] fftools/ffmpeg: convert to a threaded architecture
2023-12-06 13:21 ` James Almer
@ 2023-12-06 13:38 ` Nicolas George
2023-12-07 17:26 ` Paul B Mahol
0 siblings, 1 reply; 49+ messages in thread
From: Nicolas George @ 2023-12-06 13:38 UTC (permalink / raw)
To: FFmpeg development discussions and patches
James Almer (12023-12-06):
> I honestly can't believe you're arguing this.
Yet I do, so I suggest you think a little harder to understand why I do.
> And being condescending will not help your case.
Can you tell that to Anton too please?
> If i request -bitexact, i want bitexact output, regardless of running on a
> core i3 or a Threadripper. There's nothing more to it.
I had not noticed the -bitexact on the test command line. I will grant
the change is acceptable if bit-exact is requested.
> Calling random output that happens to be "acceptable" within the subjective
> expectations of the user as useful sounds to me like you're trying to find
> an excuse to keep buggy code with unpredictable results around, just because
> it's been there for a long time.
Well, you are wrong, and what I explained is the real reason: most
subtitles are not timed that accurately. The subtitles on HBO's Last
Week Tonight, for example, can randomly lag or be early by several
seconds. Even serious subtitles, like the ones for scripted shows on
Netflix/Amazon/Crunchyroll/whatever vary by a few tenths of seconds,
i.e. several frames.
And I have used this code. And I look carefully at subtitles. If the
result was lower quality than the source material, I would have noticed
and I would have endeavored to fix it. There never was need.
Now, can Anton claim similar experience working with subtitles from the
real world? Most of this discussion points to the answer being no.
> So, like Anton has asked several times, suggest a way to keep deterministic
> and bitexact output without exponentially increasing memory consumption due
> to buffering.
I will spend time and effort searching for a solution when we agree to
work together.
“Do this or I will break your code” is an unacceptable behavior, whether
it is directed at me or at Paul or at anybody else, and I do not spend
effort when unacceptable behavior is tolerated.
--
Nicolas George
* Re: [FFmpeg-devel] [PATCH 13/13 v3] fftools/ffmpeg: convert to a threaded architecture
2023-12-06 13:38 ` Nicolas George
@ 2023-12-07 17:26 ` Paul B Mahol
2023-12-21 11:53 ` Paul B Mahol
0 siblings, 1 reply; 49+ messages in thread
From: Paul B Mahol @ 2023-12-07 17:26 UTC (permalink / raw)
To: FFmpeg development discussions and patches
On Wed, Dec 6, 2023 at 2:38 PM Nicolas George <george@nsup.org> wrote:
> [...]
>
From ffmpeg version 3.4 to version 6.1, demuxing of TrueHD files
(-c:a copy) dropped in speed by a factor of 2.
But a simple transcode from doc/examples is still several times faster
than that.
I bet using mutexes and condition variables is far from a perfect
solution, or the fftools/ code is buggy.
This is similar to the performance dropouts with -lavfi sources, but is
hit more by users of TrueHD or any other small-packet format.
* Re: [FFmpeg-devel] [PATCH 13/13 v3] fftools/ffmpeg: convert to a threaded architecture
2023-12-07 17:26 ` Paul B Mahol
@ 2023-12-21 11:53 ` Paul B Mahol
2023-12-22 10:26 ` Anton Khirnov
0 siblings, 1 reply; 49+ messages in thread
From: Paul B Mahol @ 2023-12-21 11:53 UTC (permalink / raw)
To: FFmpeg development discussions and patches
On Thu, Dec 7, 2023 at 6:26 PM Paul B Mahol <onemda@gmail.com> wrote:
>
> [...]
>
> From ffmpeg version 3.4 to version 6.1, demuxing of TrueHD files
> (-c:a copy) dropped in speed by a factor of 2.
> But a simple transcode from doc/examples is still several times faster
> than that.
>
> I bet using mutexes and condition variables is far from a perfect
> solution, or the fftools/ code is buggy.
>
> This is similar to the performance dropouts with -lavfi sources, but is
> hit more by users of TrueHD or any other small-packet format.
>
I found out that increasing the inter-thread queue size for
frames/packets in fftools/ from 1 to >1 increases decoding speed by 10%.
Numbers greater than 2 do not seem to make much difference.
Still, the current state is sub-optimal.
* Re: [FFmpeg-devel] [PATCH 13/13 v3] fftools/ffmpeg: convert to a threaded architecture
2023-12-21 11:53 ` Paul B Mahol
@ 2023-12-22 10:26 ` Anton Khirnov
0 siblings, 0 replies; 49+ messages in thread
From: Anton Khirnov @ 2023-12-22 10:26 UTC (permalink / raw)
To: FFmpeg development discussions and patches
Quoting Paul B Mahol (2023-12-21 12:53:58)
> On Thu, Dec 7, 2023 at 6:26 PM Paul B Mahol <onemda@gmail.com> wrote:
>
> >
> >
> > On Wed, Dec 6, 2023 at 2:38 PM Nicolas George <george@nsup.org> wrote:
> >
> >> James Almer (12023-12-06):
> >> > I honestly can't believe you're arguing this.
> >>
> >> Yet I do, so I suggest you think a little harder to understand why I do.
> >>
> >> > And being condescending will not help your case.
> >>
> >> Can you tell that to Anton too please?
> >>
> >> > If i request -bitexact, i want bitexact output, regardless of running
> >> on a
> >> > core i3 or a Threadripper. There's nothing more to it.
> >>
> >> I had not noticed the -bitexact on the test command line. I will grant
> >> the change is acceptable if bit-exact is requested.
> >>
> >> > Calling random output that happens to be "acceptable" within the
> >> subjective
> >> > expectations of the user as useful sounds to me like you're trying to
> >> find
> >> > an excuse to keep buggy code with unpredictable results around, just
> >> because
> >> > it's been there for a long time.
> >>
> >> Well, you are wrong, and what I explained is the real reason: most
> >> subtitles are not timed that accurately. The subtitles on HBO's Last
> >> Week Tonight, for example, can randomly lag or be early by several
> >> seconds. Even serious subtitles, like the ones for scripted shows on
> >> Netflix/Amazon/Crunchyroll/whatever vary by a few tenths of seconds,
> >> i.e. several frames.
> >>
> >> And I have used this code. And I look carefully at subtitles. If the
> >> result was lower quality than the source material, I would have noticed
> >> and I would have endeavored to fix it. There never was need.
> >>
> >> Now, can Anton claim similar experience working with subtitles from the
> >> real world? Most of this discussions points to the answer being no.
> >>
> >> > So, like Anton has asked several times, suggest a way to keep
> >> deterministic
> >> > and bitexact output without exponentially increasing memory consumption
> >> due
> >> > to buffering.
> >>
> >> I will spend time and effort searching for a solution when we agree to
> >> work together.
> >>
> >> “Do this or I will break your code” is an unacceptable behavior, whether
> >> it is directed at me or at Paul or at anybody else, and I do not spend
> >> effort when unacceptable behavior is tolerated.
> >>
> >>
> > From ffmpeg version 3.4 to version 6.1, the speed of demuxing truehd
> > files (-c:a copy) dropped by a factor of 2.
> > But a simple transcode based on doc/examples is still several times
> > faster than that.
> >
> > I suspect that either using mutexes and condition variables is far from
> > a perfect solution, or the fftools/ code is buggy.
> >
> > This is similar to the performance dropouts with -lavfi sources, but it
> > affects more users, i.e. anyone working with truehd or any other
> > small-packet format.
> >
>
> I found out that if I increase the queue size for frames/packets in the
> fftools/ threading from 1 to >1, decoding speed increases by 10%.
> Values greater than 2 do not seem to make much further difference.
>
> Still current state is sub-optimal.
It most likely is, but I expect the optimal value would depend on the
specific configuration. More testing welcome.
--
Anton Khirnov
Thread overview: 49+ messages (newest: 2023-12-22 10:26 UTC)
2023-11-23 19:14 [FFmpeg-devel] [PATCH v2] ffmpeg CLI multithreading Anton Khirnov
2023-11-23 19:14 ` [FFmpeg-devel] [PATCH 01/13] lavfi/buffersink: avoid leaking peeked_frame on uninit Anton Khirnov
2023-11-23 22:16 ` Paul B Mahol
2023-11-27 9:45 ` Nicolas George
2023-11-23 19:14 ` [FFmpeg-devel] [PATCH 02/13] fftools/ffmpeg_filter: make sub2video heartbeat more robust Anton Khirnov
2023-11-27 9:40 ` Nicolas George
2023-11-27 9:42 ` Nicolas George
2023-11-27 13:02 ` Paul B Mahol
2023-11-27 13:49 ` Nicolas George
2023-11-27 14:08 ` Paul B Mahol
2023-11-29 10:18 ` Anton Khirnov
2023-11-23 19:14 ` [FFmpeg-devel] [PATCH 03/13] fftools/ffmpeg_filter: track input/output index in {Input, Output}FilterPriv Anton Khirnov
2023-11-23 19:14 ` [FFmpeg-devel] [PATCH 04/13] fftools/ffmpeg: make sure FrameData is writable when we modify it Anton Khirnov
2023-11-23 19:15 ` [FFmpeg-devel] [PATCH 05/13] fftools/ffmpeg_filter: move filtering to a separate thread Anton Khirnov
2023-11-24 22:56 ` Michael Niedermayer
2023-11-25 20:18 ` [FFmpeg-devel] [PATCH 05/13 v2] " Anton Khirnov
2023-11-25 20:23 ` [FFmpeg-devel] [PATCH 05/13] " James Almer
2023-11-23 19:15 ` [FFmpeg-devel] [PATCH 06/13] fftools/ffmpeg_filter: buffer sub2video heartbeat frames like other frames Anton Khirnov
2023-11-23 19:15 ` [FFmpeg-devel] [PATCH 07/13] fftools/ffmpeg_filter: reindent Anton Khirnov
2023-11-23 19:15 ` [FFmpeg-devel] [PATCH 08/13] fftools/ffmpeg_mux: add muxing thread private data Anton Khirnov
2023-11-23 19:15 ` [FFmpeg-devel] [PATCH 09/13] fftools/ffmpeg_mux: move bitstream filtering to the muxer thread Anton Khirnov
2023-11-23 19:15 ` [FFmpeg-devel] [PATCH 10/13] fftools/ffmpeg_demux: switch from AVThreadMessageQueue to ThreadQueue Anton Khirnov
2023-11-23 19:15 ` [FFmpeg-devel] [PATCH 11/13] fftools/ffmpeg_enc: move encoding to a separate thread Anton Khirnov
2023-11-23 19:15 ` [FFmpeg-devel] [PATCH 12/13] fftools/ffmpeg: add thread-aware transcode scheduling infrastructure Anton Khirnov
2023-11-23 19:15 ` [FFmpeg-devel] [PATCH 13/13] fftools/ffmpeg: convert to a threaded architecture Anton Khirnov
2023-11-24 22:26 ` Michael Niedermayer
2023-11-25 20:32 ` [FFmpeg-devel] [PATCH 13/13 v2] " Anton Khirnov
2023-11-30 13:08 ` Michael Niedermayer
2023-11-30 13:34 ` Anton Khirnov
2023-11-30 20:48 ` Michael Niedermayer
2023-12-01 11:15 ` [FFmpeg-devel] [PATCH 13/13 v3] " Anton Khirnov
2023-12-01 14:24 ` Nicolas George
2023-12-01 14:27 ` Anton Khirnov
2023-12-01 14:42 ` Nicolas George
2023-12-01 14:46 ` Anton Khirnov
2023-12-01 14:50 ` Nicolas George
2023-12-01 14:58 ` Anton Khirnov
2023-12-01 15:25 ` Nicolas George
2023-12-01 19:49 ` Anton Khirnov
2023-12-04 15:25 ` Nicolas George
2023-12-04 16:25 ` Anton Khirnov
2023-12-04 16:37 ` Nicolas George
2023-12-04 17:07 ` Anton Khirnov
2023-12-06 12:55 ` Nicolas George
2023-12-06 13:21 ` James Almer
2023-12-06 13:38 ` Nicolas George
2023-12-07 17:26 ` Paul B Mahol
2023-12-21 11:53 ` Paul B Mahol
2023-12-22 10:26 ` Anton Khirnov