* [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error
@ 2024-05-11 20:23 Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 02/71] avcodec/ratecontrol: Pass RCContext directly in ff_rate_control_uninit() Andreas Rheinhardt
` (70 more replies)
0 siblings, 71 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:23 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
Happens on init_pass2() failure: in that case ff_rate_control_uninit()
is called both from ff_rate_control_init() and later by its caller,
so rc_eq_eval was freed twice.
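For illustration, a minimal sketch of the free-then-NULL idiom the fix applies (hypothetical struct and function names; the real code frees an AVExpr via av_expr_free() on a RateControlContext member):

```c
#include <stdlib.h>

/* Hypothetical stand-in for RateControlContext and its rc_eq_eval member. */
typedef struct Ctx {
    int *expr;
} Ctx;

/* Safe to call multiple times: freeing resets the pointer, so a second
 * uninit (e.g. by the caller after an init failure) is a harmless no-op. */
static void ctx_uninit(Ctx *c)
{
    free(c->expr);
    c->expr = NULL; /* without this line, calling uninit twice double-frees */
}
```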
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/ratecontrol.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/libavcodec/ratecontrol.c b/libavcodec/ratecontrol.c
index 9ee08ecb88..27017d7976 100644
--- a/libavcodec/ratecontrol.c
+++ b/libavcodec/ratecontrol.c
@@ -694,6 +694,7 @@ av_cold void ff_rate_control_uninit(MpegEncContext *s)
emms_c();
av_expr_free(rcc->rc_eq_eval);
+ rcc->rc_eq_eval = NULL;
av_freep(&rcc->entry);
}
--
2.40.1
_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
To unsubscribe, visit link above, or email
ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".
^ permalink raw reply [flat|nested] 75+ messages in thread
* [FFmpeg-devel] [PATCH v2 02/71] avcodec/ratecontrol: Pass RCContext directly in ff_rate_control_uninit()
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
@ 2024-05-11 20:50 ` Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 03/71] avcodec/ratecontrol: Don't call ff_rate_control_uninit() ourselves Andreas Rheinhardt
` (69 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:50 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpegvideo_enc.c | 2 +-
libavcodec/ratecontrol.c | 5 ++---
libavcodec/ratecontrol.h | 2 +-
libavcodec/snowenc.c | 2 +-
4 files changed, 5 insertions(+), 6 deletions(-)
diff --git a/libavcodec/mpegvideo_enc.c b/libavcodec/mpegvideo_enc.c
index b601a1a9e4..e31636d787 100644
--- a/libavcodec/mpegvideo_enc.c
+++ b/libavcodec/mpegvideo_enc.c
@@ -989,7 +989,7 @@ av_cold int ff_mpv_encode_end(AVCodecContext *avctx)
MpegEncContext *s = avctx->priv_data;
int i;
- ff_rate_control_uninit(s);
+ ff_rate_control_uninit(&s->rc_context);
ff_mpv_common_end(s);
diff --git a/libavcodec/ratecontrol.c b/libavcodec/ratecontrol.c
index 27017d7976..78022d80aa 100644
--- a/libavcodec/ratecontrol.c
+++ b/libavcodec/ratecontrol.c
@@ -623,7 +623,7 @@ av_cold int ff_rate_control_init(MpegEncContext *s)
}
if (init_pass2(s) < 0) {
- ff_rate_control_uninit(s);
+ ff_rate_control_uninit(rcc);
return -1;
}
}
@@ -688,9 +688,8 @@ av_cold int ff_rate_control_init(MpegEncContext *s)
return 0;
}
-av_cold void ff_rate_control_uninit(MpegEncContext *s)
+av_cold void ff_rate_control_uninit(RateControlContext *rcc)
{
- RateControlContext *rcc = &s->rc_context;
emms_c();
av_expr_free(rcc->rc_eq_eval);
diff --git a/libavcodec/ratecontrol.h b/libavcodec/ratecontrol.h
index 1f44b44341..a5434f5b90 100644
--- a/libavcodec/ratecontrol.h
+++ b/libavcodec/ratecontrol.h
@@ -87,8 +87,8 @@ struct MpegEncContext;
int ff_rate_control_init(struct MpegEncContext *s);
float ff_rate_estimate_qscale(struct MpegEncContext *s, int dry_run);
void ff_write_pass1_stats(struct MpegEncContext *s);
-void ff_rate_control_uninit(struct MpegEncContext *s);
int ff_vbv_update(struct MpegEncContext *s, int frame_size);
void ff_get_2pass_fcode(struct MpegEncContext *s);
+void ff_rate_control_uninit(RateControlContext *rcc);
#endif /* AVCODEC_RATECONTROL_H */
diff --git a/libavcodec/snowenc.c b/libavcodec/snowenc.c
index 43ca602762..b59dc04edc 100644
--- a/libavcodec/snowenc.c
+++ b/libavcodec/snowenc.c
@@ -2077,7 +2077,7 @@ static av_cold int encode_end(AVCodecContext *avctx)
SnowContext *const s = &enc->com;
ff_snow_common_end(s);
- ff_rate_control_uninit(&enc->m);
+ ff_rate_control_uninit(&enc->m.rc_context);
av_frame_free(&s->input_picture);
for (int i = 0; i < MAX_REF_FRAMES; i++) {
--
2.40.1
* [FFmpeg-devel] [PATCH v2 03/71] avcodec/ratecontrol: Don't call ff_rate_control_uninit() ourselves
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 02/71] avcodec/ratecontrol: Pass RCContext directly in ff_rate_control_uninit() Andreas Rheinhardt
@ 2024-05-11 20:50 ` Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 04/71] avcodec/mpegvideo, ratecontrol: Remove write-only skip_count Andreas Rheinhardt
` (68 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:50 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
It is currently done inconsistently: Only one error path
(namely the one from init_pass2()) made ff_rate_control_init()
call ff_rate_control_uninit(); in other error paths
cleanup was left to the caller.
Given that the only caller of this function already performs
the necessary cleanup, this commit changes it to always
rely on the caller to perform cleanup on error.
Also return the error code from init_pass2().
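The ownership convention the commit establishes can be sketched as follows (hypothetical names; the real init/uninit pair operates on MpegEncContext/RateControlContext):

```c
#include <stdlib.h>

typedef struct Rc { int *entry; } Rc;

static int pass2(Rc *rc) { return rc->entry ? 0 : -1; }

static void rc_uninit(Rc *rc)
{
    free(rc->entry);
    rc->entry = NULL;
}

/* On failure, just return the error code and let the caller clean up:
 * rc_init() itself never calls rc_uninit(), so there is exactly one
 * cleanup site and no risk of freeing the same resource twice. */
static int rc_init(Rc *rc, int fail)
{
    int res;
    rc->entry = fail ? NULL : malloc(sizeof(int));
    res = pass2(rc);
    if (res < 0)
        return res; /* caller runs rc_uninit() */
    return 0;
}

static int use_rc(int fail)
{
    Rc rc = { NULL };
    int res = rc_init(&rc, fail);
    rc_uninit(&rc); /* single cleanup site, on success and on error */
    return res;
}
```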
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/ratecontrol.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/libavcodec/ratecontrol.c b/libavcodec/ratecontrol.c
index 78022d80aa..3219e1f60f 100644
--- a/libavcodec/ratecontrol.c
+++ b/libavcodec/ratecontrol.c
@@ -622,10 +622,9 @@ av_cold int ff_rate_control_init(MpegEncContext *s)
p = next;
}
- if (init_pass2(s) < 0) {
- ff_rate_control_uninit(rcc);
- return -1;
- }
+ res = init_pass2(s);
+ if (res < 0)
+ return res;
}
if (!(s->avctx->flags & AV_CODEC_FLAG_PASS2)) {
--
2.40.1
* [FFmpeg-devel] [PATCH v2 04/71] avcodec/mpegvideo, ratecontrol: Remove write-only skip_count
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 02/71] avcodec/ratecontrol: Pass RCContext directly in ff_rate_control_uninit() Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 03/71] avcodec/ratecontrol: Don't call ff_rate_control_uninit() ourselves Andreas Rheinhardt
@ 2024-05-11 20:50 ` Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 05/71] avcodec/ratecontrol: Avoid padding in RateControlEntry Andreas Rheinhardt
` (67 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:50 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
skip_count has been write-only since commit 6cf0cb8935f515a7b5f79d2e3cf02bd0764943bf.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/h261enc.c | 1 -
libavcodec/ituh263enc.c | 1 -
libavcodec/mpeg12enc.c | 1 -
libavcodec/mpeg4videoenc.c | 3 ---
libavcodec/mpegvideo.h | 1 -
libavcodec/mpegvideo_enc.c | 4 ----
libavcodec/msmpeg4enc.c | 1 -
libavcodec/ratecontrol.c | 14 +++++++++-----
libavcodec/ratecontrol.h | 1 -
9 files changed, 9 insertions(+), 18 deletions(-)
diff --git a/libavcodec/h261enc.c b/libavcodec/h261enc.c
index 438ebb63d9..20dd296711 100644
--- a/libavcodec/h261enc.c
+++ b/libavcodec/h261enc.c
@@ -253,7 +253,6 @@ void ff_h261_encode_mb(MpegEncContext *s, int16_t block[6][64],
if ((cbp | mvd) == 0) {
/* skip macroblock */
- s->skip_count++;
s->mb_skip_run++;
s->last_mv[0][0][0] = 0;
s->last_mv[0][0][1] = 0;
diff --git a/libavcodec/ituh263enc.c b/libavcodec/ituh263enc.c
index 97abfb3f45..4741ada853 100644
--- a/libavcodec/ituh263enc.c
+++ b/libavcodec/ituh263enc.c
@@ -512,7 +512,6 @@ void ff_h263_encode_mb(MpegEncContext * s,
s->misc_bits++;
s->last_bits++;
}
- s->skip_count++;
return;
}
diff --git a/libavcodec/mpeg12enc.c b/libavcodec/mpeg12enc.c
index f956dde78f..fdb1b1e4a6 100644
--- a/libavcodec/mpeg12enc.c
+++ b/libavcodec/mpeg12enc.c
@@ -824,7 +824,6 @@ static av_always_inline void mpeg1_encode_mb_internal(MpegEncContext *s,
(s->mv[1][0][1] - s->last_mv[1][0][1])) : 0)) == 0))) {
s->mb_skip_run++;
s->qscale -= s->dquant;
- s->skip_count++;
s->misc_bits++;
s->last_bits++;
if (s->pict_type == AV_PICTURE_TYPE_P) {
diff --git a/libavcodec/mpeg4videoenc.c b/libavcodec/mpeg4videoenc.c
index 71dda802e2..bddc26650a 100644
--- a/libavcodec/mpeg4videoenc.c
+++ b/libavcodec/mpeg4videoenc.c
@@ -512,7 +512,6 @@ void ff_mpeg4_encode_mb(MpegEncContext *s, int16_t block[6][64],
/* nothing to do if this MB was skipped in the next P-frame */
if (s->next_picture.mbskip_table[s->mb_y * s->mb_stride + s->mb_x]) { // FIXME avoid DCT & ...
- s->skip_count++;
s->mv[0][0][0] =
s->mv[0][0][1] =
s->mv[1][0][0] =
@@ -536,7 +535,6 @@ void ff_mpeg4_encode_mb(MpegEncContext *s, int16_t block[6][64],
s->misc_bits++;
s->last_bits++;
}
- s->skip_count++;
return;
}
@@ -691,7 +689,6 @@ void ff_mpeg4_encode_mb(MpegEncContext *s, int16_t block[6][64],
s->misc_bits++;
s->last_bits++;
}
- s->skip_count++;
return;
}
diff --git a/libavcodec/mpegvideo.h b/libavcodec/mpegvideo.h
index 215df0fd5b..a8ed1b60b6 100644
--- a/libavcodec/mpegvideo.h
+++ b/libavcodec/mpegvideo.h
@@ -345,7 +345,6 @@ typedef struct MpegEncContext {
int i_tex_bits;
int p_tex_bits;
int i_count;
- int skip_count;
int misc_bits; ///< cbp, mb_type
int last_bits; ///< temp var used for calculating the above vars
diff --git a/libavcodec/mpegvideo_enc.c b/libavcodec/mpegvideo_enc.c
index e31636d787..f45a5f1b37 100644
--- a/libavcodec/mpegvideo_enc.c
+++ b/libavcodec/mpegvideo_enc.c
@@ -2532,7 +2532,6 @@ static inline void copy_context_before_encode(MpegEncContext *d,
d->i_tex_bits= s->i_tex_bits;
d->p_tex_bits= s->p_tex_bits;
d->i_count= s->i_count;
- d->skip_count= s->skip_count;
d->misc_bits= s->misc_bits;
d->last_bits= 0;
@@ -2561,7 +2560,6 @@ static inline void copy_context_after_encode(MpegEncContext *d,
d->i_tex_bits= s->i_tex_bits;
d->p_tex_bits= s->p_tex_bits;
d->i_count= s->i_count;
- d->skip_count= s->skip_count;
d->misc_bits= s->misc_bits;
d->mb_intra= s->mb_intra;
@@ -2875,7 +2873,6 @@ static int encode_thread(AVCodecContext *c, void *arg){
s->i_tex_bits=0;
s->p_tex_bits=0;
s->i_count=0;
- s->skip_count=0;
for(i=0; i<3; i++){
/* init last dc values */
@@ -3504,7 +3501,6 @@ static void merge_context_after_encode(MpegEncContext *dst, MpegEncContext *src)
MERGE(i_tex_bits);
MERGE(p_tex_bits);
MERGE(i_count);
- MERGE(skip_count);
MERGE(misc_bits);
MERGE(encoding_error[0]);
MERGE(encoding_error[1]);
diff --git a/libavcodec/msmpeg4enc.c b/libavcodec/msmpeg4enc.c
index 119ea8f15e..c159256068 100644
--- a/libavcodec/msmpeg4enc.c
+++ b/libavcodec/msmpeg4enc.c
@@ -405,7 +405,6 @@ void ff_msmpeg4_encode_mb(MpegEncContext * s,
put_bits(&s->pb, 1, 1);
s->last_bits++;
s->misc_bits++;
- s->skip_count++;
return;
}
diff --git a/libavcodec/ratecontrol.c b/libavcodec/ratecontrol.c
index 3219e1f60f..ecc157a9eb 100644
--- a/libavcodec/ratecontrol.c
+++ b/libavcodec/ratecontrol.c
@@ -39,7 +39,7 @@ void ff_write_pass1_stats(MpegEncContext *s)
{
snprintf(s->avctx->stats_out, 256,
"in:%d out:%d type:%d q:%d itex:%d ptex:%d mv:%d misc:%d "
- "fcode:%d bcode:%d mc-var:%"PRId64" var:%"PRId64" icount:%d skipcount:%d hbits:%d;\n",
+ "fcode:%d bcode:%d mc-var:%"PRId64" var:%"PRId64" icount:%d hbits:%d;\n",
s->current_picture_ptr->display_picture_number,
s->current_picture_ptr->coded_picture_number,
s->pict_type,
@@ -52,7 +52,7 @@ void ff_write_pass1_stats(MpegEncContext *s)
s->b_code,
s->mc_mb_var_sum,
s->mb_var_sum,
- s->i_count, s->skip_count,
+ s->i_count,
s->header_bits);
}
@@ -606,13 +606,17 @@ av_cold int ff_rate_control_init(MpegEncContext *s)
av_assert0(picture_number < rcc->num_entries);
rce = &rcc->entry[picture_number];
- e += sscanf(p, " in:%*d out:%*d type:%d q:%f itex:%d ptex:%d mv:%d misc:%d fcode:%d bcode:%d mc-var:%"SCNd64" var:%"SCNd64" icount:%d skipcount:%d hbits:%d",
+ e += sscanf(p, " in:%*d out:%*d type:%d q:%f itex:%d ptex:%d "
+ "mv:%d misc:%d "
+ "fcode:%d bcode:%d "
+ "mc-var:%"SCNd64" var:%"SCNd64" "
+ "icount:%d hbits:%d",
&rce->pict_type, &rce->qscale, &rce->i_tex_bits, &rce->p_tex_bits,
&rce->mv_bits, &rce->misc_bits,
&rce->f_code, &rce->b_code,
&rce->mc_mb_var_sum, &rce->mb_var_sum,
- &rce->i_count, &rce->skip_count, &rce->header_bits);
- if (e != 14) {
+ &rce->i_count, &rce->header_bits);
+ if (e != 13) {
av_log(s->avctx, AV_LOG_ERROR,
"statistics are damaged at line %d, parser out=%d\n",
i, e);
diff --git a/libavcodec/ratecontrol.h b/libavcodec/ratecontrol.h
index a5434f5b90..1b49889f75 100644
--- a/libavcodec/ratecontrol.h
+++ b/libavcodec/ratecontrol.h
@@ -50,7 +50,6 @@ typedef struct RateControlEntry{
int64_t mc_mb_var_sum;
int64_t mb_var_sum;
int i_count;
- int skip_count;
int f_code;
int b_code;
}RateControlEntry;
--
2.40.1
* [FFmpeg-devel] [PATCH v2 05/71] avcodec/ratecontrol: Avoid padding in RateControlEntry
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (2 preceding siblings ...)
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 04/71] avcodec/mpegvideo, ratecontrol: Remove write-only skip_count Andreas Rheinhardt
@ 2024-05-11 20:50 ` Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 06/71] avcodec/get_buffer: Remove redundant check Andreas Rheinhardt
` (66 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:50 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
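The reordering matters because of alignment: on typical 64-bit ABIs an int64_t member forces 8-byte alignment, so isolated 4-byte members next to it cost padding. A standalone sketch of the effect (not the real RateControlEntry):

```c
#include <stdint.h>

/* 4-byte members sandwiched between 8-byte members: the compiler may
 * insert padding before each int64_t to restore 8-byte alignment. */
struct Padded {
    float   a;
    int64_t b;
    int     c;
    int64_t d;
    int     e;
};

/* Grouping the 4-byte members together lets pairs of them share one
 * 8-byte slot, so the struct can only shrink (or stay the same size). */
struct Packed {
    float   a;
    int     c;
    int     e;
    int64_t b;
    int64_t d;
};
```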
---
libavcodec/ratecontrol.h | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/libavcodec/ratecontrol.h b/libavcodec/ratecontrol.h
index 1b49889f75..4d71a181b5 100644
--- a/libavcodec/ratecontrol.h
+++ b/libavcodec/ratecontrol.h
@@ -39,6 +39,9 @@ typedef struct Predictor{
typedef struct RateControlEntry{
int pict_type;
float qscale;
+ int i_count;
+ int f_code;
+ int b_code;
int mv_bits;
int i_tex_bits;
int p_tex_bits;
@@ -49,9 +52,6 @@ typedef struct RateControlEntry{
float new_qscale;
int64_t mc_mb_var_sum;
int64_t mb_var_sum;
- int i_count;
- int f_code;
- int b_code;
}RateControlEntry;
/**
--
2.40.1
* [FFmpeg-devel] [PATCH v2 06/71] avcodec/get_buffer: Remove redundant check
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (3 preceding siblings ...)
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 05/71] avcodec/ratecontrol: Avoid padding in RateControlEntry Andreas Rheinhardt
@ 2024-05-11 20:50 ` Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 07/71] avcodec/mpegpicture: Store linesize in ScratchpadContext Andreas Rheinhardt
` (65 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:50 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
It is unnecessary to check whether the number of planes
of an already existing audio pool coincides with the number
of planes to use for the frame: If the common format of both
is planar, then the number of planes coincides with the number
of channels, for which there is already a check*; if not,
then both the existing pool and the frame use a single plane.
*: In fact, one could reuse the pool in this case even if the
number of channels changes.
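The reasoning above can be modeled with a tiny helper (hypothetical; the real code derives planarity via av_sample_fmt_is_planar() on the pool/frame format):

```c
/* In a planar sample format every channel occupies its own plane;
 * in an interleaved (packed) format all channels share one plane. */
static int planes_for(int is_planar, int channels)
{
    return is_planar ? channels : 1;
}
```

Since the existing checks already compare the format (and hence planarity) and the channel count, two pools passing those checks necessarily agree on the plane count, which is why the separate plane comparison can go.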
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/get_buffer.c | 20 ++++++++------------
1 file changed, 8 insertions(+), 12 deletions(-)
diff --git a/libavcodec/get_buffer.c b/libavcodec/get_buffer.c
index 9b35fde7c6..ff19f61e86 100644
--- a/libavcodec/get_buffer.c
+++ b/libavcodec/get_buffer.c
@@ -65,20 +65,15 @@ static void frame_pool_free(FFRefStructOpaque unused, void *obj)
static int update_frame_pool(AVCodecContext *avctx, AVFrame *frame)
{
FramePool *pool = avctx->internal->pool;
- int i, ret, ch, planes;
-
- if (avctx->codec_type == AVMEDIA_TYPE_AUDIO) {
- int planar = av_sample_fmt_is_planar(frame->format);
- ch = frame->ch_layout.nb_channels;
- planes = planar ? ch : 1;
- }
+ int i, ret;
if (pool && pool->format == frame->format) {
if (avctx->codec_type == AVMEDIA_TYPE_VIDEO &&
pool->width == frame->width && pool->height == frame->height)
return 0;
- if (avctx->codec_type == AVMEDIA_TYPE_AUDIO && pool->planes == planes &&
- pool->channels == ch && frame->nb_samples == pool->samples)
+ if (avctx->codec_type == AVMEDIA_TYPE_AUDIO &&
+ pool->channels == frame->ch_layout.nb_channels &&
+ frame->nb_samples == pool->samples)
return 0;
}
@@ -141,7 +136,8 @@ static int update_frame_pool(AVCodecContext *avctx, AVFrame *frame)
break;
}
case AVMEDIA_TYPE_AUDIO: {
- ret = av_samples_get_buffer_size(&pool->linesize[0], ch,
+ ret = av_samples_get_buffer_size(&pool->linesize[0],
+ frame->ch_layout.nb_channels,
frame->nb_samples, frame->format, 0);
if (ret < 0)
goto fail;
@@ -153,9 +149,9 @@ static int update_frame_pool(AVCodecContext *avctx, AVFrame *frame)
}
pool->format = frame->format;
- pool->planes = planes;
- pool->channels = ch;
+ pool->channels = frame->ch_layout.nb_channels;
pool->samples = frame->nb_samples;
+ pool->planes = av_sample_fmt_is_planar(pool->format) ? pool->channels : 1;
break;
}
default: av_assert0(0);
--
2.40.1
* [FFmpeg-devel] [PATCH v2 07/71] avcodec/mpegpicture: Store linesize in ScratchpadContext
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (4 preceding siblings ...)
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 06/71] avcodec/get_buffer: Remove redundant check Andreas Rheinhardt
@ 2024-05-11 20:50 ` Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 08/71] avcodec/mpegvideo_dec: Sync linesize and uvlinesize between threads Andreas Rheinhardt
` (64 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:50 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
The mpegvideo-based codecs currently require the linesize to be
constant (except when the frame dimensions change); one reason
for this is that certain scratch buffers whose size depend on
linesize are only allocated once and are presumed to be correctly
sized if the pointers are != NULL.
This commit changes this by storing the actual linesize these
buffers belong to and reallocating the buffers if it does not
suffice. This is not enough to actually support changing linesizes,
but it is a start. And it is a prerequisite for the next patch.
Also don't emit an error message in case the source ctx's
edge_emu_buffer is unset in ff_mpeg_update_thread_context().
It need not be an error at all; e.g. it is a perfectly normal
state in case a hardware acceleration is used as the scratch
buffers are not allocated in this case (it is easy to run into
this issue with MPEG-4) or if the src context was not initialized
at all (e.g. because the first packet contained garbage).
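A reduced model of the new guard (hypothetical names; the real ff_mpeg_framesize_alloc() additionally checks for hwaccel and a minimum linesize, and sizes several distinct scratch buffers):

```c
#include <stdlib.h>

typedef struct Scratch {
    unsigned char *buf;
    int linesize; /* linesize the buffer was allocated for; 0 if unset */
} Scratch;

/* Reallocate only when the requested linesize exceeds what the buffer
 * was sized for; a smaller or equal request reuses the allocation. */
static int scratch_realloc(Scratch *sc, int linesize)
{
    int abslinesize = linesize < 0 ? -linesize : linesize;

    if (abslinesize <= sc->linesize)
        return 0; /* existing buffer is large enough */

    free(sc->buf);
    sc->buf = malloc((size_t)abslinesize * 70);
    if (!sc->buf) {
        sc->linesize = 0;
        return -1;
    }
    sc->linesize = abslinesize;
    return 0;
}
```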
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpegpicture.c | 19 ++++++++++++++-----
libavcodec/mpegpicture.h | 1 +
libavcodec/mpegvideo.c | 19 +++++++------------
libavcodec/mpegvideo_dec.c | 19 +++++++------------
4 files changed, 29 insertions(+), 29 deletions(-)
diff --git a/libavcodec/mpegpicture.c b/libavcodec/mpegpicture.c
index 06b6daa01a..aa882cf747 100644
--- a/libavcodec/mpegpicture.c
+++ b/libavcodec/mpegpicture.c
@@ -89,12 +89,16 @@ int ff_mpeg_framesize_alloc(AVCodecContext *avctx, MotionEstContext *me,
ScratchpadContext *sc, int linesize)
{
# define EMU_EDGE_HEIGHT (4 * 70)
- int alloc_size = FFALIGN(FFABS(linesize) + 64, 32);
+ int linesizeabs = FFABS(linesize);
+ int alloc_size = FFALIGN(linesizeabs + 64, 32);
+
+ if (linesizeabs <= sc->linesize)
+ return 0;
if (avctx->hwaccel)
return 0;
- if (linesize < 24) {
+ if (linesizeabs < 24) {
av_log(avctx, AV_LOG_ERROR, "Image too small, temporary buffers cannot function\n");
return AVERROR_PATCHWELCOME;
}
@@ -102,6 +106,9 @@ int ff_mpeg_framesize_alloc(AVCodecContext *avctx, MotionEstContext *me,
if (av_image_check_size2(alloc_size, EMU_EDGE_HEIGHT, avctx->max_pixels, AV_PIX_FMT_NONE, 0, avctx) < 0)
return AVERROR(ENOMEM);
+ av_freep(&sc->edge_emu_buffer);
+ av_freep(&me->scratchpad);
+
// edge emu needs blocksize + filter length - 1
// (= 17x17 for halfpel / 21x21 for H.264)
// VC-1 computes luma and chroma simultaneously and needs 19X19 + 9x9
@@ -110,9 +117,11 @@ int ff_mpeg_framesize_alloc(AVCodecContext *avctx, MotionEstContext *me,
// we also use this buffer for encoding in encode_mb_internal() needig an additional 32 lines
if (!FF_ALLOCZ_TYPED_ARRAY(sc->edge_emu_buffer, alloc_size * EMU_EDGE_HEIGHT) ||
!FF_ALLOCZ_TYPED_ARRAY(me->scratchpad, alloc_size * 4 * 16 * 2)) {
+ sc->linesize = 0;
av_freep(&sc->edge_emu_buffer);
return AVERROR(ENOMEM);
}
+ sc->linesize = linesizeabs;
me->temp = me->scratchpad;
sc->rd_scratchpad = me->scratchpad;
@@ -149,9 +158,9 @@ static int handle_pic_linesizes(AVCodecContext *avctx, Picture *pic,
return -1;
}
- if (!sc->edge_emu_buffer &&
- (ret = ff_mpeg_framesize_alloc(avctx, me, sc,
- pic->f->linesize[0])) < 0) {
+ ret = ff_mpeg_framesize_alloc(avctx, me, sc,
+ pic->f->linesize[0]);
+ if (ret < 0) {
av_log(avctx, AV_LOG_ERROR,
"get_buffer() failed to allocate context scratch buffers.\n");
ff_mpeg_unref_picture(pic);
diff --git a/libavcodec/mpegpicture.h b/libavcodec/mpegpicture.h
index a457586be5..215e7388ef 100644
--- a/libavcodec/mpegpicture.h
+++ b/libavcodec/mpegpicture.h
@@ -38,6 +38,7 @@ typedef struct ScratchpadContext {
uint8_t *rd_scratchpad; ///< scratchpad for rate distortion mb decision
uint8_t *obmc_scratchpad;
uint8_t *b_scratchpad; ///< scratchpad used for writing into write only buffers
+ int linesize; ///< linesize that the buffers in this context have been allocated for
} ScratchpadContext;
/**
diff --git a/libavcodec/mpegvideo.c b/libavcodec/mpegvideo.c
index 7af823b8bd..130ccb4c97 100644
--- a/libavcodec/mpegvideo.c
+++ b/libavcodec/mpegvideo.c
@@ -443,6 +443,7 @@ static void free_duplicate_context(MpegEncContext *s)
s->sc.rd_scratchpad =
s->sc.b_scratchpad =
s->sc.obmc_scratchpad = NULL;
+ s->sc.linesize = 0;
av_freep(&s->dct_error_sum);
av_freep(&s->me.map);
@@ -464,12 +465,9 @@ static void free_duplicate_contexts(MpegEncContext *s)
static void backup_duplicate_context(MpegEncContext *bak, MpegEncContext *src)
{
#define COPY(a) bak->a = src->a
- COPY(sc.edge_emu_buffer);
+ COPY(sc);
COPY(me.scratchpad);
COPY(me.temp);
- COPY(sc.rd_scratchpad);
- COPY(sc.b_scratchpad);
- COPY(sc.obmc_scratchpad);
COPY(me.map);
COPY(me.score_map);
COPY(blocks);
@@ -503,9 +501,9 @@ int ff_update_duplicate_context(MpegEncContext *dst, const MpegEncContext *src)
// exchange uv
FFSWAP(void *, dst->pblocks[4], dst->pblocks[5]);
}
- if (!dst->sc.edge_emu_buffer &&
- (ret = ff_mpeg_framesize_alloc(dst->avctx, &dst->me,
- &dst->sc, dst->linesize)) < 0) {
+ ret = ff_mpeg_framesize_alloc(dst->avctx, &dst->me,
+ &dst->sc, dst->linesize);
+ if (ret < 0) {
av_log(dst->avctx, AV_LOG_ERROR, "failed to allocate context "
"scratch buffers.\n");
return ret;
@@ -646,12 +644,9 @@ static void clear_context(MpegEncContext *s)
s->ac_val[0] =
s->ac_val[1] =
s->ac_val[2] =NULL;
- s->sc.edge_emu_buffer = NULL;
s->me.scratchpad = NULL;
- s->me.temp =
- s->sc.rd_scratchpad =
- s->sc.b_scratchpad =
- s->sc.obmc_scratchpad = NULL;
+ s->me.temp = NULL;
+ memset(&s->sc, 0, sizeof(s->sc));
s->bitstream_buffer = NULL;
diff --git a/libavcodec/mpegvideo_dec.c b/libavcodec/mpegvideo_dec.c
index 4353f1fd68..31403d9acc 100644
--- a/libavcodec/mpegvideo_dec.c
+++ b/libavcodec/mpegvideo_dec.c
@@ -167,18 +167,13 @@ do {\
}
// linesize-dependent scratch buffer allocation
- if (!s->sc.edge_emu_buffer)
- if (s1->linesize) {
- if (ff_mpeg_framesize_alloc(s->avctx, &s->me,
- &s->sc, s1->linesize) < 0) {
- av_log(s->avctx, AV_LOG_ERROR, "Failed to allocate context "
- "scratch buffers.\n");
- return AVERROR(ENOMEM);
- }
- } else {
- av_log(s->avctx, AV_LOG_ERROR, "Context scratch buffers could not "
- "be allocated due to unknown size.\n");
- }
+ ret = ff_mpeg_framesize_alloc(s->avctx, &s->me,
+ &s->sc, s1->linesize);
+ if (ret < 0) {
+ av_log(s->avctx, AV_LOG_ERROR, "Failed to allocate context "
+ "scratch buffers.\n");
+ return ret;
+ }
// MPEG-2/interlacing info
memcpy(&s->progressive_sequence, &s1->progressive_sequence,
--
2.40.1
* [FFmpeg-devel] [PATCH v2 08/71] avcodec/mpegvideo_dec: Sync linesize and uvlinesize between threads
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (5 preceding siblings ...)
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 07/71] avcodec/mpegpicture: Store linesize in ScratchpadContext Andreas Rheinhardt
@ 2024-05-11 20:50 ` Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 09/71] avcodec/mpegvideo_dec: Factor allocating dummy frames out Andreas Rheinhardt
` (63 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:50 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
linesize and uvlinesize are supposed to be the common linesize of all
the Y/UV-planes of all the currently cached pictures.
ff_mpeg_update_thread_context() syncs the pictures, yet it did not sync
linesize and uvlinesize. This mostly works, because ff_alloc_picture()
only accepts new pictures if they coincide with the linesize of the
already provided pictures (if any). Yet there is a catch: Linesize
changes are accepted when the dimensions change (in which case the
cached frames are discarded).
So imagine a scenario where all frame threads use the same dimension A
until a frame with a different dimension B is encountered in the
bitstream, only to be instantly reverted to A in the next picture. If
the user changes the linesize of the frames upon the change to dimension
B and keeps the linesize thereafter (possible if B > A),
ff_alloc_picture() will report an error when frame-threading is in use:
The thread decoding B will perform a frame size change and so will the
next thread in ff_mpeg_update_thread_context() as well as when decoding
its picture. But the next thread will (presuming it is not the same
thread that decoded B, i.e. presuming >= 3 threads) not perform a frame
size change, because the new frame size coincides with its old frame
size, yet the linesize it expects from ff_alloc_picture() is outdated,
so that it errors out.
It is also possible for the user to use the original linesizes for
the frame after the frame that reverted back to A; this will be
accepted, yet the assumption that the linesizes of all pictures
are the same will be broken, leading to segfaults.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpegvideo_dec.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/libavcodec/mpegvideo_dec.c b/libavcodec/mpegvideo_dec.c
index 31403d9acc..597ffde7f8 100644
--- a/libavcodec/mpegvideo_dec.c
+++ b/libavcodec/mpegvideo_dec.c
@@ -127,6 +127,9 @@ do {\
UPDATE_PICTURE(last_picture);
UPDATE_PICTURE(next_picture);
+ s->linesize = s1->linesize;
+ s->uvlinesize = s1->uvlinesize;
+
#define REBASE_PICTURE(pic, new_ctx, old_ctx) \
((pic && pic >= old_ctx->picture && \
pic < old_ctx->picture + MAX_PICTURE_COUNT) ? \
--
2.40.1
* [FFmpeg-devel] [PATCH v2 09/71] avcodec/mpegvideo_dec: Factor allocating dummy frames out
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (6 preceding siblings ...)
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 08/71] avcodec/mpegvideo_dec: Sync linesize and uvlinesize between threads Andreas Rheinhardt
@ 2024-05-11 20:50 ` Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 10/71] avcodec/mpegpicture: Mark dummy frames as such Andreas Rheinhardt
` (62 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:50 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
This will allow reusing it to allocate dummy frames for
the second field (which can be a P-field even if the first
field was an intra field).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpegvideo_dec.c | 85 +++++++++++++++++++++++---------------
libavcodec/mpegvideodec.h | 4 ++
2 files changed, 56 insertions(+), 33 deletions(-)
diff --git a/libavcodec/mpegvideo_dec.c b/libavcodec/mpegvideo_dec.c
index 597ffde7f8..efc257d43e 100644
--- a/libavcodec/mpegvideo_dec.c
+++ b/libavcodec/mpegvideo_dec.c
@@ -281,14 +281,21 @@ fail:
return ret;
}
-static int av_cold alloc_dummy_frame(MpegEncContext *s, Picture **picp)
+static int av_cold alloc_dummy_frame(MpegEncContext *s, Picture **picp, Picture *wpic)
{
Picture *pic;
- int ret = alloc_picture(s, picp, 1);
+ int ret = alloc_picture(s, &pic, 1);
if (ret < 0)
return ret;
- pic = *picp;
+ ff_mpeg_unref_picture(wpic);
+ ret = ff_mpeg_ref_picture(wpic, pic);
+ if (ret < 0) {
+ ff_mpeg_unref_picture(pic);
+ return ret;
+ }
+
+ *picp = pic;
ff_thread_report_progress(&pic->tf, INT_MAX, 0);
ff_thread_report_progress(&pic->tf, INT_MAX, 1);
@@ -314,6 +321,45 @@ static void color_frame(AVFrame *frame, int luma)
}
}
+int ff_mpv_alloc_dummy_frames(MpegEncContext *s)
+{
+ AVCodecContext *avctx = s->avctx;
+ int ret;
+
+ if ((!s->last_picture_ptr || !s->last_picture_ptr->f->buf[0]) &&
+ (s->pict_type != AV_PICTURE_TYPE_I)) {
+ if (s->pict_type == AV_PICTURE_TYPE_B && s->next_picture_ptr && s->next_picture_ptr->f->buf[0])
+ av_log(avctx, AV_LOG_DEBUG,
+ "allocating dummy last picture for B frame\n");
+ else if (s->codec_id != AV_CODEC_ID_H261 /* H.261 has no keyframes */ &&
+ (s->picture_structure == PICT_FRAME || s->first_field))
+ av_log(avctx, AV_LOG_ERROR,
+ "warning: first frame is no keyframe\n");
+
+ /* Allocate a dummy frame */
+ ret = alloc_dummy_frame(s, &s->last_picture_ptr, &s->last_picture);
+ if (ret < 0)
+ return ret;
+
+ if (!avctx->hwaccel) {
+ int luma_val = s->codec_id == AV_CODEC_ID_FLV1 || s->codec_id == AV_CODEC_ID_H263 ? 16 : 0x80;
+ color_frame(s->last_picture_ptr->f, luma_val);
+ }
+ }
+ if ((!s->next_picture_ptr || !s->next_picture_ptr->f->buf[0]) &&
+ s->pict_type == AV_PICTURE_TYPE_B) {
+ /* Allocate a dummy frame */
+ ret = alloc_dummy_frame(s, &s->next_picture_ptr, &s->next_picture);
+ if (ret < 0)
+ return ret;
+ }
+
+ av_assert0(s->pict_type == AV_PICTURE_TYPE_I || (s->last_picture_ptr &&
+ s->last_picture_ptr->f->buf[0]));
+
+ return 0;
+}
+
/**
* generic function called after decoding
* the header and before a frame is decoded.
@@ -382,34 +428,6 @@ int ff_mpv_frame_start(MpegEncContext *s, AVCodecContext *avctx)
s->current_picture_ptr ? s->current_picture_ptr->f->data[0] : NULL,
s->pict_type, s->droppable);
- if ((!s->last_picture_ptr || !s->last_picture_ptr->f->buf[0]) &&
- (s->pict_type != AV_PICTURE_TYPE_I)) {
- if (s->pict_type == AV_PICTURE_TYPE_B && s->next_picture_ptr && s->next_picture_ptr->f->buf[0])
- av_log(avctx, AV_LOG_DEBUG,
- "allocating dummy last picture for B frame\n");
- else if (s->codec_id != AV_CODEC_ID_H261)
- av_log(avctx, AV_LOG_ERROR,
- "warning: first frame is no keyframe\n");
-
- /* Allocate a dummy frame */
- ret = alloc_dummy_frame(s, &s->last_picture_ptr);
- if (ret < 0)
- return ret;
-
- if (!avctx->hwaccel) {
- int luma_val = s->codec_id == AV_CODEC_ID_FLV1 || s->codec_id == AV_CODEC_ID_H263 ? 16 : 0x80;
- color_frame(s->last_picture_ptr->f, luma_val);
- }
-
- }
- if ((!s->next_picture_ptr || !s->next_picture_ptr->f->buf[0]) &&
- s->pict_type == AV_PICTURE_TYPE_B) {
- /* Allocate a dummy frame */
- ret = alloc_dummy_frame(s, &s->next_picture_ptr);
- if (ret < 0)
- return ret;
- }
-
if (s->last_picture_ptr) {
if (s->last_picture_ptr->f->buf[0] &&
(ret = ff_mpeg_ref_picture(&s->last_picture,
@@ -423,8 +441,9 @@ int ff_mpv_frame_start(MpegEncContext *s, AVCodecContext *avctx)
return ret;
}
- av_assert0(s->pict_type == AV_PICTURE_TYPE_I || (s->last_picture_ptr &&
- s->last_picture_ptr->f->buf[0]));
+ ret = ff_mpv_alloc_dummy_frames(s);
+ if (ret < 0)
+ return ret;
/* set dequantizer, we can't do it during init as
* it might change for MPEG-4 and we can't do it in the header
diff --git a/libavcodec/mpegvideodec.h b/libavcodec/mpegvideodec.h
index 0b841bc1a1..42c2697749 100644
--- a/libavcodec/mpegvideodec.h
+++ b/libavcodec/mpegvideodec.h
@@ -50,6 +50,10 @@ void ff_mpv_decode_init(MpegEncContext *s, AVCodecContext *avctx);
int ff_mpv_common_frame_size_change(MpegEncContext *s);
int ff_mpv_frame_start(MpegEncContext *s, AVCodecContext *avctx);
+/**
+ * Ensure that the dummy frames are allocated according to pict_type if necessary.
+ */
+int ff_mpv_alloc_dummy_frames(MpegEncContext *s);
void ff_mpv_reconstruct_mb(MpegEncContext *s, int16_t block[12][64]);
void ff_mpv_report_decode_progress(MpegEncContext *s);
void ff_mpv_frame_end(MpegEncContext *s);
--
2.40.1
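The core of the refactored alloc_dummy_frame() above is a replace-with-rollback pattern: allocate a fresh picture, drop the stale working reference, take a new reference, and unreference the fresh picture again if that fails so nothing leaks. The following is a minimal standalone sketch of that pattern with a hypothetical reduced `Pic` type and plain reference counts; the real Picture/RefStruct machinery in mpegpicture.h is far more involved.

```c
#include <assert.h>

/* Hypothetical minimal picture: just a reference count,
 * where 0 means "unused/empty". */
typedef struct Pic {
    int refcount;
} Pic;

static void pic_unref(Pic *p)
{
    if (p->refcount > 0)
        p->refcount--;
}

static int pic_ref(Pic *dst, Pic *src)
{
    if (src->refcount <= 0)
        return -1;      /* cannot reference an empty picture */
    src->refcount++;    /* sketch: dst borrows src's data */
    dst->refcount = 1;
    return 0;
}

/* Mirrors the pattern in alloc_dummy_frame(): pick an unused
 * picture from the pool, make the working copy (wpic) reference
 * it, and roll the new picture back on failure. */
static int alloc_dummy(Pic **picp, Pic *wpic, Pic *pool, int pool_size)
{
    for (int i = 0; i < pool_size; i++) {
        if (pool[i].refcount == 0) {
            Pic *pic = &pool[i];
            pic->refcount = 1;      /* "allocated" */
            pic_unref(wpic);        /* drop the stale working ref */
            if (pic_ref(wpic, pic) < 0) {
                pic_unref(pic);     /* roll back on error */
                return -1;
            }
            *picp = pic;
            return 0;
        }
    }
    return -1;                      /* pool exhausted */
}
```

Note that the rollback is what the very first patch of this series (the double-free fix) is about in spirit: on an error path, ownership must end up in a consistent state.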
_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
To unsubscribe, visit link above, or email
ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".
^ permalink raw reply [flat|nested] 75+ messages in thread
* [FFmpeg-devel] [PATCH v2 10/71] avcodec/mpegpicture: Mark dummy frames as such
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (7 preceding siblings ...)
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 09/71] avcodec/mpegvideo_dec: Factor allocating dummy frames out Andreas Rheinhardt
@ 2024-05-11 20:50 ` Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 11/71] avcodec/mpeg12dec: Allocate dummy frames for non-I fields Andreas Rheinhardt
` (61 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:50 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
This will make it possible to avoid outputting them.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpegpicture.c | 2 ++
libavcodec/mpegpicture.h | 1 +
libavcodec/mpegvideo_dec.c | 2 ++
3 files changed, 5 insertions(+)
diff --git a/libavcodec/mpegpicture.c b/libavcodec/mpegpicture.c
index aa882cf747..88b4d5dec1 100644
--- a/libavcodec/mpegpicture.c
+++ b/libavcodec/mpegpicture.c
@@ -270,6 +270,7 @@ void ff_mpeg_unref_picture(Picture *pic)
if (pic->needs_realloc)
free_picture_tables(pic);
+ pic->dummy = 0;
pic->field_picture = 0;
pic->b_frame_score = 0;
pic->needs_realloc = 0;
@@ -331,6 +332,7 @@ int ff_mpeg_ref_picture(Picture *dst, Picture *src)
ff_refstruct_replace(&dst->hwaccel_picture_private,
src->hwaccel_picture_private);
+ dst->dummy = src->dummy;
dst->field_picture = src->field_picture;
dst->b_frame_score = src->b_frame_score;
dst->needs_realloc = src->needs_realloc;
diff --git a/libavcodec/mpegpicture.h b/libavcodec/mpegpicture.h
index 215e7388ef..664c116a47 100644
--- a/libavcodec/mpegpicture.h
+++ b/libavcodec/mpegpicture.h
@@ -70,6 +70,7 @@ typedef struct Picture {
/// RefStruct reference for hardware accelerator private data
void *hwaccel_picture_private;
+ int dummy; ///< Picture is a dummy and should not be output
int field_picture; ///< whether or not the picture was encoded in separate fields
int b_frame_score;
diff --git a/libavcodec/mpegvideo_dec.c b/libavcodec/mpegvideo_dec.c
index efc257d43e..bf274e0c48 100644
--- a/libavcodec/mpegvideo_dec.c
+++ b/libavcodec/mpegvideo_dec.c
@@ -288,6 +288,8 @@ static int av_cold alloc_dummy_frame(MpegEncContext *s, Picture **picp, Picture
if (ret < 0)
return ret;
+ pic->dummy = 1;
+
ff_mpeg_unref_picture(wpic);
ret = ff_mpeg_ref_picture(wpic, pic);
if (ret < 0) {
--
2.40.1
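The patch touches all three places where per-picture metadata lives: the flag must be reset on unref (so a reused slot carries no stale state) and copied on ref (so all references agree on whether a picture is a dummy). A reduced sketch of that invariant, with a hypothetical cut-down `Pic` holding only the fields relevant here:

```c
#include <assert.h>

/* Hypothetical reduced Picture: only the metadata fields
 * that ff_mpeg_unref_picture()/ff_mpeg_ref_picture() manage. */
typedef struct Pic {
    int dummy;          /* picture is a placeholder, do not output */
    int field_picture;
    int b_frame_score;
} Pic;

/* As in ff_mpeg_unref_picture(): every per-picture flag is
 * reset, otherwise a reused slot would leak stale state. */
static void pic_unref(Pic *pic)
{
    pic->dummy         = 0;
    pic->field_picture = 0;
    pic->b_frame_score = 0;
}

/* As in ff_mpeg_ref_picture(): every flag is also copied,
 * so all references agree on whether the picture is a dummy. */
static void pic_ref(Pic *dst, const Pic *src)
{
    dst->dummy         = src->dummy;
    dst->field_picture = src->field_picture;
    dst->b_frame_score = src->b_frame_score;
}
```

Forgetting either half of the pair is the classic bug with such flags: patch 11 below relies on `last_picture_ptr->dummy` surviving the ref in slice_end(), which only works because both sides are updated here.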
* [FFmpeg-devel] [PATCH v2 11/71] avcodec/mpeg12dec: Allocate dummy frames for non-I fields
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (8 preceding siblings ...)
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 10/71] avcodec/mpegpicture: Mark dummy frames as such Andreas Rheinhardt
@ 2024-05-11 20:50 ` Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 12/71] avcodec/mpegvideo_motion: Remove dead checks for existence of reference Andreas Rheinhardt
` (60 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:50 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
MPEG-2 allows pairing an intra field (as the first field)
with a P-field. In this case a conformant bitstream has to
satisfy certain restrictions to ensure that only the I field
is used for prediction; see section 7.6.3.5 of the MPEG-2
specification.
We do not check these restrictions; normally we simply allocate
dummy frames for reference in order to avoid checks later on.
This happens in ff_mpv_frame_start() and therefore does not
happen for a second field, which is inconsistent. Fix this by
allocating these dummy frames for the second field, too.
This already fixes two bugs:
1. Undefined pointer arithmetic in prefetch_motion() in
mpegvideo_motion.c where it is simply presumed that the reference
frame exists.
2. Several MPEG-2 hardware accelerations rely on last_picture
being allocated for P pictures and next picture for B pictures;
e.g. VDPAU returns VDP_STATUS_INVALID_HANDLE when decoding
an I-P fields pair because the forward_reference was set incorrectly.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpeg12dec.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/libavcodec/mpeg12dec.c b/libavcodec/mpeg12dec.c
index 21a214ef5b..9940ff898c 100644
--- a/libavcodec/mpeg12dec.c
+++ b/libavcodec/mpeg12dec.c
@@ -1372,6 +1372,9 @@ static int mpeg_field_start(MpegEncContext *s, const uint8_t *buf, int buf_size)
return ret;
}
}
+ ret = ff_mpv_alloc_dummy_frames(s);
+ if (ret < 0)
+ return ret;
for (int i = 0; i < 3; i++) {
s->current_picture.f->data[i] = s->current_picture_ptr->f->data[i];
@@ -1727,7 +1730,7 @@ static int slice_decode_thread(AVCodecContext *c, void *arg)
* Handle slice ends.
* @return 1 if it seems to be the last slice
*/
-static int slice_end(AVCodecContext *avctx, AVFrame *pict)
+static int slice_end(AVCodecContext *avctx, AVFrame *pict, int *got_output)
{
Mpeg1Context *s1 = avctx->priv_data;
MpegEncContext *s = &s1->mpeg_enc_ctx;
@@ -1758,14 +1761,16 @@ static int slice_end(AVCodecContext *avctx, AVFrame *pict)
return ret;
ff_print_debug_info(s, s->current_picture_ptr, pict);
ff_mpv_export_qp_table(s, pict, s->current_picture_ptr, FF_MPV_QSCALE_TYPE_MPEG2);
+ *got_output = 1;
} else {
/* latency of 1 frame for I- and P-frames */
- if (s->last_picture_ptr) {
+ if (s->last_picture_ptr && !s->last_picture_ptr->dummy) {
int ret = av_frame_ref(pict, s->last_picture_ptr->f);
if (ret < 0)
return ret;
ff_print_debug_info(s, s->last_picture_ptr, pict);
ff_mpv_export_qp_table(s, pict, s->last_picture_ptr, FF_MPV_QSCALE_TYPE_MPEG2);
+ *got_output = 1;
}
}
@@ -2204,14 +2209,9 @@ static int decode_chunks(AVCodecContext *avctx, AVFrame *picture,
s2->er.error_count += s2->thread_context[i]->er.error_count;
}
- ret = slice_end(avctx, picture);
+ ret = slice_end(avctx, picture, got_output);
if (ret < 0)
return ret;
- else if (ret) {
- // FIXME: merge with the stuff in mpeg_decode_slice
- if (s2->last_picture_ptr || s2->low_delay || s2->pict_type == AV_PICTURE_TYPE_B)
- *got_output = 1;
- }
}
s2->pict_type = 0;
--
2.40.1
* [FFmpeg-devel] [PATCH v2 12/71] avcodec/mpegvideo_motion: Remove dead checks for existence of reference
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (9 preceding siblings ...)
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 11/71] avcodec/mpeg12dec: Allocate dummy frames for non-I fields Andreas Rheinhardt
@ 2024-05-11 20:50 ` Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 13/71] avcodec/mpegvideo_motion: Optimize check away Andreas Rheinhardt
` (59 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:50 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
These references now always exist due to dummy frames.
Also remove the corresponding checks in the lowres code
in mpegvideo_dec.c.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpegvideo_dec.c | 12 ++++--------
libavcodec/mpegvideo_motion.c | 12 ++++--------
2 files changed, 8 insertions(+), 16 deletions(-)
diff --git a/libavcodec/mpegvideo_dec.c b/libavcodec/mpegvideo_dec.c
index bf274e0c48..c1f49bce14 100644
--- a/libavcodec/mpegvideo_dec.c
+++ b/libavcodec/mpegvideo_dec.c
@@ -862,8 +862,8 @@ static inline void MPV_motion_lowres(MpegEncContext *s,
s->mv[dir][1][0], s->mv[dir][1][1],
block_s, mb_y);
} else {
- if ( s->picture_structure != s->field_select[dir][0] + 1 && s->pict_type != AV_PICTURE_TYPE_B && !s->first_field
- || !ref_picture[0]) {
+ if (s->picture_structure != s->field_select[dir][0] + 1 &&
+ s->pict_type != AV_PICTURE_TYPE_B && !s->first_field) {
ref_picture = s->current_picture_ptr->f->data;
}
mpeg_motion_lowres(s, dest_y, dest_cb, dest_cr,
@@ -877,9 +877,8 @@ static inline void MPV_motion_lowres(MpegEncContext *s,
for (int i = 0; i < 2; i++) {
uint8_t *const *ref2picture;
- if ((s->picture_structure == s->field_select[dir][i] + 1 ||
- s->pict_type == AV_PICTURE_TYPE_B || s->first_field) &&
- ref_picture[0]) {
+ if (s->picture_structure == s->field_select[dir][i] + 1 ||
+ s->pict_type == AV_PICTURE_TYPE_B || s->first_field) {
ref2picture = ref_picture;
} else {
ref2picture = s->current_picture_ptr->f->data;
@@ -910,9 +909,6 @@ static inline void MPV_motion_lowres(MpegEncContext *s,
pix_op = s->h264chroma.avg_h264_chroma_pixels_tab;
}
} else {
- if (!ref_picture[0]) {
- ref_picture = s->current_picture_ptr->f->data;
- }
for (int i = 0; i < 2; i++) {
mpeg_motion_lowres(s, dest_y, dest_cb, dest_cr,
0, 0, s->picture_structure != i + 1,
diff --git a/libavcodec/mpegvideo_motion.c b/libavcodec/mpegvideo_motion.c
index 8922f5b1a5..01c8d82e98 100644
--- a/libavcodec/mpegvideo_motion.c
+++ b/libavcodec/mpegvideo_motion.c
@@ -739,8 +739,8 @@ static av_always_inline void mpv_motion_internal(MpegEncContext *s,
s->mv[dir][1][0], s->mv[dir][1][1], 8, mb_y);
}
} else {
- if ( s->picture_structure != s->field_select[dir][0] + 1 && s->pict_type != AV_PICTURE_TYPE_B && !s->first_field
- || !ref_picture[0]) {
+ if (s->picture_structure != s->field_select[dir][0] + 1 &&
+ s->pict_type != AV_PICTURE_TYPE_B && !s->first_field) {
ref_picture = s->current_picture_ptr->f->data;
}
@@ -755,9 +755,8 @@ static av_always_inline void mpv_motion_internal(MpegEncContext *s,
for (i = 0; i < 2; i++) {
uint8_t *const *ref2picture;
- if ((s->picture_structure == s->field_select[dir][i] + 1 ||
- s->pict_type == AV_PICTURE_TYPE_B || s->first_field) &&
- ref_picture[0]) {
+ if (s->picture_structure == s->field_select[dir][i] + 1 ||
+ s->pict_type == AV_PICTURE_TYPE_B || s->first_field) {
ref2picture = ref_picture;
} else {
ref2picture = s->current_picture_ptr->f->data;
@@ -787,9 +786,6 @@ static av_always_inline void mpv_motion_internal(MpegEncContext *s,
pix_op = s->hdsp.avg_pixels_tab;
}
} else {
- if (!ref_picture[0]) {
- ref_picture = s->current_picture_ptr->f->data;
- }
for (i = 0; i < 2; i++) {
mpeg_motion(s, dest_y, dest_cb, dest_cr,
s->picture_structure != i + 1,
--
2.40.1
* [FFmpeg-devel] [PATCH v2 13/71] avcodec/mpegvideo_motion: Optimize check away
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (10 preceding siblings ...)
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 12/71] avcodec/mpegvideo_motion: Remove dead checks for existence of reference Andreas Rheinhardt
@ 2024-05-11 20:50 ` Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 14/71] " Andreas Rheinhardt
` (58 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:50 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
Only MPEG-2 can have field motion vectors with coded fields.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpegvideo_motion.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/libavcodec/mpegvideo_motion.c b/libavcodec/mpegvideo_motion.c
index 01c8d82e98..5b72196395 100644
--- a/libavcodec/mpegvideo_motion.c
+++ b/libavcodec/mpegvideo_motion.c
@@ -719,7 +719,11 @@ static av_always_inline void mpv_motion_internal(MpegEncContext *s,
dir, ref_picture, qpix_op, pix_op);
break;
case MV_TYPE_FIELD:
- if (s->picture_structure == PICT_FRAME) {
+ // Only MPEG-1/2 can have a picture_structure != PICT_FRAME here.
+ if (!CONFIG_SMALL)
+ av_assert2(is_mpeg12 || s->picture_structure == PICT_FRAME);
+ if ((!CONFIG_SMALL && !is_mpeg12) ||
+ s->picture_structure == PICT_FRAME) {
if (!is_mpeg12 && s->quarter_sample) {
for (i = 0; i < 2; i++)
qpel_motion(s, dest_y, dest_cb, dest_cr,
@@ -739,6 +743,7 @@ static av_always_inline void mpv_motion_internal(MpegEncContext *s,
s->mv[dir][1][0], s->mv[dir][1][1], 8, mb_y);
}
} else {
+ av_assert2(s->out_format == FMT_MPEG1);
if (s->picture_structure != s->field_select[dir][0] + 1 &&
s->pict_type != AV_PICTURE_TYPE_B && !s->first_field) {
ref_picture = s->current_picture_ptr->f->data;
--
2.40.1
* [FFmpeg-devel] [PATCH v2 14/71] avcodec/mpegvideo_motion: Optimize check away
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (11 preceding siblings ...)
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 13/71] avcodec/mpegvideo_motion: Optimize check away Andreas Rheinhardt
@ 2024-05-11 20:50 ` Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 15/71] avcodec/mpegvideo_motion: Avoid constant function argument Andreas Rheinhardt
` (57 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:50 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
When !CONFIG_SMALL, we create separate functions for FMT_MPEG1
(i.e. for MPEG-1/2); given that there are only three possibilities
for out_format (FMT_MPEG1, FMT_H263 and FMT_H261 -- MJPEG and SpeedHQ
are both intra-only and do not have motion vectors at all, ergo
they don't call this function), one can optimize MPEG-1/2-only code
away in mpeg_motion_internal().
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpegvideo_motion.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/libavcodec/mpegvideo_motion.c b/libavcodec/mpegvideo_motion.c
index 5b72196395..ccda20c0f1 100644
--- a/libavcodec/mpegvideo_motion.c
+++ b/libavcodec/mpegvideo_motion.c
@@ -114,13 +114,16 @@ void mpeg_motion_internal(MpegEncContext *s,
uvsrc_y = src_y >> 1;
}
// Even chroma mv's are full pel in H261
- } else if (!is_mpeg12 && s->out_format == FMT_H261) {
+ } else if (!CONFIG_SMALL && !is_mpeg12 ||
+ CONFIG_SMALL && s->out_format == FMT_H261) {
+ av_assert2(s->out_format == FMT_H261);
mx = motion_x / 4;
my = motion_y / 4;
uvdxy = 0;
uvsrc_x = s->mb_x * 8 + mx;
uvsrc_y = mb_y * 8 + my;
} else {
+ av_assert2(s->out_format == FMT_MPEG1);
if (s->chroma_y_shift) {
mx = motion_x / 2;
my = motion_y / 2;
@@ -820,6 +823,9 @@ void ff_mpv_motion(MpegEncContext *s,
op_pixels_func (*pix_op)[4],
qpel_mc_func (*qpix_op)[16])
{
+ av_assert2(s->out_format == FMT_MPEG1 ||
+ s->out_format == FMT_H263 ||
+ s->out_format == FMT_H261);
prefetch_motion(s, ref_picture, dir);
#if !CONFIG_SMALL
--
2.40.1
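The optimization in patches 13 and 14 relies on mpegvideo's "template" trick: one always-inline worker takes a constant `is_mpeg12` flag and two thin wrappers instantiate it, so the compiler constant-folds the flag and deletes the dead branches in each instantiation. A compilable sketch of the idea, with illustrative names rather than the real API:

```c
#include <assert.h>

#define CONFIG_SMALL 0

/* Illustrative stand-ins for the out_format values. */
enum { FMT_MPEG1, FMT_H261, FMT_H263 };

/* One worker, compiled twice with a compile-time-constant flag.
 * In the is_mpeg12 == 1 instantiation the H.261/H.263 checks are
 * dead code; in the is_mpeg12 == 0 instantiation the MPEG-1/2
 * path is dead code. Either way the check costs nothing. */
static inline int motion_worker(int out_format, const int is_mpeg12)
{
    if (!CONFIG_SMALL && is_mpeg12)
        return 100;                 /* MPEG-1/2-only path */
    if (out_format == FMT_H261)
        return 261;                 /* H.261 chroma handling */
    return 263;                     /* generic H.263 path */
}

static int motion_mpeg12(int out_format)
{
    return motion_worker(out_format, 1);
}

static int motion_other(int out_format)
{
    return motion_worker(out_format, 0);
}
```

With CONFIG_SMALL set, only one instantiation exists and the runtime `out_format` checks stay, trading speed for code size; that is exactly the trade-off the `!CONFIG_SMALL || ...` conditions in the patch encode.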
* [FFmpeg-devel] [PATCH v2 15/71] avcodec/mpegvideo_motion: Avoid constant function argument
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (12 preceding siblings ...)
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 14/71] " Andreas Rheinhardt
@ 2024-05-11 20:50 ` Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 16/71] avcodec/msmpeg4enc: Only calculate coded_cbp when used Andreas Rheinhardt
` (56 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:50 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
Always 8.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpegvideo_motion.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/libavcodec/mpegvideo_motion.c b/libavcodec/mpegvideo_motion.c
index ccda20c0f1..56bdce59c0 100644
--- a/libavcodec/mpegvideo_motion.c
+++ b/libavcodec/mpegvideo_motion.c
@@ -239,18 +239,18 @@ static void mpeg_motion_field(MpegEncContext *s, uint8_t *dest_y,
int bottom_field, int field_select,
uint8_t *const *ref_picture,
op_pixels_func (*pix_op)[4],
- int motion_x, int motion_y, int h, int mb_y)
+ int motion_x, int motion_y, int mb_y)
{
#if !CONFIG_SMALL
if (s->out_format == FMT_MPEG1)
mpeg_motion_internal(s, dest_y, dest_cb, dest_cr, 1,
bottom_field, field_select, ref_picture, pix_op,
- motion_x, motion_y, h, 1, 0, mb_y);
+ motion_x, motion_y, 8, 1, 0, mb_y);
else
#endif
mpeg_motion_internal(s, dest_y, dest_cb, dest_cr, 1,
bottom_field, field_select, ref_picture, pix_op,
- motion_x, motion_y, h, 0, 0, mb_y);
+ motion_x, motion_y, 8, 0, 0, mb_y);
}
// FIXME: SIMDify, avg variant, 16x16 version
@@ -738,12 +738,12 @@ static av_always_inline void mpv_motion_internal(MpegEncContext *s,
mpeg_motion_field(s, dest_y, dest_cb, dest_cr,
0, s->field_select[dir][0],
ref_picture, pix_op,
- s->mv[dir][0][0], s->mv[dir][0][1], 8, mb_y);
+ s->mv[dir][0][0], s->mv[dir][0][1], mb_y);
/* bottom field */
mpeg_motion_field(s, dest_y, dest_cb, dest_cr,
1, s->field_select[dir][1],
ref_picture, pix_op,
- s->mv[dir][1][0], s->mv[dir][1][1], 8, mb_y);
+ s->mv[dir][1][0], s->mv[dir][1][1], mb_y);
}
} else {
av_assert2(s->out_format == FMT_MPEG1);
@@ -790,7 +790,7 @@ static av_always_inline void mpv_motion_internal(MpegEncContext *s,
mpeg_motion_field(s, dest_y, dest_cb, dest_cr,
j, j ^ i, ref_picture, pix_op,
s->mv[dir][2 * i + j][0],
- s->mv[dir][2 * i + j][1], 8, mb_y);
+ s->mv[dir][2 * i + j][1], mb_y);
pix_op = s->hdsp.avg_pixels_tab;
}
} else {
--
2.40.1
* [FFmpeg-devel] [PATCH v2 16/71] avcodec/msmpeg4enc: Only calculate coded_cbp when used
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (13 preceding siblings ...)
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 15/71] avcodec/mpegvideo_motion: Avoid constant function argument Andreas Rheinhardt
@ 2024-05-11 20:50 ` Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 17/71] avcodec/mpegvideo: Only allocate coded_block when needed Andreas Rheinhardt
` (55 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:50 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
With this patch, msmpeg4v1 and msmpeg4v2 no longer use
MpegEncContext.coded_block.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/msmpeg4enc.c | 27 ++++++++++++++-------------
1 file changed, 14 insertions(+), 13 deletions(-)
diff --git a/libavcodec/msmpeg4enc.c b/libavcodec/msmpeg4enc.c
index c159256068..5e6bc231d4 100644
--- a/libavcodec/msmpeg4enc.c
+++ b/libavcodec/msmpeg4enc.c
@@ -389,7 +389,6 @@ void ff_msmpeg4_encode_mb(MpegEncContext * s,
{
int cbp, coded_cbp, i;
int pred_x, pred_y;
- uint8_t *coded_block;
ff_msmpeg4_handle_slices(s);
@@ -449,20 +448,10 @@ void ff_msmpeg4_encode_mb(MpegEncContext * s,
} else {
/* compute cbp */
cbp = 0;
- coded_cbp = 0;
- for (i = 0; i < 6; i++) {
- int val, pred;
- val = (s->block_last_index[i] >= 1);
+ for (int i = 0; i < 6; i++) {
+ int val = (s->block_last_index[i] >= 1);
cbp |= val << (5 - i);
- if (i < 4) {
- /* predict value for close blocks only for luma */
- pred = ff_msmpeg4_coded_block_pred(s, i, &coded_block);
- *coded_block = val;
- val = val ^ pred;
- }
- coded_cbp |= val << (5 - i);
}
-
if(s->msmpeg4_version<=2){
if (s->pict_type == AV_PICTURE_TYPE_I) {
put_bits(&s->pb,
@@ -480,6 +469,18 @@ void ff_msmpeg4_encode_mb(MpegEncContext * s,
ff_h263_cbpy_tab[cbp>>2][0]);
}else{
if (s->pict_type == AV_PICTURE_TYPE_I) {
+ /* compute coded_cbp; the 0x3 corresponds to chroma cbp;
+ * luma coded_cbp are set in the loop below */
+ coded_cbp = cbp & 0x3;
+ for (int i = 0; i < 4; i++) {
+ uint8_t *coded_block;
+ int pred = ff_msmpeg4_coded_block_pred(s, i, &coded_block);
+ int val = (s->block_last_index[i] >= 1);
+ *coded_block = val;
+ val ^= pred;
+ coded_cbp |= val << (5 - i);
+ }
+
put_bits(&s->pb,
ff_msmp4_mb_i_table[coded_cbp][1], ff_msmp4_mb_i_table[coded_cbp][0]);
} else {
--
2.40.1
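The split the patch performs can be shown in isolation: the plain cbp (one bit per block, four luma then two chroma) is always needed, while the XOR-predicted coded_cbp is only needed for I-frames of msmpeg4v3 and later. A standalone sketch, where the prediction values are supplied by the caller instead of ff_msmpeg4_coded_block_pred():

```c
#include <assert.h>

/* cbp layout used here (as in msmpeg4enc.c): bit (5 - i) is set
 * if block i has coded AC coefficients, i.e. block_last_index >= 1.
 * Blocks 0..3 are luma, 4..5 are chroma. */
static int compute_cbp(const int block_last_index[6])
{
    int cbp = 0;
    for (int i = 0; i < 6; i++)
        cbp |= (block_last_index[i] >= 1) << (5 - i);
    return cbp;
}

/* coded_cbp (I-frames only): chroma bits are taken as-is,
 * luma bits are XORed with the per-block prediction. pred[]
 * stands in for the values ff_msmpeg4_coded_block_pred()
 * would return for blocks 0..3. */
static int compute_coded_cbp(int cbp, const int pred[4])
{
    int coded_cbp = cbp & 0x3;          /* chroma bits unpredicted */
    for (int i = 0; i < 4; i++) {
        int val = (cbp >> (5 - i)) & 1; /* luma coded bit */
        coded_cbp |= (val ^ pred[i]) << (5 - i);
    }
    return coded_cbp;
}
```

Before the patch, the prediction state (`*coded_block = val`) was updated for every macroblock even when coded_cbp was then discarded; moving the loop under the I-frame branch avoids that work without changing the bitstream.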
* [FFmpeg-devel] [PATCH v2 17/71] avcodec/mpegvideo: Only allocate coded_block when needed
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (14 preceding siblings ...)
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 16/71] avcodec/msmpeg4enc: Only calculate coded_cbp when used Andreas Rheinhardt
@ 2024-05-11 20:50 ` Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 18/71] avcodec/mpegvideo: Don't reset coded_block unnecessarily Andreas Rheinhardt
` (54 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:50 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
It is only needed for msmpeg4v3, wmv1, wmv2 and VC-1.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpegvideo.c | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)
diff --git a/libavcodec/mpegvideo.c b/libavcodec/mpegvideo.c
index 130ccb4c97..74be22346d 100644
--- a/libavcodec/mpegvideo.c
+++ b/libavcodec/mpegvideo.c
@@ -596,11 +596,16 @@ int ff_mpv_init_context_frame(MpegEncContext *s)
}
if (s->out_format == FMT_H263) {
- /* cbp values, cbp, ac_pred, pred_dir */
- if (!(s->coded_block_base = av_mallocz(y_size + (s->mb_height&1)*2*s->b8_stride)) ||
- !(s->cbp_table = av_mallocz(mb_array_size)) ||
+ /* cbp, ac_pred, pred_dir */
+ if (!(s->cbp_table = av_mallocz(mb_array_size)) ||
!(s->pred_dir_table = av_mallocz(mb_array_size)))
return AVERROR(ENOMEM);
+ }
+
+ if (s->msmpeg4_version >= 3) {
+ s->coded_block_base = av_mallocz(y_size + (s->mb_height&1)*2*s->b8_stride);
+ if (!s->coded_block_base)
+ return AVERROR(ENOMEM);
s->coded_block = s->coded_block_base + s->b8_stride + 1;
}
--
2.40.1
* [FFmpeg-devel] [PATCH v2 18/71] avcodec/mpegvideo: Don't reset coded_block unnecessarily
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (15 preceding siblings ...)
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 17/71] avcodec/mpegvideo: Only allocate coded_block when needed Andreas Rheinhardt
@ 2024-05-11 20:50 ` Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 19/71] avcodec/mpegvideo: Only allocate cbp_table, pred_dir_table when needed Andreas Rheinhardt
` (53 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:50 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
coded_block is only used for I-frames, so it is unnecessary
to reset it in ff_clean_intra_table_entries() (which
cleans certain tables for a non-intra MB).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpegvideo.c | 8 +-------
1 file changed, 1 insertion(+), 7 deletions(-)
diff --git a/libavcodec/mpegvideo.c b/libavcodec/mpegvideo.c
index 74be22346d..ca6e637920 100644
--- a/libavcodec/mpegvideo.c
+++ b/libavcodec/mpegvideo.c
@@ -808,7 +808,7 @@ void ff_mpv_common_end(MpegEncContext *s)
/**
- * Clean dc, ac, coded_block for the current non-intra MB.
+ * Clean dc, ac for the current non-intra MB.
*/
void ff_clean_intra_table_entries(MpegEncContext *s)
{
@@ -822,12 +822,6 @@ void ff_clean_intra_table_entries(MpegEncContext *s)
/* ac pred */
memset(s->ac_val[0][xy ], 0, 32 * sizeof(int16_t));
memset(s->ac_val[0][xy + wrap], 0, 32 * sizeof(int16_t));
- if (s->msmpeg4_version>=3) {
- s->coded_block[xy ] =
- s->coded_block[xy + 1 ] =
- s->coded_block[xy + wrap] =
- s->coded_block[xy + 1 + wrap] = 0;
- }
/* chroma */
wrap = s->mb_stride;
xy = s->mb_x + s->mb_y * wrap;
--
2.40.1
* [FFmpeg-devel] [PATCH v2 19/71] avcodec/mpegvideo: Only allocate cbp_table, pred_dir_table when needed
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (16 preceding siblings ...)
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 18/71] avcodec/mpegvideo: Don't reset coded_block unnecessarily Andreas Rheinhardt
@ 2024-05-11 20:50 ` Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 20/71] avcodec/mpegpicture: Always reset motion val buffer Andreas Rheinhardt
` (52 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:50 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
Namely for the MPEG-4 decoder.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpegvideo.c | 13 ++++++-------
1 file changed, 6 insertions(+), 7 deletions(-)
diff --git a/libavcodec/mpegvideo.c b/libavcodec/mpegvideo.c
index ca6e637920..2ef69a5224 100644
--- a/libavcodec/mpegvideo.c
+++ b/libavcodec/mpegvideo.c
@@ -593,13 +593,12 @@ int ff_mpv_init_context_frame(MpegEncContext *s)
tmp += mv_table_size;
}
}
- }
-
- if (s->out_format == FMT_H263) {
- /* cbp, ac_pred, pred_dir */
- if (!(s->cbp_table = av_mallocz(mb_array_size)) ||
- !(s->pred_dir_table = av_mallocz(mb_array_size)))
- return AVERROR(ENOMEM);
+ if (s->codec_id == AV_CODEC_ID_MPEG4 && !s->encoding) {
+ /* cbp, pred_dir */
+ if (!(s->cbp_table = av_mallocz(mb_array_size)) ||
+ !(s->pred_dir_table = av_mallocz(mb_array_size)))
+ return AVERROR(ENOMEM);
+ }
}
if (s->msmpeg4_version >= 3) {
--
2.40.1
* [FFmpeg-devel] [PATCH v2 20/71] avcodec/mpegpicture: Always reset motion val buffer
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (17 preceding siblings ...)
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 19/71] avcodec/mpegvideo: Only allocate cbp_table, pred_dir_table when needed Andreas Rheinhardt
@ 2024-05-11 20:50 ` Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 21/71] avcodec/mpegpicture: Always reset mbskip_table Andreas Rheinhardt
` (51 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:50 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
Codecs call ff_find_unused_picture() to get the index of
an unused picture; said picture may have buffers left
from using it previously (these buffers are intentionally
not unreferenced so that it might be possible to reuse them;
this is mpegvideo's version of a bufferpool). They should
not make any assumptions about which picture they get.
Yet somehow this is not true when decoding with OBMC: returning
random empty pictures (instead of always the first one) leads
to nondeterministic results; similarly, explicitly zeroing the
buffer before handing it over to the codec changes the outcome
of the h263-obmc tests, but makes the results independent of
which picture is returned. Therefore this commit zeroes the
buffer.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpegpicture.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/libavcodec/mpegpicture.c b/libavcodec/mpegpicture.c
index 88b4d5dec1..06c82880a8 100644
--- a/libavcodec/mpegpicture.c
+++ b/libavcodec/mpegpicture.c
@@ -245,6 +245,10 @@ int ff_alloc_picture(AVCodecContext *avctx, Picture *pic, MotionEstContext *me,
for (i = 0; i < 2; i++) {
pic->motion_val[i] = (int16_t (*)[2])pic->motion_val_buf[i]->data + 4;
pic->ref_index[i] = pic->ref_index_buf[i]->data;
+ /* FIXME: The output of H.263 with OBMC depends upon
+ * the earlier content of the buffer; therefore we
+ * reset it here. */
+ memset(pic->motion_val_buf[i]->data, 0, pic->motion_val_buf[i]->size);
}
}
--
2.40.1
* [FFmpeg-devel] [PATCH v2 21/71] avcodec/mpegpicture: Always reset mbskip_table
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (18 preceding siblings ...)
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 20/71] avcodec/mpegpicture: Always reset motion val buffer Andreas Rheinhardt
@ 2024-05-11 20:50 ` Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 22/71] avcodec/mpegvideo: Redo aligning mb_height for VC-1 Andreas Rheinhardt
` (50 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:50 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
Codecs call ff_find_unused_picture() to get the index of
an unused picture; said picture may have buffers left
from using it previously (these buffers are intentionally
not unreferenced so that it might be possible to reuse them;
they are only reused when they are writable, otherwise
they are replaced by new, zeroed buffers). They should
not make any assumptions about which picture they get.
Yet this is not true for mbskip_table and damaged bitstreams.
When one returns old unused slots randomly, the output
becomes nondeterministic. This can't happen now (see below),
but it will be possible once mpegpicture uses proper pools
for the picture tables.
The following discussion uses the sample created via
ffmpeg -bitexact -i fate-suite/svq3/Vertical400kbit.sorenson3.mov -ps 50 -bf 2 -bitexact -an -qscale 5 -ss 40 -error_rate 4 -threads 1 out.avi
When decoding this with one thread, the slots are as follows:
Cur 0 (type I), last -1, Next -1; cur refcount -1, not reusing buffers
Cur 1 (type P), last -1, Next 0; cur refcount -1, not reusing buffers
Cur 2 (type B), last 0, Next 1; cur refcount -1, not reusing buffers
Cur 2 (type B), last 0, Next 1; cur refcount 2, not reusing buffers
Cur 0 (type P), last 0, Next 1; cur refcount 2, not reusing buffers
Cur 2 (type B), last 1, Next 0; cur refcount 1, reusing buffers
Cur 2 (type B), last 1, Next 0; cur refcount 2, not reusing buffers
Cur 1 (type P), last 1, Next 0; cur refcount 2, not reusing buffers
Cur 2 (type B), last 0, Next 1; cur refcount 1, reusing buffers
Cur 2 (type B), last 0, Next 1; cur refcount 2, not reusing buffers
Cur 0 (type I), last 0, Next 1; cur refcount 2, not reusing buffers
Cur 2 (type B), last 1, Next 0; cur refcount 1, reusing buffers
Cur 2 (type B), last 1, Next 0; cur refcount 2, not reusing buffers
Cur 1 (type P), last 1, Next 0; cur refcount 2, not reusing buffers
After the slots have been filled initially, the buffers are only
reused for the first B-frame in a B-frame chain:
a) When the new picture is an I or a P frame, the slot of the backward
reference is cleared and reused for the new frame (as has been said,
"cleared" does not mean that the auxiliary buffers have been
unreferenced). Given that not only the slot in the picture array,
but also MpegEncContext.last_picture contain references to these
auxiliary buffers, they are not writable and are therefore not reused,
but replaced by new, zero-allocated buffers.
b) When the new picture is the first B-frame in a B-frame chain,
the two reference slots are kept as-is and one gets a slot that
does not share its auxiliary buffers with any of MpegEncContext.
current_picture, last_picture, next_picture. The buffers are
therefore writable and are reused.
c) When the new picture is a B-frame that is not the first frame
in a B-frame chain, ff_mpv_frame_start() reuses the slot occupied
by the preceding B-frame. Said slot shares its auxiliary buffers
with MpegEncContext.current_picture, so that they are not considered
writable and are therefore not reused.
When using frame-threading, the slots are made to match those
of the previous thread, so that the above analysis is mostly the same
with one exception: Other threads may also have references to these
buffers, so that initial B-frames of a B-frame chain need no longer
have writable/reusable buffers. In particular, all I and P-frames
always use new, zeroed buffers. Because only the mbskip_tables of
I- and P-frames are ever used, it follows that there is currently
no problem with using stale values for them at all.
Yet as the analysis shows this is very fragile:
1. MpegEncContext.(current|last|next)_picture need not have
references of their own, but they have them and this influences
the writability decision.
2. It would not work if the slots were returned in a truly random
fashion or if a proper pool were used.
Therefore this commit always resets said buffer. This is in preparation
for actually adding such a pool (where the checksums for said sample
would otherwise depend on the number of threads used for
decoding).
Future commits will restrict this to only the codecs for which
it is necessary (namely the MPEG-4 decoder).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpegpicture.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/libavcodec/mpegpicture.c b/libavcodec/mpegpicture.c
index 06c82880a8..a1404c1d09 100644
--- a/libavcodec/mpegpicture.c
+++ b/libavcodec/mpegpicture.c
@@ -238,6 +238,7 @@ int ff_alloc_picture(AVCodecContext *avctx, Picture *pic, MotionEstContext *me,
goto fail;
pic->mbskip_table = pic->mbskip_table_buf->data;
+ memset(pic->mbskip_table, 0, pic->mbskip_table_buf->size);
pic->qscale_table = pic->qscale_table_buf->data + 2 * mb_stride + 1;
pic->mb_type = (uint32_t*)pic->mb_type_buf->data + 2 * mb_stride + 1;
--
2.40.1
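The writability rule that points 1. and 2. above hinge on can be sketched in a few lines. This is a minimal model, not the AVBuffer API: a refcounted buffer counts as writable exactly when its refcount is 1 (which is what av_buffer_is_writable() checks), and the type and helper names below are hypothetical illustrations.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for a refcounted buffer. */
typedef struct RefBuf { int refcount; } RefBuf;

/* A buffer is writable iff we hold the only reference to it. */
static int is_writable(const RefBuf *b) { return b->refcount == 1; }

/* Mirrors the reuse decision described above: keep the buffer in place
 * if writable (its possibly stale contents survive), otherwise drop our
 * reference and substitute a fresh, zeroed buffer.
 * Returns 1 on reuse, 0 on replacement, -1 on allocation failure. */
static int reuse_or_replace(RefBuf **ref)
{
    RefBuf *nb;

    if (is_writable(*ref))
        return 1;                 /* contents (possibly stale) are kept */
    nb = calloc(1, sizeof(*nb));  /* fresh, zeroed replacement */
    if (!nb)
        return -1;
    nb->refcount = 1;
    (*ref)->refcount--;           /* drop our reference to the old buffer */
    *ref = nb;
    return 0;
}
```

This is why MpegEncContext.(current|last|next)_picture holding their own references matters: those extra references push the refcount above 1 and force replacement instead of reuse.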
* [FFmpeg-devel] [PATCH v2 22/71] avcodec/mpegvideo: Redo aligning mb_height for VC-1
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (19 preceding siblings ...)
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 21/71] avcodec/mpegpicture: Always reset mbskip_table Andreas Rheinhardt
@ 2024-05-11 20:50 ` Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 23/71] avcodec/mpegvideo, mpegpicture: Add buffer pool Andreas Rheinhardt
` (49 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:50 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
VC-1 can switch between being progressive and interlaced
on a per-frame basis. In the latter case, the number of macroblocks
is aligned to two (or equivalently, the height to 32); therefore
certain buffers are allocated for the bigger mb_height
(see 950fb8acb42f4dab9b1638721992991c0584dbf5 and
017e234c204f8ffb5f85a073231247881be1ac6f).
This commit changes how this is done: aligning these buffers is
now restricted to VC-1 and is done directly by aligning a local
mb_height variable (but not MpegEncContext.mb_height) instead of
enlarging the allocations in an ad-hoc manner.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpegvideo.c | 31 +++++++++++++++++--------------
1 file changed, 17 insertions(+), 14 deletions(-)
diff --git a/libavcodec/mpegvideo.c b/libavcodec/mpegvideo.c
index 2ef69a5224..ce1edca95d 100644
--- a/libavcodec/mpegvideo.c
+++ b/libavcodec/mpegvideo.c
@@ -364,14 +364,8 @@ av_cold void ff_mpv_idct_init(MpegEncContext *s)
static int init_duplicate_context(MpegEncContext *s)
{
- int y_size = s->b8_stride * (2 * s->mb_height + 1);
- int c_size = s->mb_stride * (s->mb_height + 1);
- int yc_size = y_size + 2 * c_size;
int i;
- if (s->mb_height & 1)
- yc_size += 2*s->b8_stride + 2*s->mb_stride;
-
if (s->encoding) {
s->me.map = av_mallocz(2 * ME_MAP_SIZE * sizeof(*s->me.map));
if (!s->me.map)
@@ -397,6 +391,11 @@ static int init_duplicate_context(MpegEncContext *s)
}
if (s->out_format == FMT_H263) {
+ int mb_height = s->msmpeg4_version == 6 /* VC-1 like */ ?
+ FFALIGN(s->mb_height, 2) : s->mb_height;
+ int y_size = s->b8_stride * (2 * mb_height + 1);
+ int c_size = s->mb_stride * (mb_height + 1);
+ int yc_size = y_size + 2 * c_size;
/* ac values */
if (!FF_ALLOCZ_TYPED_ARRAY(s->ac_val_base, yc_size))
return AVERROR(ENOMEM);
@@ -538,17 +537,24 @@ void ff_mpv_common_defaults(MpegEncContext *s)
int ff_mpv_init_context_frame(MpegEncContext *s)
{
int y_size, c_size, yc_size, i, mb_array_size, mv_table_size, x, y;
+ int mb_height;
if (s->codec_id == AV_CODEC_ID_MPEG2VIDEO && !s->progressive_sequence)
s->mb_height = (s->height + 31) / 32 * 2;
else
s->mb_height = (s->height + 15) / 16;
+ /* VC-1 can change from being progressive to interlaced on a per-frame
+ * basis. We therefore allocate certain buffers so big that they work
+ * in both instances. */
+ mb_height = s->msmpeg4_version == 6 /* VC-1 like*/ ?
+ FFALIGN(s->mb_height, 2) : s->mb_height;
+
s->mb_width = (s->width + 15) / 16;
s->mb_stride = s->mb_width + 1;
s->b8_stride = s->mb_width * 2 + 1;
- mb_array_size = s->mb_height * s->mb_stride;
- mv_table_size = (s->mb_height + 2) * s->mb_stride + 1;
+ mb_array_size = mb_height * s->mb_stride;
+ mv_table_size = (mb_height + 2) * s->mb_stride + 1;
/* set default edge pos, will be overridden
* in decode_header if needed */
@@ -564,13 +570,10 @@ int ff_mpv_init_context_frame(MpegEncContext *s)
s->block_wrap[4] =
s->block_wrap[5] = s->mb_stride;
- y_size = s->b8_stride * (2 * s->mb_height + 1);
- c_size = s->mb_stride * (s->mb_height + 1);
+ y_size = s->b8_stride * (2 * mb_height + 1);
+ c_size = s->mb_stride * (mb_height + 1);
yc_size = y_size + 2 * c_size;
- if (s->mb_height & 1)
- yc_size += 2*s->b8_stride + 2*s->mb_stride;
-
if (!FF_ALLOCZ_TYPED_ARRAY(s->mb_index2xy, s->mb_num + 1))
return AVERROR(ENOMEM);
for (y = 0; y < s->mb_height; y++)
@@ -602,7 +605,7 @@ int ff_mpv_init_context_frame(MpegEncContext *s)
}
if (s->msmpeg4_version >= 3) {
- s->coded_block_base = av_mallocz(y_size + (s->mb_height&1)*2*s->b8_stride);
+ s->coded_block_base = av_mallocz(y_size);
if (!s->coded_block_base)
return AVERROR(ENOMEM);
s->coded_block = s->coded_block_base + s->b8_stride + 1;
--
2.40.1
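The alignment used in the patch above reduces to a one-line rounding. FFALIGN below matches the power-of-two rounding macro from libavutil/macros.h; the wrapper function name is a hypothetical illustration of the decision applied to mb_height:

```c
#include <assert.h>

/* FFALIGN as in libavutil/macros.h: round x up to the next multiple
 * of a, where a must be a power of two. */
#define FFALIGN(x, a) (((x) + (a) - 1) & ~((a) - 1))

/* For VC-1-like streams (msmpeg4_version == 6 in mpegvideo), buffers
 * are sized for an even number of macroblock rows so that both the
 * progressive and the interlaced layout of a frame fit. */
static int table_mb_height(int mb_height, int vc1_like)
{
    return vc1_like ? FFALIGN(mb_height, 2) : mb_height;
}
```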
* [FFmpeg-devel] [PATCH v2 23/71] avcodec/mpegvideo, mpegpicture: Add buffer pool
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (20 preceding siblings ...)
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 22/71] avcodec/mpegvideo: Redo aligning mb_height for VC-1 Andreas Rheinhardt
@ 2024-05-11 20:50 ` Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 24/71] avcodec/mpegpicture: Reindent after the previous commit Andreas Rheinhardt
` (48 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:50 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
This avoids constant allocations and frees and will also allow
a simple switch to the RefStruct API, thereby avoiding the
overhead of the AVBuffer API.
It also simplifies the code because it removes the "needs_realloc"
field, which was added in 435c0b87d28b48dc2e0360adc404a0e2d66d16a0,
before the introduction of the AVBuffer API: given that these buffers
may be used by different threads, they were not freed immediately,
but were instead marked to be freed later by setting needs_realloc.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpegpicture.c | 155 ++++++++-----------------------------
libavcodec/mpegpicture.h | 27 ++++---
libavcodec/mpegvideo.c | 37 +++++++++
libavcodec/mpegvideo.h | 2 +
libavcodec/mpegvideo_dec.c | 35 ++++-----
libavcodec/mpegvideo_enc.c | 13 ++--
6 files changed, 112 insertions(+), 157 deletions(-)
diff --git a/libavcodec/mpegpicture.c b/libavcodec/mpegpicture.c
index a1404c1d09..2d3cc247c4 100644
--- a/libavcodec/mpegpicture.c
+++ b/libavcodec/mpegpicture.c
@@ -29,15 +29,11 @@
#include "avcodec.h"
#include "motion_est.h"
#include "mpegpicture.h"
-#include "mpegvideo.h"
#include "refstruct.h"
#include "threadframe.h"
static void av_noinline free_picture_tables(Picture *pic)
{
- pic->alloc_mb_width =
- pic->alloc_mb_height = 0;
-
av_buffer_unref(&pic->mbskip_table_buf);
av_buffer_unref(&pic->qscale_table_buf);
av_buffer_unref(&pic->mb_type_buf);
@@ -46,43 +42,9 @@ static void av_noinline free_picture_tables(Picture *pic)
av_buffer_unref(&pic->motion_val_buf[i]);
av_buffer_unref(&pic->ref_index_buf[i]);
}
-}
-
-static int make_table_writable(AVBufferRef **ref)
-{
- AVBufferRef *old = *ref, *new;
-
- if (av_buffer_is_writable(old))
- return 0;
- new = av_buffer_allocz(old->size);
- if (!new)
- return AVERROR(ENOMEM);
- av_buffer_unref(ref);
- *ref = new;
- return 0;
-}
-
-static int make_tables_writable(Picture *pic)
-{
-#define MAKE_WRITABLE(table) \
-do {\
- int ret = make_table_writable(&pic->table); \
- if (ret < 0) \
- return ret; \
-} while (0)
-
- MAKE_WRITABLE(mbskip_table_buf);
- MAKE_WRITABLE(qscale_table_buf);
- MAKE_WRITABLE(mb_type_buf);
-
- if (pic->motion_val_buf[0]) {
- for (int i = 0; i < 2; i++) {
- MAKE_WRITABLE(motion_val_buf[i]);
- MAKE_WRITABLE(ref_index_buf[i]);
- }
- }
- return 0;
+ pic->mb_width =
+ pic->mb_height = 0;
}
int ff_mpeg_framesize_alloc(AVCodecContext *avctx, MotionEstContext *me,
@@ -170,38 +132,28 @@ static int handle_pic_linesizes(AVCodecContext *avctx, Picture *pic,
return 0;
}
-static int alloc_picture_tables(AVCodecContext *avctx, Picture *pic, int encoding, int out_format,
- int mb_stride, int mb_width, int mb_height, int b8_stride)
+static int alloc_picture_tables(BufferPoolContext *pools, Picture *pic,
+ int mb_height)
{
- const int big_mb_num = mb_stride * (mb_height + 1) + 1;
- const int mb_array_size = mb_stride * mb_height;
- const int b8_array_size = b8_stride * mb_height * 2;
- int i;
-
-
- pic->mbskip_table_buf = av_buffer_allocz(mb_array_size + 2);
- pic->qscale_table_buf = av_buffer_allocz(big_mb_num + mb_stride);
- pic->mb_type_buf = av_buffer_allocz((big_mb_num + mb_stride) *
- sizeof(uint32_t));
- if (!pic->mbskip_table_buf || !pic->qscale_table_buf || !pic->mb_type_buf)
- return AVERROR(ENOMEM);
-
- if (out_format == FMT_H263 || encoding ||
- (avctx->export_side_data & AV_CODEC_EXPORT_DATA_MVS)) {
- int mv_size = 2 * (b8_array_size + 4) * sizeof(int16_t);
- int ref_index_size = 4 * mb_array_size;
-
- for (i = 0; mv_size && i < 2; i++) {
- pic->motion_val_buf[i] = av_buffer_allocz(mv_size);
- pic->ref_index_buf[i] = av_buffer_allocz(ref_index_size);
- if (!pic->motion_val_buf[i] || !pic->ref_index_buf[i])
- return AVERROR(ENOMEM);
+#define GET_BUFFER(name, idx_suffix) do { \
+ pic->name ## _buf idx_suffix = av_buffer_pool_get(pools->name ## _pool); \
+ if (!pic->name ## _buf idx_suffix) \
+ return AVERROR(ENOMEM); \
+} while (0)
+ GET_BUFFER(mbskip_table,);
+ GET_BUFFER(qscale_table,);
+ GET_BUFFER(mb_type,);
+ if (pools->motion_val_pool) {
+ for (int i = 0; i < 2; i++) {
+ GET_BUFFER(motion_val, [i]);
+ GET_BUFFER(ref_index, [i]);
}
}
+#undef GET_BUFFER
- pic->alloc_mb_width = mb_width;
- pic->alloc_mb_height = mb_height;
- pic->alloc_mb_stride = mb_stride;
+ pic->mb_width = pools->alloc_mb_width;
+ pic->mb_height = mb_height;
+ pic->mb_stride = pools->alloc_mb_stride;
return 0;
}
@@ -211,17 +163,11 @@ static int alloc_picture_tables(AVCodecContext *avctx, Picture *pic, int encodin
* The pixels are allocated/set by calling get_buffer() if shared = 0
*/
int ff_alloc_picture(AVCodecContext *avctx, Picture *pic, MotionEstContext *me,
- ScratchpadContext *sc, int encoding, int out_format,
- int mb_stride, int mb_width, int mb_height, int b8_stride,
- ptrdiff_t *linesize, ptrdiff_t *uvlinesize)
+ ScratchpadContext *sc, BufferPoolContext *pools,
+ int mb_height, ptrdiff_t *linesize, ptrdiff_t *uvlinesize)
{
int i, ret;
- if (pic->qscale_table_buf)
- if ( pic->alloc_mb_width != mb_width
- || pic->alloc_mb_height != mb_height)
- free_picture_tables(pic);
-
if (handle_pic_linesizes(avctx, pic, me, sc,
*linesize, *uvlinesize) < 0)
return -1;
@@ -229,18 +175,14 @@ int ff_alloc_picture(AVCodecContext *avctx, Picture *pic, MotionEstContext *me,
*linesize = pic->f->linesize[0];
*uvlinesize = pic->f->linesize[1];
- if (!pic->qscale_table_buf)
- ret = alloc_picture_tables(avctx, pic, encoding, out_format,
- mb_stride, mb_width, mb_height, b8_stride);
- else
- ret = make_tables_writable(pic);
+ ret = alloc_picture_tables(pools, pic, mb_height);
if (ret < 0)
goto fail;
pic->mbskip_table = pic->mbskip_table_buf->data;
memset(pic->mbskip_table, 0, pic->mbskip_table_buf->size);
- pic->qscale_table = pic->qscale_table_buf->data + 2 * mb_stride + 1;
- pic->mb_type = (uint32_t*)pic->mb_type_buf->data + 2 * mb_stride + 1;
+ pic->qscale_table = pic->qscale_table_buf->data + 2 * pic->mb_stride + 1;
+ pic->mb_type = (uint32_t*)pic->mb_type_buf->data + 2 * pic->mb_stride + 1;
if (pic->motion_val_buf[0]) {
for (i = 0; i < 2; i++) {
@@ -257,7 +199,6 @@ int ff_alloc_picture(AVCodecContext *avctx, Picture *pic, MotionEstContext *me,
fail:
av_log(avctx, AV_LOG_ERROR, "Error allocating a picture.\n");
ff_mpeg_unref_picture(pic);
- free_picture_tables(pic);
return AVERROR(ENOMEM);
}
@@ -272,20 +213,18 @@ void ff_mpeg_unref_picture(Picture *pic)
ff_refstruct_unref(&pic->hwaccel_picture_private);
- if (pic->needs_realloc)
- free_picture_tables(pic);
+ free_picture_tables(pic);
pic->dummy = 0;
pic->field_picture = 0;
pic->b_frame_score = 0;
- pic->needs_realloc = 0;
pic->reference = 0;
pic->shared = 0;
pic->display_picture_number = 0;
pic->coded_picture_number = 0;
}
-int ff_update_picture_tables(Picture *dst, const Picture *src)
+static int update_picture_tables(Picture *dst, const Picture *src)
{
int i, ret;
@@ -310,9 +249,9 @@ int ff_update_picture_tables(Picture *dst, const Picture *src)
dst->ref_index[i] = src->ref_index[i];
}
- dst->alloc_mb_width = src->alloc_mb_width;
- dst->alloc_mb_height = src->alloc_mb_height;
- dst->alloc_mb_stride = src->alloc_mb_stride;
+ dst->mb_width = src->mb_width;
+ dst->mb_height = src->mb_height;
+ dst->mb_stride = src->mb_stride;
return 0;
}
@@ -330,7 +269,7 @@ int ff_mpeg_ref_picture(Picture *dst, Picture *src)
if (ret < 0)
goto fail;
- ret = ff_update_picture_tables(dst, src);
+ ret = update_picture_tables(dst, src);
if (ret < 0)
goto fail;
@@ -340,7 +279,6 @@ int ff_mpeg_ref_picture(Picture *dst, Picture *src)
dst->dummy = src->dummy;
dst->field_picture = src->field_picture;
dst->b_frame_score = src->b_frame_score;
- dst->needs_realloc = src->needs_realloc;
dst->reference = src->reference;
dst->shared = src->shared;
dst->display_picture_number = src->display_picture_number;
@@ -352,30 +290,14 @@ fail:
return ret;
}
-static inline int pic_is_unused(Picture *pic)
-{
- if (!pic->f->buf[0])
- return 1;
- if (pic->needs_realloc)
- return 1;
- return 0;
-}
-
-static int find_unused_picture(AVCodecContext *avctx, Picture *picture, int shared)
+int ff_find_unused_picture(AVCodecContext *avctx, Picture *picture, int shared)
{
int i;
- if (shared) {
for (i = 0; i < MAX_PICTURE_COUNT; i++) {
if (!picture[i].f->buf[0])
return i;
}
- } else {
- for (i = 0; i < MAX_PICTURE_COUNT; i++) {
- if (pic_is_unused(&picture[i]))
- return i;
- }
- }
av_log(avctx, AV_LOG_FATAL,
"Internal error, picture buffer overflow\n");
@@ -394,21 +316,8 @@ static int find_unused_picture(AVCodecContext *avctx, Picture *picture, int shar
return -1;
}
-int ff_find_unused_picture(AVCodecContext *avctx, Picture *picture, int shared)
-{
- int ret = find_unused_picture(avctx, picture, shared);
-
- if (ret >= 0 && ret < MAX_PICTURE_COUNT) {
- if (picture[ret].needs_realloc) {
- ff_mpeg_unref_picture(&picture[ret]);
- }
- }
- return ret;
-}
-
void av_cold ff_mpv_picture_free(Picture *pic)
{
- free_picture_tables(pic);
ff_mpeg_unref_picture(pic);
av_frame_free(&pic->f);
}
diff --git a/libavcodec/mpegpicture.h b/libavcodec/mpegpicture.h
index 664c116a47..a0bfd8250f 100644
--- a/libavcodec/mpegpicture.h
+++ b/libavcodec/mpegpicture.h
@@ -23,6 +23,7 @@
#include <stdint.h>
+#include "libavutil/buffer.h"
#include "libavutil/frame.h"
#include "avcodec.h"
@@ -41,6 +42,17 @@ typedef struct ScratchpadContext {
int linesize; ///< linesize that the buffers in this context have been allocated for
} ScratchpadContext;
+typedef struct BufferPoolContext {
+ AVBufferPool *mbskip_table_pool;
+ AVBufferPool *qscale_table_pool;
+ AVBufferPool *mb_type_pool;
+ AVBufferPool *motion_val_pool;
+ AVBufferPool *ref_index_pool;
+ int alloc_mb_width; ///< mb_width used to allocate tables
+ int alloc_mb_height; ///< mb_height used to allocate tables
+ int alloc_mb_stride; ///< mb_stride used to allocate tables
+} BufferPoolContext;
+
/**
* Picture.
*/
@@ -63,18 +75,17 @@ typedef struct Picture {
AVBufferRef *ref_index_buf[2];
int8_t *ref_index[2];
- int alloc_mb_width; ///< mb_width used to allocate tables
- int alloc_mb_height; ///< mb_height used to allocate tables
- int alloc_mb_stride; ///< mb_stride used to allocate tables
-
/// RefStruct reference for hardware accelerator private data
void *hwaccel_picture_private;
+ int mb_width; ///< mb_width of the tables
+ int mb_height; ///< mb_height of the tables
+ int mb_stride; ///< mb_stride of the tables
+
int dummy; ///< Picture is a dummy and should not be output
int field_picture; ///< whether or not the picture was encoded in separate fields
int b_frame_score;
- int needs_realloc; ///< Picture needs to be reallocated (eg due to a frame size change)
int reference;
int shared;
@@ -87,9 +98,8 @@ typedef struct Picture {
* Allocate a Picture's accessories, but not the AVFrame's buffer itself.
*/
int ff_alloc_picture(AVCodecContext *avctx, Picture *pic, MotionEstContext *me,
- ScratchpadContext *sc, int encoding, int out_format,
- int mb_stride, int mb_width, int mb_height, int b8_stride,
- ptrdiff_t *linesize, ptrdiff_t *uvlinesize);
+ ScratchpadContext *sc, BufferPoolContext *pools,
+ int mb_height, ptrdiff_t *linesize, ptrdiff_t *uvlinesize);
int ff_mpeg_framesize_alloc(AVCodecContext *avctx, MotionEstContext *me,
ScratchpadContext *sc, int linesize);
@@ -98,7 +108,6 @@ int ff_mpeg_ref_picture(Picture *dst, Picture *src);
void ff_mpeg_unref_picture(Picture *picture);
void ff_mpv_picture_free(Picture *pic);
-int ff_update_picture_tables(Picture *dst, const Picture *src);
int ff_find_unused_picture(AVCodecContext *avctx, Picture *picture, int shared);
diff --git a/libavcodec/mpegvideo.c b/libavcodec/mpegvideo.c
index ce1edca95d..5728f4cee3 100644
--- a/libavcodec/mpegvideo.c
+++ b/libavcodec/mpegvideo.c
@@ -534,8 +534,19 @@ void ff_mpv_common_defaults(MpegEncContext *s)
s->slice_context_count = 1;
}
+static void free_buffer_pools(BufferPoolContext *pools)
+{
+ av_buffer_pool_uninit(&pools->mbskip_table_pool);
+ av_buffer_pool_uninit(&pools->qscale_table_pool);
+ av_buffer_pool_uninit(&pools->mb_type_pool);
+ av_buffer_pool_uninit(&pools->motion_val_pool);
+ av_buffer_pool_uninit(&pools->ref_index_pool);
+ pools->alloc_mb_height = pools->alloc_mb_width = pools->alloc_mb_stride = 0;
+}
+
int ff_mpv_init_context_frame(MpegEncContext *s)
{
+ BufferPoolContext *const pools = &s->buffer_pools;
int y_size, c_size, yc_size, i, mb_array_size, mv_table_size, x, y;
int mb_height;
@@ -630,11 +641,36 @@ int ff_mpv_init_context_frame(MpegEncContext *s)
return AVERROR(ENOMEM);
memset(s->mbintra_table, 1, mb_array_size);
+#define ALLOC_POOL(name, size) do { \
+ pools->name ##_pool = av_buffer_pool_init((size), av_buffer_allocz); \
+ if (!pools->name ##_pool) \
+ return AVERROR(ENOMEM); \
+} while (0)
+
+ ALLOC_POOL(mbskip_table, mb_array_size + 2);
+ ALLOC_POOL(qscale_table, mv_table_size);
+ ALLOC_POOL(mb_type, mv_table_size * sizeof(uint32_t));
+
+ if (s->out_format == FMT_H263 || s->encoding ||
+ (s->avctx->export_side_data & AV_CODEC_EXPORT_DATA_MVS)) {
+ const int b8_array_size = s->b8_stride * mb_height * 2;
+ int mv_size = 2 * (b8_array_size + 4) * sizeof(int16_t);
+ int ref_index_size = 4 * mb_array_size;
+
+ ALLOC_POOL(motion_val, mv_size);
+ ALLOC_POOL(ref_index, ref_index_size);
+ }
+#undef ALLOC_POOL
+ pools->alloc_mb_width = s->mb_width;
+ pools->alloc_mb_height = mb_height;
+ pools->alloc_mb_stride = s->mb_stride;
+
return !CONFIG_MPEGVIDEODEC || s->encoding ? 0 : ff_mpeg_er_init(s);
}
static void clear_context(MpegEncContext *s)
{
+ memset(&s->buffer_pools, 0, sizeof(s->buffer_pools));
memset(&s->next_picture, 0, sizeof(s->next_picture));
memset(&s->last_picture, 0, sizeof(s->last_picture));
memset(&s->current_picture, 0, sizeof(s->current_picture));
@@ -762,6 +798,7 @@ void ff_mpv_free_context_frame(MpegEncContext *s)
{
free_duplicate_contexts(s);
+ free_buffer_pools(&s->buffer_pools);
av_freep(&s->p_field_mv_table_base);
for (int i = 0; i < 2; i++)
for (int j = 0; j < 2; j++)
diff --git a/libavcodec/mpegvideo.h b/libavcodec/mpegvideo.h
index a8ed1b60b6..f5ae0d1ca0 100644
--- a/libavcodec/mpegvideo.h
+++ b/libavcodec/mpegvideo.h
@@ -132,6 +132,8 @@ typedef struct MpegEncContext {
Picture **input_picture; ///< next pictures on display order for encoding
Picture **reordered_input_picture; ///< pointer to the next pictures in coded order for encoding
+ BufferPoolContext buffer_pools;
+
int64_t user_specified_pts; ///< last non-zero pts from AVFrame which was passed into avcodec_send_frame()
/**
* pts difference between the first and second input frame, used for
diff --git a/libavcodec/mpegvideo_dec.c b/libavcodec/mpegvideo_dec.c
index c1f49bce14..a4c7a0086a 100644
--- a/libavcodec/mpegvideo_dec.c
+++ b/libavcodec/mpegvideo_dec.c
@@ -115,12 +115,11 @@ int ff_mpeg_update_thread_context(AVCodecContext *dst,
#define UPDATE_PICTURE(pic)\
do {\
ff_mpeg_unref_picture(&s->pic);\
- if (s1->pic.f && s1->pic.f->buf[0])\
+ if (s1->pic.f && s1->pic.f->buf[0]) {\
ret = ff_mpeg_ref_picture(&s->pic, &s1->pic);\
- else\
- ret = ff_update_picture_tables(&s->pic, &s1->pic);\
- if (ret < 0)\
- return ret;\
+ if (ret < 0)\
+ return ret;\
+ }\
} while (0)
UPDATE_PICTURE(current_picture);
@@ -194,10 +193,6 @@ int ff_mpv_common_frame_size_change(MpegEncContext *s)
ff_mpv_free_context_frame(s);
- if (s->picture)
- for (int i = 0; i < MAX_PICTURE_COUNT; i++)
- s->picture[i].needs_realloc = 1;
-
s->last_picture_ptr =
s->next_picture_ptr =
s->current_picture_ptr = NULL;
@@ -268,9 +263,12 @@ static int alloc_picture(MpegEncContext *s, Picture **picp, int reference)
if (ret < 0)
goto fail;
- ret = ff_alloc_picture(s->avctx, pic, &s->me, &s->sc, 0, s->out_format,
- s->mb_stride, s->mb_width, s->mb_height, s->b8_stride,
- &s->linesize, &s->uvlinesize);
+ av_assert1(s->mb_width == s->buffer_pools.alloc_mb_width);
+ av_assert1(s->mb_height == s->buffer_pools.alloc_mb_height ||
+ FFALIGN(s->mb_height, 2) == s->buffer_pools.alloc_mb_height);
+ av_assert1(s->mb_stride == s->buffer_pools.alloc_mb_stride);
+ ret = ff_alloc_picture(s->avctx, pic, &s->me, &s->sc, &s->buffer_pools,
+ s->mb_height, &s->linesize, &s->uvlinesize);
if (ret < 0)
goto fail;
*picp = pic;
@@ -388,8 +386,7 @@ int ff_mpv_frame_start(MpegEncContext *s, AVCodecContext *avctx)
for (int i = 0; i < MAX_PICTURE_COUNT; i++) {
if (!s->picture[i].reference ||
(&s->picture[i] != s->last_picture_ptr &&
- &s->picture[i] != s->next_picture_ptr &&
- !s->picture[i].needs_realloc)) {
+ &s->picture[i] != s->next_picture_ptr)) {
ff_mpeg_unref_picture(&s->picture[i]);
}
}
@@ -487,7 +484,7 @@ int ff_mpv_export_qp_table(const MpegEncContext *s, AVFrame *f, const Picture *p
{
AVVideoEncParams *par;
int mult = (qp_type == FF_MPV_QSCALE_TYPE_MPEG1) ? 2 : 1;
- unsigned int nb_mb = p->alloc_mb_height * p->alloc_mb_width;
+ unsigned int nb_mb = p->mb_height * p->mb_width;
if (!(s->avctx->export_side_data & AV_CODEC_EXPORT_DATA_VIDEO_ENC_PARAMS))
return 0;
@@ -496,10 +493,10 @@ int ff_mpv_export_qp_table(const MpegEncContext *s, AVFrame *f, const Picture *p
if (!par)
return AVERROR(ENOMEM);
- for (unsigned y = 0; y < p->alloc_mb_height; y++)
- for (unsigned x = 0; x < p->alloc_mb_width; x++) {
- const unsigned int block_idx = y * p->alloc_mb_width + x;
- const unsigned int mb_xy = y * p->alloc_mb_stride + x;
+ for (unsigned y = 0; y < p->mb_height; y++)
+ for (unsigned x = 0; x < p->mb_width; x++) {
+ const unsigned int block_idx = y * p->mb_width + x;
+ const unsigned int mb_xy = y * p->mb_stride + x;
AVVideoBlockParams *const b = av_video_enc_params_block(par, block_idx);
b->src_x = x * 16;
diff --git a/libavcodec/mpegvideo_enc.c b/libavcodec/mpegvideo_enc.c
index f45a5f1b37..4121cc034f 100644
--- a/libavcodec/mpegvideo_enc.c
+++ b/libavcodec/mpegvideo_enc.c
@@ -1112,9 +1112,11 @@ static int alloc_picture(MpegEncContext *s, Picture *pic)
pic->f->width = avctx->width;
pic->f->height = avctx->height;
- return ff_alloc_picture(s->avctx, pic, &s->me, &s->sc, 1, s->out_format,
- s->mb_stride, s->mb_width, s->mb_height, s->b8_stride,
- &s->linesize, &s->uvlinesize);
+ av_assert1(s->mb_width == s->buffer_pools.alloc_mb_width);
+ av_assert1(s->mb_height == s->buffer_pools.alloc_mb_height);
+ av_assert1(s->mb_stride == s->buffer_pools.alloc_mb_stride);
+ return ff_alloc_picture(s->avctx, pic, &s->me, &s->sc, &s->buffer_pools,
+ s->mb_height, &s->linesize, &s->uvlinesize);
}
static int load_input_picture(MpegEncContext *s, const AVFrame *pic_arg)
@@ -1480,7 +1482,7 @@ static int select_input_picture(MpegEncContext *s)
s->next_picture_ptr &&
skip_check(s, s->input_picture[0], s->next_picture_ptr)) {
// FIXME check that the gop check above is +-1 correct
- av_frame_unref(s->input_picture[0]->f);
+ ff_mpeg_unref_picture(s->input_picture[0]);
ff_vbv_update(s, 0);
@@ -1627,8 +1629,7 @@ no_output_pic:
pic->display_picture_number = s->reordered_input_picture[0]->display_picture_number;
/* mark us unused / free shared pic */
- av_frame_unref(s->reordered_input_picture[0]->f);
- s->reordered_input_picture[0]->shared = 0;
+ ff_mpeg_unref_picture(s->reordered_input_picture[0]);
s->current_picture_ptr = pic;
} else {
--
2.40.1
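The pool semantics the patch above relies on can be modelled with a minimal freelist. This is a sketch under simplifying assumptions (fixed-size buffers, single-threaded use, no refcounting); the real av_buffer_pool_init()/av_buffer_pool_get() API is thread-safe and refcounted, and all names below are hypothetical:

```c
#include <assert.h>
#include <stdlib.h>

/* A returned buffer is threaded onto a freelist through its own
 * first bytes, so every buffer must be at least this large. */
typedef struct PoolEntry { struct PoolEntry *next; } PoolEntry;

typedef struct Pool {
    size_t     size;  /* payload size of every buffer in this pool */
    PoolEntry *free;  /* freelist of buffers returned to the pool  */
} Pool;

/* Hand out a recycled buffer if one is available (its contents are
 * stale, just as with a real pool - hence the explicit memsets in the
 * patches above), otherwise allocate a new zeroed one. */
static void *pool_get(Pool *p)
{
    if (p->free) {
        PoolEntry *e = p->free;
        p->free = e->next;
        return e;
    }
    return calloc(1, p->size < sizeof(PoolEntry) ? sizeof(PoolEntry)
                                                 : p->size);
}

/* Return a buffer to the pool for later reuse instead of freeing it. */
static void pool_put(Pool *p, void *buf)
{
    PoolEntry *e = buf;
    e->next = p->free;
    p->free = e;
}
```

Freeing the pool only when the context frame is torn down (free_buffer_pools() in the patch) is what makes the per-picture needs_realloc bookkeeping unnecessary.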
* [FFmpeg-devel] [PATCH v2 24/71] avcodec/mpegpicture: Reindent after the previous commit
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (21 preceding siblings ...)
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 23/71] avcodec/mpegvideo, mpegpicture: Add buffer pool Andreas Rheinhardt
@ 2024-05-11 20:50 ` Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 25/71] avcodec/mpegpicture: Use RefStruct-pool API Andreas Rheinhardt
` (47 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:50 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpegpicture.c | 9 +++------
1 file changed, 3 insertions(+), 6 deletions(-)
diff --git a/libavcodec/mpegpicture.c b/libavcodec/mpegpicture.c
index 2d3cc247c4..32ca037526 100644
--- a/libavcodec/mpegpicture.c
+++ b/libavcodec/mpegpicture.c
@@ -292,12 +292,9 @@ fail:
int ff_find_unused_picture(AVCodecContext *avctx, Picture *picture, int shared)
{
- int i;
-
- for (i = 0; i < MAX_PICTURE_COUNT; i++) {
- if (!picture[i].f->buf[0])
- return i;
- }
+ for (int i = 0; i < MAX_PICTURE_COUNT; i++)
+ if (!picture[i].f->buf[0])
+ return i;
av_log(avctx, AV_LOG_FATAL,
"Internal error, picture buffer overflow\n");
--
2.40.1
* [FFmpeg-devel] [PATCH v2 25/71] avcodec/mpegpicture: Use RefStruct-pool API
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (22 preceding siblings ...)
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 24/71] avcodec/mpegpicture: Reindent after the previous commit Andreas Rheinhardt
@ 2024-05-11 20:50 ` Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 26/71] avcodec/h263: Move encoder-only part out of ff_h263_update_motion_val() Andreas Rheinhardt
` (46 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:50 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
It involves fewer allocations and therefore fewer potential
errors that need to be checked. One consequence of this
is that updating the picture tables can no longer fail.
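The reason replacing can no longer fail can be illustrated with a toy refcounted object: taking another reference is just an increment, whereas duplicating a buffer (as av_buffer_replace() may do) can hit ENOMEM. The sketch below uses hypothetical names (RefHeader, ref_replace, etc.) and is not the actual FFRefStruct implementation:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical refcounted object: a header with a counter placed in
 * front of the payload, loosely mirroring the idea behind RefStruct. */
typedef struct RefHeader { size_t refcount; } RefHeader;

static void *ref_alloc(size_t payload_size)
{
    RefHeader *h = calloc(1, sizeof(*h) + payload_size);
    if (!h)
        return NULL;
    h->refcount = 1;
    return h + 1; /* the caller only ever sees the payload */
}

static void *ref_ref(void *payload)
{
    RefHeader *h = (RefHeader *)payload - 1;
    h->refcount++; /* cannot fail: no allocation involved */
    return payload;
}

static void ref_unref(void **payloadp)
{
    if (*payloadp) {
        RefHeader *h = (RefHeader *)*payloadp - 1;
        if (--h->refcount == 0)
            free(h);
        *payloadp = NULL;
    }
}

/* Replacing dst with src is infallible here, which is why
 * update_picture_tables() can become void in this patch. */
static void ref_replace(void **dst, void *src)
{
    if (*dst == src)
        return;
    ref_unref(dst);
    *dst = src ? ref_ref(src) : NULL;
}
```

Because ref_replace() never allocates, a function built out of such replaces has no error path to propagate.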
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpegpicture.c | 82 ++++++++++++++--------------------------
libavcodec/mpegpicture.h | 21 ++++------
libavcodec/mpegvideo.c | 28 ++++++++------
3 files changed, 53 insertions(+), 78 deletions(-)
diff --git a/libavcodec/mpegpicture.c b/libavcodec/mpegpicture.c
index 32ca037526..ad6157f0c1 100644
--- a/libavcodec/mpegpicture.c
+++ b/libavcodec/mpegpicture.c
@@ -18,8 +18,6 @@
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
-#include <stdint.h>
-
#include "libavutil/avassert.h"
#include "libavutil/common.h"
#include "libavutil/mem.h"
@@ -34,13 +32,13 @@
static void av_noinline free_picture_tables(Picture *pic)
{
- av_buffer_unref(&pic->mbskip_table_buf);
- av_buffer_unref(&pic->qscale_table_buf);
- av_buffer_unref(&pic->mb_type_buf);
+ ff_refstruct_unref(&pic->mbskip_table);
+ ff_refstruct_unref(&pic->qscale_table_base);
+ ff_refstruct_unref(&pic->mb_type_base);
for (int i = 0; i < 2; i++) {
- av_buffer_unref(&pic->motion_val_buf[i]);
- av_buffer_unref(&pic->ref_index_buf[i]);
+ ff_refstruct_unref(&pic->motion_val_base[i]);
+ ff_refstruct_unref(&pic->ref_index[i]);
}
pic->mb_width =
@@ -135,18 +133,18 @@ static int handle_pic_linesizes(AVCodecContext *avctx, Picture *pic,
static int alloc_picture_tables(BufferPoolContext *pools, Picture *pic,
int mb_height)
{
-#define GET_BUFFER(name, idx_suffix) do { \
- pic->name ## _buf idx_suffix = av_buffer_pool_get(pools->name ## _pool); \
- if (!pic->name ## _buf idx_suffix) \
+#define GET_BUFFER(name, buf_suffix, idx_suffix) do { \
+ pic->name ## buf_suffix idx_suffix = ff_refstruct_pool_get(pools->name ## _pool); \
+ if (!pic->name ## buf_suffix idx_suffix) \
return AVERROR(ENOMEM); \
} while (0)
- GET_BUFFER(mbskip_table,);
- GET_BUFFER(qscale_table,);
- GET_BUFFER(mb_type,);
+ GET_BUFFER(mbskip_table,,);
+ GET_BUFFER(qscale_table, _base,);
+ GET_BUFFER(mb_type, _base,);
if (pools->motion_val_pool) {
for (int i = 0; i < 2; i++) {
- GET_BUFFER(motion_val, [i]);
- GET_BUFFER(ref_index, [i]);
+ GET_BUFFER(ref_index,, [i]);
+ GET_BUFFER(motion_val, _base, [i]);
}
}
#undef GET_BUFFER
@@ -166,7 +164,7 @@ int ff_alloc_picture(AVCodecContext *avctx, Picture *pic, MotionEstContext *me,
ScratchpadContext *sc, BufferPoolContext *pools,
int mb_height, ptrdiff_t *linesize, ptrdiff_t *uvlinesize)
{
- int i, ret;
+ int ret;
if (handle_pic_linesizes(avctx, pic, me, sc,
*linesize, *uvlinesize) < 0)
@@ -179,20 +177,12 @@ int ff_alloc_picture(AVCodecContext *avctx, Picture *pic, MotionEstContext *me,
if (ret < 0)
goto fail;
- pic->mbskip_table = pic->mbskip_table_buf->data;
- memset(pic->mbskip_table, 0, pic->mbskip_table_buf->size);
- pic->qscale_table = pic->qscale_table_buf->data + 2 * pic->mb_stride + 1;
- pic->mb_type = (uint32_t*)pic->mb_type_buf->data + 2 * pic->mb_stride + 1;
-
- if (pic->motion_val_buf[0]) {
- for (i = 0; i < 2; i++) {
- pic->motion_val[i] = (int16_t (*)[2])pic->motion_val_buf[i]->data + 4;
- pic->ref_index[i] = pic->ref_index_buf[i]->data;
- /* FIXME: The output of H.263 with OBMC depends upon
- * the earlier content of the buffer; therefore we
- * reset it here. */
- memset(pic->motion_val_buf[i]->data, 0, pic->motion_val_buf[i]->size);
- }
+ pic->qscale_table = pic->qscale_table_base + 2 * pic->mb_stride + 1;
+ pic->mb_type = pic->mb_type_base + 2 * pic->mb_stride + 1;
+
+ if (pic->motion_val_base[0]) {
+ for (int i = 0; i < 2; i++)
+ pic->motion_val[i] = pic->motion_val_base[i] + 4;
}
return 0;
@@ -224,36 +214,24 @@ void ff_mpeg_unref_picture(Picture *pic)
pic->coded_picture_number = 0;
}
-static int update_picture_tables(Picture *dst, const Picture *src)
+static void update_picture_tables(Picture *dst, const Picture *src)
{
- int i, ret;
-
- ret = av_buffer_replace(&dst->mbskip_table_buf, src->mbskip_table_buf);
- ret |= av_buffer_replace(&dst->qscale_table_buf, src->qscale_table_buf);
- ret |= av_buffer_replace(&dst->mb_type_buf, src->mb_type_buf);
- for (i = 0; i < 2; i++) {
- ret |= av_buffer_replace(&dst->motion_val_buf[i], src->motion_val_buf[i]);
- ret |= av_buffer_replace(&dst->ref_index_buf[i], src->ref_index_buf[i]);
- }
-
- if (ret < 0) {
- free_picture_tables(dst);
- return ret;
+ ff_refstruct_replace(&dst->mbskip_table, src->mbskip_table);
+ ff_refstruct_replace(&dst->qscale_table_base, src->qscale_table_base);
+ ff_refstruct_replace(&dst->mb_type_base, src->mb_type_base);
+ for (int i = 0; i < 2; i++) {
+ ff_refstruct_replace(&dst->motion_val_base[i], src->motion_val_base[i]);
+ ff_refstruct_replace(&dst->ref_index[i], src->ref_index[i]);
}
- dst->mbskip_table = src->mbskip_table;
dst->qscale_table = src->qscale_table;
dst->mb_type = src->mb_type;
- for (i = 0; i < 2; i++) {
+ for (int i = 0; i < 2; i++)
dst->motion_val[i] = src->motion_val[i];
- dst->ref_index[i] = src->ref_index[i];
- }
dst->mb_width = src->mb_width;
dst->mb_height = src->mb_height;
dst->mb_stride = src->mb_stride;
-
- return 0;
}
int ff_mpeg_ref_picture(Picture *dst, Picture *src)
@@ -269,9 +247,7 @@ int ff_mpeg_ref_picture(Picture *dst, Picture *src)
if (ret < 0)
goto fail;
- ret = update_picture_tables(dst, src);
- if (ret < 0)
- goto fail;
+ update_picture_tables(dst, src);
ff_refstruct_replace(&dst->hwaccel_picture_private,
src->hwaccel_picture_private);
diff --git a/libavcodec/mpegpicture.h b/libavcodec/mpegpicture.h
index a0bfd8250f..363732910a 100644
--- a/libavcodec/mpegpicture.h
+++ b/libavcodec/mpegpicture.h
@@ -23,9 +23,6 @@
#include <stdint.h>
-#include "libavutil/buffer.h"
-#include "libavutil/frame.h"
-
#include "avcodec.h"
#include "motion_est.h"
#include "threadframe.h"
@@ -43,11 +40,11 @@ typedef struct ScratchpadContext {
} ScratchpadContext;
typedef struct BufferPoolContext {
- AVBufferPool *mbskip_table_pool;
- AVBufferPool *qscale_table_pool;
- AVBufferPool *mb_type_pool;
- AVBufferPool *motion_val_pool;
- AVBufferPool *ref_index_pool;
+ struct FFRefStructPool *mbskip_table_pool;
+ struct FFRefStructPool *qscale_table_pool;
+ struct FFRefStructPool *mb_type_pool;
+ struct FFRefStructPool *motion_val_pool;
+ struct FFRefStructPool *ref_index_pool;
int alloc_mb_width; ///< mb_width used to allocate tables
int alloc_mb_height; ///< mb_height used to allocate tables
int alloc_mb_stride; ///< mb_stride used to allocate tables
@@ -60,19 +57,17 @@ typedef struct Picture {
struct AVFrame *f;
ThreadFrame tf;
- AVBufferRef *qscale_table_buf;
+ int8_t *qscale_table_base;
int8_t *qscale_table;
- AVBufferRef *motion_val_buf[2];
+ int16_t (*motion_val_base[2])[2];
int16_t (*motion_val[2])[2];
- AVBufferRef *mb_type_buf;
+ uint32_t *mb_type_base;
uint32_t *mb_type; ///< types and macros are defined in mpegutils.h
- AVBufferRef *mbskip_table_buf;
uint8_t *mbskip_table;
- AVBufferRef *ref_index_buf[2];
int8_t *ref_index[2];
/// RefStruct reference for hardware accelerator private data
diff --git a/libavcodec/mpegvideo.c b/libavcodec/mpegvideo.c
index 5728f4cee3..eab4451e1e 100644
--- a/libavcodec/mpegvideo.c
+++ b/libavcodec/mpegvideo.c
@@ -41,6 +41,7 @@
#include "mpegutils.h"
#include "mpegvideo.h"
#include "mpegvideodata.h"
+#include "refstruct.h"
static void dct_unquantize_mpeg1_intra_c(MpegEncContext *s,
int16_t *block, int n, int qscale)
@@ -536,11 +537,11 @@ void ff_mpv_common_defaults(MpegEncContext *s)
static void free_buffer_pools(BufferPoolContext *pools)
{
- av_buffer_pool_uninit(&pools->mbskip_table_pool);
- av_buffer_pool_uninit(&pools->qscale_table_pool);
- av_buffer_pool_uninit(&pools->mb_type_pool);
- av_buffer_pool_uninit(&pools->motion_val_pool);
- av_buffer_pool_uninit(&pools->ref_index_pool);
+ ff_refstruct_pool_uninit(&pools->mbskip_table_pool);
+ ff_refstruct_pool_uninit(&pools->qscale_table_pool);
+ ff_refstruct_pool_uninit(&pools->mb_type_pool);
+ ff_refstruct_pool_uninit(&pools->motion_val_pool);
+ ff_refstruct_pool_uninit(&pools->ref_index_pool);
pools->alloc_mb_height = pools->alloc_mb_width = pools->alloc_mb_stride = 0;
}
@@ -641,15 +642,15 @@ int ff_mpv_init_context_frame(MpegEncContext *s)
return AVERROR(ENOMEM);
memset(s->mbintra_table, 1, mb_array_size);
-#define ALLOC_POOL(name, size) do { \
- pools->name ##_pool = av_buffer_pool_init((size), av_buffer_allocz); \
+#define ALLOC_POOL(name, size, flags) do { \
+ pools->name ##_pool = ff_refstruct_pool_alloc((size), (flags)); \
if (!pools->name ##_pool) \
return AVERROR(ENOMEM); \
} while (0)
- ALLOC_POOL(mbskip_table, mb_array_size + 2);
- ALLOC_POOL(qscale_table, mv_table_size);
- ALLOC_POOL(mb_type, mv_table_size * sizeof(uint32_t));
+ ALLOC_POOL(mbskip_table, mb_array_size + 2, FF_REFSTRUCT_POOL_FLAG_ZERO_EVERY_TIME);
+ ALLOC_POOL(qscale_table, mv_table_size, 0);
+ ALLOC_POOL(mb_type, mv_table_size * sizeof(uint32_t), 0);
if (s->out_format == FMT_H263 || s->encoding ||
(s->avctx->export_side_data & AV_CODEC_EXPORT_DATA_MVS)) {
@@ -657,8 +658,11 @@ int ff_mpv_init_context_frame(MpegEncContext *s)
int mv_size = 2 * (b8_array_size + 4) * sizeof(int16_t);
int ref_index_size = 4 * mb_array_size;
- ALLOC_POOL(motion_val, mv_size);
- ALLOC_POOL(ref_index, ref_index_size);
+ /* FIXME: The output of H.263 with OBMC depends upon
+ * the earlier content of the buffer; therefore we set
+ * the flags to always reset returned buffers here. */
+ ALLOC_POOL(motion_val, mv_size, FF_REFSTRUCT_POOL_FLAG_ZERO_EVERY_TIME);
+ ALLOC_POOL(ref_index, ref_index_size, 0);
}
#undef ALLOC_POOL
pools->alloc_mb_width = s->mb_width;
--
2.40.1
* [FFmpeg-devel] [PATCH v2 26/71] avcodec/h263: Move encoder-only part out of ff_h263_update_motion_val()
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (23 preceding siblings ...)
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 25/71] avcodec/mpegpicture: Use RefStruct-pool API Andreas Rheinhardt
@ 2024-05-11 20:50 ` Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 27/71] avcodec/h263, mpeg(picture|video): Only allocate mbskip_table for MPEG-4 Andreas Rheinhardt
` (45 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:50 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/h263.c | 9 ---------
libavcodec/h263enc.h | 2 +-
libavcodec/ituh263enc.c | 14 ++++++++++++++
libavcodec/mpegvideo_enc.c | 4 ++--
4 files changed, 17 insertions(+), 12 deletions(-)
diff --git a/libavcodec/h263.c b/libavcodec/h263.c
index b30ffaf878..3edf810bcc 100644
--- a/libavcodec/h263.c
+++ b/libavcodec/h263.c
@@ -91,15 +91,6 @@ void ff_h263_update_motion_val(MpegEncContext * s){
s->current_picture.motion_val[0][xy + 1 + wrap][0] = motion_x;
s->current_picture.motion_val[0][xy + 1 + wrap][1] = motion_y;
}
-
- if(s->encoding){ //FIXME encoding MUST be cleaned up
- if (s->mv_type == MV_TYPE_8X8)
- s->current_picture.mb_type[mb_xy] = MB_TYPE_L0 | MB_TYPE_8x8;
- else if(s->mb_intra)
- s->current_picture.mb_type[mb_xy] = MB_TYPE_INTRA;
- else
- s->current_picture.mb_type[mb_xy] = MB_TYPE_L0 | MB_TYPE_16x16;
- }
}
void ff_h263_loop_filter(MpegEncContext * s){
diff --git a/libavcodec/h263enc.h b/libavcodec/h263enc.h
index e45475686e..cd5ded1593 100644
--- a/libavcodec/h263enc.h
+++ b/libavcodec/h263enc.h
@@ -36,7 +36,7 @@ void ff_init_qscale_tab(MpegEncContext *s);
void ff_clean_h263_qscales(MpegEncContext *s);
void ff_h263_encode_motion(PutBitContext *pb, int val, int f_code);
-
+void ff_h263_update_mb(MpegEncContext *s);
static inline int h263_get_motion_length(int val, int f_code)
{
diff --git a/libavcodec/ituh263enc.c b/libavcodec/ituh263enc.c
index 4741ada853..87689e5f5b 100644
--- a/libavcodec/ituh263enc.c
+++ b/libavcodec/ituh263enc.c
@@ -688,6 +688,20 @@ void ff_h263_encode_mb(MpegEncContext * s,
}
}
+void ff_h263_update_mb(MpegEncContext *s)
+{
+ const int mb_xy = s->mb_y * s->mb_stride + s->mb_x;
+
+ if (s->mv_type == MV_TYPE_8X8)
+ s->current_picture.mb_type[mb_xy] = MB_TYPE_L0 | MB_TYPE_8x8;
+ else if(s->mb_intra)
+ s->current_picture.mb_type[mb_xy] = MB_TYPE_INTRA;
+ else
+ s->current_picture.mb_type[mb_xy] = MB_TYPE_L0 | MB_TYPE_16x16;
+
+ ff_h263_update_motion_val(s);
+}
+
void ff_h263_encode_motion(PutBitContext *pb, int val, int f_code)
{
int range, bit_size, sign, code, bits;
diff --git a/libavcodec/mpegvideo_enc.c b/libavcodec/mpegvideo_enc.c
index 4121cc034f..1798a25ed9 100644
--- a/libavcodec/mpegvideo_enc.c
+++ b/libavcodec/mpegvideo_enc.c
@@ -3314,7 +3314,7 @@ static int encode_thread(AVCodecContext *c, void *arg){
if (CONFIG_H263_ENCODER &&
s->out_format == FMT_H263 && s->pict_type!=AV_PICTURE_TYPE_B)
- ff_h263_update_motion_val(s);
+ ff_h263_update_mb(s);
if(next_block==0){ //FIXME 16 vs linesize16
s->hdsp.put_pixels_tab[0][0](s->dest[0], s->sc.rd_scratchpad , s->linesize ,16);
@@ -3440,7 +3440,7 @@ static int encode_thread(AVCodecContext *c, void *arg){
if (CONFIG_H263_ENCODER &&
s->out_format == FMT_H263 && s->pict_type!=AV_PICTURE_TYPE_B)
- ff_h263_update_motion_val(s);
+ ff_h263_update_mb(s);
mpv_reconstruct_mb(s, s->block);
}
--
2.40.1
* [FFmpeg-devel] [PATCH v2 27/71] avcodec/h263, mpeg(picture|video): Only allocate mbskip_table for MPEG-4
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (24 preceding siblings ...)
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 26/71] avcodec/h263: Move encoder-only part out of ff_h263_update_motion_val() Andreas Rheinhardt
@ 2024-05-11 20:50 ` Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 28/71] avcodec/mpegvideo: Reindent after the previous commit Andreas Rheinhardt
` (44 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:50 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
The MPEG-4 code is the only user of said table; restricting the
allocation is especially worthwhile given that this buffer is
zeroed every time it is handed out.
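The cost being avoided is easiest to see with a toy single-slot pool that clears its buffer on every get, mimicking the FF_REFSTRUCT_POOL_FLAG_ZERO_EVERY_TIME behaviour: every get of a reused buffer pays for a memset, so codecs that never read the table should not allocate it at all. This is a hypothetical sketch (ToyPool and friends are made-up names), not the FFRefStruct pool:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Toy single-slot pool; zero_every_time corresponds conceptually to
 * FF_REFSTRUCT_POOL_FLAG_ZERO_EVERY_TIME. */
typedef struct ToyPool {
    unsigned char *slot;
    size_t size;
    int zero_every_time;
} ToyPool;

static ToyPool *toy_pool_alloc(size_t size, int zero_every_time)
{
    ToyPool *p = calloc(1, sizeof(*p));
    if (!p)
        return NULL;
    p->size = size;
    p->zero_every_time = zero_every_time;
    return p;
}

static unsigned char *toy_pool_get(ToyPool *p)
{
    if (!p->slot) {
        p->slot = calloc(1, p->size); /* first use: freshly zeroed */
        return p->slot;
    }
    if (p->zero_every_time)
        memset(p->slot, 0, p->size); /* reused buffer must not leak old data */
    return p->slot;
}

static void toy_pool_free(ToyPool *p)
{
    if (p) {
        free(p->slot);
        free(p);
    }
}
```

With the zeroing confined to the pool, only the one codec that enables the flag pays the per-frame memset.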
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/h263.c | 3 ++-
libavcodec/mpegpicture.c | 3 ++-
libavcodec/mpegvideo.c | 19 +++++++++++--------
3 files changed, 15 insertions(+), 10 deletions(-)
diff --git a/libavcodec/h263.c b/libavcodec/h263.c
index 3edf810bcc..b4cf5ee0de 100644
--- a/libavcodec/h263.c
+++ b/libavcodec/h263.c
@@ -56,7 +56,8 @@ void ff_h263_update_motion_val(MpegEncContext * s){
const int wrap = s->b8_stride;
const int xy = s->block_index[0];
- s->current_picture.mbskip_table[mb_xy] = s->mb_skipped;
+ if (s->current_picture.mbskip_table)
+ s->current_picture.mbskip_table[mb_xy] = s->mb_skipped;
if(s->mv_type != MV_TYPE_8X8){
int motion_x, motion_y;
diff --git a/libavcodec/mpegpicture.c b/libavcodec/mpegpicture.c
index ad6157f0c1..ca265da9fc 100644
--- a/libavcodec/mpegpicture.c
+++ b/libavcodec/mpegpicture.c
@@ -138,10 +138,11 @@ static int alloc_picture_tables(BufferPoolContext *pools, Picture *pic,
if (!pic->name ## buf_suffix idx_suffix) \
return AVERROR(ENOMEM); \
} while (0)
- GET_BUFFER(mbskip_table,,);
GET_BUFFER(qscale_table, _base,);
GET_BUFFER(mb_type, _base,);
if (pools->motion_val_pool) {
+ if (pools->mbskip_table_pool)
+ GET_BUFFER(mbskip_table,,);
for (int i = 0; i < 2; i++) {
GET_BUFFER(ref_index,, [i]);
GET_BUFFER(motion_val, _base, [i]);
diff --git a/libavcodec/mpegvideo.c b/libavcodec/mpegvideo.c
index eab4451e1e..5c6ec7db55 100644
--- a/libavcodec/mpegvideo.c
+++ b/libavcodec/mpegvideo.c
@@ -594,6 +594,12 @@ int ff_mpv_init_context_frame(MpegEncContext *s)
s->mb_index2xy[s->mb_height * s->mb_width] = (s->mb_height - 1) * s->mb_stride + s->mb_width; // FIXME really needed?
+#define ALLOC_POOL(name, size, flags) do { \
+ pools->name ##_pool = ff_refstruct_pool_alloc((size), (flags)); \
+ if (!pools->name ##_pool) \
+ return AVERROR(ENOMEM); \
+} while (0)
+
if (s->codec_id == AV_CODEC_ID_MPEG4 ||
(s->avctx->flags & AV_CODEC_FLAG_INTERLACED_ME)) {
/* interlaced direct mode decoding tables */
@@ -608,12 +614,16 @@ int ff_mpv_init_context_frame(MpegEncContext *s)
tmp += mv_table_size;
}
}
- if (s->codec_id == AV_CODEC_ID_MPEG4 && !s->encoding) {
+ if (s->codec_id == AV_CODEC_ID_MPEG4) {
+ ALLOC_POOL(mbskip_table, mb_array_size + 2,
+ FF_REFSTRUCT_POOL_FLAG_ZERO_EVERY_TIME);
+ if (!s->encoding) {
/* cbp, pred_dir */
if (!(s->cbp_table = av_mallocz(mb_array_size)) ||
!(s->pred_dir_table = av_mallocz(mb_array_size)))
return AVERROR(ENOMEM);
}
+ }
}
if (s->msmpeg4_version >= 3) {
@@ -642,13 +652,6 @@ int ff_mpv_init_context_frame(MpegEncContext *s)
return AVERROR(ENOMEM);
memset(s->mbintra_table, 1, mb_array_size);
-#define ALLOC_POOL(name, size, flags) do { \
- pools->name ##_pool = ff_refstruct_pool_alloc((size), (flags)); \
- if (!pools->name ##_pool) \
- return AVERROR(ENOMEM); \
-} while (0)
-
- ALLOC_POOL(mbskip_table, mb_array_size + 2, FF_REFSTRUCT_POOL_FLAG_ZERO_EVERY_TIME);
ALLOC_POOL(qscale_table, mv_table_size, 0);
ALLOC_POOL(mb_type, mv_table_size * sizeof(uint32_t), 0);
--
2.40.1
* [FFmpeg-devel] [PATCH v2 28/71] avcodec/mpegvideo: Reindent after the previous commit
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (25 preceding siblings ...)
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 27/71] avcodec/h263, mpeg(picture|video): Only allocate mbskip_table for MPEG-4 Andreas Rheinhardt
@ 2024-05-11 20:50 ` Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 29/71] avcodec/h263: Move setting mbskip_table to decoder/encoders Andreas Rheinhardt
` (43 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:50 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpegvideo.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/libavcodec/mpegvideo.c b/libavcodec/mpegvideo.c
index 5c6ec7db55..d82a89566c 100644
--- a/libavcodec/mpegvideo.c
+++ b/libavcodec/mpegvideo.c
@@ -617,12 +617,12 @@ int ff_mpv_init_context_frame(MpegEncContext *s)
if (s->codec_id == AV_CODEC_ID_MPEG4) {
ALLOC_POOL(mbskip_table, mb_array_size + 2,
FF_REFSTRUCT_POOL_FLAG_ZERO_EVERY_TIME);
- if (!s->encoding) {
- /* cbp, pred_dir */
- if (!(s->cbp_table = av_mallocz(mb_array_size)) ||
- !(s->pred_dir_table = av_mallocz(mb_array_size)))
- return AVERROR(ENOMEM);
- }
+ if (!s->encoding) {
+ /* cbp, pred_dir */
+ if (!(s->cbp_table = av_mallocz(mb_array_size)) ||
+ !(s->pred_dir_table = av_mallocz(mb_array_size)))
+ return AVERROR(ENOMEM);
+ }
}
}
--
2.40.1
* [FFmpeg-devel] [PATCH v2 29/71] avcodec/h263: Move setting mbskip_table to decoder/encoders
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (26 preceding siblings ...)
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 28/71] avcodec/mpegvideo: Reindent after the previous commit Andreas Rheinhardt
@ 2024-05-11 20:50 ` Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 30/71] avcodec/mpegvideo: Restrict resetting mbskip_table to MPEG-4 decoder Andreas Rheinhardt
` (42 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:50 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
This removes a branch from H.263-based decoders.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/h263.c | 3 ---
libavcodec/ituh263enc.c | 3 +++
libavcodec/mpeg4videodec.c | 4 ++++
3 files changed, 7 insertions(+), 3 deletions(-)
diff --git a/libavcodec/h263.c b/libavcodec/h263.c
index b4cf5ee0de..9849f651cb 100644
--- a/libavcodec/h263.c
+++ b/libavcodec/h263.c
@@ -56,9 +56,6 @@ void ff_h263_update_motion_val(MpegEncContext * s){
const int wrap = s->b8_stride;
const int xy = s->block_index[0];
- if (s->current_picture.mbskip_table)
- s->current_picture.mbskip_table[mb_xy] = s->mb_skipped;
-
if(s->mv_type != MV_TYPE_8X8){
int motion_x, motion_y;
if (s->mb_intra) {
diff --git a/libavcodec/ituh263enc.c b/libavcodec/ituh263enc.c
index 87689e5f5b..e27bd258d7 100644
--- a/libavcodec/ituh263enc.c
+++ b/libavcodec/ituh263enc.c
@@ -692,6 +692,9 @@ void ff_h263_update_mb(MpegEncContext *s)
{
const int mb_xy = s->mb_y * s->mb_stride + s->mb_x;
+ if (s->current_picture.mbskip_table)
+ s->current_picture.mbskip_table[mb_xy] = s->mb_skipped;
+
if (s->mv_type == MV_TYPE_8X8)
s->current_picture.mb_type[mb_xy] = MB_TYPE_L0 | MB_TYPE_8x8;
else if(s->mb_intra)
diff --git a/libavcodec/mpeg4videodec.c b/libavcodec/mpeg4videodec.c
index 6a7a37e817..482bc48f89 100644
--- a/libavcodec/mpeg4videodec.c
+++ b/libavcodec/mpeg4videodec.c
@@ -1592,9 +1592,11 @@ static int mpeg4_decode_partitioned_mb(MpegEncContext *s, int16_t block[6][64])
&& ctx->vol_sprite_usage == GMC_SPRITE) {
s->mcsel = 1;
s->mb_skipped = 0;
+ s->current_picture.mbskip_table[xy] = 0;
} else {
s->mcsel = 0;
s->mb_skipped = 1;
+ s->current_picture.mbskip_table[xy] = 1;
}
} else if (s->mb_intra) {
s->ac_pred = IS_ACPRED(s->current_picture.mb_type[xy]);
@@ -1676,6 +1678,7 @@ static int mpeg4_decode_mb(MpegEncContext *s, int16_t block[6][64])
s->mcsel = 1;
s->mv[0][0][0] = get_amv(ctx, 0);
s->mv[0][0][1] = get_amv(ctx, 1);
+ s->current_picture.mbskip_table[xy] = 0;
s->mb_skipped = 0;
} else {
s->current_picture.mb_type[xy] = MB_TYPE_SKIP |
@@ -1684,6 +1687,7 @@ static int mpeg4_decode_mb(MpegEncContext *s, int16_t block[6][64])
s->mcsel = 0;
s->mv[0][0][0] = 0;
s->mv[0][0][1] = 0;
+ s->current_picture.mbskip_table[xy] = 1;
s->mb_skipped = 1;
}
goto end;
--
2.40.1
* [FFmpeg-devel] [PATCH v2 30/71] avcodec/mpegvideo: Restrict resetting mbskip_table to MPEG-4 decoder
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (27 preceding siblings ...)
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 29/71] avcodec/h263: Move setting mbskip_table to decoder/encoders Andreas Rheinhardt
@ 2024-05-11 20:50 ` Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 31/71] avcodec/mpegvideo: Shorten variable names Andreas Rheinhardt
` (41 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:50 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
Resetting this table is only needed to cope with invalid input,
which cannot occur when encoding; the encoder is therefore
unaffected by this change.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpegvideo.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/libavcodec/mpegvideo.c b/libavcodec/mpegvideo.c
index d82a89566c..4b1f882105 100644
--- a/libavcodec/mpegvideo.c
+++ b/libavcodec/mpegvideo.c
@@ -616,7 +616,7 @@ int ff_mpv_init_context_frame(MpegEncContext *s)
}
if (s->codec_id == AV_CODEC_ID_MPEG4) {
ALLOC_POOL(mbskip_table, mb_array_size + 2,
- FF_REFSTRUCT_POOL_FLAG_ZERO_EVERY_TIME);
+ !s->encoding ? FF_REFSTRUCT_POOL_FLAG_ZERO_EVERY_TIME : 0);
if (!s->encoding) {
/* cbp, pred_dir */
if (!(s->cbp_table = av_mallocz(mb_array_size)) ||
--
2.40.1
* [FFmpeg-devel] [PATCH v2 31/71] avcodec/mpegvideo: Shorten variable names
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (28 preceding siblings ...)
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 30/71] avcodec/mpegvideo: Restrict resetting mbskip_table to MPEG-4 decoder Andreas Rheinhardt
@ 2024-05-11 20:50 ` Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 32/71] avcodec/mpegpicture: Reduce value of MAX_PLANES define Andreas Rheinhardt
` (40 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:50 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
Rename current_picture to cur_pic and last_picture to last_pic;
similarly for new_picture and next_picture.
Also rename the corresponding *_ptr fields.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/d3d12va_mpeg2.c | 10 +-
libavcodec/d3d12va_vc1.c | 10 +-
libavcodec/dxva2_mpeg2.c | 16 +-
libavcodec/dxva2_vc1.c | 16 +-
libavcodec/h261dec.c | 26 +--
libavcodec/h263.c | 40 ++---
libavcodec/h263dec.c | 34 ++--
libavcodec/ituh263dec.c | 30 ++--
libavcodec/ituh263enc.c | 16 +-
libavcodec/motion_est.c | 84 ++++-----
libavcodec/mpeg12dec.c | 94 +++++-----
libavcodec/mpeg12enc.c | 14 +-
libavcodec/mpeg4video.c | 8 +-
libavcodec/mpeg4videodec.c | 84 ++++-----
libavcodec/mpeg4videoenc.c | 16 +-
libavcodec/mpeg_er.c | 12 +-
libavcodec/mpegvideo.c | 34 ++--
libavcodec/mpegvideo.h | 14 +-
libavcodec/mpegvideo_dec.c | 128 +++++++-------
libavcodec/mpegvideo_enc.c | 182 ++++++++++---------
libavcodec/mpegvideo_motion.c | 12 +-
libavcodec/mpv_reconstruct_mb_template.c | 20 +--
libavcodec/msmpeg4.c | 4 +-
libavcodec/msmpeg4dec.c | 4 +-
libavcodec/mss2.c | 2 +-
libavcodec/nvdec_mpeg12.c | 6 +-
libavcodec/nvdec_mpeg4.c | 6 +-
libavcodec/nvdec_vc1.c | 6 +-
libavcodec/ratecontrol.c | 10 +-
libavcodec/rv10.c | 28 +--
libavcodec/rv30.c | 18 +-
libavcodec/rv34.c | 156 ++++++++---------
libavcodec/rv40.c | 10 +-
libavcodec/snowenc.c | 18 +-
libavcodec/svq1enc.c | 24 +--
libavcodec/vaapi_mpeg2.c | 12 +-
libavcodec/vaapi_mpeg4.c | 14 +-
libavcodec/vaapi_vc1.c | 12 +-
libavcodec/vc1.c | 2 +-
libavcodec/vc1_block.c | 194 ++++++++++----------
libavcodec/vc1_loopfilter.c | 30 ++--
libavcodec/vc1_mc.c | 112 ++++++------
libavcodec/vc1_pred.c | 214 +++++++++++------------
libavcodec/vc1dec.c | 58 +++---
libavcodec/vdpau.c | 2 +-
libavcodec/vdpau_mpeg12.c | 8 +-
libavcodec/vdpau_mpeg4.c | 6 +-
libavcodec/vdpau_vc1.c | 12 +-
libavcodec/videotoolbox.c | 2 +-
libavcodec/wmv2dec.c | 18 +-
50 files changed, 941 insertions(+), 947 deletions(-)
diff --git a/libavcodec/d3d12va_mpeg2.c b/libavcodec/d3d12va_mpeg2.c
index 936af5f86a..c2cf78104c 100644
--- a/libavcodec/d3d12va_mpeg2.c
+++ b/libavcodec/d3d12va_mpeg2.c
@@ -44,7 +44,7 @@ static int d3d12va_mpeg2_start_frame(AVCodecContext *avctx, av_unused const uint
{
const MpegEncContext *s = avctx->priv_data;
D3D12VADecodeContext *ctx = D3D12VA_DECODE_CONTEXT(avctx);
- D3D12DecodePictureContext *ctx_pic = s->current_picture_ptr->hwaccel_picture_private;
+ D3D12DecodePictureContext *ctx_pic = s->cur_pic_ptr->hwaccel_picture_private;
if (!ctx)
return -1;
@@ -69,7 +69,7 @@ static int d3d12va_mpeg2_start_frame(AVCodecContext *avctx, av_unused const uint
static int d3d12va_mpeg2_decode_slice(AVCodecContext *avctx, const uint8_t *buffer, uint32_t size)
{
const MpegEncContext *s = avctx->priv_data;
- D3D12DecodePictureContext *ctx_pic = s->current_picture_ptr->hwaccel_picture_private;
+ D3D12DecodePictureContext *ctx_pic = s->cur_pic_ptr->hwaccel_picture_private;
if (ctx_pic->slice_count >= MAX_SLICES) {
return AVERROR(ERANGE);
@@ -88,7 +88,7 @@ static int d3d12va_mpeg2_decode_slice(AVCodecContext *avctx, const uint8_t *buff
static int update_input_arguments(AVCodecContext *avctx, D3D12_VIDEO_DECODE_INPUT_STREAM_ARGUMENTS *input_args, ID3D12Resource *buffer)
{
const MpegEncContext *s = avctx->priv_data;
- D3D12DecodePictureContext *ctx_pic = s->current_picture_ptr->hwaccel_picture_private;
+ D3D12DecodePictureContext *ctx_pic = s->cur_pic_ptr->hwaccel_picture_private;
const int is_field = s->picture_structure != PICT_FRAME;
const unsigned mb_count = s->mb_width * (s->mb_height >> is_field);
@@ -137,12 +137,12 @@ static int d3d12va_mpeg2_end_frame(AVCodecContext *avctx)
{
int ret;
MpegEncContext *s = avctx->priv_data;
- D3D12DecodePictureContext *ctx_pic = s->current_picture_ptr->hwaccel_picture_private;
+ D3D12DecodePictureContext *ctx_pic = s->cur_pic_ptr->hwaccel_picture_private;
if (ctx_pic->slice_count <= 0 || ctx_pic->bitstream_size <= 0)
return -1;
- ret = ff_d3d12va_common_end_frame(avctx, s->current_picture_ptr->f, &ctx_pic->pp, sizeof(ctx_pic->pp),
+ ret = ff_d3d12va_common_end_frame(avctx, s->cur_pic_ptr->f, &ctx_pic->pp, sizeof(ctx_pic->pp),
&ctx_pic->qm, sizeof(ctx_pic->qm), update_input_arguments);
if (!ret)
ff_mpeg_draw_horiz_band(s, 0, avctx->height);
diff --git a/libavcodec/d3d12va_vc1.c b/libavcodec/d3d12va_vc1.c
index 110926be82..c4ac67ca04 100644
--- a/libavcodec/d3d12va_vc1.c
+++ b/libavcodec/d3d12va_vc1.c
@@ -45,7 +45,7 @@ static int d3d12va_vc1_start_frame(AVCodecContext *avctx, av_unused const uint8_
{
const VC1Context *v = avctx->priv_data;
D3D12VADecodeContext *ctx = D3D12VA_DECODE_CONTEXT(avctx);
- D3D12DecodePictureContext *ctx_pic = v->s.current_picture_ptr->hwaccel_picture_private;
+ D3D12DecodePictureContext *ctx_pic = v->s.cur_pic_ptr->hwaccel_picture_private;
if (!ctx)
return -1;
@@ -67,7 +67,7 @@ static int d3d12va_vc1_start_frame(AVCodecContext *avctx, av_unused const uint8_
static int d3d12va_vc1_decode_slice(AVCodecContext *avctx, const uint8_t *buffer, uint32_t size)
{
const VC1Context *v = avctx->priv_data;
- D3D12DecodePictureContext *ctx_pic = v->s.current_picture_ptr->hwaccel_picture_private;
+ D3D12DecodePictureContext *ctx_pic = v->s.cur_pic_ptr->hwaccel_picture_private;
if (ctx_pic->slice_count >= MAX_SLICES) {
return AVERROR(ERANGE);
@@ -93,7 +93,7 @@ static int update_input_arguments(AVCodecContext *avctx, D3D12_VIDEO_DECODE_INPU
{
const VC1Context *v = avctx->priv_data;
const MpegEncContext *s = &v->s;
- D3D12DecodePictureContext *ctx_pic = s->current_picture_ptr->hwaccel_picture_private;
+ D3D12DecodePictureContext *ctx_pic = s->cur_pic_ptr->hwaccel_picture_private;
D3D12_VIDEO_DECODE_FRAME_ARGUMENT *args = &input_args->FrameArguments[input_args->NumFrameArguments++];
const unsigned mb_count = s->mb_width * (s->mb_height >> v->field_mode);
@@ -151,12 +151,12 @@ static int update_input_arguments(AVCodecContext *avctx, D3D12_VIDEO_DECODE_INPU
static int d3d12va_vc1_end_frame(AVCodecContext *avctx)
{
const VC1Context *v = avctx->priv_data;
- D3D12DecodePictureContext *ctx_pic = v->s.current_picture_ptr->hwaccel_picture_private;
+ D3D12DecodePictureContext *ctx_pic = v->s.cur_pic_ptr->hwaccel_picture_private;
if (ctx_pic->slice_count <= 0 || ctx_pic->bitstream_size <= 0)
return -1;
- return ff_d3d12va_common_end_frame(avctx, v->s.current_picture_ptr->f,
+ return ff_d3d12va_common_end_frame(avctx, v->s.cur_pic_ptr->f,
&ctx_pic->pp, sizeof(ctx_pic->pp),
NULL, 0,
update_input_arguments);
diff --git a/libavcodec/dxva2_mpeg2.c b/libavcodec/dxva2_mpeg2.c
index d31a8bb872..fde615f530 100644
--- a/libavcodec/dxva2_mpeg2.c
+++ b/libavcodec/dxva2_mpeg2.c
@@ -45,17 +45,17 @@ void ff_dxva2_mpeg2_fill_picture_parameters(AVCodecContext *avctx,
DXVA_PictureParameters *pp)
{
const struct MpegEncContext *s = avctx->priv_data;
- const Picture *current_picture = s->current_picture_ptr;
+ const Picture *current_picture = s->cur_pic_ptr;
int is_field = s->picture_structure != PICT_FRAME;
memset(pp, 0, sizeof(*pp));
pp->wDeblockedPictureIndex = 0;
if (s->pict_type != AV_PICTURE_TYPE_I)
- pp->wForwardRefPictureIndex = ff_dxva2_get_surface_index(avctx, ctx, s->last_picture.f, 0);
+ pp->wForwardRefPictureIndex = ff_dxva2_get_surface_index(avctx, ctx, s->last_pic.f, 0);
else
pp->wForwardRefPictureIndex = 0xffff;
if (s->pict_type == AV_PICTURE_TYPE_B)
- pp->wBackwardRefPictureIndex = ff_dxva2_get_surface_index(avctx, ctx, s->next_picture.f, 0);
+ pp->wBackwardRefPictureIndex = ff_dxva2_get_surface_index(avctx, ctx, s->next_pic.f, 0);
else
pp->wBackwardRefPictureIndex = 0xffff;
pp->wDecodedPictureIndex = ff_dxva2_get_surface_index(avctx, ctx, current_picture->f, 1);
@@ -157,7 +157,7 @@ static int commit_bitstream_and_slice_buffer(AVCodecContext *avctx,
const struct MpegEncContext *s = avctx->priv_data;
AVDXVAContext *ctx = DXVA_CONTEXT(avctx);
struct dxva2_picture_context *ctx_pic =
- s->current_picture_ptr->hwaccel_picture_private;
+ s->cur_pic_ptr->hwaccel_picture_private;
const int is_field = s->picture_structure != PICT_FRAME;
const unsigned mb_count = s->mb_width * (s->mb_height >> is_field);
void *dxva_data_ptr;
@@ -260,7 +260,7 @@ static int dxva2_mpeg2_start_frame(AVCodecContext *avctx,
const struct MpegEncContext *s = avctx->priv_data;
AVDXVAContext *ctx = DXVA_CONTEXT(avctx);
struct dxva2_picture_context *ctx_pic =
- s->current_picture_ptr->hwaccel_picture_private;
+ s->cur_pic_ptr->hwaccel_picture_private;
if (!DXVA_CONTEXT_VALID(avctx, ctx))
return -1;
@@ -280,7 +280,7 @@ static int dxva2_mpeg2_decode_slice(AVCodecContext *avctx,
{
const struct MpegEncContext *s = avctx->priv_data;
struct dxva2_picture_context *ctx_pic =
- s->current_picture_ptr->hwaccel_picture_private;
+ s->cur_pic_ptr->hwaccel_picture_private;
unsigned position;
if (ctx_pic->slice_count >= MAX_SLICES) {
@@ -302,12 +302,12 @@ static int dxva2_mpeg2_end_frame(AVCodecContext *avctx)
{
struct MpegEncContext *s = avctx->priv_data;
struct dxva2_picture_context *ctx_pic =
- s->current_picture_ptr->hwaccel_picture_private;
+ s->cur_pic_ptr->hwaccel_picture_private;
int ret;
if (ctx_pic->slice_count <= 0 || ctx_pic->bitstream_size <= 0)
return -1;
- ret = ff_dxva2_common_end_frame(avctx, s->current_picture_ptr->f,
+ ret = ff_dxva2_common_end_frame(avctx, s->cur_pic_ptr->f,
&ctx_pic->pp, sizeof(ctx_pic->pp),
&ctx_pic->qm, sizeof(ctx_pic->qm),
commit_bitstream_and_slice_buffer);
diff --git a/libavcodec/dxva2_vc1.c b/libavcodec/dxva2_vc1.c
index f7513b2b15..7122f1cfea 100644
--- a/libavcodec/dxva2_vc1.c
+++ b/libavcodec/dxva2_vc1.c
@@ -46,7 +46,7 @@ void ff_dxva2_vc1_fill_picture_parameters(AVCodecContext *avctx,
{
const VC1Context *v = avctx->priv_data;
const MpegEncContext *s = &v->s;
- const Picture *current_picture = s->current_picture_ptr;
+ const Picture *current_picture = s->cur_pic_ptr;
int intcomp = 0;
// determine if intensity compensation is needed
@@ -59,11 +59,11 @@ void ff_dxva2_vc1_fill_picture_parameters(AVCodecContext *avctx,
memset(pp, 0, sizeof(*pp));
if (s->pict_type != AV_PICTURE_TYPE_I && !v->bi_type)
- pp->wForwardRefPictureIndex = ff_dxva2_get_surface_index(avctx, ctx, s->last_picture.f, 0);
+ pp->wForwardRefPictureIndex = ff_dxva2_get_surface_index(avctx, ctx, s->last_pic.f, 0);
else
pp->wForwardRefPictureIndex = 0xffff;
if (s->pict_type == AV_PICTURE_TYPE_B && !v->bi_type)
- pp->wBackwardRefPictureIndex = ff_dxva2_get_surface_index(avctx, ctx, s->next_picture.f, 0);
+ pp->wBackwardRefPictureIndex = ff_dxva2_get_surface_index(avctx, ctx, s->next_pic.f, 0);
else
pp->wBackwardRefPictureIndex = 0xffff;
pp->wDecodedPictureIndex =
@@ -191,7 +191,7 @@ static int commit_bitstream_and_slice_buffer(AVCodecContext *avctx,
const VC1Context *v = avctx->priv_data;
AVDXVAContext *ctx = DXVA_CONTEXT(avctx);
const MpegEncContext *s = &v->s;
- struct dxva2_picture_context *ctx_pic = s->current_picture_ptr->hwaccel_picture_private;
+ struct dxva2_picture_context *ctx_pic = s->cur_pic_ptr->hwaccel_picture_private;
static const uint8_t start_code[] = { 0, 0, 1, 0x0d };
const unsigned start_code_size = avctx->codec_id == AV_CODEC_ID_VC1 ? sizeof(start_code) : 0;
@@ -317,7 +317,7 @@ static int dxva2_vc1_start_frame(AVCodecContext *avctx,
{
const VC1Context *v = avctx->priv_data;
AVDXVAContext *ctx = DXVA_CONTEXT(avctx);
- struct dxva2_picture_context *ctx_pic = v->s.current_picture_ptr->hwaccel_picture_private;
+ struct dxva2_picture_context *ctx_pic = v->s.cur_pic_ptr->hwaccel_picture_private;
if (!DXVA_CONTEXT_VALID(avctx, ctx))
return -1;
@@ -336,7 +336,7 @@ static int dxva2_vc1_decode_slice(AVCodecContext *avctx,
uint32_t size)
{
const VC1Context *v = avctx->priv_data;
- const Picture *current_picture = v->s.current_picture_ptr;
+ const Picture *current_picture = v->s.cur_pic_ptr;
struct dxva2_picture_context *ctx_pic = current_picture->hwaccel_picture_private;
unsigned position;
@@ -364,13 +364,13 @@ static int dxva2_vc1_decode_slice(AVCodecContext *avctx,
static int dxva2_vc1_end_frame(AVCodecContext *avctx)
{
VC1Context *v = avctx->priv_data;
- struct dxva2_picture_context *ctx_pic = v->s.current_picture_ptr->hwaccel_picture_private;
+ struct dxva2_picture_context *ctx_pic = v->s.cur_pic_ptr->hwaccel_picture_private;
int ret;
if (ctx_pic->slice_count <= 0 || ctx_pic->bitstream_size <= 0)
return -1;
- ret = ff_dxva2_common_end_frame(avctx, v->s.current_picture_ptr->f,
+ ret = ff_dxva2_common_end_frame(avctx, v->s.cur_pic_ptr->f,
&ctx_pic->pp, sizeof(ctx_pic->pp),
NULL, 0,
commit_bitstream_and_slice_buffer);
diff --git a/libavcodec/h261dec.c b/libavcodec/h261dec.c
index 4fbd5985b3..77aa08687d 100644
--- a/libavcodec/h261dec.c
+++ b/libavcodec/h261dec.c
@@ -228,17 +228,17 @@ static int h261_decode_mb_skipped(H261DecContext *h, int mba1, int mba2)
s->mv_dir = MV_DIR_FORWARD;
s->mv_type = MV_TYPE_16X16;
- s->current_picture.mb_type[xy] = MB_TYPE_SKIP | MB_TYPE_16x16 | MB_TYPE_L0;
+ s->cur_pic.mb_type[xy] = MB_TYPE_SKIP | MB_TYPE_16x16 | MB_TYPE_L0;
s->mv[0][0][0] = 0;
s->mv[0][0][1] = 0;
s->mb_skipped = 1;
h->common.mtype &= ~MB_TYPE_H261_FIL;
- if (s->current_picture.motion_val[0]) {
+ if (s->cur_pic.motion_val[0]) {
int b_stride = 2*s->mb_width + 1;
int b_xy = 2 * s->mb_x + (2 * s->mb_y) * b_stride;
- s->current_picture.motion_val[0][b_xy][0] = s->mv[0][0][0];
- s->current_picture.motion_val[0][b_xy][1] = s->mv[0][0][1];
+ s->cur_pic.motion_val[0][b_xy][0] = s->mv[0][0][0];
+ s->cur_pic.motion_val[0][b_xy][1] = s->mv[0][0][1];
}
ff_mpv_reconstruct_mb(s, s->block);
@@ -452,22 +452,22 @@ static int h261_decode_mb(H261DecContext *h)
cbp = get_vlc2(&s->gb, h261_cbp_vlc, H261_CBP_VLC_BITS, 1) + 1;
if (s->mb_intra) {
- s->current_picture.mb_type[xy] = MB_TYPE_INTRA;
+ s->cur_pic.mb_type[xy] = MB_TYPE_INTRA;
goto intra;
}
//set motion vectors
s->mv_dir = MV_DIR_FORWARD;
s->mv_type = MV_TYPE_16X16;
- s->current_picture.mb_type[xy] = MB_TYPE_16x16 | MB_TYPE_L0;
+ s->cur_pic.mb_type[xy] = MB_TYPE_16x16 | MB_TYPE_L0;
s->mv[0][0][0] = h->current_mv_x * 2; // gets divided by 2 in motion compensation
s->mv[0][0][1] = h->current_mv_y * 2;
- if (s->current_picture.motion_val[0]) {
+ if (s->cur_pic.motion_val[0]) {
int b_stride = 2*s->mb_width + 1;
int b_xy = 2 * s->mb_x + (2 * s->mb_y) * b_stride;
- s->current_picture.motion_val[0][b_xy][0] = s->mv[0][0][0];
- s->current_picture.motion_val[0][b_xy][1] = s->mv[0][0][1];
+ s->cur_pic.motion_val[0][b_xy][0] = s->mv[0][0][0];
+ s->cur_pic.motion_val[0][b_xy][1] = s->mv[0][0][1];
}
intra:
@@ -649,12 +649,12 @@ static int h261_decode_frame(AVCodecContext *avctx, AVFrame *pict,
}
ff_mpv_frame_end(s);
- av_assert0(s->current_picture.f->pict_type == s->current_picture_ptr->f->pict_type);
- av_assert0(s->current_picture.f->pict_type == s->pict_type);
+ av_assert0(s->cur_pic.f->pict_type == s->cur_pic_ptr->f->pict_type);
+ av_assert0(s->cur_pic.f->pict_type == s->pict_type);
- if ((ret = av_frame_ref(pict, s->current_picture_ptr->f)) < 0)
+ if ((ret = av_frame_ref(pict, s->cur_pic_ptr->f)) < 0)
return ret;
- ff_print_debug_info(s, s->current_picture_ptr, pict);
+ ff_print_debug_info(s, s->cur_pic_ptr, pict);
*got_frame = 1;
diff --git a/libavcodec/h263.c b/libavcodec/h263.c
index 9849f651cb..19eb3ba52f 100644
--- a/libavcodec/h263.c
+++ b/libavcodec/h263.c
@@ -73,21 +73,21 @@ void ff_h263_update_motion_val(MpegEncContext * s){
s->p_field_mv_table[i][0][mb_xy][0]= s->mv[0][i][0];
s->p_field_mv_table[i][0][mb_xy][1]= s->mv[0][i][1];
}
- s->current_picture.ref_index[0][4*mb_xy ] =
- s->current_picture.ref_index[0][4*mb_xy + 1] = s->field_select[0][0];
- s->current_picture.ref_index[0][4*mb_xy + 2] =
- s->current_picture.ref_index[0][4*mb_xy + 3] = s->field_select[0][1];
+ s->cur_pic.ref_index[0][4*mb_xy ] =
+ s->cur_pic.ref_index[0][4*mb_xy + 1] = s->field_select[0][0];
+ s->cur_pic.ref_index[0][4*mb_xy + 2] =
+ s->cur_pic.ref_index[0][4*mb_xy + 3] = s->field_select[0][1];
}
/* no update if 8X8 because it has been done during parsing */
- s->current_picture.motion_val[0][xy][0] = motion_x;
- s->current_picture.motion_val[0][xy][1] = motion_y;
- s->current_picture.motion_val[0][xy + 1][0] = motion_x;
- s->current_picture.motion_val[0][xy + 1][1] = motion_y;
- s->current_picture.motion_val[0][xy + wrap][0] = motion_x;
- s->current_picture.motion_val[0][xy + wrap][1] = motion_y;
- s->current_picture.motion_val[0][xy + 1 + wrap][0] = motion_x;
- s->current_picture.motion_val[0][xy + 1 + wrap][1] = motion_y;
+ s->cur_pic.motion_val[0][xy][0] = motion_x;
+ s->cur_pic.motion_val[0][xy][1] = motion_y;
+ s->cur_pic.motion_val[0][xy + 1][0] = motion_x;
+ s->cur_pic.motion_val[0][xy + 1][1] = motion_y;
+ s->cur_pic.motion_val[0][xy + wrap][0] = motion_x;
+ s->cur_pic.motion_val[0][xy + wrap][1] = motion_y;
+ s->cur_pic.motion_val[0][xy + 1 + wrap][0] = motion_x;
+ s->cur_pic.motion_val[0][xy + 1 + wrap][1] = motion_y;
}
}
@@ -104,7 +104,7 @@ void ff_h263_loop_filter(MpegEncContext * s){
Diag Top
Left Center
*/
- if (!IS_SKIP(s->current_picture.mb_type[xy])) {
+ if (!IS_SKIP(s->cur_pic.mb_type[xy])) {
qp_c= s->qscale;
s->h263dsp.h263_v_loop_filter(dest_y + 8 * linesize, linesize, qp_c);
s->h263dsp.h263_v_loop_filter(dest_y + 8 * linesize + 8, linesize, qp_c);
@@ -114,10 +114,10 @@ void ff_h263_loop_filter(MpegEncContext * s){
if(s->mb_y){
int qp_dt, qp_tt, qp_tc;
- if (IS_SKIP(s->current_picture.mb_type[xy - s->mb_stride]))
+ if (IS_SKIP(s->cur_pic.mb_type[xy - s->mb_stride]))
qp_tt=0;
else
- qp_tt = s->current_picture.qscale_table[xy - s->mb_stride];
+ qp_tt = s->cur_pic.qscale_table[xy - s->mb_stride];
if(qp_c)
qp_tc= qp_c;
@@ -137,10 +137,10 @@ void ff_h263_loop_filter(MpegEncContext * s){
s->h263dsp.h263_h_loop_filter(dest_y - 8 * linesize + 8, linesize, qp_tt);
if(s->mb_x){
- if (qp_tt || IS_SKIP(s->current_picture.mb_type[xy - 1 - s->mb_stride]))
+ if (qp_tt || IS_SKIP(s->cur_pic.mb_type[xy - 1 - s->mb_stride]))
qp_dt= qp_tt;
else
- qp_dt = s->current_picture.qscale_table[xy - 1 - s->mb_stride];
+ qp_dt = s->cur_pic.qscale_table[xy - 1 - s->mb_stride];
if(qp_dt){
const int chroma_qp= s->chroma_qscale_table[qp_dt];
@@ -159,10 +159,10 @@ void ff_h263_loop_filter(MpegEncContext * s){
if(s->mb_x){
int qp_lc;
- if (qp_c || IS_SKIP(s->current_picture.mb_type[xy - 1]))
+ if (qp_c || IS_SKIP(s->cur_pic.mb_type[xy - 1]))
qp_lc= qp_c;
else
- qp_lc = s->current_picture.qscale_table[xy - 1];
+ qp_lc = s->cur_pic.qscale_table[xy - 1];
if(qp_lc){
s->h263dsp.h263_h_loop_filter(dest_y, linesize, qp_lc);
@@ -184,7 +184,7 @@ int16_t *ff_h263_pred_motion(MpegEncContext * s, int block, int dir,
static const int off[4]= {2, 1, 1, -1};
wrap = s->b8_stride;
- mot_val = s->current_picture.motion_val[dir] + s->block_index[block];
+ mot_val = s->cur_pic.motion_val[dir] + s->block_index[block];
A = mot_val[ - 1];
/* special case for first (slice) line */
diff --git a/libavcodec/h263dec.c b/libavcodec/h263dec.c
index 48bd467f30..6ae634fceb 100644
--- a/libavcodec/h263dec.c
+++ b/libavcodec/h263dec.c
@@ -432,22 +432,22 @@ int ff_h263_decode_frame(AVCodecContext *avctx, AVFrame *pict,
/* no supplementary picture */
if (buf_size == 0) {
/* special case for last picture */
- if (s->low_delay == 0 && s->next_picture_ptr) {
- if ((ret = av_frame_ref(pict, s->next_picture_ptr->f)) < 0)
+ if (s->low_delay == 0 && s->next_pic_ptr) {
+ if ((ret = av_frame_ref(pict, s->next_pic_ptr->f)) < 0)
return ret;
- s->next_picture_ptr = NULL;
+ s->next_pic_ptr = NULL;
*got_frame = 1;
- } else if (s->skipped_last_frame && s->current_picture_ptr) {
+ } else if (s->skipped_last_frame && s->cur_pic_ptr) {
/* Output the last picture we decoded again if the stream ended with
* an NVOP */
- if ((ret = av_frame_ref(pict, s->current_picture_ptr->f)) < 0)
+ if ((ret = av_frame_ref(pict, s->cur_pic_ptr->f)) < 0)
return ret;
/* Copy props from the last input packet. Otherwise, props from the last
* returned picture would be reused */
if ((ret = ff_decode_frame_props(avctx, pict)) < 0)
return ret;
- s->current_picture_ptr = NULL;
+ s->cur_pic_ptr = NULL;
*got_frame = 1;
}
@@ -561,7 +561,7 @@ retry:
s->gob_index = H263_GOB_HEIGHT(s->height);
/* skip B-frames if we don't have reference frames */
- if (!s->last_picture_ptr &&
+ if (!s->last_pic_ptr &&
(s->pict_type == AV_PICTURE_TYPE_B || s->droppable))
return get_consumed_bytes(s, buf_size);
if ((avctx->skip_frame >= AVDISCARD_NONREF &&
@@ -647,21 +647,21 @@ frame_end:
if (!s->divx_packed && avctx->hwaccel)
ff_thread_finish_setup(avctx);
- av_assert1(s->current_picture.f->pict_type == s->current_picture_ptr->f->pict_type);
- av_assert1(s->current_picture.f->pict_type == s->pict_type);
+ av_assert1(s->cur_pic.f->pict_type == s->cur_pic_ptr->f->pict_type);
+ av_assert1(s->cur_pic.f->pict_type == s->pict_type);
if (s->pict_type == AV_PICTURE_TYPE_B || s->low_delay) {
- if ((ret = av_frame_ref(pict, s->current_picture_ptr->f)) < 0)
+ if ((ret = av_frame_ref(pict, s->cur_pic_ptr->f)) < 0)
return ret;
- ff_print_debug_info(s, s->current_picture_ptr, pict);
- ff_mpv_export_qp_table(s, pict, s->current_picture_ptr, FF_MPV_QSCALE_TYPE_MPEG1);
- } else if (s->last_picture_ptr) {
- if ((ret = av_frame_ref(pict, s->last_picture_ptr->f)) < 0)
+ ff_print_debug_info(s, s->cur_pic_ptr, pict);
+ ff_mpv_export_qp_table(s, pict, s->cur_pic_ptr, FF_MPV_QSCALE_TYPE_MPEG1);
+ } else if (s->last_pic_ptr) {
+ if ((ret = av_frame_ref(pict, s->last_pic_ptr->f)) < 0)
return ret;
- ff_print_debug_info(s, s->last_picture_ptr, pict);
- ff_mpv_export_qp_table(s, pict, s->last_picture_ptr, FF_MPV_QSCALE_TYPE_MPEG1);
+ ff_print_debug_info(s, s->last_pic_ptr, pict);
+ ff_mpv_export_qp_table(s, pict, s->last_pic_ptr, FF_MPV_QSCALE_TYPE_MPEG1);
}
- if (s->last_picture_ptr || s->low_delay) {
+ if (s->last_pic_ptr || s->low_delay) {
if ( pict->format == AV_PIX_FMT_YUV420P
&& (s->codec_tag == AV_RL32("GEOV") || s->codec_tag == AV_RL32("GEOX"))) {
for (int p = 0; p < 3; p++) {
diff --git a/libavcodec/ituh263dec.c b/libavcodec/ituh263dec.c
index aeeda1cc42..9358363ed8 100644
--- a/libavcodec/ituh263dec.c
+++ b/libavcodec/ituh263dec.c
@@ -357,20 +357,20 @@ static void preview_obmc(MpegEncContext *s){
do{
if (get_bits1(&s->gb)) {
/* skip mb */
- mot_val = s->current_picture.motion_val[0][s->block_index[0]];
+ mot_val = s->cur_pic.motion_val[0][s->block_index[0]];
mot_val[0 ]= mot_val[2 ]=
mot_val[0+stride]= mot_val[2+stride]= 0;
mot_val[1 ]= mot_val[3 ]=
mot_val[1+stride]= mot_val[3+stride]= 0;
- s->current_picture.mb_type[xy] = MB_TYPE_SKIP | MB_TYPE_16x16 | MB_TYPE_L0;
+ s->cur_pic.mb_type[xy] = MB_TYPE_SKIP | MB_TYPE_16x16 | MB_TYPE_L0;
goto end;
}
cbpc = get_vlc2(&s->gb, ff_h263_inter_MCBPC_vlc, INTER_MCBPC_VLC_BITS, 2);
}while(cbpc == 20);
if(cbpc & 4){
- s->current_picture.mb_type[xy] = MB_TYPE_INTRA;
+ s->cur_pic.mb_type[xy] = MB_TYPE_INTRA;
}else{
get_vlc2(&s->gb, ff_h263_cbpy_vlc, CBPY_VLC_BITS, 1);
if (cbpc & 8) {
@@ -382,7 +382,7 @@ static void preview_obmc(MpegEncContext *s){
}
if ((cbpc & 16) == 0) {
- s->current_picture.mb_type[xy] = MB_TYPE_16x16 | MB_TYPE_L0;
+ s->cur_pic.mb_type[xy] = MB_TYPE_16x16 | MB_TYPE_L0;
/* 16x16 motion prediction */
mot_val= ff_h263_pred_motion(s, 0, 0, &pred_x, &pred_y);
if (s->umvplus)
@@ -400,7 +400,7 @@ static void preview_obmc(MpegEncContext *s){
mot_val[1 ]= mot_val[3 ]=
mot_val[1+stride]= mot_val[3+stride]= my;
} else {
- s->current_picture.mb_type[xy] = MB_TYPE_8x8 | MB_TYPE_L0;
+ s->cur_pic.mb_type[xy] = MB_TYPE_8x8 | MB_TYPE_L0;
for(i=0;i<4;i++) {
mot_val = ff_h263_pred_motion(s, i, 0, &pred_x, &pred_y);
if (s->umvplus)
@@ -750,12 +750,12 @@ static inline void set_one_direct_mv(MpegEncContext *s, Picture *p, int i)
static int set_direct_mv(MpegEncContext *s)
{
const int mb_index = s->mb_x + s->mb_y * s->mb_stride;
- Picture *p = &s->next_picture;
+ Picture *p = &s->next_pic;
int colocated_mb_type = p->mb_type[mb_index];
int i;
if (s->codec_tag == AV_RL32("U263") && p->f->pict_type == AV_PICTURE_TYPE_I) {
- p = &s->last_picture;
+ p = &s->last_pic;
colocated_mb_type = p->mb_type[mb_index];
}
@@ -803,7 +803,7 @@ int ff_h263_decode_mb(MpegEncContext *s,
s->block_last_index[i] = -1;
s->mv_dir = MV_DIR_FORWARD;
s->mv_type = MV_TYPE_16X16;
- s->current_picture.mb_type[xy] = MB_TYPE_SKIP | MB_TYPE_16x16 | MB_TYPE_L0;
+ s->cur_pic.mb_type[xy] = MB_TYPE_SKIP | MB_TYPE_16x16 | MB_TYPE_L0;
s->mv[0][0][0] = 0;
s->mv[0][0][1] = 0;
s->mb_skipped = !(s->obmc | s->loop_filter);
@@ -841,7 +841,7 @@ int ff_h263_decode_mb(MpegEncContext *s,
s->mv_dir = MV_DIR_FORWARD;
if ((cbpc & 16) == 0) {
- s->current_picture.mb_type[xy] = MB_TYPE_16x16 | MB_TYPE_L0;
+ s->cur_pic.mb_type[xy] = MB_TYPE_16x16 | MB_TYPE_L0;
/* 16x16 motion prediction */
s->mv_type = MV_TYPE_16X16;
ff_h263_pred_motion(s, 0, 0, &pred_x, &pred_y);
@@ -866,7 +866,7 @@ int ff_h263_decode_mb(MpegEncContext *s,
if (s->umvplus && (mx - pred_x) == 1 && (my - pred_y) == 1)
skip_bits1(&s->gb); /* Bit stuffing to prevent PSC */
} else {
- s->current_picture.mb_type[xy] = MB_TYPE_8x8 | MB_TYPE_L0;
+ s->cur_pic.mb_type[xy] = MB_TYPE_8x8 | MB_TYPE_L0;
s->mv_type = MV_TYPE_8X8;
for(i=0;i<4;i++) {
mot_val = ff_h263_pred_motion(s, i, 0, &pred_x, &pred_y);
@@ -894,8 +894,8 @@ int ff_h263_decode_mb(MpegEncContext *s,
} else if(s->pict_type==AV_PICTURE_TYPE_B) {
int mb_type;
const int stride= s->b8_stride;
- int16_t *mot_val0 = s->current_picture.motion_val[0][2 * (s->mb_x + s->mb_y * stride)];
- int16_t *mot_val1 = s->current_picture.motion_val[1][2 * (s->mb_x + s->mb_y * stride)];
+ int16_t *mot_val0 = s->cur_pic.motion_val[0][2 * (s->mb_x + s->mb_y * stride)];
+ int16_t *mot_val1 = s->cur_pic.motion_val[1][2 * (s->mb_x + s->mb_y * stride)];
// const int mv_xy= s->mb_x + 1 + s->mb_y * s->mb_stride;
//FIXME ugly
@@ -1007,7 +1007,7 @@ int ff_h263_decode_mb(MpegEncContext *s,
}
}
- s->current_picture.mb_type[xy] = mb_type;
+ s->cur_pic.mb_type[xy] = mb_type;
} else { /* I-Frame */
do{
cbpc = get_vlc2(&s->gb, ff_h263_intra_MCBPC_vlc, INTRA_MCBPC_VLC_BITS, 2);
@@ -1022,11 +1022,11 @@ int ff_h263_decode_mb(MpegEncContext *s,
dquant = cbpc & 4;
s->mb_intra = 1;
intra:
- s->current_picture.mb_type[xy] = MB_TYPE_INTRA;
+ s->cur_pic.mb_type[xy] = MB_TYPE_INTRA;
if (s->h263_aic) {
s->ac_pred = get_bits1(&s->gb);
if(s->ac_pred){
- s->current_picture.mb_type[xy] = MB_TYPE_INTRA | MB_TYPE_ACPRED;
+ s->cur_pic.mb_type[xy] = MB_TYPE_INTRA | MB_TYPE_ACPRED;
s->h263_aic_dir = get_bits1(&s->gb);
}
diff --git a/libavcodec/ituh263enc.c b/libavcodec/ituh263enc.c
index e27bd258d7..bcb230871e 100644
--- a/libavcodec/ituh263enc.c
+++ b/libavcodec/ituh263enc.c
@@ -271,7 +271,7 @@ void ff_h263_encode_gob_header(MpegEncContext * s, int mb_line)
*/
void ff_clean_h263_qscales(MpegEncContext *s){
int i;
- int8_t * const qscale_table = s->current_picture.qscale_table;
+ int8_t * const qscale_table = s->cur_pic.qscale_table;
ff_init_qscale_tab(s);
@@ -565,8 +565,8 @@ void ff_h263_encode_mb(MpegEncContext * s,
/* motion vectors: 8x8 mode*/
ff_h263_pred_motion(s, i, 0, &pred_x, &pred_y);
- motion_x = s->current_picture.motion_val[0][s->block_index[i]][0];
- motion_y = s->current_picture.motion_val[0][s->block_index[i]][1];
+ motion_x = s->cur_pic.motion_val[0][s->block_index[i]][0];
+ motion_y = s->cur_pic.motion_val[0][s->block_index[i]][1];
if (!s->umvplus) {
ff_h263_encode_motion_vector(s, motion_x - pred_x,
motion_y - pred_y, 1);
@@ -692,15 +692,15 @@ void ff_h263_update_mb(MpegEncContext *s)
{
const int mb_xy = s->mb_y * s->mb_stride + s->mb_x;
- if (s->current_picture.mbskip_table)
- s->current_picture.mbskip_table[mb_xy] = s->mb_skipped;
+ if (s->cur_pic.mbskip_table)
+ s->cur_pic.mbskip_table[mb_xy] = s->mb_skipped;
if (s->mv_type == MV_TYPE_8X8)
- s->current_picture.mb_type[mb_xy] = MB_TYPE_L0 | MB_TYPE_8x8;
+ s->cur_pic.mb_type[mb_xy] = MB_TYPE_L0 | MB_TYPE_8x8;
else if(s->mb_intra)
- s->current_picture.mb_type[mb_xy] = MB_TYPE_INTRA;
+ s->cur_pic.mb_type[mb_xy] = MB_TYPE_INTRA;
else
- s->current_picture.mb_type[mb_xy] = MB_TYPE_L0 | MB_TYPE_16x16;
+ s->cur_pic.mb_type[mb_xy] = MB_TYPE_L0 | MB_TYPE_16x16;
ff_h263_update_motion_val(s);
}
diff --git a/libavcodec/motion_est.c b/libavcodec/motion_est.c
index fb569ede8a..b2644b5328 100644
--- a/libavcodec/motion_est.c
+++ b/libavcodec/motion_est.c
@@ -510,16 +510,16 @@ static inline void set_p_mv_tables(MpegEncContext * s, int mx, int my, int mv4)
if(mv4){
int mot_xy= s->block_index[0];
- s->current_picture.motion_val[0][mot_xy ][0] = mx;
- s->current_picture.motion_val[0][mot_xy ][1] = my;
- s->current_picture.motion_val[0][mot_xy + 1][0] = mx;
- s->current_picture.motion_val[0][mot_xy + 1][1] = my;
+ s->cur_pic.motion_val[0][mot_xy ][0] = mx;
+ s->cur_pic.motion_val[0][mot_xy ][1] = my;
+ s->cur_pic.motion_val[0][mot_xy + 1][0] = mx;
+ s->cur_pic.motion_val[0][mot_xy + 1][1] = my;
mot_xy += s->b8_stride;
- s->current_picture.motion_val[0][mot_xy ][0] = mx;
- s->current_picture.motion_val[0][mot_xy ][1] = my;
- s->current_picture.motion_val[0][mot_xy + 1][0] = mx;
- s->current_picture.motion_val[0][mot_xy + 1][1] = my;
+ s->cur_pic.motion_val[0][mot_xy ][0] = mx;
+ s->cur_pic.motion_val[0][mot_xy ][1] = my;
+ s->cur_pic.motion_val[0][mot_xy + 1][0] = mx;
+ s->cur_pic.motion_val[0][mot_xy + 1][1] = my;
}
}
@@ -601,8 +601,8 @@ static inline int h263_mv4_search(MpegEncContext *s, int mx, int my, int shift)
c->ymax = - 16*s->mb_y + s->height - 8*(block>>1);
}
- P_LEFT[0] = s->current_picture.motion_val[0][mot_xy - 1][0];
- P_LEFT[1] = s->current_picture.motion_val[0][mot_xy - 1][1];
+ P_LEFT[0] = s->cur_pic.motion_val[0][mot_xy - 1][0];
+ P_LEFT[1] = s->cur_pic.motion_val[0][mot_xy - 1][1];
if (P_LEFT[0] > c->xmax * (1 << shift)) P_LEFT[0] = c->xmax * (1 << shift);
@@ -611,10 +611,10 @@ static inline int h263_mv4_search(MpegEncContext *s, int mx, int my, int shift)
c->pred_x= pred_x4= P_LEFT[0];
c->pred_y= pred_y4= P_LEFT[1];
} else {
- P_TOP[0] = s->current_picture.motion_val[0][mot_xy - mot_stride ][0];
- P_TOP[1] = s->current_picture.motion_val[0][mot_xy - mot_stride ][1];
- P_TOPRIGHT[0] = s->current_picture.motion_val[0][mot_xy - mot_stride + off[block]][0];
- P_TOPRIGHT[1] = s->current_picture.motion_val[0][mot_xy - mot_stride + off[block]][1];
+ P_TOP[0] = s->cur_pic.motion_val[0][mot_xy - mot_stride ][0];
+ P_TOP[1] = s->cur_pic.motion_val[0][mot_xy - mot_stride ][1];
+ P_TOPRIGHT[0] = s->cur_pic.motion_val[0][mot_xy - mot_stride + off[block]][0];
+ P_TOPRIGHT[1] = s->cur_pic.motion_val[0][mot_xy - mot_stride + off[block]][1];
if (P_TOP[1] > c->ymax * (1 << shift)) P_TOP[1] = c->ymax * (1 << shift);
if (P_TOPRIGHT[0] < c->xmin * (1 << shift)) P_TOPRIGHT[0] = c->xmin * (1 << shift);
if (P_TOPRIGHT[0] > c->xmax * (1 << shift)) P_TOPRIGHT[0] = c->xmax * (1 << shift);
@@ -675,8 +675,8 @@ static inline int h263_mv4_search(MpegEncContext *s, int mx, int my, int shift)
my4_sum+= my4;
}
- s->current_picture.motion_val[0][s->block_index[block]][0] = mx4;
- s->current_picture.motion_val[0][s->block_index[block]][1] = my4;
+ s->cur_pic.motion_val[0][s->block_index[block]][0] = mx4;
+ s->cur_pic.motion_val[0][s->block_index[block]][1] = my4;
if(mx4 != mx || my4 != my) same=0;
}
@@ -686,7 +686,7 @@ static inline int h263_mv4_search(MpegEncContext *s, int mx, int my, int shift)
if (s->mecc.me_sub_cmp[0] != s->mecc.mb_cmp[0]) {
dmin_sum += s->mecc.mb_cmp[0](s,
- s->new_picture->data[0] +
+ s->new_pic->data[0] +
s->mb_x * 16 + s->mb_y * 16 * stride,
c->scratchpad, stride, 16);
}
@@ -703,15 +703,15 @@ static inline int h263_mv4_search(MpegEncContext *s, int mx, int my, int shift)
offset= (s->mb_x*8 + (mx>>1)) + (s->mb_y*8 + (my>>1))*s->uvlinesize;
if(s->no_rounding){
- s->hdsp.put_no_rnd_pixels_tab[1][dxy](c->scratchpad , s->last_picture.f->data[1] + offset, s->uvlinesize, 8);
- s->hdsp.put_no_rnd_pixels_tab[1][dxy](c->scratchpad + 8, s->last_picture.f->data[2] + offset, s->uvlinesize, 8);
+ s->hdsp.put_no_rnd_pixels_tab[1][dxy](c->scratchpad , s->last_pic.f->data[1] + offset, s->uvlinesize, 8);
+ s->hdsp.put_no_rnd_pixels_tab[1][dxy](c->scratchpad + 8, s->last_pic.f->data[2] + offset, s->uvlinesize, 8);
}else{
- s->hdsp.put_pixels_tab [1][dxy](c->scratchpad , s->last_picture.f->data[1] + offset, s->uvlinesize, 8);
- s->hdsp.put_pixels_tab [1][dxy](c->scratchpad + 8, s->last_picture.f->data[2] + offset, s->uvlinesize, 8);
+ s->hdsp.put_pixels_tab [1][dxy](c->scratchpad , s->last_pic.f->data[1] + offset, s->uvlinesize, 8);
+ s->hdsp.put_pixels_tab [1][dxy](c->scratchpad + 8, s->last_pic.f->data[2] + offset, s->uvlinesize, 8);
}
- dmin_sum += s->mecc.mb_cmp[1](s, s->new_picture->data[1] + s->mb_x * 8 + s->mb_y * 8 * s->uvlinesize, c->scratchpad, s->uvlinesize, 8);
- dmin_sum += s->mecc.mb_cmp[1](s, s->new_picture->data[2] + s->mb_x * 8 + s->mb_y * 8 * s->uvlinesize, c->scratchpad + 8, s->uvlinesize, 8);
+ dmin_sum += s->mecc.mb_cmp[1](s, s->new_pic->data[1] + s->mb_x * 8 + s->mb_y * 8 * s->uvlinesize, c->scratchpad, s->uvlinesize, 8);
+ dmin_sum += s->mecc.mb_cmp[1](s, s->new_pic->data[2] + s->mb_x * 8 + s->mb_y * 8 * s->uvlinesize, c->scratchpad + 8, s->uvlinesize, 8);
}
c->pred_x= mx;
@@ -899,7 +899,7 @@ void ff_estimate_p_frame_motion(MpegEncContext * s,
const int shift= 1+s->quarter_sample;
int mb_type=0;
- init_ref(c, s->new_picture->data, s->last_picture.f->data, NULL, 16*mb_x, 16*mb_y, 0);
+ init_ref(c, s->new_pic->data, s->last_pic.f->data, NULL, 16*mb_x, 16*mb_y, 0);
av_assert0(s->quarter_sample==0 || s->quarter_sample==1);
av_assert0(s->linesize == c->stride);
@@ -927,17 +927,17 @@ void ff_estimate_p_frame_motion(MpegEncContext * s,
const int mot_stride = s->b8_stride;
const int mot_xy = s->block_index[0];
- P_LEFT[0] = s->current_picture.motion_val[0][mot_xy - 1][0];
- P_LEFT[1] = s->current_picture.motion_val[0][mot_xy - 1][1];
+ P_LEFT[0] = s->cur_pic.motion_val[0][mot_xy - 1][0];
+ P_LEFT[1] = s->cur_pic.motion_val[0][mot_xy - 1][1];
if (P_LEFT[0] > (c->xmax << shift))
P_LEFT[0] = c->xmax << shift;
if (!s->first_slice_line) {
- P_TOP[0] = s->current_picture.motion_val[0][mot_xy - mot_stride ][0];
- P_TOP[1] = s->current_picture.motion_val[0][mot_xy - mot_stride ][1];
- P_TOPRIGHT[0] = s->current_picture.motion_val[0][mot_xy - mot_stride + 2][0];
- P_TOPRIGHT[1] = s->current_picture.motion_val[0][mot_xy - mot_stride + 2][1];
+ P_TOP[0] = s->cur_pic.motion_val[0][mot_xy - mot_stride ][0];
+ P_TOP[1] = s->cur_pic.motion_val[0][mot_xy - mot_stride ][1];
+ P_TOPRIGHT[0] = s->cur_pic.motion_val[0][mot_xy - mot_stride + 2][0];
+ P_TOPRIGHT[1] = s->cur_pic.motion_val[0][mot_xy - mot_stride + 2][1];
if (P_TOP[1] > (c->ymax << shift))
P_TOP[1] = c->ymax << shift;
if (P_TOPRIGHT[0] < (c->xmin * (1 << shift)))
@@ -1048,9 +1048,9 @@ void ff_estimate_p_frame_motion(MpegEncContext * s,
if(intra_score < dmin){
mb_type= CANDIDATE_MB_TYPE_INTRA;
- s->current_picture.mb_type[mb_y*s->mb_stride + mb_x] = CANDIDATE_MB_TYPE_INTRA; //FIXME cleanup
+ s->cur_pic.mb_type[mb_y*s->mb_stride + mb_x] = CANDIDATE_MB_TYPE_INTRA; //FIXME cleanup
}else
- s->current_picture.mb_type[mb_y*s->mb_stride + mb_x] = 0;
+ s->cur_pic.mb_type[mb_y*s->mb_stride + mb_x] = 0;
{
int p_score= FFMIN(vard, varc-500+(s->lambda2>>FF_LAMBDA_SHIFT)*100);
@@ -1070,7 +1070,7 @@ int ff_pre_estimate_p_frame_motion(MpegEncContext * s,
int P[10][2];
const int shift= 1+s->quarter_sample;
const int xy= mb_x + mb_y*s->mb_stride;
- init_ref(c, s->new_picture->data, s->last_picture.f->data, NULL, 16*mb_x, 16*mb_y, 0);
+ init_ref(c, s->new_pic->data, s->last_pic.f->data, NULL, 16*mb_x, 16*mb_y, 0);
av_assert0(s->quarter_sample==0 || s->quarter_sample==1);
@@ -1403,7 +1403,7 @@ static inline int direct_search(MpegEncContext * s, int mb_x, int mb_y)
ymin= xmin=(-32)>>shift;
ymax= xmax= 31>>shift;
- if (IS_8X8(s->next_picture.mb_type[mot_xy])) {
+ if (IS_8X8(s->next_pic.mb_type[mot_xy])) {
s->mv_type= MV_TYPE_8X8;
}else{
s->mv_type= MV_TYPE_16X16;
@@ -1413,8 +1413,8 @@ static inline int direct_search(MpegEncContext * s, int mb_x, int mb_y)
int index= s->block_index[i];
int min, max;
- c->co_located_mv[i][0] = s->next_picture.motion_val[0][index][0];
- c->co_located_mv[i][1] = s->next_picture.motion_val[0][index][1];
+ c->co_located_mv[i][0] = s->next_pic.motion_val[0][index][0];
+ c->co_located_mv[i][1] = s->next_pic.motion_val[0][index][1];
c->direct_basis_mv[i][0]= c->co_located_mv[i][0]*time_pb/time_pp + ((i& 1)<<(shift+3));
c->direct_basis_mv[i][1]= c->co_located_mv[i][1]*time_pb/time_pp + ((i>>1)<<(shift+3));
// c->direct_basis_mv[1][i][0]= c->co_located_mv[i][0]*(time_pb - time_pp)/time_pp + ((i &1)<<(shift+3);
@@ -1495,14 +1495,14 @@ void ff_estimate_b_frame_motion(MpegEncContext * s,
int fmin, bmin, dmin, fbmin, bimin, fimin;
int type=0;
const int xy = mb_y*s->mb_stride + mb_x;
- init_ref(c, s->new_picture->data, s->last_picture.f->data,
- s->next_picture.f->data, 16 * mb_x, 16 * mb_y, 2);
+ init_ref(c, s->new_pic->data, s->last_pic.f->data,
+ s->next_pic.f->data, 16 * mb_x, 16 * mb_y, 2);
get_limits(s, 16*mb_x, 16*mb_y);
c->skip=0;
- if (s->codec_id == AV_CODEC_ID_MPEG4 && s->next_picture.mbskip_table[xy]) {
+ if (s->codec_id == AV_CODEC_ID_MPEG4 && s->next_pic.mbskip_table[xy]) {
int score= direct_search(s, mb_x, mb_y); //FIXME just check 0,0
score= ((unsigned)(score*score + 128*256))>>16;
@@ -1681,14 +1681,14 @@ void ff_fix_long_p_mvs(MpegEncContext * s, int type)
int block;
for(block=0; block<4; block++){
int off= (block& 1) + (block>>1)*wrap;
- int mx = s->current_picture.motion_val[0][ xy + off ][0];
- int my = s->current_picture.motion_val[0][ xy + off ][1];
+ int mx = s->cur_pic.motion_val[0][ xy + off ][0];
+ int my = s->cur_pic.motion_val[0][ xy + off ][1];
if( mx >=range || mx <-range
|| my >=range || my <-range){
s->mb_type[i] &= ~CANDIDATE_MB_TYPE_INTER4V;
s->mb_type[i] |= type;
- s->current_picture.mb_type[i] = type;
+ s->cur_pic.mb_type[i] = type;
}
}
}
diff --git a/libavcodec/mpeg12dec.c b/libavcodec/mpeg12dec.c
index 9940ff898c..4aba5651a6 100644
--- a/libavcodec/mpeg12dec.c
+++ b/libavcodec/mpeg12dec.c
@@ -437,21 +437,21 @@ static int mpeg_decode_mb(MpegEncContext *s, int16_t block[12][64])
if (s->mb_skip_run-- != 0) {
if (s->pict_type == AV_PICTURE_TYPE_P) {
s->mb_skipped = 1;
- s->current_picture.mb_type[s->mb_x + s->mb_y * s->mb_stride] =
+ s->cur_pic.mb_type[s->mb_x + s->mb_y * s->mb_stride] =
MB_TYPE_SKIP | MB_TYPE_L0 | MB_TYPE_16x16;
} else {
int mb_type;
if (s->mb_x)
- mb_type = s->current_picture.mb_type[s->mb_x + s->mb_y * s->mb_stride - 1];
+ mb_type = s->cur_pic.mb_type[s->mb_x + s->mb_y * s->mb_stride - 1];
else
// FIXME not sure if this is allowed in MPEG at all
- mb_type = s->current_picture.mb_type[s->mb_width + (s->mb_y - 1) * s->mb_stride - 1];
+ mb_type = s->cur_pic.mb_type[s->mb_width + (s->mb_y - 1) * s->mb_stride - 1];
if (IS_INTRA(mb_type)) {
av_log(s->avctx, AV_LOG_ERROR, "skip with previntra\n");
return AVERROR_INVALIDDATA;
}
- s->current_picture.mb_type[s->mb_x + s->mb_y * s->mb_stride] =
+ s->cur_pic.mb_type[s->mb_x + s->mb_y * s->mb_stride] =
mb_type | MB_TYPE_SKIP;
if ((s->mv[0][0][0] | s->mv[0][0][1] | s->mv[1][0][0] | s->mv[1][0][1]) == 0)
@@ -784,7 +784,7 @@ static int mpeg_decode_mb(MpegEncContext *s, int16_t block[12][64])
}
}
- s->current_picture.mb_type[s->mb_x + s->mb_y * s->mb_stride] = mb_type;
+ s->cur_pic.mb_type[s->mb_x + s->mb_y * s->mb_stride] = mb_type;
return 0;
}
@@ -1292,36 +1292,36 @@ static int mpeg_field_start(MpegEncContext *s, const uint8_t *buf, int buf_size)
return ret;
if (s->picture_structure != PICT_FRAME) {
- s->current_picture_ptr->f->flags |= AV_FRAME_FLAG_TOP_FIELD_FIRST *
- (s->picture_structure == PICT_TOP_FIELD);
+ s->cur_pic_ptr->f->flags |= AV_FRAME_FLAG_TOP_FIELD_FIRST *
+ (s->picture_structure == PICT_TOP_FIELD);
for (int i = 0; i < 3; i++) {
if (s->picture_structure == PICT_BOTTOM_FIELD) {
- s->current_picture.f->data[i] = FF_PTR_ADD(s->current_picture.f->data[i],
- s->current_picture.f->linesize[i]);
+ s->cur_pic.f->data[i] = FF_PTR_ADD(s->cur_pic.f->data[i],
+ s->cur_pic.f->linesize[i]);
}
- s->current_picture.f->linesize[i] *= 2;
- s->last_picture.f->linesize[i] *= 2;
- s->next_picture.f->linesize[i] *= 2;
+ s->cur_pic.f->linesize[i] *= 2;
+ s->last_pic.f->linesize[i] *= 2;
+ s->next_pic.f->linesize[i] *= 2;
}
}
ff_mpeg_er_frame_start(s);
/* first check if we must repeat the frame */
- s->current_picture_ptr->f->repeat_pict = 0;
+ s->cur_pic_ptr->f->repeat_pict = 0;
if (s->repeat_first_field) {
if (s->progressive_sequence) {
if (s->top_field_first)
- s->current_picture_ptr->f->repeat_pict = 4;
+ s->cur_pic_ptr->f->repeat_pict = 4;
else
- s->current_picture_ptr->f->repeat_pict = 2;
+ s->cur_pic_ptr->f->repeat_pict = 2;
} else if (s->progressive_frame) {
- s->current_picture_ptr->f->repeat_pict = 1;
+ s->cur_pic_ptr->f->repeat_pict = 1;
}
}
- ret = ff_frame_new_side_data(s->avctx, s->current_picture_ptr->f,
+ ret = ff_frame_new_side_data(s->avctx, s->cur_pic_ptr->f,
AV_FRAME_DATA_PANSCAN, sizeof(s1->pan_scan),
&pan_scan);
if (ret < 0)
@@ -1331,14 +1331,14 @@ static int mpeg_field_start(MpegEncContext *s, const uint8_t *buf, int buf_size)
if (s1->a53_buf_ref) {
ret = ff_frame_new_side_data_from_buf(
- s->avctx, s->current_picture_ptr->f, AV_FRAME_DATA_A53_CC,
+ s->avctx, s->cur_pic_ptr->f, AV_FRAME_DATA_A53_CC,
&s1->a53_buf_ref, NULL);
if (ret < 0)
return ret;
}
if (s1->has_stereo3d) {
- AVStereo3D *stereo = av_stereo3d_create_side_data(s->current_picture_ptr->f);
+ AVStereo3D *stereo = av_stereo3d_create_side_data(s->cur_pic_ptr->f);
if (!stereo)
return AVERROR(ENOMEM);
@@ -1348,7 +1348,7 @@ static int mpeg_field_start(MpegEncContext *s, const uint8_t *buf, int buf_size)
if (s1->has_afd) {
AVFrameSideData *sd;
- ret = ff_frame_new_side_data(s->avctx, s->current_picture_ptr->f,
+ ret = ff_frame_new_side_data(s->avctx, s->cur_pic_ptr->f,
AV_FRAME_DATA_AFD, 1, &sd);
if (ret < 0)
return ret;
@@ -1360,7 +1360,7 @@ static int mpeg_field_start(MpegEncContext *s, const uint8_t *buf, int buf_size)
if (HAVE_THREADS && (avctx->active_thread_type & FF_THREAD_FRAME))
ff_thread_finish_setup(avctx);
} else { // second field
- if (!s->current_picture_ptr) {
+ if (!s->cur_pic_ptr) {
av_log(s->avctx, AV_LOG_ERROR, "first field missing\n");
return AVERROR_INVALIDDATA;
}
@@ -1377,10 +1377,10 @@ static int mpeg_field_start(MpegEncContext *s, const uint8_t *buf, int buf_size)
return ret;
for (int i = 0; i < 3; i++) {
- s->current_picture.f->data[i] = s->current_picture_ptr->f->data[i];
+ s->cur_pic.f->data[i] = s->cur_pic_ptr->f->data[i];
if (s->picture_structure == PICT_BOTTOM_FIELD)
- s->current_picture.f->data[i] +=
- s->current_picture_ptr->f->linesize[i];
+ s->cur_pic.f->data[i] +=
+ s->cur_pic_ptr->f->linesize[i];
}
}
@@ -1507,7 +1507,7 @@ static int mpeg_decode_slice(MpegEncContext *s, int mb_y,
return ret;
// Note motion_val is normally NULL unless we want to extract the MVs.
- if (s->current_picture.motion_val[0]) {
+ if (s->cur_pic.motion_val[0]) {
const int wrap = s->b8_stride;
int xy = s->mb_x * 2 + s->mb_y * 2 * wrap;
int b8_xy = 4 * (s->mb_x + s->mb_y * s->mb_stride);
@@ -1527,12 +1527,12 @@ static int mpeg_decode_slice(MpegEncContext *s, int mb_y,
motion_y = s->mv[dir][i][1];
}
- s->current_picture.motion_val[dir][xy][0] = motion_x;
- s->current_picture.motion_val[dir][xy][1] = motion_y;
- s->current_picture.motion_val[dir][xy + 1][0] = motion_x;
- s->current_picture.motion_val[dir][xy + 1][1] = motion_y;
- s->current_picture.ref_index [dir][b8_xy] =
- s->current_picture.ref_index [dir][b8_xy + 1] = s->field_select[dir][i];
+ s->cur_pic.motion_val[dir][xy][0] = motion_x;
+ s->cur_pic.motion_val[dir][xy][1] = motion_y;
+ s->cur_pic.motion_val[dir][xy + 1][0] = motion_x;
+ s->cur_pic.motion_val[dir][xy + 1][1] = motion_y;
+ s->cur_pic.ref_index [dir][b8_xy] =
+ s->cur_pic.ref_index [dir][b8_xy + 1] = s->field_select[dir][i];
av_assert2(s->field_select[dir][i] == 0 ||
s->field_select[dir][i] == 1);
}
@@ -1735,7 +1735,7 @@ static int slice_end(AVCodecContext *avctx, AVFrame *pict, int *got_output)
Mpeg1Context *s1 = avctx->priv_data;
MpegEncContext *s = &s1->mpeg_enc_ctx;
- if (!s->context_initialized || !s->current_picture_ptr)
+ if (!s->context_initialized || !s->cur_pic_ptr)
return 0;
if (s->avctx->hwaccel) {
@@ -1756,20 +1756,20 @@ static int slice_end(AVCodecContext *avctx, AVFrame *pict, int *got_output)
ff_mpv_frame_end(s);
if (s->pict_type == AV_PICTURE_TYPE_B || s->low_delay) {
- int ret = av_frame_ref(pict, s->current_picture_ptr->f);
+ int ret = av_frame_ref(pict, s->cur_pic_ptr->f);
if (ret < 0)
return ret;
- ff_print_debug_info(s, s->current_picture_ptr, pict);
- ff_mpv_export_qp_table(s, pict, s->current_picture_ptr, FF_MPV_QSCALE_TYPE_MPEG2);
+ ff_print_debug_info(s, s->cur_pic_ptr, pict);
+ ff_mpv_export_qp_table(s, pict, s->cur_pic_ptr, FF_MPV_QSCALE_TYPE_MPEG2);
*got_output = 1;
} else {
/* latency of 1 frame for I- and P-frames */
- if (s->last_picture_ptr && !s->last_picture_ptr->dummy) {
- int ret = av_frame_ref(pict, s->last_picture_ptr->f);
+ if (s->last_pic_ptr && !s->last_pic_ptr->dummy) {
+ int ret = av_frame_ref(pict, s->last_pic_ptr->f);
if (ret < 0)
return ret;
- ff_print_debug_info(s, s->last_picture_ptr, pict);
- ff_mpv_export_qp_table(s, pict, s->last_picture_ptr, FF_MPV_QSCALE_TYPE_MPEG2);
+ ff_print_debug_info(s, s->last_pic_ptr, pict);
+ ff_mpv_export_qp_table(s, pict, s->last_pic_ptr, FF_MPV_QSCALE_TYPE_MPEG2);
*got_output = 1;
}
}
@@ -2405,7 +2405,7 @@ static int decode_chunks(AVCodecContext *avctx, AVFrame *picture,
return AVERROR_INVALIDDATA;
}
- if (!s2->last_picture_ptr) {
+ if (!s2->last_pic_ptr) {
/* Skip B-frames if we do not have reference frames and
* GOP is not closed. */
if (s2->pict_type == AV_PICTURE_TYPE_B) {
@@ -2419,7 +2419,7 @@ static int decode_chunks(AVCodecContext *avctx, AVFrame *picture,
}
if (s2->pict_type == AV_PICTURE_TYPE_I || (s2->avctx->flags2 & AV_CODEC_FLAG2_SHOW_ALL))
s->sync = 1;
- if (!s2->next_picture_ptr) {
+ if (!s2->next_pic_ptr) {
/* Skip P-frames if we do not have a reference frame or
* we have an invalid header. */
if (s2->pict_type == AV_PICTURE_TYPE_P && !s->sync) {
@@ -2460,7 +2460,7 @@ static int decode_chunks(AVCodecContext *avctx, AVFrame *picture,
if ((ret = mpeg_field_start(s2, buf, buf_size)) < 0)
return ret;
}
- if (!s2->current_picture_ptr) {
+ if (!s2->cur_pic_ptr) {
av_log(avctx, AV_LOG_ERROR,
"current_picture not initialized\n");
return AVERROR_INVALIDDATA;
@@ -2524,12 +2524,12 @@ static int mpeg_decode_frame(AVCodecContext *avctx, AVFrame *picture,
if (buf_size == 0 || (buf_size == 4 && AV_RB32(buf) == SEQ_END_CODE)) {
/* special case for last picture */
- if (s2->low_delay == 0 && s2->next_picture_ptr) {
- int ret = av_frame_ref(picture, s2->next_picture_ptr->f);
+ if (s2->low_delay == 0 && s2->next_pic_ptr) {
+ int ret = av_frame_ref(picture, s2->next_pic_ptr->f);
if (ret < 0)
return ret;
- s2->next_picture_ptr = NULL;
+ s2->next_pic_ptr = NULL;
*got_output = 1;
}
@@ -2552,14 +2552,14 @@ static int mpeg_decode_frame(AVCodecContext *avctx, AVFrame *picture,
}
s->extradata_decoded = 1;
if (ret < 0 && (avctx->err_recognition & AV_EF_EXPLODE)) {
- s2->current_picture_ptr = NULL;
+ s2->cur_pic_ptr = NULL;
return ret;
}
}
ret = decode_chunks(avctx, picture, got_output, buf, buf_size);
if (ret<0 || *got_output) {
- s2->current_picture_ptr = NULL;
+ s2->cur_pic_ptr = NULL;
if (s->timecode_frame_start != -1 && *got_output) {
char tcbuf[AV_TIMECODE_STR_SIZE];
diff --git a/libavcodec/mpeg12enc.c b/libavcodec/mpeg12enc.c
index fdb1b1e4a6..bd95451b68 100644
--- a/libavcodec/mpeg12enc.c
+++ b/libavcodec/mpeg12enc.c
@@ -290,7 +290,7 @@ static void mpeg1_encode_sequence_header(MpegEncContext *s)
AVRational aspect_ratio = s->avctx->sample_aspect_ratio;
int aspect_ratio_info;
- if (!(s->current_picture.f->flags & AV_FRAME_FLAG_KEY))
+ if (!(s->cur_pic.f->flags & AV_FRAME_FLAG_KEY))
return;
if (aspect_ratio.num == 0 || aspect_ratio.den == 0)
@@ -382,7 +382,7 @@ static void mpeg1_encode_sequence_header(MpegEncContext *s)
put_bits(&s->pb, 2, mpeg12->frame_rate_ext.num-1); // frame_rate_ext_n
put_bits(&s->pb, 5, mpeg12->frame_rate_ext.den-1); // frame_rate_ext_d
- side_data = av_frame_get_side_data(s->current_picture_ptr->f, AV_FRAME_DATA_PANSCAN);
+ side_data = av_frame_get_side_data(s->cur_pic_ptr->f, AV_FRAME_DATA_PANSCAN);
if (side_data) {
const AVPanScan *pan_scan = (AVPanScan *)side_data->data;
if (pan_scan->width && pan_scan->height) {
@@ -419,10 +419,10 @@ static void mpeg1_encode_sequence_header(MpegEncContext *s)
/* time code: we must convert from the real frame rate to a
* fake MPEG frame rate in case of low frame rate */
fps = (framerate.num + framerate.den / 2) / framerate.den;
- time_code = s->current_picture_ptr->coded_picture_number +
+ time_code = s->cur_pic_ptr->coded_picture_number +
mpeg12->timecode_frame_start;
- mpeg12->gop_picture_number = s->current_picture_ptr->coded_picture_number;
+ mpeg12->gop_picture_number = s->cur_pic_ptr->coded_picture_number;
av_assert0(mpeg12->drop_frame_timecode == !!(mpeg12->tc.flags & AV_TIMECODE_FLAG_DROPFRAME));
if (mpeg12->drop_frame_timecode)
@@ -530,7 +530,7 @@ void ff_mpeg1_encode_picture_header(MpegEncContext *s)
if (s->progressive_sequence)
put_bits(&s->pb, 1, 0); /* no repeat */
else
- put_bits(&s->pb, 1, !!(s->current_picture_ptr->f->flags & AV_FRAME_FLAG_TOP_FIELD_FIRST));
+ put_bits(&s->pb, 1, !!(s->cur_pic_ptr->f->flags & AV_FRAME_FLAG_TOP_FIELD_FIRST));
/* XXX: optimize the generation of this flag with entropy measures */
s->frame_pred_frame_dct = s->progressive_sequence;
@@ -554,7 +554,7 @@ void ff_mpeg1_encode_picture_header(MpegEncContext *s)
for (i = 0; i < sizeof(svcd_scan_offset_placeholder); i++)
put_bits(&s->pb, 8, svcd_scan_offset_placeholder[i]);
}
- side_data = av_frame_get_side_data(s->current_picture_ptr->f,
+ side_data = av_frame_get_side_data(s->cur_pic_ptr->f,
AV_FRAME_DATA_STEREO3D);
if (side_data) {
AVStereo3D *stereo = (AVStereo3D *)side_data->data;
@@ -594,7 +594,7 @@ void ff_mpeg1_encode_picture_header(MpegEncContext *s)
}
if (CONFIG_MPEG2VIDEO_ENCODER && mpeg12->a53_cc) {
- side_data = av_frame_get_side_data(s->current_picture_ptr->f,
+ side_data = av_frame_get_side_data(s->cur_pic_ptr->f,
AV_FRAME_DATA_A53_CC);
if (side_data) {
if (side_data->size <= A53_MAX_CC_COUNT * 3 && side_data->size % 3 == 0) {
diff --git a/libavcodec/mpeg4video.c b/libavcodec/mpeg4video.c
index ffeaf822b2..7bbd412aa8 100644
--- a/libavcodec/mpeg4video.c
+++ b/libavcodec/mpeg4video.c
@@ -98,7 +98,7 @@ static inline void ff_mpeg4_set_one_direct_mv(MpegEncContext *s, int mx,
uint16_t time_pb = s->pb_time;
int p_mx, p_my;
- p_mx = s->next_picture.motion_val[0][xy][0];
+ p_mx = s->next_pic.motion_val[0][xy][0];
if ((unsigned)(p_mx + tab_bias) < tab_size) {
s->mv[0][i][0] = s->direct_scale_mv[0][p_mx + tab_bias] + mx;
s->mv[1][i][0] = mx ? s->mv[0][i][0] - p_mx
@@ -108,7 +108,7 @@ static inline void ff_mpeg4_set_one_direct_mv(MpegEncContext *s, int mx,
s->mv[1][i][0] = mx ? s->mv[0][i][0] - p_mx
: p_mx * (time_pb - time_pp) / time_pp;
}
- p_my = s->next_picture.motion_val[0][xy][1];
+ p_my = s->next_pic.motion_val[0][xy][1];
if ((unsigned)(p_my + tab_bias) < tab_size) {
s->mv[0][i][1] = s->direct_scale_mv[0][p_my + tab_bias] + my;
s->mv[1][i][1] = my ? s->mv[0][i][1] - p_my
@@ -129,7 +129,7 @@ static inline void ff_mpeg4_set_one_direct_mv(MpegEncContext *s, int mx,
int ff_mpeg4_set_direct_mv(MpegEncContext *s, int mx, int my)
{
const int mb_index = s->mb_x + s->mb_y * s->mb_stride;
- const int colocated_mb_type = s->next_picture.mb_type[mb_index];
+ const int colocated_mb_type = s->next_pic.mb_type[mb_index];
uint16_t time_pp;
uint16_t time_pb;
int i;
@@ -145,7 +145,7 @@ int ff_mpeg4_set_direct_mv(MpegEncContext *s, int mx, int my)
} else if (IS_INTERLACED(colocated_mb_type)) {
s->mv_type = MV_TYPE_FIELD;
for (i = 0; i < 2; i++) {
- int field_select = s->next_picture.ref_index[0][4 * mb_index + 2 * i];
+ int field_select = s->next_pic.ref_index[0][4 * mb_index + 2 * i];
s->field_select[0][i] = field_select;
s->field_select[1][i] = i;
if (s->top_field_first) {
diff --git a/libavcodec/mpeg4videodec.c b/libavcodec/mpeg4videodec.c
index 482bc48f89..8659ec0376 100644
--- a/libavcodec/mpeg4videodec.c
+++ b/libavcodec/mpeg4videodec.c
@@ -316,7 +316,7 @@ void ff_mpeg4_pred_ac(MpegEncContext *s, int16_t *block, int n, int dir)
{
int i;
int16_t *ac_val, *ac_val1;
- int8_t *const qscale_table = s->current_picture.qscale_table;
+ int8_t *const qscale_table = s->cur_pic.qscale_table;
/* find prediction */
ac_val = &s->ac_val[0][0][0] + s->block_index[n] * 16;
@@ -968,13 +968,13 @@ static int mpeg4_decode_partition_a(Mpeg4DecContext *ctx)
} while (cbpc == 8);
s->cbp_table[xy] = cbpc & 3;
- s->current_picture.mb_type[xy] = MB_TYPE_INTRA;
+ s->cur_pic.mb_type[xy] = MB_TYPE_INTRA;
s->mb_intra = 1;
if (cbpc & 4)
ff_set_qscale(s, s->qscale + quant_tab[get_bits(&s->gb, 2)]);
- s->current_picture.qscale_table[xy] = s->qscale;
+ s->cur_pic.qscale_table[xy] = s->qscale;
s->mbintra_table[xy] = 1;
for (i = 0; i < 6; i++) {
@@ -992,7 +992,7 @@ static int mpeg4_decode_partition_a(Mpeg4DecContext *ctx)
s->pred_dir_table[xy] = dir;
} else { /* P/S_TYPE */
int mx, my, pred_x, pred_y, bits;
- int16_t *const mot_val = s->current_picture.motion_val[0][s->block_index[0]];
+ int16_t *const mot_val = s->cur_pic.motion_val[0][s->block_index[0]];
const int stride = s->b8_stride * 2;
try_again:
@@ -1005,14 +1005,14 @@ try_again:
/* skip mb */
if (s->pict_type == AV_PICTURE_TYPE_S &&
ctx->vol_sprite_usage == GMC_SPRITE) {
- s->current_picture.mb_type[xy] = MB_TYPE_SKIP |
+ s->cur_pic.mb_type[xy] = MB_TYPE_SKIP |
MB_TYPE_16x16 |
MB_TYPE_GMC |
MB_TYPE_L0;
mx = get_amv(ctx, 0);
my = get_amv(ctx, 1);
} else {
- s->current_picture.mb_type[xy] = MB_TYPE_SKIP |
+ s->cur_pic.mb_type[xy] = MB_TYPE_SKIP |
MB_TYPE_16x16 |
MB_TYPE_L0;
mx = my = 0;
@@ -1045,7 +1045,7 @@ try_again:
s->mb_intra = ((cbpc & 4) != 0);
if (s->mb_intra) {
- s->current_picture.mb_type[xy] = MB_TYPE_INTRA;
+ s->cur_pic.mb_type[xy] = MB_TYPE_INTRA;
s->mbintra_table[xy] = 1;
mot_val[0] =
mot_val[2] =
@@ -1078,12 +1078,12 @@ try_again:
my = ff_h263_decode_motion(s, pred_y, s->f_code);
if (my >= 0xffff)
return AVERROR_INVALIDDATA;
- s->current_picture.mb_type[xy] = MB_TYPE_16x16 |
+ s->cur_pic.mb_type[xy] = MB_TYPE_16x16 |
MB_TYPE_L0;
} else {
mx = get_amv(ctx, 0);
my = get_amv(ctx, 1);
- s->current_picture.mb_type[xy] = MB_TYPE_16x16 |
+ s->cur_pic.mb_type[xy] = MB_TYPE_16x16 |
MB_TYPE_GMC |
MB_TYPE_L0;
}
@@ -1098,7 +1098,7 @@ try_again:
mot_val[3 + stride] = my;
} else {
int i;
- s->current_picture.mb_type[xy] = MB_TYPE_8x8 |
+ s->cur_pic.mb_type[xy] = MB_TYPE_8x8 |
MB_TYPE_L0;
for (i = 0; i < 4; i++) {
int16_t *mot_val = ff_h263_pred_motion(s, i, 0, &pred_x, &pred_y);
@@ -1154,9 +1154,9 @@ static int mpeg4_decode_partition_b(MpegEncContext *s, int mb_count)
}
s->cbp_table[xy] |= cbpy << 2;
- s->current_picture.mb_type[xy] |= ac_pred * MB_TYPE_ACPRED;
+ s->cur_pic.mb_type[xy] |= ac_pred * MB_TYPE_ACPRED;
} else { /* P || S_TYPE */
- if (IS_INTRA(s->current_picture.mb_type[xy])) {
+ if (IS_INTRA(s->cur_pic.mb_type[xy])) {
int i;
int dir = 0;
int ac_pred = get_bits1(&s->gb);
@@ -1170,7 +1170,7 @@ static int mpeg4_decode_partition_b(MpegEncContext *s, int mb_count)
if (s->cbp_table[xy] & 8)
ff_set_qscale(s, s->qscale + quant_tab[get_bits(&s->gb, 2)]);
- s->current_picture.qscale_table[xy] = s->qscale;
+ s->cur_pic.qscale_table[xy] = s->qscale;
for (i = 0; i < 6; i++) {
int dc_pred_dir;
@@ -1186,10 +1186,10 @@ static int mpeg4_decode_partition_b(MpegEncContext *s, int mb_count)
}
s->cbp_table[xy] &= 3; // remove dquant
s->cbp_table[xy] |= cbpy << 2;
- s->current_picture.mb_type[xy] |= ac_pred * MB_TYPE_ACPRED;
+ s->cur_pic.mb_type[xy] |= ac_pred * MB_TYPE_ACPRED;
s->pred_dir_table[xy] = dir;
- } else if (IS_SKIP(s->current_picture.mb_type[xy])) {
- s->current_picture.qscale_table[xy] = s->qscale;
+ } else if (IS_SKIP(s->cur_pic.mb_type[xy])) {
+ s->cur_pic.qscale_table[xy] = s->qscale;
s->cbp_table[xy] = 0;
} else {
int cbpy = get_vlc2(&s->gb, ff_h263_cbpy_vlc, CBPY_VLC_BITS, 1);
@@ -1202,7 +1202,7 @@ static int mpeg4_decode_partition_b(MpegEncContext *s, int mb_count)
if (s->cbp_table[xy] & 8)
ff_set_qscale(s, s->qscale + quant_tab[get_bits(&s->gb, 2)]);
- s->current_picture.qscale_table[xy] = s->qscale;
+ s->cur_pic.qscale_table[xy] = s->qscale;
s->cbp_table[xy] &= 3; // remove dquant
s->cbp_table[xy] |= (cbpy ^ 0xf) << 2;
@@ -1565,20 +1565,20 @@ static int mpeg4_decode_partitioned_mb(MpegEncContext *s, int16_t block[6][64])
av_assert2(s == (void*)ctx);
- mb_type = s->current_picture.mb_type[xy];
+ mb_type = s->cur_pic.mb_type[xy];
cbp = s->cbp_table[xy];
use_intra_dc_vlc = s->qscale < ctx->intra_dc_threshold;
- if (s->current_picture.qscale_table[xy] != s->qscale)
- ff_set_qscale(s, s->current_picture.qscale_table[xy]);
+ if (s->cur_pic.qscale_table[xy] != s->qscale)
+ ff_set_qscale(s, s->cur_pic.qscale_table[xy]);
if (s->pict_type == AV_PICTURE_TYPE_P ||
s->pict_type == AV_PICTURE_TYPE_S) {
int i;
for (i = 0; i < 4; i++) {
- s->mv[0][i][0] = s->current_picture.motion_val[0][s->block_index[i]][0];
- s->mv[0][i][1] = s->current_picture.motion_val[0][s->block_index[i]][1];
+ s->mv[0][i][0] = s->cur_pic.motion_val[0][s->block_index[i]][0];
+ s->mv[0][i][1] = s->cur_pic.motion_val[0][s->block_index[i]][1];
}
s->mb_intra = IS_INTRA(mb_type);
@@ -1592,14 +1592,14 @@ static int mpeg4_decode_partitioned_mb(MpegEncContext *s, int16_t block[6][64])
&& ctx->vol_sprite_usage == GMC_SPRITE) {
s->mcsel = 1;
s->mb_skipped = 0;
- s->current_picture.mbskip_table[xy] = 0;
+ s->cur_pic.mbskip_table[xy] = 0;
} else {
s->mcsel = 0;
s->mb_skipped = 1;
- s->current_picture.mbskip_table[xy] = 1;
+ s->cur_pic.mbskip_table[xy] = 1;
}
} else if (s->mb_intra) {
- s->ac_pred = IS_ACPRED(s->current_picture.mb_type[xy]);
+ s->ac_pred = IS_ACPRED(s->cur_pic.mb_type[xy]);
} else if (!s->mb_intra) {
// s->mcsel = 0; // FIXME do we need to init that?
@@ -1612,7 +1612,7 @@ static int mpeg4_decode_partitioned_mb(MpegEncContext *s, int16_t block[6][64])
}
} else { /* I-Frame */
s->mb_intra = 1;
- s->ac_pred = IS_ACPRED(s->current_picture.mb_type[xy]);
+ s->ac_pred = IS_ACPRED(s->cur_pic.mb_type[xy]);
}
if (!IS_SKIP(mb_type)) {
@@ -1671,23 +1671,23 @@ static int mpeg4_decode_mb(MpegEncContext *s, int16_t block[6][64])
s->mv_type = MV_TYPE_16X16;
if (s->pict_type == AV_PICTURE_TYPE_S &&
ctx->vol_sprite_usage == GMC_SPRITE) {
- s->current_picture.mb_type[xy] = MB_TYPE_SKIP |
+ s->cur_pic.mb_type[xy] = MB_TYPE_SKIP |
MB_TYPE_GMC |
MB_TYPE_16x16 |
MB_TYPE_L0;
s->mcsel = 1;
s->mv[0][0][0] = get_amv(ctx, 0);
s->mv[0][0][1] = get_amv(ctx, 1);
- s->current_picture.mbskip_table[xy] = 0;
+ s->cur_pic.mbskip_table[xy] = 0;
s->mb_skipped = 0;
} else {
- s->current_picture.mb_type[xy] = MB_TYPE_SKIP |
+ s->cur_pic.mb_type[xy] = MB_TYPE_SKIP |
MB_TYPE_16x16 |
MB_TYPE_L0;
s->mcsel = 0;
s->mv[0][0][0] = 0;
s->mv[0][0][1] = 0;
- s->current_picture.mbskip_table[xy] = 1;
+ s->cur_pic.mbskip_table[xy] = 1;
s->mb_skipped = 1;
}
goto end;
@@ -1728,7 +1728,7 @@ static int mpeg4_decode_mb(MpegEncContext *s, int16_t block[6][64])
s->mv_dir = MV_DIR_FORWARD;
if ((cbpc & 16) == 0) {
if (s->mcsel) {
- s->current_picture.mb_type[xy] = MB_TYPE_GMC |
+ s->cur_pic.mb_type[xy] = MB_TYPE_GMC |
MB_TYPE_16x16 |
MB_TYPE_L0;
/* 16x16 global motion prediction */
@@ -1738,7 +1738,7 @@ static int mpeg4_decode_mb(MpegEncContext *s, int16_t block[6][64])
s->mv[0][0][0] = mx;
s->mv[0][0][1] = my;
} else if ((!s->progressive_sequence) && get_bits1(&s->gb)) {
- s->current_picture.mb_type[xy] = MB_TYPE_16x8 |
+ s->cur_pic.mb_type[xy] = MB_TYPE_16x8 |
MB_TYPE_L0 |
MB_TYPE_INTERLACED;
/* 16x8 field motion prediction */
@@ -1762,7 +1762,7 @@ static int mpeg4_decode_mb(MpegEncContext *s, int16_t block[6][64])
s->mv[0][i][1] = my;
}
} else {
- s->current_picture.mb_type[xy] = MB_TYPE_16x16 | MB_TYPE_L0;
+ s->cur_pic.mb_type[xy] = MB_TYPE_16x16 | MB_TYPE_L0;
/* 16x16 motion prediction */
s->mv_type = MV_TYPE_16X16;
ff_h263_pred_motion(s, 0, 0, &pred_x, &pred_y);
@@ -1779,7 +1779,7 @@ static int mpeg4_decode_mb(MpegEncContext *s, int16_t block[6][64])
s->mv[0][0][1] = my;
}
} else {
- s->current_picture.mb_type[xy] = MB_TYPE_8x8 | MB_TYPE_L0;
+ s->cur_pic.mb_type[xy] = MB_TYPE_8x8 | MB_TYPE_L0;
s->mv_type = MV_TYPE_8X8;
for (i = 0; i < 4; i++) {
mot_val = ff_h263_pred_motion(s, i, 0, &pred_x, &pred_y);
@@ -1812,11 +1812,11 @@ static int mpeg4_decode_mb(MpegEncContext *s, int16_t block[6][64])
s->last_mv[i][1][1] = 0;
}
- ff_thread_await_progress(&s->next_picture_ptr->tf, s->mb_y, 0);
+ ff_thread_await_progress(&s->next_pic_ptr->tf, s->mb_y, 0);
}
/* if we skipped it in the future P-frame than skip it now too */
- s->mb_skipped = s->next_picture.mbskip_table[s->mb_y * s->mb_stride + s->mb_x]; // Note, skiptab=0 if last was GMC
+ s->mb_skipped = s->next_pic.mbskip_table[s->mb_y * s->mb_stride + s->mb_x]; // Note, skiptab=0 if last was GMC
if (s->mb_skipped) {
/* skip mb */
@@ -1829,7 +1829,7 @@ static int mpeg4_decode_mb(MpegEncContext *s, int16_t block[6][64])
s->mv[0][0][1] =
s->mv[1][0][0] =
s->mv[1][0][1] = 0;
- s->current_picture.mb_type[xy] = MB_TYPE_SKIP |
+ s->cur_pic.mb_type[xy] = MB_TYPE_SKIP |
MB_TYPE_16x16 |
MB_TYPE_L0;
goto end;
@@ -1949,7 +1949,7 @@ static int mpeg4_decode_mb(MpegEncContext *s, int16_t block[6][64])
s->mv_dir = MV_DIR_FORWARD | MV_DIR_BACKWARD | MV_DIRECT;
mb_type |= ff_mpeg4_set_direct_mv(s, mx, my);
}
- s->current_picture.mb_type[xy] = mb_type;
+ s->cur_pic.mb_type[xy] = mb_type;
} else { /* I-Frame */
int use_intra_dc_vlc;
@@ -1968,9 +1968,9 @@ static int mpeg4_decode_mb(MpegEncContext *s, int16_t block[6][64])
intra:
s->ac_pred = get_bits1(&s->gb);
if (s->ac_pred)
- s->current_picture.mb_type[xy] = MB_TYPE_INTRA | MB_TYPE_ACPRED;
+ s->cur_pic.mb_type[xy] = MB_TYPE_INTRA | MB_TYPE_ACPRED;
else
- s->current_picture.mb_type[xy] = MB_TYPE_INTRA;
+ s->cur_pic.mb_type[xy] = MB_TYPE_INTRA;
cbpy = get_vlc2(&s->gb, ff_h263_cbpy_vlc, CBPY_VLC_BITS, 1);
if (cbpy < 0) {
@@ -2017,11 +2017,11 @@ end:
if (s->pict_type == AV_PICTURE_TYPE_B) {
const int delta = s->mb_x + 1 == s->mb_width ? 2 : 1;
- ff_thread_await_progress(&s->next_picture_ptr->tf,
+ ff_thread_await_progress(&s->next_pic_ptr->tf,
(s->mb_x + delta >= s->mb_width)
? FFMIN(s->mb_y + 1, s->mb_height - 1)
: s->mb_y, 0);
- if (s->next_picture.mbskip_table[xy + delta])
+ if (s->next_pic.mbskip_table[xy + delta])
return SLICE_OK;
}
diff --git a/libavcodec/mpeg4videoenc.c b/libavcodec/mpeg4videoenc.c
index bddc26650a..87b12413ab 100644
--- a/libavcodec/mpeg4videoenc.c
+++ b/libavcodec/mpeg4videoenc.c
@@ -142,7 +142,7 @@ static inline int decide_ac_pred(MpegEncContext *s, int16_t block[6][64],
{
int score = 0;
int i, n;
- int8_t *const qscale_table = s->current_picture.qscale_table;
+ int8_t *const qscale_table = s->cur_pic.qscale_table;
memcpy(zigzag_last_index, s->block_last_index, sizeof(int) * 6);
@@ -222,7 +222,7 @@ static inline int decide_ac_pred(MpegEncContext *s, int16_t block[6][64],
void ff_clean_mpeg4_qscales(MpegEncContext *s)
{
int i;
- int8_t *const qscale_table = s->current_picture.qscale_table;
+ int8_t *const qscale_table = s->cur_pic.qscale_table;
ff_clean_h263_qscales(s);
@@ -511,7 +511,7 @@ void ff_mpeg4_encode_mb(MpegEncContext *s, int16_t block[6][64],
av_assert2(mb_type >= 0);
/* nothing to do if this MB was skipped in the next P-frame */
- if (s->next_picture.mbskip_table[s->mb_y * s->mb_stride + s->mb_x]) { // FIXME avoid DCT & ...
+ if (s->next_pic.mbskip_table[s->mb_y * s->mb_stride + s->mb_x]) { // FIXME avoid DCT & ...
s->mv[0][0][0] =
s->mv[0][0][1] =
s->mv[1][0][0] =
@@ -644,7 +644,7 @@ void ff_mpeg4_encode_mb(MpegEncContext *s, int16_t block[6][64],
y = s->mb_y * 16;
offset = x + y * s->linesize;
- p_pic = s->new_picture->data[0] + offset;
+ p_pic = s->new_pic->data[0] + offset;
s->mb_skipped = 1;
for (i = 0; i < s->max_b_frames; i++) {
@@ -777,8 +777,8 @@ void ff_mpeg4_encode_mb(MpegEncContext *s, int16_t block[6][64],
ff_h263_pred_motion(s, i, 0, &pred_x, &pred_y);
ff_h263_encode_motion_vector(s,
- s->current_picture.motion_val[0][s->block_index[i]][0] - pred_x,
- s->current_picture.motion_val[0][s->block_index[i]][1] - pred_y,
+ s->cur_pic.motion_val[0][s->block_index[i]][0] - pred_x,
+ s->cur_pic.motion_val[0][s->block_index[i]][1] - pred_y,
s->f_code);
}
}
@@ -886,7 +886,7 @@ static void mpeg4_encode_gop_header(MpegEncContext *s)
put_bits(&s->pb, 16, 0);
put_bits(&s->pb, 16, GOP_STARTCODE);
- time = s->current_picture_ptr->f->pts;
+ time = s->cur_pic_ptr->f->pts;
if (s->reordered_input_picture[1])
time = FFMIN(time, s->reordered_input_picture[1]->f->pts);
time = time * s->avctx->time_base.num;
@@ -1098,7 +1098,7 @@ int ff_mpeg4_encode_picture_header(MpegEncContext *s)
}
put_bits(&s->pb, 3, 0); /* intra dc VLC threshold */
if (!s->progressive_sequence) {
- put_bits(&s->pb, 1, !!(s->current_picture_ptr->f->flags & AV_FRAME_FLAG_TOP_FIELD_FIRST));
+ put_bits(&s->pb, 1, !!(s->cur_pic_ptr->f->flags & AV_FRAME_FLAG_TOP_FIELD_FIRST));
put_bits(&s->pb, 1, s->alternate_scan);
}
// FIXME sprite stuff
diff --git a/libavcodec/mpeg_er.c b/libavcodec/mpeg_er.c
index d429b0a839..bc838b05ba 100644
--- a/libavcodec/mpeg_er.c
+++ b/libavcodec/mpeg_er.c
@@ -49,9 +49,9 @@ void ff_mpeg_er_frame_start(MpegEncContext *s)
{
ERContext *er = &s->er;
- set_erpic(&er->cur_pic, s->current_picture_ptr);
- set_erpic(&er->next_pic, s->next_picture_ptr);
- set_erpic(&er->last_pic, s->last_picture_ptr);
+ set_erpic(&er->cur_pic, s->cur_pic_ptr);
+ set_erpic(&er->next_pic, s->next_pic_ptr);
+ set_erpic(&er->last_pic, s->last_pic_ptr);
er->pp_time = s->pp_time;
er->pb_time = s->pb_time;
@@ -84,13 +84,13 @@ static void mpeg_er_decode_mb(void *opaque, int ref, int mv_dir, int mv_type,
if (!s->chroma_y_shift)
s->bdsp.clear_blocks(s->block[6]);
- s->dest[0] = s->current_picture.f->data[0] +
+ s->dest[0] = s->cur_pic.f->data[0] +
s->mb_y * 16 * s->linesize +
s->mb_x * 16;
- s->dest[1] = s->current_picture.f->data[1] +
+ s->dest[1] = s->cur_pic.f->data[1] +
s->mb_y * (16 >> s->chroma_y_shift) * s->uvlinesize +
s->mb_x * (16 >> s->chroma_x_shift);
- s->dest[2] = s->current_picture.f->data[2] +
+ s->dest[2] = s->cur_pic.f->data[2] +
s->mb_y * (16 >> s->chroma_y_shift) * s->uvlinesize +
s->mb_x * (16 >> s->chroma_x_shift);
diff --git a/libavcodec/mpegvideo.c b/libavcodec/mpegvideo.c
index 4b1f882105..c8a1d6487a 100644
--- a/libavcodec/mpegvideo.c
+++ b/libavcodec/mpegvideo.c
@@ -678,9 +678,9 @@ int ff_mpv_init_context_frame(MpegEncContext *s)
static void clear_context(MpegEncContext *s)
{
memset(&s->buffer_pools, 0, sizeof(s->buffer_pools));
- memset(&s->next_picture, 0, sizeof(s->next_picture));
- memset(&s->last_picture, 0, sizeof(s->last_picture));
- memset(&s->current_picture, 0, sizeof(s->current_picture));
+ memset(&s->next_pic, 0, sizeof(s->next_pic));
+ memset(&s->last_pic, 0, sizeof(s->last_pic));
+ memset(&s->cur_pic, 0, sizeof(s->cur_pic));
memset(s->thread_context, 0, sizeof(s->thread_context));
@@ -763,9 +763,9 @@ av_cold int ff_mpv_common_init(MpegEncContext *s)
goto fail_nomem;
}
- if (!(s->next_picture.f = av_frame_alloc()) ||
- !(s->last_picture.f = av_frame_alloc()) ||
- !(s->current_picture.f = av_frame_alloc()))
+ if (!(s->next_pic.f = av_frame_alloc()) ||
+ !(s->last_pic.f = av_frame_alloc()) ||
+ !(s->cur_pic.f = av_frame_alloc()))
goto fail_nomem;
if ((ret = ff_mpv_init_context_frame(s)))
@@ -840,15 +840,15 @@ void ff_mpv_common_end(MpegEncContext *s)
ff_mpv_picture_free(&s->picture[i]);
}
av_freep(&s->picture);
- ff_mpv_picture_free(&s->last_picture);
- ff_mpv_picture_free(&s->current_picture);
- ff_mpv_picture_free(&s->next_picture);
+ ff_mpv_picture_free(&s->last_pic);
+ ff_mpv_picture_free(&s->cur_pic);
+ ff_mpv_picture_free(&s->next_pic);
s->context_initialized = 0;
s->context_reinit = 0;
- s->last_picture_ptr =
- s->next_picture_ptr =
- s->current_picture_ptr = NULL;
+ s->last_pic_ptr =
+ s->next_pic_ptr =
+ s->cur_pic_ptr = NULL;
s->linesize = s->uvlinesize = 0;
}
@@ -881,8 +881,8 @@ void ff_clean_intra_table_entries(MpegEncContext *s)
}
void ff_init_block_index(MpegEncContext *s){ //FIXME maybe rename
- const int linesize = s->current_picture.f->linesize[0]; //not s->linesize as this would be wrong for field pics
- const int uvlinesize = s->current_picture.f->linesize[1];
+ const int linesize = s->cur_pic.f->linesize[0]; //not s->linesize as this would be wrong for field pics
+ const int uvlinesize = s->cur_pic.f->linesize[1];
const int width_of_mb = (4 + (s->avctx->bits_per_raw_sample > 8)) - s->avctx->lowres;
const int height_of_mb = 4 - s->avctx->lowres;
@@ -894,9 +894,9 @@ void ff_init_block_index(MpegEncContext *s){ //FIXME maybe rename
s->block_index[5]= s->mb_stride*(s->mb_y + s->mb_height + 2) + s->b8_stride*s->mb_height*2 + s->mb_x - 1;
//block_index is not used by mpeg2, so it is not affected by chroma_format
- s->dest[0] = s->current_picture.f->data[0] + (int)((s->mb_x - 1U) << width_of_mb);
- s->dest[1] = s->current_picture.f->data[1] + (int)((s->mb_x - 1U) << (width_of_mb - s->chroma_x_shift));
- s->dest[2] = s->current_picture.f->data[2] + (int)((s->mb_x - 1U) << (width_of_mb - s->chroma_x_shift));
+ s->dest[0] = s->cur_pic.f->data[0] + (int)((s->mb_x - 1U) << width_of_mb);
+ s->dest[1] = s->cur_pic.f->data[1] + (int)((s->mb_x - 1U) << (width_of_mb - s->chroma_x_shift));
+ s->dest[2] = s->cur_pic.f->data[2] + (int)((s->mb_x - 1U) << (width_of_mb - s->chroma_x_shift));
if (s->picture_structure == PICT_FRAME) {
s->dest[0] += s->mb_y * linesize << height_of_mb;
diff --git a/libavcodec/mpegvideo.h b/libavcodec/mpegvideo.h
index f5ae0d1ca0..e2953a3198 100644
--- a/libavcodec/mpegvideo.h
+++ b/libavcodec/mpegvideo.h
@@ -156,29 +156,29 @@ typedef struct MpegEncContext {
* copy of the previous picture structure.
* note, linesize & data, might not match the previous picture (for field pictures)
*/
- Picture last_picture;
+ Picture last_pic;
/**
* copy of the next picture structure.
* note, linesize & data, might not match the next picture (for field pictures)
*/
- Picture next_picture;
+ Picture next_pic;
/**
* Reference to the source picture for encoding.
* note, linesize & data, might not match the source picture (for field pictures)
*/
- AVFrame *new_picture;
+ AVFrame *new_pic;
/**
* copy of the current picture structure.
* note, linesize & data, might not match the current picture (for field pictures)
*/
- Picture current_picture; ///< buffer to store the decompressed current picture
+ Picture cur_pic; ///< buffer to store the decompressed current picture
- Picture *last_picture_ptr; ///< pointer to the previous picture.
- Picture *next_picture_ptr; ///< pointer to the next picture (for bidir pred)
- Picture *current_picture_ptr; ///< pointer to the current picture
+ Picture *last_pic_ptr; ///< pointer to the previous picture.
+ Picture *next_pic_ptr; ///< pointer to the next picture (for bidir pred)
+ Picture *cur_pic_ptr; ///< pointer to the current picture
int skipped_last_frame;
int last_dc[3]; ///< last DC values for MPEG-1
int16_t *dc_val_base;
diff --git a/libavcodec/mpegvideo_dec.c b/libavcodec/mpegvideo_dec.c
index a4c7a0086a..9b04d6a351 100644
--- a/libavcodec/mpegvideo_dec.c
+++ b/libavcodec/mpegvideo_dec.c
@@ -122,9 +122,9 @@ do {\
}\
} while (0)
- UPDATE_PICTURE(current_picture);
- UPDATE_PICTURE(last_picture);
- UPDATE_PICTURE(next_picture);
+ UPDATE_PICTURE(cur_pic);
+ UPDATE_PICTURE(last_pic);
+ UPDATE_PICTURE(next_pic);
s->linesize = s1->linesize;
s->uvlinesize = s1->uvlinesize;
@@ -134,9 +134,9 @@ do {\
pic < old_ctx->picture + MAX_PICTURE_COUNT) ? \
&new_ctx->picture[pic - old_ctx->picture] : NULL)
- s->last_picture_ptr = REBASE_PICTURE(s1->last_picture_ptr, s, s1);
- s->current_picture_ptr = REBASE_PICTURE(s1->current_picture_ptr, s, s1);
- s->next_picture_ptr = REBASE_PICTURE(s1->next_picture_ptr, s, s1);
+ s->last_pic_ptr = REBASE_PICTURE(s1->last_pic_ptr, s, s1);
+ s->cur_pic_ptr = REBASE_PICTURE(s1->cur_pic_ptr, s, s1);
+ s->next_pic_ptr = REBASE_PICTURE(s1->next_pic_ptr, s, s1);
// Error/bug resilience
s->workaround_bugs = s1->workaround_bugs;
@@ -193,9 +193,9 @@ int ff_mpv_common_frame_size_change(MpegEncContext *s)
ff_mpv_free_context_frame(s);
- s->last_picture_ptr =
- s->next_picture_ptr =
- s->current_picture_ptr = NULL;
+ s->last_pic_ptr =
+ s->next_pic_ptr =
+ s->cur_pic_ptr = NULL;
if ((s->width || s->height) &&
(err = av_image_check_size(s->width, s->height, 0, s->avctx)) < 0)
@@ -326,9 +326,9 @@ int ff_mpv_alloc_dummy_frames(MpegEncContext *s)
AVCodecContext *avctx = s->avctx;
int ret;
- if ((!s->last_picture_ptr || !s->last_picture_ptr->f->buf[0]) &&
+ if ((!s->last_pic_ptr || !s->last_pic_ptr->f->buf[0]) &&
(s->pict_type != AV_PICTURE_TYPE_I)) {
- if (s->pict_type == AV_PICTURE_TYPE_B && s->next_picture_ptr && s->next_picture_ptr->f->buf[0])
+ if (s->pict_type == AV_PICTURE_TYPE_B && s->next_pic_ptr && s->next_pic_ptr->f->buf[0])
av_log(avctx, AV_LOG_DEBUG,
"allocating dummy last picture for B frame\n");
else if (s->codec_id != AV_CODEC_ID_H261 /* H.261 has no keyframes */ &&
@@ -337,25 +337,25 @@ int ff_mpv_alloc_dummy_frames(MpegEncContext *s)
"warning: first frame is no keyframe\n");
/* Allocate a dummy frame */
- ret = alloc_dummy_frame(s, &s->last_picture_ptr, &s->last_picture);
+ ret = alloc_dummy_frame(s, &s->last_pic_ptr, &s->last_pic);
if (ret < 0)
return ret;
if (!avctx->hwaccel) {
int luma_val = s->codec_id == AV_CODEC_ID_FLV1 || s->codec_id == AV_CODEC_ID_H263 ? 16 : 0x80;
- color_frame(s->last_picture_ptr->f, luma_val);
+ color_frame(s->last_pic_ptr->f, luma_val);
}
}
- if ((!s->next_picture_ptr || !s->next_picture_ptr->f->buf[0]) &&
+ if ((!s->next_pic_ptr || !s->next_pic_ptr->f->buf[0]) &&
s->pict_type == AV_PICTURE_TYPE_B) {
/* Allocate a dummy frame */
- ret = alloc_dummy_frame(s, &s->next_picture_ptr, &s->next_picture);
+ ret = alloc_dummy_frame(s, &s->next_pic_ptr, &s->next_pic);
if (ret < 0)
return ret;
}
- av_assert0(s->pict_type == AV_PICTURE_TYPE_I || (s->last_picture_ptr &&
- s->last_picture_ptr->f->buf[0]));
+ av_assert0(s->pict_type == AV_PICTURE_TYPE_I || (s->last_pic_ptr &&
+ s->last_pic_ptr->f->buf[0]));
return 0;
}
@@ -376,67 +376,65 @@ int ff_mpv_frame_start(MpegEncContext *s, AVCodecContext *avctx)
}
/* mark & release old frames */
- if (s->pict_type != AV_PICTURE_TYPE_B && s->last_picture_ptr &&
- s->last_picture_ptr != s->next_picture_ptr &&
- s->last_picture_ptr->f->buf[0]) {
- ff_mpeg_unref_picture(s->last_picture_ptr);
+ if (s->pict_type != AV_PICTURE_TYPE_B && s->last_pic_ptr &&
+ s->last_pic_ptr != s->next_pic_ptr &&
+ s->last_pic_ptr->f->buf[0]) {
+ ff_mpeg_unref_picture(s->last_pic_ptr);
}
/* release non reference/forgotten frames */
for (int i = 0; i < MAX_PICTURE_COUNT; i++) {
if (!s->picture[i].reference ||
- (&s->picture[i] != s->last_picture_ptr &&
- &s->picture[i] != s->next_picture_ptr)) {
+ (&s->picture[i] != s->last_pic_ptr &&
+ &s->picture[i] != s->next_pic_ptr)) {
ff_mpeg_unref_picture(&s->picture[i]);
}
}
- ff_mpeg_unref_picture(&s->current_picture);
- ff_mpeg_unref_picture(&s->last_picture);
- ff_mpeg_unref_picture(&s->next_picture);
+ ff_mpeg_unref_picture(&s->cur_pic);
+ ff_mpeg_unref_picture(&s->last_pic);
+ ff_mpeg_unref_picture(&s->next_pic);
- ret = alloc_picture(s, &s->current_picture_ptr,
+ ret = alloc_picture(s, &s->cur_pic_ptr,
s->pict_type != AV_PICTURE_TYPE_B && !s->droppable);
if (ret < 0)
return ret;
- s->current_picture_ptr->f->flags |= AV_FRAME_FLAG_TOP_FIELD_FIRST * !!s->top_field_first;
- s->current_picture_ptr->f->flags |= AV_FRAME_FLAG_INTERLACED * (!s->progressive_frame &&
- !s->progressive_sequence);
- s->current_picture_ptr->field_picture = s->picture_structure != PICT_FRAME;
+ s->cur_pic_ptr->f->flags |= AV_FRAME_FLAG_TOP_FIELD_FIRST * !!s->top_field_first;
+ s->cur_pic_ptr->f->flags |= AV_FRAME_FLAG_INTERLACED *
+ (!s->progressive_frame && !s->progressive_sequence);
+ s->cur_pic_ptr->field_picture = s->picture_structure != PICT_FRAME;
- s->current_picture_ptr->f->pict_type = s->pict_type;
+ s->cur_pic_ptr->f->pict_type = s->pict_type;
if (s->pict_type == AV_PICTURE_TYPE_I)
- s->current_picture_ptr->f->flags |= AV_FRAME_FLAG_KEY;
+ s->cur_pic_ptr->f->flags |= AV_FRAME_FLAG_KEY;
else
- s->current_picture_ptr->f->flags &= ~AV_FRAME_FLAG_KEY;
+ s->cur_pic_ptr->f->flags &= ~AV_FRAME_FLAG_KEY;
- if ((ret = ff_mpeg_ref_picture(&s->current_picture,
- s->current_picture_ptr)) < 0)
+ if ((ret = ff_mpeg_ref_picture(&s->cur_pic, s->cur_pic_ptr)) < 0)
return ret;
if (s->pict_type != AV_PICTURE_TYPE_B) {
- s->last_picture_ptr = s->next_picture_ptr;
+ s->last_pic_ptr = s->next_pic_ptr;
if (!s->droppable)
- s->next_picture_ptr = s->current_picture_ptr;
+ s->next_pic_ptr = s->cur_pic_ptr;
}
ff_dlog(s->avctx, "L%p N%p C%p L%p N%p C%p type:%d drop:%d\n",
- s->last_picture_ptr, s->next_picture_ptr,s->current_picture_ptr,
- s->last_picture_ptr ? s->last_picture_ptr->f->data[0] : NULL,
- s->next_picture_ptr ? s->next_picture_ptr->f->data[0] : NULL,
- s->current_picture_ptr ? s->current_picture_ptr->f->data[0] : NULL,
+ s->last_pic_ptr, s->next_pic_ptr, s->cur_pic_ptr,
+ s->last_pic_ptr ? s->last_pic_ptr->f->data[0] : NULL,
+ s->next_pic_ptr ? s->next_pic_ptr->f->data[0] : NULL,
+ s->cur_pic_ptr ? s->cur_pic_ptr->f->data[0] : NULL,
s->pict_type, s->droppable);
- if (s->last_picture_ptr) {
- if (s->last_picture_ptr->f->buf[0] &&
- (ret = ff_mpeg_ref_picture(&s->last_picture,
- s->last_picture_ptr)) < 0)
+ if (s->last_pic_ptr) {
+ if (s->last_pic_ptr->f->buf[0] &&
+ (ret = ff_mpeg_ref_picture(&s->last_pic,
+ s->last_pic_ptr)) < 0)
return ret;
}
- if (s->next_picture_ptr) {
- if (s->next_picture_ptr->f->buf[0] &&
- (ret = ff_mpeg_ref_picture(&s->next_picture,
- s->next_picture_ptr)) < 0)
+ if (s->next_pic_ptr) {
+ if (s->next_pic_ptr->f->buf[0] &&
+ (ret = ff_mpeg_ref_picture(&s->next_pic, s->next_pic_ptr)) < 0)
return ret;
}
@@ -459,7 +457,7 @@ int ff_mpv_frame_start(MpegEncContext *s, AVCodecContext *avctx)
}
if (s->avctx->debug & FF_DEBUG_NOMC)
- color_frame(s->current_picture_ptr->f, 0x80);
+ color_frame(s->cur_pic_ptr->f, 0x80);
return 0;
}
@@ -469,8 +467,8 @@ void ff_mpv_frame_end(MpegEncContext *s)
{
emms_c();
- if (s->current_picture.reference)
- ff_thread_report_progress(&s->current_picture_ptr->tf, INT_MAX, 0);
+ if (s->cur_pic.reference)
+ ff_thread_report_progress(&s->cur_pic_ptr->tf, INT_MAX, 0);
}
void ff_print_debug_info(const MpegEncContext *s, const Picture *p, AVFrame *pict)
@@ -512,8 +510,8 @@ int ff_mpv_export_qp_table(const MpegEncContext *s, AVFrame *f, const Picture *p
void ff_mpeg_draw_horiz_band(MpegEncContext *s, int y, int h)
{
- ff_draw_horiz_band(s->avctx, s->current_picture_ptr->f,
- s->last_picture_ptr ? s->last_picture_ptr->f : NULL,
+ ff_draw_horiz_band(s->avctx, s->cur_pic_ptr->f,
+ s->last_pic_ptr ? s->last_pic_ptr->f : NULL,
y, h, s->picture_structure,
s->first_field, s->low_delay);
}
@@ -527,11 +525,11 @@ void ff_mpeg_flush(AVCodecContext *avctx)
for (int i = 0; i < MAX_PICTURE_COUNT; i++)
ff_mpeg_unref_picture(&s->picture[i]);
- s->current_picture_ptr = s->last_picture_ptr = s->next_picture_ptr = NULL;
+ s->cur_pic_ptr = s->last_pic_ptr = s->next_pic_ptr = NULL;
- ff_mpeg_unref_picture(&s->current_picture);
- ff_mpeg_unref_picture(&s->last_picture);
- ff_mpeg_unref_picture(&s->next_picture);
+ ff_mpeg_unref_picture(&s->cur_pic);
+ ff_mpeg_unref_picture(&s->last_pic);
+ ff_mpeg_unref_picture(&s->next_pic);
s->mb_x = s->mb_y = 0;
@@ -542,7 +540,7 @@ void ff_mpeg_flush(AVCodecContext *avctx)
void ff_mpv_report_decode_progress(MpegEncContext *s)
{
if (s->pict_type != AV_PICTURE_TYPE_B && !s->partitioned_frame && !s->er.error_occurred)
- ff_thread_report_progress(&s->current_picture_ptr->tf, s->mb_y, 0);
+ ff_thread_report_progress(&s->cur_pic_ptr->tf, s->mb_y, 0);
}
@@ -615,8 +613,8 @@ static av_always_inline void mpeg_motion_lowres(MpegEncContext *s,
const int h_edge_pos = s->h_edge_pos >> lowres;
const int v_edge_pos = s->v_edge_pos >> lowres;
int hc = s->chroma_y_shift ? (h+1-bottom_field)>>1 : h;
- linesize = s->current_picture.f->linesize[0] << field_based;
- uvlinesize = s->current_picture.f->linesize[1] << field_based;
+ linesize = s->cur_pic.f->linesize[0] << field_based;
+ uvlinesize = s->cur_pic.f->linesize[1] << field_based;
// FIXME obviously not perfect but qpel will not work in lowres anyway
if (s->quarter_sample) {
@@ -861,7 +859,7 @@ static inline void MPV_motion_lowres(MpegEncContext *s,
} else {
if (s->picture_structure != s->field_select[dir][0] + 1 &&
s->pict_type != AV_PICTURE_TYPE_B && !s->first_field) {
- ref_picture = s->current_picture_ptr->f->data;
+ ref_picture = s->cur_pic_ptr->f->data;
}
mpeg_motion_lowres(s, dest_y, dest_cb, dest_cr,
0, 0, s->field_select[dir][0],
@@ -878,7 +876,7 @@ static inline void MPV_motion_lowres(MpegEncContext *s,
s->pict_type == AV_PICTURE_TYPE_B || s->first_field) {
ref2picture = ref_picture;
} else {
- ref2picture = s->current_picture_ptr->f->data;
+ ref2picture = s->cur_pic_ptr->f->data;
}
mpeg_motion_lowres(s, dest_y, dest_cb, dest_cr,
@@ -919,7 +917,7 @@ static inline void MPV_motion_lowres(MpegEncContext *s,
// opposite parity is always in the same
// frame if this is second field
if (!s->first_field) {
- ref_picture = s->current_picture_ptr->f->data;
+ ref_picture = s->cur_pic_ptr->f->data;
}
}
}
diff --git a/libavcodec/mpegvideo_enc.c b/libavcodec/mpegvideo_enc.c
index 1798a25ed9..d7e1085cf8 100644
--- a/libavcodec/mpegvideo_enc.c
+++ b/libavcodec/mpegvideo_enc.c
@@ -231,11 +231,11 @@ void ff_write_quant_matrix(PutBitContext *pb, uint16_t *matrix)
}
/**
- * init s->current_picture.qscale_table from s->lambda_table
+ * init s->cur_pic.qscale_table from s->lambda_table
*/
void ff_init_qscale_tab(MpegEncContext *s)
{
- int8_t * const qscale_table = s->current_picture.qscale_table;
+ int8_t * const qscale_table = s->cur_pic.qscale_table;
int i;
for (i = 0; i < s->mb_num; i++) {
@@ -821,7 +821,7 @@ av_cold int ff_mpv_encode_init(AVCodecContext *avctx)
!FF_ALLOCZ_TYPED_ARRAY(s->q_inter_matrix16, 32) ||
!FF_ALLOCZ_TYPED_ARRAY(s->input_picture, MAX_B_FRAMES + 1) ||
!FF_ALLOCZ_TYPED_ARRAY(s->reordered_input_picture, MAX_B_FRAMES + 1) ||
- !(s->new_picture = av_frame_alloc()))
+ !(s->new_pic = av_frame_alloc()))
return AVERROR(ENOMEM);
/* Allocate MV tables; the MV and MB tables will be copied
@@ -996,7 +996,7 @@ av_cold int ff_mpv_encode_end(AVCodecContext *avctx)
for (i = 0; i < FF_ARRAY_ELEMS(s->tmp_frames); i++)
av_frame_free(&s->tmp_frames[i]);
- av_frame_free(&s->new_picture);
+ av_frame_free(&s->new_pic);
av_freep(&avctx->stats_out);
@@ -1340,7 +1340,6 @@ static int estimate_best_b_count(MpegEncContext *s)
return AVERROR(ENOMEM);
//emms_c();
- //s->next_picture_ptr->quality;
p_lambda = s->last_lambda_for[AV_PICTURE_TYPE_P];
//p_lambda * FFABS(s->avctx->b_quant_factor) + s->avctx->b_quant_offset;
b_lambda = s->last_lambda_for[AV_PICTURE_TYPE_B];
@@ -1351,7 +1350,7 @@ static int estimate_best_b_count(MpegEncContext *s)
for (i = 0; i < s->max_b_frames + 2; i++) {
const Picture *pre_input_ptr = i ? s->input_picture[i - 1] :
- s->next_picture_ptr;
+ s->next_pic_ptr;
if (pre_input_ptr) {
const uint8_t *data[4];
@@ -1479,8 +1478,8 @@ static int select_input_picture(MpegEncContext *s)
if (!s->reordered_input_picture[0] && s->input_picture[0]) {
if (s->frame_skip_threshold || s->frame_skip_factor) {
if (s->picture_in_gop_number < s->gop_size &&
- s->next_picture_ptr &&
- skip_check(s, s->input_picture[0], s->next_picture_ptr)) {
+ s->next_pic_ptr &&
+ skip_check(s, s->input_picture[0], s->next_pic_ptr)) {
// FIXME check that the gop check above is +-1 correct
ff_mpeg_unref_picture(s->input_picture[0]);
@@ -1491,7 +1490,7 @@ static int select_input_picture(MpegEncContext *s)
}
if (/*s->picture_in_gop_number >= s->gop_size ||*/
- !s->next_picture_ptr || s->intra_only) {
+ !s->next_pic_ptr || s->intra_only) {
s->reordered_input_picture[0] = s->input_picture[0];
s->reordered_input_picture[0]->f->pict_type = AV_PICTURE_TYPE_I;
s->reordered_input_picture[0]->coded_picture_number =
@@ -1594,14 +1593,14 @@ static int select_input_picture(MpegEncContext *s)
}
}
no_output_pic:
- av_frame_unref(s->new_picture);
+ av_frame_unref(s->new_pic);
if (s->reordered_input_picture[0]) {
s->reordered_input_picture[0]->reference =
s->reordered_input_picture[0]->f->pict_type !=
AV_PICTURE_TYPE_B ? 3 : 0;
- if ((ret = av_frame_ref(s->new_picture,
+ if ((ret = av_frame_ref(s->new_pic,
s->reordered_input_picture[0]->f)))
goto fail;
@@ -1631,16 +1630,16 @@ no_output_pic:
/* mark us unused / free shared pic */
ff_mpeg_unref_picture(s->reordered_input_picture[0]);
- s->current_picture_ptr = pic;
+ s->cur_pic_ptr = pic;
} else {
// input is not a shared pix -> reuse buffer for current_pix
- s->current_picture_ptr = s->reordered_input_picture[0];
+ s->cur_pic_ptr = s->reordered_input_picture[0];
for (i = 0; i < 4; i++) {
- if (s->new_picture->data[i])
- s->new_picture->data[i] += INPLACE_OFFSET;
+ if (s->new_pic->data[i])
+ s->new_pic->data[i] += INPLACE_OFFSET;
}
}
- s->picture_number = s->current_picture_ptr->display_picture_number;
+ s->picture_number = s->cur_pic_ptr->display_picture_number;
}
return 0;
@@ -1652,24 +1651,24 @@ fail:
static void frame_end(MpegEncContext *s)
{
if (s->unrestricted_mv &&
- s->current_picture.reference &&
+ s->cur_pic.reference &&
!s->intra_only) {
int hshift = s->chroma_x_shift;
int vshift = s->chroma_y_shift;
- s->mpvencdsp.draw_edges(s->current_picture.f->data[0],
- s->current_picture.f->linesize[0],
+ s->mpvencdsp.draw_edges(s->cur_pic.f->data[0],
+ s->cur_pic.f->linesize[0],
s->h_edge_pos, s->v_edge_pos,
EDGE_WIDTH, EDGE_WIDTH,
EDGE_TOP | EDGE_BOTTOM);
- s->mpvencdsp.draw_edges(s->current_picture.f->data[1],
- s->current_picture.f->linesize[1],
+ s->mpvencdsp.draw_edges(s->cur_pic.f->data[1],
+ s->cur_pic.f->linesize[1],
s->h_edge_pos >> hshift,
s->v_edge_pos >> vshift,
EDGE_WIDTH >> hshift,
EDGE_WIDTH >> vshift,
EDGE_TOP | EDGE_BOTTOM);
- s->mpvencdsp.draw_edges(s->current_picture.f->data[2],
- s->current_picture.f->linesize[2],
+ s->mpvencdsp.draw_edges(s->cur_pic.f->data[2],
+ s->cur_pic.f->linesize[2],
s->h_edge_pos >> hshift,
s->v_edge_pos >> vshift,
EDGE_WIDTH >> hshift,
@@ -1680,7 +1679,7 @@ static void frame_end(MpegEncContext *s)
emms_c();
s->last_pict_type = s->pict_type;
- s->last_lambda_for [s->pict_type] = s->current_picture_ptr->f->quality;
+ s->last_lambda_for [s->pict_type] = s->cur_pic_ptr->f->quality;
if (s->pict_type!= AV_PICTURE_TYPE_B)
s->last_non_b_pict_type = s->pict_type;
}
@@ -1711,36 +1710,33 @@ static int frame_start(MpegEncContext *s)
int ret;
/* mark & release old frames */
- if (s->pict_type != AV_PICTURE_TYPE_B && s->last_picture_ptr &&
- s->last_picture_ptr != s->next_picture_ptr &&
- s->last_picture_ptr->f->buf[0]) {
- ff_mpeg_unref_picture(s->last_picture_ptr);
+ if (s->pict_type != AV_PICTURE_TYPE_B && s->last_pic_ptr &&
+ s->last_pic_ptr != s->next_pic_ptr &&
+ s->last_pic_ptr->f->buf[0]) {
+ ff_mpeg_unref_picture(s->last_pic_ptr);
}
- s->current_picture_ptr->f->pict_type = s->pict_type;
+ s->cur_pic_ptr->f->pict_type = s->pict_type;
- ff_mpeg_unref_picture(&s->current_picture);
- if ((ret = ff_mpeg_ref_picture(&s->current_picture,
- s->current_picture_ptr)) < 0)
+ ff_mpeg_unref_picture(&s->cur_pic);
+ if ((ret = ff_mpeg_ref_picture(&s->cur_pic, s->cur_pic_ptr)) < 0)
return ret;
if (s->pict_type != AV_PICTURE_TYPE_B) {
- s->last_picture_ptr = s->next_picture_ptr;
- s->next_picture_ptr = s->current_picture_ptr;
+ s->last_pic_ptr = s->next_pic_ptr;
+ s->next_pic_ptr = s->cur_pic_ptr;
}
- if (s->last_picture_ptr) {
- ff_mpeg_unref_picture(&s->last_picture);
- if (s->last_picture_ptr->f->buf[0] &&
- (ret = ff_mpeg_ref_picture(&s->last_picture,
- s->last_picture_ptr)) < 0)
+ if (s->last_pic_ptr) {
+ ff_mpeg_unref_picture(&s->last_pic);
+ if (s->last_pic_ptr->f->buf[0] &&
+ (ret = ff_mpeg_ref_picture(&s->last_pic, s->last_pic_ptr)) < 0)
return ret;
}
- if (s->next_picture_ptr) {
- ff_mpeg_unref_picture(&s->next_picture);
- if (s->next_picture_ptr->f->buf[0] &&
- (ret = ff_mpeg_ref_picture(&s->next_picture,
- s->next_picture_ptr)) < 0)
+ if (s->next_pic_ptr) {
+ ff_mpeg_unref_picture(&s->next_pic);
+ if (s->next_pic_ptr->f->buf[0] &&
+ (ret = ff_mpeg_ref_picture(&s->next_pic, s->next_pic_ptr)) < 0)
return ret;
}
@@ -1771,12 +1767,12 @@ int ff_mpv_encode_picture(AVCodecContext *avctx, AVPacket *pkt,
}
/* output? */
- if (s->new_picture->data[0]) {
+ if (s->new_pic->data[0]) {
int growing_buffer = context_count == 1 && !s->data_partitioning;
size_t pkt_size = 10000 + s->mb_width * s->mb_height *
(growing_buffer ? 64 : (MAX_MB_BYTES + 100));
if (CONFIG_MJPEG_ENCODER && avctx->codec_id == AV_CODEC_ID_MJPEG) {
- ret = ff_mjpeg_add_icc_profile_size(avctx, s->new_picture, &pkt_size);
+ ret = ff_mjpeg_add_icc_profile_size(avctx, s->new_pic, &pkt_size);
if (ret < 0)
return ret;
}
@@ -1800,7 +1796,7 @@ int ff_mpv_encode_picture(AVCodecContext *avctx, AVPacket *pkt,
init_put_bits(&s->thread_context[i]->pb, start, end - start);
}
- s->pict_type = s->new_picture->pict_type;
+ s->pict_type = s->new_pic->pict_type;
//emms_c();
ret = frame_start(s);
if (ret < 0)
@@ -1868,7 +1864,7 @@ vbv_retry:
for (i = 0; i < 4; i++) {
avctx->error[i] += s->encoding_error[i];
}
- ff_side_data_set_encoder_stats(pkt, s->current_picture.f->quality,
+ ff_side_data_set_encoder_stats(pkt, s->cur_pic.f->quality,
s->encoding_error,
(avctx->flags&AV_CODEC_FLAG_PSNR) ? MPEGVIDEO_MAX_PLANES : 0,
s->pict_type);
@@ -1962,10 +1958,10 @@ vbv_retry:
}
s->total_bits += s->frame_bits;
- pkt->pts = s->current_picture.f->pts;
- pkt->duration = s->current_picture.f->duration;
+ pkt->pts = s->cur_pic.f->pts;
+ pkt->duration = s->cur_pic.f->duration;
if (!s->low_delay && s->pict_type != AV_PICTURE_TYPE_B) {
- if (!s->current_picture.coded_picture_number)
+ if (!s->cur_pic.coded_picture_number)
pkt->dts = pkt->pts - s->dts_delta;
else
pkt->dts = s->reordered_pts;
@@ -1975,12 +1971,12 @@ vbv_retry:
// the no-delay case is handled in generic code
if (avctx->codec->capabilities & AV_CODEC_CAP_DELAY) {
- ret = ff_encode_reordered_opaque(avctx, pkt, s->current_picture.f);
+ ret = ff_encode_reordered_opaque(avctx, pkt, s->cur_pic.f);
if (ret < 0)
return ret;
}
- if (s->current_picture.f->flags & AV_FRAME_FLAG_KEY)
+ if (s->cur_pic.f->flags & AV_FRAME_FLAG_KEY)
pkt->flags |= AV_PKT_FLAG_KEY;
if (s->mb_info)
av_packet_shrink_side_data(pkt, AV_PKT_DATA_H263_MB_INFO, s->mb_info_size);
@@ -2150,7 +2146,7 @@ static av_always_inline void encode_mb_internal(MpegEncContext *s,
update_qscale(s);
if (!(s->mpv_flags & FF_MPV_FLAG_QP_RD)) {
- s->qscale = s->current_picture_ptr->qscale_table[mb_xy];
+ s->qscale = s->cur_pic_ptr->qscale_table[mb_xy];
s->dquant = s->qscale - last_qp;
if (s->out_format == FMT_H263) {
@@ -2174,11 +2170,11 @@ static av_always_inline void encode_mb_internal(MpegEncContext *s,
wrap_y = s->linesize;
wrap_c = s->uvlinesize;
- ptr_y = s->new_picture->data[0] +
+ ptr_y = s->new_pic->data[0] +
(mb_y * 16 * wrap_y) + mb_x * 16;
- ptr_cb = s->new_picture->data[1] +
+ ptr_cb = s->new_pic->data[1] +
(mb_y * mb_block_height * wrap_c) + mb_x * mb_block_width;
- ptr_cr = s->new_picture->data[2] +
+ ptr_cr = s->new_pic->data[2] +
(mb_y * mb_block_height * wrap_c) + mb_x * mb_block_width;
if((mb_x * 16 + 16 > s->width || mb_y * 16 + 16 > s->height) && s->codec_id != AV_CODEC_ID_AMV){
@@ -2273,14 +2269,14 @@ static av_always_inline void encode_mb_internal(MpegEncContext *s,
if (s->mv_dir & MV_DIR_FORWARD) {
ff_mpv_motion(s, dest_y, dest_cb, dest_cr, 0,
- s->last_picture.f->data,
+ s->last_pic.f->data,
op_pix, op_qpix);
op_pix = s->hdsp.avg_pixels_tab;
op_qpix = s->qdsp.avg_qpel_pixels_tab;
}
if (s->mv_dir & MV_DIR_BACKWARD) {
ff_mpv_motion(s, dest_y, dest_cb, dest_cr, 1,
- s->next_picture.f->data,
+ s->next_pic.f->data,
op_pix, op_qpix);
}
@@ -2664,26 +2660,26 @@ static int sse_mb(MpegEncContext *s){
if(w==16 && h==16)
if(s->avctx->mb_cmp == FF_CMP_NSSE){
- return s->mecc.nsse[0](s, s->new_picture->data[0] + s->mb_x * 16 + s->mb_y * s->linesize * 16,
+ return s->mecc.nsse[0](s, s->new_pic->data[0] + s->mb_x * 16 + s->mb_y * s->linesize * 16,
s->dest[0], s->linesize, 16) +
- s->mecc.nsse[1](s, s->new_picture->data[1] + s->mb_x * chroma_mb_w + s->mb_y * s->uvlinesize * chroma_mb_h,
+ s->mecc.nsse[1](s, s->new_pic->data[1] + s->mb_x * chroma_mb_w + s->mb_y * s->uvlinesize * chroma_mb_h,
s->dest[1], s->uvlinesize, chroma_mb_h) +
- s->mecc.nsse[1](s, s->new_picture->data[2] + s->mb_x * chroma_mb_w + s->mb_y * s->uvlinesize * chroma_mb_h,
+ s->mecc.nsse[1](s, s->new_pic->data[2] + s->mb_x * chroma_mb_w + s->mb_y * s->uvlinesize * chroma_mb_h,
s->dest[2], s->uvlinesize, chroma_mb_h);
}else{
- return s->mecc.sse[0](NULL, s->new_picture->data[0] + s->mb_x * 16 + s->mb_y * s->linesize * 16,
+ return s->mecc.sse[0](NULL, s->new_pic->data[0] + s->mb_x * 16 + s->mb_y * s->linesize * 16,
s->dest[0], s->linesize, 16) +
- s->mecc.sse[1](NULL, s->new_picture->data[1] + s->mb_x * chroma_mb_w + s->mb_y * s->uvlinesize * chroma_mb_h,
+ s->mecc.sse[1](NULL, s->new_pic->data[1] + s->mb_x * chroma_mb_w + s->mb_y * s->uvlinesize * chroma_mb_h,
s->dest[1], s->uvlinesize, chroma_mb_h) +
- s->mecc.sse[1](NULL, s->new_picture->data[2] + s->mb_x * chroma_mb_w + s->mb_y * s->uvlinesize * chroma_mb_h,
+ s->mecc.sse[1](NULL, s->new_pic->data[2] + s->mb_x * chroma_mb_w + s->mb_y * s->uvlinesize * chroma_mb_h,
s->dest[2], s->uvlinesize, chroma_mb_h);
}
else
- return sse(s, s->new_picture->data[0] + s->mb_x * 16 + s->mb_y * s->linesize * 16,
+ return sse(s, s->new_pic->data[0] + s->mb_x * 16 + s->mb_y * s->linesize * 16,
s->dest[0], w, h, s->linesize) +
- sse(s, s->new_picture->data[1] + s->mb_x * chroma_mb_w + s->mb_y * s->uvlinesize * chroma_mb_h,
+ sse(s, s->new_pic->data[1] + s->mb_x * chroma_mb_w + s->mb_y * s->uvlinesize * chroma_mb_h,
s->dest[1], w >> s->chroma_x_shift, h >> s->chroma_y_shift, s->uvlinesize) +
- sse(s, s->new_picture->data[2] + s->mb_x * chroma_mb_w + s->mb_y * s->uvlinesize * chroma_mb_h,
+ sse(s, s->new_pic->data[2] + s->mb_x * chroma_mb_w + s->mb_y * s->uvlinesize * chroma_mb_h,
s->dest[2], w >> s->chroma_x_shift, h >> s->chroma_y_shift, s->uvlinesize);
}
@@ -2739,7 +2735,7 @@ static int mb_var_thread(AVCodecContext *c, void *arg){
for(mb_x=0; mb_x < s->mb_width; mb_x++) {
int xx = mb_x * 16;
int yy = mb_y * 16;
- const uint8_t *pix = s->new_picture->data[0] + (yy * s->linesize) + xx;
+ const uint8_t *pix = s->new_pic->data[0] + (yy * s->linesize) + xx;
int varc;
int sum = s->mpvencdsp.pix_sum(pix, s->linesize);
@@ -3102,8 +3098,8 @@ static int encode_thread(AVCodecContext *c, void *arg){
s->mv_type = MV_TYPE_8X8;
s->mb_intra= 0;
for(i=0; i<4; i++){
- s->mv[0][i][0] = s->current_picture.motion_val[0][s->block_index[i]][0];
- s->mv[0][i][1] = s->current_picture.motion_val[0][s->block_index[i]][1];
+ s->mv[0][i][0] = s->cur_pic.motion_val[0][s->block_index[i]][0];
+ s->mv[0][i][1] = s->cur_pic.motion_val[0][s->block_index[i]][1];
}
encode_mb_hq(s, &backup_s, &best_s, pb, pb2, tex_pb,
&dmin, &next_block, 0, 0);
@@ -3290,7 +3286,7 @@ static int encode_thread(AVCodecContext *c, void *arg){
}
}
- s->current_picture.qscale_table[xy] = best_s.qscale;
+ s->cur_pic.qscale_table[xy] = best_s.qscale;
copy_context_after_encode(s, &best_s);
@@ -3357,8 +3353,8 @@ static int encode_thread(AVCodecContext *c, void *arg){
s->mv_type = MV_TYPE_8X8;
s->mb_intra= 0;
for(i=0; i<4; i++){
- s->mv[0][i][0] = s->current_picture.motion_val[0][s->block_index[i]][0];
- s->mv[0][i][1] = s->current_picture.motion_val[0][s->block_index[i]][1];
+ s->mv[0][i][0] = s->cur_pic.motion_val[0][s->block_index[i]][0];
+ s->mv[0][i][1] = s->cur_pic.motion_val[0][s->block_index[i]][1];
}
break;
case CANDIDATE_MB_TYPE_DIRECT:
@@ -3459,13 +3455,13 @@ static int encode_thread(AVCodecContext *c, void *arg){
if(s->mb_y*16 + 16 > s->height) h= s->height- s->mb_y*16;
s->encoding_error[0] += sse(
- s, s->new_picture->data[0] + s->mb_x*16 + s->mb_y*s->linesize*16,
+ s, s->new_pic->data[0] + s->mb_x*16 + s->mb_y*s->linesize*16,
s->dest[0], w, h, s->linesize);
s->encoding_error[1] += sse(
- s, s->new_picture->data[1] + s->mb_x*8 + s->mb_y*s->uvlinesize*chr_h,
+ s, s->new_pic->data[1] + s->mb_x*8 + s->mb_y*s->uvlinesize*chr_h,
s->dest[1], w>>1, h>>s->chroma_y_shift, s->uvlinesize);
s->encoding_error[2] += sse(
- s, s->new_picture->data[2] + s->mb_x*8 + s->mb_y*s->uvlinesize*chr_h,
+ s, s->new_pic->data[2] + s->mb_x*8 + s->mb_y*s->uvlinesize*chr_h,
s->dest[2], w>>1, h>>s->chroma_y_shift, s->uvlinesize);
}
if(s->loop_filter){
@@ -3522,14 +3518,14 @@ static void merge_context_after_encode(MpegEncContext *dst, MpegEncContext *src)
static int estimate_qp(MpegEncContext *s, int dry_run){
if (s->next_lambda){
- s->current_picture_ptr->f->quality =
- s->current_picture.f->quality = s->next_lambda;
+ s->cur_pic_ptr->f->quality =
+ s->cur_pic.f->quality = s->next_lambda;
if(!dry_run) s->next_lambda= 0;
} else if (!s->fixed_qscale) {
int quality = ff_rate_estimate_qscale(s, dry_run);
- s->current_picture_ptr->f->quality =
- s->current_picture.f->quality = quality;
- if (s->current_picture.f->quality < 0)
+ s->cur_pic_ptr->f->quality =
+ s->cur_pic.f->quality = quality;
+ if (s->cur_pic.f->quality < 0)
return -1;
}
@@ -3552,15 +3548,15 @@ static int estimate_qp(MpegEncContext *s, int dry_run){
s->lambda= s->lambda_table[0];
//FIXME broken
}else
- s->lambda = s->current_picture.f->quality;
+ s->lambda = s->cur_pic.f->quality;
update_qscale(s);
return 0;
}
/* must be called before writing the header */
static void set_frame_distances(MpegEncContext * s){
- av_assert1(s->current_picture_ptr->f->pts != AV_NOPTS_VALUE);
- s->time = s->current_picture_ptr->f->pts * s->avctx->time_base.num;
+ av_assert1(s->cur_pic_ptr->f->pts != AV_NOPTS_VALUE);
+ s->time = s->cur_pic_ptr->f->pts * s->avctx->time_base.num;
if(s->pict_type==AV_PICTURE_TYPE_B){
s->pb_time= s->pp_time - (s->last_non_b_time - s->time);
@@ -3591,7 +3587,7 @@ static int encode_picture(MpegEncContext *s)
s->me.scene_change_score=0;
-// s->lambda= s->current_picture_ptr->quality; //FIXME qscale / ... stuff for ME rate distortion
+// s->lambda= s->cur_pic_ptr->quality; //FIXME qscale / ... stuff for ME rate distortion
if(s->pict_type==AV_PICTURE_TYPE_I){
if(s->msmpeg4_version >= 3) s->no_rounding=1;
@@ -3781,16 +3777,16 @@ static int encode_picture(MpegEncContext *s)
//FIXME var duplication
if (s->pict_type == AV_PICTURE_TYPE_I) {
- s->current_picture_ptr->f->flags |= AV_FRAME_FLAG_KEY; //FIXME pic_ptr
- s->current_picture.f->flags |= AV_FRAME_FLAG_KEY;
+ s->cur_pic_ptr->f->flags |= AV_FRAME_FLAG_KEY; //FIXME pic_ptr
+ s->cur_pic.f->flags |= AV_FRAME_FLAG_KEY;
} else {
- s->current_picture_ptr->f->flags &= ~AV_FRAME_FLAG_KEY; //FIXME pic_ptr
- s->current_picture.f->flags &= ~AV_FRAME_FLAG_KEY;
+ s->cur_pic_ptr->f->flags &= ~AV_FRAME_FLAG_KEY; //FIXME pic_ptr
+ s->cur_pic.f->flags &= ~AV_FRAME_FLAG_KEY;
}
- s->current_picture_ptr->f->pict_type =
- s->current_picture.f->pict_type = s->pict_type;
+ s->cur_pic_ptr->f->pict_type =
+ s->cur_pic.f->pict_type = s->pict_type;
- if (s->current_picture.f->flags & AV_FRAME_FLAG_KEY)
+ if (s->cur_pic.f->flags & AV_FRAME_FLAG_KEY)
s->picture_in_gop_number=0;
s->mb_x = s->mb_y = 0;
diff --git a/libavcodec/mpegvideo_motion.c b/libavcodec/mpegvideo_motion.c
index 56bdce59c0..3824832f9d 100644
--- a/libavcodec/mpegvideo_motion.c
+++ b/libavcodec/mpegvideo_motion.c
@@ -93,8 +93,8 @@ void mpeg_motion_internal(MpegEncContext *s,
ptrdiff_t uvlinesize, linesize;
v_edge_pos = s->v_edge_pos >> field_based;
- linesize = s->current_picture.f->linesize[0] << field_based;
- uvlinesize = s->current_picture.f->linesize[1] << field_based;
+ linesize = s->cur_pic.f->linesize[0] << field_based;
+ uvlinesize = s->cur_pic.f->linesize[1] << field_based;
block_y_half = (field_based | is_16x8);
dxy = ((motion_y & 1) << 1) | (motion_x & 1);
@@ -514,7 +514,7 @@ static inline void apply_obmc(MpegEncContext *s,
op_pixels_func (*pix_op)[4])
{
LOCAL_ALIGNED_8(int16_t, mv_cache, [4], [4][2]);
- const Picture *cur_frame = &s->current_picture;
+ const Picture *cur_frame = &s->cur_pic;
int mb_x = s->mb_x;
int mb_y = s->mb_y;
const int xy = mb_x + mb_y * s->mb_stride;
@@ -749,7 +749,7 @@ static av_always_inline void mpv_motion_internal(MpegEncContext *s,
av_assert2(s->out_format == FMT_MPEG1);
if (s->picture_structure != s->field_select[dir][0] + 1 &&
s->pict_type != AV_PICTURE_TYPE_B && !s->first_field) {
- ref_picture = s->current_picture_ptr->f->data;
+ ref_picture = s->cur_pic_ptr->f->data;
}
mpeg_motion(s, dest_y, dest_cb, dest_cr,
@@ -767,7 +767,7 @@ static av_always_inline void mpv_motion_internal(MpegEncContext *s,
s->pict_type == AV_PICTURE_TYPE_B || s->first_field) {
ref2picture = ref_picture;
} else {
- ref2picture = s->current_picture_ptr->f->data;
+ ref2picture = s->cur_pic_ptr->f->data;
}
mpeg_motion(s, dest_y, dest_cb, dest_cr,
@@ -807,7 +807,7 @@ static av_always_inline void mpv_motion_internal(MpegEncContext *s,
/* opposite parity is always in the same frame if this is
* second field */
if (!s->first_field)
- ref_picture = s->current_picture_ptr->f->data;
+ ref_picture = s->cur_pic_ptr->f->data;
}
}
break;
diff --git a/libavcodec/mpv_reconstruct_mb_template.c b/libavcodec/mpv_reconstruct_mb_template.c
index 6f7a5fb1b4..febada041a 100644
--- a/libavcodec/mpv_reconstruct_mb_template.c
+++ b/libavcodec/mpv_reconstruct_mb_template.c
@@ -59,7 +59,7 @@ void mpv_reconstruct_mb_internal(MpegEncContext *s, int16_t block[12][64],
#define IS_MPEG12(s) (is_mpeg12 == MAY_BE_MPEG12 ? ((s)->out_format == FMT_MPEG1) : is_mpeg12)
const int mb_xy = s->mb_y * s->mb_stride + s->mb_x;
- s->current_picture.qscale_table[mb_xy] = s->qscale;
+ s->cur_pic.qscale_table[mb_xy] = s->qscale;
/* update DC predictors for P macroblocks */
if (!s->mb_intra) {
@@ -82,8 +82,8 @@ void mpv_reconstruct_mb_internal(MpegEncContext *s, int16_t block[12][64],
{
uint8_t *dest_y, *dest_cb, *dest_cr;
int dct_linesize, dct_offset;
- const int linesize = s->current_picture.f->linesize[0]; //not s->linesize as this would be wrong for field pics
- const int uvlinesize = s->current_picture.f->linesize[1];
+ const int linesize = s->cur_pic.f->linesize[0]; //not s->linesize as this would be wrong for field pics
+ const int uvlinesize = s->cur_pic.f->linesize[1];
const int readable = IS_ENCODER || lowres_flag || s->pict_type != AV_PICTURE_TYPE_B;
const int block_size = lowres_flag ? 8 >> s->avctx->lowres : 8;
@@ -96,7 +96,7 @@ void mpv_reconstruct_mb_internal(MpegEncContext *s, int16_t block[12][64],
s->mb_skipped = 0;
av_assert2(s->pict_type!=AV_PICTURE_TYPE_I);
*mbskip_ptr = 1;
- } else if(!s->current_picture.reference) {
+ } else if (!s->cur_pic.reference) {
*mbskip_ptr = 1;
} else{
*mbskip_ptr = 0; /* not skipped */
@@ -124,11 +124,11 @@ void mpv_reconstruct_mb_internal(MpegEncContext *s, int16_t block[12][64],
if (HAVE_THREADS && is_mpeg12 != DEFINITELY_MPEG12 &&
s->avctx->active_thread_type & FF_THREAD_FRAME) {
if (s->mv_dir & MV_DIR_FORWARD) {
- ff_thread_await_progress(&s->last_picture_ptr->tf,
+ ff_thread_await_progress(&s->last_pic_ptr->tf,
lowest_referenced_row(s, 0), 0);
}
if (s->mv_dir & MV_DIR_BACKWARD) {
- ff_thread_await_progress(&s->next_picture_ptr->tf,
+ ff_thread_await_progress(&s->next_pic_ptr->tf,
lowest_referenced_row(s, 1), 0);
}
}
@@ -137,11 +137,11 @@ void mpv_reconstruct_mb_internal(MpegEncContext *s, int16_t block[12][64],
const h264_chroma_mc_func *op_pix = s->h264chroma.put_h264_chroma_pixels_tab;
if (s->mv_dir & MV_DIR_FORWARD) {
- MPV_motion_lowres(s, dest_y, dest_cb, dest_cr, 0, s->last_picture.f->data, op_pix);
+ MPV_motion_lowres(s, dest_y, dest_cb, dest_cr, 0, s->last_pic.f->data, op_pix);
op_pix = s->h264chroma.avg_h264_chroma_pixels_tab;
}
if (s->mv_dir & MV_DIR_BACKWARD) {
- MPV_motion_lowres(s, dest_y, dest_cb, dest_cr, 1, s->next_picture.f->data, op_pix);
+ MPV_motion_lowres(s, dest_y, dest_cb, dest_cr, 1, s->next_pic.f->data, op_pix);
}
} else {
op_pixels_func (*op_pix)[4];
@@ -155,12 +155,12 @@ void mpv_reconstruct_mb_internal(MpegEncContext *s, int16_t block[12][64],
op_qpix = s->qdsp.put_no_rnd_qpel_pixels_tab;
}
if (s->mv_dir & MV_DIR_FORWARD) {
- ff_mpv_motion(s, dest_y, dest_cb, dest_cr, 0, s->last_picture.f->data, op_pix, op_qpix);
+ ff_mpv_motion(s, dest_y, dest_cb, dest_cr, 0, s->last_pic.f->data, op_pix, op_qpix);
op_pix = s->hdsp.avg_pixels_tab;
op_qpix = s->qdsp.avg_qpel_pixels_tab;
}
if (s->mv_dir & MV_DIR_BACKWARD) {
- ff_mpv_motion(s, dest_y, dest_cb, dest_cr, 1, s->next_picture.f->data, op_pix, op_qpix);
+ ff_mpv_motion(s, dest_y, dest_cb, dest_cr, 1, s->next_pic.f->data, op_pix, op_qpix);
}
}
diff --git a/libavcodec/msmpeg4.c b/libavcodec/msmpeg4.c
index e327bf36a7..323f083f8f 100644
--- a/libavcodec/msmpeg4.c
+++ b/libavcodec/msmpeg4.c
@@ -282,10 +282,10 @@ int ff_msmpeg4_pred_dc(MpegEncContext *s, int n,
int bs = 8 >> s->avctx->lowres;
if(n<4){
wrap= s->linesize;
- dest= s->current_picture.f->data[0] + (((n >> 1) + 2*s->mb_y) * bs* wrap ) + ((n & 1) + 2*s->mb_x) * bs;
+ dest = s->cur_pic.f->data[0] + (((n >> 1) + 2*s->mb_y) * bs* wrap ) + ((n & 1) + 2*s->mb_x) * bs;
}else{
wrap= s->uvlinesize;
- dest= s->current_picture.f->data[n - 3] + (s->mb_y * bs * wrap) + s->mb_x * bs;
+ dest = s->cur_pic.f->data[n - 3] + (s->mb_y * bs * wrap) + s->mb_x * bs;
}
if(s->mb_x==0) a= (1024 + (scale>>1))/scale;
else a= get_dc(dest-bs, wrap, scale*8>>(2*s->avctx->lowres), bs);
diff --git a/libavcodec/msmpeg4dec.c b/libavcodec/msmpeg4dec.c
index bf1e4877bd..c354f46c50 100644
--- a/libavcodec/msmpeg4dec.c
+++ b/libavcodec/msmpeg4dec.c
@@ -105,7 +105,7 @@ static int msmpeg4v2_decode_motion(MpegEncContext * s, int pred, int f_code)
static int msmpeg4v12_decode_mb(MpegEncContext *s, int16_t block[6][64])
{
int cbp, code, i;
- uint32_t * const mb_type_ptr = &s->current_picture.mb_type[s->mb_x + s->mb_y*s->mb_stride];
+ uint32_t * const mb_type_ptr = &s->cur_pic.mb_type[s->mb_x + s->mb_y*s->mb_stride];
if (s->pict_type == AV_PICTURE_TYPE_P) {
if (s->use_skip_mb_code) {
@@ -207,7 +207,7 @@ static int msmpeg4v34_decode_mb(MpegEncContext *s, int16_t block[6][64])
{
int cbp, code, i;
uint8_t *coded_val;
- uint32_t * const mb_type_ptr = &s->current_picture.mb_type[s->mb_x + s->mb_y*s->mb_stride];
+ uint32_t * const mb_type_ptr = &s->cur_pic.mb_type[s->mb_x + s->mb_y*s->mb_stride];
if (get_bits_left(&s->gb) <= 0)
return AVERROR_INVALIDDATA;
diff --git a/libavcodec/mss2.c b/libavcodec/mss2.c
index dd0d403338..6a4b5aeb59 100644
--- a/libavcodec/mss2.c
+++ b/libavcodec/mss2.c
@@ -431,7 +431,7 @@ static int decode_wmv9(AVCodecContext *avctx, const uint8_t *buf, int buf_size,
ff_mpv_frame_end(s);
- f = s->current_picture.f;
+ f = s->cur_pic.f;
if (v->respic == 3) {
ctx->dsp.upsample_plane(f->data[0], f->linesize[0], w, h);
diff --git a/libavcodec/nvdec_mpeg12.c b/libavcodec/nvdec_mpeg12.c
index 139f287617..76ef81ea4d 100644
--- a/libavcodec/nvdec_mpeg12.c
+++ b/libavcodec/nvdec_mpeg12.c
@@ -39,7 +39,7 @@ static int nvdec_mpeg12_start_frame(AVCodecContext *avctx, const uint8_t *buffer
CUVIDMPEG2PICPARAMS *ppc = &pp->CodecSpecific.mpeg2;
FrameDecodeData *fdd;
NVDECFrame *cf;
- AVFrame *cur_frame = s->current_picture.f;
+ AVFrame *cur_frame = s->cur_pic.f;
int ret, i;
@@ -64,8 +64,8 @@ static int nvdec_mpeg12_start_frame(AVCodecContext *avctx, const uint8_t *buffer
s->pict_type == AV_PICTURE_TYPE_P,
.CodecSpecific.mpeg2 = {
- .ForwardRefIdx = ff_nvdec_get_ref_idx(s->last_picture.f),
- .BackwardRefIdx = ff_nvdec_get_ref_idx(s->next_picture.f),
+ .ForwardRefIdx = ff_nvdec_get_ref_idx(s->last_pic.f),
+ .BackwardRefIdx = ff_nvdec_get_ref_idx(s->next_pic.f),
.picture_coding_type = s->pict_type,
.full_pel_forward_vector = s->full_pel[0],
diff --git a/libavcodec/nvdec_mpeg4.c b/libavcodec/nvdec_mpeg4.c
index 20a0499437..468002d1c5 100644
--- a/libavcodec/nvdec_mpeg4.c
+++ b/libavcodec/nvdec_mpeg4.c
@@ -38,7 +38,7 @@ static int nvdec_mpeg4_start_frame(AVCodecContext *avctx, const uint8_t *buffer,
CUVIDMPEG4PICPARAMS *ppc = &pp->CodecSpecific.mpeg4;
FrameDecodeData *fdd;
NVDECFrame *cf;
- AVFrame *cur_frame = s->current_picture.f;
+ AVFrame *cur_frame = s->cur_pic.f;
int ret, i;
@@ -60,8 +60,8 @@ static int nvdec_mpeg4_start_frame(AVCodecContext *avctx, const uint8_t *buffer,
s->pict_type == AV_PICTURE_TYPE_S,
.CodecSpecific.mpeg4 = {
- .ForwardRefIdx = ff_nvdec_get_ref_idx(s->last_picture.f),
- .BackwardRefIdx = ff_nvdec_get_ref_idx(s->next_picture.f),
+ .ForwardRefIdx = ff_nvdec_get_ref_idx(s->last_pic.f),
+ .BackwardRefIdx = ff_nvdec_get_ref_idx(s->next_pic.f),
.video_object_layer_width = s->width,
.video_object_layer_height = s->height,
diff --git a/libavcodec/nvdec_vc1.c b/libavcodec/nvdec_vc1.c
index 5096d784df..40cd18a8e7 100644
--- a/libavcodec/nvdec_vc1.c
+++ b/libavcodec/nvdec_vc1.c
@@ -38,7 +38,7 @@ static int nvdec_vc1_start_frame(AVCodecContext *avctx, const uint8_t *buffer, u
CUVIDPICPARAMS *pp = &ctx->pic_params;
FrameDecodeData *fdd;
NVDECFrame *cf;
- AVFrame *cur_frame = s->current_picture.f;
+ AVFrame *cur_frame = s->cur_pic.f;
int ret;
@@ -63,8 +63,8 @@ static int nvdec_vc1_start_frame(AVCodecContext *avctx, const uint8_t *buffer, u
s->pict_type == AV_PICTURE_TYPE_P,
.CodecSpecific.vc1 = {
- .ForwardRefIdx = ff_nvdec_get_ref_idx(s->last_picture.f),
- .BackwardRefIdx = ff_nvdec_get_ref_idx(s->next_picture.f),
+ .ForwardRefIdx = ff_nvdec_get_ref_idx(s->last_pic.f),
+ .BackwardRefIdx = ff_nvdec_get_ref_idx(s->next_pic.f),
.FrameWidth = cur_frame->width,
.FrameHeight = cur_frame->height,
diff --git a/libavcodec/ratecontrol.c b/libavcodec/ratecontrol.c
index ecc157a9eb..e4d18ff669 100644
--- a/libavcodec/ratecontrol.c
+++ b/libavcodec/ratecontrol.c
@@ -40,10 +40,10 @@ void ff_write_pass1_stats(MpegEncContext *s)
snprintf(s->avctx->stats_out, 256,
"in:%d out:%d type:%d q:%d itex:%d ptex:%d mv:%d misc:%d "
"fcode:%d bcode:%d mc-var:%"PRId64" var:%"PRId64" icount:%d hbits:%d;\n",
- s->current_picture_ptr->display_picture_number,
- s->current_picture_ptr->coded_picture_number,
+ s->cur_pic_ptr->display_picture_number,
+ s->cur_pic_ptr->coded_picture_number,
s->pict_type,
- s->current_picture.f->quality,
+ s->cur_pic.f->quality,
s->i_tex_bits,
s->p_tex_bits,
s->mv_bits,
@@ -936,9 +936,9 @@ float ff_rate_estimate_qscale(MpegEncContext *s, int dry_run)
* here instead of reordering but the reordering is simpler for now
* until H.264 B-pyramid must be handled. */
if (s->pict_type == AV_PICTURE_TYPE_B || s->low_delay)
- dts_pic = s->current_picture_ptr;
+ dts_pic = s->cur_pic_ptr;
else
- dts_pic = s->last_picture_ptr;
+ dts_pic = s->last_pic_ptr;
if (!dts_pic || dts_pic->f->pts == AV_NOPTS_VALUE)
wanted_bits = (uint64_t)(s->bit_rate * (double)picture_number / fps);
diff --git a/libavcodec/rv10.c b/libavcodec/rv10.c
index df487b24a9..aea42dd314 100644
--- a/libavcodec/rv10.c
+++ b/libavcodec/rv10.c
@@ -170,7 +170,7 @@ static int rv20_decode_picture_header(RVDecContext *rv, int whole_size)
av_log(s->avctx, AV_LOG_ERROR, "low delay B\n");
return -1;
}
- if (!s->last_picture_ptr && s->pict_type == AV_PICTURE_TYPE_B) {
+ if (!s->last_pic_ptr && s->pict_type == AV_PICTURE_TYPE_B) {
av_log(s->avctx, AV_LOG_ERROR, "early B-frame\n");
return AVERROR_INVALIDDATA;
}
@@ -458,9 +458,9 @@ static int rv10_decode_packet(AVCodecContext *avctx, const uint8_t *buf,
if (whole_size < s->mb_width * s->mb_height / 8)
return AVERROR_INVALIDDATA;
- if ((s->mb_x == 0 && s->mb_y == 0) || !s->current_picture_ptr) {
+ if ((s->mb_x == 0 && s->mb_y == 0) || !s->cur_pic_ptr) {
// FIXME write parser so we always have complete frames?
- if (s->current_picture_ptr) {
+ if (s->cur_pic_ptr) {
ff_er_frame_end(&s->er, NULL);
ff_mpv_frame_end(s);
s->mb_x = s->mb_y = s->resync_mb_x = s->resync_mb_y = 0;
@@ -469,7 +469,7 @@ static int rv10_decode_packet(AVCodecContext *avctx, const uint8_t *buf,
return ret;
ff_mpeg_er_frame_start(s);
} else {
- if (s->current_picture_ptr->f->pict_type != s->pict_type) {
+ if (s->cur_pic_ptr->f->pict_type != s->pict_type) {
av_log(s->avctx, AV_LOG_ERROR, "Slice type mismatch\n");
return AVERROR_INVALIDDATA;
}
@@ -632,28 +632,28 @@ static int rv10_decode_frame(AVCodecContext *avctx, AVFrame *pict,
i++;
}
- if (s->current_picture_ptr && s->mb_y >= s->mb_height) {
+ if (s->cur_pic_ptr && s->mb_y >= s->mb_height) {
ff_er_frame_end(&s->er, NULL);
ff_mpv_frame_end(s);
if (s->pict_type == AV_PICTURE_TYPE_B || s->low_delay) {
- if ((ret = av_frame_ref(pict, s->current_picture_ptr->f)) < 0)
+ if ((ret = av_frame_ref(pict, s->cur_pic_ptr->f)) < 0)
return ret;
- ff_print_debug_info(s, s->current_picture_ptr, pict);
- ff_mpv_export_qp_table(s, pict, s->current_picture_ptr, FF_MPV_QSCALE_TYPE_MPEG1);
- } else if (s->last_picture_ptr) {
- if ((ret = av_frame_ref(pict, s->last_picture_ptr->f)) < 0)
+ ff_print_debug_info(s, s->cur_pic_ptr, pict);
+ ff_mpv_export_qp_table(s, pict, s->cur_pic_ptr, FF_MPV_QSCALE_TYPE_MPEG1);
+ } else if (s->last_pic_ptr) {
+ if ((ret = av_frame_ref(pict, s->last_pic_ptr->f)) < 0)
return ret;
- ff_print_debug_info(s, s->last_picture_ptr, pict);
- ff_mpv_export_qp_table(s, pict,s->last_picture_ptr, FF_MPV_QSCALE_TYPE_MPEG1);
+ ff_print_debug_info(s, s->last_pic_ptr, pict);
+ ff_mpv_export_qp_table(s, pict,s->last_pic_ptr, FF_MPV_QSCALE_TYPE_MPEG1);
}
- if (s->last_picture_ptr || s->low_delay) {
+ if (s->last_pic_ptr || s->low_delay) {
*got_frame = 1;
}
// so we can detect if frame_end was not called (find some nicer solution...)
- s->current_picture_ptr = NULL;
+ s->cur_pic_ptr = NULL;
}
return avpkt->size;
diff --git a/libavcodec/rv30.c b/libavcodec/rv30.c
index 316962fbbb..a4e38edf54 100644
--- a/libavcodec/rv30.c
+++ b/libavcodec/rv30.c
@@ -160,7 +160,7 @@ static void rv30_loop_filter(RV34DecContext *r, int row)
mb_pos = row * s->mb_stride;
for(mb_x = 0; mb_x < s->mb_width; mb_x++, mb_pos++){
- int mbtype = s->current_picture_ptr->mb_type[mb_pos];
+ int mbtype = s->cur_pic_ptr->mb_type[mb_pos];
if(IS_INTRA(mbtype) || IS_SEPARATE_DC(mbtype))
r->deblock_coefs[mb_pos] = 0xFFFF;
if(IS_INTRA(mbtype))
@@ -172,11 +172,11 @@ static void rv30_loop_filter(RV34DecContext *r, int row)
*/
mb_pos = row * s->mb_stride;
for(mb_x = 0; mb_x < s->mb_width; mb_x++, mb_pos++){
- cur_lim = rv30_loop_filt_lim[s->current_picture_ptr->qscale_table[mb_pos]];
+ cur_lim = rv30_loop_filt_lim[s->cur_pic_ptr->qscale_table[mb_pos]];
if(mb_x)
- left_lim = rv30_loop_filt_lim[s->current_picture_ptr->qscale_table[mb_pos - 1]];
+ left_lim = rv30_loop_filt_lim[s->cur_pic_ptr->qscale_table[mb_pos - 1]];
for(j = 0; j < 16; j += 4){
- Y = s->current_picture_ptr->f->data[0] + mb_x*16 + (row*16 + j) * s->linesize + 4 * !mb_x;
+ Y = s->cur_pic_ptr->f->data[0] + mb_x*16 + (row*16 + j) * s->linesize + 4 * !mb_x;
for(i = !mb_x; i < 4; i++, Y += 4){
int ij = i + j;
loc_lim = 0;
@@ -196,7 +196,7 @@ static void rv30_loop_filter(RV34DecContext *r, int row)
if(mb_x)
left_cbp = (r->cbp_chroma[mb_pos - 1] >> (k*4)) & 0xF;
for(j = 0; j < 8; j += 4){
- C = s->current_picture_ptr->f->data[k + 1] + mb_x*8 + (row*8 + j) * s->uvlinesize + 4 * !mb_x;
+ C = s->cur_pic_ptr->f->data[k + 1] + mb_x*8 + (row*8 + j) * s->uvlinesize + 4 * !mb_x;
for(i = !mb_x; i < 2; i++, C += 4){
int ij = i + (j >> 1);
loc_lim = 0;
@@ -214,11 +214,11 @@ static void rv30_loop_filter(RV34DecContext *r, int row)
}
mb_pos = row * s->mb_stride;
for(mb_x = 0; mb_x < s->mb_width; mb_x++, mb_pos++){
- cur_lim = rv30_loop_filt_lim[s->current_picture_ptr->qscale_table[mb_pos]];
+ cur_lim = rv30_loop_filt_lim[s->cur_pic_ptr->qscale_table[mb_pos]];
if(row)
- top_lim = rv30_loop_filt_lim[s->current_picture_ptr->qscale_table[mb_pos - s->mb_stride]];
+ top_lim = rv30_loop_filt_lim[s->cur_pic_ptr->qscale_table[mb_pos - s->mb_stride]];
for(j = 4*!row; j < 16; j += 4){
- Y = s->current_picture_ptr->f->data[0] + mb_x*16 + (row*16 + j) * s->linesize;
+ Y = s->cur_pic_ptr->f->data[0] + mb_x*16 + (row*16 + j) * s->linesize;
for(i = 0; i < 4; i++, Y += 4){
int ij = i + j;
loc_lim = 0;
@@ -238,7 +238,7 @@ static void rv30_loop_filter(RV34DecContext *r, int row)
if(row)
top_cbp = (r->cbp_chroma[mb_pos - s->mb_stride] >> (k*4)) & 0xF;
for(j = 4*!row; j < 8; j += 4){
- C = s->current_picture_ptr->f->data[k+1] + mb_x*8 + (row*8 + j) * s->uvlinesize;
+ C = s->cur_pic_ptr->f->data[k+1] + mb_x*8 + (row*8 + j) * s->uvlinesize;
for(i = 0; i < 2; i++, C += 4){
int ij = i + (j >> 1);
loc_lim = 0;
diff --git a/libavcodec/rv34.c b/libavcodec/rv34.c
index 23a570bb80..467a6ab5a1 100644
--- a/libavcodec/rv34.c
+++ b/libavcodec/rv34.c
@@ -367,7 +367,7 @@ static int rv34_decode_intra_mb_header(RV34DecContext *r, int8_t *intra_types)
r->is16 = get_bits1(gb);
if(r->is16){
- s->current_picture_ptr->mb_type[mb_pos] = MB_TYPE_INTRA16x16;
+ s->cur_pic_ptr->mb_type[mb_pos] = MB_TYPE_INTRA16x16;
r->block_type = RV34_MB_TYPE_INTRA16x16;
t = get_bits(gb, 2);
fill_rectangle(intra_types, 4, 4, r->intra_types_stride, t, sizeof(intra_types[0]));
@@ -377,7 +377,7 @@ static int rv34_decode_intra_mb_header(RV34DecContext *r, int8_t *intra_types)
if(!get_bits1(gb))
av_log(s->avctx, AV_LOG_ERROR, "Need DQUANT\n");
}
- s->current_picture_ptr->mb_type[mb_pos] = MB_TYPE_INTRA;
+ s->cur_pic_ptr->mb_type[mb_pos] = MB_TYPE_INTRA;
r->block_type = RV34_MB_TYPE_INTRA;
if(r->decode_intra_types(r, gb, intra_types) < 0)
return -1;
@@ -403,7 +403,7 @@ static int rv34_decode_inter_mb_header(RV34DecContext *r, int8_t *intra_types)
r->block_type = r->decode_mb_info(r);
if(r->block_type == -1)
return -1;
- s->current_picture_ptr->mb_type[mb_pos] = rv34_mb_type_to_lavc[r->block_type];
+ s->cur_pic_ptr->mb_type[mb_pos] = rv34_mb_type_to_lavc[r->block_type];
r->mb_type[mb_pos] = r->block_type;
if(r->block_type == RV34_MB_SKIP){
if(s->pict_type == AV_PICTURE_TYPE_P)
@@ -411,7 +411,7 @@ static int rv34_decode_inter_mb_header(RV34DecContext *r, int8_t *intra_types)
if(s->pict_type == AV_PICTURE_TYPE_B)
r->mb_type[mb_pos] = RV34_MB_B_DIRECT;
}
- r->is16 = !!IS_INTRA16x16(s->current_picture_ptr->mb_type[mb_pos]);
+ r->is16 = !!IS_INTRA16x16(s->cur_pic_ptr->mb_type[mb_pos]);
if (rv34_decode_mv(r, r->block_type) < 0)
return -1;
if(r->block_type == RV34_MB_SKIP){
@@ -421,7 +421,7 @@ static int rv34_decode_inter_mb_header(RV34DecContext *r, int8_t *intra_types)
r->chroma_vlc = 1;
r->luma_vlc = 0;
- if(IS_INTRA(s->current_picture_ptr->mb_type[mb_pos])){
+ if(IS_INTRA(s->cur_pic_ptr->mb_type[mb_pos])){
if(r->is16){
t = get_bits(gb, 2);
fill_rectangle(intra_types, 4, 4, r->intra_types_stride, t, sizeof(intra_types[0]));
@@ -486,27 +486,27 @@ static void rv34_pred_mv(RV34DecContext *r, int block_type, int subblock_no, int
c_off = -1;
if(avail[-1]){
- A[0] = s->current_picture_ptr->motion_val[0][mv_pos-1][0];
- A[1] = s->current_picture_ptr->motion_val[0][mv_pos-1][1];
+ A[0] = s->cur_pic_ptr->motion_val[0][mv_pos-1][0];
+ A[1] = s->cur_pic_ptr->motion_val[0][mv_pos-1][1];
}
if(avail[-4]){
- B[0] = s->current_picture_ptr->motion_val[0][mv_pos-s->b8_stride][0];
- B[1] = s->current_picture_ptr->motion_val[0][mv_pos-s->b8_stride][1];
+ B[0] = s->cur_pic_ptr->motion_val[0][mv_pos-s->b8_stride][0];
+ B[1] = s->cur_pic_ptr->motion_val[0][mv_pos-s->b8_stride][1];
}else{
B[0] = A[0];
B[1] = A[1];
}
if(!avail[c_off-4]){
if(avail[-4] && (avail[-1] || r->rv30)){
- C[0] = s->current_picture_ptr->motion_val[0][mv_pos-s->b8_stride-1][0];
- C[1] = s->current_picture_ptr->motion_val[0][mv_pos-s->b8_stride-1][1];
+ C[0] = s->cur_pic_ptr->motion_val[0][mv_pos-s->b8_stride-1][0];
+ C[1] = s->cur_pic_ptr->motion_val[0][mv_pos-s->b8_stride-1][1];
}else{
C[0] = A[0];
C[1] = A[1];
}
}else{
- C[0] = s->current_picture_ptr->motion_val[0][mv_pos-s->b8_stride+c_off][0];
- C[1] = s->current_picture_ptr->motion_val[0][mv_pos-s->b8_stride+c_off][1];
+ C[0] = s->cur_pic_ptr->motion_val[0][mv_pos-s->b8_stride+c_off][0];
+ C[1] = s->cur_pic_ptr->motion_val[0][mv_pos-s->b8_stride+c_off][1];
}
mx = mid_pred(A[0], B[0], C[0]);
my = mid_pred(A[1], B[1], C[1]);
@@ -514,8 +514,8 @@ static void rv34_pred_mv(RV34DecContext *r, int block_type, int subblock_no, int
my += r->dmv[dmv_no][1];
for(j = 0; j < part_sizes_h[block_type]; j++){
for(i = 0; i < part_sizes_w[block_type]; i++){
- s->current_picture_ptr->motion_val[0][mv_pos + i + j*s->b8_stride][0] = mx;
- s->current_picture_ptr->motion_val[0][mv_pos + i + j*s->b8_stride][1] = my;
+ s->cur_pic_ptr->motion_val[0][mv_pos + i + j*s->b8_stride][0] = mx;
+ s->cur_pic_ptr->motion_val[0][mv_pos + i + j*s->b8_stride][1] = my;
}
}
}
@@ -564,7 +564,7 @@ static void rv34_pred_mv_b(RV34DecContext *r, int block_type, int dir)
int has_A = 0, has_B = 0, has_C = 0;
int mx, my;
int i, j;
- Picture *cur_pic = s->current_picture_ptr;
+ Picture *cur_pic = s->cur_pic_ptr;
const int mask = dir ? MB_TYPE_L1 : MB_TYPE_L0;
int type = cur_pic->mb_type[mb_pos];
@@ -617,27 +617,27 @@ static void rv34_pred_mv_rv3(RV34DecContext *r, int block_type, int dir)
int* avail = r->avail_cache + avail_indexes[0];
if(avail[-1]){
- A[0] = s->current_picture_ptr->motion_val[0][mv_pos - 1][0];
- A[1] = s->current_picture_ptr->motion_val[0][mv_pos - 1][1];
+ A[0] = s->cur_pic_ptr->motion_val[0][mv_pos - 1][0];
+ A[1] = s->cur_pic_ptr->motion_val[0][mv_pos - 1][1];
}
if(avail[-4]){
- B[0] = s->current_picture_ptr->motion_val[0][mv_pos - s->b8_stride][0];
- B[1] = s->current_picture_ptr->motion_val[0][mv_pos - s->b8_stride][1];
+ B[0] = s->cur_pic_ptr->motion_val[0][mv_pos - s->b8_stride][0];
+ B[1] = s->cur_pic_ptr->motion_val[0][mv_pos - s->b8_stride][1];
}else{
B[0] = A[0];
B[1] = A[1];
}
if(!avail[-4 + 2]){
if(avail[-4] && (avail[-1])){
- C[0] = s->current_picture_ptr->motion_val[0][mv_pos - s->b8_stride - 1][0];
- C[1] = s->current_picture_ptr->motion_val[0][mv_pos - s->b8_stride - 1][1];
+ C[0] = s->cur_pic_ptr->motion_val[0][mv_pos - s->b8_stride - 1][0];
+ C[1] = s->cur_pic_ptr->motion_val[0][mv_pos - s->b8_stride - 1][1];
}else{
C[0] = A[0];
C[1] = A[1];
}
}else{
- C[0] = s->current_picture_ptr->motion_val[0][mv_pos - s->b8_stride + 2][0];
- C[1] = s->current_picture_ptr->motion_val[0][mv_pos - s->b8_stride + 2][1];
+ C[0] = s->cur_pic_ptr->motion_val[0][mv_pos - s->b8_stride + 2][0];
+ C[1] = s->cur_pic_ptr->motion_val[0][mv_pos - s->b8_stride + 2][1];
}
mx = mid_pred(A[0], B[0], C[0]);
my = mid_pred(A[1], B[1], C[1]);
@@ -646,8 +646,8 @@ static void rv34_pred_mv_rv3(RV34DecContext *r, int block_type, int dir)
for(j = 0; j < 2; j++){
for(i = 0; i < 2; i++){
for(k = 0; k < 2; k++){
- s->current_picture_ptr->motion_val[k][mv_pos + i + j*s->b8_stride][0] = mx;
- s->current_picture_ptr->motion_val[k][mv_pos + i + j*s->b8_stride][1] = my;
+ s->cur_pic_ptr->motion_val[k][mv_pos + i + j*s->b8_stride][0] = mx;
+ s->cur_pic_ptr->motion_val[k][mv_pos + i + j*s->b8_stride][1] = my;
}
}
}
@@ -686,24 +686,24 @@ static inline void rv34_mc(RV34DecContext *r, const int block_type,
if(thirdpel){
int chroma_mx, chroma_my;
- mx = (s->current_picture_ptr->motion_val[dir][mv_pos][0] + (3 << 24)) / 3 - (1 << 24);
- my = (s->current_picture_ptr->motion_val[dir][mv_pos][1] + (3 << 24)) / 3 - (1 << 24);
- lx = (s->current_picture_ptr->motion_val[dir][mv_pos][0] + (3 << 24)) % 3;
- ly = (s->current_picture_ptr->motion_val[dir][mv_pos][1] + (3 << 24)) % 3;
- chroma_mx = s->current_picture_ptr->motion_val[dir][mv_pos][0] / 2;
- chroma_my = s->current_picture_ptr->motion_val[dir][mv_pos][1] / 2;
+ mx = (s->cur_pic_ptr->motion_val[dir][mv_pos][0] + (3 << 24)) / 3 - (1 << 24);
+ my = (s->cur_pic_ptr->motion_val[dir][mv_pos][1] + (3 << 24)) / 3 - (1 << 24);
+ lx = (s->cur_pic_ptr->motion_val[dir][mv_pos][0] + (3 << 24)) % 3;
+ ly = (s->cur_pic_ptr->motion_val[dir][mv_pos][1] + (3 << 24)) % 3;
+ chroma_mx = s->cur_pic_ptr->motion_val[dir][mv_pos][0] / 2;
+ chroma_my = s->cur_pic_ptr->motion_val[dir][mv_pos][1] / 2;
umx = (chroma_mx + (3 << 24)) / 3 - (1 << 24);
umy = (chroma_my + (3 << 24)) / 3 - (1 << 24);
uvmx = chroma_coeffs[(chroma_mx + (3 << 24)) % 3];
uvmy = chroma_coeffs[(chroma_my + (3 << 24)) % 3];
}else{
int cx, cy;
- mx = s->current_picture_ptr->motion_val[dir][mv_pos][0] >> 2;
- my = s->current_picture_ptr->motion_val[dir][mv_pos][1] >> 2;
- lx = s->current_picture_ptr->motion_val[dir][mv_pos][0] & 3;
- ly = s->current_picture_ptr->motion_val[dir][mv_pos][1] & 3;
- cx = s->current_picture_ptr->motion_val[dir][mv_pos][0] / 2;
- cy = s->current_picture_ptr->motion_val[dir][mv_pos][1] / 2;
+ mx = s->cur_pic_ptr->motion_val[dir][mv_pos][0] >> 2;
+ my = s->cur_pic_ptr->motion_val[dir][mv_pos][1] >> 2;
+ lx = s->cur_pic_ptr->motion_val[dir][mv_pos][0] & 3;
+ ly = s->cur_pic_ptr->motion_val[dir][mv_pos][1] & 3;
+ cx = s->cur_pic_ptr->motion_val[dir][mv_pos][0] / 2;
+ cy = s->cur_pic_ptr->motion_val[dir][mv_pos][1] / 2;
umx = cx >> 2;
umy = cy >> 2;
uvmx = (cx & 3) << 1;
@@ -716,14 +716,14 @@ static inline void rv34_mc(RV34DecContext *r, const int block_type,
if (HAVE_THREADS && (s->avctx->active_thread_type & FF_THREAD_FRAME)) {
/* wait for the referenced mb row to be finished */
int mb_row = s->mb_y + ((yoff + my + 5 + 8 * height) >> 4);
- const ThreadFrame *f = dir ? &s->next_picture_ptr->tf : &s->last_picture_ptr->tf;
+ const ThreadFrame *f = dir ? &s->next_pic_ptr->tf : &s->last_pic_ptr->tf;
ff_thread_await_progress(f, mb_row, 0);
}
dxy = ly*4 + lx;
- srcY = dir ? s->next_picture_ptr->f->data[0] : s->last_picture_ptr->f->data[0];
- srcU = dir ? s->next_picture_ptr->f->data[1] : s->last_picture_ptr->f->data[1];
- srcV = dir ? s->next_picture_ptr->f->data[2] : s->last_picture_ptr->f->data[2];
+ srcY = dir ? s->next_pic_ptr->f->data[0] : s->last_pic_ptr->f->data[0];
+ srcU = dir ? s->next_pic_ptr->f->data[1] : s->last_pic_ptr->f->data[1];
+ srcV = dir ? s->next_pic_ptr->f->data[2] : s->last_pic_ptr->f->data[2];
src_x = s->mb_x * 16 + xoff + mx;
src_y = s->mb_y * 16 + yoff + my;
uvsrc_x = s->mb_x * 8 + (xoff >> 1) + umx;
@@ -884,11 +884,11 @@ static int rv34_decode_mv(RV34DecContext *r, int block_type)
switch(block_type){
case RV34_MB_TYPE_INTRA:
case RV34_MB_TYPE_INTRA16x16:
- ZERO8x2(s->current_picture_ptr->motion_val[0][s->mb_x * 2 + s->mb_y * 2 * s->b8_stride], s->b8_stride);
+ ZERO8x2(s->cur_pic_ptr->motion_val[0][s->mb_x * 2 + s->mb_y * 2 * s->b8_stride], s->b8_stride);
return 0;
case RV34_MB_SKIP:
if(s->pict_type == AV_PICTURE_TYPE_P){
- ZERO8x2(s->current_picture_ptr->motion_val[0][s->mb_x * 2 + s->mb_y * 2 * s->b8_stride], s->b8_stride);
+ ZERO8x2(s->cur_pic_ptr->motion_val[0][s->mb_x * 2 + s->mb_y * 2 * s->b8_stride], s->b8_stride);
rv34_mc_1mv (r, block_type, 0, 0, 0, 2, 2, 0);
break;
}
@@ -896,23 +896,23 @@ static int rv34_decode_mv(RV34DecContext *r, int block_type)
//surprisingly, it uses motion scheme from next reference frame
/* wait for the current mb row to be finished */
if (HAVE_THREADS && (s->avctx->active_thread_type & FF_THREAD_FRAME))
- ff_thread_await_progress(&s->next_picture_ptr->tf, FFMAX(0, s->mb_y-1), 0);
+ ff_thread_await_progress(&s->next_pic_ptr->tf, FFMAX(0, s->mb_y-1), 0);
- next_bt = s->next_picture_ptr->mb_type[s->mb_x + s->mb_y * s->mb_stride];
+ next_bt = s->next_pic_ptr->mb_type[s->mb_x + s->mb_y * s->mb_stride];
if(IS_INTRA(next_bt) || IS_SKIP(next_bt)){
- ZERO8x2(s->current_picture_ptr->motion_val[0][s->mb_x * 2 + s->mb_y * 2 * s->b8_stride], s->b8_stride);
- ZERO8x2(s->current_picture_ptr->motion_val[1][s->mb_x * 2 + s->mb_y * 2 * s->b8_stride], s->b8_stride);
+ ZERO8x2(s->cur_pic_ptr->motion_val[0][s->mb_x * 2 + s->mb_y * 2 * s->b8_stride], s->b8_stride);
+ ZERO8x2(s->cur_pic_ptr->motion_val[1][s->mb_x * 2 + s->mb_y * 2 * s->b8_stride], s->b8_stride);
}else
for(j = 0; j < 2; j++)
for(i = 0; i < 2; i++)
for(k = 0; k < 2; k++)
for(l = 0; l < 2; l++)
- s->current_picture_ptr->motion_val[l][mv_pos + i + j*s->b8_stride][k] = calc_add_mv(r, l, s->next_picture_ptr->motion_val[0][mv_pos + i + j*s->b8_stride][k]);
+ s->cur_pic_ptr->motion_val[l][mv_pos + i + j*s->b8_stride][k] = calc_add_mv(r, l, s->next_pic_ptr->motion_val[0][mv_pos + i + j*s->b8_stride][k]);
if(!(IS_16X8(next_bt) || IS_8X16(next_bt) || IS_8X8(next_bt))) //we can use whole macroblock MC
rv34_mc_2mv(r, block_type);
else
rv34_mc_2mv_skip(r);
- ZERO8x2(s->current_picture_ptr->motion_val[0][s->mb_x * 2 + s->mb_y * 2 * s->b8_stride], s->b8_stride);
+ ZERO8x2(s->cur_pic_ptr->motion_val[0][s->mb_x * 2 + s->mb_y * 2 * s->b8_stride], s->b8_stride);
break;
case RV34_MB_P_16x16:
case RV34_MB_P_MIX16x16:
@@ -1180,7 +1180,7 @@ static int rv34_set_deblock_coef(RV34DecContext *r)
MpegEncContext *s = &r->s;
int hmvmask = 0, vmvmask = 0, i, j;
int midx = s->mb_x * 2 + s->mb_y * 2 * s->b8_stride;
- int16_t (*motion_val)[2] = &s->current_picture_ptr->motion_val[0][midx];
+ int16_t (*motion_val)[2] = &s->cur_pic_ptr->motion_val[0][midx];
for(j = 0; j < 16; j += 8){
for(i = 0; i < 2; i++){
if(is_mv_diff_gt_3(motion_val + i, 1))
@@ -1223,26 +1223,26 @@ static int rv34_decode_inter_macroblock(RV34DecContext *r, int8_t *intra_types)
dist = (s->mb_x - s->resync_mb_x) + (s->mb_y - s->resync_mb_y) * s->mb_width;
if(s->mb_x && dist)
r->avail_cache[5] =
- r->avail_cache[9] = s->current_picture_ptr->mb_type[mb_pos - 1];
+ r->avail_cache[9] = s->cur_pic_ptr->mb_type[mb_pos - 1];
if(dist >= s->mb_width)
r->avail_cache[2] =
- r->avail_cache[3] = s->current_picture_ptr->mb_type[mb_pos - s->mb_stride];
+ r->avail_cache[3] = s->cur_pic_ptr->mb_type[mb_pos - s->mb_stride];
if(((s->mb_x+1) < s->mb_width) && dist >= s->mb_width - 1)
- r->avail_cache[4] = s->current_picture_ptr->mb_type[mb_pos - s->mb_stride + 1];
+ r->avail_cache[4] = s->cur_pic_ptr->mb_type[mb_pos - s->mb_stride + 1];
if(s->mb_x && dist > s->mb_width)
- r->avail_cache[1] = s->current_picture_ptr->mb_type[mb_pos - s->mb_stride - 1];
+ r->avail_cache[1] = s->cur_pic_ptr->mb_type[mb_pos - s->mb_stride - 1];
s->qscale = r->si.quant;
cbp = cbp2 = rv34_decode_inter_mb_header(r, intra_types);
r->cbp_luma [mb_pos] = cbp;
r->cbp_chroma[mb_pos] = cbp >> 16;
r->deblock_coefs[mb_pos] = rv34_set_deblock_coef(r) | r->cbp_luma[mb_pos];
- s->current_picture_ptr->qscale_table[mb_pos] = s->qscale;
+ s->cur_pic_ptr->qscale_table[mb_pos] = s->qscale;
if(cbp == -1)
return -1;
- if (IS_INTRA(s->current_picture_ptr->mb_type[mb_pos])){
+ if (IS_INTRA(s->cur_pic_ptr->mb_type[mb_pos])){
if(r->is16) rv34_output_i16x16(r, intra_types, cbp);
else rv34_output_intra(r, intra_types, cbp);
return 0;
@@ -1325,21 +1325,21 @@ static int rv34_decode_intra_macroblock(RV34DecContext *r, int8_t *intra_types)
dist = (s->mb_x - s->resync_mb_x) + (s->mb_y - s->resync_mb_y) * s->mb_width;
if(s->mb_x && dist)
r->avail_cache[5] =
- r->avail_cache[9] = s->current_picture_ptr->mb_type[mb_pos - 1];
+ r->avail_cache[9] = s->cur_pic_ptr->mb_type[mb_pos - 1];
if(dist >= s->mb_width)
r->avail_cache[2] =
- r->avail_cache[3] = s->current_picture_ptr->mb_type[mb_pos - s->mb_stride];
+ r->avail_cache[3] = s->cur_pic_ptr->mb_type[mb_pos - s->mb_stride];
if(((s->mb_x+1) < s->mb_width) && dist >= s->mb_width - 1)
- r->avail_cache[4] = s->current_picture_ptr->mb_type[mb_pos - s->mb_stride + 1];
+ r->avail_cache[4] = s->cur_pic_ptr->mb_type[mb_pos - s->mb_stride + 1];
if(s->mb_x && dist > s->mb_width)
- r->avail_cache[1] = s->current_picture_ptr->mb_type[mb_pos - s->mb_stride - 1];
+ r->avail_cache[1] = s->cur_pic_ptr->mb_type[mb_pos - s->mb_stride - 1];
s->qscale = r->si.quant;
cbp = rv34_decode_intra_mb_header(r, intra_types);
r->cbp_luma [mb_pos] = cbp;
r->cbp_chroma[mb_pos] = cbp >> 16;
r->deblock_coefs[mb_pos] = 0xFFFF;
- s->current_picture_ptr->qscale_table[mb_pos] = s->qscale;
+ s->cur_pic_ptr->qscale_table[mb_pos] = s->qscale;
if(cbp == -1)
return -1;
@@ -1480,7 +1480,7 @@ static int rv34_decode_slice(RV34DecContext *r, int end, const uint8_t* buf, int
r->loop_filter(r, s->mb_y - 2);
if (HAVE_THREADS && (s->avctx->active_thread_type & FF_THREAD_FRAME))
- ff_thread_report_progress(&s->current_picture_ptr->tf,
+ ff_thread_report_progress(&s->cur_pic_ptr->tf,
s->mb_y - 2, 0);
}
@@ -1578,19 +1578,19 @@ static int finish_frame(AVCodecContext *avctx, AVFrame *pict)
s->mb_num_left = 0;
if (HAVE_THREADS && (s->avctx->active_thread_type & FF_THREAD_FRAME))
- ff_thread_report_progress(&s->current_picture_ptr->tf, INT_MAX, 0);
+ ff_thread_report_progress(&s->cur_pic_ptr->tf, INT_MAX, 0);
if (s->pict_type == AV_PICTURE_TYPE_B) {
- if ((ret = av_frame_ref(pict, s->current_picture_ptr->f)) < 0)
+ if ((ret = av_frame_ref(pict, s->cur_pic_ptr->f)) < 0)
return ret;
- ff_print_debug_info(s, s->current_picture_ptr, pict);
- ff_mpv_export_qp_table(s, pict, s->current_picture_ptr, FF_MPV_QSCALE_TYPE_MPEG1);
+ ff_print_debug_info(s, s->cur_pic_ptr, pict);
+ ff_mpv_export_qp_table(s, pict, s->cur_pic_ptr, FF_MPV_QSCALE_TYPE_MPEG1);
got_picture = 1;
- } else if (s->last_picture_ptr) {
- if ((ret = av_frame_ref(pict, s->last_picture_ptr->f)) < 0)
+ } else if (s->last_pic_ptr) {
+ if ((ret = av_frame_ref(pict, s->last_pic_ptr->f)) < 0)
return ret;
- ff_print_debug_info(s, s->last_picture_ptr, pict);
- ff_mpv_export_qp_table(s, pict, s->last_picture_ptr, FF_MPV_QSCALE_TYPE_MPEG1);
+ ff_print_debug_info(s, s->last_pic_ptr, pict);
+ ff_mpv_export_qp_table(s, pict, s->last_pic_ptr, FF_MPV_QSCALE_TYPE_MPEG1);
got_picture = 1;
}
@@ -1625,10 +1625,10 @@ int ff_rv34_decode_frame(AVCodecContext *avctx, AVFrame *pict,
/* no supplementary picture */
if (buf_size == 0) {
/* special case for last picture */
- if (s->next_picture_ptr) {
- if ((ret = av_frame_ref(pict, s->next_picture_ptr->f)) < 0)
+ if (s->next_pic_ptr) {
+ if ((ret = av_frame_ref(pict, s->next_pic_ptr->f)) < 0)
return ret;
- s->next_picture_ptr = NULL;
+ s->next_pic_ptr = NULL;
*got_picture_ptr = 1;
}
@@ -1651,7 +1651,7 @@ int ff_rv34_decode_frame(AVCodecContext *avctx, AVFrame *pict,
av_log(avctx, AV_LOG_ERROR, "First slice header is incorrect\n");
return AVERROR_INVALIDDATA;
}
- if ((!s->last_picture_ptr || !s->last_picture_ptr->f->data[0]) &&
+ if ((!s->last_pic_ptr || !s->last_pic_ptr->f->data[0]) &&
si.type == AV_PICTURE_TYPE_B) {
av_log(avctx, AV_LOG_ERROR, "Invalid decoder state: B-frame without "
"reference data.\n");
@@ -1664,7 +1664,7 @@ int ff_rv34_decode_frame(AVCodecContext *avctx, AVFrame *pict,
/* first slice */
if (si.start == 0) {
- if (s->mb_num_left > 0 && s->current_picture_ptr) {
+ if (s->mb_num_left > 0 && s->cur_pic_ptr) {
av_log(avctx, AV_LOG_ERROR, "New frame but still %d MB left.\n",
s->mb_num_left);
if (!s->context_reinit)
@@ -1789,7 +1789,7 @@ int ff_rv34_decode_frame(AVCodecContext *avctx, AVFrame *pict,
break;
}
- if (s->current_picture_ptr) {
+ if (s->cur_pic_ptr) {
if (last) {
if(r->loop_filter)
r->loop_filter(r, s->mb_height - 1);
@@ -1806,7 +1806,7 @@ int ff_rv34_decode_frame(AVCodecContext *avctx, AVFrame *pict,
ff_er_frame_end(&s->er, NULL);
ff_mpv_frame_end(s);
s->mb_num_left = 0;
- ff_thread_report_progress(&s->current_picture_ptr->tf, INT_MAX, 0);
+ ff_thread_report_progress(&s->cur_pic_ptr->tf, INT_MAX, 0);
return AVERROR_INVALIDDATA;
}
}
diff --git a/libavcodec/rv40.c b/libavcodec/rv40.c
index 19d4e742df..a98e64f5bf 100644
--- a/libavcodec/rv40.c
+++ b/libavcodec/rv40.c
@@ -371,7 +371,7 @@ static void rv40_loop_filter(RV34DecContext *r, int row)
mb_pos = row * s->mb_stride;
for(mb_x = 0; mb_x < s->mb_width; mb_x++, mb_pos++){
- int mbtype = s->current_picture_ptr->mb_type[mb_pos];
+ int mbtype = s->cur_pic_ptr->mb_type[mb_pos];
if(IS_INTRA(mbtype) || IS_SEPARATE_DC(mbtype))
r->cbp_luma [mb_pos] = r->deblock_coefs[mb_pos] = 0xFFFF;
if(IS_INTRA(mbtype))
@@ -386,7 +386,7 @@ static void rv40_loop_filter(RV34DecContext *r, int row)
unsigned y_to_deblock;
int c_to_deblock[2];
- q = s->current_picture_ptr->qscale_table[mb_pos];
+ q = s->cur_pic_ptr->qscale_table[mb_pos];
alpha = rv40_alpha_tab[q];
beta = rv40_beta_tab [q];
betaY = betaC = beta * 3;
@@ -401,7 +401,7 @@ static void rv40_loop_filter(RV34DecContext *r, int row)
if(avail[i]){
int pos = mb_pos + neighbour_offs_x[i] + neighbour_offs_y[i]*s->mb_stride;
mvmasks[i] = r->deblock_coefs[pos];
- mbtype [i] = s->current_picture_ptr->mb_type[pos];
+ mbtype [i] = s->cur_pic_ptr->mb_type[pos];
cbp [i] = r->cbp_luma[pos];
uvcbp[i][0] = r->cbp_chroma[pos] & 0xF;
uvcbp[i][1] = r->cbp_chroma[pos] >> 4;
@@ -460,7 +460,7 @@ static void rv40_loop_filter(RV34DecContext *r, int row)
}
for(j = 0; j < 16; j += 4){
- Y = s->current_picture_ptr->f->data[0] + mb_x*16 + (row*16 + j) * s->linesize;
+ Y = s->cur_pic_ptr->f->data[0] + mb_x*16 + (row*16 + j) * s->linesize;
for(i = 0; i < 4; i++, Y += 4){
int ij = i + j;
int clip_cur = y_to_deblock & (MASK_CUR << ij) ? clip[POS_CUR] : 0;
@@ -505,7 +505,7 @@ static void rv40_loop_filter(RV34DecContext *r, int row)
}
for(k = 0; k < 2; k++){
for(j = 0; j < 2; j++){
- C = s->current_picture_ptr->f->data[k + 1] + mb_x*8 + (row*8 + j*4) * s->uvlinesize;
+ C = s->cur_pic_ptr->f->data[k + 1] + mb_x*8 + (row*8 + j*4) * s->uvlinesize;
for(i = 0; i < 2; i++, C += 4){
int ij = i + j*2;
int clip_cur = c_to_deblock[k] & (MASK_CUR << ij) ? clip[POS_CUR] : 0;
diff --git a/libavcodec/snowenc.c b/libavcodec/snowenc.c
index b59dc04edc..1ed7581fea 100644
--- a/libavcodec/snowenc.c
+++ b/libavcodec/snowenc.c
@@ -1834,9 +1834,9 @@ static int encode_frame(AVCodecContext *avctx, AVPacket *pkt,
if (ret < 0)
return ret;
- mpv->current_picture_ptr = &mpv->current_picture;
- mpv->current_picture.f = s->current_picture;
- mpv->current_picture.f->pts = pict->pts;
+ mpv->cur_pic_ptr = &mpv->cur_pic;
+ mpv->cur_pic.f = s->current_picture;
+ mpv->cur_pic.f->pts = pict->pts;
if(pic->pict_type == AV_PICTURE_TYPE_P){
int block_width = (width +15)>>4;
int block_height= (height+15)>>4;
@@ -1846,9 +1846,9 @@ static int encode_frame(AVCodecContext *avctx, AVPacket *pkt,
av_assert0(s->last_picture[0]->data[0]);
mpv->avctx = s->avctx;
- mpv->last_picture.f = s->last_picture[0];
- mpv-> new_picture = s->input_picture;
- mpv->last_picture_ptr = &mpv->last_picture;
+ mpv->last_pic.f = s->last_picture[0];
+ mpv-> new_pic = s->input_picture;
+ mpv->last_pic_ptr = &mpv->last_pic;
mpv->linesize = stride;
mpv->uvlinesize = s->current_picture->linesize[1];
mpv->width = width;
@@ -2043,9 +2043,9 @@ redo_frame:
mpv->frame_bits = 8 * (s->c.bytestream - s->c.bytestream_start);
mpv->p_tex_bits = mpv->frame_bits - mpv->misc_bits - mpv->mv_bits;
mpv->total_bits += 8*(s->c.bytestream - s->c.bytestream_start);
- mpv->current_picture.display_picture_number =
- mpv->current_picture.coded_picture_number = avctx->frame_num;
- mpv->current_picture.f->quality = pic->quality;
+ mpv->cur_pic.display_picture_number =
+ mpv->cur_pic.coded_picture_number = avctx->frame_num;
+ mpv->cur_pic.f->quality = pic->quality;
if (enc->pass1_rc)
if (ff_rate_estimate_qscale(mpv, 0) < 0)
return -1;
diff --git a/libavcodec/svq1enc.c b/libavcodec/svq1enc.c
index d71ad07b86..52140494bb 100644
--- a/libavcodec/svq1enc.c
+++ b/libavcodec/svq1enc.c
@@ -326,13 +326,13 @@ static int svq1_encode_plane(SVQ1EncContext *s, int plane,
if (s->pict_type == AV_PICTURE_TYPE_P) {
s->m.avctx = s->avctx;
- s->m.current_picture_ptr = &s->m.current_picture;
- s->m.last_picture_ptr = &s->m.last_picture;
- s->m.last_picture.f->data[0] = ref_plane;
+ s->m.cur_pic_ptr = &s->m.cur_pic;
+ s->m.last_pic_ptr = &s->m.last_pic;
+ s->m.last_pic.f->data[0] = ref_plane;
s->m.linesize =
- s->m.last_picture.f->linesize[0] =
- s->m.new_picture->linesize[0] =
- s->m.current_picture.f->linesize[0] = stride;
+ s->m.last_pic.f->linesize[0] =
+ s->m.new_pic->linesize[0] =
+ s->m.cur_pic.f->linesize[0] = stride;
s->m.width = width;
s->m.height = height;
s->m.mb_width = block_width;
@@ -370,9 +370,9 @@ static int svq1_encode_plane(SVQ1EncContext *s, int plane,
s->m.mb_mean = (uint8_t *)s->dummy;
s->m.mb_var = (uint16_t *)s->dummy;
s->m.mc_mb_var = (uint16_t *)s->dummy;
- s->m.current_picture.mb_type = s->dummy;
+ s->m.cur_pic.mb_type = s->dummy;
- s->m.current_picture.motion_val[0] = s->motion_val8[plane] + 2;
+ s->m.cur_pic.motion_val[0] = s->motion_val8[plane] + 2;
s->m.p_mv_table = s->motion_val16[plane] +
s->m.mb_stride + 1;
s->m.mecc = s->mecc; // move
@@ -381,7 +381,7 @@ static int svq1_encode_plane(SVQ1EncContext *s, int plane,
s->m.me.dia_size = s->avctx->dia_size;
s->m.first_slice_line = 1;
for (y = 0; y < block_height; y++) {
- s->m.new_picture->data[0] = src - y * 16 * stride; // ugly
+ s->m.new_pic->data[0] = src - y * 16 * stride; // ugly
s->m.mb_y = y;
for (i = 0; i < 16 && i + 16 * y < height; i++) {
@@ -561,7 +561,7 @@ static av_cold int svq1_encode_end(AVCodecContext *avctx)
av_frame_free(&s->current_picture);
av_frame_free(&s->last_picture);
- av_frame_free(&s->m.new_picture);
+ av_frame_free(&s->m.new_pic);
return 0;
}
@@ -624,10 +624,10 @@ static av_cold int svq1_encode_init(AVCodecContext *avctx)
s->dummy = av_mallocz((s->y_block_width + 1) *
s->y_block_height * sizeof(int32_t));
s->m.me.map = av_mallocz(2 * ME_MAP_SIZE * sizeof(*s->m.me.map));
- s->m.new_picture = av_frame_alloc();
+ s->m.new_pic = av_frame_alloc();
if (!s->m.me.scratchpad || !s->m.me.map ||
- !s->mb_type || !s->dummy || !s->m.new_picture)
+ !s->mb_type || !s->dummy || !s->m.new_pic)
return AVERROR(ENOMEM);
s->m.me.score_map = s->m.me.map + ME_MAP_SIZE;
diff --git a/libavcodec/vaapi_mpeg2.c b/libavcodec/vaapi_mpeg2.c
index eeb4e87321..389540fd0c 100644
--- a/libavcodec/vaapi_mpeg2.c
+++ b/libavcodec/vaapi_mpeg2.c
@@ -42,12 +42,12 @@ static inline int mpeg2_get_is_frame_start(const MpegEncContext *s)
static int vaapi_mpeg2_start_frame(AVCodecContext *avctx, av_unused const uint8_t *buffer, av_unused uint32_t size)
{
const MpegEncContext *s = avctx->priv_data;
- VAAPIDecodePicture *pic = s->current_picture_ptr->hwaccel_picture_private;
+ VAAPIDecodePicture *pic = s->cur_pic_ptr->hwaccel_picture_private;
VAPictureParameterBufferMPEG2 pic_param;
VAIQMatrixBufferMPEG2 iq_matrix;
int i, err;
- pic->output_surface = ff_vaapi_get_surface_id(s->current_picture_ptr->f);
+ pic->output_surface = ff_vaapi_get_surface_id(s->cur_pic_ptr->f);
pic_param = (VAPictureParameterBufferMPEG2) {
.horizontal_size = s->width,
@@ -73,10 +73,10 @@ static int vaapi_mpeg2_start_frame(AVCodecContext *avctx, av_unused const uint8_
switch (s->pict_type) {
case AV_PICTURE_TYPE_B:
- pic_param.backward_reference_picture = ff_vaapi_get_surface_id(s->next_picture.f);
+ pic_param.backward_reference_picture = ff_vaapi_get_surface_id(s->next_pic.f);
// fall-through
case AV_PICTURE_TYPE_P:
- pic_param.forward_reference_picture = ff_vaapi_get_surface_id(s->last_picture.f);
+ pic_param.forward_reference_picture = ff_vaapi_get_surface_id(s->last_pic.f);
break;
}
@@ -115,7 +115,7 @@ fail:
static int vaapi_mpeg2_end_frame(AVCodecContext *avctx)
{
MpegEncContext *s = avctx->priv_data;
- VAAPIDecodePicture *pic = s->current_picture_ptr->hwaccel_picture_private;
+ VAAPIDecodePicture *pic = s->cur_pic_ptr->hwaccel_picture_private;
int ret;
ret = ff_vaapi_decode_issue(avctx, pic);
@@ -131,7 +131,7 @@ fail:
static int vaapi_mpeg2_decode_slice(AVCodecContext *avctx, const uint8_t *buffer, uint32_t size)
{
const MpegEncContext *s = avctx->priv_data;
- VAAPIDecodePicture *pic = s->current_picture_ptr->hwaccel_picture_private;
+ VAAPIDecodePicture *pic = s->cur_pic_ptr->hwaccel_picture_private;
VASliceParameterBufferMPEG2 slice_param;
GetBitContext gb;
uint32_t quantiser_scale_code, intra_slice_flag, macroblock_offset;
diff --git a/libavcodec/vaapi_mpeg4.c b/libavcodec/vaapi_mpeg4.c
index 363b686e42..e227bee113 100644
--- a/libavcodec/vaapi_mpeg4.c
+++ b/libavcodec/vaapi_mpeg4.c
@@ -49,11 +49,11 @@ static int vaapi_mpeg4_start_frame(AVCodecContext *avctx, av_unused const uint8_
{
Mpeg4DecContext *ctx = avctx->priv_data;
MpegEncContext *s = &ctx->m;
- VAAPIDecodePicture *pic = s->current_picture_ptr->hwaccel_picture_private;
+ VAAPIDecodePicture *pic = s->cur_pic_ptr->hwaccel_picture_private;
VAPictureParameterBufferMPEG4 pic_param;
int i, err;
- pic->output_surface = ff_vaapi_get_surface_id(s->current_picture_ptr->f);
+ pic->output_surface = ff_vaapi_get_surface_id(s->cur_pic_ptr->f);
pic_param = (VAPictureParameterBufferMPEG4) {
.vop_width = s->width,
@@ -78,7 +78,7 @@ static int vaapi_mpeg4_start_frame(AVCodecContext *avctx, av_unused const uint8_
.vop_fields.bits = {
.vop_coding_type = s->pict_type - AV_PICTURE_TYPE_I,
.backward_reference_vop_coding_type =
- s->pict_type == AV_PICTURE_TYPE_B ? s->next_picture.f->pict_type - AV_PICTURE_TYPE_I : 0,
+ s->pict_type == AV_PICTURE_TYPE_B ? s->next_pic.f->pict_type - AV_PICTURE_TYPE_I : 0,
.vop_rounding_type = s->no_rounding,
.intra_dc_vlc_thr = mpeg4_get_intra_dc_vlc_thr(ctx),
.top_field_first = s->top_field_first,
@@ -100,9 +100,9 @@ static int vaapi_mpeg4_start_frame(AVCodecContext *avctx, av_unused const uint8_
}
if (s->pict_type == AV_PICTURE_TYPE_B)
- pic_param.backward_reference_picture = ff_vaapi_get_surface_id(s->next_picture.f);
+ pic_param.backward_reference_picture = ff_vaapi_get_surface_id(s->next_pic.f);
if (s->pict_type != AV_PICTURE_TYPE_I)
- pic_param.forward_reference_picture = ff_vaapi_get_surface_id(s->last_picture.f);
+ pic_param.forward_reference_picture = ff_vaapi_get_surface_id(s->last_pic.f);
err = ff_vaapi_decode_make_param_buffer(avctx, pic,
VAPictureParameterBufferType,
@@ -139,7 +139,7 @@ fail:
static int vaapi_mpeg4_end_frame(AVCodecContext *avctx)
{
MpegEncContext *s = avctx->priv_data;
- VAAPIDecodePicture *pic = s->current_picture_ptr->hwaccel_picture_private;
+ VAAPIDecodePicture *pic = s->cur_pic_ptr->hwaccel_picture_private;
int ret;
ret = ff_vaapi_decode_issue(avctx, pic);
@@ -155,7 +155,7 @@ fail:
static int vaapi_mpeg4_decode_slice(AVCodecContext *avctx, const uint8_t *buffer, uint32_t size)
{
MpegEncContext *s = avctx->priv_data;
- VAAPIDecodePicture *pic = s->current_picture_ptr->hwaccel_picture_private;
+ VAAPIDecodePicture *pic = s->cur_pic_ptr->hwaccel_picture_private;
VASliceParameterBufferMPEG4 slice_param;
int err;
diff --git a/libavcodec/vaapi_vc1.c b/libavcodec/vaapi_vc1.c
index 5594118a69..ef914cf4b2 100644
--- a/libavcodec/vaapi_vc1.c
+++ b/libavcodec/vaapi_vc1.c
@@ -253,11 +253,11 @@ static int vaapi_vc1_start_frame(AVCodecContext *avctx, av_unused const uint8_t
{
const VC1Context *v = avctx->priv_data;
const MpegEncContext *s = &v->s;
- VAAPIDecodePicture *pic = s->current_picture_ptr->hwaccel_picture_private;
+ VAAPIDecodePicture *pic = s->cur_pic_ptr->hwaccel_picture_private;
VAPictureParameterBufferVC1 pic_param;
int err;
- pic->output_surface = ff_vaapi_get_surface_id(s->current_picture_ptr->f);
+ pic->output_surface = ff_vaapi_get_surface_id(s->cur_pic_ptr->f);
pic_param = (VAPictureParameterBufferVC1) {
.forward_reference_picture = VA_INVALID_ID,
@@ -374,10 +374,10 @@ static int vaapi_vc1_start_frame(AVCodecContext *avctx, av_unused const uint8_t
switch (s->pict_type) {
case AV_PICTURE_TYPE_B:
- pic_param.backward_reference_picture = ff_vaapi_get_surface_id(s->next_picture.f);
+ pic_param.backward_reference_picture = ff_vaapi_get_surface_id(s->next_pic.f);
// fall-through
case AV_PICTURE_TYPE_P:
- pic_param.forward_reference_picture = ff_vaapi_get_surface_id(s->last_picture.f);
+ pic_param.forward_reference_picture = ff_vaapi_get_surface_id(s->last_pic.f);
break;
}
@@ -450,7 +450,7 @@ static int vaapi_vc1_end_frame(AVCodecContext *avctx)
{
VC1Context *v = avctx->priv_data;
MpegEncContext *s = &v->s;
- VAAPIDecodePicture *pic = s->current_picture_ptr->hwaccel_picture_private;
+ VAAPIDecodePicture *pic = s->cur_pic_ptr->hwaccel_picture_private;
int ret;
ret = ff_vaapi_decode_issue(avctx, pic);
@@ -465,7 +465,7 @@ static int vaapi_vc1_decode_slice(AVCodecContext *avctx, const uint8_t *buffer,
{
const VC1Context *v = avctx->priv_data;
const MpegEncContext *s = &v->s;
- VAAPIDecodePicture *pic = s->current_picture_ptr->hwaccel_picture_private;
+ VAAPIDecodePicture *pic = s->cur_pic_ptr->hwaccel_picture_private;
VASliceParameterBufferVC1 slice_param;
int mb_height;
int err;
diff --git a/libavcodec/vc1.c b/libavcodec/vc1.c
index e234192fdd..643232653c 100644
--- a/libavcodec/vc1.c
+++ b/libavcodec/vc1.c
@@ -856,7 +856,7 @@ int ff_vc1_parse_frame_header_adv(VC1Context *v, GetBitContext* gb)
v->s.pict_type = (v->fptype & 1) ? AV_PICTURE_TYPE_BI : AV_PICTURE_TYPE_B;
else
v->s.pict_type = (v->fptype & 1) ? AV_PICTURE_TYPE_P : AV_PICTURE_TYPE_I;
- v->s.current_picture_ptr->f->pict_type = v->s.pict_type;
+ v->s.cur_pic_ptr->f->pict_type = v->s.pict_type;
if (!v->pic_header_flag)
goto parse_common_info;
}
diff --git a/libavcodec/vc1_block.c b/libavcodec/vc1_block.c
index a6ee4922f9..6b5b1d0566 100644
--- a/libavcodec/vc1_block.c
+++ b/libavcodec/vc1_block.c
@@ -59,9 +59,9 @@ static inline void init_block_index(VC1Context *v)
MpegEncContext *s = &v->s;
ff_init_block_index(s);
if (v->field_mode && !(v->second_field ^ v->tff)) {
- s->dest[0] += s->current_picture_ptr->f->linesize[0];
- s->dest[1] += s->current_picture_ptr->f->linesize[1];
- s->dest[2] += s->current_picture_ptr->f->linesize[2];
+ s->dest[0] += s->cur_pic_ptr->f->linesize[0];
+ s->dest[1] += s->cur_pic_ptr->f->linesize[1];
+ s->dest[2] += s->cur_pic_ptr->f->linesize[2];
}
}
@@ -417,7 +417,7 @@ static inline int ff_vc1_pred_dc(MpegEncContext *s, int overlap, int pq, int n,
int dqscale_index;
/* scale predictors if needed */
- q1 = FFABS(s->current_picture.qscale_table[mb_pos]);
+ q1 = FFABS(s->cur_pic.qscale_table[mb_pos]);
dqscale_index = s->y_dc_scale_table[q1] - 1;
if (dqscale_index < 0)
return 0;
@@ -433,12 +433,12 @@ static inline int ff_vc1_pred_dc(MpegEncContext *s, int overlap, int pq, int n,
a = dc_val[ - wrap];
if (c_avail && (n != 1 && n != 3)) {
- q2 = FFABS(s->current_picture.qscale_table[mb_pos - 1]);
+ q2 = FFABS(s->cur_pic.qscale_table[mb_pos - 1]);
if (q2 && q2 != q1)
c = (int)((unsigned)c * s->y_dc_scale_table[q2] * ff_vc1_dqscale[dqscale_index] + 0x20000) >> 18;
}
if (a_avail && (n != 2 && n != 3)) {
- q2 = FFABS(s->current_picture.qscale_table[mb_pos - s->mb_stride]);
+ q2 = FFABS(s->cur_pic.qscale_table[mb_pos - s->mb_stride]);
if (q2 && q2 != q1)
a = (int)((unsigned)a * s->y_dc_scale_table[q2] * ff_vc1_dqscale[dqscale_index] + 0x20000) >> 18;
}
@@ -448,7 +448,7 @@ static inline int ff_vc1_pred_dc(MpegEncContext *s, int overlap, int pq, int n,
off--;
if (n != 2)
off -= s->mb_stride;
- q2 = FFABS(s->current_picture.qscale_table[off]);
+ q2 = FFABS(s->cur_pic.qscale_table[off]);
if (q2 && q2 != q1)
b = (int)((unsigned)b * s->y_dc_scale_table[q2] * ff_vc1_dqscale[dqscale_index] + 0x20000) >> 18;
}
@@ -771,19 +771,19 @@ static int vc1_decode_i_block_adv(VC1Context *v, int16_t block[64], int n,
else // top
ac_val -= 16 * s->block_wrap[n];
- q1 = s->current_picture.qscale_table[mb_pos];
+ q1 = s->cur_pic.qscale_table[mb_pos];
if (n == 3)
q2 = q1;
else if (dc_pred_dir) {
if (n == 1)
q2 = q1;
else if (c_avail && mb_pos)
- q2 = s->current_picture.qscale_table[mb_pos - 1];
+ q2 = s->cur_pic.qscale_table[mb_pos - 1];
} else {
if (n == 2)
q2 = q1;
else if (a_avail && mb_pos >= s->mb_stride)
- q2 = s->current_picture.qscale_table[mb_pos - s->mb_stride];
+ q2 = s->cur_pic.qscale_table[mb_pos - s->mb_stride];
}
//AC Decoding
@@ -973,11 +973,11 @@ static int vc1_decode_intra_block(VC1Context *v, int16_t block[64], int n,
else //top
ac_val -= 16 * s->block_wrap[n];
- q1 = s->current_picture.qscale_table[mb_pos];
+ q1 = s->cur_pic.qscale_table[mb_pos];
if (dc_pred_dir && c_avail && mb_pos)
- q2 = s->current_picture.qscale_table[mb_pos - 1];
+ q2 = s->cur_pic.qscale_table[mb_pos - 1];
if (!dc_pred_dir && a_avail && mb_pos >= s->mb_stride)
- q2 = s->current_picture.qscale_table[mb_pos - s->mb_stride];
+ q2 = s->cur_pic.qscale_table[mb_pos - s->mb_stride];
if (dc_pred_dir && n == 1)
q2 = q1;
if (!dc_pred_dir && n == 2)
@@ -1314,10 +1314,10 @@ static int vc1_decode_p_mb(VC1Context *v)
GET_MVDATA(dmv_x, dmv_y);
if (s->mb_intra) {
- s->current_picture.motion_val[1][s->block_index[0]][0] = 0;
- s->current_picture.motion_val[1][s->block_index[0]][1] = 0;
+ s->cur_pic.motion_val[1][s->block_index[0]][0] = 0;
+ s->cur_pic.motion_val[1][s->block_index[0]][1] = 0;
}
- s->current_picture.mb_type[mb_pos] = s->mb_intra ? MB_TYPE_INTRA : MB_TYPE_16x16;
+ s->cur_pic.mb_type[mb_pos] = s->mb_intra ? MB_TYPE_INTRA : MB_TYPE_16x16;
ff_vc1_pred_mv(v, 0, dmv_x, dmv_y, 1, v->range_x, v->range_y, v->mb_type[0], 0, 0);
/* FIXME Set DC val for inter block ? */
@@ -1334,7 +1334,7 @@ static int vc1_decode_p_mb(VC1Context *v)
mquant = v->pq;
cbp = 0;
}
- s->current_picture.qscale_table[mb_pos] = mquant;
+ s->cur_pic.qscale_table[mb_pos] = mquant;
if (!v->ttmbf && !s->mb_intra && mb_has_coeffs)
ttmb = get_vlc2(gb, ff_vc1_ttmb_vlc[v->tt_index],
@@ -1383,8 +1383,8 @@ static int vc1_decode_p_mb(VC1Context *v)
v->mb_type[0][s->block_index[i]] = 0;
s->dc_val[0][s->block_index[i]] = 0;
}
- s->current_picture.mb_type[mb_pos] = MB_TYPE_SKIP;
- s->current_picture.qscale_table[mb_pos] = 0;
+ s->cur_pic.mb_type[mb_pos] = MB_TYPE_SKIP;
+ s->cur_pic.qscale_table[mb_pos] = 0;
ff_vc1_pred_mv(v, 0, 0, 0, 1, v->range_x, v->range_y, v->mb_type[0], 0, 0);
ff_vc1_mc_1mv(v, 0);
}
@@ -1427,7 +1427,7 @@ static int vc1_decode_p_mb(VC1Context *v)
if (!intra_count && !coded_inter)
goto end;
GET_MQUANT();
- s->current_picture.qscale_table[mb_pos] = mquant;
+ s->cur_pic.qscale_table[mb_pos] = mquant;
/* test if block is intra and has pred */
{
int intrapred = 0;
@@ -1484,7 +1484,7 @@ static int vc1_decode_p_mb(VC1Context *v)
}
} else { // skipped MB
s->mb_intra = 0;
- s->current_picture.qscale_table[mb_pos] = 0;
+ s->cur_pic.qscale_table[mb_pos] = 0;
for (i = 0; i < 6; i++) {
v->mb_type[0][s->block_index[i]] = 0;
s->dc_val[0][s->block_index[i]] = 0;
@@ -1494,7 +1494,7 @@ static int vc1_decode_p_mb(VC1Context *v)
ff_vc1_mc_4mv_luma(v, i, 0, 0);
}
ff_vc1_mc_4mv_chroma(v, 0);
- s->current_picture.qscale_table[mb_pos] = 0;
+ s->cur_pic.qscale_table[mb_pos] = 0;
}
}
end:
@@ -1574,19 +1574,19 @@ static int vc1_decode_p_mb_intfr(VC1Context *v)
}
if (ff_vc1_mbmode_intfrp[v->fourmvswitch][idx_mbmode][0] == MV_PMODE_INTFR_INTRA) { // intra MB
for (i = 0; i < 4; i++) {
- s->current_picture.motion_val[1][s->block_index[i]][0] = 0;
- s->current_picture.motion_val[1][s->block_index[i]][1] = 0;
+ s->cur_pic.motion_val[1][s->block_index[i]][0] = 0;
+ s->cur_pic.motion_val[1][s->block_index[i]][1] = 0;
}
v->is_intra[s->mb_x] = 0x3f; // Set the bitfield to all 1.
s->mb_intra = 1;
- s->current_picture.mb_type[mb_pos] = MB_TYPE_INTRA;
+ s->cur_pic.mb_type[mb_pos] = MB_TYPE_INTRA;
fieldtx = v->fieldtx_plane[mb_pos] = get_bits1(gb);
mb_has_coeffs = get_bits1(gb);
if (mb_has_coeffs)
cbp = 1 + get_vlc2(&v->s.gb, v->cbpcy_vlc, VC1_CBPCY_P_VLC_BITS, 2);
v->s.ac_pred = v->acpred_plane[mb_pos] = get_bits1(gb);
GET_MQUANT();
- s->current_picture.qscale_table[mb_pos] = mquant;
+ s->cur_pic.qscale_table[mb_pos] = mquant;
/* Set DC scale - y and c use the same (not sure if necessary here) */
s->y_dc_scale = s->y_dc_scale_table[FFABS(mquant)];
s->c_dc_scale = s->c_dc_scale_table[FFABS(mquant)];
@@ -1670,7 +1670,7 @@ static int vc1_decode_p_mb_intfr(VC1Context *v)
}
if (cbp)
GET_MQUANT(); // p. 227
- s->current_picture.qscale_table[mb_pos] = mquant;
+ s->cur_pic.qscale_table[mb_pos] = mquant;
if (!v->ttmbf && cbp)
ttmb = get_vlc2(gb, ff_vc1_ttmb_vlc[v->tt_index], VC1_TTMB_VLC_BITS, 2);
for (i = 0; i < 6; i++) {
@@ -1701,8 +1701,8 @@ static int vc1_decode_p_mb_intfr(VC1Context *v)
v->mb_type[0][s->block_index[i]] = 0;
s->dc_val[0][s->block_index[i]] = 0;
}
- s->current_picture.mb_type[mb_pos] = MB_TYPE_SKIP;
- s->current_picture.qscale_table[mb_pos] = 0;
+ s->cur_pic.mb_type[mb_pos] = MB_TYPE_SKIP;
+ s->cur_pic.qscale_table[mb_pos] = 0;
v->blk_mv_type[s->block_index[0]] = 0;
v->blk_mv_type[s->block_index[1]] = 0;
v->blk_mv_type[s->block_index[2]] = 0;
@@ -1746,11 +1746,11 @@ static int vc1_decode_p_mb_intfi(VC1Context *v)
if (idx_mbmode <= 1) { // intra MB
v->is_intra[s->mb_x] = 0x3f; // Set the bitfield to all 1.
s->mb_intra = 1;
- s->current_picture.motion_val[1][s->block_index[0] + v->blocks_off][0] = 0;
- s->current_picture.motion_val[1][s->block_index[0] + v->blocks_off][1] = 0;
- s->current_picture.mb_type[mb_pos + v->mb_off] = MB_TYPE_INTRA;
+ s->cur_pic.motion_val[1][s->block_index[0] + v->blocks_off][0] = 0;
+ s->cur_pic.motion_val[1][s->block_index[0] + v->blocks_off][1] = 0;
+ s->cur_pic.mb_type[mb_pos + v->mb_off] = MB_TYPE_INTRA;
GET_MQUANT();
- s->current_picture.qscale_table[mb_pos] = mquant;
+ s->cur_pic.qscale_table[mb_pos] = mquant;
/* Set DC scale - y and c use the same (not sure if necessary here) */
s->y_dc_scale = s->y_dc_scale_table[FFABS(mquant)];
s->c_dc_scale = s->c_dc_scale_table[FFABS(mquant)];
@@ -1780,7 +1780,7 @@ static int vc1_decode_p_mb_intfi(VC1Context *v)
}
} else {
s->mb_intra = v->is_intra[s->mb_x] = 0;
- s->current_picture.mb_type[mb_pos + v->mb_off] = MB_TYPE_16x16;
+ s->cur_pic.mb_type[mb_pos + v->mb_off] = MB_TYPE_16x16;
for (i = 0; i < 6; i++)
v->mb_type[0][s->block_index[i]] = 0;
if (idx_mbmode <= 5) { // 1-MV
@@ -1808,7 +1808,7 @@ static int vc1_decode_p_mb_intfi(VC1Context *v)
if (cbp) {
GET_MQUANT();
}
- s->current_picture.qscale_table[mb_pos] = mquant;
+ s->cur_pic.qscale_table[mb_pos] = mquant;
if (!v->ttmbf && cbp) {
ttmb = get_vlc2(gb, ff_vc1_ttmb_vlc[v->tt_index], VC1_TTMB_VLC_BITS, 2);
}
@@ -1880,7 +1880,7 @@ static int vc1_decode_b_mb(VC1Context *v)
v->mb_type[0][s->block_index[i]] = 0;
s->dc_val[0][s->block_index[i]] = 0;
}
- s->current_picture.qscale_table[mb_pos] = 0;
+ s->cur_pic.qscale_table[mb_pos] = 0;
if (!direct) {
if (!skipped) {
@@ -1917,7 +1917,7 @@ static int vc1_decode_b_mb(VC1Context *v)
cbp = get_vlc2(&v->s.gb, v->cbpcy_vlc, VC1_CBPCY_P_VLC_BITS, 2);
GET_MQUANT();
s->mb_intra = 0;
- s->current_picture.qscale_table[mb_pos] = mquant;
+ s->cur_pic.qscale_table[mb_pos] = mquant;
if (!v->ttmbf)
ttmb = get_vlc2(gb, ff_vc1_ttmb_vlc[v->tt_index], VC1_TTMB_VLC_BITS, 2);
dmv_x[0] = dmv_y[0] = dmv_x[1] = dmv_y[1] = 0;
@@ -1932,7 +1932,7 @@ static int vc1_decode_b_mb(VC1Context *v)
}
if (s->mb_intra && !mb_has_coeffs) {
GET_MQUANT();
- s->current_picture.qscale_table[mb_pos] = mquant;
+ s->cur_pic.qscale_table[mb_pos] = mquant;
s->ac_pred = get_bits1(gb);
cbp = 0;
ff_vc1_pred_b_mv(v, dmv_x, dmv_y, direct, bmvtype);
@@ -1954,7 +1954,7 @@ static int vc1_decode_b_mb(VC1Context *v)
s->ac_pred = get_bits1(gb);
cbp = get_vlc2(&v->s.gb, v->cbpcy_vlc, VC1_CBPCY_P_VLC_BITS, 2);
GET_MQUANT();
- s->current_picture.qscale_table[mb_pos] = mquant;
+ s->cur_pic.qscale_table[mb_pos] = mquant;
if (!v->ttmbf && !s->mb_intra && mb_has_coeffs)
ttmb = get_vlc2(gb, ff_vc1_ttmb_vlc[v->tt_index], VC1_TTMB_VLC_BITS, 2);
}
@@ -2029,11 +2029,11 @@ static int vc1_decode_b_mb_intfi(VC1Context *v)
if (idx_mbmode <= 1) { // intra MB
v->is_intra[s->mb_x] = 0x3f; // Set the bitfield to all 1.
s->mb_intra = 1;
- s->current_picture.motion_val[1][s->block_index[0]][0] = 0;
- s->current_picture.motion_val[1][s->block_index[0]][1] = 0;
- s->current_picture.mb_type[mb_pos + v->mb_off] = MB_TYPE_INTRA;
+ s->cur_pic.motion_val[1][s->block_index[0]][0] = 0;
+ s->cur_pic.motion_val[1][s->block_index[0]][1] = 0;
+ s->cur_pic.mb_type[mb_pos + v->mb_off] = MB_TYPE_INTRA;
GET_MQUANT();
- s->current_picture.qscale_table[mb_pos] = mquant;
+ s->cur_pic.qscale_table[mb_pos] = mquant;
/* Set DC scale - y and c use the same (not sure if necessary here) */
s->y_dc_scale = s->y_dc_scale_table[FFABS(mquant)];
s->c_dc_scale = s->c_dc_scale_table[FFABS(mquant)];
@@ -2069,7 +2069,7 @@ static int vc1_decode_b_mb_intfi(VC1Context *v)
}
} else {
s->mb_intra = v->is_intra[s->mb_x] = 0;
- s->current_picture.mb_type[mb_pos + v->mb_off] = MB_TYPE_16x16;
+ s->cur_pic.mb_type[mb_pos + v->mb_off] = MB_TYPE_16x16;
for (i = 0; i < 6; i++)
v->mb_type[0][s->block_index[i]] = 0;
if (v->fmb_is_raw)
@@ -2106,7 +2106,7 @@ static int vc1_decode_b_mb_intfi(VC1Context *v)
if (bmvtype == BMV_TYPE_DIRECT) {
dmv_x[0] = dmv_y[0] = pred_flag[0] = 0;
dmv_x[1] = dmv_y[1] = pred_flag[0] = 0;
- if (!s->next_picture_ptr->field_picture) {
+ if (!s->next_pic_ptr->field_picture) {
av_log(s->avctx, AV_LOG_ERROR, "Mixed field/frame direct mode not supported\n");
return AVERROR_INVALIDDATA;
}
@@ -2138,7 +2138,7 @@ static int vc1_decode_b_mb_intfi(VC1Context *v)
if (cbp) {
GET_MQUANT();
}
- s->current_picture.qscale_table[mb_pos] = mquant;
+ s->cur_pic.qscale_table[mb_pos] = mquant;
if (!v->ttmbf && cbp) {
ttmb = get_vlc2(gb, ff_vc1_ttmb_vlc[v->tt_index], VC1_TTMB_VLC_BITS, 2);
}
@@ -2217,21 +2217,21 @@ static int vc1_decode_b_mb_intfr(VC1Context *v)
if (ff_vc1_mbmode_intfrp[0][idx_mbmode][0] == MV_PMODE_INTFR_INTRA) { // intra MB
for (i = 0; i < 4; i++) {
- s->mv[0][i][0] = s->current_picture.motion_val[0][s->block_index[i]][0] = 0;
- s->mv[0][i][1] = s->current_picture.motion_val[0][s->block_index[i]][1] = 0;
- s->mv[1][i][0] = s->current_picture.motion_val[1][s->block_index[i]][0] = 0;
- s->mv[1][i][1] = s->current_picture.motion_val[1][s->block_index[i]][1] = 0;
+ s->mv[0][i][0] = s->cur_pic.motion_val[0][s->block_index[i]][0] = 0;
+ s->mv[0][i][1] = s->cur_pic.motion_val[0][s->block_index[i]][1] = 0;
+ s->mv[1][i][0] = s->cur_pic.motion_val[1][s->block_index[i]][0] = 0;
+ s->mv[1][i][1] = s->cur_pic.motion_val[1][s->block_index[i]][1] = 0;
}
v->is_intra[s->mb_x] = 0x3f; // Set the bitfield to all 1.
s->mb_intra = 1;
- s->current_picture.mb_type[mb_pos] = MB_TYPE_INTRA;
+ s->cur_pic.mb_type[mb_pos] = MB_TYPE_INTRA;
fieldtx = v->fieldtx_plane[mb_pos] = get_bits1(gb);
mb_has_coeffs = get_bits1(gb);
if (mb_has_coeffs)
cbp = 1 + get_vlc2(&v->s.gb, v->cbpcy_vlc, VC1_CBPCY_P_VLC_BITS, 2);
v->s.ac_pred = v->acpred_plane[mb_pos] = get_bits1(gb);
GET_MQUANT();
- s->current_picture.qscale_table[mb_pos] = mquant;
+ s->cur_pic.qscale_table[mb_pos] = mquant;
/* Set DC scale - y and c use the same (not sure if necessary here) */
s->y_dc_scale = s->y_dc_scale_table[FFABS(mquant)];
s->c_dc_scale = s->c_dc_scale_table[FFABS(mquant)];
@@ -2272,31 +2272,31 @@ static int vc1_decode_b_mb_intfr(VC1Context *v)
direct = v->direct_mb_plane[mb_pos];
if (direct) {
- if (s->next_picture_ptr->field_picture)
+ if (s->next_pic_ptr->field_picture)
av_log(s->avctx, AV_LOG_WARNING, "Mixed frame/field direct mode not supported\n");
- s->mv[0][0][0] = s->current_picture.motion_val[0][s->block_index[0]][0] = scale_mv(s->next_picture.motion_val[1][s->block_index[0]][0], v->bfraction, 0, s->quarter_sample);
- s->mv[0][0][1] = s->current_picture.motion_val[0][s->block_index[0]][1] = scale_mv(s->next_picture.motion_val[1][s->block_index[0]][1], v->bfraction, 0, s->quarter_sample);
- s->mv[1][0][0] = s->current_picture.motion_val[1][s->block_index[0]][0] = scale_mv(s->next_picture.motion_val[1][s->block_index[0]][0], v->bfraction, 1, s->quarter_sample);
- s->mv[1][0][1] = s->current_picture.motion_val[1][s->block_index[0]][1] = scale_mv(s->next_picture.motion_val[1][s->block_index[0]][1], v->bfraction, 1, s->quarter_sample);
+ s->mv[0][0][0] = s->cur_pic.motion_val[0][s->block_index[0]][0] = scale_mv(s->next_pic.motion_val[1][s->block_index[0]][0], v->bfraction, 0, s->quarter_sample);
+ s->mv[0][0][1] = s->cur_pic.motion_val[0][s->block_index[0]][1] = scale_mv(s->next_pic.motion_val[1][s->block_index[0]][1], v->bfraction, 0, s->quarter_sample);
+ s->mv[1][0][0] = s->cur_pic.motion_val[1][s->block_index[0]][0] = scale_mv(s->next_pic.motion_val[1][s->block_index[0]][0], v->bfraction, 1, s->quarter_sample);
+ s->mv[1][0][1] = s->cur_pic.motion_val[1][s->block_index[0]][1] = scale_mv(s->next_pic.motion_val[1][s->block_index[0]][1], v->bfraction, 1, s->quarter_sample);
if (twomv) {
- s->mv[0][2][0] = s->current_picture.motion_val[0][s->block_index[2]][0] = scale_mv(s->next_picture.motion_val[1][s->block_index[2]][0], v->bfraction, 0, s->quarter_sample);
- s->mv[0][2][1] = s->current_picture.motion_val[0][s->block_index[2]][1] = scale_mv(s->next_picture.motion_val[1][s->block_index[2]][1], v->bfraction, 0, s->quarter_sample);
- s->mv[1][2][0] = s->current_picture.motion_val[1][s->block_index[2]][0] = scale_mv(s->next_picture.motion_val[1][s->block_index[2]][0], v->bfraction, 1, s->quarter_sample);
- s->mv[1][2][1] = s->current_picture.motion_val[1][s->block_index[2]][1] = scale_mv(s->next_picture.motion_val[1][s->block_index[2]][1], v->bfraction, 1, s->quarter_sample);
+ s->mv[0][2][0] = s->cur_pic.motion_val[0][s->block_index[2]][0] = scale_mv(s->next_pic.motion_val[1][s->block_index[2]][0], v->bfraction, 0, s->quarter_sample);
+ s->mv[0][2][1] = s->cur_pic.motion_val[0][s->block_index[2]][1] = scale_mv(s->next_pic.motion_val[1][s->block_index[2]][1], v->bfraction, 0, s->quarter_sample);
+ s->mv[1][2][0] = s->cur_pic.motion_val[1][s->block_index[2]][0] = scale_mv(s->next_pic.motion_val[1][s->block_index[2]][0], v->bfraction, 1, s->quarter_sample);
+ s->mv[1][2][1] = s->cur_pic.motion_val[1][s->block_index[2]][1] = scale_mv(s->next_pic.motion_val[1][s->block_index[2]][1], v->bfraction, 1, s->quarter_sample);
for (i = 1; i < 4; i += 2) {
- s->mv[0][i][0] = s->current_picture.motion_val[0][s->block_index[i]][0] = s->mv[0][i-1][0];
- s->mv[0][i][1] = s->current_picture.motion_val[0][s->block_index[i]][1] = s->mv[0][i-1][1];
- s->mv[1][i][0] = s->current_picture.motion_val[1][s->block_index[i]][0] = s->mv[1][i-1][0];
- s->mv[1][i][1] = s->current_picture.motion_val[1][s->block_index[i]][1] = s->mv[1][i-1][1];
+ s->mv[0][i][0] = s->cur_pic.motion_val[0][s->block_index[i]][0] = s->mv[0][i-1][0];
+ s->mv[0][i][1] = s->cur_pic.motion_val[0][s->block_index[i]][1] = s->mv[0][i-1][1];
+ s->mv[1][i][0] = s->cur_pic.motion_val[1][s->block_index[i]][0] = s->mv[1][i-1][0];
+ s->mv[1][i][1] = s->cur_pic.motion_val[1][s->block_index[i]][1] = s->mv[1][i-1][1];
}
} else {
for (i = 1; i < 4; i++) {
- s->mv[0][i][0] = s->current_picture.motion_val[0][s->block_index[i]][0] = s->mv[0][0][0];
- s->mv[0][i][1] = s->current_picture.motion_val[0][s->block_index[i]][1] = s->mv[0][0][1];
- s->mv[1][i][0] = s->current_picture.motion_val[1][s->block_index[i]][0] = s->mv[1][0][0];
- s->mv[1][i][1] = s->current_picture.motion_val[1][s->block_index[i]][1] = s->mv[1][0][1];
+ s->mv[0][i][0] = s->cur_pic.motion_val[0][s->block_index[i]][0] = s->mv[0][0][0];
+ s->mv[0][i][1] = s->cur_pic.motion_val[0][s->block_index[i]][1] = s->mv[0][0][1];
+ s->mv[1][i][0] = s->cur_pic.motion_val[1][s->block_index[i]][0] = s->mv[1][0][0];
+ s->mv[1][i][1] = s->cur_pic.motion_val[1][s->block_index[i]][1] = s->mv[1][0][1];
}
}
}
@@ -2398,10 +2398,10 @@ static int vc1_decode_b_mb_intfr(VC1Context *v)
if (mvsw) {
for (i = 0; i < 2; i++) {
- s->mv[dir][i+2][0] = s->mv[dir][i][0] = s->current_picture.motion_val[dir][s->block_index[i+2]][0] = s->current_picture.motion_val[dir][s->block_index[i]][0];
- s->mv[dir][i+2][1] = s->mv[dir][i][1] = s->current_picture.motion_val[dir][s->block_index[i+2]][1] = s->current_picture.motion_val[dir][s->block_index[i]][1];
- s->mv[dir2][i+2][0] = s->mv[dir2][i][0] = s->current_picture.motion_val[dir2][s->block_index[i]][0] = s->current_picture.motion_val[dir2][s->block_index[i+2]][0];
- s->mv[dir2][i+2][1] = s->mv[dir2][i][1] = s->current_picture.motion_val[dir2][s->block_index[i]][1] = s->current_picture.motion_val[dir2][s->block_index[i+2]][1];
+ s->mv[dir][i+2][0] = s->mv[dir][i][0] = s->cur_pic.motion_val[dir][s->block_index[i+2]][0] = s->cur_pic.motion_val[dir][s->block_index[i]][0];
+ s->mv[dir][i+2][1] = s->mv[dir][i][1] = s->cur_pic.motion_val[dir][s->block_index[i+2]][1] = s->cur_pic.motion_val[dir][s->block_index[i]][1];
+ s->mv[dir2][i+2][0] = s->mv[dir2][i][0] = s->cur_pic.motion_val[dir2][s->block_index[i]][0] = s->cur_pic.motion_val[dir2][s->block_index[i+2]][0];
+ s->mv[dir2][i+2][1] = s->mv[dir2][i][1] = s->cur_pic.motion_val[dir2][s->block_index[i]][1] = s->cur_pic.motion_val[dir2][s->block_index[i+2]][1];
}
} else {
ff_vc1_pred_mv_intfr(v, 0, 0, 0, 2, v->range_x, v->range_y, !dir);
@@ -2428,15 +2428,15 @@ static int vc1_decode_b_mb_intfr(VC1Context *v)
v->blk_mv_type[s->block_index[3]] = 1;
ff_vc1_pred_mv_intfr(v, 0, 0, 0, 2, v->range_x, v->range_y, !dir);
for (i = 0; i < 2; i++) {
- s->mv[!dir][i+2][0] = s->mv[!dir][i][0] = s->current_picture.motion_val[!dir][s->block_index[i+2]][0] = s->current_picture.motion_val[!dir][s->block_index[i]][0];
- s->mv[!dir][i+2][1] = s->mv[!dir][i][1] = s->current_picture.motion_val[!dir][s->block_index[i+2]][1] = s->current_picture.motion_val[!dir][s->block_index[i]][1];
+ s->mv[!dir][i+2][0] = s->mv[!dir][i][0] = s->cur_pic.motion_val[!dir][s->block_index[i+2]][0] = s->cur_pic.motion_val[!dir][s->block_index[i]][0];
+ s->mv[!dir][i+2][1] = s->mv[!dir][i][1] = s->cur_pic.motion_val[!dir][s->block_index[i+2]][1] = s->cur_pic.motion_val[!dir][s->block_index[i]][1];
}
ff_vc1_mc_1mv(v, dir);
}
if (cbp)
GET_MQUANT(); // p. 227
- s->current_picture.qscale_table[mb_pos] = mquant;
+ s->cur_pic.qscale_table[mb_pos] = mquant;
if (!v->ttmbf && cbp)
ttmb = get_vlc2(gb, ff_vc1_ttmb_vlc[v->tt_index], VC1_TTMB_VLC_BITS, 2);
for (i = 0; i < 6; i++) {
@@ -2467,8 +2467,8 @@ static int vc1_decode_b_mb_intfr(VC1Context *v)
v->mb_type[0][s->block_index[i]] = 0;
s->dc_val[0][s->block_index[i]] = 0;
}
- s->current_picture.mb_type[mb_pos] = MB_TYPE_SKIP;
- s->current_picture.qscale_table[mb_pos] = 0;
+ s->cur_pic.mb_type[mb_pos] = MB_TYPE_SKIP;
+ s->cur_pic.qscale_table[mb_pos] = 0;
v->blk_mv_type[s->block_index[0]] = 0;
v->blk_mv_type[s->block_index[1]] = 0;
v->blk_mv_type[s->block_index[2]] = 0;
@@ -2486,10 +2486,10 @@ static int vc1_decode_b_mb_intfr(VC1Context *v)
if (mvsw)
dir2 = !dir;
for (i = 0; i < 2; i++) {
- s->mv[dir][i+2][0] = s->mv[dir][i][0] = s->current_picture.motion_val[dir][s->block_index[i+2]][0] = s->current_picture.motion_val[dir][s->block_index[i]][0];
- s->mv[dir][i+2][1] = s->mv[dir][i][1] = s->current_picture.motion_val[dir][s->block_index[i+2]][1] = s->current_picture.motion_val[dir][s->block_index[i]][1];
- s->mv[dir2][i+2][0] = s->mv[dir2][i][0] = s->current_picture.motion_val[dir2][s->block_index[i]][0] = s->current_picture.motion_val[dir2][s->block_index[i+2]][0];
- s->mv[dir2][i+2][1] = s->mv[dir2][i][1] = s->current_picture.motion_val[dir2][s->block_index[i]][1] = s->current_picture.motion_val[dir2][s->block_index[i+2]][1];
+ s->mv[dir][i+2][0] = s->mv[dir][i][0] = s->cur_pic.motion_val[dir][s->block_index[i+2]][0] = s->cur_pic.motion_val[dir][s->block_index[i]][0];
+ s->mv[dir][i+2][1] = s->mv[dir][i][1] = s->cur_pic.motion_val[dir][s->block_index[i+2]][1] = s->cur_pic.motion_val[dir][s->block_index[i]][1];
+ s->mv[dir2][i+2][0] = s->mv[dir2][i][0] = s->cur_pic.motion_val[dir2][s->block_index[i]][0] = s->cur_pic.motion_val[dir2][s->block_index[i+2]][0];
+ s->mv[dir2][i+2][1] = s->mv[dir2][i][1] = s->cur_pic.motion_val[dir2][s->block_index[i]][1] = s->cur_pic.motion_val[dir2][s->block_index[i+2]][1];
}
} else {
v->blk_mv_type[s->block_index[0]] = 1;
@@ -2498,8 +2498,8 @@ static int vc1_decode_b_mb_intfr(VC1Context *v)
v->blk_mv_type[s->block_index[3]] = 1;
ff_vc1_pred_mv_intfr(v, 0, 0, 0, 2, v->range_x, v->range_y, !dir);
for (i = 0; i < 2; i++) {
- s->mv[!dir][i+2][0] = s->mv[!dir][i][0] = s->current_picture.motion_val[!dir][s->block_index[i+2]][0] = s->current_picture.motion_val[!dir][s->block_index[i]][0];
- s->mv[!dir][i+2][1] = s->mv[!dir][i][1] = s->current_picture.motion_val[!dir][s->block_index[i+2]][1] = s->current_picture.motion_val[!dir][s->block_index[i]][1];
+ s->mv[!dir][i+2][0] = s->mv[!dir][i][0] = s->cur_pic.motion_val[!dir][s->block_index[i+2]][0] = s->cur_pic.motion_val[!dir][s->block_index[i]][0];
+ s->mv[!dir][i+2][1] = s->mv[!dir][i][1] = s->cur_pic.motion_val[!dir][s->block_index[i+2]][1] = s->cur_pic.motion_val[!dir][s->block_index[i]][1];
}
}
}
@@ -2568,11 +2568,11 @@ static void vc1_decode_i_blocks(VC1Context *v)
update_block_index(s);
s->bdsp.clear_blocks(v->block[v->cur_blk_idx][0]);
mb_pos = s->mb_x + s->mb_y * s->mb_width;
- s->current_picture.mb_type[mb_pos] = MB_TYPE_INTRA;
- s->current_picture.qscale_table[mb_pos] = v->pq;
+ s->cur_pic.mb_type[mb_pos] = MB_TYPE_INTRA;
+ s->cur_pic.qscale_table[mb_pos] = v->pq;
for (int i = 0; i < 4; i++) {
- s->current_picture.motion_val[1][s->block_index[i]][0] = 0;
- s->current_picture.motion_val[1][s->block_index[i]][1] = 0;
+ s->cur_pic.motion_val[1][s->block_index[i]][0] = 0;
+ s->cur_pic.motion_val[1][s->block_index[i]][1] = 0;
}
// do actual MB decoding and displaying
@@ -2698,10 +2698,10 @@ static int vc1_decode_i_blocks_adv(VC1Context *v)
update_block_index(s);
s->bdsp.clear_blocks(v->block[v->cur_blk_idx][0]);
mb_pos = s->mb_x + s->mb_y * s->mb_stride;
- s->current_picture.mb_type[mb_pos + v->mb_off] = MB_TYPE_INTRA;
+ s->cur_pic.mb_type[mb_pos + v->mb_off] = MB_TYPE_INTRA;
for (int i = 0; i < 4; i++) {
- s->current_picture.motion_val[1][s->block_index[i] + v->blocks_off][0] = 0;
- s->current_picture.motion_val[1][s->block_index[i] + v->blocks_off][1] = 0;
+ s->cur_pic.motion_val[1][s->block_index[i] + v->blocks_off][0] = 0;
+ s->cur_pic.motion_val[1][s->block_index[i] + v->blocks_off][1] = 0;
}
// do actual MB decoding and displaying
@@ -2724,7 +2724,7 @@ static int vc1_decode_i_blocks_adv(VC1Context *v)
GET_MQUANT();
- s->current_picture.qscale_table[mb_pos] = mquant;
+ s->cur_pic.qscale_table[mb_pos] = mquant;
/* Set DC scale - y and c use the same */
s->y_dc_scale = s->y_dc_scale_table[FFABS(mquant)];
s->c_dc_scale = s->c_dc_scale_table[FFABS(mquant)];
@@ -2948,7 +2948,7 @@ static void vc1_decode_skip_blocks(VC1Context *v)
{
MpegEncContext *s = &v->s;
- if (!v->s.last_picture.f->data[0])
+ if (!v->s.last_pic.f->data[0])
return;
ff_er_add_slice(&s->er, 0, s->start_mb_y, s->mb_width - 1, s->end_mb_y - 1, ER_MB_END);
@@ -2957,9 +2957,9 @@ static void vc1_decode_skip_blocks(VC1Context *v)
s->mb_x = 0;
init_block_index(v);
update_block_index(s);
- memcpy(s->dest[0], s->last_picture.f->data[0] + s->mb_y * 16 * s->linesize, s->linesize * 16);
- memcpy(s->dest[1], s->last_picture.f->data[1] + s->mb_y * 8 * s->uvlinesize, s->uvlinesize * 8);
- memcpy(s->dest[2], s->last_picture.f->data[2] + s->mb_y * 8 * s->uvlinesize, s->uvlinesize * 8);
+ memcpy(s->dest[0], s->last_pic.f->data[0] + s->mb_y * 16 * s->linesize, s->linesize * 16);
+ memcpy(s->dest[1], s->last_pic.f->data[1] + s->mb_y * 8 * s->uvlinesize, s->uvlinesize * 8);
+ memcpy(s->dest[2], s->last_pic.f->data[2] + s->mb_y * 8 * s->uvlinesize, s->uvlinesize * 8);
s->first_slice_line = 0;
}
}
@@ -2969,7 +2969,7 @@ void ff_vc1_decode_blocks(VC1Context *v)
v->s.esc3_level_length = 0;
if (v->x8_type) {
- ff_intrax8_decode_picture(&v->x8, &v->s.current_picture,
+ ff_intrax8_decode_picture(&v->x8, &v->s.cur_pic,
&v->s.gb, &v->s.mb_x, &v->s.mb_y,
2 * v->pq + v->halfpq, v->pq * !v->pquantizer,
v->s.loop_filter, v->s.low_delay);
diff --git a/libavcodec/vc1_loopfilter.c b/libavcodec/vc1_loopfilter.c
index 0f990cccef..8afb4db190 100644
--- a/libavcodec/vc1_loopfilter.c
+++ b/libavcodec/vc1_loopfilter.c
@@ -500,7 +500,7 @@ void ff_vc1_p_loop_filter(VC1Context *v)
cbp,
is_intra,
i > 3 ? uvmv :
- &s->current_picture.motion_val[0][s->block_index[i] - 4 * s->b8_stride - 2 + v->blocks_off],
+ &s->cur_pic.motion_val[0][s->block_index[i] - 4 * s->b8_stride - 2 + v->blocks_off],
i > 3 ? &v->mv_f[0][s->block_index[i] - 2 * s->mb_stride - 1 + v->mb_off] :
&v->mv_f[0][s->block_index[i] - 4 * s->b8_stride - 2 + v->blocks_off],
ttblk,
@@ -520,7 +520,7 @@ void ff_vc1_p_loop_filter(VC1Context *v)
cbp,
is_intra,
i > 3 ? uvmv :
- &s->current_picture.motion_val[0][s->block_index[i] - 4 * s->b8_stride + v->blocks_off],
+ &s->cur_pic.motion_val[0][s->block_index[i] - 4 * s->b8_stride + v->blocks_off],
i > 3 ? &v->mv_f[0][s->block_index[i] - 2 * s->mb_stride + v->mb_off] :
&v->mv_f[0][s->block_index[i] - 4 * s->b8_stride + v->blocks_off],
ttblk,
@@ -543,7 +543,7 @@ void ff_vc1_p_loop_filter(VC1Context *v)
cbp,
is_intra,
i > 3 ? uvmv :
- &s->current_picture.motion_val[0][s->block_index[i] - 2 * s->b8_stride - 2 + v->blocks_off],
+ &s->cur_pic.motion_val[0][s->block_index[i] - 2 * s->b8_stride - 2 + v->blocks_off],
i > 3 ? &v->mv_f[0][s->block_index[i] - s->mb_stride - 1 + v->mb_off] :
&v->mv_f[0][s->block_index[i] - 2 * s->b8_stride - 2 + v->blocks_off],
ttblk,
@@ -562,7 +562,7 @@ void ff_vc1_p_loop_filter(VC1Context *v)
cbp,
is_intra,
i > 3 ? uvmv :
- &s->current_picture.motion_val[0][s->block_index[i] - 2 + v->blocks_off],
+ &s->cur_pic.motion_val[0][s->block_index[i] - 2 + v->blocks_off],
i > 3 ? &v->mv_f[0][s->block_index[i] - 1 + v->mb_off] :
&v->mv_f[0][s->block_index[i] - 2 + v->blocks_off],
ttblk,
@@ -583,7 +583,7 @@ void ff_vc1_p_loop_filter(VC1Context *v)
cbp,
is_intra,
i > 3 ? uvmv :
- &s->current_picture.motion_val[0][s->block_index[i] - 2 * s->b8_stride + v->blocks_off],
+ &s->cur_pic.motion_val[0][s->block_index[i] - 2 * s->b8_stride + v->blocks_off],
i > 3 ? &v->mv_f[0][s->block_index[i] - s->mb_stride + v->mb_off] :
&v->mv_f[0][s->block_index[i] - 2 * s->b8_stride + v->blocks_off],
ttblk,
@@ -602,7 +602,7 @@ void ff_vc1_p_loop_filter(VC1Context *v)
cbp,
is_intra,
i > 3 ? uvmv :
- &s->current_picture.motion_val[0][s->block_index[i] + v->blocks_off],
+ &s->cur_pic.motion_val[0][s->block_index[i] + v->blocks_off],
i > 3 ? &v->mv_f[0][s->block_index[i] + v->mb_off] :
&v->mv_f[0][s->block_index[i] + v->blocks_off],
ttblk,
@@ -625,7 +625,7 @@ void ff_vc1_p_loop_filter(VC1Context *v)
cbp,
is_intra,
i > 3 ? uvmv :
- &s->current_picture.motion_val[0][s->block_index[i] - 4 * s->b8_stride - 4 + v->blocks_off],
+ &s->cur_pic.motion_val[0][s->block_index[i] - 4 * s->b8_stride - 4 + v->blocks_off],
i > 3 ? &v->mv_f[0][s->block_index[i] - 2 * s->mb_stride - 2 + v->mb_off] :
&v->mv_f[0][s->block_index[i] - 4 * s->b8_stride - 4 + v->blocks_off],
ttblk,
@@ -646,7 +646,7 @@ void ff_vc1_p_loop_filter(VC1Context *v)
cbp,
is_intra,
i > 3 ? uvmv :
- &s->current_picture.motion_val[0][s->block_index[i] - 4 * s->b8_stride - 2 + v->blocks_off],
+ &s->cur_pic.motion_val[0][s->block_index[i] - 4 * s->b8_stride - 2 + v->blocks_off],
i > 3 ? &v->mv_f[0][s->block_index[i] - 2 * s->mb_stride - 1 + v->mb_off] :
&v->mv_f[0][s->block_index[i] - 4 * s->b8_stride - 2 + v->blocks_off],
ttblk,
@@ -665,7 +665,7 @@ void ff_vc1_p_loop_filter(VC1Context *v)
cbp,
is_intra,
i > 3 ? uvmv :
- &s->current_picture.motion_val[0][s->block_index[i] - 4 * s->b8_stride + v->blocks_off],
+ &s->cur_pic.motion_val[0][s->block_index[i] - 4 * s->b8_stride + v->blocks_off],
i > 3 ? &v->mv_f[0][s->block_index[i] - 2 * s->mb_stride + v->mb_off] :
&v->mv_f[0][s->block_index[i] - 4 * s->b8_stride + v->blocks_off],
ttblk,
@@ -688,7 +688,7 @@ void ff_vc1_p_loop_filter(VC1Context *v)
cbp,
is_intra,
i > 3 ? uvmv :
- &s->current_picture.motion_val[0][s->block_index[i] - 2 * s->b8_stride - 4 + v->blocks_off],
+ &s->cur_pic.motion_val[0][s->block_index[i] - 2 * s->b8_stride - 4 + v->blocks_off],
i > 3 ? &v->mv_f[0][s->block_index[i] - s->mb_stride - 2 + v->mb_off] :
&v->mv_f[0][s->block_index[i] - 2 * s->b8_stride - 4 + v->blocks_off],
ttblk,
@@ -709,7 +709,7 @@ void ff_vc1_p_loop_filter(VC1Context *v)
cbp,
is_intra,
i > 3 ? uvmv :
- &s->current_picture.motion_val[0][s->block_index[i] - 2 * s->b8_stride - 2 + v->blocks_off],
+ &s->cur_pic.motion_val[0][s->block_index[i] - 2 * s->b8_stride - 2 + v->blocks_off],
i > 3 ? &v->mv_f[0][s->block_index[i] - s->mb_stride - 1 + v->mb_off] :
&v->mv_f[0][s->block_index[i] - 2 * s->b8_stride - 2 + v->blocks_off],
ttblk,
@@ -728,7 +728,7 @@ void ff_vc1_p_loop_filter(VC1Context *v)
cbp,
is_intra,
i > 3 ? uvmv :
- &s->current_picture.motion_val[0][s->block_index[i] - 2 * s->b8_stride + v->blocks_off],
+ &s->cur_pic.motion_val[0][s->block_index[i] - 2 * s->b8_stride + v->blocks_off],
i > 3 ? &v->mv_f[0][s->block_index[i] - s->mb_stride + v->mb_off] :
&v->mv_f[0][s->block_index[i] - 2 * s->b8_stride + v->blocks_off],
ttblk,
@@ -749,7 +749,7 @@ void ff_vc1_p_loop_filter(VC1Context *v)
cbp,
is_intra,
i > 3 ? uvmv :
- &s->current_picture.motion_val[0][s->block_index[i] - 4 + v->blocks_off],
+ &s->cur_pic.motion_val[0][s->block_index[i] - 4 + v->blocks_off],
i > 3 ? &v->mv_f[0][s->block_index[i] - 2 + v->mb_off] :
&v->mv_f[0][s->block_index[i] - 4 + v->blocks_off],
ttblk,
@@ -770,7 +770,7 @@ void ff_vc1_p_loop_filter(VC1Context *v)
cbp,
is_intra,
i > 3 ? uvmv :
- &s->current_picture.motion_val[0][s->block_index[i] - 2 + v->blocks_off],
+ &s->cur_pic.motion_val[0][s->block_index[i] - 2 + v->blocks_off],
i > 3 ? &v->mv_f[0][s->block_index[i] - 1 + v->mb_off] :
&v->mv_f[0][s->block_index[i] - 2 + v->blocks_off],
ttblk,
@@ -789,7 +789,7 @@ void ff_vc1_p_loop_filter(VC1Context *v)
cbp,
is_intra,
i > 3 ? uvmv :
- &s->current_picture.motion_val[0][s->block_index[i] + v->blocks_off],
+ &s->cur_pic.motion_val[0][s->block_index[i] + v->blocks_off],
i > 3 ? &v->mv_f[0][s->block_index[i] + v->mb_off] :
&v->mv_f[0][s->block_index[i] + v->blocks_off],
ttblk,
diff --git a/libavcodec/vc1_mc.c b/libavcodec/vc1_mc.c
index 8f0b3f6fab..e24328569d 100644
--- a/libavcodec/vc1_mc.c
+++ b/libavcodec/vc1_mc.c
@@ -184,11 +184,11 @@ void ff_vc1_mc_1mv(VC1Context *v, int dir)
if ((!v->field_mode ||
(v->ref_field_type[dir] == 1 && v->cur_field_type == 1)) &&
- !v->s.last_picture.f->data[0])
+ !v->s.last_pic.f->data[0])
return;
- linesize = s->current_picture_ptr->f->linesize[0];
- uvlinesize = s->current_picture_ptr->f->linesize[1];
+ linesize = s->cur_pic_ptr->f->linesize[0];
+ uvlinesize = s->cur_pic_ptr->f->linesize[1];
mx = s->mv[dir][0][0];
my = s->mv[dir][0][1];
@@ -196,8 +196,8 @@ void ff_vc1_mc_1mv(VC1Context *v, int dir)
// store motion vectors for further use in B-frames
if (s->pict_type == AV_PICTURE_TYPE_P) {
for (i = 0; i < 4; i++) {
- s->current_picture.motion_val[1][s->block_index[i] + v->blocks_off][0] = mx;
- s->current_picture.motion_val[1][s->block_index[i] + v->blocks_off][1] = my;
+ s->cur_pic.motion_val[1][s->block_index[i] + v->blocks_off][0] = mx;
+ s->cur_pic.motion_val[1][s->block_index[i] + v->blocks_off][1] = my;
}
}
@@ -219,30 +219,30 @@ void ff_vc1_mc_1mv(VC1Context *v, int dir)
}
if (!dir) {
if (v->field_mode && (v->cur_field_type != v->ref_field_type[dir]) && v->second_field) {
- srcY = s->current_picture.f->data[0];
- srcU = s->current_picture.f->data[1];
- srcV = s->current_picture.f->data[2];
+ srcY = s->cur_pic.f->data[0];
+ srcU = s->cur_pic.f->data[1];
+ srcV = s->cur_pic.f->data[2];
luty = v->curr_luty;
lutuv = v->curr_lutuv;
use_ic = *v->curr_use_ic;
interlace = 1;
} else {
- srcY = s->last_picture.f->data[0];
- srcU = s->last_picture.f->data[1];
- srcV = s->last_picture.f->data[2];
+ srcY = s->last_pic.f->data[0];
+ srcU = s->last_pic.f->data[1];
+ srcV = s->last_pic.f->data[2];
luty = v->last_luty;
lutuv = v->last_lutuv;
use_ic = v->last_use_ic;
- interlace = !!(s->last_picture.f->flags & AV_FRAME_FLAG_INTERLACED);
+ interlace = !!(s->last_pic.f->flags & AV_FRAME_FLAG_INTERLACED);
}
} else {
- srcY = s->next_picture.f->data[0];
- srcU = s->next_picture.f->data[1];
- srcV = s->next_picture.f->data[2];
+ srcY = s->next_pic.f->data[0];
+ srcU = s->next_pic.f->data[1];
+ srcV = s->next_pic.f->data[2];
luty = v->next_luty;
lutuv = v->next_lutuv;
use_ic = v->next_use_ic;
- interlace = !!(s->next_picture.f->flags & AV_FRAME_FLAG_INTERLACED);
+ interlace = !!(s->next_pic.f->flags & AV_FRAME_FLAG_INTERLACED);
}
if (!srcY || !srcU) {
@@ -464,31 +464,31 @@ void ff_vc1_mc_4mv_luma(VC1Context *v, int n, int dir, int avg)
if ((!v->field_mode ||
(v->ref_field_type[dir] == 1 && v->cur_field_type == 1)) &&
- !v->s.last_picture.f->data[0])
+ !v->s.last_pic.f->data[0])
return;
- linesize = s->current_picture_ptr->f->linesize[0];
+ linesize = s->cur_pic_ptr->f->linesize[0];
mx = s->mv[dir][n][0];
my = s->mv[dir][n][1];
if (!dir) {
if (v->field_mode && (v->cur_field_type != v->ref_field_type[dir]) && v->second_field) {
- srcY = s->current_picture.f->data[0];
+ srcY = s->cur_pic.f->data[0];
luty = v->curr_luty;
use_ic = *v->curr_use_ic;
interlace = 1;
} else {
- srcY = s->last_picture.f->data[0];
+ srcY = s->last_pic.f->data[0];
luty = v->last_luty;
use_ic = v->last_use_ic;
- interlace = !!(s->last_picture.f->flags & AV_FRAME_FLAG_INTERLACED);
+ interlace = !!(s->last_pic.f->flags & AV_FRAME_FLAG_INTERLACED);
}
} else {
- srcY = s->next_picture.f->data[0];
+ srcY = s->next_pic.f->data[0];
luty = v->next_luty;
use_ic = v->next_use_ic;
- interlace = !!(s->next_picture.f->flags & AV_FRAME_FLAG_INTERLACED);
+ interlace = !!(s->next_pic.f->flags & AV_FRAME_FLAG_INTERLACED);
}
if (!srcY) {
@@ -503,8 +503,8 @@ void ff_vc1_mc_4mv_luma(VC1Context *v, int n, int dir, int avg)
if (s->pict_type == AV_PICTURE_TYPE_P && n == 3 && v->field_mode) {
int opp_count = get_luma_mv(v, 0,
- &s->current_picture.motion_val[1][s->block_index[0] + v->blocks_off][0],
- &s->current_picture.motion_val[1][s->block_index[0] + v->blocks_off][1]);
+ &s->cur_pic.motion_val[1][s->block_index[0] + v->blocks_off][0],
+ &s->cur_pic.motion_val[1][s->block_index[0] + v->blocks_off][1]);
int k, f = opp_count > 2;
for (k = 0; k < 4; k++)
v->mv_f[1][s->block_index[k] + v->blocks_off] = f;
@@ -515,8 +515,8 @@ void ff_vc1_mc_4mv_luma(VC1Context *v, int n, int dir, int avg)
int width = s->avctx->coded_width;
int height = s->avctx->coded_height >> 1;
if (s->pict_type == AV_PICTURE_TYPE_P) {
- s->current_picture.motion_val[1][s->block_index[n] + v->blocks_off][0] = mx;
- s->current_picture.motion_val[1][s->block_index[n] + v->blocks_off][1] = my;
+ s->cur_pic.motion_val[1][s->block_index[n] + v->blocks_off][0] = mx;
+ s->cur_pic.motion_val[1][s->block_index[n] + v->blocks_off][1] = my;
}
qx = (s->mb_x * 16) + (mx >> 2);
qy = (s->mb_y * 8) + (my >> 3);
@@ -645,7 +645,7 @@ void ff_vc1_mc_4mv_chroma(VC1Context *v, int dir)
int interlace;
int uvlinesize;
- if (!v->field_mode && !v->s.last_picture.f->data[0])
+ if (!v->field_mode && !v->s.last_pic.f->data[0])
return;
if (CONFIG_GRAY && s->avctx->flags & AV_CODEC_FLAG_GRAY)
return;
@@ -654,8 +654,8 @@ void ff_vc1_mc_4mv_chroma(VC1Context *v, int dir)
if (!v->field_mode || !v->numref) {
int valid_count = get_chroma_mv(v, dir, &tx, &ty);
if (!valid_count) {
- s->current_picture.motion_val[1][s->block_index[0] + v->blocks_off][0] = 0;
- s->current_picture.motion_val[1][s->block_index[0] + v->blocks_off][1] = 0;
+ s->cur_pic.motion_val[1][s->block_index[0] + v->blocks_off][0] = 0;
+ s->cur_pic.motion_val[1][s->block_index[0] + v->blocks_off][1] = 0;
v->luma_mv[s->mb_x][0] = v->luma_mv[s->mb_x][1] = 0;
return; //no need to do MC for intra blocks
}
@@ -664,12 +664,12 @@ void ff_vc1_mc_4mv_chroma(VC1Context *v, int dir)
int opp_count = get_luma_mv(v, dir, &tx, &ty);
chroma_ref_type = v->cur_field_type ^ (opp_count > 2);
}
- if (v->field_mode && chroma_ref_type == 1 && v->cur_field_type == 1 && !v->s.last_picture.f->data[0])
+ if (v->field_mode && chroma_ref_type == 1 && v->cur_field_type == 1 && !v->s.last_pic.f->data[0])
return;
- s->current_picture.motion_val[1][s->block_index[0] + v->blocks_off][0] = tx;
- s->current_picture.motion_val[1][s->block_index[0] + v->blocks_off][1] = ty;
+ s->cur_pic.motion_val[1][s->block_index[0] + v->blocks_off][0] = tx;
+ s->cur_pic.motion_val[1][s->block_index[0] + v->blocks_off][1] = ty;
- uvlinesize = s->current_picture_ptr->f->linesize[1];
+ uvlinesize = s->cur_pic_ptr->f->linesize[1];
uvmx = (tx + ((tx & 3) == 3)) >> 1;
uvmy = (ty + ((ty & 3) == 3)) >> 1;
@@ -698,24 +698,24 @@ void ff_vc1_mc_4mv_chroma(VC1Context *v, int dir)
if (!dir) {
if (v->field_mode && (v->cur_field_type != chroma_ref_type) && v->second_field) {
- srcU = s->current_picture.f->data[1];
- srcV = s->current_picture.f->data[2];
+ srcU = s->cur_pic.f->data[1];
+ srcV = s->cur_pic.f->data[2];
lutuv = v->curr_lutuv;
use_ic = *v->curr_use_ic;
interlace = 1;
} else {
- srcU = s->last_picture.f->data[1];
- srcV = s->last_picture.f->data[2];
+ srcU = s->last_pic.f->data[1];
+ srcV = s->last_pic.f->data[2];
lutuv = v->last_lutuv;
use_ic = v->last_use_ic;
- interlace = !!(s->last_picture.f->flags & AV_FRAME_FLAG_INTERLACED);
+ interlace = !!(s->last_pic.f->flags & AV_FRAME_FLAG_INTERLACED);
}
} else {
- srcU = s->next_picture.f->data[1];
- srcV = s->next_picture.f->data[2];
+ srcU = s->next_pic.f->data[1];
+ srcV = s->next_pic.f->data[2];
lutuv = v->next_lutuv;
use_ic = v->next_use_ic;
- interlace = !!(s->next_picture.f->flags & AV_FRAME_FLAG_INTERLACED);
+ interlace = !!(s->next_pic.f->flags & AV_FRAME_FLAG_INTERLACED);
}
if (!srcU) {
@@ -856,7 +856,7 @@ void ff_vc1_mc_4mv_chroma4(VC1Context *v, int dir, int dir2, int avg)
if (CONFIG_GRAY && s->avctx->flags & AV_CODEC_FLAG_GRAY)
return;
- uvlinesize = s->current_picture_ptr->f->linesize[1];
+ uvlinesize = s->cur_pic_ptr->f->linesize[1];
for (i = 0; i < 4; i++) {
int d = i < 2 ? dir: dir2;
@@ -880,17 +880,17 @@ void ff_vc1_mc_4mv_chroma4(VC1Context *v, int dir, int dir2, int avg)
else
uvsrc_y = av_clip(uvsrc_y, -8, s->avctx->coded_height >> 1);
if (i < 2 ? dir : dir2) {
- srcU = s->next_picture.f->data[1];
- srcV = s->next_picture.f->data[2];
+ srcU = s->next_pic.f->data[1];
+ srcV = s->next_pic.f->data[2];
lutuv = v->next_lutuv;
use_ic = v->next_use_ic;
- interlace = !!(s->next_picture.f->flags & AV_FRAME_FLAG_INTERLACED);
+ interlace = !!(s->next_pic.f->flags & AV_FRAME_FLAG_INTERLACED);
} else {
- srcU = s->last_picture.f->data[1];
- srcV = s->last_picture.f->data[2];
+ srcU = s->last_pic.f->data[1];
+ srcV = s->last_pic.f->data[2];
lutuv = v->last_lutuv;
use_ic = v->last_use_ic;
- interlace = !!(s->last_picture.f->flags & AV_FRAME_FLAG_INTERLACED);
+ interlace = !!(s->last_pic.f->flags & AV_FRAME_FLAG_INTERLACED);
}
if (!srcU)
return;
@@ -1012,11 +1012,11 @@ void ff_vc1_interp_mc(VC1Context *v)
int interlace;
int linesize, uvlinesize;
- if (!v->field_mode && !v->s.next_picture.f->data[0])
+ if (!v->field_mode && !v->s.next_pic.f->data[0])
return;
- linesize = s->current_picture_ptr->f->linesize[0];
- uvlinesize = s->current_picture_ptr->f->linesize[1];
+ linesize = s->cur_pic_ptr->f->linesize[0];
+ uvlinesize = s->cur_pic_ptr->f->linesize[1];
mx = s->mv[1][0][0];
my = s->mv[1][0][1];
@@ -1030,11 +1030,11 @@ void ff_vc1_interp_mc(VC1Context *v)
uvmx = uvmx + ((uvmx < 0) ? -(uvmx & 1) : (uvmx & 1));
uvmy = uvmy + ((uvmy < 0) ? -(uvmy & 1) : (uvmy & 1));
}
- srcY = s->next_picture.f->data[0];
- srcU = s->next_picture.f->data[1];
- srcV = s->next_picture.f->data[2];
+ srcY = s->next_pic.f->data[0];
+ srcU = s->next_pic.f->data[1];
+ srcV = s->next_pic.f->data[2];
- interlace = !!(s->next_picture.f->flags & AV_FRAME_FLAG_INTERLACED);
+ interlace = !!(s->next_pic.f->flags & AV_FRAME_FLAG_INTERLACED);
src_x = s->mb_x * 16 + (mx >> 2);
src_y = s->mb_y * 16 + (my >> 2);
diff --git a/libavcodec/vc1_pred.c b/libavcodec/vc1_pred.c
index ad2caf6db2..51ad668f23 100644
--- a/libavcodec/vc1_pred.c
+++ b/libavcodec/vc1_pred.c
@@ -241,24 +241,24 @@ void ff_vc1_pred_mv(VC1Context *v, int n, int dmv_x, int dmv_y,
xy = s->block_index[n];
if (s->mb_intra) {
- s->mv[0][n][0] = s->current_picture.motion_val[0][xy + v->blocks_off][0] = 0;
- s->mv[0][n][1] = s->current_picture.motion_val[0][xy + v->blocks_off][1] = 0;
- s->current_picture.motion_val[1][xy + v->blocks_off][0] = 0;
- s->current_picture.motion_val[1][xy + v->blocks_off][1] = 0;
+ s->mv[0][n][0] = s->cur_pic.motion_val[0][xy + v->blocks_off][0] = 0;
+ s->mv[0][n][1] = s->cur_pic.motion_val[0][xy + v->blocks_off][1] = 0;
+ s->cur_pic.motion_val[1][xy + v->blocks_off][0] = 0;
+ s->cur_pic.motion_val[1][xy + v->blocks_off][1] = 0;
if (mv1) { /* duplicate motion data for 1-MV block */
- s->current_picture.motion_val[0][xy + 1 + v->blocks_off][0] = 0;
- s->current_picture.motion_val[0][xy + 1 + v->blocks_off][1] = 0;
- s->current_picture.motion_val[0][xy + wrap + v->blocks_off][0] = 0;
- s->current_picture.motion_val[0][xy + wrap + v->blocks_off][1] = 0;
- s->current_picture.motion_val[0][xy + wrap + 1 + v->blocks_off][0] = 0;
- s->current_picture.motion_val[0][xy + wrap + 1 + v->blocks_off][1] = 0;
+ s->cur_pic.motion_val[0][xy + 1 + v->blocks_off][0] = 0;
+ s->cur_pic.motion_val[0][xy + 1 + v->blocks_off][1] = 0;
+ s->cur_pic.motion_val[0][xy + wrap + v->blocks_off][0] = 0;
+ s->cur_pic.motion_val[0][xy + wrap + v->blocks_off][1] = 0;
+ s->cur_pic.motion_val[0][xy + wrap + 1 + v->blocks_off][0] = 0;
+ s->cur_pic.motion_val[0][xy + wrap + 1 + v->blocks_off][1] = 0;
v->luma_mv[s->mb_x][0] = v->luma_mv[s->mb_x][1] = 0;
- s->current_picture.motion_val[1][xy + 1 + v->blocks_off][0] = 0;
- s->current_picture.motion_val[1][xy + 1 + v->blocks_off][1] = 0;
- s->current_picture.motion_val[1][xy + wrap + v->blocks_off][0] = 0;
- s->current_picture.motion_val[1][xy + wrap + v->blocks_off][1] = 0;
- s->current_picture.motion_val[1][xy + wrap + 1 + v->blocks_off][0] = 0;
- s->current_picture.motion_val[1][xy + wrap + 1 + v->blocks_off][1] = 0;
+ s->cur_pic.motion_val[1][xy + 1 + v->blocks_off][0] = 0;
+ s->cur_pic.motion_val[1][xy + 1 + v->blocks_off][1] = 0;
+ s->cur_pic.motion_val[1][xy + wrap + v->blocks_off][0] = 0;
+ s->cur_pic.motion_val[1][xy + wrap + v->blocks_off][1] = 0;
+ s->cur_pic.motion_val[1][xy + wrap + 1 + v->blocks_off][0] = 0;
+ s->cur_pic.motion_val[1][xy + wrap + 1 + v->blocks_off][1] = 0;
}
return;
}
@@ -301,7 +301,7 @@ void ff_vc1_pred_mv(VC1Context *v, int n, int dmv_x, int dmv_y,
}
if (a_valid) {
- A = s->current_picture.motion_val[dir][xy - wrap + v->blocks_off];
+ A = s->cur_pic.motion_val[dir][xy - wrap + v->blocks_off];
a_f = v->mv_f[dir][xy - wrap + v->blocks_off];
num_oppfield += a_f;
num_samefield += 1 - a_f;
@@ -312,7 +312,7 @@ void ff_vc1_pred_mv(VC1Context *v, int n, int dmv_x, int dmv_y,
a_f = 0;
}
if (b_valid) {
- B = s->current_picture.motion_val[dir][xy - wrap + off + v->blocks_off];
+ B = s->cur_pic.motion_val[dir][xy - wrap + off + v->blocks_off];
b_f = v->mv_f[dir][xy - wrap + off + v->blocks_off];
num_oppfield += b_f;
num_samefield += 1 - b_f;
@@ -323,7 +323,7 @@ void ff_vc1_pred_mv(VC1Context *v, int n, int dmv_x, int dmv_y,
b_f = 0;
}
if (c_valid) {
- C = s->current_picture.motion_val[dir][xy - 1 + v->blocks_off];
+ C = s->cur_pic.motion_val[dir][xy - 1 + v->blocks_off];
c_f = v->mv_f[dir][xy - 1 + v->blocks_off];
num_oppfield += c_f;
num_samefield += 1 - c_f;
@@ -451,15 +451,15 @@ void ff_vc1_pred_mv(VC1Context *v, int n, int dmv_x, int dmv_y,
if (v->field_mode && v->cur_field_type && v->ref_field_type[dir] == 0)
y_bias = 1;
/* store MV using signed modulus of MV range defined in 4.11 */
- s->mv[dir][n][0] = s->current_picture.motion_val[dir][xy + v->blocks_off][0] = ((px + dmv_x + r_x) & ((r_x << 1) - 1)) - r_x;
- s->mv[dir][n][1] = s->current_picture.motion_val[dir][xy + v->blocks_off][1] = ((py + dmv_y + r_y - y_bias) & ((r_y << 1) - 1)) - r_y + y_bias;
+ s->mv[dir][n][0] = s->cur_pic.motion_val[dir][xy + v->blocks_off][0] = ((px + dmv_x + r_x) & ((r_x << 1) - 1)) - r_x;
+ s->mv[dir][n][1] = s->cur_pic.motion_val[dir][xy + v->blocks_off][1] = ((py + dmv_y + r_y - y_bias) & ((r_y << 1) - 1)) - r_y + y_bias;
if (mv1) { /* duplicate motion data for 1-MV block */
- s->current_picture.motion_val[dir][xy + 1 + v->blocks_off][0] = s->current_picture.motion_val[dir][xy + v->blocks_off][0];
- s->current_picture.motion_val[dir][xy + 1 + v->blocks_off][1] = s->current_picture.motion_val[dir][xy + v->blocks_off][1];
- s->current_picture.motion_val[dir][xy + wrap + v->blocks_off][0] = s->current_picture.motion_val[dir][xy + v->blocks_off][0];
- s->current_picture.motion_val[dir][xy + wrap + v->blocks_off][1] = s->current_picture.motion_val[dir][xy + v->blocks_off][1];
- s->current_picture.motion_val[dir][xy + wrap + 1 + v->blocks_off][0] = s->current_picture.motion_val[dir][xy + v->blocks_off][0];
- s->current_picture.motion_val[dir][xy + wrap + 1 + v->blocks_off][1] = s->current_picture.motion_val[dir][xy + v->blocks_off][1];
+ s->cur_pic.motion_val[dir][xy + 1 + v->blocks_off][0] = s->cur_pic.motion_val[dir][xy + v->blocks_off][0];
+ s->cur_pic.motion_val[dir][xy + 1 + v->blocks_off][1] = s->cur_pic.motion_val[dir][xy + v->blocks_off][1];
+ s->cur_pic.motion_val[dir][xy + wrap + v->blocks_off][0] = s->cur_pic.motion_val[dir][xy + v->blocks_off][0];
+ s->cur_pic.motion_val[dir][xy + wrap + v->blocks_off][1] = s->cur_pic.motion_val[dir][xy + v->blocks_off][1];
+ s->cur_pic.motion_val[dir][xy + wrap + 1 + v->blocks_off][0] = s->cur_pic.motion_val[dir][xy + v->blocks_off][0];
+ s->cur_pic.motion_val[dir][xy + wrap + 1 + v->blocks_off][1] = s->cur_pic.motion_val[dir][xy + v->blocks_off][1];
v->mv_f[dir][xy + 1 + v->blocks_off] = v->mv_f[dir][xy + v->blocks_off];
v->mv_f[dir][xy + wrap + v->blocks_off] = v->mv_f[dir][xy + wrap + 1 + v->blocks_off] = v->mv_f[dir][xy + v->blocks_off];
}
@@ -483,24 +483,24 @@ void ff_vc1_pred_mv_intfr(VC1Context *v, int n, int dmv_x, int dmv_y,
xy = s->block_index[n];
if (s->mb_intra) {
- s->mv[0][n][0] = s->current_picture.motion_val[0][xy][0] = 0;
- s->mv[0][n][1] = s->current_picture.motion_val[0][xy][1] = 0;
- s->current_picture.motion_val[1][xy][0] = 0;
- s->current_picture.motion_val[1][xy][1] = 0;
+ s->mv[0][n][0] = s->cur_pic.motion_val[0][xy][0] = 0;
+ s->mv[0][n][1] = s->cur_pic.motion_val[0][xy][1] = 0;
+ s->cur_pic.motion_val[1][xy][0] = 0;
+ s->cur_pic.motion_val[1][xy][1] = 0;
if (mvn == 1) { /* duplicate motion data for 1-MV block */
- s->current_picture.motion_val[0][xy + 1][0] = 0;
- s->current_picture.motion_val[0][xy + 1][1] = 0;
- s->current_picture.motion_val[0][xy + wrap][0] = 0;
- s->current_picture.motion_val[0][xy + wrap][1] = 0;
- s->current_picture.motion_val[0][xy + wrap + 1][0] = 0;
- s->current_picture.motion_val[0][xy + wrap + 1][1] = 0;
+ s->cur_pic.motion_val[0][xy + 1][0] = 0;
+ s->cur_pic.motion_val[0][xy + 1][1] = 0;
+ s->cur_pic.motion_val[0][xy + wrap][0] = 0;
+ s->cur_pic.motion_val[0][xy + wrap][1] = 0;
+ s->cur_pic.motion_val[0][xy + wrap + 1][0] = 0;
+ s->cur_pic.motion_val[0][xy + wrap + 1][1] = 0;
v->luma_mv[s->mb_x][0] = v->luma_mv[s->mb_x][1] = 0;
- s->current_picture.motion_val[1][xy + 1][0] = 0;
- s->current_picture.motion_val[1][xy + 1][1] = 0;
- s->current_picture.motion_val[1][xy + wrap][0] = 0;
- s->current_picture.motion_val[1][xy + wrap][1] = 0;
- s->current_picture.motion_val[1][xy + wrap + 1][0] = 0;
- s->current_picture.motion_val[1][xy + wrap + 1][1] = 0;
+ s->cur_pic.motion_val[1][xy + 1][0] = 0;
+ s->cur_pic.motion_val[1][xy + 1][1] = 0;
+ s->cur_pic.motion_val[1][xy + wrap][0] = 0;
+ s->cur_pic.motion_val[1][xy + wrap][1] = 0;
+ s->cur_pic.motion_val[1][xy + wrap + 1][0] = 0;
+ s->cur_pic.motion_val[1][xy + wrap + 1][1] = 0;
}
return;
}
@@ -510,14 +510,14 @@ void ff_vc1_pred_mv_intfr(VC1Context *v, int n, int dmv_x, int dmv_y,
if (s->mb_x || (n == 1) || (n == 3)) {
if ((v->blk_mv_type[xy]) // current block (MB) has a field MV
|| (!v->blk_mv_type[xy] && !v->blk_mv_type[xy - 1])) { // or both have frame MV
- A[0] = s->current_picture.motion_val[dir][xy - 1][0];
- A[1] = s->current_picture.motion_val[dir][xy - 1][1];
+ A[0] = s->cur_pic.motion_val[dir][xy - 1][0];
+ A[1] = s->cur_pic.motion_val[dir][xy - 1][1];
a_valid = 1;
} else { // current block has frame mv and cand. has field MV (so average)
- A[0] = (s->current_picture.motion_val[dir][xy - 1][0]
- + s->current_picture.motion_val[dir][xy - 1 + off * wrap][0] + 1) >> 1;
- A[1] = (s->current_picture.motion_val[dir][xy - 1][1]
- + s->current_picture.motion_val[dir][xy - 1 + off * wrap][1] + 1) >> 1;
+ A[0] = (s->cur_pic.motion_val[dir][xy - 1][0]
+ + s->cur_pic.motion_val[dir][xy - 1 + off * wrap][0] + 1) >> 1;
+ A[1] = (s->cur_pic.motion_val[dir][xy - 1][1]
+ + s->cur_pic.motion_val[dir][xy - 1 + off * wrap][1] + 1) >> 1;
a_valid = 1;
}
if (!(n & 1) && v->is_intra[s->mb_x - 1]) {
@@ -537,11 +537,11 @@ void ff_vc1_pred_mv_intfr(VC1Context *v, int n, int dmv_x, int dmv_y,
if (v->blk_mv_type[pos_b] && v->blk_mv_type[xy]) {
n_adj = (n & 2) | (n & 1);
}
- B[0] = s->current_picture.motion_val[dir][s->block_index[n_adj] - 2 * wrap][0];
- B[1] = s->current_picture.motion_val[dir][s->block_index[n_adj] - 2 * wrap][1];
+ B[0] = s->cur_pic.motion_val[dir][s->block_index[n_adj] - 2 * wrap][0];
+ B[1] = s->cur_pic.motion_val[dir][s->block_index[n_adj] - 2 * wrap][1];
if (v->blk_mv_type[pos_b] && !v->blk_mv_type[xy]) {
- B[0] = (B[0] + s->current_picture.motion_val[dir][s->block_index[n_adj ^ 2] - 2 * wrap][0] + 1) >> 1;
- B[1] = (B[1] + s->current_picture.motion_val[dir][s->block_index[n_adj ^ 2] - 2 * wrap][1] + 1) >> 1;
+ B[0] = (B[0] + s->cur_pic.motion_val[dir][s->block_index[n_adj ^ 2] - 2 * wrap][0] + 1) >> 1;
+ B[1] = (B[1] + s->cur_pic.motion_val[dir][s->block_index[n_adj ^ 2] - 2 * wrap][1] + 1) >> 1;
}
}
if (s->mb_width > 1) {
@@ -552,11 +552,11 @@ void ff_vc1_pred_mv_intfr(VC1Context *v, int n, int dmv_x, int dmv_y,
if (v->blk_mv_type[pos_c] && v->blk_mv_type[xy]) {
n_adj = n & 2;
}
- C[0] = s->current_picture.motion_val[dir][s->block_index[n_adj] - 2 * wrap + 2][0];
- C[1] = s->current_picture.motion_val[dir][s->block_index[n_adj] - 2 * wrap + 2][1];
+ C[0] = s->cur_pic.motion_val[dir][s->block_index[n_adj] - 2 * wrap + 2][0];
+ C[1] = s->cur_pic.motion_val[dir][s->block_index[n_adj] - 2 * wrap + 2][1];
if (v->blk_mv_type[pos_c] && !v->blk_mv_type[xy]) {
- C[0] = (1 + C[0] + (s->current_picture.motion_val[dir][s->block_index[n_adj ^ 2] - 2 * wrap + 2][0])) >> 1;
- C[1] = (1 + C[1] + (s->current_picture.motion_val[dir][s->block_index[n_adj ^ 2] - 2 * wrap + 2][1])) >> 1;
+ C[0] = (1 + C[0] + (s->cur_pic.motion_val[dir][s->block_index[n_adj ^ 2] - 2 * wrap + 2][0])) >> 1;
+ C[1] = (1 + C[1] + (s->cur_pic.motion_val[dir][s->block_index[n_adj ^ 2] - 2 * wrap + 2][1])) >> 1;
}
if (s->mb_x == s->mb_width - 1) {
if (!v->is_intra[s->mb_x - s->mb_stride - 1]) {
@@ -566,11 +566,11 @@ void ff_vc1_pred_mv_intfr(VC1Context *v, int n, int dmv_x, int dmv_y,
if (v->blk_mv_type[pos_c] && v->blk_mv_type[xy]) {
n_adj = n | 1;
}
- C[0] = s->current_picture.motion_val[dir][s->block_index[n_adj] - 2 * wrap - 2][0];
- C[1] = s->current_picture.motion_val[dir][s->block_index[n_adj] - 2 * wrap - 2][1];
+ C[0] = s->cur_pic.motion_val[dir][s->block_index[n_adj] - 2 * wrap - 2][0];
+ C[1] = s->cur_pic.motion_val[dir][s->block_index[n_adj] - 2 * wrap - 2][1];
if (v->blk_mv_type[pos_c] && !v->blk_mv_type[xy]) {
- C[0] = (1 + C[0] + s->current_picture.motion_val[dir][s->block_index[1] - 2 * wrap - 2][0]) >> 1;
- C[1] = (1 + C[1] + s->current_picture.motion_val[dir][s->block_index[1] - 2 * wrap - 2][1]) >> 1;
+ C[0] = (1 + C[0] + s->cur_pic.motion_val[dir][s->block_index[1] - 2 * wrap - 2][0]) >> 1;
+ C[1] = (1 + C[1] + s->cur_pic.motion_val[dir][s->block_index[1] - 2 * wrap - 2][1]) >> 1;
}
} else
c_valid = 0;
@@ -581,12 +581,12 @@ void ff_vc1_pred_mv_intfr(VC1Context *v, int n, int dmv_x, int dmv_y,
} else {
pos_b = s->block_index[1];
b_valid = 1;
- B[0] = s->current_picture.motion_val[dir][pos_b][0];
- B[1] = s->current_picture.motion_val[dir][pos_b][1];
+ B[0] = s->cur_pic.motion_val[dir][pos_b][0];
+ B[1] = s->cur_pic.motion_val[dir][pos_b][1];
pos_c = s->block_index[0];
c_valid = 1;
- C[0] = s->current_picture.motion_val[dir][pos_c][0];
- C[1] = s->current_picture.motion_val[dir][pos_c][1];
+ C[0] = s->cur_pic.motion_val[dir][pos_c][0];
+ C[1] = s->cur_pic.motion_val[dir][pos_c][1];
}
total_valid = a_valid + b_valid + c_valid;
@@ -671,18 +671,18 @@ void ff_vc1_pred_mv_intfr(VC1Context *v, int n, int dmv_x, int dmv_y,
}
/* store MV using signed modulus of MV range defined in 4.11 */
- s->mv[dir][n][0] = s->current_picture.motion_val[dir][xy][0] = ((px + dmv_x + r_x) & ((r_x << 1) - 1)) - r_x;
- s->mv[dir][n][1] = s->current_picture.motion_val[dir][xy][1] = ((py + dmv_y + r_y) & ((r_y << 1) - 1)) - r_y;
+ s->mv[dir][n][0] = s->cur_pic.motion_val[dir][xy][0] = ((px + dmv_x + r_x) & ((r_x << 1) - 1)) - r_x;
+ s->mv[dir][n][1] = s->cur_pic.motion_val[dir][xy][1] = ((py + dmv_y + r_y) & ((r_y << 1) - 1)) - r_y;
if (mvn == 1) { /* duplicate motion data for 1-MV block */
- s->current_picture.motion_val[dir][xy + 1 ][0] = s->current_picture.motion_val[dir][xy][0];
- s->current_picture.motion_val[dir][xy + 1 ][1] = s->current_picture.motion_val[dir][xy][1];
- s->current_picture.motion_val[dir][xy + wrap ][0] = s->current_picture.motion_val[dir][xy][0];
- s->current_picture.motion_val[dir][xy + wrap ][1] = s->current_picture.motion_val[dir][xy][1];
- s->current_picture.motion_val[dir][xy + wrap + 1][0] = s->current_picture.motion_val[dir][xy][0];
- s->current_picture.motion_val[dir][xy + wrap + 1][1] = s->current_picture.motion_val[dir][xy][1];
+ s->cur_pic.motion_val[dir][xy + 1 ][0] = s->cur_pic.motion_val[dir][xy][0];
+ s->cur_pic.motion_val[dir][xy + 1 ][1] = s->cur_pic.motion_val[dir][xy][1];
+ s->cur_pic.motion_val[dir][xy + wrap ][0] = s->cur_pic.motion_val[dir][xy][0];
+ s->cur_pic.motion_val[dir][xy + wrap ][1] = s->cur_pic.motion_val[dir][xy][1];
+ s->cur_pic.motion_val[dir][xy + wrap + 1][0] = s->cur_pic.motion_val[dir][xy][0];
+ s->cur_pic.motion_val[dir][xy + wrap + 1][1] = s->cur_pic.motion_val[dir][xy][1];
} else if (mvn == 2) { /* duplicate motion data for 2-Field MV block */
- s->current_picture.motion_val[dir][xy + 1][0] = s->current_picture.motion_val[dir][xy][0];
- s->current_picture.motion_val[dir][xy + 1][1] = s->current_picture.motion_val[dir][xy][1];
+ s->cur_pic.motion_val[dir][xy + 1][0] = s->cur_pic.motion_val[dir][xy][0];
+ s->cur_pic.motion_val[dir][xy + 1][1] = s->cur_pic.motion_val[dir][xy][1];
s->mv[dir][n + 1][0] = s->mv[dir][n][0];
s->mv[dir][n + 1][1] = s->mv[dir][n][1];
}
@@ -715,19 +715,19 @@ void ff_vc1_pred_b_mv(VC1Context *v, int dmv_x[2], int dmv_y[2],
xy = s->block_index[0];
if (s->mb_intra) {
- s->current_picture.motion_val[0][xy][0] =
- s->current_picture.motion_val[0][xy][1] =
- s->current_picture.motion_val[1][xy][0] =
- s->current_picture.motion_val[1][xy][1] = 0;
+ s->cur_pic.motion_val[0][xy][0] =
+ s->cur_pic.motion_val[0][xy][1] =
+ s->cur_pic.motion_val[1][xy][0] =
+ s->cur_pic.motion_val[1][xy][1] = 0;
return;
}
- if (direct && s->next_picture_ptr->field_picture)
+ if (direct && s->next_pic_ptr->field_picture)
av_log(s->avctx, AV_LOG_WARNING, "Mixed frame/field direct mode not supported\n");
- s->mv[0][0][0] = scale_mv(s->next_picture.motion_val[1][xy][0], v->bfraction, 0, s->quarter_sample);
- s->mv[0][0][1] = scale_mv(s->next_picture.motion_val[1][xy][1], v->bfraction, 0, s->quarter_sample);
- s->mv[1][0][0] = scale_mv(s->next_picture.motion_val[1][xy][0], v->bfraction, 1, s->quarter_sample);
- s->mv[1][0][1] = scale_mv(s->next_picture.motion_val[1][xy][1], v->bfraction, 1, s->quarter_sample);
+ s->mv[0][0][0] = scale_mv(s->next_pic.motion_val[1][xy][0], v->bfraction, 0, s->quarter_sample);
+ s->mv[0][0][1] = scale_mv(s->next_pic.motion_val[1][xy][1], v->bfraction, 0, s->quarter_sample);
+ s->mv[1][0][0] = scale_mv(s->next_pic.motion_val[1][xy][0], v->bfraction, 1, s->quarter_sample);
+ s->mv[1][0][1] = scale_mv(s->next_pic.motion_val[1][xy][1], v->bfraction, 1, s->quarter_sample);
/* Pullback predicted motion vectors as specified in 8.4.5.4 */
s->mv[0][0][0] = av_clip(s->mv[0][0][0], -60 - (s->mb_x << 6), (s->mb_width << 6) - 4 - (s->mb_x << 6));
@@ -735,18 +735,18 @@ void ff_vc1_pred_b_mv(VC1Context *v, int dmv_x[2], int dmv_y[2],
s->mv[1][0][0] = av_clip(s->mv[1][0][0], -60 - (s->mb_x << 6), (s->mb_width << 6) - 4 - (s->mb_x << 6));
s->mv[1][0][1] = av_clip(s->mv[1][0][1], -60 - (s->mb_y << 6), (s->mb_height << 6) - 4 - (s->mb_y << 6));
if (direct) {
- s->current_picture.motion_val[0][xy][0] = s->mv[0][0][0];
- s->current_picture.motion_val[0][xy][1] = s->mv[0][0][1];
- s->current_picture.motion_val[1][xy][0] = s->mv[1][0][0];
- s->current_picture.motion_val[1][xy][1] = s->mv[1][0][1];
+ s->cur_pic.motion_val[0][xy][0] = s->mv[0][0][0];
+ s->cur_pic.motion_val[0][xy][1] = s->mv[0][0][1];
+ s->cur_pic.motion_val[1][xy][0] = s->mv[1][0][0];
+ s->cur_pic.motion_val[1][xy][1] = s->mv[1][0][1];
return;
}
if ((mvtype == BMV_TYPE_FORWARD) || (mvtype == BMV_TYPE_INTERPOLATED)) {
- C = s->current_picture.motion_val[0][xy - 2];
- A = s->current_picture.motion_val[0][xy - wrap * 2];
+ C = s->cur_pic.motion_val[0][xy - 2];
+ A = s->cur_pic.motion_val[0][xy - wrap * 2];
off = (s->mb_x == (s->mb_width - 1)) ? -2 : 2;
- B = s->current_picture.motion_val[0][xy - wrap * 2 + off];
+ B = s->cur_pic.motion_val[0][xy - wrap * 2 + off];
if (!s->mb_x) C[0] = C[1] = 0;
if (!s->first_slice_line) { // predictor A is not out of bounds
@@ -812,10 +812,10 @@ void ff_vc1_pred_b_mv(VC1Context *v, int dmv_x[2], int dmv_y[2],
s->mv[0][0][1] = ((py + dmv_y[0] + r_y) & ((r_y << 1) - 1)) - r_y;
}
if ((mvtype == BMV_TYPE_BACKWARD) || (mvtype == BMV_TYPE_INTERPOLATED)) {
- C = s->current_picture.motion_val[1][xy - 2];
- A = s->current_picture.motion_val[1][xy - wrap * 2];
+ C = s->cur_pic.motion_val[1][xy - 2];
+ A = s->cur_pic.motion_val[1][xy - wrap * 2];
off = (s->mb_x == (s->mb_width - 1)) ? -2 : 2;
- B = s->current_picture.motion_val[1][xy - wrap * 2 + off];
+ B = s->cur_pic.motion_val[1][xy - wrap * 2 + off];
if (!s->mb_x)
C[0] = C[1] = 0;
@@ -882,10 +882,10 @@ void ff_vc1_pred_b_mv(VC1Context *v, int dmv_x[2], int dmv_y[2],
s->mv[1][0][0] = ((px + dmv_x[1] + r_x) & ((r_x << 1) - 1)) - r_x;
s->mv[1][0][1] = ((py + dmv_y[1] + r_y) & ((r_y << 1) - 1)) - r_y;
}
- s->current_picture.motion_val[0][xy][0] = s->mv[0][0][0];
- s->current_picture.motion_val[0][xy][1] = s->mv[0][0][1];
- s->current_picture.motion_val[1][xy][0] = s->mv[1][0][0];
- s->current_picture.motion_val[1][xy][1] = s->mv[1][0][1];
+ s->cur_pic.motion_val[0][xy][0] = s->mv[0][0][0];
+ s->cur_pic.motion_val[0][xy][1] = s->mv[0][0][1];
+ s->cur_pic.motion_val[1][xy][0] = s->mv[1][0][0];
+ s->cur_pic.motion_val[1][xy][1] = s->mv[1][0][1];
}
void ff_vc1_pred_b_mv_intfi(VC1Context *v, int n, int *dmv_x, int *dmv_y,
@@ -897,14 +897,14 @@ void ff_vc1_pred_b_mv_intfi(VC1Context *v, int n, int *dmv_x, int *dmv_y,
if (v->bmvtype == BMV_TYPE_DIRECT) {
int total_opp, k, f;
- if (s->next_picture.mb_type[mb_pos + v->mb_off] != MB_TYPE_INTRA) {
- s->mv[0][0][0] = scale_mv(s->next_picture.motion_val[1][s->block_index[0] + v->blocks_off][0],
+ if (s->next_pic.mb_type[mb_pos + v->mb_off] != MB_TYPE_INTRA) {
+ s->mv[0][0][0] = scale_mv(s->next_pic.motion_val[1][s->block_index[0] + v->blocks_off][0],
v->bfraction, 0, s->quarter_sample);
- s->mv[0][0][1] = scale_mv(s->next_picture.motion_val[1][s->block_index[0] + v->blocks_off][1],
+ s->mv[0][0][1] = scale_mv(s->next_pic.motion_val[1][s->block_index[0] + v->blocks_off][1],
v->bfraction, 0, s->quarter_sample);
- s->mv[1][0][0] = scale_mv(s->next_picture.motion_val[1][s->block_index[0] + v->blocks_off][0],
+ s->mv[1][0][0] = scale_mv(s->next_pic.motion_val[1][s->block_index[0] + v->blocks_off][0],
v->bfraction, 1, s->quarter_sample);
- s->mv[1][0][1] = scale_mv(s->next_picture.motion_val[1][s->block_index[0] + v->blocks_off][1],
+ s->mv[1][0][1] = scale_mv(s->next_pic.motion_val[1][s->block_index[0] + v->blocks_off][1],
v->bfraction, 1, s->quarter_sample);
total_opp = v->mv_f_next[0][s->block_index[0] + v->blocks_off]
@@ -919,10 +919,10 @@ void ff_vc1_pred_b_mv_intfi(VC1Context *v, int n, int *dmv_x, int *dmv_y,
}
v->ref_field_type[0] = v->ref_field_type[1] = v->cur_field_type ^ f;
for (k = 0; k < 4; k++) {
- s->current_picture.motion_val[0][s->block_index[k] + v->blocks_off][0] = s->mv[0][0][0];
- s->current_picture.motion_val[0][s->block_index[k] + v->blocks_off][1] = s->mv[0][0][1];
- s->current_picture.motion_val[1][s->block_index[k] + v->blocks_off][0] = s->mv[1][0][0];
- s->current_picture.motion_val[1][s->block_index[k] + v->blocks_off][1] = s->mv[1][0][1];
+ s->cur_pic.motion_val[0][s->block_index[k] + v->blocks_off][0] = s->mv[0][0][0];
+ s->cur_pic.motion_val[0][s->block_index[k] + v->blocks_off][1] = s->mv[0][0][1];
+ s->cur_pic.motion_val[1][s->block_index[k] + v->blocks_off][0] = s->mv[1][0][0];
+ s->cur_pic.motion_val[1][s->block_index[k] + v->blocks_off][1] = s->mv[1][0][1];
v->mv_f[0][s->block_index[k] + v->blocks_off] = f;
v->mv_f[1][s->block_index[k] + v->blocks_off] = f;
}
diff --git a/libavcodec/vc1dec.c b/libavcodec/vc1dec.c
index 3b5b016cf9..93398e3fb2 100644
--- a/libavcodec/vc1dec.c
+++ b/libavcodec/vc1dec.c
@@ -235,15 +235,15 @@ static void vc1_draw_sprites(VC1Context *v, SpriteData* sd)
v->sprite_output_frame->linesize[plane] * row;
for (sprite = 0; sprite <= v->two_sprites; sprite++) {
- uint8_t *iplane = s->current_picture.f->data[plane];
- int iline = s->current_picture.f->linesize[plane];
+ uint8_t *iplane = s->cur_pic.f->data[plane];
+ int iline = s->cur_pic.f->linesize[plane];
int ycoord = yoff[sprite] + yadv[sprite] * row;
int yline = ycoord >> 16;
int next_line;
ysub[sprite] = ycoord & 0xFFFF;
if (sprite) {
- iplane = s->last_picture.f->data[plane];
- iline = s->last_picture.f->linesize[plane];
+ iplane = s->last_pic.f->data[plane];
+ iline = s->last_pic.f->linesize[plane];
}
next_line = FFMIN(yline + 1, (v->sprite_height >> !!plane) - 1) * iline;
if (!(xoff[sprite] & 0xFFFF) && xadv[sprite] == 1 << 16) {
@@ -317,12 +317,12 @@ static int vc1_decode_sprites(VC1Context *v, GetBitContext* gb)
if (ret < 0)
return ret;
- if (!s->current_picture.f || !s->current_picture.f->data[0]) {
+ if (!s->cur_pic.f || !s->cur_pic.f->data[0]) {
av_log(avctx, AV_LOG_ERROR, "Got no sprites\n");
return AVERROR_UNKNOWN;
}
- if (v->two_sprites && (!s->last_picture_ptr || !s->last_picture.f->data[0])) {
+ if (v->two_sprites && (!s->last_pic_ptr || !s->last_pic.f->data[0])) {
av_log(avctx, AV_LOG_WARNING, "Need two sprites, only got one\n");
v->two_sprites = 0;
}
@@ -340,7 +340,7 @@ static void vc1_sprite_flush(AVCodecContext *avctx)
{
VC1Context *v = avctx->priv_data;
MpegEncContext *s = &v->s;
- AVFrame *f = s->current_picture.f;
+ AVFrame *f = s->cur_pic.f;
int plane, i;
/* Windows Media Image codecs have a convergence interval of two keyframes.
@@ -837,10 +837,10 @@ static int vc1_decode_frame(AVCodecContext *avctx, AVFrame *pict,
/* no supplementary picture */
if (buf_size == 0 || (buf_size == 4 && AV_RB32(buf) == VC1_CODE_ENDOFSEQ)) {
/* special case for last picture */
- if (s->low_delay == 0 && s->next_picture_ptr) {
- if ((ret = av_frame_ref(pict, s->next_picture_ptr->f)) < 0)
+ if (s->low_delay == 0 && s->next_pic_ptr) {
+ if ((ret = av_frame_ref(pict, s->next_pic_ptr->f)) < 0)
return ret;
- s->next_picture_ptr = NULL;
+ s->next_pic_ptr = NULL;
*got_frame = 1;
}
@@ -1047,7 +1047,7 @@ static int vc1_decode_frame(AVCodecContext *avctx, AVFrame *pict,
}
/* skip B-frames if we don't have reference frames */
- if (!s->last_picture_ptr && s->pict_type == AV_PICTURE_TYPE_B) {
+ if (!s->last_pic_ptr && s->pict_type == AV_PICTURE_TYPE_B) {
av_log(v->s.avctx, AV_LOG_DEBUG, "Skipping B frame without reference frames\n");
goto end;
}
@@ -1061,19 +1061,19 @@ static int vc1_decode_frame(AVCodecContext *avctx, AVFrame *pict,
goto err;
}
- v->s.current_picture_ptr->field_picture = v->field_mode;
- v->s.current_picture_ptr->f->flags |= AV_FRAME_FLAG_INTERLACED * (v->fcm != PROGRESSIVE);
- v->s.current_picture_ptr->f->flags |= AV_FRAME_FLAG_TOP_FIELD_FIRST * !!v->tff;
+ v->s.cur_pic_ptr->field_picture = v->field_mode;
+ v->s.cur_pic_ptr->f->flags |= AV_FRAME_FLAG_INTERLACED * (v->fcm != PROGRESSIVE);
+ v->s.cur_pic_ptr->f->flags |= AV_FRAME_FLAG_TOP_FIELD_FIRST * !!v->tff;
// process pulldown flags
- s->current_picture_ptr->f->repeat_pict = 0;
+ s->cur_pic_ptr->f->repeat_pict = 0;
// Pulldown flags are only valid when 'broadcast' has been set.
if (v->rff) {
// repeat field
- s->current_picture_ptr->f->repeat_pict = 1;
+ s->cur_pic_ptr->f->repeat_pict = 1;
} else if (v->rptfrm) {
// repeat frames
- s->current_picture_ptr->f->repeat_pict = v->rptfrm * 2;
+ s->cur_pic_ptr->f->repeat_pict = v->rptfrm * 2;
}
if (avctx->hwaccel) {
@@ -1135,7 +1135,7 @@ static int vc1_decode_frame(AVCodecContext *avctx, AVFrame *pict,
ret = AVERROR_INVALIDDATA;
goto err;
}
- v->s.current_picture_ptr->f->pict_type = v->s.pict_type;
+ v->s.cur_pic_ptr->f->pict_type = v->s.pict_type;
ret = hwaccel->start_frame(avctx, buf_start_second_field,
(buf + buf_size) - buf_start_second_field);
@@ -1230,9 +1230,9 @@ static int vc1_decode_frame(AVCodecContext *avctx, AVFrame *pict,
v->end_mb_x = s->mb_width;
if (v->field_mode) {
- s->current_picture.f->linesize[0] <<= 1;
- s->current_picture.f->linesize[1] <<= 1;
- s->current_picture.f->linesize[2] <<= 1;
+ s->cur_pic.f->linesize[0] <<= 1;
+ s->cur_pic.f->linesize[1] <<= 1;
+ s->cur_pic.f->linesize[2] <<= 1;
s->linesize <<= 1;
s->uvlinesize <<= 1;
}
@@ -1307,9 +1307,9 @@ static int vc1_decode_frame(AVCodecContext *avctx, AVFrame *pict,
}
if (v->field_mode) {
v->second_field = 0;
- s->current_picture.f->linesize[0] >>= 1;
- s->current_picture.f->linesize[1] >>= 1;
- s->current_picture.f->linesize[2] >>= 1;
+ s->cur_pic.f->linesize[0] >>= 1;
+ s->cur_pic.f->linesize[1] >>= 1;
+ s->cur_pic.f->linesize[2] >>= 1;
s->linesize >>= 1;
s->uvlinesize >>= 1;
if (v->s.pict_type != AV_PICTURE_TYPE_BI && v->s.pict_type != AV_PICTURE_TYPE_B) {
@@ -1353,16 +1353,16 @@ image:
*got_frame = 1;
} else {
if (s->pict_type == AV_PICTURE_TYPE_B || s->low_delay) {
- if ((ret = av_frame_ref(pict, s->current_picture_ptr->f)) < 0)
+ if ((ret = av_frame_ref(pict, s->cur_pic_ptr->f)) < 0)
goto err;
if (!v->field_mode)
- ff_print_debug_info(s, s->current_picture_ptr, pict);
+ ff_print_debug_info(s, s->cur_pic_ptr, pict);
*got_frame = 1;
- } else if (s->last_picture_ptr) {
- if ((ret = av_frame_ref(pict, s->last_picture_ptr->f)) < 0)
+ } else if (s->last_pic_ptr) {
+ if ((ret = av_frame_ref(pict, s->last_pic_ptr->f)) < 0)
goto err;
if (!v->field_mode)
- ff_print_debug_info(s, s->last_picture_ptr, pict);
+ ff_print_debug_info(s, s->last_pic_ptr, pict);
*got_frame = 1;
}
}
diff --git a/libavcodec/vdpau.c b/libavcodec/vdpau.c
index 6df3e88dac..cd7194138d 100644
--- a/libavcodec/vdpau.c
+++ b/libavcodec/vdpau.c
@@ -370,7 +370,7 @@ int ff_vdpau_common_end_frame(AVCodecContext *avctx, AVFrame *frame,
int ff_vdpau_mpeg_end_frame(AVCodecContext *avctx)
{
MpegEncContext *s = avctx->priv_data;
- Picture *pic = s->current_picture_ptr;
+ Picture *pic = s->cur_pic_ptr;
struct vdpau_picture_context *pic_ctx = pic->hwaccel_picture_private;
int val;
diff --git a/libavcodec/vdpau_mpeg12.c b/libavcodec/vdpau_mpeg12.c
index bbf76eb469..1f0ea7e803 100644
--- a/libavcodec/vdpau_mpeg12.c
+++ b/libavcodec/vdpau_mpeg12.c
@@ -35,7 +35,7 @@ static int vdpau_mpeg_start_frame(AVCodecContext *avctx,
const uint8_t *buffer, uint32_t size)
{
MpegEncContext * const s = avctx->priv_data;
- Picture *pic = s->current_picture_ptr;
+ Picture *pic = s->cur_pic_ptr;
struct vdpau_picture_context *pic_ctx = pic->hwaccel_picture_private;
VdpPictureInfoMPEG1Or2 *info = &pic_ctx->info.mpeg;
VdpVideoSurface ref;
@@ -47,12 +47,12 @@ static int vdpau_mpeg_start_frame(AVCodecContext *avctx,
switch (s->pict_type) {
case AV_PICTURE_TYPE_B:
- ref = ff_vdpau_get_surface_id(s->next_picture.f);
+ ref = ff_vdpau_get_surface_id(s->next_pic.f);
assert(ref != VDP_INVALID_HANDLE);
info->backward_reference = ref;
/* fall through to forward prediction */
case AV_PICTURE_TYPE_P:
- ref = ff_vdpau_get_surface_id(s->last_picture.f);
+ ref = ff_vdpau_get_surface_id(s->last_pic.f);
info->forward_reference = ref;
}
@@ -87,7 +87,7 @@ static int vdpau_mpeg_decode_slice(AVCodecContext *avctx,
const uint8_t *buffer, uint32_t size)
{
MpegEncContext * const s = avctx->priv_data;
- Picture *pic = s->current_picture_ptr;
+ Picture *pic = s->cur_pic_ptr;
struct vdpau_picture_context *pic_ctx = pic->hwaccel_picture_private;
int val;
diff --git a/libavcodec/vdpau_mpeg4.c b/libavcodec/vdpau_mpeg4.c
index 055426b95b..ecbc80b86d 100644
--- a/libavcodec/vdpau_mpeg4.c
+++ b/libavcodec/vdpau_mpeg4.c
@@ -34,7 +34,7 @@ static int vdpau_mpeg4_start_frame(AVCodecContext *avctx,
{
Mpeg4DecContext *ctx = avctx->priv_data;
MpegEncContext * const s = &ctx->m;
- Picture *pic = s->current_picture_ptr;
+ Picture *pic = s->cur_pic_ptr;
struct vdpau_picture_context *pic_ctx = pic->hwaccel_picture_private;
VdpPictureInfoMPEG4Part2 *info = &pic_ctx->info.mpeg4;
VdpVideoSurface ref;
@@ -47,13 +47,13 @@ static int vdpau_mpeg4_start_frame(AVCodecContext *avctx,
switch (s->pict_type) {
case AV_PICTURE_TYPE_B:
- ref = ff_vdpau_get_surface_id(s->next_picture.f);
+ ref = ff_vdpau_get_surface_id(s->next_pic.f);
assert(ref != VDP_INVALID_HANDLE);
info->backward_reference = ref;
info->vop_coding_type = 2;
/* fall-through */
case AV_PICTURE_TYPE_P:
- ref = ff_vdpau_get_surface_id(s->last_picture.f);
+ ref = ff_vdpau_get_surface_id(s->last_pic.f);
assert(ref != VDP_INVALID_HANDLE);
info->forward_reference = ref;
}
diff --git a/libavcodec/vdpau_vc1.c b/libavcodec/vdpau_vc1.c
index 0eacc4477d..119e514c0e 100644
--- a/libavcodec/vdpau_vc1.c
+++ b/libavcodec/vdpau_vc1.c
@@ -36,7 +36,7 @@ static int vdpau_vc1_start_frame(AVCodecContext *avctx,
{
VC1Context * const v = avctx->priv_data;
MpegEncContext * const s = &v->s;
- Picture *pic = s->current_picture_ptr;
+ Picture *pic = s->cur_pic_ptr;
struct vdpau_picture_context *pic_ctx = pic->hwaccel_picture_private;
VdpPictureInfoVC1 *info = &pic_ctx->info.vc1;
VdpVideoSurface ref;
@@ -47,15 +47,15 @@ static int vdpau_vc1_start_frame(AVCodecContext *avctx,
switch (s->pict_type) {
case AV_PICTURE_TYPE_B:
- if (s->next_picture_ptr) {
- ref = ff_vdpau_get_surface_id(s->next_picture.f);
+ if (s->next_pic_ptr) {
+ ref = ff_vdpau_get_surface_id(s->next_pic.f);
assert(ref != VDP_INVALID_HANDLE);
info->backward_reference = ref;
}
/* fall-through */
case AV_PICTURE_TYPE_P:
- if (s->last_picture_ptr) {
- ref = ff_vdpau_get_surface_id(s->last_picture.f);
+ if (s->last_pic_ptr) {
+ ref = ff_vdpau_get_surface_id(s->last_pic.f);
assert(ref != VDP_INVALID_HANDLE);
info->forward_reference = ref;
}
@@ -104,7 +104,7 @@ static int vdpau_vc1_decode_slice(AVCodecContext *avctx,
{
VC1Context * const v = avctx->priv_data;
MpegEncContext * const s = &v->s;
- Picture *pic = s->current_picture_ptr;
+ Picture *pic = s->cur_pic_ptr;
struct vdpau_picture_context *pic_ctx = pic->hwaccel_picture_private;
int val;
diff --git a/libavcodec/videotoolbox.c b/libavcodec/videotoolbox.c
index d6990a39c0..7807047aa6 100644
--- a/libavcodec/videotoolbox.c
+++ b/libavcodec/videotoolbox.c
@@ -1108,7 +1108,7 @@ static int videotoolbox_mpeg_decode_slice(AVCodecContext *avctx,
static int videotoolbox_mpeg_end_frame(AVCodecContext *avctx)
{
MpegEncContext *s = avctx->priv_data;
- AVFrame *frame = s->current_picture_ptr->f;
+ AVFrame *frame = s->cur_pic_ptr->f;
return ff_videotoolbox_common_end_frame(avctx, frame);
}
diff --git a/libavcodec/wmv2dec.c b/libavcodec/wmv2dec.c
index ff27d1b4d0..61e1759449 100644
--- a/libavcodec/wmv2dec.c
+++ b/libavcodec/wmv2dec.c
@@ -103,7 +103,7 @@ static int parse_mb_skip(WMV2DecContext *w)
int mb_x, mb_y;
int coded_mb_count = 0;
MpegEncContext *const s = &w->s;
- uint32_t *const mb_type = s->current_picture_ptr->mb_type;
+ uint32_t *const mb_type = s->cur_pic_ptr->mb_type;
w->skip_type = get_bits(&s->gb, 2);
switch (w->skip_type) {
@@ -238,8 +238,8 @@ int ff_wmv2_decode_secondary_picture_header(MpegEncContext *s)
if (s->pict_type == AV_PICTURE_TYPE_I) {
/* Is filling with zeroes really the right thing to do? */
- memset(s->current_picture_ptr->mb_type, 0,
- sizeof(*s->current_picture_ptr->mb_type) *
+ memset(s->cur_pic_ptr->mb_type, 0,
+ sizeof(*s->cur_pic_ptr->mb_type) *
s->mb_height * s->mb_stride);
if (w->j_type_bit)
w->j_type = get_bits1(&s->gb);
@@ -331,7 +331,7 @@ int ff_wmv2_decode_secondary_picture_header(MpegEncContext *s)
s->esc3_run_length = 0;
if (w->j_type) {
- ff_intrax8_decode_picture(&w->x8, &s->current_picture,
+ ff_intrax8_decode_picture(&w->x8, &s->cur_pic,
&s->gb, &s->mb_x, &s->mb_y,
2 * s->qscale, (s->qscale - 1) | 1,
s->loop_filter, s->low_delay);
@@ -366,11 +366,11 @@ static int16_t *wmv2_pred_motion(WMV2DecContext *w, int *px, int *py)
wrap = s->b8_stride;
xy = s->block_index[0];
- mot_val = s->current_picture.motion_val[0][xy];
+ mot_val = s->cur_pic.motion_val[0][xy];
- A = s->current_picture.motion_val[0][xy - 1];
- B = s->current_picture.motion_val[0][xy - wrap];
- C = s->current_picture.motion_val[0][xy + 2 - wrap];
+ A = s->cur_pic.motion_val[0][xy - 1];
+ B = s->cur_pic.motion_val[0][xy - wrap];
+ C = s->cur_pic.motion_val[0][xy + 2 - wrap];
if (s->mb_x && !s->first_slice_line && !s->mspel && w->top_left_mv_flag)
diff = FFMAX(FFABS(A[0] - B[0]), FFABS(A[1] - B[1]));
@@ -452,7 +452,7 @@ static int wmv2_decode_mb(MpegEncContext *s, int16_t block[6][64])
return 0;
if (s->pict_type == AV_PICTURE_TYPE_P) {
- if (IS_SKIP(s->current_picture.mb_type[s->mb_y * s->mb_stride + s->mb_x])) {
+ if (IS_SKIP(s->cur_pic.mb_type[s->mb_y * s->mb_stride + s->mb_x])) {
/* skip mb */
s->mb_intra = 0;
for (i = 0; i < 6; i++)
--
2.40.1
* [FFmpeg-devel] [PATCH v2 32/71] avcodec/mpegpicture: Reduce value of MAX_PLANES define
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (29 preceding siblings ...)
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 31/71] avcodec/mpegvideo: Shorten variable names Andreas Rheinhardt
@ 2024-05-11 20:50 ` Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 33/71] avcodec/mpegpicture: Cache AVFrame.data and linesize values Andreas Rheinhardt
` (39 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:50 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
No mpegvideo-based codec supports alpha.
While at it, also make the define shorter.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpegpicture.h | 2 +-
libavcodec/mpegvideo.h | 2 +-
libavcodec/mpegvideo_enc.c | 7 +++----
3 files changed, 5 insertions(+), 6 deletions(-)
diff --git a/libavcodec/mpegpicture.h b/libavcodec/mpegpicture.h
index 363732910a..8e3c119acc 100644
--- a/libavcodec/mpegpicture.h
+++ b/libavcodec/mpegpicture.h
@@ -27,7 +27,7 @@
#include "motion_est.h"
#include "threadframe.h"
-#define MPEGVIDEO_MAX_PLANES 4
+#define MPV_MAX_PLANES 3
#define MAX_PICTURE_COUNT 36
#define EDGE_WIDTH 16
diff --git a/libavcodec/mpegvideo.h b/libavcodec/mpegvideo.h
index e2953a3198..62550027a7 100644
--- a/libavcodec/mpegvideo.h
+++ b/libavcodec/mpegvideo.h
@@ -256,7 +256,7 @@ typedef struct MpegEncContext {
uint8_t *mb_mean; ///< Table for MB luminance
int64_t mb_var_sum; ///< sum of MB variance for current frame
int64_t mc_mb_var_sum; ///< motion compensated MB variance for current frame
- uint64_t encoding_error[MPEGVIDEO_MAX_PLANES];
+ uint64_t encoding_error[MPV_MAX_PLANES];
int motion_est; ///< ME algorithm
int me_penalty_compensation;
diff --git a/libavcodec/mpegvideo_enc.c b/libavcodec/mpegvideo_enc.c
index d7e1085cf8..e7459cc5bf 100644
--- a/libavcodec/mpegvideo_enc.c
+++ b/libavcodec/mpegvideo_enc.c
@@ -1634,7 +1634,7 @@ no_output_pic:
} else {
// input is not a shared pix -> reuse buffer for current_pix
s->cur_pic_ptr = s->reordered_input_picture[0];
- for (i = 0; i < 4; i++) {
+ for (int i = 0; i < MPV_MAX_PLANES; i++) {
if (s->new_pic->data[i])
s->new_pic->data[i] += INPLACE_OFFSET;
}
@@ -1861,12 +1861,11 @@ vbv_retry:
if (avctx->flags & AV_CODEC_FLAG_PASS1)
ff_write_pass1_stats(s);
- for (i = 0; i < 4; i++) {
+ for (int i = 0; i < MPV_MAX_PLANES; i++)
avctx->error[i] += s->encoding_error[i];
- }
ff_side_data_set_encoder_stats(pkt, s->cur_pic.f->quality,
s->encoding_error,
- (avctx->flags&AV_CODEC_FLAG_PSNR) ? MPEGVIDEO_MAX_PLANES : 0,
+ (avctx->flags&AV_CODEC_FLAG_PSNR) ? MPV_MAX_PLANES : 0,
s->pict_type);
if (avctx->flags & AV_CODEC_FLAG_PASS1)
--
2.40.1
* [FFmpeg-devel] [PATCH v2 33/71] avcodec/mpegpicture: Cache AVFrame.data and linesize values
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (30 preceding siblings ...)
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 32/71] avcodec/mpegpicture: Reduce value of MAX_PLANES define Andreas Rheinhardt
@ 2024-05-11 20:50 ` Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 34/71] avcodec/rv30, rv34, rv40: Avoid indirection Andreas Rheinhardt
` (38 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:50 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
This avoids an indirection and is in preparation for removing
the AVFrame from MpegEncContext.(cur|last|next)_pic altogether.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/motion_est.c | 16 +++----
libavcodec/mpeg12dec.c | 14 +++---
libavcodec/mpeg_er.c | 6 +--
libavcodec/mpegpicture.c | 14 ++++++
libavcodec/mpegpicture.h | 4 ++
libavcodec/mpegvideo.c | 10 ++--
libavcodec/mpegvideo_dec.c | 4 +-
libavcodec/mpegvideo_enc.c | 16 +++----
libavcodec/mpegvideo_motion.c | 4 +-
libavcodec/mpv_reconstruct_mb_template.c | 12 ++---
libavcodec/msmpeg4.c | 4 +-
libavcodec/mss2.c | 4 +-
libavcodec/svq1enc.c | 6 +--
libavcodec/vc1_block.c | 8 ++--
libavcodec/vc1_mc.c | 60 ++++++++++++------------
libavcodec/vc1dec.c | 28 +++++------
16 files changed, 114 insertions(+), 96 deletions(-)
diff --git a/libavcodec/motion_est.c b/libavcodec/motion_est.c
index b2644b5328..fcef47a623 100644
--- a/libavcodec/motion_est.c
+++ b/libavcodec/motion_est.c
@@ -703,11 +703,11 @@ static inline int h263_mv4_search(MpegEncContext *s, int mx, int my, int shift)
offset= (s->mb_x*8 + (mx>>1)) + (s->mb_y*8 + (my>>1))*s->uvlinesize;
if(s->no_rounding){
- s->hdsp.put_no_rnd_pixels_tab[1][dxy](c->scratchpad , s->last_pic.f->data[1] + offset, s->uvlinesize, 8);
- s->hdsp.put_no_rnd_pixels_tab[1][dxy](c->scratchpad + 8, s->last_pic.f->data[2] + offset, s->uvlinesize, 8);
+ s->hdsp.put_no_rnd_pixels_tab[1][dxy](c->scratchpad , s->last_pic.data[1] + offset, s->uvlinesize, 8);
+ s->hdsp.put_no_rnd_pixels_tab[1][dxy](c->scratchpad + 8, s->last_pic.data[2] + offset, s->uvlinesize, 8);
}else{
- s->hdsp.put_pixels_tab [1][dxy](c->scratchpad , s->last_pic.f->data[1] + offset, s->uvlinesize, 8);
- s->hdsp.put_pixels_tab [1][dxy](c->scratchpad + 8, s->last_pic.f->data[2] + offset, s->uvlinesize, 8);
+ s->hdsp.put_pixels_tab [1][dxy](c->scratchpad , s->last_pic.data[1] + offset, s->uvlinesize, 8);
+ s->hdsp.put_pixels_tab [1][dxy](c->scratchpad + 8, s->last_pic.data[2] + offset, s->uvlinesize, 8);
}
dmin_sum += s->mecc.mb_cmp[1](s, s->new_pic->data[1] + s->mb_x * 8 + s->mb_y * 8 * s->uvlinesize, c->scratchpad, s->uvlinesize, 8);
@@ -899,7 +899,7 @@ void ff_estimate_p_frame_motion(MpegEncContext * s,
const int shift= 1+s->quarter_sample;
int mb_type=0;
- init_ref(c, s->new_pic->data, s->last_pic.f->data, NULL, 16*mb_x, 16*mb_y, 0);
+ init_ref(c, s->new_pic->data, s->last_pic.data, NULL, 16*mb_x, 16*mb_y, 0);
av_assert0(s->quarter_sample==0 || s->quarter_sample==1);
av_assert0(s->linesize == c->stride);
@@ -1070,7 +1070,7 @@ int ff_pre_estimate_p_frame_motion(MpegEncContext * s,
int P[10][2];
const int shift= 1+s->quarter_sample;
const int xy= mb_x + mb_y*s->mb_stride;
- init_ref(c, s->new_pic->data, s->last_pic.f->data, NULL, 16*mb_x, 16*mb_y, 0);
+ init_ref(c, s->new_pic->data, s->last_pic.data, NULL, 16*mb_x, 16*mb_y, 0);
av_assert0(s->quarter_sample==0 || s->quarter_sample==1);
@@ -1495,8 +1495,8 @@ void ff_estimate_b_frame_motion(MpegEncContext * s,
int fmin, bmin, dmin, fbmin, bimin, fimin;
int type=0;
const int xy = mb_y*s->mb_stride + mb_x;
- init_ref(c, s->new_pic->data, s->last_pic.f->data,
- s->next_pic.f->data, 16 * mb_x, 16 * mb_y, 2);
+ init_ref(c, s->new_pic->data, s->last_pic.data,
+ s->next_pic.data, 16 * mb_x, 16 * mb_y, 2);
get_limits(s, 16*mb_x, 16*mb_y);
diff --git a/libavcodec/mpeg12dec.c b/libavcodec/mpeg12dec.c
index 4aba5651a6..c04d351e0c 100644
--- a/libavcodec/mpeg12dec.c
+++ b/libavcodec/mpeg12dec.c
@@ -1297,12 +1297,12 @@ static int mpeg_field_start(MpegEncContext *s, const uint8_t *buf, int buf_size)
for (int i = 0; i < 3; i++) {
if (s->picture_structure == PICT_BOTTOM_FIELD) {
- s->cur_pic.f->data[i] = FF_PTR_ADD(s->cur_pic.f->data[i],
- s->cur_pic.f->linesize[i]);
+ s->cur_pic.data[i] = FF_PTR_ADD(s->cur_pic.data[i],
+ s->cur_pic.linesize[i]);
}
- s->cur_pic.f->linesize[i] *= 2;
- s->last_pic.f->linesize[i] *= 2;
- s->next_pic.f->linesize[i] *= 2;
+ s->cur_pic.linesize[i] *= 2;
+ s->last_pic.linesize[i] *= 2;
+ s->next_pic.linesize[i] *= 2;
}
}
@@ -1377,9 +1377,9 @@ static int mpeg_field_start(MpegEncContext *s, const uint8_t *buf, int buf_size)
return ret;
for (int i = 0; i < 3; i++) {
- s->cur_pic.f->data[i] = s->cur_pic_ptr->f->data[i];
+ s->cur_pic.data[i] = s->cur_pic_ptr->f->data[i];
if (s->picture_structure == PICT_BOTTOM_FIELD)
- s->cur_pic.f->data[i] +=
+ s->cur_pic.data[i] +=
s->cur_pic_ptr->f->linesize[i];
}
}
diff --git a/libavcodec/mpeg_er.c b/libavcodec/mpeg_er.c
index bc838b05ba..8d8b2aea92 100644
--- a/libavcodec/mpeg_er.c
+++ b/libavcodec/mpeg_er.c
@@ -84,13 +84,13 @@ static void mpeg_er_decode_mb(void *opaque, int ref, int mv_dir, int mv_type,
if (!s->chroma_y_shift)
s->bdsp.clear_blocks(s->block[6]);
- s->dest[0] = s->cur_pic.f->data[0] +
+ s->dest[0] = s->cur_pic.data[0] +
s->mb_y * 16 * s->linesize +
s->mb_x * 16;
- s->dest[1] = s->cur_pic.f->data[1] +
+ s->dest[1] = s->cur_pic.data[1] +
s->mb_y * (16 >> s->chroma_y_shift) * s->uvlinesize +
s->mb_x * (16 >> s->chroma_x_shift);
- s->dest[2] = s->cur_pic.f->data[2] +
+ s->dest[2] = s->cur_pic.data[2] +
s->mb_y * (16 >> s->chroma_y_shift) * s->uvlinesize +
s->mb_x * (16 >> s->chroma_x_shift);
diff --git a/libavcodec/mpegpicture.c b/libavcodec/mpegpicture.c
index ca265da9fc..6da9545b50 100644
--- a/libavcodec/mpegpicture.c
+++ b/libavcodec/mpegpicture.c
@@ -174,6 +174,11 @@ int ff_alloc_picture(AVCodecContext *avctx, Picture *pic, MotionEstContext *me,
*linesize = pic->f->linesize[0];
*uvlinesize = pic->f->linesize[1];
+ for (int i = 0; i < MPV_MAX_PLANES; i++) {
+ pic->data[i] = pic->f->data[i];
+ pic->linesize[i] = pic->f->linesize[i];
+ }
+
ret = alloc_picture_tables(pools, pic, mb_height);
if (ret < 0)
goto fail;
@@ -206,7 +211,11 @@ void ff_mpeg_unref_picture(Picture *pic)
free_picture_tables(pic);
+ memset(pic->data, 0, sizeof(pic->data));
+ memset(pic->linesize, 0, sizeof(pic->linesize));
+
pic->dummy = 0;
+
pic->field_picture = 0;
pic->b_frame_score = 0;
pic->reference = 0;
@@ -248,6 +257,11 @@ int ff_mpeg_ref_picture(Picture *dst, Picture *src)
if (ret < 0)
goto fail;
+ for (int i = 0; i < MPV_MAX_PLANES; i++) {
+ dst->data[i] = src->data[i];
+ dst->linesize[i] = src->linesize[i];
+ }
+
update_picture_tables(dst, src);
ff_refstruct_replace(&dst->hwaccel_picture_private,
diff --git a/libavcodec/mpegpicture.h b/libavcodec/mpegpicture.h
index 8e3c119acc..814f71213e 100644
--- a/libavcodec/mpegpicture.h
+++ b/libavcodec/mpegpicture.h
@@ -21,6 +21,7 @@
#ifndef AVCODEC_MPEGPICTURE_H
#define AVCODEC_MPEGPICTURE_H
+#include <stddef.h>
#include <stdint.h>
#include "avcodec.h"
@@ -57,6 +58,9 @@ typedef struct Picture {
struct AVFrame *f;
ThreadFrame tf;
+ uint8_t *data[MPV_MAX_PLANES];
+ ptrdiff_t linesize[MPV_MAX_PLANES];
+
int8_t *qscale_table_base;
int8_t *qscale_table;
diff --git a/libavcodec/mpegvideo.c b/libavcodec/mpegvideo.c
index c8a1d6487a..c24b7207b1 100644
--- a/libavcodec/mpegvideo.c
+++ b/libavcodec/mpegvideo.c
@@ -881,8 +881,8 @@ void ff_clean_intra_table_entries(MpegEncContext *s)
}
void ff_init_block_index(MpegEncContext *s){ //FIXME maybe rename
- const int linesize = s->cur_pic.f->linesize[0]; //not s->linesize as this would be wrong for field pics
- const int uvlinesize = s->cur_pic.f->linesize[1];
+ const int linesize = s->cur_pic.linesize[0]; //not s->linesize as this would be wrong for field pics
+ const int uvlinesize = s->cur_pic.linesize[1];
const int width_of_mb = (4 + (s->avctx->bits_per_raw_sample > 8)) - s->avctx->lowres;
const int height_of_mb = 4 - s->avctx->lowres;
@@ -894,9 +894,9 @@ void ff_init_block_index(MpegEncContext *s){ //FIXME maybe rename
s->block_index[5]= s->mb_stride*(s->mb_y + s->mb_height + 2) + s->b8_stride*s->mb_height*2 + s->mb_x - 1;
//block_index is not used by mpeg2, so it is not affected by chroma_format
- s->dest[0] = s->cur_pic.f->data[0] + (int)((s->mb_x - 1U) << width_of_mb);
- s->dest[1] = s->cur_pic.f->data[1] + (int)((s->mb_x - 1U) << (width_of_mb - s->chroma_x_shift));
- s->dest[2] = s->cur_pic.f->data[2] + (int)((s->mb_x - 1U) << (width_of_mb - s->chroma_x_shift));
+ s->dest[0] = s->cur_pic.data[0] + (int)((s->mb_x - 1U) << width_of_mb);
+ s->dest[1] = s->cur_pic.data[1] + (int)((s->mb_x - 1U) << (width_of_mb - s->chroma_x_shift));
+ s->dest[2] = s->cur_pic.data[2] + (int)((s->mb_x - 1U) << (width_of_mb - s->chroma_x_shift));
if (s->picture_structure == PICT_FRAME) {
s->dest[0] += s->mb_y * linesize << height_of_mb;
diff --git a/libavcodec/mpegvideo_dec.c b/libavcodec/mpegvideo_dec.c
index 9b04d6a351..570a422b6f 100644
--- a/libavcodec/mpegvideo_dec.c
+++ b/libavcodec/mpegvideo_dec.c
@@ -613,8 +613,8 @@ static av_always_inline void mpeg_motion_lowres(MpegEncContext *s,
const int h_edge_pos = s->h_edge_pos >> lowres;
const int v_edge_pos = s->v_edge_pos >> lowres;
int hc = s->chroma_y_shift ? (h+1-bottom_field)>>1 : h;
- linesize = s->cur_pic.f->linesize[0] << field_based;
- uvlinesize = s->cur_pic.f->linesize[1] << field_based;
+ linesize = s->cur_pic.linesize[0] << field_based;
+ uvlinesize = s->cur_pic.linesize[1] << field_based;
// FIXME obviously not perfect but qpel will not work in lowres anyway
if (s->quarter_sample) {
diff --git a/libavcodec/mpegvideo_enc.c b/libavcodec/mpegvideo_enc.c
index e7459cc5bf..2f6aaad1c7 100644
--- a/libavcodec/mpegvideo_enc.c
+++ b/libavcodec/mpegvideo_enc.c
@@ -1655,20 +1655,20 @@ static void frame_end(MpegEncContext *s)
!s->intra_only) {
int hshift = s->chroma_x_shift;
int vshift = s->chroma_y_shift;
- s->mpvencdsp.draw_edges(s->cur_pic.f->data[0],
- s->cur_pic.f->linesize[0],
+ s->mpvencdsp.draw_edges(s->cur_pic.data[0],
+ s->cur_pic.linesize[0],
s->h_edge_pos, s->v_edge_pos,
EDGE_WIDTH, EDGE_WIDTH,
EDGE_TOP | EDGE_BOTTOM);
- s->mpvencdsp.draw_edges(s->cur_pic.f->data[1],
- s->cur_pic.f->linesize[1],
+ s->mpvencdsp.draw_edges(s->cur_pic.data[1],
+ s->cur_pic.linesize[1],
s->h_edge_pos >> hshift,
s->v_edge_pos >> vshift,
EDGE_WIDTH >> hshift,
EDGE_WIDTH >> vshift,
EDGE_TOP | EDGE_BOTTOM);
- s->mpvencdsp.draw_edges(s->cur_pic.f->data[2],
- s->cur_pic.f->linesize[2],
+ s->mpvencdsp.draw_edges(s->cur_pic.data[2],
+ s->cur_pic.linesize[2],
s->h_edge_pos >> hshift,
s->v_edge_pos >> vshift,
EDGE_WIDTH >> hshift,
@@ -2268,14 +2268,14 @@ static av_always_inline void encode_mb_internal(MpegEncContext *s,
if (s->mv_dir & MV_DIR_FORWARD) {
ff_mpv_motion(s, dest_y, dest_cb, dest_cr, 0,
- s->last_pic.f->data,
+ s->last_pic.data,
op_pix, op_qpix);
op_pix = s->hdsp.avg_pixels_tab;
op_qpix = s->qdsp.avg_qpel_pixels_tab;
}
if (s->mv_dir & MV_DIR_BACKWARD) {
ff_mpv_motion(s, dest_y, dest_cb, dest_cr, 1,
- s->next_pic.f->data,
+ s->next_pic.data,
op_pix, op_qpix);
}
diff --git a/libavcodec/mpegvideo_motion.c b/libavcodec/mpegvideo_motion.c
index 3824832f9d..9c1872aa1b 100644
--- a/libavcodec/mpegvideo_motion.c
+++ b/libavcodec/mpegvideo_motion.c
@@ -93,8 +93,8 @@ void mpeg_motion_internal(MpegEncContext *s,
ptrdiff_t uvlinesize, linesize;
v_edge_pos = s->v_edge_pos >> field_based;
- linesize = s->cur_pic.f->linesize[0] << field_based;
- uvlinesize = s->cur_pic.f->linesize[1] << field_based;
+ linesize = s->cur_pic.linesize[0] << field_based;
+ uvlinesize = s->cur_pic.linesize[1] << field_based;
block_y_half = (field_based | is_16x8);
dxy = ((motion_y & 1) << 1) | (motion_x & 1);
diff --git a/libavcodec/mpv_reconstruct_mb_template.c b/libavcodec/mpv_reconstruct_mb_template.c
index febada041a..70dab76f73 100644
--- a/libavcodec/mpv_reconstruct_mb_template.c
+++ b/libavcodec/mpv_reconstruct_mb_template.c
@@ -82,8 +82,8 @@ void mpv_reconstruct_mb_internal(MpegEncContext *s, int16_t block[12][64],
{
uint8_t *dest_y, *dest_cb, *dest_cr;
int dct_linesize, dct_offset;
- const int linesize = s->cur_pic.f->linesize[0]; //not s->linesize as this would be wrong for field pics
- const int uvlinesize = s->cur_pic.f->linesize[1];
+ const int linesize = s->cur_pic.linesize[0]; //not s->linesize as this would be wrong for field pics
+ const int uvlinesize = s->cur_pic.linesize[1];
const int readable = IS_ENCODER || lowres_flag || s->pict_type != AV_PICTURE_TYPE_B;
const int block_size = lowres_flag ? 8 >> s->avctx->lowres : 8;
@@ -137,11 +137,11 @@ void mpv_reconstruct_mb_internal(MpegEncContext *s, int16_t block[12][64],
const h264_chroma_mc_func *op_pix = s->h264chroma.put_h264_chroma_pixels_tab;
if (s->mv_dir & MV_DIR_FORWARD) {
- MPV_motion_lowres(s, dest_y, dest_cb, dest_cr, 0, s->last_pic.f->data, op_pix);
+ MPV_motion_lowres(s, dest_y, dest_cb, dest_cr, 0, s->last_pic.data, op_pix);
op_pix = s->h264chroma.avg_h264_chroma_pixels_tab;
}
if (s->mv_dir & MV_DIR_BACKWARD) {
- MPV_motion_lowres(s, dest_y, dest_cb, dest_cr, 1, s->next_pic.f->data, op_pix);
+ MPV_motion_lowres(s, dest_y, dest_cb, dest_cr, 1, s->next_pic.data, op_pix);
}
} else {
op_pixels_func (*op_pix)[4];
@@ -155,12 +155,12 @@ void mpv_reconstruct_mb_internal(MpegEncContext *s, int16_t block[12][64],
op_qpix = s->qdsp.put_no_rnd_qpel_pixels_tab;
}
if (s->mv_dir & MV_DIR_FORWARD) {
- ff_mpv_motion(s, dest_y, dest_cb, dest_cr, 0, s->last_pic.f->data, op_pix, op_qpix);
+ ff_mpv_motion(s, dest_y, dest_cb, dest_cr, 0, s->last_pic.data, op_pix, op_qpix);
op_pix = s->hdsp.avg_pixels_tab;
op_qpix = s->qdsp.avg_qpel_pixels_tab;
}
if (s->mv_dir & MV_DIR_BACKWARD) {
- ff_mpv_motion(s, dest_y, dest_cb, dest_cr, 1, s->next_pic.f->data, op_pix, op_qpix);
+ ff_mpv_motion(s, dest_y, dest_cb, dest_cr, 1, s->next_pic.data, op_pix, op_qpix);
}
}
diff --git a/libavcodec/msmpeg4.c b/libavcodec/msmpeg4.c
index 323f083f8f..f7ebb8ba89 100644
--- a/libavcodec/msmpeg4.c
+++ b/libavcodec/msmpeg4.c
@@ -282,10 +282,10 @@ int ff_msmpeg4_pred_dc(MpegEncContext *s, int n,
int bs = 8 >> s->avctx->lowres;
if(n<4){
wrap= s->linesize;
- dest = s->cur_pic.f->data[0] + (((n >> 1) + 2*s->mb_y) * bs* wrap ) + ((n & 1) + 2*s->mb_x) * bs;
+ dest = s->cur_pic.data[0] + (((n >> 1) + 2*s->mb_y) * bs* wrap ) + ((n & 1) + 2*s->mb_x) * bs;
}else{
wrap= s->uvlinesize;
- dest = s->cur_pic.f->data[n - 3] + (s->mb_y * bs * wrap) + s->mb_x * bs;
+ dest = s->cur_pic.data[n - 3] + (s->mb_y * bs * wrap) + s->mb_x * bs;
}
if(s->mb_x==0) a= (1024 + (scale>>1))/scale;
else a= get_dc(dest-bs, wrap, scale*8>>(2*s->avctx->lowres), bs);
diff --git a/libavcodec/mss2.c b/libavcodec/mss2.c
index 6a4b5aeb59..5d52744529 100644
--- a/libavcodec/mss2.c
+++ b/libavcodec/mss2.c
@@ -382,7 +382,7 @@ static int decode_wmv9(AVCodecContext *avctx, const uint8_t *buf, int buf_size,
MSS12Context *c = &ctx->c;
VC1Context *v = avctx->priv_data;
MpegEncContext *s = &v->s;
- AVFrame *f;
+ Picture *f;
int ret;
ff_mpeg_flush(avctx);
@@ -431,7 +431,7 @@ static int decode_wmv9(AVCodecContext *avctx, const uint8_t *buf, int buf_size,
ff_mpv_frame_end(s);
- f = s->cur_pic.f;
+ f = &s->cur_pic;
if (v->respic == 3) {
ctx->dsp.upsample_plane(f->data[0], f->linesize[0], w, h);
diff --git a/libavcodec/svq1enc.c b/libavcodec/svq1enc.c
index 52140494bb..c75ab1800a 100644
--- a/libavcodec/svq1enc.c
+++ b/libavcodec/svq1enc.c
@@ -328,11 +328,11 @@ static int svq1_encode_plane(SVQ1EncContext *s, int plane,
s->m.avctx = s->avctx;
s->m.cur_pic_ptr = &s->m.cur_pic;
s->m.last_pic_ptr = &s->m.last_pic;
- s->m.last_pic.f->data[0] = ref_plane;
+ s->m.last_pic.data[0] = ref_plane;
s->m.linesize =
- s->m.last_pic.f->linesize[0] =
+ s->m.last_pic.linesize[0] =
s->m.new_pic->linesize[0] =
- s->m.cur_pic.f->linesize[0] = stride;
+ s->m.cur_pic.linesize[0] = stride;
s->m.width = width;
s->m.height = height;
s->m.mb_width = block_width;
diff --git a/libavcodec/vc1_block.c b/libavcodec/vc1_block.c
index 6b5b1d0566..9cb9fd27bf 100644
--- a/libavcodec/vc1_block.c
+++ b/libavcodec/vc1_block.c
@@ -2948,7 +2948,7 @@ static void vc1_decode_skip_blocks(VC1Context *v)
{
MpegEncContext *s = &v->s;
- if (!v->s.last_pic.f->data[0])
+ if (!v->s.last_pic.data[0])
return;
ff_er_add_slice(&s->er, 0, s->start_mb_y, s->mb_width - 1, s->end_mb_y - 1, ER_MB_END);
@@ -2957,9 +2957,9 @@ static void vc1_decode_skip_blocks(VC1Context *v)
s->mb_x = 0;
init_block_index(v);
update_block_index(s);
- memcpy(s->dest[0], s->last_pic.f->data[0] + s->mb_y * 16 * s->linesize, s->linesize * 16);
- memcpy(s->dest[1], s->last_pic.f->data[1] + s->mb_y * 8 * s->uvlinesize, s->uvlinesize * 8);
- memcpy(s->dest[2], s->last_pic.f->data[2] + s->mb_y * 8 * s->uvlinesize, s->uvlinesize * 8);
+ memcpy(s->dest[0], s->last_pic.data[0] + s->mb_y * 16 * s->linesize, s->linesize * 16);
+ memcpy(s->dest[1], s->last_pic.data[1] + s->mb_y * 8 * s->uvlinesize, s->uvlinesize * 8);
+ memcpy(s->dest[2], s->last_pic.data[2] + s->mb_y * 8 * s->uvlinesize, s->uvlinesize * 8);
s->first_slice_line = 0;
}
}
diff --git a/libavcodec/vc1_mc.c b/libavcodec/vc1_mc.c
index e24328569d..b60a48b38f 100644
--- a/libavcodec/vc1_mc.c
+++ b/libavcodec/vc1_mc.c
@@ -184,7 +184,7 @@ void ff_vc1_mc_1mv(VC1Context *v, int dir)
if ((!v->field_mode ||
(v->ref_field_type[dir] == 1 && v->cur_field_type == 1)) &&
- !v->s.last_pic.f->data[0])
+ !v->s.last_pic.data[0])
return;
linesize = s->cur_pic_ptr->f->linesize[0];
@@ -219,26 +219,26 @@ void ff_vc1_mc_1mv(VC1Context *v, int dir)
}
if (!dir) {
if (v->field_mode && (v->cur_field_type != v->ref_field_type[dir]) && v->second_field) {
- srcY = s->cur_pic.f->data[0];
- srcU = s->cur_pic.f->data[1];
- srcV = s->cur_pic.f->data[2];
+ srcY = s->cur_pic.data[0];
+ srcU = s->cur_pic.data[1];
+ srcV = s->cur_pic.data[2];
luty = v->curr_luty;
lutuv = v->curr_lutuv;
use_ic = *v->curr_use_ic;
interlace = 1;
} else {
- srcY = s->last_pic.f->data[0];
- srcU = s->last_pic.f->data[1];
- srcV = s->last_pic.f->data[2];
+ srcY = s->last_pic.data[0];
+ srcU = s->last_pic.data[1];
+ srcV = s->last_pic.data[2];
luty = v->last_luty;
lutuv = v->last_lutuv;
use_ic = v->last_use_ic;
interlace = !!(s->last_pic.f->flags & AV_FRAME_FLAG_INTERLACED);
}
} else {
- srcY = s->next_pic.f->data[0];
- srcU = s->next_pic.f->data[1];
- srcV = s->next_pic.f->data[2];
+ srcY = s->next_pic.data[0];
+ srcU = s->next_pic.data[1];
+ srcV = s->next_pic.data[2];
luty = v->next_luty;
lutuv = v->next_lutuv;
use_ic = v->next_use_ic;
@@ -464,7 +464,7 @@ void ff_vc1_mc_4mv_luma(VC1Context *v, int n, int dir, int avg)
if ((!v->field_mode ||
(v->ref_field_type[dir] == 1 && v->cur_field_type == 1)) &&
- !v->s.last_pic.f->data[0])
+ !v->s.last_pic.data[0])
return;
linesize = s->cur_pic_ptr->f->linesize[0];
@@ -474,18 +474,18 @@ void ff_vc1_mc_4mv_luma(VC1Context *v, int n, int dir, int avg)
if (!dir) {
if (v->field_mode && (v->cur_field_type != v->ref_field_type[dir]) && v->second_field) {
- srcY = s->cur_pic.f->data[0];
+ srcY = s->cur_pic.data[0];
luty = v->curr_luty;
use_ic = *v->curr_use_ic;
interlace = 1;
} else {
- srcY = s->last_pic.f->data[0];
+ srcY = s->last_pic.data[0];
luty = v->last_luty;
use_ic = v->last_use_ic;
interlace = !!(s->last_pic.f->flags & AV_FRAME_FLAG_INTERLACED);
}
} else {
- srcY = s->next_pic.f->data[0];
+ srcY = s->next_pic.data[0];
luty = v->next_luty;
use_ic = v->next_use_ic;
interlace = !!(s->next_pic.f->flags & AV_FRAME_FLAG_INTERLACED);
@@ -645,7 +645,7 @@ void ff_vc1_mc_4mv_chroma(VC1Context *v, int dir)
int interlace;
int uvlinesize;
- if (!v->field_mode && !v->s.last_pic.f->data[0])
+ if (!v->field_mode && !v->s.last_pic.data[0])
return;
if (CONFIG_GRAY && s->avctx->flags & AV_CODEC_FLAG_GRAY)
return;
@@ -664,7 +664,7 @@ void ff_vc1_mc_4mv_chroma(VC1Context *v, int dir)
int opp_count = get_luma_mv(v, dir, &tx, &ty);
chroma_ref_type = v->cur_field_type ^ (opp_count > 2);
}
- if (v->field_mode && chroma_ref_type == 1 && v->cur_field_type == 1 && !v->s.last_pic.f->data[0])
+ if (v->field_mode && chroma_ref_type == 1 && v->cur_field_type == 1 && !v->s.last_pic.data[0])
return;
s->cur_pic.motion_val[1][s->block_index[0] + v->blocks_off][0] = tx;
s->cur_pic.motion_val[1][s->block_index[0] + v->blocks_off][1] = ty;
@@ -698,21 +698,21 @@ void ff_vc1_mc_4mv_chroma(VC1Context *v, int dir)
if (!dir) {
if (v->field_mode && (v->cur_field_type != chroma_ref_type) && v->second_field) {
- srcU = s->cur_pic.f->data[1];
- srcV = s->cur_pic.f->data[2];
+ srcU = s->cur_pic.data[1];
+ srcV = s->cur_pic.data[2];
lutuv = v->curr_lutuv;
use_ic = *v->curr_use_ic;
interlace = 1;
} else {
- srcU = s->last_pic.f->data[1];
- srcV = s->last_pic.f->data[2];
+ srcU = s->last_pic.data[1];
+ srcV = s->last_pic.data[2];
lutuv = v->last_lutuv;
use_ic = v->last_use_ic;
interlace = !!(s->last_pic.f->flags & AV_FRAME_FLAG_INTERLACED);
}
} else {
- srcU = s->next_pic.f->data[1];
- srcV = s->next_pic.f->data[2];
+ srcU = s->next_pic.data[1];
+ srcV = s->next_pic.data[2];
lutuv = v->next_lutuv;
use_ic = v->next_use_ic;
interlace = !!(s->next_pic.f->flags & AV_FRAME_FLAG_INTERLACED);
@@ -880,14 +880,14 @@ void ff_vc1_mc_4mv_chroma4(VC1Context *v, int dir, int dir2, int avg)
else
uvsrc_y = av_clip(uvsrc_y, -8, s->avctx->coded_height >> 1);
if (i < 2 ? dir : dir2) {
- srcU = s->next_pic.f->data[1];
- srcV = s->next_pic.f->data[2];
+ srcU = s->next_pic.data[1];
+ srcV = s->next_pic.data[2];
lutuv = v->next_lutuv;
use_ic = v->next_use_ic;
interlace = !!(s->next_pic.f->flags & AV_FRAME_FLAG_INTERLACED);
} else {
- srcU = s->last_pic.f->data[1];
- srcV = s->last_pic.f->data[2];
+ srcU = s->last_pic.data[1];
+ srcV = s->last_pic.data[2];
lutuv = v->last_lutuv;
use_ic = v->last_use_ic;
interlace = !!(s->last_pic.f->flags & AV_FRAME_FLAG_INTERLACED);
@@ -1012,7 +1012,7 @@ void ff_vc1_interp_mc(VC1Context *v)
int interlace;
int linesize, uvlinesize;
- if (!v->field_mode && !v->s.next_pic.f->data[0])
+ if (!v->field_mode && !v->s.next_pic.data[0])
return;
linesize = s->cur_pic_ptr->f->linesize[0];
@@ -1030,9 +1030,9 @@ void ff_vc1_interp_mc(VC1Context *v)
uvmx = uvmx + ((uvmx < 0) ? -(uvmx & 1) : (uvmx & 1));
uvmy = uvmy + ((uvmy < 0) ? -(uvmy & 1) : (uvmy & 1));
}
- srcY = s->next_pic.f->data[0];
- srcU = s->next_pic.f->data[1];
- srcV = s->next_pic.f->data[2];
+ srcY = s->next_pic.data[0];
+ srcU = s->next_pic.data[1];
+ srcV = s->next_pic.data[2];
interlace = !!(s->next_pic.f->flags & AV_FRAME_FLAG_INTERLACED);
diff --git a/libavcodec/vc1dec.c b/libavcodec/vc1dec.c
index 93398e3fb2..d8d58bb7eb 100644
--- a/libavcodec/vc1dec.c
+++ b/libavcodec/vc1dec.c
@@ -235,15 +235,15 @@ static void vc1_draw_sprites(VC1Context *v, SpriteData* sd)
v->sprite_output_frame->linesize[plane] * row;
for (sprite = 0; sprite <= v->two_sprites; sprite++) {
- uint8_t *iplane = s->cur_pic.f->data[plane];
- int iline = s->cur_pic.f->linesize[plane];
+ uint8_t *iplane = s->cur_pic.data[plane];
+ int iline = s->cur_pic.linesize[plane];
int ycoord = yoff[sprite] + yadv[sprite] * row;
int yline = ycoord >> 16;
int next_line;
ysub[sprite] = ycoord & 0xFFFF;
if (sprite) {
- iplane = s->last_pic.f->data[plane];
- iline = s->last_pic.f->linesize[plane];
+ iplane = s->last_pic.data[plane];
+ iline = s->last_pic.linesize[plane];
}
next_line = FFMIN(yline + 1, (v->sprite_height >> !!plane) - 1) * iline;
if (!(xoff[sprite] & 0xFFFF) && xadv[sprite] == 1 << 16) {
@@ -317,12 +317,12 @@ static int vc1_decode_sprites(VC1Context *v, GetBitContext* gb)
if (ret < 0)
return ret;
- if (!s->cur_pic.f || !s->cur_pic.f->data[0]) {
+ if (!s->cur_pic.data[0]) {
av_log(avctx, AV_LOG_ERROR, "Got no sprites\n");
return AVERROR_UNKNOWN;
}
- if (v->two_sprites && (!s->last_pic_ptr || !s->last_pic.f->data[0])) {
+ if (v->two_sprites && (!s->last_pic_ptr || !s->last_pic.data[0])) {
av_log(avctx, AV_LOG_WARNING, "Need two sprites, only got one\n");
v->two_sprites = 0;
}
@@ -340,14 +340,14 @@ static void vc1_sprite_flush(AVCodecContext *avctx)
{
VC1Context *v = avctx->priv_data;
MpegEncContext *s = &v->s;
- AVFrame *f = s->cur_pic.f;
+ Picture *f = &s->cur_pic;
int plane, i;
/* Windows Media Image codecs have a convergence interval of two keyframes.
Since we can't enforce it, clear to black the missing sprite. This is
wrong but it looks better than doing nothing. */
- if (f && f->data[0])
+ if (f->data[0])
for (plane = 0; plane < (CONFIG_GRAY && s->avctx->flags & AV_CODEC_FLAG_GRAY ? 1 : 3); plane++)
for (i = 0; i < v->sprite_height>>!!plane; i++)
memset(f->data[plane] + i * f->linesize[plane],
@@ -1230,9 +1230,9 @@ static int vc1_decode_frame(AVCodecContext *avctx, AVFrame *pict,
v->end_mb_x = s->mb_width;
if (v->field_mode) {
- s->cur_pic.f->linesize[0] <<= 1;
- s->cur_pic.f->linesize[1] <<= 1;
- s->cur_pic.f->linesize[2] <<= 1;
+ s->cur_pic.linesize[0] <<= 1;
+ s->cur_pic.linesize[1] <<= 1;
+ s->cur_pic.linesize[2] <<= 1;
s->linesize <<= 1;
s->uvlinesize <<= 1;
}
@@ -1307,9 +1307,9 @@ static int vc1_decode_frame(AVCodecContext *avctx, AVFrame *pict,
}
if (v->field_mode) {
v->second_field = 0;
- s->cur_pic.f->linesize[0] >>= 1;
- s->cur_pic.f->linesize[1] >>= 1;
- s->cur_pic.f->linesize[2] >>= 1;
+ s->cur_pic.linesize[0] >>= 1;
+ s->cur_pic.linesize[1] >>= 1;
+ s->cur_pic.linesize[2] >>= 1;
s->linesize >>= 1;
s->uvlinesize >>= 1;
if (v->s.pict_type != AV_PICTURE_TYPE_BI && v->s.pict_type != AV_PICTURE_TYPE_B) {
--
2.40.1
* [FFmpeg-devel] [PATCH v2 34/71] avcodec/rv30, rv34, rv40: Avoid indirection
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (31 preceding siblings ...)
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 33/71] avcodec/mpegpicture: Cache AVFrame.data and linesize values Andreas Rheinhardt
@ 2024-05-11 20:50 ` Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 35/71] avcodec/mpegvideo: Add const where appropriate Andreas Rheinhardt
` (37 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:50 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
Use the cached values from MpegEncContext.(cur|last|next)_pic
instead of the corresponding *_pic_ptr.
Also do the same in wmv2dec.c and mpegvideo_enc.c.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpegvideo_enc.c | 2 +-
libavcodec/rv30.c | 18 +++---
libavcodec/rv34.c | 122 +++++++++++++++++++------------------
libavcodec/rv40.c | 10 +--
libavcodec/wmv2dec.c | 7 +--
5 files changed, 80 insertions(+), 79 deletions(-)
diff --git a/libavcodec/mpegvideo_enc.c b/libavcodec/mpegvideo_enc.c
index 2f6aaad1c7..f84a05d674 100644
--- a/libavcodec/mpegvideo_enc.c
+++ b/libavcodec/mpegvideo_enc.c
@@ -2145,7 +2145,7 @@ static av_always_inline void encode_mb_internal(MpegEncContext *s,
update_qscale(s);
if (!(s->mpv_flags & FF_MPV_FLAG_QP_RD)) {
- s->qscale = s->cur_pic_ptr->qscale_table[mb_xy];
+ s->qscale = s->cur_pic.qscale_table[mb_xy];
s->dquant = s->qscale - last_qp;
if (s->out_format == FMT_H263) {
diff --git a/libavcodec/rv30.c b/libavcodec/rv30.c
index a4e38edf54..9c8bb966e9 100644
--- a/libavcodec/rv30.c
+++ b/libavcodec/rv30.c
@@ -160,7 +160,7 @@ static void rv30_loop_filter(RV34DecContext *r, int row)
mb_pos = row * s->mb_stride;
for(mb_x = 0; mb_x < s->mb_width; mb_x++, mb_pos++){
- int mbtype = s->cur_pic_ptr->mb_type[mb_pos];
+ int mbtype = s->cur_pic.mb_type[mb_pos];
if(IS_INTRA(mbtype) || IS_SEPARATE_DC(mbtype))
r->deblock_coefs[mb_pos] = 0xFFFF;
if(IS_INTRA(mbtype))
@@ -172,11 +172,11 @@ static void rv30_loop_filter(RV34DecContext *r, int row)
*/
mb_pos = row * s->mb_stride;
for(mb_x = 0; mb_x < s->mb_width; mb_x++, mb_pos++){
- cur_lim = rv30_loop_filt_lim[s->cur_pic_ptr->qscale_table[mb_pos]];
+ cur_lim = rv30_loop_filt_lim[s->cur_pic.qscale_table[mb_pos]];
if(mb_x)
- left_lim = rv30_loop_filt_lim[s->cur_pic_ptr->qscale_table[mb_pos - 1]];
+ left_lim = rv30_loop_filt_lim[s->cur_pic.qscale_table[mb_pos - 1]];
for(j = 0; j < 16; j += 4){
- Y = s->cur_pic_ptr->f->data[0] + mb_x*16 + (row*16 + j) * s->linesize + 4 * !mb_x;
+ Y = s->cur_pic.data[0] + mb_x*16 + (row*16 + j) * s->linesize + 4 * !mb_x;
for(i = !mb_x; i < 4; i++, Y += 4){
int ij = i + j;
loc_lim = 0;
@@ -196,7 +196,7 @@ static void rv30_loop_filter(RV34DecContext *r, int row)
if(mb_x)
left_cbp = (r->cbp_chroma[mb_pos - 1] >> (k*4)) & 0xF;
for(j = 0; j < 8; j += 4){
- C = s->cur_pic_ptr->f->data[k + 1] + mb_x*8 + (row*8 + j) * s->uvlinesize + 4 * !mb_x;
+ C = s->cur_pic.data[k + 1] + mb_x*8 + (row*8 + j) * s->uvlinesize + 4 * !mb_x;
for(i = !mb_x; i < 2; i++, C += 4){
int ij = i + (j >> 1);
loc_lim = 0;
@@ -214,11 +214,11 @@ static void rv30_loop_filter(RV34DecContext *r, int row)
}
mb_pos = row * s->mb_stride;
for(mb_x = 0; mb_x < s->mb_width; mb_x++, mb_pos++){
- cur_lim = rv30_loop_filt_lim[s->cur_pic_ptr->qscale_table[mb_pos]];
+ cur_lim = rv30_loop_filt_lim[s->cur_pic.qscale_table[mb_pos]];
if(row)
- top_lim = rv30_loop_filt_lim[s->cur_pic_ptr->qscale_table[mb_pos - s->mb_stride]];
+ top_lim = rv30_loop_filt_lim[s->cur_pic.qscale_table[mb_pos - s->mb_stride]];
for(j = 4*!row; j < 16; j += 4){
- Y = s->cur_pic_ptr->f->data[0] + mb_x*16 + (row*16 + j) * s->linesize;
+ Y = s->cur_pic.data[0] + mb_x*16 + (row*16 + j) * s->linesize;
for(i = 0; i < 4; i++, Y += 4){
int ij = i + j;
loc_lim = 0;
@@ -238,7 +238,7 @@ static void rv30_loop_filter(RV34DecContext *r, int row)
if(row)
top_cbp = (r->cbp_chroma[mb_pos - s->mb_stride] >> (k*4)) & 0xF;
for(j = 4*!row; j < 8; j += 4){
- C = s->cur_pic_ptr->f->data[k+1] + mb_x*8 + (row*8 + j) * s->uvlinesize;
+ C = s->cur_pic.data[k+1] + mb_x*8 + (row*8 + j) * s->uvlinesize;
for(i = 0; i < 2; i++, C += 4){
int ij = i + (j >> 1);
loc_lim = 0;
diff --git a/libavcodec/rv34.c b/libavcodec/rv34.c
index 467a6ab5a1..941d983501 100644
--- a/libavcodec/rv34.c
+++ b/libavcodec/rv34.c
@@ -367,7 +367,7 @@ static int rv34_decode_intra_mb_header(RV34DecContext *r, int8_t *intra_types)
r->is16 = get_bits1(gb);
if(r->is16){
- s->cur_pic_ptr->mb_type[mb_pos] = MB_TYPE_INTRA16x16;
+ s->cur_pic.mb_type[mb_pos] = MB_TYPE_INTRA16x16;
r->block_type = RV34_MB_TYPE_INTRA16x16;
t = get_bits(gb, 2);
fill_rectangle(intra_types, 4, 4, r->intra_types_stride, t, sizeof(intra_types[0]));
@@ -377,7 +377,7 @@ static int rv34_decode_intra_mb_header(RV34DecContext *r, int8_t *intra_types)
if(!get_bits1(gb))
av_log(s->avctx, AV_LOG_ERROR, "Need DQUANT\n");
}
- s->cur_pic_ptr->mb_type[mb_pos] = MB_TYPE_INTRA;
+ s->cur_pic.mb_type[mb_pos] = MB_TYPE_INTRA;
r->block_type = RV34_MB_TYPE_INTRA;
if(r->decode_intra_types(r, gb, intra_types) < 0)
return -1;
@@ -403,7 +403,7 @@ static int rv34_decode_inter_mb_header(RV34DecContext *r, int8_t *intra_types)
r->block_type = r->decode_mb_info(r);
if(r->block_type == -1)
return -1;
- s->cur_pic_ptr->mb_type[mb_pos] = rv34_mb_type_to_lavc[r->block_type];
+ s->cur_pic.mb_type[mb_pos] = rv34_mb_type_to_lavc[r->block_type];
r->mb_type[mb_pos] = r->block_type;
if(r->block_type == RV34_MB_SKIP){
if(s->pict_type == AV_PICTURE_TYPE_P)
@@ -411,7 +411,7 @@ static int rv34_decode_inter_mb_header(RV34DecContext *r, int8_t *intra_types)
if(s->pict_type == AV_PICTURE_TYPE_B)
r->mb_type[mb_pos] = RV34_MB_B_DIRECT;
}
- r->is16 = !!IS_INTRA16x16(s->cur_pic_ptr->mb_type[mb_pos]);
+ r->is16 = !!IS_INTRA16x16(s->cur_pic.mb_type[mb_pos]);
if (rv34_decode_mv(r, r->block_type) < 0)
return -1;
if(r->block_type == RV34_MB_SKIP){
@@ -421,7 +421,7 @@ static int rv34_decode_inter_mb_header(RV34DecContext *r, int8_t *intra_types)
r->chroma_vlc = 1;
r->luma_vlc = 0;
- if(IS_INTRA(s->cur_pic_ptr->mb_type[mb_pos])){
+ if (IS_INTRA(s->cur_pic.mb_type[mb_pos])) {
if(r->is16){
t = get_bits(gb, 2);
fill_rectangle(intra_types, 4, 4, r->intra_types_stride, t, sizeof(intra_types[0]));
@@ -480,33 +480,34 @@ static void rv34_pred_mv(RV34DecContext *r, int block_type, int subblock_no, int
int mx, my;
int* avail = r->avail_cache + avail_indexes[subblock_no];
int c_off = part_sizes_w[block_type];
+ int16_t (*motion_val)[2] = s->cur_pic.motion_val[0];
mv_pos += (subblock_no & 1) + (subblock_no >> 1)*s->b8_stride;
if(subblock_no == 3)
c_off = -1;
if(avail[-1]){
- A[0] = s->cur_pic_ptr->motion_val[0][mv_pos-1][0];
- A[1] = s->cur_pic_ptr->motion_val[0][mv_pos-1][1];
+ A[0] = motion_val[mv_pos-1][0];
+ A[1] = motion_val[mv_pos-1][1];
}
if(avail[-4]){
- B[0] = s->cur_pic_ptr->motion_val[0][mv_pos-s->b8_stride][0];
- B[1] = s->cur_pic_ptr->motion_val[0][mv_pos-s->b8_stride][1];
+ B[0] = motion_val[mv_pos-s->b8_stride][0];
+ B[1] = motion_val[mv_pos-s->b8_stride][1];
}else{
B[0] = A[0];
B[1] = A[1];
}
if(!avail[c_off-4]){
if(avail[-4] && (avail[-1] || r->rv30)){
- C[0] = s->cur_pic_ptr->motion_val[0][mv_pos-s->b8_stride-1][0];
- C[1] = s->cur_pic_ptr->motion_val[0][mv_pos-s->b8_stride-1][1];
+ C[0] = motion_val[mv_pos-s->b8_stride-1][0];
+ C[1] = motion_val[mv_pos-s->b8_stride-1][1];
}else{
C[0] = A[0];
C[1] = A[1];
}
}else{
- C[0] = s->cur_pic_ptr->motion_val[0][mv_pos-s->b8_stride+c_off][0];
- C[1] = s->cur_pic_ptr->motion_val[0][mv_pos-s->b8_stride+c_off][1];
+ C[0] = motion_val[mv_pos-s->b8_stride+c_off][0];
+ C[1] = motion_val[mv_pos-s->b8_stride+c_off][1];
}
mx = mid_pred(A[0], B[0], C[0]);
my = mid_pred(A[1], B[1], C[1]);
@@ -514,8 +515,8 @@ static void rv34_pred_mv(RV34DecContext *r, int block_type, int subblock_no, int
my += r->dmv[dmv_no][1];
for(j = 0; j < part_sizes_h[block_type]; j++){
for(i = 0; i < part_sizes_w[block_type]; i++){
- s->cur_pic_ptr->motion_val[0][mv_pos + i + j*s->b8_stride][0] = mx;
- s->cur_pic_ptr->motion_val[0][mv_pos + i + j*s->b8_stride][1] = my;
+ motion_val[mv_pos + i + j*s->b8_stride][0] = mx;
+ motion_val[mv_pos + i + j*s->b8_stride][1] = my;
}
}
}
@@ -564,7 +565,7 @@ static void rv34_pred_mv_b(RV34DecContext *r, int block_type, int dir)
int has_A = 0, has_B = 0, has_C = 0;
int mx, my;
int i, j;
- Picture *cur_pic = s->cur_pic_ptr;
+ Picture *cur_pic = &s->cur_pic;
const int mask = dir ? MB_TYPE_L1 : MB_TYPE_L0;
int type = cur_pic->mb_type[mb_pos];
@@ -617,27 +618,27 @@ static void rv34_pred_mv_rv3(RV34DecContext *r, int block_type, int dir)
int* avail = r->avail_cache + avail_indexes[0];
if(avail[-1]){
- A[0] = s->cur_pic_ptr->motion_val[0][mv_pos - 1][0];
- A[1] = s->cur_pic_ptr->motion_val[0][mv_pos - 1][1];
+ A[0] = s->cur_pic.motion_val[0][mv_pos - 1][0];
+ A[1] = s->cur_pic.motion_val[0][mv_pos - 1][1];
}
if(avail[-4]){
- B[0] = s->cur_pic_ptr->motion_val[0][mv_pos - s->b8_stride][0];
- B[1] = s->cur_pic_ptr->motion_val[0][mv_pos - s->b8_stride][1];
+ B[0] = s->cur_pic.motion_val[0][mv_pos - s->b8_stride][0];
+ B[1] = s->cur_pic.motion_val[0][mv_pos - s->b8_stride][1];
}else{
B[0] = A[0];
B[1] = A[1];
}
if(!avail[-4 + 2]){
if(avail[-4] && (avail[-1])){
- C[0] = s->cur_pic_ptr->motion_val[0][mv_pos - s->b8_stride - 1][0];
- C[1] = s->cur_pic_ptr->motion_val[0][mv_pos - s->b8_stride - 1][1];
+ C[0] = s->cur_pic.motion_val[0][mv_pos - s->b8_stride - 1][0];
+ C[1] = s->cur_pic.motion_val[0][mv_pos - s->b8_stride - 1][1];
}else{
C[0] = A[0];
C[1] = A[1];
}
}else{
- C[0] = s->cur_pic_ptr->motion_val[0][mv_pos - s->b8_stride + 2][0];
- C[1] = s->cur_pic_ptr->motion_val[0][mv_pos - s->b8_stride + 2][1];
+ C[0] = s->cur_pic.motion_val[0][mv_pos - s->b8_stride + 2][0];
+ C[1] = s->cur_pic.motion_val[0][mv_pos - s->b8_stride + 2][1];
}
mx = mid_pred(A[0], B[0], C[0]);
my = mid_pred(A[1], B[1], C[1]);
@@ -646,8 +647,8 @@ static void rv34_pred_mv_rv3(RV34DecContext *r, int block_type, int dir)
for(j = 0; j < 2; j++){
for(i = 0; i < 2; i++){
for(k = 0; k < 2; k++){
- s->cur_pic_ptr->motion_val[k][mv_pos + i + j*s->b8_stride][0] = mx;
- s->cur_pic_ptr->motion_val[k][mv_pos + i + j*s->b8_stride][1] = my;
+ s->cur_pic.motion_val[k][mv_pos + i + j*s->b8_stride][0] = mx;
+ s->cur_pic.motion_val[k][mv_pos + i + j*s->b8_stride][1] = my;
}
}
}
@@ -683,27 +684,28 @@ static inline void rv34_mc(RV34DecContext *r, const int block_type,
int mv_pos = s->mb_x * 2 + s->mb_y * 2 * s->b8_stride + mv_off;
int is16x16 = 1;
int emu = 0;
+ int16_t *motion_val = s->cur_pic.motion_val[dir][mv_pos];
if(thirdpel){
int chroma_mx, chroma_my;
- mx = (s->cur_pic_ptr->motion_val[dir][mv_pos][0] + (3 << 24)) / 3 - (1 << 24);
- my = (s->cur_pic_ptr->motion_val[dir][mv_pos][1] + (3 << 24)) / 3 - (1 << 24);
- lx = (s->cur_pic_ptr->motion_val[dir][mv_pos][0] + (3 << 24)) % 3;
- ly = (s->cur_pic_ptr->motion_val[dir][mv_pos][1] + (3 << 24)) % 3;
- chroma_mx = s->cur_pic_ptr->motion_val[dir][mv_pos][0] / 2;
- chroma_my = s->cur_pic_ptr->motion_val[dir][mv_pos][1] / 2;
+ mx = (motion_val[0] + (3 << 24)) / 3 - (1 << 24);
+ my = (motion_val[1] + (3 << 24)) / 3 - (1 << 24);
+ lx = (motion_val[0] + (3 << 24)) % 3;
+ ly = (motion_val[1] + (3 << 24)) % 3;
+ chroma_mx = motion_val[0] / 2;
+ chroma_my = motion_val[1] / 2;
umx = (chroma_mx + (3 << 24)) / 3 - (1 << 24);
umy = (chroma_my + (3 << 24)) / 3 - (1 << 24);
uvmx = chroma_coeffs[(chroma_mx + (3 << 24)) % 3];
uvmy = chroma_coeffs[(chroma_my + (3 << 24)) % 3];
}else{
int cx, cy;
- mx = s->cur_pic_ptr->motion_val[dir][mv_pos][0] >> 2;
- my = s->cur_pic_ptr->motion_val[dir][mv_pos][1] >> 2;
- lx = s->cur_pic_ptr->motion_val[dir][mv_pos][0] & 3;
- ly = s->cur_pic_ptr->motion_val[dir][mv_pos][1] & 3;
- cx = s->cur_pic_ptr->motion_val[dir][mv_pos][0] / 2;
- cy = s->cur_pic_ptr->motion_val[dir][mv_pos][1] / 2;
+ mx = motion_val[0] >> 2;
+ my = motion_val[1] >> 2;
+ lx = motion_val[0] & 3;
+ ly = motion_val[1] & 3;
+ cx = motion_val[0] / 2;
+ cy = motion_val[1] / 2;
umx = cx >> 2;
umy = cy >> 2;
uvmx = (cx & 3) << 1;
@@ -721,9 +723,9 @@ static inline void rv34_mc(RV34DecContext *r, const int block_type,
}
dxy = ly*4 + lx;
- srcY = dir ? s->next_pic_ptr->f->data[0] : s->last_pic_ptr->f->data[0];
- srcU = dir ? s->next_pic_ptr->f->data[1] : s->last_pic_ptr->f->data[1];
- srcV = dir ? s->next_pic_ptr->f->data[2] : s->last_pic_ptr->f->data[2];
+ srcY = dir ? s->next_pic.data[0] : s->last_pic.data[0];
+ srcU = dir ? s->next_pic.data[1] : s->last_pic.data[1];
+ srcV = dir ? s->next_pic.data[2] : s->last_pic.data[2];
src_x = s->mb_x * 16 + xoff + mx;
src_y = s->mb_y * 16 + yoff + my;
uvsrc_x = s->mb_x * 8 + (xoff >> 1) + umx;
@@ -884,11 +886,11 @@ static int rv34_decode_mv(RV34DecContext *r, int block_type)
switch(block_type){
case RV34_MB_TYPE_INTRA:
case RV34_MB_TYPE_INTRA16x16:
- ZERO8x2(s->cur_pic_ptr->motion_val[0][s->mb_x * 2 + s->mb_y * 2 * s->b8_stride], s->b8_stride);
+ ZERO8x2(s->cur_pic.motion_val[0][s->mb_x * 2 + s->mb_y * 2 * s->b8_stride], s->b8_stride);
return 0;
case RV34_MB_SKIP:
if(s->pict_type == AV_PICTURE_TYPE_P){
- ZERO8x2(s->cur_pic_ptr->motion_val[0][s->mb_x * 2 + s->mb_y * 2 * s->b8_stride], s->b8_stride);
+ ZERO8x2(s->cur_pic.motion_val[0][s->mb_x * 2 + s->mb_y * 2 * s->b8_stride], s->b8_stride);
rv34_mc_1mv (r, block_type, 0, 0, 0, 2, 2, 0);
break;
}
@@ -898,21 +900,21 @@ static int rv34_decode_mv(RV34DecContext *r, int block_type)
if (HAVE_THREADS && (s->avctx->active_thread_type & FF_THREAD_FRAME))
ff_thread_await_progress(&s->next_pic_ptr->tf, FFMAX(0, s->mb_y-1), 0);
- next_bt = s->next_pic_ptr->mb_type[s->mb_x + s->mb_y * s->mb_stride];
+ next_bt = s->next_pic.mb_type[s->mb_x + s->mb_y * s->mb_stride];
if(IS_INTRA(next_bt) || IS_SKIP(next_bt)){
- ZERO8x2(s->cur_pic_ptr->motion_val[0][s->mb_x * 2 + s->mb_y * 2 * s->b8_stride], s->b8_stride);
- ZERO8x2(s->cur_pic_ptr->motion_val[1][s->mb_x * 2 + s->mb_y * 2 * s->b8_stride], s->b8_stride);
+ ZERO8x2(s->cur_pic.motion_val[0][s->mb_x * 2 + s->mb_y * 2 * s->b8_stride], s->b8_stride);
+ ZERO8x2(s->cur_pic.motion_val[1][s->mb_x * 2 + s->mb_y * 2 * s->b8_stride], s->b8_stride);
}else
for(j = 0; j < 2; j++)
for(i = 0; i < 2; i++)
for(k = 0; k < 2; k++)
for(l = 0; l < 2; l++)
- s->cur_pic_ptr->motion_val[l][mv_pos + i + j*s->b8_stride][k] = calc_add_mv(r, l, s->next_pic_ptr->motion_val[0][mv_pos + i + j*s->b8_stride][k]);
+ s->cur_pic.motion_val[l][mv_pos + i + j*s->b8_stride][k] = calc_add_mv(r, l, s->next_pic.motion_val[0][mv_pos + i + j*s->b8_stride][k]);
if(!(IS_16X8(next_bt) || IS_8X16(next_bt) || IS_8X8(next_bt))) //we can use whole macroblock MC
rv34_mc_2mv(r, block_type);
else
rv34_mc_2mv_skip(r);
- ZERO8x2(s->cur_pic_ptr->motion_val[0][s->mb_x * 2 + s->mb_y * 2 * s->b8_stride], s->b8_stride);
+ ZERO8x2(s->cur_pic.motion_val[0][s->mb_x * 2 + s->mb_y * 2 * s->b8_stride], s->b8_stride);
break;
case RV34_MB_P_16x16:
case RV34_MB_P_MIX16x16:
@@ -1180,7 +1182,7 @@ static int rv34_set_deblock_coef(RV34DecContext *r)
MpegEncContext *s = &r->s;
int hmvmask = 0, vmvmask = 0, i, j;
int midx = s->mb_x * 2 + s->mb_y * 2 * s->b8_stride;
- int16_t (*motion_val)[2] = &s->cur_pic_ptr->motion_val[0][midx];
+ int16_t (*motion_val)[2] = &s->cur_pic.motion_val[0][midx];
for(j = 0; j < 16; j += 8){
for(i = 0; i < 2; i++){
if(is_mv_diff_gt_3(motion_val + i, 1))
@@ -1223,26 +1225,26 @@ static int rv34_decode_inter_macroblock(RV34DecContext *r, int8_t *intra_types)
dist = (s->mb_x - s->resync_mb_x) + (s->mb_y - s->resync_mb_y) * s->mb_width;
if(s->mb_x && dist)
r->avail_cache[5] =
- r->avail_cache[9] = s->cur_pic_ptr->mb_type[mb_pos - 1];
+ r->avail_cache[9] = s->cur_pic.mb_type[mb_pos - 1];
if(dist >= s->mb_width)
r->avail_cache[2] =
- r->avail_cache[3] = s->cur_pic_ptr->mb_type[mb_pos - s->mb_stride];
+ r->avail_cache[3] = s->cur_pic.mb_type[mb_pos - s->mb_stride];
if(((s->mb_x+1) < s->mb_width) && dist >= s->mb_width - 1)
- r->avail_cache[4] = s->cur_pic_ptr->mb_type[mb_pos - s->mb_stride + 1];
+ r->avail_cache[4] = s->cur_pic.mb_type[mb_pos - s->mb_stride + 1];
if(s->mb_x && dist > s->mb_width)
- r->avail_cache[1] = s->cur_pic_ptr->mb_type[mb_pos - s->mb_stride - 1];
+ r->avail_cache[1] = s->cur_pic.mb_type[mb_pos - s->mb_stride - 1];
s->qscale = r->si.quant;
cbp = cbp2 = rv34_decode_inter_mb_header(r, intra_types);
r->cbp_luma [mb_pos] = cbp;
r->cbp_chroma[mb_pos] = cbp >> 16;
r->deblock_coefs[mb_pos] = rv34_set_deblock_coef(r) | r->cbp_luma[mb_pos];
- s->cur_pic_ptr->qscale_table[mb_pos] = s->qscale;
+ s->cur_pic.qscale_table[mb_pos] = s->qscale;
if(cbp == -1)
return -1;
- if (IS_INTRA(s->cur_pic_ptr->mb_type[mb_pos])){
+ if (IS_INTRA(s->cur_pic.mb_type[mb_pos])) {
if(r->is16) rv34_output_i16x16(r, intra_types, cbp);
else rv34_output_intra(r, intra_types, cbp);
return 0;
@@ -1325,21 +1327,21 @@ static int rv34_decode_intra_macroblock(RV34DecContext *r, int8_t *intra_types)
dist = (s->mb_x - s->resync_mb_x) + (s->mb_y - s->resync_mb_y) * s->mb_width;
if(s->mb_x && dist)
r->avail_cache[5] =
- r->avail_cache[9] = s->cur_pic_ptr->mb_type[mb_pos - 1];
+ r->avail_cache[9] = s->cur_pic.mb_type[mb_pos - 1];
if(dist >= s->mb_width)
r->avail_cache[2] =
- r->avail_cache[3] = s->cur_pic_ptr->mb_type[mb_pos - s->mb_stride];
+ r->avail_cache[3] = s->cur_pic.mb_type[mb_pos - s->mb_stride];
if(((s->mb_x+1) < s->mb_width) && dist >= s->mb_width - 1)
- r->avail_cache[4] = s->cur_pic_ptr->mb_type[mb_pos - s->mb_stride + 1];
+ r->avail_cache[4] = s->cur_pic.mb_type[mb_pos - s->mb_stride + 1];
if(s->mb_x && dist > s->mb_width)
- r->avail_cache[1] = s->cur_pic_ptr->mb_type[mb_pos - s->mb_stride - 1];
+ r->avail_cache[1] = s->cur_pic.mb_type[mb_pos - s->mb_stride - 1];
s->qscale = r->si.quant;
cbp = rv34_decode_intra_mb_header(r, intra_types);
r->cbp_luma [mb_pos] = cbp;
r->cbp_chroma[mb_pos] = cbp >> 16;
r->deblock_coefs[mb_pos] = 0xFFFF;
- s->cur_pic_ptr->qscale_table[mb_pos] = s->qscale;
+ s->cur_pic.qscale_table[mb_pos] = s->qscale;
if(cbp == -1)
return -1;
diff --git a/libavcodec/rv40.c b/libavcodec/rv40.c
index a98e64f5bf..536bbc9623 100644
--- a/libavcodec/rv40.c
+++ b/libavcodec/rv40.c
@@ -371,7 +371,7 @@ static void rv40_loop_filter(RV34DecContext *r, int row)
mb_pos = row * s->mb_stride;
for(mb_x = 0; mb_x < s->mb_width; mb_x++, mb_pos++){
- int mbtype = s->cur_pic_ptr->mb_type[mb_pos];
+ int mbtype = s->cur_pic.mb_type[mb_pos];
if(IS_INTRA(mbtype) || IS_SEPARATE_DC(mbtype))
r->cbp_luma [mb_pos] = r->deblock_coefs[mb_pos] = 0xFFFF;
if(IS_INTRA(mbtype))
@@ -386,7 +386,7 @@ static void rv40_loop_filter(RV34DecContext *r, int row)
unsigned y_to_deblock;
int c_to_deblock[2];
- q = s->cur_pic_ptr->qscale_table[mb_pos];
+ q = s->cur_pic.qscale_table[mb_pos];
alpha = rv40_alpha_tab[q];
beta = rv40_beta_tab [q];
betaY = betaC = beta * 3;
@@ -401,7 +401,7 @@ static void rv40_loop_filter(RV34DecContext *r, int row)
if(avail[i]){
int pos = mb_pos + neighbour_offs_x[i] + neighbour_offs_y[i]*s->mb_stride;
mvmasks[i] = r->deblock_coefs[pos];
- mbtype [i] = s->cur_pic_ptr->mb_type[pos];
+ mbtype [i] = s->cur_pic.mb_type[pos];
cbp [i] = r->cbp_luma[pos];
uvcbp[i][0] = r->cbp_chroma[pos] & 0xF;
uvcbp[i][1] = r->cbp_chroma[pos] >> 4;
@@ -460,7 +460,7 @@ static void rv40_loop_filter(RV34DecContext *r, int row)
}
for(j = 0; j < 16; j += 4){
- Y = s->cur_pic_ptr->f->data[0] + mb_x*16 + (row*16 + j) * s->linesize;
+ Y = s->cur_pic.data[0] + mb_x*16 + (row*16 + j) * s->linesize;
for(i = 0; i < 4; i++, Y += 4){
int ij = i + j;
int clip_cur = y_to_deblock & (MASK_CUR << ij) ? clip[POS_CUR] : 0;
@@ -505,7 +505,7 @@ static void rv40_loop_filter(RV34DecContext *r, int row)
}
for(k = 0; k < 2; k++){
for(j = 0; j < 2; j++){
- C = s->cur_pic_ptr->f->data[k + 1] + mb_x*8 + (row*8 + j*4) * s->uvlinesize;
+ C = s->cur_pic.data[k + 1] + mb_x*8 + (row*8 + j*4) * s->uvlinesize;
for(i = 0; i < 2; i++, C += 4){
int ij = i + j*2;
int clip_cur = c_to_deblock[k] & (MASK_CUR << ij) ? clip[POS_CUR] : 0;
diff --git a/libavcodec/wmv2dec.c b/libavcodec/wmv2dec.c
index 61e1759449..432d6f7223 100644
--- a/libavcodec/wmv2dec.c
+++ b/libavcodec/wmv2dec.c
@@ -103,7 +103,7 @@ static int parse_mb_skip(WMV2DecContext *w)
int mb_x, mb_y;
int coded_mb_count = 0;
MpegEncContext *const s = &w->s;
- uint32_t *const mb_type = s->cur_pic_ptr->mb_type;
+ uint32_t *const mb_type = s->cur_pic.mb_type;
w->skip_type = get_bits(&s->gb, 2);
switch (w->skip_type) {
@@ -238,9 +238,8 @@ int ff_wmv2_decode_secondary_picture_header(MpegEncContext *s)
if (s->pict_type == AV_PICTURE_TYPE_I) {
/* Is filling with zeroes really the right thing to do? */
- memset(s->cur_pic_ptr->mb_type, 0,
- sizeof(*s->cur_pic_ptr->mb_type) *
- s->mb_height * s->mb_stride);
+ memset(s->cur_pic.mb_type, 0,
+ sizeof(*s->cur_pic.mb_type) * s->mb_height * s->mb_stride);
if (w->j_type_bit)
w->j_type = get_bits1(&s->gb);
else
--
2.40.1
* [FFmpeg-devel] [PATCH v2 35/71] avcodec/mpegvideo: Add const where appropriate
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (32 preceding siblings ...)
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 34/71] avcodec/rv30, rv34, rv40: Avoid indirection Andreas Rheinhardt
@ 2024-05-11 20:50 ` Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 36/71] avcodec/vc1_pred: Remove unused function parameter Andreas Rheinhardt
` (36 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:50 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
Specifically, add const to the pointed-to type of pointers
that point to something static or that belong to last_pic
or next_pic (because modifying these might lead to data races).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/h261dec.c | 2 +-
libavcodec/h261enc.c | 2 +-
libavcodec/ituh263dec.c | 6 ++--
libavcodec/ituh263enc.c | 2 +-
libavcodec/mpeg12dec.c | 6 ++--
libavcodec/mpeg12enc.c | 8 ++---
libavcodec/mpeg4videodec.c | 9 +++---
libavcodec/mpeg4videoenc.c | 32 ++++++++++----------
libavcodec/mpeg_er.c | 2 +-
libavcodec/mpegvideo.h | 4 +--
libavcodec/mpegvideo_enc.c | 6 ++--
libavcodec/mpegvideo_motion.c | 32 ++++++++++----------
libavcodec/mpegvideoenc.h | 2 +-
libavcodec/mpv_reconstruct_mb_template.c | 4 +--
libavcodec/msmpeg4dec.c | 6 ++--
libavcodec/ratecontrol.c | 22 +++++++-------
libavcodec/rv34.c | 3 +-
libavcodec/vc1_loopfilter.c | 26 +++++++++-------
libavcodec/vc1_mc.c | 16 +++++-----
libavcodec/vc1_pred.c | 38 +++++++++++-------------
libavcodec/vc1dec.c | 6 ++--
libavcodec/wmv2.c | 3 +-
libavcodec/wmv2.h | 3 +-
23 files changed, 123 insertions(+), 117 deletions(-)
diff --git a/libavcodec/h261dec.c b/libavcodec/h261dec.c
index 77aa08687d..00edd7a7c2 100644
--- a/libavcodec/h261dec.c
+++ b/libavcodec/h261dec.c
@@ -281,7 +281,7 @@ static int h261_decode_block(H261DecContext *h, int16_t *block, int n, int coded
{
MpegEncContext *const s = &h->s;
int level, i, j, run;
- RLTable *rl = &ff_h261_rl_tcoeff;
+ const RLTable *rl = &ff_h261_rl_tcoeff;
const uint8_t *scan_table;
/* For the variable length encoding there are two code tables, one being
diff --git a/libavcodec/h261enc.c b/libavcodec/h261enc.c
index 20dd296711..01bce533a0 100644
--- a/libavcodec/h261enc.c
+++ b/libavcodec/h261enc.c
@@ -167,7 +167,7 @@ static void h261_encode_block(H261EncContext *h, int16_t *block, int n)
{
MpegEncContext *const s = &h->s;
int level, run, i, j, last_index, last_non_zero, sign, slevel, code;
- RLTable *rl;
+ const RLTable *rl;
rl = &ff_h261_rl_tcoeff;
if (s->mb_intra) {
diff --git a/libavcodec/ituh263dec.c b/libavcodec/ituh263dec.c
index 9358363ed8..492cb5e0d4 100644
--- a/libavcodec/ituh263dec.c
+++ b/libavcodec/ituh263dec.c
@@ -534,7 +534,7 @@ static int h263_decode_block(MpegEncContext * s, int16_t * block,
int n, int coded)
{
int level, i, j, run;
- RLTable *rl = &ff_h263_rl_inter;
+ const RLTable *rl = &ff_h263_rl_inter;
const uint8_t *scan_table;
GetBitContext gb= s->gb;
@@ -719,7 +719,7 @@ static int h263_get_modb(GetBitContext *gb, int pb_frame, int *cbpb)
#define tab_size ((signed)FF_ARRAY_ELEMS(s->direct_scale_mv[0]))
#define tab_bias (tab_size / 2)
-static inline void set_one_direct_mv(MpegEncContext *s, Picture *p, int i)
+static inline void set_one_direct_mv(MpegEncContext *s, const Picture *p, int i)
{
int xy = s->block_index[i];
uint16_t time_pp = s->pp_time;
@@ -750,7 +750,7 @@ static inline void set_one_direct_mv(MpegEncContext *s, Picture *p, int i)
static int set_direct_mv(MpegEncContext *s)
{
const int mb_index = s->mb_x + s->mb_y * s->mb_stride;
- Picture *p = &s->next_pic;
+ const Picture *p = &s->next_pic;
int colocated_mb_type = p->mb_type[mb_index];
int i;
diff --git a/libavcodec/ituh263enc.c b/libavcodec/ituh263enc.c
index bcb230871e..b7c9f124a9 100644
--- a/libavcodec/ituh263enc.c
+++ b/libavcodec/ituh263enc.c
@@ -305,7 +305,7 @@ static const int dquant_code[5]= {1,0,9,2,3};
static void h263_encode_block(MpegEncContext * s, int16_t * block, int n)
{
int level, run, last, i, j, last_index, last_non_zero, sign, slevel, code;
- RLTable *rl;
+ const RLTable *rl;
rl = &ff_h263_rl_inter;
if (s->mb_intra && !s->h263_aic) {
diff --git a/libavcodec/mpeg12dec.c b/libavcodec/mpeg12dec.c
index c04d351e0c..6877b9ef4a 100644
--- a/libavcodec/mpeg12dec.c
+++ b/libavcodec/mpeg12dec.c
@@ -160,7 +160,7 @@ static inline int mpeg1_decode_block_inter(MpegEncContext *s,
int16_t *block, int n)
{
int level, i, j, run;
- uint8_t *const scantable = s->intra_scantable.permutated;
+ const uint8_t *const scantable = s->intra_scantable.permutated;
const uint16_t *quant_matrix = s->inter_matrix;
const int qscale = s->qscale;
@@ -244,7 +244,7 @@ static inline int mpeg2_decode_block_non_intra(MpegEncContext *s,
int16_t *block, int n)
{
int level, i, j, run;
- uint8_t *const scantable = s->intra_scantable.permutated;
+ const uint8_t *const scantable = s->intra_scantable.permutated;
const uint16_t *quant_matrix;
const int qscale = s->qscale;
int mismatch;
@@ -331,7 +331,7 @@ static inline int mpeg2_decode_block_intra(MpegEncContext *s,
int level, dc, diff, i, j, run;
int component;
const RL_VLC_ELEM *rl_vlc;
- uint8_t *const scantable = s->intra_scantable.permutated;
+ const uint8_t *const scantable = s->intra_scantable.permutated;
const uint16_t *quant_matrix;
const int qscale = s->qscale;
int mismatch;
diff --git a/libavcodec/mpeg12enc.c b/libavcodec/mpeg12enc.c
index bd95451b68..42ff92cb16 100644
--- a/libavcodec/mpeg12enc.c
+++ b/libavcodec/mpeg12enc.c
@@ -470,7 +470,7 @@ void ff_mpeg1_encode_slice_header(MpegEncContext *s)
void ff_mpeg1_encode_picture_header(MpegEncContext *s)
{
MPEG12EncContext *const mpeg12 = (MPEG12EncContext*)s;
- AVFrameSideData *side_data;
+ const AVFrameSideData *side_data;
mpeg1_encode_sequence_header(s);
/* MPEG-1 picture header */
@@ -557,7 +557,7 @@ void ff_mpeg1_encode_picture_header(MpegEncContext *s)
side_data = av_frame_get_side_data(s->cur_pic_ptr->f,
AV_FRAME_DATA_STEREO3D);
if (side_data) {
- AVStereo3D *stereo = (AVStereo3D *)side_data->data;
+ const AVStereo3D *stereo = (AVStereo3D *)side_data->data;
uint8_t fpa_type;
switch (stereo->type) {
@@ -711,7 +711,7 @@ static inline void encode_dc(MpegEncContext *s, int diff, int component)
}
}
-static void mpeg1_encode_block(MpegEncContext *s, int16_t *block, int n)
+static void mpeg1_encode_block(MpegEncContext *s, const int16_t *block, int n)
{
int alevel, level, last_non_zero, dc, diff, i, j, run, last_index, sign;
int code, component;
@@ -793,7 +793,7 @@ next_coef:
}
static av_always_inline void mpeg1_encode_mb_internal(MpegEncContext *s,
- int16_t block[8][64],
+ const int16_t block[8][64],
int motion_x, int motion_y,
int mb_block_count,
int chroma_y_shift)
diff --git a/libavcodec/mpeg4videodec.c b/libavcodec/mpeg4videodec.c
index 8659ec0376..8f2e03414b 100644
--- a/libavcodec/mpeg4videodec.c
+++ b/libavcodec/mpeg4videodec.c
@@ -1292,8 +1292,8 @@ static inline int mpeg4_decode_block(Mpeg4DecContext *ctx, int16_t *block,
MpegEncContext *s = &ctx->m;
int level, i, last, run, qmul, qadd;
int av_uninit(dc_pred_dir);
- RLTable *rl;
- RL_VLC_ELEM *rl_vlc;
+ const RLTable *rl;
+ const RL_VLC_ELEM *rl_vlc;
const uint8_t *scan_table;
// Note intra & rvlc should be optimized away if this is inlined
@@ -1651,7 +1651,6 @@ static int mpeg4_decode_mb(MpegEncContext *s, int16_t block[6][64])
{
Mpeg4DecContext *ctx = s->avctx->priv_data;
int cbpc, cbpy, i, cbp, pred_x, pred_y, mx, my, dquant;
- int16_t *mot_val;
static const int8_t quant_tab[4] = { -1, -2, 1, 2 };
const int xy = s->mb_x + s->mb_y * s->mb_stride;
int next;
@@ -1782,7 +1781,7 @@ static int mpeg4_decode_mb(MpegEncContext *s, int16_t block[6][64])
s->cur_pic.mb_type[xy] = MB_TYPE_8x8 | MB_TYPE_L0;
s->mv_type = MV_TYPE_8X8;
for (i = 0; i < 4; i++) {
- mot_val = ff_h263_pred_motion(s, i, 0, &pred_x, &pred_y);
+ int16_t *mot_val = ff_h263_pred_motion(s, i, 0, &pred_x, &pred_y);
mx = ff_h263_decode_motion(s, pred_x, s->f_code);
if (mx >= 0xffff)
return AVERROR_INVALIDDATA;
@@ -2075,7 +2074,7 @@ static int mpeg4_decode_studio_block(MpegEncContext *s, int32_t block[64], int n
int cc, dct_dc_size, dct_diff, code, j, idx = 1, group = 0, run = 0,
additional_code_len, sign, mismatch;
const VLCElem *cur_vlc = studio_intra_tab[0];
- uint8_t *const scantable = s->intra_scantable.permutated;
+ const uint8_t *const scantable = s->intra_scantable.permutated;
const uint16_t *quant_matrix;
uint32_t flc;
const int min = -1 * (1 << (s->avctx->bits_per_raw_sample + 6));
diff --git a/libavcodec/mpeg4videoenc.c b/libavcodec/mpeg4videoenc.c
index 87b12413ab..c5b5b3ea50 100644
--- a/libavcodec/mpeg4videoenc.c
+++ b/libavcodec/mpeg4videoenc.c
@@ -71,7 +71,7 @@ static uint8_t uni_mpeg4_inter_rl_len[64 * 64 * 2 * 2];
* @param[in] block_last_index last index in scantable order that refers to a non zero element in block.
*/
static inline int get_block_rate(MpegEncContext *s, int16_t block[64],
- int block_last_index, uint8_t scantable[64])
+ int block_last_index, const uint8_t scantable[64])
{
int last = 0;
int j;
@@ -106,7 +106,7 @@ static inline int get_block_rate(MpegEncContext *s, int16_t block[64],
* @param[in] zigzag_last_index index referring to the last non zero coefficient in zigzag order
*/
static inline void restore_ac_coeffs(MpegEncContext *s, int16_t block[6][64],
- const int dir[6], uint8_t *st[6],
+ const int dir[6], const uint8_t *st[6],
const int zigzag_last_index[6])
{
int i, n;
@@ -137,12 +137,12 @@ static inline void restore_ac_coeffs(MpegEncContext *s, int16_t block[6][64],
* @param[out] zigzag_last_index index referring to the last non zero coefficient in zigzag order
*/
static inline int decide_ac_pred(MpegEncContext *s, int16_t block[6][64],
- const int dir[6], uint8_t *st[6],
+ const int dir[6], const uint8_t *st[6],
int zigzag_last_index[6])
{
int score = 0;
int i, n;
- int8_t *const qscale_table = s->cur_pic.qscale_table;
+ const int8_t *const qscale_table = s->cur_pic.qscale_table;
memcpy(zigzag_last_index, s->block_last_index, sizeof(int) * 6);
@@ -288,14 +288,14 @@ static inline int mpeg4_get_dc_length(int level, int n)
* Encode an 8x8 block.
* @param n block index (0-3 are luma, 4-5 are chroma)
*/
-static inline void mpeg4_encode_block(MpegEncContext *s,
- int16_t *block, int n, int intra_dc,
- uint8_t *scan_table, PutBitContext *dc_pb,
+static inline void mpeg4_encode_block(const MpegEncContext *s,
+ const int16_t *block, int n, int intra_dc,
+ const uint8_t *scan_table, PutBitContext *dc_pb,
PutBitContext *ac_pb)
{
int i, last_non_zero;
- uint32_t *bits_tab;
- uint8_t *len_tab;
+ const uint32_t *bits_tab;
+ const uint8_t *len_tab;
const int last_index = s->block_last_index[n];
if (s->mb_intra) { // Note gcc (3.2.1 at least) will optimize this away
@@ -350,11 +350,11 @@ static inline void mpeg4_encode_block(MpegEncContext *s,
}
static int mpeg4_get_block_length(MpegEncContext *s,
- int16_t *block, int n,
- int intra_dc, uint8_t *scan_table)
+ const int16_t *block, int n,
+ int intra_dc, const uint8_t *scan_table)
{
int i, last_non_zero;
- uint8_t *len_tab;
+ const uint8_t *len_tab;
const int last_index = s->block_last_index[n];
int len = 0;
@@ -403,8 +403,10 @@ static int mpeg4_get_block_length(MpegEncContext *s,
return len;
}
-static inline void mpeg4_encode_blocks(MpegEncContext *s, int16_t block[6][64],
- int intra_dc[6], uint8_t **scan_table,
+static inline void mpeg4_encode_blocks(MpegEncContext *s,
+ const int16_t block[6][64],
+ const int intra_dc[6],
+ const uint8_t * const *scan_table,
PutBitContext *dc_pb,
PutBitContext *ac_pb)
{
@@ -796,7 +798,7 @@ void ff_mpeg4_encode_mb(MpegEncContext *s, int16_t block[6][64],
int dc_diff[6]; // dc values with the dc prediction subtracted
int dir[6]; // prediction direction
int zigzag_last_index[6];
- uint8_t *scan_table[6];
+ const uint8_t *scan_table[6];
int i;
for (i = 0; i < 6; i++)
diff --git a/libavcodec/mpeg_er.c b/libavcodec/mpeg_er.c
index 8d8b2aea92..360f3ce3e0 100644
--- a/libavcodec/mpeg_er.c
+++ b/libavcodec/mpeg_er.c
@@ -22,7 +22,7 @@
#include "mpegvideodec.h"
#include "mpeg_er.h"
-static void set_erpic(ERPicture *dst, Picture *src)
+static void set_erpic(ERPicture *dst, const Picture *src)
{
int i;
diff --git a/libavcodec/mpegvideo.h b/libavcodec/mpegvideo.h
index 62550027a7..3150f337c0 100644
--- a/libavcodec/mpegvideo.h
+++ b/libavcodec/mpegvideo.h
@@ -595,8 +595,8 @@ void ff_mpv_motion(MpegEncContext *s,
uint8_t *dest_y, uint8_t *dest_cb,
uint8_t *dest_cr, int dir,
uint8_t *const *ref_picture,
- op_pixels_func (*pix_op)[4],
- qpel_mc_func (*qpix_op)[16]);
+ const op_pixels_func (*pix_op)[4],
+ const qpel_mc_func (*qpix_op)[16]);
static inline void ff_update_block_index(MpegEncContext *s, int bits_per_raw_sample,
int lowres, int chroma_x_shift)
diff --git a/libavcodec/mpegvideo_enc.c b/libavcodec/mpegvideo_enc.c
index f84a05d674..21626b58a0 100644
--- a/libavcodec/mpegvideo_enc.c
+++ b/libavcodec/mpegvideo_enc.c
@@ -4219,8 +4219,8 @@ static int dct_quantize_refine(MpegEncContext *s, //FIXME breaks denoise?
int prev_run=0;
int prev_level=0;
int qmul, qadd, start_i, last_non_zero, i, dc;
- uint8_t * length;
- uint8_t * last_length;
+ const uint8_t *length;
+ const uint8_t *last_length;
int lambda;
int rle_index, run, q = 1, sum; //q is only used when s->mb_intra is true
@@ -4533,7 +4533,7 @@ static int dct_quantize_refine(MpegEncContext *s, //FIXME breaks denoise?
* permutation up, the block is not (inverse) permutated
* to scantable order!
*/
-void ff_block_permute(int16_t *block, uint8_t *permutation,
+void ff_block_permute(int16_t *block, const uint8_t *permutation,
const uint8_t *scantable, int last)
{
int i;
diff --git a/libavcodec/mpegvideo_motion.c b/libavcodec/mpegvideo_motion.c
index 9c1872aa1b..964caa5afb 100644
--- a/libavcodec/mpegvideo_motion.c
+++ b/libavcodec/mpegvideo_motion.c
@@ -38,7 +38,7 @@
static inline int hpel_motion(MpegEncContext *s,
uint8_t *dest, uint8_t *src,
int src_x, int src_y,
- op_pixels_func *pix_op,
+ const op_pixels_func *pix_op,
int motion_x, int motion_y)
{
int dxy = 0;
@@ -79,7 +79,7 @@ void mpeg_motion_internal(MpegEncContext *s,
int bottom_field,
int field_select,
uint8_t *const *ref_picture,
- op_pixels_func (*pix_op)[4],
+ const op_pixels_func (*pix_op)[4],
int motion_x,
int motion_y,
int h,
@@ -219,7 +219,7 @@ void mpeg_motion_internal(MpegEncContext *s,
static void mpeg_motion(MpegEncContext *s,
uint8_t *dest_y, uint8_t *dest_cb, uint8_t *dest_cr,
int field_select, uint8_t *const *ref_picture,
- op_pixels_func (*pix_op)[4],
+ const op_pixels_func (*pix_op)[4],
int motion_x, int motion_y, int h, int is_16x8, int mb_y)
{
#if !CONFIG_SMALL
@@ -238,7 +238,7 @@ static void mpeg_motion_field(MpegEncContext *s, uint8_t *dest_y,
uint8_t *dest_cb, uint8_t *dest_cr,
int bottom_field, int field_select,
uint8_t *const *ref_picture,
- op_pixels_func (*pix_op)[4],
+ const op_pixels_func (*pix_op)[4],
int motion_x, int motion_y, int mb_y)
{
#if !CONFIG_SMALL
@@ -254,7 +254,7 @@ static void mpeg_motion_field(MpegEncContext *s, uint8_t *dest_y,
}
// FIXME: SIMDify, avg variant, 16x16 version
-static inline void put_obmc(uint8_t *dst, uint8_t *src[5], int stride)
+static inline void put_obmc(uint8_t *dst, uint8_t *const src[5], int stride)
{
int x;
uint8_t *const top = src[1];
@@ -310,7 +310,7 @@ static inline void put_obmc(uint8_t *dst, uint8_t *src[5], int stride)
static inline void obmc_motion(MpegEncContext *s,
uint8_t *dest, uint8_t *src,
int src_x, int src_y,
- op_pixels_func *pix_op,
+ const op_pixels_func *pix_op,
int16_t mv[5][2] /* mid top left right bottom */)
#define MID 0
{
@@ -339,8 +339,8 @@ static inline void qpel_motion(MpegEncContext *s,
uint8_t *dest_cr,
int field_based, int bottom_field,
int field_select, uint8_t *const *ref_picture,
- op_pixels_func (*pix_op)[4],
- qpel_mc_func (*qpix_op)[16],
+ const op_pixels_func (*pix_op)[4],
+ const qpel_mc_func (*qpix_op)[16],
int motion_x, int motion_y, int h)
{
const uint8_t *ptr_y, *ptr_cb, *ptr_cr;
@@ -443,7 +443,7 @@ static inline void qpel_motion(MpegEncContext *s,
static void chroma_4mv_motion(MpegEncContext *s,
uint8_t *dest_cb, uint8_t *dest_cr,
uint8_t *const *ref_picture,
- op_pixels_func *pix_op,
+ const op_pixels_func *pix_op,
int mx, int my)
{
const uint8_t *ptr;
@@ -511,7 +511,7 @@ static inline void apply_obmc(MpegEncContext *s,
uint8_t *dest_cb,
uint8_t *dest_cr,
uint8_t *const *ref_picture,
- op_pixels_func (*pix_op)[4])
+ const op_pixels_func (*pix_op)[4])
{
LOCAL_ALIGNED_8(int16_t, mv_cache, [4], [4][2]);
const Picture *cur_frame = &s->cur_pic;
@@ -599,8 +599,8 @@ static inline void apply_8x8(MpegEncContext *s,
uint8_t *dest_cr,
int dir,
uint8_t *const *ref_picture,
- qpel_mc_func (*qpix_op)[16],
- op_pixels_func (*pix_op)[4])
+ const qpel_mc_func (*qpix_op)[16],
+ const op_pixels_func (*pix_op)[4])
{
int dxy, mx, my, src_x, src_y;
int i;
@@ -684,8 +684,8 @@ static av_always_inline void mpv_motion_internal(MpegEncContext *s,
uint8_t *dest_cr,
int dir,
uint8_t *const *ref_picture,
- op_pixels_func (*pix_op)[4],
- qpel_mc_func (*qpix_op)[16],
+ const op_pixels_func (*pix_op)[4],
+ const qpel_mc_func (*qpix_op)[16],
int is_mpeg12)
{
int i;
@@ -820,8 +820,8 @@ void ff_mpv_motion(MpegEncContext *s,
uint8_t *dest_y, uint8_t *dest_cb,
uint8_t *dest_cr, int dir,
uint8_t *const *ref_picture,
- op_pixels_func (*pix_op)[4],
- qpel_mc_func (*qpix_op)[16])
+ const op_pixels_func (*pix_op)[4],
+ const qpel_mc_func (*qpix_op)[16])
{
av_assert2(s->out_format == FMT_MPEG1 ||
s->out_format == FMT_H263 ||
diff --git a/libavcodec/mpegvideoenc.h b/libavcodec/mpegvideoenc.h
index c20ea500eb..f7e681eaa6 100644
--- a/libavcodec/mpegvideoenc.h
+++ b/libavcodec/mpegvideoenc.h
@@ -152,7 +152,7 @@ int ff_dct_quantize_c(MpegEncContext *s, int16_t *block, int n, int qscale, int
void ff_convert_matrix(MpegEncContext *s, int (*qmat)[64], uint16_t (*qmat16)[2][64],
const uint16_t *quant_matrix, int bias, int qmin, int qmax, int intra);
-void ff_block_permute(int16_t *block, uint8_t *permutation,
+void ff_block_permute(int16_t *block, const uint8_t *permutation,
const uint8_t *scantable, int last);
static inline int get_bits_diff(MpegEncContext *s)
diff --git a/libavcodec/mpv_reconstruct_mb_template.c b/libavcodec/mpv_reconstruct_mb_template.c
index 70dab76f73..2da2218042 100644
--- a/libavcodec/mpv_reconstruct_mb_template.c
+++ b/libavcodec/mpv_reconstruct_mb_template.c
@@ -144,8 +144,8 @@ void mpv_reconstruct_mb_internal(MpegEncContext *s, int16_t block[12][64],
MPV_motion_lowres(s, dest_y, dest_cb, dest_cr, 1, s->next_pic.data, op_pix);
}
} else {
- op_pixels_func (*op_pix)[4];
- qpel_mc_func (*op_qpix)[16];
+ const op_pixels_func (*op_pix)[4];
+ const qpel_mc_func (*op_qpix)[16];
if ((is_mpeg12 == DEFINITELY_MPEG12 || !s->no_rounding) || s->pict_type == AV_PICTURE_TYPE_B) {
op_pix = s->hdsp.put_pixels_tab;
diff --git a/libavcodec/msmpeg4dec.c b/libavcodec/msmpeg4dec.c
index c354f46c50..a7b3fc4603 100644
--- a/libavcodec/msmpeg4dec.c
+++ b/libavcodec/msmpeg4dec.c
@@ -627,8 +627,8 @@ int ff_msmpeg4_decode_block(MpegEncContext * s, int16_t * block,
{
int level, i, last, run, run_diff;
int av_uninit(dc_pred_dir);
- RLTable *rl;
- RL_VLC_ELEM *rl_vlc;
+ const RLTable *rl;
+ const RL_VLC_ELEM *rl_vlc;
int qmul, qadd;
if (s->mb_intra) {
@@ -811,7 +811,7 @@ int ff_msmpeg4_decode_block(MpegEncContext * s, int16_t * block,
void ff_msmpeg4_decode_motion(MpegEncContext *s, int *mx_ptr, int *my_ptr)
{
- MVTable *mv;
+ const MVTable *mv;
int code, mx, my;
mv = &ff_mv_tables[s->mv_table_index];
diff --git a/libavcodec/ratecontrol.c b/libavcodec/ratecontrol.c
index e4d18ff669..1c9af6b53c 100644
--- a/libavcodec/ratecontrol.c
+++ b/libavcodec/ratecontrol.c
@@ -70,7 +70,7 @@ FF_DISABLE_DEPRECATION_WARNINGS
FF_ENABLE_DEPRECATION_WARNINGS
}
-static inline double qp2bits(RateControlEntry *rce, double qp)
+static inline double qp2bits(const RateControlEntry *rce, double qp)
{
if (qp <= 0.0) {
av_log(NULL, AV_LOG_ERROR, "qp<=0.0\n");
@@ -83,7 +83,7 @@ static double qp2bits_cb(void *rce, double qp)
return qp2bits(rce, qp);
}
-static inline double bits2qp(RateControlEntry *rce, double bits)
+static inline double bits2qp(const RateControlEntry *rce, double bits)
{
if (bits < 0.9) {
av_log(NULL, AV_LOG_ERROR, "bits<0.9\n");
@@ -96,7 +96,7 @@ static double bits2qp_cb(void *rce, double qp)
return bits2qp(rce, qp);
}
-static double get_diff_limited_q(MpegEncContext *s, RateControlEntry *rce, double q)
+static double get_diff_limited_q(MpegEncContext *s, const RateControlEntry *rce, double q)
{
RateControlContext *rcc = &s->rc_context;
AVCodecContext *a = s->avctx;
@@ -163,7 +163,7 @@ static void get_qminmax(int *qmin_ret, int *qmax_ret, MpegEncContext *s, int pic
*qmax_ret = qmax;
}
-static double modify_qscale(MpegEncContext *s, RateControlEntry *rce,
+static double modify_qscale(MpegEncContext *s, const RateControlEntry *rce,
double q, int frame_num)
{
RateControlContext *rcc = &s->rc_context;
@@ -385,7 +385,7 @@ static int init_pass2(MpegEncContext *s)
/* find qscale */
for (i = 0; i < rcc->num_entries; i++) {
- RateControlEntry *rce = &rcc->entry[i];
+ const RateControlEntry *rce = &rcc->entry[i];
qscale[i] = get_qscale(s, &rcc->entry[i], rate_factor, i);
rcc->last_qscale_for[rce->pict_type] = qscale[i];
@@ -394,20 +394,20 @@ static int init_pass2(MpegEncContext *s)
/* fixed I/B QP relative to P mode */
for (i = FFMAX(0, rcc->num_entries - 300); i < rcc->num_entries; i++) {
- RateControlEntry *rce = &rcc->entry[i];
+ const RateControlEntry *rce = &rcc->entry[i];
qscale[i] = get_diff_limited_q(s, rce, qscale[i]);
}
for (i = rcc->num_entries - 1; i >= 0; i--) {
- RateControlEntry *rce = &rcc->entry[i];
+ const RateControlEntry *rce = &rcc->entry[i];
qscale[i] = get_diff_limited_q(s, rce, qscale[i]);
}
/* smooth curve */
for (i = 0; i < rcc->num_entries; i++) {
- RateControlEntry *rce = &rcc->entry[i];
+ const RateControlEntry *rce = &rcc->entry[i];
const int pict_type = rce->new_pict_type;
int j;
double q = 0.0, sum = 0.0;
@@ -877,8 +877,8 @@ static void adaptive_quantization(MpegEncContext *s, double q)
void ff_get_2pass_fcode(MpegEncContext *s)
{
- RateControlContext *rcc = &s->rc_context;
- RateControlEntry *rce = &rcc->entry[s->picture_number];
+ const RateControlContext *rcc = &s->rc_context;
+ const RateControlEntry *rce = &rcc->entry[s->picture_number];
s->f_code = rce->f_code;
s->b_code = rce->b_code;
@@ -929,7 +929,7 @@ float ff_rate_estimate_qscale(MpegEncContext *s, int dry_run)
rce = &rcc->entry[picture_number];
wanted_bits = rce->expected_bits;
} else {
- Picture *dts_pic;
+ const Picture *dts_pic;
rce = &local_rce;
/* FIXME add a dts field to AVFrame and ensure it is set and use it
diff --git a/libavcodec/rv34.c b/libavcodec/rv34.c
index 941d983501..df1d570e73 100644
--- a/libavcodec/rv34.c
+++ b/libavcodec/rv34.c
@@ -679,7 +679,8 @@ static inline void rv34_mc(RV34DecContext *r, const int block_type,
h264_chroma_mc_func (*chroma_mc))
{
MpegEncContext *s = &r->s;
- uint8_t *Y, *U, *V, *srcY, *srcU, *srcV;
+ uint8_t *Y, *U, *V;
+ const uint8_t *srcY, *srcU, *srcV;
int dxy, mx, my, umx, umy, lx, ly, uvmx, uvmy, src_x, src_y, uvsrc_x, uvsrc_y;
int mv_pos = s->mb_x * 2 + s->mb_y * 2 * s->b8_stride + mv_off;
int is16x16 = 1;
diff --git a/libavcodec/vc1_loopfilter.c b/libavcodec/vc1_loopfilter.c
index 8afb4db190..67abb4b01c 100644
--- a/libavcodec/vc1_loopfilter.c
+++ b/libavcodec/vc1_loopfilter.c
@@ -413,9 +413,10 @@ static av_always_inline void vc1_p_h_loop_filter(VC1Context *v, uint8_t *dest, u
}
}
-static av_always_inline void vc1_p_v_loop_filter(VC1Context *v, uint8_t *dest, uint32_t *cbp,
- uint8_t *is_intra, int16_t (*mv)[2], uint8_t *mv_f,
- int *ttblk, uint32_t flags, int block_num)
+static av_always_inline
+void vc1_p_v_loop_filter(VC1Context *v, uint8_t *dest, const uint32_t *cbp,
+ const uint8_t *is_intra, int16_t (*mv)[2], const uint8_t *mv_f,
+ const int *ttblk, uint32_t flags, int block_num)
{
MpegEncContext *s = &v->s;
int pq = v->pq;
@@ -799,7 +800,7 @@ void ff_vc1_p_loop_filter(VC1Context *v)
}
}
-static av_always_inline void vc1_p_h_intfr_loop_filter(VC1Context *v, uint8_t *dest, int *ttblk,
+static av_always_inline void vc1_p_h_intfr_loop_filter(VC1Context *v, uint8_t *dest, const int *ttblk,
uint32_t flags, uint8_t fieldtx, int block_num)
{
MpegEncContext *s = &v->s;
@@ -849,8 +850,9 @@ static av_always_inline void vc1_p_h_intfr_loop_filter(VC1Context *v, uint8_t *d
}
}
-static av_always_inline void vc1_p_v_intfr_loop_filter(VC1Context *v, uint8_t *dest, int *ttblk,
- uint32_t flags, uint8_t fieldtx, int block_num)
+static av_always_inline
+void vc1_p_v_intfr_loop_filter(VC1Context *v, uint8_t *dest, const int *ttblk,
+ uint32_t flags, uint8_t fieldtx, int block_num)
{
MpegEncContext *s = &v->s;
int pq = v->pq;
@@ -1109,8 +1111,9 @@ void ff_vc1_p_intfr_loop_filter(VC1Context *v)
}
}
-static av_always_inline void vc1_b_h_intfi_loop_filter(VC1Context *v, uint8_t *dest, uint32_t *cbp,
- int *ttblk, uint32_t flags, int block_num)
+static av_always_inline
+void vc1_b_h_intfi_loop_filter(VC1Context *v, uint8_t *dest, const uint32_t *cbp,
+ const int *ttblk, uint32_t flags, int block_num)
{
MpegEncContext *s = &v->s;
int pq = v->pq;
@@ -1141,8 +1144,9 @@ static av_always_inline void vc1_b_h_intfi_loop_filter(VC1Context *v, uint8_t *d
}
}
-static av_always_inline void vc1_b_v_intfi_loop_filter(VC1Context *v, uint8_t *dest, uint32_t *cbp,
- int *ttblk, uint32_t flags, int block_num)
+static av_always_inline
+void vc1_b_v_intfi_loop_filter(VC1Context *v, uint8_t *dest, const uint32_t *cbp,
+ const int *ttblk, uint32_t flags, int block_num)
{
MpegEncContext *s = &v->s;
int pq = v->pq;
@@ -1174,7 +1178,7 @@ void ff_vc1_b_intfi_loop_filter(VC1Context *v)
MpegEncContext *s = &v->s;
int block_count = CONFIG_GRAY && (s->avctx->flags & AV_CODEC_FLAG_GRAY) ? 4 : 6;
uint8_t *dest;
- uint32_t *cbp;
+ const uint32_t *cbp;
int *ttblk;
uint32_t flags = 0;
int i;
diff --git a/libavcodec/vc1_mc.c b/libavcodec/vc1_mc.c
index b60a48b38f..90ff1eee58 100644
--- a/libavcodec/vc1_mc.c
+++ b/libavcodec/vc1_mc.c
@@ -58,7 +58,7 @@ static av_always_inline void vc1_scale_chroma(uint8_t *srcU, uint8_t *srcV,
}
static av_always_inline void vc1_lut_scale_luma(uint8_t *srcY,
- uint8_t *lut1, uint8_t *lut2,
+ const uint8_t *lut1, const uint8_t *lut2,
int k, int linesize)
{
int i, j;
@@ -78,7 +78,7 @@ static av_always_inline void vc1_lut_scale_luma(uint8_t *srcY,
}
static av_always_inline void vc1_lut_scale_chroma(uint8_t *srcU, uint8_t *srcV,
- uint8_t *lut1, uint8_t *lut2,
+ const uint8_t *lut1, const uint8_t *lut2,
int k, int uvlinesize)
{
int i, j;
@@ -177,7 +177,7 @@ void ff_vc1_mc_1mv(VC1Context *v, int dir)
int dxy, mx, my, uvmx, uvmy, src_x, src_y, uvsrc_x, uvsrc_y;
int v_edge_pos = s->v_edge_pos >> v->field_mode;
int i;
- uint8_t (*luty)[256], (*lutuv)[256];
+ const uint8_t (*luty)[256], (*lutuv)[256];
int use_ic;
int interlace;
int linesize, uvlinesize;
@@ -457,7 +457,7 @@ void ff_vc1_mc_4mv_luma(VC1Context *v, int n, int dir, int avg)
int off;
int fieldmv = (v->fcm == ILACE_FRAME) ? v->blk_mv_type[s->block_index[n]] : 0;
int v_edge_pos = s->v_edge_pos >> v->field_mode;
- uint8_t (*luty)[256];
+ const uint8_t (*luty)[256];
int use_ic;
int interlace;
int linesize;
@@ -640,7 +640,7 @@ void ff_vc1_mc_4mv_chroma(VC1Context *v, int dir)
int16_t tx, ty;
int chroma_ref_type;
int v_edge_pos = s->v_edge_pos >> v->field_mode;
- uint8_t (*lutuv)[256];
+ const uint8_t (*lutuv)[256];
int use_ic;
int interlace;
int uvlinesize;
@@ -851,7 +851,7 @@ void ff_vc1_mc_4mv_chroma4(VC1Context *v, int dir, int dir2, int avg)
int use_ic;
int interlace;
int uvlinesize;
- uint8_t (*lutuv)[256];
+ const uint8_t (*lutuv)[256];
if (CONFIG_GRAY && s->avctx->flags & AV_CODEC_FLAG_GRAY)
return;
@@ -1191,8 +1191,8 @@ void ff_vc1_interp_mc(VC1Context *v)
}
if (use_ic) {
- uint8_t (*luty )[256] = v->next_luty;
- uint8_t (*lutuv)[256] = v->next_lutuv;
+ const uint8_t (*luty )[256] = v->next_luty;
+ const uint8_t (*lutuv)[256] = v->next_lutuv;
vc1_lut_scale_luma(srcY,
luty[v->field_mode ? v->ref_field_type[1] : ((0+src_y - s->mspel) & 1)],
luty[v->field_mode ? v->ref_field_type[1] : ((1+src_y - s->mspel) & 1)],
diff --git a/libavcodec/vc1_pred.c b/libavcodec/vc1_pred.c
index 51ad668f23..f5e80fe0ef 100644
--- a/libavcodec/vc1_pred.c
+++ b/libavcodec/vc1_pred.c
@@ -33,7 +33,7 @@
#include "vc1_pred.h"
#include "vc1data.h"
-static av_always_inline int scaleforsame_x(VC1Context *v, int n /* MV */, int dir)
+static av_always_inline int scaleforsame_x(const VC1Context *v, int n /* MV */, int dir)
{
int scaledvalue, refdist;
int scalesame1, scalesame2;
@@ -66,7 +66,7 @@ static av_always_inline int scaleforsame_x(VC1Context *v, int n /* MV */, int di
return av_clip(scaledvalue, -v->range_x, v->range_x - 1);
}
-static av_always_inline int scaleforsame_y(VC1Context *v, int i, int n /* MV */, int dir)
+static av_always_inline int scaleforsame_y(const VC1Context *v, int i, int n /* MV */, int dir)
{
int scaledvalue, refdist;
int scalesame1, scalesame2;
@@ -103,7 +103,7 @@ static av_always_inline int scaleforsame_y(VC1Context *v, int i, int n /* MV */,
return av_clip(scaledvalue, -v->range_y / 2, v->range_y / 2 - 1);
}
-static av_always_inline int scaleforopp_x(VC1Context *v, int n /* MV */)
+static av_always_inline int scaleforopp_x(const VC1Context *v, int n /* MV */)
{
int scalezone1_x, zone1offset_x;
int scaleopp1, scaleopp2, brfd;
@@ -130,7 +130,7 @@ static av_always_inline int scaleforopp_x(VC1Context *v, int n /* MV */)
return av_clip(scaledvalue, -v->range_x, v->range_x - 1);
}
-static av_always_inline int scaleforopp_y(VC1Context *v, int n /* MV */, int dir)
+static av_always_inline int scaleforopp_y(const VC1Context *v, int n /* MV */, int dir)
{
int scalezone1_y, zone1offset_y;
int scaleopp1, scaleopp2, brfd;
@@ -161,7 +161,7 @@ static av_always_inline int scaleforopp_y(VC1Context *v, int n /* MV */, int dir
}
}
-static av_always_inline int scaleforsame(VC1Context *v, int i, int n /* MV */,
+static av_always_inline int scaleforsame(const VC1Context *v, int i, int n /* MV */,
int dim, int dir)
{
int brfd, scalesame;
@@ -182,7 +182,7 @@ static av_always_inline int scaleforsame(VC1Context *v, int i, int n /* MV */,
return n;
}
-static av_always_inline int scaleforopp(VC1Context *v, int n /* MV */,
+static av_always_inline int scaleforopp(const VC1Context *v, int n /* MV */,
int dim, int dir)
{
int refdist, scaleopp;
@@ -215,7 +215,6 @@ void ff_vc1_pred_mv(VC1Context *v, int n, int dmv_x, int dmv_y,
{
MpegEncContext *s = &v->s;
int xy, wrap, off = 0;
- int16_t *A, *B, *C;
int px, py;
int sum;
int mixedmv_pic, num_samefield = 0, num_oppfield = 0;
@@ -301,7 +300,7 @@ void ff_vc1_pred_mv(VC1Context *v, int n, int dmv_x, int dmv_y,
}
if (a_valid) {
- A = s->cur_pic.motion_val[dir][xy - wrap + v->blocks_off];
+ const int16_t *A = s->cur_pic.motion_val[dir][xy - wrap + v->blocks_off];
a_f = v->mv_f[dir][xy - wrap + v->blocks_off];
num_oppfield += a_f;
num_samefield += 1 - a_f;
@@ -312,7 +311,7 @@ void ff_vc1_pred_mv(VC1Context *v, int n, int dmv_x, int dmv_y,
a_f = 0;
}
if (b_valid) {
- B = s->cur_pic.motion_val[dir][xy - wrap + off + v->blocks_off];
+ const int16_t *B = s->cur_pic.motion_val[dir][xy - wrap + off + v->blocks_off];
b_f = v->mv_f[dir][xy - wrap + off + v->blocks_off];
num_oppfield += b_f;
num_samefield += 1 - b_f;
@@ -323,7 +322,7 @@ void ff_vc1_pred_mv(VC1Context *v, int n, int dmv_x, int dmv_y,
b_f = 0;
}
if (c_valid) {
- C = s->cur_pic.motion_val[dir][xy - 1 + v->blocks_off];
+ const int16_t *C = s->cur_pic.motion_val[dir][xy - 1 + v->blocks_off];
c_f = v->mv_f[dir][xy - 1 + v->blocks_off];
num_oppfield += c_f;
num_samefield += 1 - c_f;
@@ -692,8 +691,7 @@ void ff_vc1_pred_b_mv(VC1Context *v, int dmv_x[2], int dmv_y[2],
int direct, int mvtype)
{
MpegEncContext *s = &v->s;
- int xy, wrap, off = 0;
- int16_t *A, *B, *C;
+ int xy, wrap;
int px, py;
int sum;
int r_x, r_y;
@@ -743,10 +741,10 @@ void ff_vc1_pred_b_mv(VC1Context *v, int dmv_x[2], int dmv_y[2],
}
if ((mvtype == BMV_TYPE_FORWARD) || (mvtype == BMV_TYPE_INTERPOLATED)) {
- C = s->cur_pic.motion_val[0][xy - 2];
- A = s->cur_pic.motion_val[0][xy - wrap * 2];
- off = (s->mb_x == (s->mb_width - 1)) ? -2 : 2;
- B = s->cur_pic.motion_val[0][xy - wrap * 2 + off];
+ int16_t *C = s->cur_pic.motion_val[0][xy - 2];
+ const int16_t *A = s->cur_pic.motion_val[0][xy - wrap * 2];
+ int off = (s->mb_x == (s->mb_width - 1)) ? -2 : 2;
+ const int16_t *B = s->cur_pic.motion_val[0][xy - wrap * 2 + off];
if (!s->mb_x) C[0] = C[1] = 0;
if (!s->first_slice_line) { // predictor A is not out of bounds
@@ -812,10 +810,10 @@ void ff_vc1_pred_b_mv(VC1Context *v, int dmv_x[2], int dmv_y[2],
s->mv[0][0][1] = ((py + dmv_y[0] + r_y) & ((r_y << 1) - 1)) - r_y;
}
if ((mvtype == BMV_TYPE_BACKWARD) || (mvtype == BMV_TYPE_INTERPOLATED)) {
- C = s->cur_pic.motion_val[1][xy - 2];
- A = s->cur_pic.motion_val[1][xy - wrap * 2];
- off = (s->mb_x == (s->mb_width - 1)) ? -2 : 2;
- B = s->cur_pic.motion_val[1][xy - wrap * 2 + off];
+ int16_t *C = s->cur_pic.motion_val[1][xy - 2];
+ const int16_t *A = s->cur_pic.motion_val[1][xy - wrap * 2];
+ int off = (s->mb_x == (s->mb_width - 1)) ? -2 : 2;
+ const int16_t *B = s->cur_pic.motion_val[1][xy - wrap * 2 + off];
if (!s->mb_x)
C[0] = C[1] = 0;
diff --git a/libavcodec/vc1dec.c b/libavcodec/vc1dec.c
index d8d58bb7eb..b89f695b56 100644
--- a/libavcodec/vc1dec.c
+++ b/libavcodec/vc1dec.c
@@ -211,7 +211,7 @@ static void vc1_draw_sprites(VC1Context *v, SpriteData* sd)
{
int i, plane, row, sprite;
int sr_cache[2][2] = { { -1, -1 }, { -1, -1 } };
- uint8_t* src_h[2][2];
+ const uint8_t *src_h[2][2];
int xoff[2], xadv[2], yoff[2], yadv[2], alpha;
int ysub[2];
MpegEncContext *s = &v->s;
@@ -235,7 +235,7 @@ static void vc1_draw_sprites(VC1Context *v, SpriteData* sd)
v->sprite_output_frame->linesize[plane] * row;
for (sprite = 0; sprite <= v->two_sprites; sprite++) {
- uint8_t *iplane = s->cur_pic.data[plane];
+ const uint8_t *iplane = s->cur_pic.data[plane];
int iline = s->cur_pic.linesize[plane];
int ycoord = yoff[sprite] + yadv[sprite] * row;
int yline = ycoord >> 16;
@@ -667,7 +667,7 @@ static av_cold int vc1_decode_init(AVCodecContext *avctx)
}
} else { // VC1/WVC1/WVP2
const uint8_t *start = avctx->extradata;
- uint8_t *end = avctx->extradata + avctx->extradata_size;
+ const uint8_t *end = avctx->extradata + avctx->extradata_size;
const uint8_t *next;
int size, buf2_size;
uint8_t *buf2 = NULL;
diff --git a/libavcodec/wmv2.c b/libavcodec/wmv2.c
index e3d3288d33..c2bcb988c4 100644
--- a/libavcodec/wmv2.c
+++ b/libavcodec/wmv2.c
@@ -49,7 +49,8 @@ av_cold void ff_wmv2_common_init(MpegEncContext *s)
void ff_mspel_motion(MpegEncContext *s, uint8_t *dest_y,
uint8_t *dest_cb, uint8_t *dest_cr,
- uint8_t *const *ref_picture, op_pixels_func (*pix_op)[4],
+ uint8_t *const *ref_picture,
+ const op_pixels_func (*pix_op)[4],
int motion_x, int motion_y, int h)
{
WMV2Context *const w = s->private_ctx;
diff --git a/libavcodec/wmv2.h b/libavcodec/wmv2.h
index e49b81cdfb..6fc9704c3d 100644
--- a/libavcodec/wmv2.h
+++ b/libavcodec/wmv2.h
@@ -39,7 +39,8 @@ void ff_wmv2_common_init(MpegEncContext *s);
void ff_mspel_motion(MpegEncContext *s,
uint8_t *dest_y, uint8_t *dest_cb, uint8_t *dest_cr,
- uint8_t *const *ref_picture, op_pixels_func (*pix_op)[4],
+ uint8_t *const *ref_picture,
+ const op_pixels_func (*pix_op)[4],
int motion_x, int motion_y, int h);
--
2.40.1
_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
To unsubscribe, visit link above, or email
ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".
^ permalink raw reply [flat|nested] 75+ messages in thread
* [FFmpeg-devel] [PATCH v2 36/71] avcodec/vc1_pred: Remove unused function parameter
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (33 preceding siblings ...)
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 35/71] avcodec/mpegvideo: Add const where appropriate Andreas Rheinhardt
@ 2024-05-11 20:51 ` Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 37/71] avcodec/mpegpicture: Improve error messages and code Andreas Rheinhardt
` (35 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:51 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/vc1_pred.c | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/libavcodec/vc1_pred.c b/libavcodec/vc1_pred.c
index f5e80fe0ef..9141290d26 100644
--- a/libavcodec/vc1_pred.c
+++ b/libavcodec/vc1_pred.c
@@ -66,7 +66,7 @@ static av_always_inline int scaleforsame_x(const VC1Context *v, int n /* MV */,
return av_clip(scaledvalue, -v->range_x, v->range_x - 1);
}
-static av_always_inline int scaleforsame_y(const VC1Context *v, int i, int n /* MV */, int dir)
+static av_always_inline int scaleforsame_y(const VC1Context *v, int n /* MV */, int dir)
{
int scaledvalue, refdist;
int scalesame1, scalesame2;
@@ -161,7 +161,7 @@ static av_always_inline int scaleforopp_y(const VC1Context *v, int n /* MV */, i
}
}
-static av_always_inline int scaleforsame(const VC1Context *v, int i, int n /* MV */,
+static av_always_inline int scaleforsame(const VC1Context *v, int n /* MV */,
int dim, int dir)
{
int brfd, scalesame;
@@ -170,7 +170,7 @@ static av_always_inline int scaleforsame(const VC1Context *v, int i, int n /* MV
n >>= hpel;
if (v->s.pict_type != AV_PICTURE_TYPE_B || v->second_field || !dir) {
if (dim)
- n = scaleforsame_y(v, i, n, dir) * (1 << hpel);
+ n = scaleforsame_y(v, n, dir) * (1 << hpel);
else
n = scaleforsame_x(v, n, dir) * (1 << hpel);
return n;
@@ -365,16 +365,16 @@ void ff_vc1_pred_mv(VC1Context *v, int n, int dmv_x, int dmv_y,
v->mv_f[dir][xy + v->blocks_off] = 0;
v->ref_field_type[dir] = v->cur_field_type;
if (a_valid && a_f) {
- field_predA[0] = scaleforsame(v, n, field_predA[0], 0, dir);
- field_predA[1] = scaleforsame(v, n, field_predA[1], 1, dir);
+ field_predA[0] = scaleforsame(v, field_predA[0], 0, dir);
+ field_predA[1] = scaleforsame(v, field_predA[1], 1, dir);
}
if (b_valid && b_f) {
- field_predB[0] = scaleforsame(v, n, field_predB[0], 0, dir);
- field_predB[1] = scaleforsame(v, n, field_predB[1], 1, dir);
+ field_predB[0] = scaleforsame(v, field_predB[0], 0, dir);
+ field_predB[1] = scaleforsame(v, field_predB[1], 1, dir);
}
if (c_valid && c_f) {
- field_predC[0] = scaleforsame(v, n, field_predC[0], 0, dir);
- field_predC[1] = scaleforsame(v, n, field_predC[1], 1, dir);
+ field_predC[0] = scaleforsame(v, field_predC[0], 0, dir);
+ field_predC[1] = scaleforsame(v, field_predC[1], 1, dir);
}
}
--
2.40.1
* [FFmpeg-devel] [PATCH v2 37/71] avcodec/mpegpicture: Improve error messages and code
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (34 preceding siblings ...)
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 36/71] avcodec/vc1_pred: Remove unused function parameter Andreas Rheinhardt
@ 2024-05-11 20:51 ` Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 38/71] avcodec/mpegpicture: Split ff_alloc_picture() into check and alloc part Andreas Rheinhardt
` (34 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:51 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
Make it clear that this is not a failure of get_buffer/the user,
but a deficit of mpegvideo.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpegpicture.c | 11 +++++------
1 file changed, 5 insertions(+), 6 deletions(-)
diff --git a/libavcodec/mpegpicture.c b/libavcodec/mpegpicture.c
index 6da9545b50..f605338845 100644
--- a/libavcodec/mpegpicture.c
+++ b/libavcodec/mpegpicture.c
@@ -102,20 +102,19 @@ static int handle_pic_linesizes(AVCodecContext *avctx, Picture *pic,
if ((linesize && linesize != pic->f->linesize[0]) ||
(uvlinesize && uvlinesize != pic->f->linesize[1])) {
- av_log(avctx, AV_LOG_ERROR,
- "get_buffer() failed (stride changed: linesize=%d/%d uvlinesize=%d/%d)\n",
+ av_log(avctx, AV_LOG_ERROR, "Stride change unsupported: "
+ "linesize=%d/%d uvlinesize=%d/%d)\n",
linesize, pic->f->linesize[0],
uvlinesize, pic->f->linesize[1]);
ff_mpeg_unref_picture(pic);
- return -1;
+ return AVERROR_PATCHWELCOME;
}
if (av_pix_fmt_count_planes(pic->f->format) > 2 &&
pic->f->linesize[1] != pic->f->linesize[2]) {
- av_log(avctx, AV_LOG_ERROR,
- "get_buffer() failed (uv stride mismatch)\n");
+ av_log(avctx, AV_LOG_ERROR, "uv stride mismatch unsupported\n");
ff_mpeg_unref_picture(pic);
- return -1;
+ return AVERROR_PATCHWELCOME;
}
ret = ff_mpeg_framesize_alloc(avctx, me, sc,
--
2.40.1
* [FFmpeg-devel] [PATCH v2 38/71] avcodec/mpegpicture: Split ff_alloc_picture() into check and alloc part
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (35 preceding siblings ...)
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 37/71] avcodec/mpegpicture: Improve error messages and code Andreas Rheinhardt
@ 2024-05-11 20:51 ` Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 39/71] avcodec/mpegvideo_enc: Pass AVFrame*, not Picture* to alloc_picture() Andreas Rheinhardt
` (33 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:51 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
ff_alloc_picture() currently does two things: It checks the
consistency of the linesize (which should not be necessary, but is)
and it allocates certain buffers. (It does not actually allocate
the picture buffers, so its name is misleading.)
This commit splits it into two separate functions. The rationale
for this is that for the encoders, every picture needs its linesizes
checked, but not every picture needs these extra buffers.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpegpicture.c | 70 ++++++++++++++------------------------
libavcodec/mpegpicture.h | 15 ++++++--
libavcodec/mpegvideo_dec.c | 8 +++--
libavcodec/mpegvideo_enc.c | 57 ++++++++++++++-----------------
4 files changed, 69 insertions(+), 81 deletions(-)
diff --git a/libavcodec/mpegpicture.c b/libavcodec/mpegpicture.c
index f605338845..840aa23c38 100644
--- a/libavcodec/mpegpicture.c
+++ b/libavcodec/mpegpicture.c
@@ -91,40 +91,27 @@ int ff_mpeg_framesize_alloc(AVCodecContext *avctx, MotionEstContext *me,
return 0;
}
-/**
- * Check the pic's linesize and allocate linesize dependent scratch buffers
- */
-static int handle_pic_linesizes(AVCodecContext *avctx, Picture *pic,
- MotionEstContext *me, ScratchpadContext *sc,
- int linesize, int uvlinesize)
+int ff_mpv_pic_check_linesize(void *logctx, const AVFrame *f,
+ ptrdiff_t *linesizep, ptrdiff_t *uvlinesizep)
{
- int ret;
-
- if ((linesize && linesize != pic->f->linesize[0]) ||
- (uvlinesize && uvlinesize != pic->f->linesize[1])) {
- av_log(avctx, AV_LOG_ERROR, "Stride change unsupported: "
- "linesize=%d/%d uvlinesize=%d/%d)\n",
- linesize, pic->f->linesize[0],
- uvlinesize, pic->f->linesize[1]);
- ff_mpeg_unref_picture(pic);
+ ptrdiff_t linesize = *linesizep, uvlinesize = *uvlinesizep;
+
+ if ((linesize && linesize != f->linesize[0]) ||
+ (uvlinesize && uvlinesize != f->linesize[1])) {
+ av_log(logctx, AV_LOG_ERROR, "Stride change unsupported: "
+ "linesize=%"PTRDIFF_SPECIFIER"/%d uvlinesize=%"PTRDIFF_SPECIFIER"/%d)\n",
+ linesize, f->linesize[0],
+ uvlinesize, f->linesize[1]);
return AVERROR_PATCHWELCOME;
}
- if (av_pix_fmt_count_planes(pic->f->format) > 2 &&
- pic->f->linesize[1] != pic->f->linesize[2]) {
- av_log(avctx, AV_LOG_ERROR, "uv stride mismatch unsupported\n");
- ff_mpeg_unref_picture(pic);
+ if (av_pix_fmt_count_planes(f->format) > 2 &&
+ f->linesize[1] != f->linesize[2]) {
+ av_log(logctx, AV_LOG_ERROR, "uv stride mismatch unsupported\n");
return AVERROR_PATCHWELCOME;
}
-
- ret = ff_mpeg_framesize_alloc(avctx, me, sc,
- pic->f->linesize[0]);
- if (ret < 0) {
- av_log(avctx, AV_LOG_ERROR,
- "get_buffer() failed to allocate context scratch buffers.\n");
- ff_mpeg_unref_picture(pic);
- return ret;
- }
+ *linesizep = f->linesize[0];
+ *uvlinesizep = f->linesize[1];
return 0;
}
@@ -156,28 +143,22 @@ static int alloc_picture_tables(BufferPoolContext *pools, Picture *pic,
return 0;
}
-/**
- * Allocate a Picture.
- * The pixels are allocated/set by calling get_buffer() if shared = 0
- */
-int ff_alloc_picture(AVCodecContext *avctx, Picture *pic, MotionEstContext *me,
- ScratchpadContext *sc, BufferPoolContext *pools,
- int mb_height, ptrdiff_t *linesize, ptrdiff_t *uvlinesize)
+int ff_mpv_alloc_pic_accessories(AVCodecContext *avctx, Picture *pic,
+ MotionEstContext *me, ScratchpadContext *sc,
+ BufferPoolContext *pools, int mb_height)
{
int ret;
- if (handle_pic_linesizes(avctx, pic, me, sc,
- *linesize, *uvlinesize) < 0)
- return -1;
-
- *linesize = pic->f->linesize[0];
- *uvlinesize = pic->f->linesize[1];
-
for (int i = 0; i < MPV_MAX_PLANES; i++) {
pic->data[i] = pic->f->data[i];
pic->linesize[i] = pic->f->linesize[i];
}
+ ret = ff_mpeg_framesize_alloc(avctx, me, sc,
+ pic->f->linesize[0]);
+ if (ret < 0)
+ goto fail;
+
ret = alloc_picture_tables(pools, pic, mb_height);
if (ret < 0)
goto fail;
@@ -192,9 +173,8 @@ int ff_alloc_picture(AVCodecContext *avctx, Picture *pic, MotionEstContext *me,
return 0;
fail:
- av_log(avctx, AV_LOG_ERROR, "Error allocating a picture.\n");
- ff_mpeg_unref_picture(pic);
- return AVERROR(ENOMEM);
+ av_log(avctx, AV_LOG_ERROR, "Error allocating picture accessories.\n");
+ return ret;
}
/**
diff --git a/libavcodec/mpegpicture.h b/libavcodec/mpegpicture.h
index 814f71213e..6589b38262 100644
--- a/libavcodec/mpegpicture.h
+++ b/libavcodec/mpegpicture.h
@@ -96,9 +96,18 @@ typedef struct Picture {
/**
* Allocate a Picture's accessories, but not the AVFrame's buffer itself.
*/
-int ff_alloc_picture(AVCodecContext *avctx, Picture *pic, MotionEstContext *me,
- ScratchpadContext *sc, BufferPoolContext *pools,
- int mb_height, ptrdiff_t *linesize, ptrdiff_t *uvlinesize);
+int ff_mpv_alloc_pic_accessories(AVCodecContext *avctx, Picture *pic,
+ MotionEstContext *me, ScratchpadContext *sc,
+ BufferPoolContext *pools, int mb_height);
+
+/**
+ * Check that the linesizes of an AVFrame are consistent with the requirements
+ * of mpegvideo.
+ * FIXME: There should be no need for this function. mpegvideo should be made
+ * to work with changing linesizes.
+ */
+int ff_mpv_pic_check_linesize(void *logctx, const struct AVFrame *f,
+ ptrdiff_t *linesizep, ptrdiff_t *uvlinesizep);
int ff_mpeg_framesize_alloc(AVCodecContext *avctx, MotionEstContext *me,
ScratchpadContext *sc, int linesize);
diff --git a/libavcodec/mpegvideo_dec.c b/libavcodec/mpegvideo_dec.c
index 570a422b6f..663d97e60f 100644
--- a/libavcodec/mpegvideo_dec.c
+++ b/libavcodec/mpegvideo_dec.c
@@ -259,6 +259,10 @@ static int alloc_picture(MpegEncContext *s, Picture **picp, int reference)
if (ret < 0)
goto fail;
+ ret = ff_mpv_pic_check_linesize(avctx, pic->f, &s->linesize, &s->uvlinesize);
+ if (ret < 0)
+ goto fail;
+
ret = ff_hwaccel_frame_priv_alloc(avctx, &pic->hwaccel_picture_private);
if (ret < 0)
goto fail;
@@ -267,8 +271,8 @@ static int alloc_picture(MpegEncContext *s, Picture **picp, int reference)
av_assert1(s->mb_height == s->buffer_pools.alloc_mb_height ||
FFALIGN(s->mb_height, 2) == s->buffer_pools.alloc_mb_height);
av_assert1(s->mb_stride == s->buffer_pools.alloc_mb_stride);
- ret = ff_alloc_picture(s->avctx, pic, &s->me, &s->sc, &s->buffer_pools,
- s->mb_height, &s->linesize, &s->uvlinesize);
+ ret = ff_mpv_alloc_pic_accessories(s->avctx, pic, &s->me, &s->sc,
+ &s->buffer_pools, s->mb_height);
if (ret < 0)
goto fail;
*picp = pic;
diff --git a/libavcodec/mpegvideo_enc.c b/libavcodec/mpegvideo_enc.c
index 21626b58a0..d4b280da05 100644
--- a/libavcodec/mpegvideo_enc.c
+++ b/libavcodec/mpegvideo_enc.c
@@ -1103,6 +1103,10 @@ static int alloc_picture(MpegEncContext *s, Picture *pic)
if (ret < 0)
return ret;
+ ret = ff_mpv_pic_check_linesize(avctx, pic->f, &s->linesize, &s->uvlinesize);
+ if (ret < 0)
+ return ret;
+
for (int i = 0; pic->f->data[i]; i++) {
int offset = (EDGE_WIDTH >> (i ? s->chroma_y_shift : 0)) *
pic->f->linesize[i] +
@@ -1112,11 +1116,7 @@ static int alloc_picture(MpegEncContext *s, Picture *pic)
pic->f->width = avctx->width;
pic->f->height = avctx->height;
- av_assert1(s->mb_width == s->buffer_pools.alloc_mb_width);
- av_assert1(s->mb_height == s->buffer_pools.alloc_mb_height);
- av_assert1(s->mb_stride == s->buffer_pools.alloc_mb_stride);
- return ff_alloc_picture(s->avctx, pic, &s->me, &s->sc, &s->buffer_pools,
- s->mb_height, &s->linesize, &s->uvlinesize);
+ return 0;
}
static int load_input_picture(MpegEncContext *s, const AVFrame *pic_arg)
@@ -1188,7 +1188,7 @@ static int load_input_picture(MpegEncContext *s, const AVFrame *pic_arg)
} else {
ret = alloc_picture(s, pic);
if (ret < 0)
- return ret;
+ goto fail;
ret = av_frame_copy_props(pic->f, pic_arg);
if (ret < 0) {
ff_mpeg_unref_picture(pic);
@@ -1258,6 +1258,9 @@ static int load_input_picture(MpegEncContext *s, const AVFrame *pic_arg)
s->input_picture[encoding_delay] = pic;
return 0;
+fail:
+ ff_mpeg_unref_picture(pic);
+ return ret;
}
static int skip_check(MpegEncContext *s, const Picture *p, const Picture *ref)
@@ -1600,45 +1603,37 @@ no_output_pic:
s->reordered_input_picture[0]->f->pict_type !=
AV_PICTURE_TYPE_B ? 3 : 0;
- if ((ret = av_frame_ref(s->new_pic,
- s->reordered_input_picture[0]->f)))
- goto fail;
-
if (s->reordered_input_picture[0]->shared || s->avctx->rc_buffer_size) {
// input is a shared pix, so we can't modify it -> allocate a new
// one & ensure that the shared one is reuseable
-
- Picture *pic;
- int i = ff_find_unused_picture(s->avctx, s->picture, 0);
- if (i < 0)
- return i;
- pic = &s->picture[i];
-
- pic->reference = s->reordered_input_picture[0]->reference;
- ret = alloc_picture(s, pic);
+ av_frame_move_ref(s->new_pic, s->reordered_input_picture[0]->f);
+ ret = alloc_picture(s, s->reordered_input_picture[0]);
if (ret < 0)
goto fail;
- ret = av_frame_copy_props(pic->f, s->reordered_input_picture[0]->f);
- if (ret < 0) {
- ff_mpeg_unref_picture(pic);
+ ret = av_frame_copy_props(s->reordered_input_picture[0]->f, s->new_pic);
+ if (ret < 0)
goto fail;
- }
- pic->coded_picture_number = s->reordered_input_picture[0]->coded_picture_number;
- pic->display_picture_number = s->reordered_input_picture[0]->display_picture_number;
-
- /* mark us unused / free shared pic */
- ff_mpeg_unref_picture(s->reordered_input_picture[0]);
-
- s->cur_pic_ptr = pic;
} else {
// input is not a shared pix -> reuse buffer for current_pix
- s->cur_pic_ptr = s->reordered_input_picture[0];
+ ret = av_frame_ref(s->new_pic, s->reordered_input_picture[0]->f);
+ if (ret < 0)
+ goto fail;
for (int i = 0; i < MPV_MAX_PLANES; i++) {
if (s->new_pic->data[i])
s->new_pic->data[i] += INPLACE_OFFSET;
}
}
+ s->cur_pic_ptr = s->reordered_input_picture[0];
+ av_assert1(s->mb_width == s->buffer_pools.alloc_mb_width);
+ av_assert1(s->mb_height == s->buffer_pools.alloc_mb_height);
+ av_assert1(s->mb_stride == s->buffer_pools.alloc_mb_stride);
+ ret = ff_mpv_alloc_pic_accessories(s->avctx, s->cur_pic_ptr, &s->me,
+ &s->sc, &s->buffer_pools, s->mb_height);
+ if (ret < 0) {
+ ff_mpeg_unref_picture(s->cur_pic_ptr);
+ return ret;
+ }
s->picture_number = s->cur_pic_ptr->display_picture_number;
}
--
2.40.1
_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
To unsubscribe, visit link above, or email
ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".
^ permalink raw reply [flat|nested] 75+ messages in thread
* [FFmpeg-devel] [PATCH v2 39/71] avcodec/mpegvideo_enc: Pass AVFrame*, not Picture* to alloc_picture()
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (36 preceding siblings ...)
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 38/71] avcodec/mpegpicture: Split ff_alloc_picture() into check and alloc part Andreas Rheinhardt
@ 2024-05-11 20:51 ` Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 40/71] avcodec/mpegvideo_enc: Move copying properties " Andreas Rheinhardt
` (32 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:51 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
It now only deals with the AVFrame and no longer with the accessories.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpegvideo_enc.c | 24 ++++++++++++------------
1 file changed, 12 insertions(+), 12 deletions(-)
diff --git a/libavcodec/mpegvideo_enc.c b/libavcodec/mpegvideo_enc.c
index d4b280da05..c6f4cd9b0e 100644
--- a/libavcodec/mpegvideo_enc.c
+++ b/libavcodec/mpegvideo_enc.c
@@ -1091,30 +1091,30 @@ static int get_intra_count(MpegEncContext *s, const uint8_t *src,
return acc;
}
-static int alloc_picture(MpegEncContext *s, Picture *pic)
+static int alloc_picture(MpegEncContext *s, AVFrame *f)
{
AVCodecContext *avctx = s->avctx;
int ret;
- pic->f->width = avctx->width + 2 * EDGE_WIDTH;
- pic->f->height = avctx->height + 2 * EDGE_WIDTH;
+ f->width = avctx->width + 2 * EDGE_WIDTH;
+ f->height = avctx->height + 2 * EDGE_WIDTH;
- ret = ff_encode_alloc_frame(avctx, pic->f);
+ ret = ff_encode_alloc_frame(avctx, f);
if (ret < 0)
return ret;
- ret = ff_mpv_pic_check_linesize(avctx, pic->f, &s->linesize, &s->uvlinesize);
+ ret = ff_mpv_pic_check_linesize(avctx, f, &s->linesize, &s->uvlinesize);
if (ret < 0)
return ret;
- for (int i = 0; pic->f->data[i]; i++) {
+ for (int i = 0; f->data[i]; i++) {
int offset = (EDGE_WIDTH >> (i ? s->chroma_y_shift : 0)) *
- pic->f->linesize[i] +
+ f->linesize[i] +
(EDGE_WIDTH >> (i ? s->chroma_x_shift : 0));
- pic->f->data[i] += offset;
+ f->data[i] += offset;
}
- pic->f->width = avctx->width;
- pic->f->height = avctx->height;
+ f->width = avctx->width;
+ f->height = avctx->height;
return 0;
}
@@ -1186,7 +1186,7 @@ static int load_input_picture(MpegEncContext *s, const AVFrame *pic_arg)
return ret;
pic->shared = 1;
} else {
- ret = alloc_picture(s, pic);
+ ret = alloc_picture(s, pic->f);
if (ret < 0)
goto fail;
ret = av_frame_copy_props(pic->f, pic_arg);
@@ -1607,7 +1607,7 @@ no_output_pic:
// input is a shared pix, so we can't modify it -> allocate a new
// one & ensure that the shared one is reuseable
av_frame_move_ref(s->new_pic, s->reordered_input_picture[0]->f);
- ret = alloc_picture(s, s->reordered_input_picture[0]);
+ ret = alloc_picture(s, s->reordered_input_picture[0]->f);
if (ret < 0)
goto fail;
--
2.40.1
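The rationale above — alloc_picture() "now only deals with the AVFrame and no longer with the accessories" — is the classic interface-narrowing move: a function should take exactly the type it touches. A minimal sketch of the before/after shape, using toy stand-ins (Frame, Picture and the field names are illustrative, not the real FFmpeg definitions):

```c
/* Toy stand-ins for AVFrame and the Picture wrapper. */
typedef struct Frame   { int width, height; } Frame;
typedef struct Picture { Frame *f; int coded_picture_number; } Picture;

/* Before: took the wrapper although only the frame was used. */
static int alloc_picture_old(Picture *pic, int w, int h)
{
    pic->f->width  = w;
    pic->f->height = h;
    return 0;
}

/* After: takes exactly what it touches, so callers can pass any
 * frame, wrapped or not. */
static int alloc_picture_new(Frame *f, int w, int h)
{
    f->width  = w;
    f->height = h;
    return 0;
}
```

With the narrowed signature, the two call sites in the patch simply pass `pic->f` / `s->reordered_input_picture[0]->f`.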
* [FFmpeg-devel] [PATCH v2 40/71] avcodec/mpegvideo_enc: Move copying properties to alloc_picture()
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (37 preceding siblings ...)
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 39/71] avcodec/mpegvideo_enc: Pass AVFrame*, not Picture* to alloc_picture() Andreas Rheinhardt
@ 2024-05-11 20:51 ` Andreas Rheinhardt
2024-05-12 19:55 ` Michael Niedermayer
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 41/71] avcodec/mpegpicture: Rename Picture->MPVPicture Andreas Rheinhardt
` (31 subsequent siblings)
70 siblings, 1 reply; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:51 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
This way, said function sets everything except for the actual
contents of the frame's data. Also rename it to prepare_picture()
to reflect its new role.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpegvideo_enc.c | 22 +++++++++++-----------
1 file changed, 11 insertions(+), 11 deletions(-)
diff --git a/libavcodec/mpegvideo_enc.c b/libavcodec/mpegvideo_enc.c
index c6f4cd9b0e..393b21823f 100644
--- a/libavcodec/mpegvideo_enc.c
+++ b/libavcodec/mpegvideo_enc.c
@@ -1091,7 +1091,11 @@ static int get_intra_count(MpegEncContext *s, const uint8_t *src,
return acc;
}
-static int alloc_picture(MpegEncContext *s, AVFrame *f)
+/**
+ * Allocates new buffers for an AVFrame and copies the properties
+ * from another AVFrame.
+ */
+static int prepare_picture(MpegEncContext *s, AVFrame *f, const AVFrame *props_frame)
{
AVCodecContext *avctx = s->avctx;
int ret;
@@ -1107,6 +1111,10 @@ static int alloc_picture(MpegEncContext *s, AVFrame *f)
if (ret < 0)
return ret;
+ ret = av_frame_copy_props(f, props_frame);
+ if (ret < 0)
+ return ret;
+
for (int i = 0; f->data[i]; i++) {
int offset = (EDGE_WIDTH >> (i ? s->chroma_y_shift : 0)) *
f->linesize[i] +
@@ -1186,14 +1194,9 @@ static int load_input_picture(MpegEncContext *s, const AVFrame *pic_arg)
return ret;
pic->shared = 1;
} else {
- ret = alloc_picture(s, pic->f);
+ ret = prepare_picture(s, pic->f, pic_arg);
if (ret < 0)
goto fail;
- ret = av_frame_copy_props(pic->f, pic_arg);
- if (ret < 0) {
- ff_mpeg_unref_picture(pic);
- return ret;
- }
for (int i = 0; i < 3; i++) {
ptrdiff_t src_stride = pic_arg->linesize[i];
@@ -1607,11 +1610,8 @@ no_output_pic:
// input is a shared pix, so we can't modify it -> allocate a new
// one & ensure that the shared one is reuseable
av_frame_move_ref(s->new_pic, s->reordered_input_picture[0]->f);
- ret = alloc_picture(s, s->reordered_input_picture[0]->f);
- if (ret < 0)
- goto fail;
- ret = av_frame_copy_props(s->reordered_input_picture[0]->f, s->new_pic);
+ ret = prepare_picture(s, s->reordered_input_picture[0]->f, s->new_pic);
if (ret < 0)
goto fail;
} else {
--
2.40.1
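The payoff of folding av_frame_copy_props() into prepare_picture() is visible in the hunks above: both callers lose their separate copy-props step and its unref-on-failure block. A minimal sketch of the pattern, with toy allocator/copier helpers standing in for ff_encode_alloc_frame()/av_frame_copy_props() (names and fields are illustrative, not the real FFmpeg API):

```c
/* Toy model: allocate the frame's buffers and copy properties in one
 * helper, so every caller has a single error path instead of
 * duplicated cleanup. */
typedef struct Frame { int allocated; int props; } Frame;

static int alloc_buffers(Frame *f)                  { f->allocated = 1; return 0; }
static int copy_props(Frame *dst, const Frame *src) { dst->props = src->props; return 0; }

static int prepare_picture(Frame *f, const Frame *props_frame)
{
    int ret = alloc_buffers(f);
    if (ret < 0)
        return ret;
    /* Copying the props here means callers no longer need their own
     * unref-on-failure block after a separate copy step. */
    return copy_props(f, props_frame);
}
```

Callers now check one return value and jump to one fail label, as in the rewritten load_input_picture() above.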
* [FFmpeg-devel] [PATCH v2 41/71] avcodec/mpegpicture: Rename Picture->MPVPicture
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (38 preceding siblings ...)
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 40/71] avcodec/mpegvideo_enc: Move copying properties " Andreas Rheinhardt
@ 2024-05-11 20:51 ` Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 42/71] avcodec/vc1_mc: Don't check AVFrame INTERLACE flags Andreas Rheinhardt
` (30 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:51 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
Picture is just too generic.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/dxva2_mpeg2.c | 2 +-
libavcodec/dxva2_vc1.c | 4 ++--
libavcodec/intrax8.c | 2 +-
libavcodec/intrax8.h | 2 +-
libavcodec/ituh263dec.c | 4 ++--
libavcodec/mpeg4videoenc.c | 2 +-
libavcodec/mpeg_er.c | 2 +-
libavcodec/mpegpicture.c | 16 ++++++++--------
libavcodec/mpegpicture.h | 18 +++++++++---------
libavcodec/mpegvideo.h | 18 +++++++++---------
libavcodec/mpegvideo_dec.c | 13 +++++++------
libavcodec/mpegvideo_enc.c | 6 +++---
libavcodec/mpegvideo_motion.c | 2 +-
libavcodec/mpegvideodec.h | 5 +++--
libavcodec/mss2.c | 2 +-
libavcodec/ratecontrol.c | 2 +-
libavcodec/rv34.c | 2 +-
libavcodec/vc1dec.c | 2 +-
libavcodec/vdpau.c | 2 +-
libavcodec/vdpau_mpeg12.c | 4 ++--
libavcodec/vdpau_mpeg4.c | 2 +-
libavcodec/vdpau_vc1.c | 4 ++--
22 files changed, 59 insertions(+), 57 deletions(-)
diff --git a/libavcodec/dxva2_mpeg2.c b/libavcodec/dxva2_mpeg2.c
index fde615f530..d29a5bb538 100644
--- a/libavcodec/dxva2_mpeg2.c
+++ b/libavcodec/dxva2_mpeg2.c
@@ -45,7 +45,7 @@ void ff_dxva2_mpeg2_fill_picture_parameters(AVCodecContext *avctx,
DXVA_PictureParameters *pp)
{
const struct MpegEncContext *s = avctx->priv_data;
- const Picture *current_picture = s->cur_pic_ptr;
+ const MPVPicture *current_picture = s->cur_pic_ptr;
int is_field = s->picture_structure != PICT_FRAME;
memset(pp, 0, sizeof(*pp));
diff --git a/libavcodec/dxva2_vc1.c b/libavcodec/dxva2_vc1.c
index 7122f1cfea..f536da1008 100644
--- a/libavcodec/dxva2_vc1.c
+++ b/libavcodec/dxva2_vc1.c
@@ -46,7 +46,7 @@ void ff_dxva2_vc1_fill_picture_parameters(AVCodecContext *avctx,
{
const VC1Context *v = avctx->priv_data;
const MpegEncContext *s = &v->s;
- const Picture *current_picture = s->cur_pic_ptr;
+ const MPVPicture *current_picture = s->cur_pic_ptr;
int intcomp = 0;
// determine if intensity compensation is needed
@@ -336,7 +336,7 @@ static int dxva2_vc1_decode_slice(AVCodecContext *avctx,
uint32_t size)
{
const VC1Context *v = avctx->priv_data;
- const Picture *current_picture = v->s.cur_pic_ptr;
+ const MPVPicture *current_picture = v->s.cur_pic_ptr;
struct dxva2_picture_context *ctx_pic = current_picture->hwaccel_picture_private;
unsigned position;
diff --git a/libavcodec/intrax8.c b/libavcodec/intrax8.c
index 40085c69ce..f1dce86a50 100644
--- a/libavcodec/intrax8.c
+++ b/libavcodec/intrax8.c
@@ -730,7 +730,7 @@ av_cold void ff_intrax8_common_end(IntraX8Context *w)
av_freep(&w->prediction_table);
}
-int ff_intrax8_decode_picture(IntraX8Context *w, Picture *pict,
+int ff_intrax8_decode_picture(IntraX8Context *w, MPVPicture *pict,
GetBitContext *gb, int *mb_x, int *mb_y,
int dquant, int quant_offset,
int loopfilter, int lowdelay)
diff --git a/libavcodec/intrax8.h b/libavcodec/intrax8.h
index 8e22361f1f..b9f8c4250b 100644
--- a/libavcodec/intrax8.h
+++ b/libavcodec/intrax8.h
@@ -106,7 +106,7 @@ void ff_intrax8_common_end(IntraX8Context *w);
* @param quant_offset offset away from zero
* @param loopfilter enable filter after decoding a block
*/
-int ff_intrax8_decode_picture(IntraX8Context *w, Picture *pict,
+int ff_intrax8_decode_picture(IntraX8Context *w, MPVPicture *pict,
GetBitContext *gb, int *mb_x, int *mb_y,
int quant, int halfpq,
int loopfilter, int lowdelay);
diff --git a/libavcodec/ituh263dec.c b/libavcodec/ituh263dec.c
index 492cb5e0d4..2e4d74adc8 100644
--- a/libavcodec/ituh263dec.c
+++ b/libavcodec/ituh263dec.c
@@ -719,7 +719,7 @@ static int h263_get_modb(GetBitContext *gb, int pb_frame, int *cbpb)
#define tab_size ((signed)FF_ARRAY_ELEMS(s->direct_scale_mv[0]))
#define tab_bias (tab_size / 2)
-static inline void set_one_direct_mv(MpegEncContext *s, const Picture *p, int i)
+static inline void set_one_direct_mv(MpegEncContext *s, const MPVPicture *p, int i)
{
int xy = s->block_index[i];
uint16_t time_pp = s->pp_time;
@@ -750,7 +750,7 @@ static inline void set_one_direct_mv(MpegEncContext *s, const Picture *p, int i)
static int set_direct_mv(MpegEncContext *s)
{
const int mb_index = s->mb_x + s->mb_y * s->mb_stride;
- const Picture *p = &s->next_pic;
+ const MPVPicture *p = &s->next_pic;
int colocated_mb_type = p->mb_type[mb_index];
int i;
diff --git a/libavcodec/mpeg4videoenc.c b/libavcodec/mpeg4videoenc.c
index c5b5b3ea50..036171fe70 100644
--- a/libavcodec/mpeg4videoenc.c
+++ b/libavcodec/mpeg4videoenc.c
@@ -652,7 +652,7 @@ void ff_mpeg4_encode_mb(MpegEncContext *s, int16_t block[6][64],
for (i = 0; i < s->max_b_frames; i++) {
const uint8_t *b_pic;
int diff;
- Picture *pic = s->reordered_input_picture[i + 1];
+ const MPVPicture *pic = s->reordered_input_picture[i + 1];
if (!pic || pic->f->pict_type != AV_PICTURE_TYPE_B)
break;
diff --git a/libavcodec/mpeg_er.c b/libavcodec/mpeg_er.c
index 360f3ce3e0..21fe7d6f71 100644
--- a/libavcodec/mpeg_er.c
+++ b/libavcodec/mpeg_er.c
@@ -22,7 +22,7 @@
#include "mpegvideodec.h"
#include "mpeg_er.h"
-static void set_erpic(ERPicture *dst, const Picture *src)
+static void set_erpic(ERPicture *dst, const MPVPicture *src)
{
int i;
diff --git a/libavcodec/mpegpicture.c b/libavcodec/mpegpicture.c
index 840aa23c38..429c110397 100644
--- a/libavcodec/mpegpicture.c
+++ b/libavcodec/mpegpicture.c
@@ -30,7 +30,7 @@
#include "refstruct.h"
#include "threadframe.h"
-static void av_noinline free_picture_tables(Picture *pic)
+static void av_noinline free_picture_tables(MPVPicture *pic)
{
ff_refstruct_unref(&pic->mbskip_table);
ff_refstruct_unref(&pic->qscale_table_base);
@@ -116,7 +116,7 @@ int ff_mpv_pic_check_linesize(void *logctx, const AVFrame *f,
return 0;
}
-static int alloc_picture_tables(BufferPoolContext *pools, Picture *pic,
+static int alloc_picture_tables(BufferPoolContext *pools, MPVPicture *pic,
int mb_height)
{
#define GET_BUFFER(name, buf_suffix, idx_suffix) do { \
@@ -143,7 +143,7 @@ static int alloc_picture_tables(BufferPoolContext *pools, Picture *pic,
return 0;
}
-int ff_mpv_alloc_pic_accessories(AVCodecContext *avctx, Picture *pic,
+int ff_mpv_alloc_pic_accessories(AVCodecContext *avctx, MPVPicture *pic,
MotionEstContext *me, ScratchpadContext *sc,
BufferPoolContext *pools, int mb_height)
{
@@ -181,7 +181,7 @@ fail:
* Deallocate a picture; frees the picture tables in case they
* need to be reallocated anyway.
*/
-void ff_mpeg_unref_picture(Picture *pic)
+void ff_mpeg_unref_picture(MPVPicture *pic)
{
pic->tf.f = pic->f;
ff_thread_release_ext_buffer(&pic->tf);
@@ -203,7 +203,7 @@ void ff_mpeg_unref_picture(Picture *pic)
pic->coded_picture_number = 0;
}
-static void update_picture_tables(Picture *dst, const Picture *src)
+static void update_picture_tables(MPVPicture *dst, const MPVPicture *src)
{
ff_refstruct_replace(&dst->mbskip_table, src->mbskip_table);
ff_refstruct_replace(&dst->qscale_table_base, src->qscale_table_base);
@@ -223,7 +223,7 @@ static void update_picture_tables(Picture *dst, const Picture *src)
dst->mb_stride = src->mb_stride;
}
-int ff_mpeg_ref_picture(Picture *dst, Picture *src)
+int ff_mpeg_ref_picture(MPVPicture *dst, MPVPicture *src)
{
int ret;
@@ -260,7 +260,7 @@ fail:
return ret;
}
-int ff_find_unused_picture(AVCodecContext *avctx, Picture *picture, int shared)
+int ff_find_unused_picture(AVCodecContext *avctx, MPVPicture *picture, int shared)
{
for (int i = 0; i < MAX_PICTURE_COUNT; i++)
if (!picture[i].f->buf[0])
@@ -283,7 +283,7 @@ int ff_find_unused_picture(AVCodecContext *avctx, Picture *picture, int shared)
return -1;
}
-void av_cold ff_mpv_picture_free(Picture *pic)
+void av_cold ff_mpv_picture_free(MPVPicture *pic)
{
ff_mpeg_unref_picture(pic);
av_frame_free(&pic->f);
diff --git a/libavcodec/mpegpicture.h b/libavcodec/mpegpicture.h
index 6589b38262..f0837b158a 100644
--- a/libavcodec/mpegpicture.h
+++ b/libavcodec/mpegpicture.h
@@ -52,9 +52,9 @@ typedef struct BufferPoolContext {
} BufferPoolContext;
/**
- * Picture.
+ * MPVPicture.
*/
-typedef struct Picture {
+typedef struct MPVPicture {
struct AVFrame *f;
ThreadFrame tf;
@@ -91,12 +91,12 @@ typedef struct Picture {
int display_picture_number;
int coded_picture_number;
-} Picture;
+} MPVPicture;
/**
- * Allocate a Picture's accessories, but not the AVFrame's buffer itself.
+ * Allocate an MPVPicture's accessories, but not the AVFrame's buffer itself.
*/
-int ff_mpv_alloc_pic_accessories(AVCodecContext *avctx, Picture *pic,
+int ff_mpv_alloc_pic_accessories(AVCodecContext *avctx, MPVPicture *pic,
MotionEstContext *me, ScratchpadContext *sc,
BufferPoolContext *pools, int mb_height);
@@ -112,11 +112,11 @@ int ff_mpv_pic_check_linesize(void *logctx, const struct AVFrame *f,
int ff_mpeg_framesize_alloc(AVCodecContext *avctx, MotionEstContext *me,
ScratchpadContext *sc, int linesize);
-int ff_mpeg_ref_picture(Picture *dst, Picture *src);
-void ff_mpeg_unref_picture(Picture *picture);
+int ff_mpeg_ref_picture(MPVPicture *dst, MPVPicture *src);
+void ff_mpeg_unref_picture(MPVPicture *picture);
-void ff_mpv_picture_free(Picture *pic);
+void ff_mpv_picture_free(MPVPicture *pic);
-int ff_find_unused_picture(AVCodecContext *avctx, Picture *picture, int shared);
+int ff_find_unused_picture(AVCodecContext *avctx, MPVPicture *picture, int shared);
#endif /* AVCODEC_MPEGPICTURE_H */
diff --git a/libavcodec/mpegvideo.h b/libavcodec/mpegvideo.h
index 3150f337c0..6d96376a6e 100644
--- a/libavcodec/mpegvideo.h
+++ b/libavcodec/mpegvideo.h
@@ -128,9 +128,9 @@ typedef struct MpegEncContext {
int mb_num; ///< number of MBs of a picture
ptrdiff_t linesize; ///< line size, in bytes, may be different from width
ptrdiff_t uvlinesize; ///< line size, for chroma in bytes, may be different from width
- Picture *picture; ///< main picture buffer
- Picture **input_picture; ///< next pictures on display order for encoding
- Picture **reordered_input_picture; ///< pointer to the next pictures in coded order for encoding
+ MPVPicture *picture; ///< main picture buffer
+ MPVPicture **input_picture;///< next pictures on display order for encoding
+ MPVPicture **reordered_input_picture; ///< pointer to the next pictures in coded order for encoding
BufferPoolContext buffer_pools;
@@ -156,13 +156,13 @@ typedef struct MpegEncContext {
* copy of the previous picture structure.
* note, linesize & data, might not match the previous picture (for field pictures)
*/
- Picture last_pic;
+ MPVPicture last_pic;
/**
* copy of the next picture structure.
* note, linesize & data, might not match the next picture (for field pictures)
*/
- Picture next_pic;
+ MPVPicture next_pic;
/**
* Reference to the source picture for encoding.
@@ -174,11 +174,11 @@ typedef struct MpegEncContext {
* copy of the current picture structure.
* note, linesize & data, might not match the current picture (for field pictures)
*/
- Picture cur_pic; ///< buffer to store the decompressed current picture
+ MPVPicture cur_pic;
- Picture *last_pic_ptr; ///< pointer to the previous picture.
- Picture *next_pic_ptr; ///< pointer to the next picture (for bidir pred)
- Picture *cur_pic_ptr; ///< pointer to the current picture
+ MPVPicture *last_pic_ptr; ///< pointer to the previous picture.
+ MPVPicture *next_pic_ptr; ///< pointer to the next picture (for bidir pred)
+ MPVPicture *cur_pic_ptr; ///< pointer to the current picture
int skipped_last_frame;
int last_dc[3]; ///< last DC values for MPEG-1
int16_t *dc_val_base;
diff --git a/libavcodec/mpegvideo_dec.c b/libavcodec/mpegvideo_dec.c
index 663d97e60f..97efd4fe81 100644
--- a/libavcodec/mpegvideo_dec.c
+++ b/libavcodec/mpegvideo_dec.c
@@ -228,11 +228,11 @@ int ff_mpv_common_frame_size_change(MpegEncContext *s)
return err;
}
-static int alloc_picture(MpegEncContext *s, Picture **picp, int reference)
+static int alloc_picture(MpegEncContext *s, MPVPicture **picp, int reference)
{
AVCodecContext *avctx = s->avctx;
int idx = ff_find_unused_picture(s->avctx, s->picture, 0);
- Picture *pic;
+ MPVPicture *pic;
int ret;
if (idx < 0)
@@ -283,9 +283,9 @@ fail:
return ret;
}
-static int av_cold alloc_dummy_frame(MpegEncContext *s, Picture **picp, Picture *wpic)
+static int av_cold alloc_dummy_frame(MpegEncContext *s, MPVPicture **picp, MPVPicture *wpic)
{
- Picture *pic;
+ MPVPicture *pic;
int ret = alloc_picture(s, &pic, 1);
if (ret < 0)
return ret;
@@ -475,14 +475,15 @@ void ff_mpv_frame_end(MpegEncContext *s)
ff_thread_report_progress(&s->cur_pic_ptr->tf, INT_MAX, 0);
}
-void ff_print_debug_info(const MpegEncContext *s, const Picture *p, AVFrame *pict)
+void ff_print_debug_info(const MpegEncContext *s, const MPVPicture *p, AVFrame *pict)
{
ff_print_debug_info2(s->avctx, pict, s->mbskip_table, p->mb_type,
p->qscale_table, p->motion_val,
s->mb_width, s->mb_height, s->mb_stride, s->quarter_sample);
}
-int ff_mpv_export_qp_table(const MpegEncContext *s, AVFrame *f, const Picture *p, int qp_type)
+int ff_mpv_export_qp_table(const MpegEncContext *s, AVFrame *f,
+ const MPVPicture *p, int qp_type)
{
AVVideoEncParams *par;
int mult = (qp_type == FF_MPV_QSCALE_TYPE_MPEG1) ? 2 : 1;
diff --git a/libavcodec/mpegvideo_enc.c b/libavcodec/mpegvideo_enc.c
index 393b21823f..251b954210 100644
--- a/libavcodec/mpegvideo_enc.c
+++ b/libavcodec/mpegvideo_enc.c
@@ -1129,7 +1129,7 @@ static int prepare_picture(MpegEncContext *s, AVFrame *f, const AVFrame *props_f
static int load_input_picture(MpegEncContext *s, const AVFrame *pic_arg)
{
- Picture *pic = NULL;
+ MPVPicture *pic = NULL;
int64_t pts;
int i, display_picture_number = 0, ret;
int encoding_delay = s->max_b_frames ? s->max_b_frames
@@ -1266,7 +1266,7 @@ fail:
return ret;
}
-static int skip_check(MpegEncContext *s, const Picture *p, const Picture *ref)
+static int skip_check(MpegEncContext *s, const MPVPicture *p, const MPVPicture *ref)
{
int x, y, plane;
int score = 0;
@@ -1355,7 +1355,7 @@ static int estimate_best_b_count(MpegEncContext *s)
FF_LAMBDA_SHIFT;
for (i = 0; i < s->max_b_frames + 2; i++) {
- const Picture *pre_input_ptr = i ? s->input_picture[i - 1] :
+ const MPVPicture *pre_input_ptr = i ? s->input_picture[i - 1] :
s->next_pic_ptr;
if (pre_input_ptr) {
diff --git a/libavcodec/mpegvideo_motion.c b/libavcodec/mpegvideo_motion.c
index 964caa5afb..56d794974b 100644
--- a/libavcodec/mpegvideo_motion.c
+++ b/libavcodec/mpegvideo_motion.c
@@ -514,7 +514,7 @@ static inline void apply_obmc(MpegEncContext *s,
const op_pixels_func (*pix_op)[4])
{
LOCAL_ALIGNED_8(int16_t, mv_cache, [4], [4][2]);
- const Picture *cur_frame = &s->cur_pic;
+ const MPVPicture *cur_frame = &s->cur_pic;
int mb_x = s->mb_x;
int mb_y = s->mb_y;
const int xy = mb_x + mb_y * s->mb_stride;
diff --git a/libavcodec/mpegvideodec.h b/libavcodec/mpegvideodec.h
index 42c2697749..4259d5a02d 100644
--- a/libavcodec/mpegvideodec.h
+++ b/libavcodec/mpegvideodec.h
@@ -58,12 +58,13 @@ void ff_mpv_reconstruct_mb(MpegEncContext *s, int16_t block[12][64]);
void ff_mpv_report_decode_progress(MpegEncContext *s);
void ff_mpv_frame_end(MpegEncContext *s);
-int ff_mpv_export_qp_table(const MpegEncContext *s, AVFrame *f, const Picture *p, int qp_type);
+int ff_mpv_export_qp_table(const MpegEncContext *s, AVFrame *f,
+ const MPVPicture *p, int qp_type);
int ff_mpeg_update_thread_context(AVCodecContext *dst, const AVCodecContext *src);
void ff_mpeg_draw_horiz_band(MpegEncContext *s, int y, int h);
void ff_mpeg_flush(AVCodecContext *avctx);
-void ff_print_debug_info(const MpegEncContext *s, const Picture *p, AVFrame *pict);
+void ff_print_debug_info(const MpegEncContext *s, const MPVPicture *p, AVFrame *pict);
static inline int mpeg_get_qscale(MpegEncContext *s)
{
diff --git a/libavcodec/mss2.c b/libavcodec/mss2.c
index 5d52744529..05319436b6 100644
--- a/libavcodec/mss2.c
+++ b/libavcodec/mss2.c
@@ -382,7 +382,7 @@ static int decode_wmv9(AVCodecContext *avctx, const uint8_t *buf, int buf_size,
MSS12Context *c = &ctx->c;
VC1Context *v = avctx->priv_data;
MpegEncContext *s = &v->s;
- Picture *f;
+ MPVPicture *f;
int ret;
ff_mpeg_flush(avctx);
diff --git a/libavcodec/ratecontrol.c b/libavcodec/ratecontrol.c
index 1c9af6b53c..1429b3a93a 100644
--- a/libavcodec/ratecontrol.c
+++ b/libavcodec/ratecontrol.c
@@ -929,7 +929,7 @@ float ff_rate_estimate_qscale(MpegEncContext *s, int dry_run)
rce = &rcc->entry[picture_number];
wanted_bits = rce->expected_bits;
} else {
- const Picture *dts_pic;
+ const MPVPicture *dts_pic;
rce = &local_rce;
/* FIXME add a dts field to AVFrame and ensure it is set and use it
diff --git a/libavcodec/rv34.c b/libavcodec/rv34.c
index df1d570e73..284de14e8c 100644
--- a/libavcodec/rv34.c
+++ b/libavcodec/rv34.c
@@ -565,7 +565,7 @@ static void rv34_pred_mv_b(RV34DecContext *r, int block_type, int dir)
int has_A = 0, has_B = 0, has_C = 0;
int mx, my;
int i, j;
- Picture *cur_pic = &s->cur_pic;
+ MPVPicture *cur_pic = &s->cur_pic;
const int mask = dir ? MB_TYPE_L1 : MB_TYPE_L0;
int type = cur_pic->mb_type[mb_pos];
diff --git a/libavcodec/vc1dec.c b/libavcodec/vc1dec.c
index b89f695b56..71fda305da 100644
--- a/libavcodec/vc1dec.c
+++ b/libavcodec/vc1dec.c
@@ -340,7 +340,7 @@ static void vc1_sprite_flush(AVCodecContext *avctx)
{
VC1Context *v = avctx->priv_data;
MpegEncContext *s = &v->s;
- Picture *f = &s->cur_pic;
+ MPVPicture *f = &s->cur_pic;
int plane, i;
/* Windows Media Image codecs have a convergence interval of two keyframes.
diff --git a/libavcodec/vdpau.c b/libavcodec/vdpau.c
index cd7194138d..f46bfa2bdf 100644
--- a/libavcodec/vdpau.c
+++ b/libavcodec/vdpau.c
@@ -370,7 +370,7 @@ int ff_vdpau_common_end_frame(AVCodecContext *avctx, AVFrame *frame,
int ff_vdpau_mpeg_end_frame(AVCodecContext *avctx)
{
MpegEncContext *s = avctx->priv_data;
- Picture *pic = s->cur_pic_ptr;
+ MPVPicture *pic = s->cur_pic_ptr;
struct vdpau_picture_context *pic_ctx = pic->hwaccel_picture_private;
int val;
diff --git a/libavcodec/vdpau_mpeg12.c b/libavcodec/vdpau_mpeg12.c
index 1f0ea7e803..abd8cb19af 100644
--- a/libavcodec/vdpau_mpeg12.c
+++ b/libavcodec/vdpau_mpeg12.c
@@ -35,7 +35,7 @@ static int vdpau_mpeg_start_frame(AVCodecContext *avctx,
const uint8_t *buffer, uint32_t size)
{
MpegEncContext * const s = avctx->priv_data;
- Picture *pic = s->cur_pic_ptr;
+ MPVPicture *pic = s->cur_pic_ptr;
struct vdpau_picture_context *pic_ctx = pic->hwaccel_picture_private;
VdpPictureInfoMPEG1Or2 *info = &pic_ctx->info.mpeg;
VdpVideoSurface ref;
@@ -87,7 +87,7 @@ static int vdpau_mpeg_decode_slice(AVCodecContext *avctx,
const uint8_t *buffer, uint32_t size)
{
MpegEncContext * const s = avctx->priv_data;
- Picture *pic = s->cur_pic_ptr;
+ MPVPicture *pic = s->cur_pic_ptr;
struct vdpau_picture_context *pic_ctx = pic->hwaccel_picture_private;
int val;
diff --git a/libavcodec/vdpau_mpeg4.c b/libavcodec/vdpau_mpeg4.c
index ecbc80b86d..e2766835f6 100644
--- a/libavcodec/vdpau_mpeg4.c
+++ b/libavcodec/vdpau_mpeg4.c
@@ -34,7 +34,7 @@ static int vdpau_mpeg4_start_frame(AVCodecContext *avctx,
{
Mpeg4DecContext *ctx = avctx->priv_data;
MpegEncContext * const s = &ctx->m;
- Picture *pic = s->cur_pic_ptr;
+ MPVPicture *pic = s->cur_pic_ptr;
struct vdpau_picture_context *pic_ctx = pic->hwaccel_picture_private;
VdpPictureInfoMPEG4Part2 *info = &pic_ctx->info.mpeg4;
VdpVideoSurface ref;
diff --git a/libavcodec/vdpau_vc1.c b/libavcodec/vdpau_vc1.c
index 119e514c0e..9ed1665cad 100644
--- a/libavcodec/vdpau_vc1.c
+++ b/libavcodec/vdpau_vc1.c
@@ -36,7 +36,7 @@ static int vdpau_vc1_start_frame(AVCodecContext *avctx,
{
VC1Context * const v = avctx->priv_data;
MpegEncContext * const s = &v->s;
- Picture *pic = s->cur_pic_ptr;
+ MPVPicture *pic = s->cur_pic_ptr;
struct vdpau_picture_context *pic_ctx = pic->hwaccel_picture_private;
VdpPictureInfoVC1 *info = &pic_ctx->info.vc1;
VdpVideoSurface ref;
@@ -104,7 +104,7 @@ static int vdpau_vc1_decode_slice(AVCodecContext *avctx,
{
VC1Context * const v = avctx->priv_data;
MpegEncContext * const s = &v->s;
- Picture *pic = s->cur_pic_ptr;
+ MPVPicture *pic = s->cur_pic_ptr;
struct vdpau_picture_context *pic_ctx = pic->hwaccel_picture_private;
int val;
--
2.40.1
* [FFmpeg-devel] [PATCH v2 42/71] avcodec/vc1_mc: Don't check AVFrame INTERLACE flags
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (39 preceding siblings ...)
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 41/71] avcodec/mpegpicture: Rename Picture->MPVPicture Andreas Rheinhardt
@ 2024-05-11 20:51 ` Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 43/71] avcodec/mpegpicture: Split MPVPicture into WorkPicture and ordinary Pic Andreas Rheinhardt
` (29 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:51 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
Instead cache these values in VC1Context to avoid the indirection
and AND.
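The caching the patch describes can be sketched as follows: derive the interlaced bit once when the reference pictures are set up, instead of a pointer chase plus AND on every motion-compensation call. The flag value and struct layout here are illustrative stand-ins, not FFmpeg's real definitions:

```c
#define FRAME_FLAG_INTERLACED (1 << 3)  /* stand-in for AV_FRAME_FLAG_INTERLACED */

typedef struct Frame { int flags; } Frame;
typedef struct Pic   { Frame *f; } Pic;

typedef struct VC1Ctx {
    int last_interlaced, next_interlaced;  /* cached per-frame bits */
} VC1Ctx;

/* Computed once per frame setup; the hot MC loops then read a plain
 * int field instead of dereferencing pic->f and masking its flags. */
static void init_interlaced_cache(VC1Ctx *v, const Pic *last, const Pic *next)
{
    v->last_interlaced = !!(last->f->flags & FRAME_FLAG_INTERLACED);
    v->next_interlaced = !!(next->f->flags & FRAME_FLAG_INTERLACED);
}
```

This mirrors how the diff below replaces each `!!(s->last_pic.f->flags & AV_FRAME_FLAG_INTERLACED)` with the cached `v->last_interlaced`.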
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/vc1.h | 1 +
libavcodec/vc1_mc.c | 20 +++++++++-----------
libavcodec/vc1dec.c | 2 ++
3 files changed, 12 insertions(+), 11 deletions(-)
diff --git a/libavcodec/vc1.h b/libavcodec/vc1.h
index 0e01458c89..185236662f 100644
--- a/libavcodec/vc1.h
+++ b/libavcodec/vc1.h
@@ -293,6 +293,7 @@ typedef struct VC1Context{
uint8_t next_luty[2][256], next_lutuv[2][256]; ///< lookup tables used for intensity compensation
uint8_t (*curr_luty)[256] ,(*curr_lutuv)[256];
int last_use_ic, *curr_use_ic, next_use_ic, aux_use_ic;
+ int last_interlaced, next_interlaced; ///< whether last_pic, next_pic is interlaced
int rnd; ///< rounding control
int cbptab;
diff --git a/libavcodec/vc1_mc.c b/libavcodec/vc1_mc.c
index 90ff1eee58..fad9a4c370 100644
--- a/libavcodec/vc1_mc.c
+++ b/libavcodec/vc1_mc.c
@@ -233,7 +233,7 @@ void ff_vc1_mc_1mv(VC1Context *v, int dir)
luty = v->last_luty;
lutuv = v->last_lutuv;
use_ic = v->last_use_ic;
- interlace = !!(s->last_pic.f->flags & AV_FRAME_FLAG_INTERLACED);
+ interlace = v->last_interlaced;
}
} else {
srcY = s->next_pic.data[0];
@@ -242,7 +242,7 @@ void ff_vc1_mc_1mv(VC1Context *v, int dir)
luty = v->next_luty;
lutuv = v->next_lutuv;
use_ic = v->next_use_ic;
- interlace = !!(s->next_pic.f->flags & AV_FRAME_FLAG_INTERLACED);
+ interlace = v->next_interlaced;
}
if (!srcY || !srcU) {
@@ -482,13 +482,13 @@ void ff_vc1_mc_4mv_luma(VC1Context *v, int n, int dir, int avg)
srcY = s->last_pic.data[0];
luty = v->last_luty;
use_ic = v->last_use_ic;
- interlace = !!(s->last_pic.f->flags & AV_FRAME_FLAG_INTERLACED);
+ interlace = v->last_interlaced;
}
} else {
srcY = s->next_pic.data[0];
luty = v->next_luty;
use_ic = v->next_use_ic;
- interlace = !!(s->next_pic.f->flags & AV_FRAME_FLAG_INTERLACED);
+ interlace = v->next_interlaced;
}
if (!srcY) {
@@ -708,14 +708,14 @@ void ff_vc1_mc_4mv_chroma(VC1Context *v, int dir)
srcV = s->last_pic.data[2];
lutuv = v->last_lutuv;
use_ic = v->last_use_ic;
- interlace = !!(s->last_pic.f->flags & AV_FRAME_FLAG_INTERLACED);
+ interlace = v->last_interlaced;
}
} else {
srcU = s->next_pic.data[1];
srcV = s->next_pic.data[2];
lutuv = v->next_lutuv;
use_ic = v->next_use_ic;
- interlace = !!(s->next_pic.f->flags & AV_FRAME_FLAG_INTERLACED);
+ interlace = v->next_interlaced;
}
if (!srcU) {
@@ -884,13 +884,13 @@ void ff_vc1_mc_4mv_chroma4(VC1Context *v, int dir, int dir2, int avg)
srcV = s->next_pic.data[2];
lutuv = v->next_lutuv;
use_ic = v->next_use_ic;
- interlace = !!(s->next_pic.f->flags & AV_FRAME_FLAG_INTERLACED);
+ interlace = v->next_interlaced;
} else {
srcU = s->last_pic.data[1];
srcV = s->last_pic.data[2];
lutuv = v->last_lutuv;
use_ic = v->last_use_ic;
- interlace = !!(s->last_pic.f->flags & AV_FRAME_FLAG_INTERLACED);
+ interlace = v->last_interlaced;
}
if (!srcU)
return;
@@ -1009,7 +1009,7 @@ void ff_vc1_interp_mc(VC1Context *v)
int dxy, mx, my, uvmx, uvmy, src_x, src_y, uvsrc_x, uvsrc_y;
int v_edge_pos = s->v_edge_pos >> v->field_mode;
int use_ic = v->next_use_ic;
- int interlace;
+ int interlace = v->next_interlaced;
int linesize, uvlinesize;
if (!v->field_mode && !v->s.next_pic.data[0])
@@ -1034,8 +1034,6 @@ void ff_vc1_interp_mc(VC1Context *v)
srcU = s->next_pic.data[1];
srcV = s->next_pic.data[2];
- interlace = !!(s->next_pic.f->flags & AV_FRAME_FLAG_INTERLACED);
-
src_x = s->mb_x * 16 + (mx >> 2);
src_y = s->mb_y * 16 + (my >> 2);
uvsrc_x = s->mb_x * 8 + (uvmx >> 2);
diff --git a/libavcodec/vc1dec.c b/libavcodec/vc1dec.c
index 71fda305da..36a47502f5 100644
--- a/libavcodec/vc1dec.c
+++ b/libavcodec/vc1dec.c
@@ -1064,6 +1064,8 @@ static int vc1_decode_frame(AVCodecContext *avctx, AVFrame *pict,
v->s.cur_pic_ptr->field_picture = v->field_mode;
v->s.cur_pic_ptr->f->flags |= AV_FRAME_FLAG_INTERLACED * (v->fcm != PROGRESSIVE);
v->s.cur_pic_ptr->f->flags |= AV_FRAME_FLAG_TOP_FIELD_FIRST * !!v->tff;
+ v->last_interlaced = v->s.last_pic_ptr ? v->s.last_pic_ptr->f->flags & AV_FRAME_FLAG_INTERLACED : 0;
+ v->next_interlaced = v->s.next_pic_ptr ? v->s.next_pic_ptr->f->flags & AV_FRAME_FLAG_INTERLACED : 0;
// process pulldown flags
s->cur_pic_ptr->f->repeat_pict = 0;
--
2.40.1

* [FFmpeg-devel] [PATCH v2 43/71] avcodec/mpegpicture: Split MPVPicture into WorkPicture and ordinary Pic
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (40 preceding siblings ...)
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 42/71] avcodec/vc1_mc: Don't check AVFrame INTERLACE flags Andreas Rheinhardt
@ 2024-05-11 20:51 ` Andreas Rheinhardt
2024-06-23 22:28 ` Michael Niedermayer
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 44/71] avcodec/error_resilience: Deduplicate cleanup code Andreas Rheinhardt
` (28 subsequent siblings)
70 siblings, 1 reply; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:51 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
There are two types of MPVPictures: Three (cur_pic, last_pic, next_pic)
that are directly part of MpegEncContext and an array of MPVPictures
that are separately allocated and are mostly accessed via pointers
(cur|last|next)_pic_ptr; they are also used to store AVFrames in the
encoder (necessary due to B-frames). As the name implies, each of the
former is directly associated with one of the _ptr pointers:
They actually share the same underlying buffers, but the ones
that are part of the context can have their data pointers offset
and their linesize doubled for field pictures.
Up until now, each of these had their own references; in particular,
there was an underlying av_frame_ref() to sync cur_pic and cur_pic_ptr
etc. This is wasteful.
This commit changes this relationship: cur_pic, last_pic and next_pic
now become MPVWorkPictures; this structure does not have an AVFrame
at all any more, but only the cached values of data and linesize.
It also contains a pointer to the corresponding MPVPicture, establishing
a more natural relationship between the two.
This already means that creating the context-pictures from the pointers
can no longer fail.
What has not been changed is the fact that the MPVPicture* pointers
are not ownership pointers and that the MPVPictures are part of an
array of MPVPictures that is owned by a single AVCodecContext.
Changing that will be done in a later commit.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/d3d12va_mpeg2.c | 10 +-
libavcodec/d3d12va_vc1.c | 10 +-
libavcodec/dxva2_mpeg2.c | 16 +--
libavcodec/dxva2_vc1.c | 20 ++--
libavcodec/h261dec.c | 7 +-
libavcodec/h263dec.c | 33 +++---
libavcodec/ituh263dec.c | 4 +-
libavcodec/mpeg12dec.c | 56 ++++-----
libavcodec/mpeg12enc.c | 14 +--
libavcodec/mpeg4videodec.c | 4 +-
libavcodec/mpeg4videoenc.c | 4 +-
libavcodec/mpeg_er.c | 6 +-
libavcodec/mpegpicture.c | 56 ++++++---
libavcodec/mpegpicture.h | 30 ++++-
libavcodec/mpegvideo.c | 11 --
libavcodec/mpegvideo.h | 9 +-
libavcodec/mpegvideo_dec.c | 143 +++++++++--------------
libavcodec/mpegvideo_enc.c | 99 ++++++----------
libavcodec/mpegvideo_motion.c | 8 +-
libavcodec/mpv_reconstruct_mb_template.c | 4 +-
libavcodec/mss2.c | 2 +-
libavcodec/nvdec_mpeg12.c | 6 +-
libavcodec/nvdec_mpeg4.c | 6 +-
libavcodec/nvdec_vc1.c | 6 +-
libavcodec/ratecontrol.c | 10 +-
libavcodec/rv10.c | 28 ++---
libavcodec/rv34.c | 38 +++---
libavcodec/snowenc.c | 17 +--
libavcodec/svq1enc.c | 5 +-
libavcodec/vaapi_mpeg2.c | 12 +-
libavcodec/vaapi_mpeg4.c | 14 +--
libavcodec/vaapi_vc1.c | 14 ++-
libavcodec/vc1.c | 2 +-
libavcodec/vc1_block.c | 12 +-
libavcodec/vc1_mc.c | 14 +--
libavcodec/vc1_pred.c | 2 +-
libavcodec/vc1dec.c | 40 +++----
libavcodec/vdpau.c | 2 +-
libavcodec/vdpau_mpeg12.c | 8 +-
libavcodec/vdpau_mpeg4.c | 6 +-
libavcodec/vdpau_vc1.c | 12 +-
libavcodec/videotoolbox.c | 2 +-
libavcodec/wmv2dec.c | 2 +-
43 files changed, 386 insertions(+), 418 deletions(-)
diff --git a/libavcodec/d3d12va_mpeg2.c b/libavcodec/d3d12va_mpeg2.c
index c2cf78104c..86a7d97b34 100644
--- a/libavcodec/d3d12va_mpeg2.c
+++ b/libavcodec/d3d12va_mpeg2.c
@@ -44,7 +44,7 @@ static int d3d12va_mpeg2_start_frame(AVCodecContext *avctx, av_unused const uint
{
const MpegEncContext *s = avctx->priv_data;
D3D12VADecodeContext *ctx = D3D12VA_DECODE_CONTEXT(avctx);
- D3D12DecodePictureContext *ctx_pic = s->cur_pic_ptr->hwaccel_picture_private;
+ D3D12DecodePictureContext *ctx_pic = s->cur_pic.ptr->hwaccel_picture_private;
if (!ctx)
return -1;
@@ -69,7 +69,7 @@ static int d3d12va_mpeg2_start_frame(AVCodecContext *avctx, av_unused const uint
static int d3d12va_mpeg2_decode_slice(AVCodecContext *avctx, const uint8_t *buffer, uint32_t size)
{
const MpegEncContext *s = avctx->priv_data;
- D3D12DecodePictureContext *ctx_pic = s->cur_pic_ptr->hwaccel_picture_private;
+ D3D12DecodePictureContext *ctx_pic = s->cur_pic.ptr->hwaccel_picture_private;
if (ctx_pic->slice_count >= MAX_SLICES) {
return AVERROR(ERANGE);
@@ -88,7 +88,7 @@ static int d3d12va_mpeg2_decode_slice(AVCodecContext *avctx, const uint8_t *buff
static int update_input_arguments(AVCodecContext *avctx, D3D12_VIDEO_DECODE_INPUT_STREAM_ARGUMENTS *input_args, ID3D12Resource *buffer)
{
const MpegEncContext *s = avctx->priv_data;
- D3D12DecodePictureContext *ctx_pic = s->cur_pic_ptr->hwaccel_picture_private;
+ D3D12DecodePictureContext *ctx_pic = s->cur_pic.ptr->hwaccel_picture_private;
const int is_field = s->picture_structure != PICT_FRAME;
const unsigned mb_count = s->mb_width * (s->mb_height >> is_field);
@@ -137,12 +137,12 @@ static int d3d12va_mpeg2_end_frame(AVCodecContext *avctx)
{
int ret;
MpegEncContext *s = avctx->priv_data;
- D3D12DecodePictureContext *ctx_pic = s->cur_pic_ptr->hwaccel_picture_private;
+ D3D12DecodePictureContext *ctx_pic = s->cur_pic.ptr->hwaccel_picture_private;
if (ctx_pic->slice_count <= 0 || ctx_pic->bitstream_size <= 0)
return -1;
- ret = ff_d3d12va_common_end_frame(avctx, s->cur_pic_ptr->f, &ctx_pic->pp, sizeof(ctx_pic->pp),
+ ret = ff_d3d12va_common_end_frame(avctx, s->cur_pic.ptr->f, &ctx_pic->pp, sizeof(ctx_pic->pp),
&ctx_pic->qm, sizeof(ctx_pic->qm), update_input_arguments);
if (!ret)
ff_mpeg_draw_horiz_band(s, 0, avctx->height);
diff --git a/libavcodec/d3d12va_vc1.c b/libavcodec/d3d12va_vc1.c
index c4ac67ca04..dccc0fbffa 100644
--- a/libavcodec/d3d12va_vc1.c
+++ b/libavcodec/d3d12va_vc1.c
@@ -45,7 +45,7 @@ static int d3d12va_vc1_start_frame(AVCodecContext *avctx, av_unused const uint8_
{
const VC1Context *v = avctx->priv_data;
D3D12VADecodeContext *ctx = D3D12VA_DECODE_CONTEXT(avctx);
- D3D12DecodePictureContext *ctx_pic = v->s.cur_pic_ptr->hwaccel_picture_private;
+ D3D12DecodePictureContext *ctx_pic = v->s.cur_pic.ptr->hwaccel_picture_private;
if (!ctx)
return -1;
@@ -67,7 +67,7 @@ static int d3d12va_vc1_start_frame(AVCodecContext *avctx, av_unused const uint8_
static int d3d12va_vc1_decode_slice(AVCodecContext *avctx, const uint8_t *buffer, uint32_t size)
{
const VC1Context *v = avctx->priv_data;
- D3D12DecodePictureContext *ctx_pic = v->s.cur_pic_ptr->hwaccel_picture_private;
+ D3D12DecodePictureContext *ctx_pic = v->s.cur_pic.ptr->hwaccel_picture_private;
if (ctx_pic->slice_count >= MAX_SLICES) {
return AVERROR(ERANGE);
@@ -93,7 +93,7 @@ static int update_input_arguments(AVCodecContext *avctx, D3D12_VIDEO_DECODE_INPU
{
const VC1Context *v = avctx->priv_data;
const MpegEncContext *s = &v->s;
- D3D12DecodePictureContext *ctx_pic = s->cur_pic_ptr->hwaccel_picture_private;
+ D3D12DecodePictureContext *ctx_pic = s->cur_pic.ptr->hwaccel_picture_private;
D3D12_VIDEO_DECODE_FRAME_ARGUMENT *args = &input_args->FrameArguments[input_args->NumFrameArguments++];
const unsigned mb_count = s->mb_width * (s->mb_height >> v->field_mode);
@@ -151,12 +151,12 @@ static int update_input_arguments(AVCodecContext *avctx, D3D12_VIDEO_DECODE_INPU
static int d3d12va_vc1_end_frame(AVCodecContext *avctx)
{
const VC1Context *v = avctx->priv_data;
- D3D12DecodePictureContext *ctx_pic = v->s.cur_pic_ptr->hwaccel_picture_private;
+ D3D12DecodePictureContext *ctx_pic = v->s.cur_pic.ptr->hwaccel_picture_private;
if (ctx_pic->slice_count <= 0 || ctx_pic->bitstream_size <= 0)
return -1;
- return ff_d3d12va_common_end_frame(avctx, v->s.cur_pic_ptr->f,
+ return ff_d3d12va_common_end_frame(avctx, v->s.cur_pic.ptr->f,
&ctx_pic->pp, sizeof(ctx_pic->pp),
NULL, 0,
update_input_arguments);
diff --git a/libavcodec/dxva2_mpeg2.c b/libavcodec/dxva2_mpeg2.c
index d29a5bb538..4b58466878 100644
--- a/libavcodec/dxva2_mpeg2.c
+++ b/libavcodec/dxva2_mpeg2.c
@@ -45,17 +45,17 @@ void ff_dxva2_mpeg2_fill_picture_parameters(AVCodecContext *avctx,
DXVA_PictureParameters *pp)
{
const struct MpegEncContext *s = avctx->priv_data;
- const MPVPicture *current_picture = s->cur_pic_ptr;
+ const MPVPicture *current_picture = s->cur_pic.ptr;
int is_field = s->picture_structure != PICT_FRAME;
memset(pp, 0, sizeof(*pp));
pp->wDeblockedPictureIndex = 0;
if (s->pict_type != AV_PICTURE_TYPE_I)
- pp->wForwardRefPictureIndex = ff_dxva2_get_surface_index(avctx, ctx, s->last_pic.f, 0);
+ pp->wForwardRefPictureIndex = ff_dxva2_get_surface_index(avctx, ctx, s->last_pic.ptr->f, 0);
else
pp->wForwardRefPictureIndex = 0xffff;
if (s->pict_type == AV_PICTURE_TYPE_B)
- pp->wBackwardRefPictureIndex = ff_dxva2_get_surface_index(avctx, ctx, s->next_pic.f, 0);
+ pp->wBackwardRefPictureIndex = ff_dxva2_get_surface_index(avctx, ctx, s->next_pic.ptr->f, 0);
else
pp->wBackwardRefPictureIndex = 0xffff;
pp->wDecodedPictureIndex = ff_dxva2_get_surface_index(avctx, ctx, current_picture->f, 1);
@@ -157,7 +157,7 @@ static int commit_bitstream_and_slice_buffer(AVCodecContext *avctx,
const struct MpegEncContext *s = avctx->priv_data;
AVDXVAContext *ctx = DXVA_CONTEXT(avctx);
struct dxva2_picture_context *ctx_pic =
- s->cur_pic_ptr->hwaccel_picture_private;
+ s->cur_pic.ptr->hwaccel_picture_private;
const int is_field = s->picture_structure != PICT_FRAME;
const unsigned mb_count = s->mb_width * (s->mb_height >> is_field);
void *dxva_data_ptr;
@@ -260,7 +260,7 @@ static int dxva2_mpeg2_start_frame(AVCodecContext *avctx,
const struct MpegEncContext *s = avctx->priv_data;
AVDXVAContext *ctx = DXVA_CONTEXT(avctx);
struct dxva2_picture_context *ctx_pic =
- s->cur_pic_ptr->hwaccel_picture_private;
+ s->cur_pic.ptr->hwaccel_picture_private;
if (!DXVA_CONTEXT_VALID(avctx, ctx))
return -1;
@@ -280,7 +280,7 @@ static int dxva2_mpeg2_decode_slice(AVCodecContext *avctx,
{
const struct MpegEncContext *s = avctx->priv_data;
struct dxva2_picture_context *ctx_pic =
- s->cur_pic_ptr->hwaccel_picture_private;
+ s->cur_pic.ptr->hwaccel_picture_private;
unsigned position;
if (ctx_pic->slice_count >= MAX_SLICES) {
@@ -302,12 +302,12 @@ static int dxva2_mpeg2_end_frame(AVCodecContext *avctx)
{
struct MpegEncContext *s = avctx->priv_data;
struct dxva2_picture_context *ctx_pic =
- s->cur_pic_ptr->hwaccel_picture_private;
+ s->cur_pic.ptr->hwaccel_picture_private;
int ret;
if (ctx_pic->slice_count <= 0 || ctx_pic->bitstream_size <= 0)
return -1;
- ret = ff_dxva2_common_end_frame(avctx, s->cur_pic_ptr->f,
+ ret = ff_dxva2_common_end_frame(avctx, s->cur_pic.ptr->f,
&ctx_pic->pp, sizeof(ctx_pic->pp),
&ctx_pic->qm, sizeof(ctx_pic->qm),
commit_bitstream_and_slice_buffer);
diff --git a/libavcodec/dxva2_vc1.c b/libavcodec/dxva2_vc1.c
index f536da1008..6dc9cd8b5a 100644
--- a/libavcodec/dxva2_vc1.c
+++ b/libavcodec/dxva2_vc1.c
@@ -46,7 +46,7 @@ void ff_dxva2_vc1_fill_picture_parameters(AVCodecContext *avctx,
{
const VC1Context *v = avctx->priv_data;
const MpegEncContext *s = &v->s;
- const MPVPicture *current_picture = s->cur_pic_ptr;
+ const MPVPicture *current_picture = s->cur_pic.ptr;
int intcomp = 0;
// determine if intensity compensation is needed
@@ -58,12 +58,12 @@ void ff_dxva2_vc1_fill_picture_parameters(AVCodecContext *avctx,
}
memset(pp, 0, sizeof(*pp));
- if (s->pict_type != AV_PICTURE_TYPE_I && !v->bi_type)
- pp->wForwardRefPictureIndex = ff_dxva2_get_surface_index(avctx, ctx, s->last_pic.f, 0);
+ if (s->pict_type != AV_PICTURE_TYPE_I && !v->bi_type && s->last_pic.ptr)
+ pp->wForwardRefPictureIndex = ff_dxva2_get_surface_index(avctx, ctx, s->last_pic.ptr->f, 0);
else
pp->wForwardRefPictureIndex = 0xffff;
- if (s->pict_type == AV_PICTURE_TYPE_B && !v->bi_type)
- pp->wBackwardRefPictureIndex = ff_dxva2_get_surface_index(avctx, ctx, s->next_pic.f, 0);
+ if (s->pict_type == AV_PICTURE_TYPE_B && !v->bi_type && s->next_pic.ptr)
+ pp->wBackwardRefPictureIndex = ff_dxva2_get_surface_index(avctx, ctx, s->next_pic.ptr->f, 0);
else
pp->wBackwardRefPictureIndex = 0xffff;
pp->wDecodedPictureIndex =
@@ -191,7 +191,7 @@ static int commit_bitstream_and_slice_buffer(AVCodecContext *avctx,
const VC1Context *v = avctx->priv_data;
AVDXVAContext *ctx = DXVA_CONTEXT(avctx);
const MpegEncContext *s = &v->s;
- struct dxva2_picture_context *ctx_pic = s->cur_pic_ptr->hwaccel_picture_private;
+ struct dxva2_picture_context *ctx_pic = s->cur_pic.ptr->hwaccel_picture_private;
static const uint8_t start_code[] = { 0, 0, 1, 0x0d };
const unsigned start_code_size = avctx->codec_id == AV_CODEC_ID_VC1 ? sizeof(start_code) : 0;
@@ -317,7 +317,7 @@ static int dxva2_vc1_start_frame(AVCodecContext *avctx,
{
const VC1Context *v = avctx->priv_data;
AVDXVAContext *ctx = DXVA_CONTEXT(avctx);
- struct dxva2_picture_context *ctx_pic = v->s.cur_pic_ptr->hwaccel_picture_private;
+ struct dxva2_picture_context *ctx_pic = v->s.cur_pic.ptr->hwaccel_picture_private;
if (!DXVA_CONTEXT_VALID(avctx, ctx))
return -1;
@@ -336,7 +336,7 @@ static int dxva2_vc1_decode_slice(AVCodecContext *avctx,
uint32_t size)
{
const VC1Context *v = avctx->priv_data;
- const MPVPicture *current_picture = v->s.cur_pic_ptr;
+ const MPVPicture *current_picture = v->s.cur_pic.ptr;
struct dxva2_picture_context *ctx_pic = current_picture->hwaccel_picture_private;
unsigned position;
@@ -364,13 +364,13 @@ static int dxva2_vc1_decode_slice(AVCodecContext *avctx,
static int dxva2_vc1_end_frame(AVCodecContext *avctx)
{
VC1Context *v = avctx->priv_data;
- struct dxva2_picture_context *ctx_pic = v->s.cur_pic_ptr->hwaccel_picture_private;
+ struct dxva2_picture_context *ctx_pic = v->s.cur_pic.ptr->hwaccel_picture_private;
int ret;
if (ctx_pic->slice_count <= 0 || ctx_pic->bitstream_size <= 0)
return -1;
- ret = ff_dxva2_common_end_frame(avctx, v->s.cur_pic_ptr->f,
+ ret = ff_dxva2_common_end_frame(avctx, v->s.cur_pic.ptr->f,
&ctx_pic->pp, sizeof(ctx_pic->pp),
NULL, 0,
commit_bitstream_and_slice_buffer);
diff --git a/libavcodec/h261dec.c b/libavcodec/h261dec.c
index 00edd7a7c2..9acfd984ee 100644
--- a/libavcodec/h261dec.c
+++ b/libavcodec/h261dec.c
@@ -649,12 +649,11 @@ static int h261_decode_frame(AVCodecContext *avctx, AVFrame *pict,
}
ff_mpv_frame_end(s);
- av_assert0(s->cur_pic.f->pict_type == s->cur_pic_ptr->f->pict_type);
- av_assert0(s->cur_pic.f->pict_type == s->pict_type);
+ av_assert0(s->pict_type == s->cur_pic.ptr->f->pict_type);
- if ((ret = av_frame_ref(pict, s->cur_pic_ptr->f)) < 0)
+ if ((ret = av_frame_ref(pict, s->cur_pic.ptr->f)) < 0)
return ret;
- ff_print_debug_info(s, s->cur_pic_ptr, pict);
+ ff_print_debug_info(s, s->cur_pic.ptr, pict);
*got_frame = 1;
diff --git a/libavcodec/h263dec.c b/libavcodec/h263dec.c
index 6ae634fceb..4fe4a30000 100644
--- a/libavcodec/h263dec.c
+++ b/libavcodec/h263dec.c
@@ -432,22 +432,22 @@ int ff_h263_decode_frame(AVCodecContext *avctx, AVFrame *pict,
/* no supplementary picture */
if (buf_size == 0) {
/* special case for last picture */
- if (s->low_delay == 0 && s->next_pic_ptr) {
- if ((ret = av_frame_ref(pict, s->next_pic_ptr->f)) < 0)
+ if (s->low_delay == 0 && s->next_pic.ptr) {
+ if ((ret = av_frame_ref(pict, s->next_pic.ptr->f)) < 0)
return ret;
- s->next_pic_ptr = NULL;
+ s->next_pic.ptr = NULL;
*got_frame = 1;
- } else if (s->skipped_last_frame && s->cur_pic_ptr) {
+ } else if (s->skipped_last_frame && s->cur_pic.ptr) {
/* Output the last picture we decoded again if the stream ended with
* an NVOP */
- if ((ret = av_frame_ref(pict, s->cur_pic_ptr->f)) < 0)
+ if ((ret = av_frame_ref(pict, s->cur_pic.ptr->f)) < 0)
return ret;
/* Copy props from the last input packet. Otherwise, props from the last
* returned picture would be reused */
if ((ret = ff_decode_frame_props(avctx, pict)) < 0)
return ret;
- s->cur_pic_ptr = NULL;
+ s->cur_pic.ptr = NULL;
*got_frame = 1;
}
@@ -561,7 +561,7 @@ retry:
s->gob_index = H263_GOB_HEIGHT(s->height);
/* skip B-frames if we don't have reference frames */
- if (!s->last_pic_ptr &&
+ if (!s->last_pic.ptr &&
(s->pict_type == AV_PICTURE_TYPE_B || s->droppable))
return get_consumed_bytes(s, buf_size);
if ((avctx->skip_frame >= AVDISCARD_NONREF &&
@@ -647,21 +647,20 @@ frame_end:
if (!s->divx_packed && avctx->hwaccel)
ff_thread_finish_setup(avctx);
- av_assert1(s->cur_pic.f->pict_type == s->cur_pic_ptr->f->pict_type);
- av_assert1(s->cur_pic.f->pict_type == s->pict_type);
+ av_assert1(s->pict_type == s->cur_pic.ptr->f->pict_type);
if (s->pict_type == AV_PICTURE_TYPE_B || s->low_delay) {
- if ((ret = av_frame_ref(pict, s->cur_pic_ptr->f)) < 0)
+ if ((ret = av_frame_ref(pict, s->cur_pic.ptr->f)) < 0)
return ret;
- ff_print_debug_info(s, s->cur_pic_ptr, pict);
- ff_mpv_export_qp_table(s, pict, s->cur_pic_ptr, FF_MPV_QSCALE_TYPE_MPEG1);
- } else if (s->last_pic_ptr) {
- if ((ret = av_frame_ref(pict, s->last_pic_ptr->f)) < 0)
+ ff_print_debug_info(s, s->cur_pic.ptr, pict);
+ ff_mpv_export_qp_table(s, pict, s->cur_pic.ptr, FF_MPV_QSCALE_TYPE_MPEG1);
+ } else if (s->last_pic.ptr) {
+ if ((ret = av_frame_ref(pict, s->last_pic.ptr->f)) < 0)
return ret;
- ff_print_debug_info(s, s->last_pic_ptr, pict);
- ff_mpv_export_qp_table(s, pict, s->last_pic_ptr, FF_MPV_QSCALE_TYPE_MPEG1);
+ ff_print_debug_info(s, s->last_pic.ptr, pict);
+ ff_mpv_export_qp_table(s, pict, s->last_pic.ptr, FF_MPV_QSCALE_TYPE_MPEG1);
}
- if (s->last_pic_ptr || s->low_delay) {
+ if (s->last_pic.ptr || s->low_delay) {
if ( pict->format == AV_PIX_FMT_YUV420P
&& (s->codec_tag == AV_RL32("GEOV") || s->codec_tag == AV_RL32("GEOX"))) {
for (int p = 0; p < 3; p++) {
diff --git a/libavcodec/ituh263dec.c b/libavcodec/ituh263dec.c
index 2e4d74adc8..0809048362 100644
--- a/libavcodec/ituh263dec.c
+++ b/libavcodec/ituh263dec.c
@@ -750,12 +750,12 @@ static inline void set_one_direct_mv(MpegEncContext *s, const MPVPicture *p, int
static int set_direct_mv(MpegEncContext *s)
{
const int mb_index = s->mb_x + s->mb_y * s->mb_stride;
- const MPVPicture *p = &s->next_pic;
+ const MPVPicture *p = s->next_pic.ptr;
int colocated_mb_type = p->mb_type[mb_index];
int i;
if (s->codec_tag == AV_RL32("U263") && p->f->pict_type == AV_PICTURE_TYPE_I) {
- p = &s->last_pic;
+ p = s->last_pic.ptr;
colocated_mb_type = p->mb_type[mb_index];
}
diff --git a/libavcodec/mpeg12dec.c b/libavcodec/mpeg12dec.c
index 6877b9ef4a..e3f2dd8af7 100644
--- a/libavcodec/mpeg12dec.c
+++ b/libavcodec/mpeg12dec.c
@@ -1292,7 +1292,7 @@ static int mpeg_field_start(MpegEncContext *s, const uint8_t *buf, int buf_size)
return ret;
if (s->picture_structure != PICT_FRAME) {
- s->cur_pic_ptr->f->flags |= AV_FRAME_FLAG_TOP_FIELD_FIRST *
+ s->cur_pic.ptr->f->flags |= AV_FRAME_FLAG_TOP_FIELD_FIRST *
(s->picture_structure == PICT_TOP_FIELD);
for (int i = 0; i < 3; i++) {
@@ -1309,19 +1309,19 @@ static int mpeg_field_start(MpegEncContext *s, const uint8_t *buf, int buf_size)
ff_mpeg_er_frame_start(s);
/* first check if we must repeat the frame */
- s->cur_pic_ptr->f->repeat_pict = 0;
+ s->cur_pic.ptr->f->repeat_pict = 0;
if (s->repeat_first_field) {
if (s->progressive_sequence) {
if (s->top_field_first)
- s->cur_pic_ptr->f->repeat_pict = 4;
+ s->cur_pic.ptr->f->repeat_pict = 4;
else
- s->cur_pic_ptr->f->repeat_pict = 2;
+ s->cur_pic.ptr->f->repeat_pict = 2;
} else if (s->progressive_frame) {
- s->cur_pic_ptr->f->repeat_pict = 1;
+ s->cur_pic.ptr->f->repeat_pict = 1;
}
}
- ret = ff_frame_new_side_data(s->avctx, s->cur_pic_ptr->f,
+ ret = ff_frame_new_side_data(s->avctx, s->cur_pic.ptr->f,
AV_FRAME_DATA_PANSCAN, sizeof(s1->pan_scan),
&pan_scan);
if (ret < 0)
@@ -1331,14 +1331,14 @@ static int mpeg_field_start(MpegEncContext *s, const uint8_t *buf, int buf_size)
if (s1->a53_buf_ref) {
ret = ff_frame_new_side_data_from_buf(
- s->avctx, s->cur_pic_ptr->f, AV_FRAME_DATA_A53_CC,
+ s->avctx, s->cur_pic.ptr->f, AV_FRAME_DATA_A53_CC,
&s1->a53_buf_ref, NULL);
if (ret < 0)
return ret;
}
if (s1->has_stereo3d) {
- AVStereo3D *stereo = av_stereo3d_create_side_data(s->cur_pic_ptr->f);
+ AVStereo3D *stereo = av_stereo3d_create_side_data(s->cur_pic.ptr->f);
if (!stereo)
return AVERROR(ENOMEM);
@@ -1348,7 +1348,7 @@ static int mpeg_field_start(MpegEncContext *s, const uint8_t *buf, int buf_size)
if (s1->has_afd) {
AVFrameSideData *sd;
- ret = ff_frame_new_side_data(s->avctx, s->cur_pic_ptr->f,
+ ret = ff_frame_new_side_data(s->avctx, s->cur_pic.ptr->f,
AV_FRAME_DATA_AFD, 1, &sd);
if (ret < 0)
return ret;
@@ -1360,7 +1360,7 @@ static int mpeg_field_start(MpegEncContext *s, const uint8_t *buf, int buf_size)
if (HAVE_THREADS && (avctx->active_thread_type & FF_THREAD_FRAME))
ff_thread_finish_setup(avctx);
} else { // second field
- if (!s->cur_pic_ptr) {
+ if (!s->cur_pic.ptr) {
av_log(s->avctx, AV_LOG_ERROR, "first field missing\n");
return AVERROR_INVALIDDATA;
}
@@ -1377,10 +1377,10 @@ static int mpeg_field_start(MpegEncContext *s, const uint8_t *buf, int buf_size)
return ret;
for (int i = 0; i < 3; i++) {
- s->cur_pic.data[i] = s->cur_pic_ptr->f->data[i];
+ s->cur_pic.data[i] = s->cur_pic.ptr->f->data[i];
if (s->picture_structure == PICT_BOTTOM_FIELD)
s->cur_pic.data[i] +=
- s->cur_pic_ptr->f->linesize[i];
+ s->cur_pic.ptr->f->linesize[i];
}
}
@@ -1735,7 +1735,7 @@ static int slice_end(AVCodecContext *avctx, AVFrame *pict, int *got_output)
Mpeg1Context *s1 = avctx->priv_data;
MpegEncContext *s = &s1->mpeg_enc_ctx;
- if (!s->context_initialized || !s->cur_pic_ptr)
+ if (!s->context_initialized || !s->cur_pic.ptr)
return 0;
if (s->avctx->hwaccel) {
@@ -1756,20 +1756,20 @@ static int slice_end(AVCodecContext *avctx, AVFrame *pict, int *got_output)
ff_mpv_frame_end(s);
if (s->pict_type == AV_PICTURE_TYPE_B || s->low_delay) {
- int ret = av_frame_ref(pict, s->cur_pic_ptr->f);
+ int ret = av_frame_ref(pict, s->cur_pic.ptr->f);
if (ret < 0)
return ret;
- ff_print_debug_info(s, s->cur_pic_ptr, pict);
- ff_mpv_export_qp_table(s, pict, s->cur_pic_ptr, FF_MPV_QSCALE_TYPE_MPEG2);
+ ff_print_debug_info(s, s->cur_pic.ptr, pict);
+ ff_mpv_export_qp_table(s, pict, s->cur_pic.ptr, FF_MPV_QSCALE_TYPE_MPEG2);
*got_output = 1;
} else {
/* latency of 1 frame for I- and P-frames */
- if (s->last_pic_ptr && !s->last_pic_ptr->dummy) {
- int ret = av_frame_ref(pict, s->last_pic_ptr->f);
+ if (s->last_pic.ptr && !s->last_pic.ptr->dummy) {
+ int ret = av_frame_ref(pict, s->last_pic.ptr->f);
if (ret < 0)
return ret;
- ff_print_debug_info(s, s->last_pic_ptr, pict);
- ff_mpv_export_qp_table(s, pict, s->last_pic_ptr, FF_MPV_QSCALE_TYPE_MPEG2);
+ ff_print_debug_info(s, s->last_pic.ptr, pict);
+ ff_mpv_export_qp_table(s, pict, s->last_pic.ptr, FF_MPV_QSCALE_TYPE_MPEG2);
*got_output = 1;
}
}
@@ -2405,7 +2405,7 @@ static int decode_chunks(AVCodecContext *avctx, AVFrame *picture,
return AVERROR_INVALIDDATA;
}
- if (!s2->last_pic_ptr) {
+ if (!s2->last_pic.ptr) {
/* Skip B-frames if we do not have reference frames and
* GOP is not closed. */
if (s2->pict_type == AV_PICTURE_TYPE_B) {
@@ -2419,7 +2419,7 @@ static int decode_chunks(AVCodecContext *avctx, AVFrame *picture,
}
if (s2->pict_type == AV_PICTURE_TYPE_I || (s2->avctx->flags2 & AV_CODEC_FLAG2_SHOW_ALL))
s->sync = 1;
- if (!s2->next_pic_ptr) {
+ if (!s2->next_pic.ptr) {
/* Skip P-frames if we do not have a reference frame or
* we have an invalid header. */
if (s2->pict_type == AV_PICTURE_TYPE_P && !s->sync) {
@@ -2460,7 +2460,7 @@ static int decode_chunks(AVCodecContext *avctx, AVFrame *picture,
if ((ret = mpeg_field_start(s2, buf, buf_size)) < 0)
return ret;
}
- if (!s2->cur_pic_ptr) {
+ if (!s2->cur_pic.ptr) {
av_log(avctx, AV_LOG_ERROR,
"current_picture not initialized\n");
return AVERROR_INVALIDDATA;
@@ -2524,12 +2524,12 @@ static int mpeg_decode_frame(AVCodecContext *avctx, AVFrame *picture,
if (buf_size == 0 || (buf_size == 4 && AV_RB32(buf) == SEQ_END_CODE)) {
/* special case for last picture */
- if (s2->low_delay == 0 && s2->next_pic_ptr) {
- int ret = av_frame_ref(picture, s2->next_pic_ptr->f);
+ if (s2->low_delay == 0 && s2->next_pic.ptr) {
+ int ret = av_frame_ref(picture, s2->next_pic.ptr->f);
if (ret < 0)
return ret;
- s2->next_pic_ptr = NULL;
+ s2->next_pic.ptr = NULL;
*got_output = 1;
}
@@ -2552,14 +2552,14 @@ static int mpeg_decode_frame(AVCodecContext *avctx, AVFrame *picture,
}
s->extradata_decoded = 1;
if (ret < 0 && (avctx->err_recognition & AV_EF_EXPLODE)) {
- s2->cur_pic_ptr = NULL;
+ s2->cur_pic.ptr = NULL;
return ret;
}
}
ret = decode_chunks(avctx, picture, got_output, buf, buf_size);
if (ret<0 || *got_output) {
- s2->cur_pic_ptr = NULL;
+ s2->cur_pic.ptr = NULL;
if (s->timecode_frame_start != -1 && *got_output) {
char tcbuf[AV_TIMECODE_STR_SIZE];
diff --git a/libavcodec/mpeg12enc.c b/libavcodec/mpeg12enc.c
index 42ff92cb16..304cfb9046 100644
--- a/libavcodec/mpeg12enc.c
+++ b/libavcodec/mpeg12enc.c
@@ -290,7 +290,7 @@ static void mpeg1_encode_sequence_header(MpegEncContext *s)
AVRational aspect_ratio = s->avctx->sample_aspect_ratio;
int aspect_ratio_info;
- if (!(s->cur_pic.f->flags & AV_FRAME_FLAG_KEY))
+ if (!(s->cur_pic.ptr->f->flags & AV_FRAME_FLAG_KEY))
return;
if (aspect_ratio.num == 0 || aspect_ratio.den == 0)
@@ -382,7 +382,7 @@ static void mpeg1_encode_sequence_header(MpegEncContext *s)
put_bits(&s->pb, 2, mpeg12->frame_rate_ext.num-1); // frame_rate_ext_n
put_bits(&s->pb, 5, mpeg12->frame_rate_ext.den-1); // frame_rate_ext_d
- side_data = av_frame_get_side_data(s->cur_pic_ptr->f, AV_FRAME_DATA_PANSCAN);
+ side_data = av_frame_get_side_data(s->cur_pic.ptr->f, AV_FRAME_DATA_PANSCAN);
if (side_data) {
const AVPanScan *pan_scan = (AVPanScan *)side_data->data;
if (pan_scan->width && pan_scan->height) {
@@ -419,10 +419,10 @@ static void mpeg1_encode_sequence_header(MpegEncContext *s)
/* time code: we must convert from the real frame rate to a
* fake MPEG frame rate in case of low frame rate */
fps = (framerate.num + framerate.den / 2) / framerate.den;
- time_code = s->cur_pic_ptr->coded_picture_number +
+ time_code = s->cur_pic.ptr->coded_picture_number +
mpeg12->timecode_frame_start;
- mpeg12->gop_picture_number = s->cur_pic_ptr->coded_picture_number;
+ mpeg12->gop_picture_number = s->cur_pic.ptr->coded_picture_number;
av_assert0(mpeg12->drop_frame_timecode == !!(mpeg12->tc.flags & AV_TIMECODE_FLAG_DROPFRAME));
if (mpeg12->drop_frame_timecode)
@@ -530,7 +530,7 @@ void ff_mpeg1_encode_picture_header(MpegEncContext *s)
if (s->progressive_sequence)
put_bits(&s->pb, 1, 0); /* no repeat */
else
- put_bits(&s->pb, 1, !!(s->cur_pic_ptr->f->flags & AV_FRAME_FLAG_TOP_FIELD_FIRST));
+ put_bits(&s->pb, 1, !!(s->cur_pic.ptr->f->flags & AV_FRAME_FLAG_TOP_FIELD_FIRST));
/* XXX: optimize the generation of this flag with entropy measures */
s->frame_pred_frame_dct = s->progressive_sequence;
@@ -554,7 +554,7 @@ void ff_mpeg1_encode_picture_header(MpegEncContext *s)
for (i = 0; i < sizeof(svcd_scan_offset_placeholder); i++)
put_bits(&s->pb, 8, svcd_scan_offset_placeholder[i]);
}
- side_data = av_frame_get_side_data(s->cur_pic_ptr->f,
+ side_data = av_frame_get_side_data(s->cur_pic.ptr->f,
AV_FRAME_DATA_STEREO3D);
if (side_data) {
const AVStereo3D *stereo = (AVStereo3D *)side_data->data;
@@ -594,7 +594,7 @@ void ff_mpeg1_encode_picture_header(MpegEncContext *s)
}
if (CONFIG_MPEG2VIDEO_ENCODER && mpeg12->a53_cc) {
- side_data = av_frame_get_side_data(s->cur_pic_ptr->f,
+ side_data = av_frame_get_side_data(s->cur_pic.ptr->f,
AV_FRAME_DATA_A53_CC);
if (side_data) {
if (side_data->size <= A53_MAX_CC_COUNT * 3 && side_data->size % 3 == 0) {
diff --git a/libavcodec/mpeg4videodec.c b/libavcodec/mpeg4videodec.c
index 8f2e03414b..6cdab62b46 100644
--- a/libavcodec/mpeg4videodec.c
+++ b/libavcodec/mpeg4videodec.c
@@ -1811,7 +1811,7 @@ static int mpeg4_decode_mb(MpegEncContext *s, int16_t block[6][64])
s->last_mv[i][1][1] = 0;
}
- ff_thread_await_progress(&s->next_pic_ptr->tf, s->mb_y, 0);
+ ff_thread_await_progress(&s->next_pic.ptr->tf, s->mb_y, 0);
}
/* if we skipped it in the future P-frame than skip it now too */
@@ -2016,7 +2016,7 @@ end:
if (s->pict_type == AV_PICTURE_TYPE_B) {
const int delta = s->mb_x + 1 == s->mb_width ? 2 : 1;
- ff_thread_await_progress(&s->next_pic_ptr->tf,
+ ff_thread_await_progress(&s->next_pic.ptr->tf,
(s->mb_x + delta >= s->mb_width)
? FFMIN(s->mb_y + 1, s->mb_height - 1)
: s->mb_y, 0);
diff --git a/libavcodec/mpeg4videoenc.c b/libavcodec/mpeg4videoenc.c
index 036171fe70..2f4b1a1d52 100644
--- a/libavcodec/mpeg4videoenc.c
+++ b/libavcodec/mpeg4videoenc.c
@@ -888,7 +888,7 @@ static void mpeg4_encode_gop_header(MpegEncContext *s)
put_bits(&s->pb, 16, 0);
put_bits(&s->pb, 16, GOP_STARTCODE);
- time = s->cur_pic_ptr->f->pts;
+ time = s->cur_pic.ptr->f->pts;
if (s->reordered_input_picture[1])
time = FFMIN(time, s->reordered_input_picture[1]->f->pts);
time = time * s->avctx->time_base.num;
@@ -1100,7 +1100,7 @@ int ff_mpeg4_encode_picture_header(MpegEncContext *s)
}
put_bits(&s->pb, 3, 0); /* intra dc VLC threshold */
if (!s->progressive_sequence) {
- put_bits(&s->pb, 1, !!(s->cur_pic_ptr->f->flags & AV_FRAME_FLAG_TOP_FIELD_FIRST));
+ put_bits(&s->pb, 1, !!(s->cur_pic.ptr->f->flags & AV_FRAME_FLAG_TOP_FIELD_FIRST));
put_bits(&s->pb, 1, s->alternate_scan);
}
// FIXME sprite stuff
diff --git a/libavcodec/mpeg_er.c b/libavcodec/mpeg_er.c
index 21fe7d6f71..f9421ec91f 100644
--- a/libavcodec/mpeg_er.c
+++ b/libavcodec/mpeg_er.c
@@ -49,9 +49,9 @@ void ff_mpeg_er_frame_start(MpegEncContext *s)
{
ERContext *er = &s->er;
- set_erpic(&er->cur_pic, s->cur_pic_ptr);
- set_erpic(&er->next_pic, s->next_pic_ptr);
- set_erpic(&er->last_pic, s->last_pic_ptr);
+ set_erpic(&er->cur_pic, s->cur_pic.ptr);
+ set_erpic(&er->next_pic, s->next_pic.ptr);
+ set_erpic(&er->last_pic, s->last_pic.ptr);
er->pp_time = s->pp_time;
er->pb_time = s->pb_time;
diff --git a/libavcodec/mpegpicture.c b/libavcodec/mpegpicture.c
index 429c110397..9d5a24523f 100644
--- a/libavcodec/mpegpicture.c
+++ b/libavcodec/mpegpicture.c
@@ -45,6 +45,45 @@ static void av_noinline free_picture_tables(MPVPicture *pic)
pic->mb_height = 0;
}
+void ff_mpv_unref_picture(MPVWorkPicture *pic)
+{
+ if (pic->ptr)
+ ff_mpeg_unref_picture(pic->ptr);
+ memset(pic, 0, sizeof(*pic));
+}
+
+static void set_workpic_from_pic(MPVWorkPicture *wpic, const MPVPicture *pic)
+{
+ for (int i = 0; i < MPV_MAX_PLANES; i++) {
+ wpic->data[i] = pic->f->data[i];
+ wpic->linesize[i] = pic->f->linesize[i];
+ }
+ wpic->qscale_table = pic->qscale_table;
+ wpic->mb_type = pic->mb_type;
+ wpic->mbskip_table = pic->mbskip_table;
+
+ for (int i = 0; i < 2; i++) {
+ wpic->motion_val[i] = pic->motion_val[i];
+ wpic->ref_index[i] = pic->ref_index[i];
+ }
+ wpic->reference = pic->reference;
+}
+
+void ff_mpv_replace_picture(MPVWorkPicture *dst, const MPVWorkPicture *src)
+{
+ memcpy(dst, src, sizeof(*dst));
+}
+
+void ff_mpv_workpic_from_pic(MPVWorkPicture *wpic, MPVPicture *pic)
+{
+ if (!pic) {
+ memset(wpic, 0, sizeof(*wpic));
+ return;
+ }
+ wpic->ptr = pic;
+ set_workpic_from_pic(wpic, pic);
+}
+
int ff_mpeg_framesize_alloc(AVCodecContext *avctx, MotionEstContext *me,
ScratchpadContext *sc, int linesize)
{
@@ -143,17 +182,13 @@ static int alloc_picture_tables(BufferPoolContext *pools, MPVPicture *pic,
return 0;
}
-int ff_mpv_alloc_pic_accessories(AVCodecContext *avctx, MPVPicture *pic,
+int ff_mpv_alloc_pic_accessories(AVCodecContext *avctx, MPVWorkPicture *wpic,
MotionEstContext *me, ScratchpadContext *sc,
BufferPoolContext *pools, int mb_height)
{
+ MPVPicture *pic = wpic->ptr;
int ret;
- for (int i = 0; i < MPV_MAX_PLANES; i++) {
- pic->data[i] = pic->f->data[i];
- pic->linesize[i] = pic->f->linesize[i];
- }
-
ret = ff_mpeg_framesize_alloc(avctx, me, sc,
pic->f->linesize[0]);
if (ret < 0)
@@ -170,6 +205,7 @@ int ff_mpv_alloc_pic_accessories(AVCodecContext *avctx, MPVPicture *pic,
for (int i = 0; i < 2; i++)
pic->motion_val[i] = pic->motion_val_base[i] + 4;
}
+ set_workpic_from_pic(wpic, pic);
return 0;
fail:
@@ -190,9 +226,6 @@ void ff_mpeg_unref_picture(MPVPicture *pic)
free_picture_tables(pic);
- memset(pic->data, 0, sizeof(pic->data));
- memset(pic->linesize, 0, sizeof(pic->linesize));
-
pic->dummy = 0;
pic->field_picture = 0;
@@ -236,11 +269,6 @@ int ff_mpeg_ref_picture(MPVPicture *dst, MPVPicture *src)
if (ret < 0)
goto fail;
- for (int i = 0; i < MPV_MAX_PLANES; i++) {
- dst->data[i] = src->data[i];
- dst->linesize[i] = src->linesize[i];
- }
-
update_picture_tables(dst, src);
ff_refstruct_replace(&dst->hwaccel_picture_private,
diff --git a/libavcodec/mpegpicture.h b/libavcodec/mpegpicture.h
index f0837b158a..7bf204dd5b 100644
--- a/libavcodec/mpegpicture.h
+++ b/libavcodec/mpegpicture.h
@@ -58,9 +58,6 @@ typedef struct MPVPicture {
struct AVFrame *f;
ThreadFrame tf;
- uint8_t *data[MPV_MAX_PLANES];
- ptrdiff_t linesize[MPV_MAX_PLANES];
-
int8_t *qscale_table_base;
int8_t *qscale_table;
@@ -93,10 +90,30 @@ typedef struct MPVPicture {
int coded_picture_number;
} MPVPicture;
+typedef struct MPVWorkPicture {
+ uint8_t *data[MPV_MAX_PLANES];
+ ptrdiff_t linesize[MPV_MAX_PLANES];
+
+ MPVPicture *ptr;
+
+ int8_t *qscale_table;
+
+ int16_t (*motion_val[2])[2];
+
+ uint32_t *mb_type; ///< types and macros are defined in mpegutils.h
+
+ uint8_t *mbskip_table;
+
+ int8_t *ref_index[2];
+
+ int reference;
+} MPVWorkPicture;
+
/**
- * Allocate an MPVPicture's accessories, but not the AVFrame's buffer itself.
+ * Allocate an MPVPicture's accessories (but not the AVFrame's buffer itself)
+ * and set the MPVWorkPicture's fields.
*/
-int ff_mpv_alloc_pic_accessories(AVCodecContext *avctx, MPVPicture *pic,
+int ff_mpv_alloc_pic_accessories(AVCodecContext *avctx, MPVWorkPicture *pic,
MotionEstContext *me, ScratchpadContext *sc,
BufferPoolContext *pools, int mb_height);
@@ -113,6 +130,9 @@ int ff_mpeg_framesize_alloc(AVCodecContext *avctx, MotionEstContext *me,
ScratchpadContext *sc, int linesize);
int ff_mpeg_ref_picture(MPVPicture *dst, MPVPicture *src);
+void ff_mpv_unref_picture(MPVWorkPicture *pic);
+void ff_mpv_workpic_from_pic(MPVWorkPicture *wpic, MPVPicture *pic);
+void ff_mpv_replace_picture(MPVWorkPicture *dst, const MPVWorkPicture *src);
void ff_mpeg_unref_picture(MPVPicture *picture);
void ff_mpv_picture_free(MPVPicture *pic);
diff --git a/libavcodec/mpegvideo.c b/libavcodec/mpegvideo.c
index c24b7207b1..e062749291 100644
--- a/libavcodec/mpegvideo.c
+++ b/libavcodec/mpegvideo.c
@@ -763,11 +763,6 @@ av_cold int ff_mpv_common_init(MpegEncContext *s)
goto fail_nomem;
}
- if (!(s->next_pic.f = av_frame_alloc()) ||
- !(s->last_pic.f = av_frame_alloc()) ||
- !(s->cur_pic.f = av_frame_alloc()))
- goto fail_nomem;
-
if ((ret = ff_mpv_init_context_frame(s)))
goto fail;
@@ -840,15 +835,9 @@ void ff_mpv_common_end(MpegEncContext *s)
ff_mpv_picture_free(&s->picture[i]);
}
av_freep(&s->picture);
- ff_mpv_picture_free(&s->last_pic);
- ff_mpv_picture_free(&s->cur_pic);
- ff_mpv_picture_free(&s->next_pic);
s->context_initialized = 0;
s->context_reinit = 0;
- s->last_pic_ptr =
- s->next_pic_ptr =
- s->cur_pic_ptr = NULL;
s->linesize = s->uvlinesize = 0;
}
diff --git a/libavcodec/mpegvideo.h b/libavcodec/mpegvideo.h
index 6d96376a6e..3e2c98b039 100644
--- a/libavcodec/mpegvideo.h
+++ b/libavcodec/mpegvideo.h
@@ -156,13 +156,13 @@ typedef struct MpegEncContext {
* copy of the previous picture structure.
* note, linesize & data, might not match the previous picture (for field pictures)
*/
- MPVPicture last_pic;
+ MPVWorkPicture last_pic;
/**
* copy of the next picture structure.
* note, linesize & data, might not match the next picture (for field pictures)
*/
- MPVPicture next_pic;
+ MPVWorkPicture next_pic;
/**
* Reference to the source picture for encoding.
@@ -174,11 +174,8 @@ typedef struct MpegEncContext {
* copy of the current picture structure.
* note, linesize & data, might not match the current picture (for field pictures)
*/
- MPVPicture cur_pic;
+ MPVWorkPicture cur_pic;
- MPVPicture *last_pic_ptr; ///< pointer to the previous picture.
- MPVPicture *next_pic_ptr; ///< pointer to the next picture (for bidir pred)
- MPVPicture *cur_pic_ptr; ///< pointer to the current picture
int skipped_last_frame;
int last_dc[3]; ///< last DC values for MPEG-1
int16_t *dc_val_base;
diff --git a/libavcodec/mpegvideo_dec.c b/libavcodec/mpegvideo_dec.c
index 97efd4fe81..71a6c0ad67 100644
--- a/libavcodec/mpegvideo_dec.c
+++ b/libavcodec/mpegvideo_dec.c
@@ -114,12 +114,10 @@ int ff_mpeg_update_thread_context(AVCodecContext *dst,
#define UPDATE_PICTURE(pic)\
do {\
- ff_mpeg_unref_picture(&s->pic);\
- if (s1->pic.f && s1->pic.f->buf[0]) {\
- ret = ff_mpeg_ref_picture(&s->pic, &s1->pic);\
- if (ret < 0)\
- return ret;\
- }\
+ if (s->picture && s1->picture && s1->pic.ptr && s1->pic.ptr->f->buf[0]) {\
+ ff_mpv_workpic_from_pic(&s->pic, &s->picture[s1->pic.ptr - s1->picture]);\
+ } else\
+ ff_mpv_unref_picture(&s->pic);\
} while (0)
UPDATE_PICTURE(cur_pic);
@@ -129,15 +127,6 @@ do {\
s->linesize = s1->linesize;
s->uvlinesize = s1->uvlinesize;
-#define REBASE_PICTURE(pic, new_ctx, old_ctx) \
- ((pic && pic >= old_ctx->picture && \
- pic < old_ctx->picture + MAX_PICTURE_COUNT) ? \
- &new_ctx->picture[pic - old_ctx->picture] : NULL)
-
- s->last_pic_ptr = REBASE_PICTURE(s1->last_pic_ptr, s, s1);
- s->cur_pic_ptr = REBASE_PICTURE(s1->cur_pic_ptr, s, s1);
- s->next_pic_ptr = REBASE_PICTURE(s1->next_pic_ptr, s, s1);
-
// Error/bug resilience
s->workaround_bugs = s1->workaround_bugs;
s->padding_bug_score = s1->padding_bug_score;
@@ -193,9 +182,9 @@ int ff_mpv_common_frame_size_change(MpegEncContext *s)
ff_mpv_free_context_frame(s);
- s->last_pic_ptr =
- s->next_pic_ptr =
- s->cur_pic_ptr = NULL;
+ s->last_pic.ptr =
+ s->next_pic.ptr =
+ s->cur_pic.ptr = NULL;
if ((s->width || s->height) &&
(err = av_image_check_size(s->width, s->height, 0, s->avctx)) < 0)
@@ -228,7 +217,7 @@ int ff_mpv_common_frame_size_change(MpegEncContext *s)
return err;
}
-static int alloc_picture(MpegEncContext *s, MPVPicture **picp, int reference)
+static int alloc_picture(MpegEncContext *s, MPVWorkPicture *dst, int reference)
{
AVCodecContext *avctx = s->avctx;
int idx = ff_find_unused_picture(s->avctx, s->picture, 0);
@@ -239,6 +228,7 @@ static int alloc_picture(MpegEncContext *s, MPVPicture **picp, int reference)
return idx;
pic = &s->picture[idx];
+ dst->ptr = pic;
pic->tf.f = pic->f;
pic->reference = reference;
@@ -271,36 +261,27 @@ static int alloc_picture(MpegEncContext *s, MPVPicture **picp, int reference)
av_assert1(s->mb_height == s->buffer_pools.alloc_mb_height ||
FFALIGN(s->mb_height, 2) == s->buffer_pools.alloc_mb_height);
av_assert1(s->mb_stride == s->buffer_pools.alloc_mb_stride);
- ret = ff_mpv_alloc_pic_accessories(s->avctx, pic, &s->me, &s->sc,
+ ret = ff_mpv_alloc_pic_accessories(s->avctx, dst, &s->me, &s->sc,
&s->buffer_pools, s->mb_height);
if (ret < 0)
goto fail;
- *picp = pic;
return 0;
fail:
- ff_mpeg_unref_picture(pic);
+ ff_mpv_unref_picture(dst);
return ret;
}
-static int av_cold alloc_dummy_frame(MpegEncContext *s, MPVPicture **picp, MPVPicture *wpic)
+static int av_cold alloc_dummy_frame(MpegEncContext *s, MPVWorkPicture *dst)
{
MPVPicture *pic;
- int ret = alloc_picture(s, &pic, 1);
+ int ret = alloc_picture(s, dst, 1);
if (ret < 0)
return ret;
+ pic = dst->ptr;
pic->dummy = 1;
- ff_mpeg_unref_picture(wpic);
- ret = ff_mpeg_ref_picture(wpic, pic);
- if (ret < 0) {
- ff_mpeg_unref_picture(pic);
- return ret;
- }
-
- *picp = pic;
-
ff_thread_report_progress(&pic->tf, INT_MAX, 0);
ff_thread_report_progress(&pic->tf, INT_MAX, 1);
@@ -330,9 +311,9 @@ int ff_mpv_alloc_dummy_frames(MpegEncContext *s)
AVCodecContext *avctx = s->avctx;
int ret;
- if ((!s->last_pic_ptr || !s->last_pic_ptr->f->buf[0]) &&
+ if ((!s->last_pic.ptr || !s->last_pic.ptr->f->buf[0]) &&
(s->pict_type != AV_PICTURE_TYPE_I)) {
- if (s->pict_type == AV_PICTURE_TYPE_B && s->next_pic_ptr && s->next_pic_ptr->f->buf[0])
+ if (s->pict_type == AV_PICTURE_TYPE_B && s->next_pic.ptr && s->next_pic.ptr->f->buf[0])
av_log(avctx, AV_LOG_DEBUG,
"allocating dummy last picture for B frame\n");
else if (s->codec_id != AV_CODEC_ID_H261 /* H.261 has no keyframes */ &&
@@ -341,25 +322,25 @@ int ff_mpv_alloc_dummy_frames(MpegEncContext *s)
"warning: first frame is no keyframe\n");
/* Allocate a dummy frame */
- ret = alloc_dummy_frame(s, &s->last_pic_ptr, &s->last_pic);
+ ret = alloc_dummy_frame(s, &s->last_pic);
if (ret < 0)
return ret;
if (!avctx->hwaccel) {
int luma_val = s->codec_id == AV_CODEC_ID_FLV1 || s->codec_id == AV_CODEC_ID_H263 ? 16 : 0x80;
- color_frame(s->last_pic_ptr->f, luma_val);
+ color_frame(s->last_pic.ptr->f, luma_val);
}
}
- if ((!s->next_pic_ptr || !s->next_pic_ptr->f->buf[0]) &&
+ if ((!s->next_pic.ptr || !s->next_pic.ptr->f->buf[0]) &&
s->pict_type == AV_PICTURE_TYPE_B) {
/* Allocate a dummy frame */
- ret = alloc_dummy_frame(s, &s->next_pic_ptr, &s->next_pic);
+ ret = alloc_dummy_frame(s, &s->next_pic);
if (ret < 0)
return ret;
}
- av_assert0(s->pict_type == AV_PICTURE_TYPE_I || (s->last_pic_ptr &&
- s->last_pic_ptr->f->buf[0]));
+ av_assert0(s->pict_type == AV_PICTURE_TYPE_I || (s->last_pic.ptr &&
+ s->last_pic.ptr->f->buf[0]));
return 0;
}
@@ -380,68 +361,49 @@ int ff_mpv_frame_start(MpegEncContext *s, AVCodecContext *avctx)
}
/* mark & release old frames */
- if (s->pict_type != AV_PICTURE_TYPE_B && s->last_pic_ptr &&
- s->last_pic_ptr != s->next_pic_ptr &&
- s->last_pic_ptr->f->buf[0]) {
- ff_mpeg_unref_picture(s->last_pic_ptr);
+ if (s->pict_type != AV_PICTURE_TYPE_B && s->last_pic.ptr &&
+ s->last_pic.ptr != s->next_pic.ptr &&
+ s->last_pic.ptr->f->buf[0]) {
+ ff_mpeg_unref_picture(s->last_pic.ptr);
}
/* release non reference/forgotten frames */
for (int i = 0; i < MAX_PICTURE_COUNT; i++) {
if (!s->picture[i].reference ||
- (&s->picture[i] != s->last_pic_ptr &&
- &s->picture[i] != s->next_pic_ptr)) {
+ (&s->picture[i] != s->last_pic.ptr &&
+ &s->picture[i] != s->next_pic.ptr)) {
ff_mpeg_unref_picture(&s->picture[i]);
}
}
- ff_mpeg_unref_picture(&s->cur_pic);
- ff_mpeg_unref_picture(&s->last_pic);
- ff_mpeg_unref_picture(&s->next_pic);
-
- ret = alloc_picture(s, &s->cur_pic_ptr,
+ ret = alloc_picture(s, &s->cur_pic,
s->pict_type != AV_PICTURE_TYPE_B && !s->droppable);
if (ret < 0)
return ret;
- s->cur_pic_ptr->f->flags |= AV_FRAME_FLAG_TOP_FIELD_FIRST * !!s->top_field_first;
- s->cur_pic_ptr->f->flags |= AV_FRAME_FLAG_INTERLACED *
+ s->cur_pic.ptr->f->flags |= AV_FRAME_FLAG_TOP_FIELD_FIRST * !!s->top_field_first;
+ s->cur_pic.ptr->f->flags |= AV_FRAME_FLAG_INTERLACED *
(!s->progressive_frame && !s->progressive_sequence);
- s->cur_pic_ptr->field_picture = s->picture_structure != PICT_FRAME;
+ s->cur_pic.ptr->field_picture = s->picture_structure != PICT_FRAME;
- s->cur_pic_ptr->f->pict_type = s->pict_type;
+ s->cur_pic.ptr->f->pict_type = s->pict_type;
if (s->pict_type == AV_PICTURE_TYPE_I)
- s->cur_pic_ptr->f->flags |= AV_FRAME_FLAG_KEY;
+ s->cur_pic.ptr->f->flags |= AV_FRAME_FLAG_KEY;
else
- s->cur_pic_ptr->f->flags &= ~AV_FRAME_FLAG_KEY;
-
- if ((ret = ff_mpeg_ref_picture(&s->cur_pic, s->cur_pic_ptr)) < 0)
- return ret;
+ s->cur_pic.ptr->f->flags &= ~AV_FRAME_FLAG_KEY;
if (s->pict_type != AV_PICTURE_TYPE_B) {
- s->last_pic_ptr = s->next_pic_ptr;
+ ff_mpv_workpic_from_pic(&s->last_pic, s->next_pic.ptr);
if (!s->droppable)
- s->next_pic_ptr = s->cur_pic_ptr;
+ ff_mpv_workpic_from_pic(&s->next_pic, s->cur_pic.ptr);
}
ff_dlog(s->avctx, "L%p N%p C%p L%p N%p C%p type:%d drop:%d\n",
- s->last_pic_ptr, s->next_pic_ptr, s->cur_pic_ptr,
- s->last_pic_ptr ? s->last_pic_ptr->f->data[0] : NULL,
- s->next_pic_ptr ? s->next_pic_ptr->f->data[0] : NULL,
- s->cur_pic_ptr ? s->cur_pic_ptr->f->data[0] : NULL,
+ (void*)s->last_pic.ptr, (void*)s->next_pic.ptr, (void*)s->cur_pic.ptr,
+ s->last_pic.ptr ? s->last_pic.ptr->f->data[0] : NULL,
+ s->next_pic.ptr ? s->next_pic.ptr->f->data[0] : NULL,
+ s->cur_pic.ptr ? s->cur_pic.ptr->f->data[0] : NULL,
s->pict_type, s->droppable);
- if (s->last_pic_ptr) {
- if (s->last_pic_ptr->f->buf[0] &&
- (ret = ff_mpeg_ref_picture(&s->last_pic,
- s->last_pic_ptr)) < 0)
- return ret;
- }
- if (s->next_pic_ptr) {
- if (s->next_pic_ptr->f->buf[0] &&
- (ret = ff_mpeg_ref_picture(&s->next_pic, s->next_pic_ptr)) < 0)
- return ret;
- }
-
ret = ff_mpv_alloc_dummy_frames(s);
if (ret < 0)
return ret;
@@ -461,7 +423,7 @@ int ff_mpv_frame_start(MpegEncContext *s, AVCodecContext *avctx)
}
if (s->avctx->debug & FF_DEBUG_NOMC)
- color_frame(s->cur_pic_ptr->f, 0x80);
+ color_frame(s->cur_pic.ptr->f, 0x80);
return 0;
}
@@ -472,7 +434,7 @@ void ff_mpv_frame_end(MpegEncContext *s)
emms_c();
if (s->cur_pic.reference)
- ff_thread_report_progress(&s->cur_pic_ptr->tf, INT_MAX, 0);
+ ff_thread_report_progress(&s->cur_pic.ptr->tf, INT_MAX, 0);
}
void ff_print_debug_info(const MpegEncContext *s, const MPVPicture *p, AVFrame *pict)
@@ -515,8 +477,8 @@ int ff_mpv_export_qp_table(const MpegEncContext *s, AVFrame *f,
void ff_mpeg_draw_horiz_band(MpegEncContext *s, int y, int h)
{
- ff_draw_horiz_band(s->avctx, s->cur_pic_ptr->f,
- s->last_pic_ptr ? s->last_pic_ptr->f : NULL,
+ ff_draw_horiz_band(s->avctx, s->cur_pic.ptr->f,
+ s->last_pic.ptr ? s->last_pic.ptr->f : NULL,
y, h, s->picture_structure,
s->first_field, s->low_delay);
}
@@ -530,11 +492,10 @@ void ff_mpeg_flush(AVCodecContext *avctx)
for (int i = 0; i < MAX_PICTURE_COUNT; i++)
ff_mpeg_unref_picture(&s->picture[i]);
- s->cur_pic_ptr = s->last_pic_ptr = s->next_pic_ptr = NULL;
- ff_mpeg_unref_picture(&s->cur_pic);
- ff_mpeg_unref_picture(&s->last_pic);
- ff_mpeg_unref_picture(&s->next_pic);
+ ff_mpv_unref_picture(&s->cur_pic);
+ ff_mpv_unref_picture(&s->last_pic);
+ ff_mpv_unref_picture(&s->next_pic);
s->mb_x = s->mb_y = 0;
@@ -545,7 +506,7 @@ void ff_mpeg_flush(AVCodecContext *avctx)
void ff_mpv_report_decode_progress(MpegEncContext *s)
{
if (s->pict_type != AV_PICTURE_TYPE_B && !s->partitioned_frame && !s->er.error_occurred)
- ff_thread_report_progress(&s->cur_pic_ptr->tf, s->mb_y, 0);
+ ff_thread_report_progress(&s->cur_pic.ptr->tf, s->mb_y, 0);
}
@@ -864,7 +825,7 @@ static inline void MPV_motion_lowres(MpegEncContext *s,
} else {
if (s->picture_structure != s->field_select[dir][0] + 1 &&
s->pict_type != AV_PICTURE_TYPE_B && !s->first_field) {
- ref_picture = s->cur_pic_ptr->f->data;
+ ref_picture = s->cur_pic.ptr->f->data;
}
mpeg_motion_lowres(s, dest_y, dest_cb, dest_cr,
0, 0, s->field_select[dir][0],
@@ -881,7 +842,7 @@ static inline void MPV_motion_lowres(MpegEncContext *s,
s->pict_type == AV_PICTURE_TYPE_B || s->first_field) {
ref2picture = ref_picture;
} else {
- ref2picture = s->cur_pic_ptr->f->data;
+ ref2picture = s->cur_pic.ptr->f->data;
}
mpeg_motion_lowres(s, dest_y, dest_cb, dest_cr,
@@ -922,7 +883,7 @@ static inline void MPV_motion_lowres(MpegEncContext *s,
// opposite parity is always in the same
// frame if this is second field
if (!s->first_field) {
- ref_picture = s->cur_pic_ptr->f->data;
+ ref_picture = s->cur_pic.ptr->f->data;
}
}
}
diff --git a/libavcodec/mpegvideo_enc.c b/libavcodec/mpegvideo_enc.c
index 251b954210..cd25cd3221 100644
--- a/libavcodec/mpegvideo_enc.c
+++ b/libavcodec/mpegvideo_enc.c
@@ -1356,7 +1356,7 @@ static int estimate_best_b_count(MpegEncContext *s)
for (i = 0; i < s->max_b_frames + 2; i++) {
const MPVPicture *pre_input_ptr = i ? s->input_picture[i - 1] :
- s->next_pic_ptr;
+ s->next_pic.ptr;
if (pre_input_ptr) {
const uint8_t *data[4];
@@ -1484,8 +1484,8 @@ static int select_input_picture(MpegEncContext *s)
if (!s->reordered_input_picture[0] && s->input_picture[0]) {
if (s->frame_skip_threshold || s->frame_skip_factor) {
if (s->picture_in_gop_number < s->gop_size &&
- s->next_pic_ptr &&
- skip_check(s, s->input_picture[0], s->next_pic_ptr)) {
+ s->next_pic.ptr &&
+ skip_check(s, s->input_picture[0], s->next_pic.ptr)) {
// FIXME check that the gop check above is +-1 correct
ff_mpeg_unref_picture(s->input_picture[0]);
@@ -1496,7 +1496,7 @@ static int select_input_picture(MpegEncContext *s)
}
if (/*s->picture_in_gop_number >= s->gop_size ||*/
- !s->next_pic_ptr || s->intra_only) {
+ !s->next_pic.ptr || s->intra_only) {
s->reordered_input_picture[0] = s->input_picture[0];
s->reordered_input_picture[0]->f->pict_type = AV_PICTURE_TYPE_I;
s->reordered_input_picture[0]->coded_picture_number =
@@ -1624,17 +1624,17 @@ no_output_pic:
s->new_pic->data[i] += INPLACE_OFFSET;
}
}
- s->cur_pic_ptr = s->reordered_input_picture[0];
+ s->cur_pic.ptr = s->reordered_input_picture[0];
av_assert1(s->mb_width == s->buffer_pools.alloc_mb_width);
av_assert1(s->mb_height == s->buffer_pools.alloc_mb_height);
av_assert1(s->mb_stride == s->buffer_pools.alloc_mb_stride);
- ret = ff_mpv_alloc_pic_accessories(s->avctx, s->cur_pic_ptr, &s->me,
+ ret = ff_mpv_alloc_pic_accessories(s->avctx, &s->cur_pic, &s->me,
&s->sc, &s->buffer_pools, s->mb_height);
if (ret < 0) {
- ff_mpeg_unref_picture(s->cur_pic_ptr);
+ ff_mpv_unref_picture(&s->cur_pic);
return ret;
}
- s->picture_number = s->cur_pic_ptr->display_picture_number;
+ s->picture_number = s->cur_pic.ptr->display_picture_number;
}
return 0;
@@ -1674,7 +1674,7 @@ static void frame_end(MpegEncContext *s)
emms_c();
s->last_pict_type = s->pict_type;
- s->last_lambda_for [s->pict_type] = s->cur_pic_ptr->f->quality;
+ s->last_lambda_for [s->pict_type] = s->cur_pic.ptr->f->quality;
if (s->pict_type!= AV_PICTURE_TYPE_B)
s->last_non_b_pict_type = s->pict_type;
}
@@ -1700,47 +1700,26 @@ static void update_noise_reduction(MpegEncContext *s)
}
}
-static int frame_start(MpegEncContext *s)
+static void frame_start(MpegEncContext *s)
{
- int ret;
-
/* mark & release old frames */
- if (s->pict_type != AV_PICTURE_TYPE_B && s->last_pic_ptr &&
- s->last_pic_ptr != s->next_pic_ptr &&
- s->last_pic_ptr->f->buf[0]) {
- ff_mpeg_unref_picture(s->last_pic_ptr);
+ if (s->pict_type != AV_PICTURE_TYPE_B && s->last_pic.ptr &&
+ s->last_pic.ptr != s->next_pic.ptr &&
+ s->last_pic.ptr->f->buf[0]) {
+ ff_mpv_unref_picture(&s->last_pic);
}
- s->cur_pic_ptr->f->pict_type = s->pict_type;
-
- ff_mpeg_unref_picture(&s->cur_pic);
- if ((ret = ff_mpeg_ref_picture(&s->cur_pic, s->cur_pic_ptr)) < 0)
- return ret;
+ s->cur_pic.ptr->f->pict_type = s->pict_type;
if (s->pict_type != AV_PICTURE_TYPE_B) {
- s->last_pic_ptr = s->next_pic_ptr;
- s->next_pic_ptr = s->cur_pic_ptr;
- }
-
- if (s->last_pic_ptr) {
- ff_mpeg_unref_picture(&s->last_pic);
- if (s->last_pic_ptr->f->buf[0] &&
- (ret = ff_mpeg_ref_picture(&s->last_pic, s->last_pic_ptr)) < 0)
- return ret;
- }
- if (s->next_pic_ptr) {
- ff_mpeg_unref_picture(&s->next_pic);
- if (s->next_pic_ptr->f->buf[0] &&
- (ret = ff_mpeg_ref_picture(&s->next_pic, s->next_pic_ptr)) < 0)
- return ret;
+ ff_mpv_replace_picture(&s->last_pic, &s->next_pic);
+ ff_mpv_replace_picture(&s->next_pic, &s->cur_pic);
}
if (s->dct_error_sum) {
av_assert2(s->noise_reduction && s->encoding);
update_noise_reduction(s);
}
-
- return 0;
}
int ff_mpv_encode_picture(AVCodecContext *avctx, AVPacket *pkt,
@@ -1793,9 +1772,7 @@ int ff_mpv_encode_picture(AVCodecContext *avctx, AVPacket *pkt,
s->pict_type = s->new_pic->pict_type;
//emms_c();
- ret = frame_start(s);
- if (ret < 0)
- return ret;
+ frame_start(s);
vbv_retry:
ret = encode_picture(s);
if (growing_buffer) {
@@ -1858,7 +1835,7 @@ vbv_retry:
for (int i = 0; i < MPV_MAX_PLANES; i++)
avctx->error[i] += s->encoding_error[i];
- ff_side_data_set_encoder_stats(pkt, s->cur_pic.f->quality,
+ ff_side_data_set_encoder_stats(pkt, s->cur_pic.ptr->f->quality,
s->encoding_error,
(avctx->flags&AV_CODEC_FLAG_PSNR) ? MPV_MAX_PLANES : 0,
s->pict_type);
@@ -1952,10 +1929,10 @@ vbv_retry:
}
s->total_bits += s->frame_bits;
- pkt->pts = s->cur_pic.f->pts;
- pkt->duration = s->cur_pic.f->duration;
+ pkt->pts = s->cur_pic.ptr->f->pts;
+ pkt->duration = s->cur_pic.ptr->f->duration;
if (!s->low_delay && s->pict_type != AV_PICTURE_TYPE_B) {
- if (!s->cur_pic.coded_picture_number)
+ if (!s->cur_pic.ptr->coded_picture_number)
pkt->dts = pkt->pts - s->dts_delta;
else
pkt->dts = s->reordered_pts;
@@ -1965,12 +1942,12 @@ vbv_retry:
// the no-delay case is handled in generic code
if (avctx->codec->capabilities & AV_CODEC_CAP_DELAY) {
- ret = ff_encode_reordered_opaque(avctx, pkt, s->cur_pic.f);
+ ret = ff_encode_reordered_opaque(avctx, pkt, s->cur_pic.ptr->f);
if (ret < 0)
return ret;
}
- if (s->cur_pic.f->flags & AV_FRAME_FLAG_KEY)
+ if (s->cur_pic.ptr->f->flags & AV_FRAME_FLAG_KEY)
pkt->flags |= AV_PKT_FLAG_KEY;
if (s->mb_info)
av_packet_shrink_side_data(pkt, AV_PKT_DATA_H263_MB_INFO, s->mb_info_size);
@@ -3512,14 +3489,12 @@ static void merge_context_after_encode(MpegEncContext *dst, MpegEncContext *src)
static int estimate_qp(MpegEncContext *s, int dry_run){
if (s->next_lambda){
- s->cur_pic_ptr->f->quality =
- s->cur_pic.f->quality = s->next_lambda;
+ s->cur_pic.ptr->f->quality = s->next_lambda;
if(!dry_run) s->next_lambda= 0;
} else if (!s->fixed_qscale) {
int quality = ff_rate_estimate_qscale(s, dry_run);
- s->cur_pic_ptr->f->quality =
- s->cur_pic.f->quality = quality;
- if (s->cur_pic.f->quality < 0)
+ s->cur_pic.ptr->f->quality = quality;
+ if (s->cur_pic.ptr->f->quality < 0)
return -1;
}
@@ -3542,15 +3517,15 @@ static int estimate_qp(MpegEncContext *s, int dry_run){
s->lambda= s->lambda_table[0];
//FIXME broken
}else
- s->lambda = s->cur_pic.f->quality;
+ s->lambda = s->cur_pic.ptr->f->quality;
update_qscale(s);
return 0;
}
/* must be called before writing the header */
static void set_frame_distances(MpegEncContext * s){
- av_assert1(s->cur_pic_ptr->f->pts != AV_NOPTS_VALUE);
- s->time = s->cur_pic_ptr->f->pts * s->avctx->time_base.num;
+ av_assert1(s->cur_pic.ptr->f->pts != AV_NOPTS_VALUE);
+ s->time = s->cur_pic.ptr->f->pts * s->avctx->time_base.num;
if(s->pict_type==AV_PICTURE_TYPE_B){
s->pb_time= s->pp_time - (s->last_non_b_time - s->time);
@@ -3581,7 +3556,7 @@ static int encode_picture(MpegEncContext *s)
s->me.scene_change_score=0;
-// s->lambda= s->cur_pic_ptr->quality; //FIXME qscale / ... stuff for ME rate distortion
+// s->lambda= s->cur_pic.ptr->quality; //FIXME qscale / ... stuff for ME rate distortion
if(s->pict_type==AV_PICTURE_TYPE_I){
if(s->msmpeg4_version >= 3) s->no_rounding=1;
@@ -3769,18 +3744,14 @@ static int encode_picture(MpegEncContext *s)
}
}
- //FIXME var duplication
if (s->pict_type == AV_PICTURE_TYPE_I) {
- s->cur_pic_ptr->f->flags |= AV_FRAME_FLAG_KEY; //FIXME pic_ptr
- s->cur_pic.f->flags |= AV_FRAME_FLAG_KEY;
+ s->cur_pic.ptr->f->flags |= AV_FRAME_FLAG_KEY;
} else {
- s->cur_pic_ptr->f->flags &= ~AV_FRAME_FLAG_KEY; //FIXME pic_ptr
- s->cur_pic.f->flags &= ~AV_FRAME_FLAG_KEY;
+ s->cur_pic.ptr->f->flags &= ~AV_FRAME_FLAG_KEY;
}
- s->cur_pic_ptr->f->pict_type =
- s->cur_pic.f->pict_type = s->pict_type;
+ s->cur_pic.ptr->f->pict_type = s->pict_type;
- if (s->cur_pic.f->flags & AV_FRAME_FLAG_KEY)
+ if (s->cur_pic.ptr->f->flags & AV_FRAME_FLAG_KEY)
s->picture_in_gop_number=0;
s->mb_x = s->mb_y = 0;
diff --git a/libavcodec/mpegvideo_motion.c b/libavcodec/mpegvideo_motion.c
index 56d794974b..6e9368dd9c 100644
--- a/libavcodec/mpegvideo_motion.c
+++ b/libavcodec/mpegvideo_motion.c
@@ -514,7 +514,7 @@ static inline void apply_obmc(MpegEncContext *s,
const op_pixels_func (*pix_op)[4])
{
LOCAL_ALIGNED_8(int16_t, mv_cache, [4], [4][2]);
- const MPVPicture *cur_frame = &s->cur_pic;
+ const MPVWorkPicture *cur_frame = &s->cur_pic;
int mb_x = s->mb_x;
int mb_y = s->mb_y;
const int xy = mb_x + mb_y * s->mb_stride;
@@ -749,7 +749,7 @@ static av_always_inline void mpv_motion_internal(MpegEncContext *s,
av_assert2(s->out_format == FMT_MPEG1);
if (s->picture_structure != s->field_select[dir][0] + 1 &&
s->pict_type != AV_PICTURE_TYPE_B && !s->first_field) {
- ref_picture = s->cur_pic_ptr->f->data;
+ ref_picture = s->cur_pic.ptr->f->data;
}
mpeg_motion(s, dest_y, dest_cb, dest_cr,
@@ -767,7 +767,7 @@ static av_always_inline void mpv_motion_internal(MpegEncContext *s,
s->pict_type == AV_PICTURE_TYPE_B || s->first_field) {
ref2picture = ref_picture;
} else {
- ref2picture = s->cur_pic_ptr->f->data;
+ ref2picture = s->cur_pic.ptr->f->data;
}
mpeg_motion(s, dest_y, dest_cb, dest_cr,
@@ -807,7 +807,7 @@ static av_always_inline void mpv_motion_internal(MpegEncContext *s,
/* opposite parity is always in the same frame if this is
* second field */
if (!s->first_field)
- ref_picture = s->cur_pic_ptr->f->data;
+ ref_picture = s->cur_pic.ptr->f->data;
}
}
break;
diff --git a/libavcodec/mpv_reconstruct_mb_template.c b/libavcodec/mpv_reconstruct_mb_template.c
index 2da2218042..549c55ffad 100644
--- a/libavcodec/mpv_reconstruct_mb_template.c
+++ b/libavcodec/mpv_reconstruct_mb_template.c
@@ -124,11 +124,11 @@ void mpv_reconstruct_mb_internal(MpegEncContext *s, int16_t block[12][64],
if (HAVE_THREADS && is_mpeg12 != DEFINITELY_MPEG12 &&
s->avctx->active_thread_type & FF_THREAD_FRAME) {
if (s->mv_dir & MV_DIR_FORWARD) {
- ff_thread_await_progress(&s->last_pic_ptr->tf,
+ ff_thread_await_progress(&s->last_pic.ptr->tf,
lowest_referenced_row(s, 0), 0);
}
if (s->mv_dir & MV_DIR_BACKWARD) {
- ff_thread_await_progress(&s->next_pic_ptr->tf,
+ ff_thread_await_progress(&s->next_pic.ptr->tf,
lowest_referenced_row(s, 1), 0);
}
}
diff --git a/libavcodec/mss2.c b/libavcodec/mss2.c
index 05319436b6..1888053eb2 100644
--- a/libavcodec/mss2.c
+++ b/libavcodec/mss2.c
@@ -382,7 +382,7 @@ static int decode_wmv9(AVCodecContext *avctx, const uint8_t *buf, int buf_size,
MSS12Context *c = &ctx->c;
VC1Context *v = avctx->priv_data;
MpegEncContext *s = &v->s;
- MPVPicture *f;
+ MPVWorkPicture *f;
int ret;
ff_mpeg_flush(avctx);
diff --git a/libavcodec/nvdec_mpeg12.c b/libavcodec/nvdec_mpeg12.c
index 76ef81ea4d..99b2b14f1f 100644
--- a/libavcodec/nvdec_mpeg12.c
+++ b/libavcodec/nvdec_mpeg12.c
@@ -39,7 +39,7 @@ static int nvdec_mpeg12_start_frame(AVCodecContext *avctx, const uint8_t *buffer
CUVIDMPEG2PICPARAMS *ppc = &pp->CodecSpecific.mpeg2;
FrameDecodeData *fdd;
NVDECFrame *cf;
- AVFrame *cur_frame = s->cur_pic.f;
+ AVFrame *cur_frame = s->cur_pic.ptr->f;
int ret, i;
@@ -64,8 +64,8 @@ static int nvdec_mpeg12_start_frame(AVCodecContext *avctx, const uint8_t *buffer
s->pict_type == AV_PICTURE_TYPE_P,
.CodecSpecific.mpeg2 = {
- .ForwardRefIdx = ff_nvdec_get_ref_idx(s->last_pic.f),
- .BackwardRefIdx = ff_nvdec_get_ref_idx(s->next_pic.f),
+ .ForwardRefIdx = ff_nvdec_get_ref_idx(s->last_pic.ptr ? s->last_pic.ptr->f : NULL),
+ .BackwardRefIdx = ff_nvdec_get_ref_idx(s->next_pic.ptr ? s->next_pic.ptr->f : NULL),
.picture_coding_type = s->pict_type,
.full_pel_forward_vector = s->full_pel[0],
diff --git a/libavcodec/nvdec_mpeg4.c b/libavcodec/nvdec_mpeg4.c
index 468002d1c5..80da11b5b1 100644
--- a/libavcodec/nvdec_mpeg4.c
+++ b/libavcodec/nvdec_mpeg4.c
@@ -38,7 +38,7 @@ static int nvdec_mpeg4_start_frame(AVCodecContext *avctx, const uint8_t *buffer,
CUVIDMPEG4PICPARAMS *ppc = &pp->CodecSpecific.mpeg4;
FrameDecodeData *fdd;
NVDECFrame *cf;
- AVFrame *cur_frame = s->cur_pic.f;
+ AVFrame *cur_frame = s->cur_pic.ptr->f;
int ret, i;
@@ -60,8 +60,8 @@ static int nvdec_mpeg4_start_frame(AVCodecContext *avctx, const uint8_t *buffer,
s->pict_type == AV_PICTURE_TYPE_S,
.CodecSpecific.mpeg4 = {
- .ForwardRefIdx = ff_nvdec_get_ref_idx(s->last_pic.f),
- .BackwardRefIdx = ff_nvdec_get_ref_idx(s->next_pic.f),
+ .ForwardRefIdx = ff_nvdec_get_ref_idx(s->last_pic.ptr ? s->last_pic.ptr->f : NULL),
+ .BackwardRefIdx = ff_nvdec_get_ref_idx(s->next_pic.ptr ? s->next_pic.ptr->f : NULL),
.video_object_layer_width = s->width,
.video_object_layer_height = s->height,
diff --git a/libavcodec/nvdec_vc1.c b/libavcodec/nvdec_vc1.c
index 40cd18a8e7..0668863cb4 100644
--- a/libavcodec/nvdec_vc1.c
+++ b/libavcodec/nvdec_vc1.c
@@ -38,7 +38,7 @@ static int nvdec_vc1_start_frame(AVCodecContext *avctx, const uint8_t *buffer, u
CUVIDPICPARAMS *pp = &ctx->pic_params;
FrameDecodeData *fdd;
NVDECFrame *cf;
- AVFrame *cur_frame = s->cur_pic.f;
+ AVFrame *cur_frame = s->cur_pic.ptr->f;
int ret;
@@ -63,8 +63,8 @@ static int nvdec_vc1_start_frame(AVCodecContext *avctx, const uint8_t *buffer, u
s->pict_type == AV_PICTURE_TYPE_P,
.CodecSpecific.vc1 = {
- .ForwardRefIdx = ff_nvdec_get_ref_idx(s->last_pic.f),
- .BackwardRefIdx = ff_nvdec_get_ref_idx(s->next_pic.f),
+ .ForwardRefIdx = ff_nvdec_get_ref_idx(s->last_pic.ptr ? s->last_pic.ptr->f : NULL),
+ .BackwardRefIdx = ff_nvdec_get_ref_idx(s->next_pic.ptr ? s->next_pic.ptr->f : NULL),
.FrameWidth = cur_frame->width,
.FrameHeight = cur_frame->height,
diff --git a/libavcodec/ratecontrol.c b/libavcodec/ratecontrol.c
index 1429b3a93a..609d47faeb 100644
--- a/libavcodec/ratecontrol.c
+++ b/libavcodec/ratecontrol.c
@@ -40,10 +40,10 @@ void ff_write_pass1_stats(MpegEncContext *s)
snprintf(s->avctx->stats_out, 256,
"in:%d out:%d type:%d q:%d itex:%d ptex:%d mv:%d misc:%d "
"fcode:%d bcode:%d mc-var:%"PRId64" var:%"PRId64" icount:%d hbits:%d;\n",
- s->cur_pic_ptr->display_picture_number,
- s->cur_pic_ptr->coded_picture_number,
+ s->cur_pic.ptr->display_picture_number,
+ s->cur_pic.ptr->coded_picture_number,
s->pict_type,
- s->cur_pic.f->quality,
+ s->cur_pic.ptr->f->quality,
s->i_tex_bits,
s->p_tex_bits,
s->mv_bits,
@@ -936,9 +936,9 @@ float ff_rate_estimate_qscale(MpegEncContext *s, int dry_run)
* here instead of reordering but the reordering is simpler for now
* until H.264 B-pyramid must be handled. */
if (s->pict_type == AV_PICTURE_TYPE_B || s->low_delay)
- dts_pic = s->cur_pic_ptr;
+ dts_pic = s->cur_pic.ptr;
else
- dts_pic = s->last_pic_ptr;
+ dts_pic = s->last_pic.ptr;
if (!dts_pic || dts_pic->f->pts == AV_NOPTS_VALUE)
wanted_bits = (uint64_t)(s->bit_rate * (double)picture_number / fps);
diff --git a/libavcodec/rv10.c b/libavcodec/rv10.c
index aea42dd314..201e7ed6d0 100644
--- a/libavcodec/rv10.c
+++ b/libavcodec/rv10.c
@@ -170,7 +170,7 @@ static int rv20_decode_picture_header(RVDecContext *rv, int whole_size)
av_log(s->avctx, AV_LOG_ERROR, "low delay B\n");
return -1;
}
- if (!s->last_pic_ptr && s->pict_type == AV_PICTURE_TYPE_B) {
+ if (!s->last_pic.ptr && s->pict_type == AV_PICTURE_TYPE_B) {
av_log(s->avctx, AV_LOG_ERROR, "early B-frame\n");
return AVERROR_INVALIDDATA;
}
@@ -458,9 +458,9 @@ static int rv10_decode_packet(AVCodecContext *avctx, const uint8_t *buf,
if (whole_size < s->mb_width * s->mb_height / 8)
return AVERROR_INVALIDDATA;
- if ((s->mb_x == 0 && s->mb_y == 0) || !s->cur_pic_ptr) {
+ if ((s->mb_x == 0 && s->mb_y == 0) || !s->cur_pic.ptr) {
// FIXME write parser so we always have complete frames?
- if (s->cur_pic_ptr) {
+ if (s->cur_pic.ptr) {
ff_er_frame_end(&s->er, NULL);
ff_mpv_frame_end(s);
s->mb_x = s->mb_y = s->resync_mb_x = s->resync_mb_y = 0;
@@ -469,7 +469,7 @@ static int rv10_decode_packet(AVCodecContext *avctx, const uint8_t *buf,
return ret;
ff_mpeg_er_frame_start(s);
} else {
- if (s->cur_pic_ptr->f->pict_type != s->pict_type) {
+ if (s->cur_pic.ptr->f->pict_type != s->pict_type) {
av_log(s->avctx, AV_LOG_ERROR, "Slice type mismatch\n");
return AVERROR_INVALIDDATA;
}
@@ -632,28 +632,28 @@ static int rv10_decode_frame(AVCodecContext *avctx, AVFrame *pict,
i++;
}
- if (s->cur_pic_ptr && s->mb_y >= s->mb_height) {
+ if (s->cur_pic.ptr && s->mb_y >= s->mb_height) {
ff_er_frame_end(&s->er, NULL);
ff_mpv_frame_end(s);
if (s->pict_type == AV_PICTURE_TYPE_B || s->low_delay) {
- if ((ret = av_frame_ref(pict, s->cur_pic_ptr->f)) < 0)
+ if ((ret = av_frame_ref(pict, s->cur_pic.ptr->f)) < 0)
return ret;
- ff_print_debug_info(s, s->cur_pic_ptr, pict);
- ff_mpv_export_qp_table(s, pict, s->cur_pic_ptr, FF_MPV_QSCALE_TYPE_MPEG1);
- } else if (s->last_pic_ptr) {
- if ((ret = av_frame_ref(pict, s->last_pic_ptr->f)) < 0)
+ ff_print_debug_info(s, s->cur_pic.ptr, pict);
+ ff_mpv_export_qp_table(s, pict, s->cur_pic.ptr, FF_MPV_QSCALE_TYPE_MPEG1);
+ } else if (s->last_pic.ptr) {
+ if ((ret = av_frame_ref(pict, s->last_pic.ptr->f)) < 0)
return ret;
- ff_print_debug_info(s, s->last_pic_ptr, pict);
- ff_mpv_export_qp_table(s, pict,s->last_pic_ptr, FF_MPV_QSCALE_TYPE_MPEG1);
+ ff_print_debug_info(s, s->last_pic.ptr, pict);
+ ff_mpv_export_qp_table(s, pict,s->last_pic.ptr, FF_MPV_QSCALE_TYPE_MPEG1);
}
- if (s->last_pic_ptr || s->low_delay) {
+ if (s->last_pic.ptr || s->low_delay) {
*got_frame = 1;
}
// so we can detect if frame_end was not called (find some nicer solution...)
- s->cur_pic_ptr = NULL;
+ s->cur_pic.ptr = NULL;
}
return avpkt->size;
diff --git a/libavcodec/rv34.c b/libavcodec/rv34.c
index 284de14e8c..d935c261b5 100644
--- a/libavcodec/rv34.c
+++ b/libavcodec/rv34.c
@@ -565,7 +565,7 @@ static void rv34_pred_mv_b(RV34DecContext *r, int block_type, int dir)
int has_A = 0, has_B = 0, has_C = 0;
int mx, my;
int i, j;
- MPVPicture *cur_pic = &s->cur_pic;
+ MPVWorkPicture *cur_pic = &s->cur_pic;
const int mask = dir ? MB_TYPE_L1 : MB_TYPE_L0;
int type = cur_pic->mb_type[mb_pos];
@@ -719,7 +719,7 @@ static inline void rv34_mc(RV34DecContext *r, const int block_type,
if (HAVE_THREADS && (s->avctx->active_thread_type & FF_THREAD_FRAME)) {
/* wait for the referenced mb row to be finished */
int mb_row = s->mb_y + ((yoff + my + 5 + 8 * height) >> 4);
- const ThreadFrame *f = dir ? &s->next_pic_ptr->tf : &s->last_pic_ptr->tf;
+ const ThreadFrame *f = dir ? &s->next_pic.ptr->tf : &s->last_pic.ptr->tf;
ff_thread_await_progress(f, mb_row, 0);
}
@@ -899,7 +899,7 @@ static int rv34_decode_mv(RV34DecContext *r, int block_type)
//surprisingly, it uses motion scheme from next reference frame
/* wait for the current mb row to be finished */
if (HAVE_THREADS && (s->avctx->active_thread_type & FF_THREAD_FRAME))
- ff_thread_await_progress(&s->next_pic_ptr->tf, FFMAX(0, s->mb_y-1), 0);
+ ff_thread_await_progress(&s->next_pic.ptr->tf, FFMAX(0, s->mb_y-1), 0);
next_bt = s->next_pic.mb_type[s->mb_x + s->mb_y * s->mb_stride];
if(IS_INTRA(next_bt) || IS_SKIP(next_bt)){
@@ -1483,7 +1483,7 @@ static int rv34_decode_slice(RV34DecContext *r, int end, const uint8_t* buf, int
r->loop_filter(r, s->mb_y - 2);
if (HAVE_THREADS && (s->avctx->active_thread_type & FF_THREAD_FRAME))
- ff_thread_report_progress(&s->cur_pic_ptr->tf,
+ ff_thread_report_progress(&s->cur_pic.ptr->tf,
s->mb_y - 2, 0);
}
@@ -1581,19 +1581,19 @@ static int finish_frame(AVCodecContext *avctx, AVFrame *pict)
s->mb_num_left = 0;
if (HAVE_THREADS && (s->avctx->active_thread_type & FF_THREAD_FRAME))
- ff_thread_report_progress(&s->cur_pic_ptr->tf, INT_MAX, 0);
+ ff_thread_report_progress(&s->cur_pic.ptr->tf, INT_MAX, 0);
if (s->pict_type == AV_PICTURE_TYPE_B) {
- if ((ret = av_frame_ref(pict, s->cur_pic_ptr->f)) < 0)
+ if ((ret = av_frame_ref(pict, s->cur_pic.ptr->f)) < 0)
return ret;
- ff_print_debug_info(s, s->cur_pic_ptr, pict);
- ff_mpv_export_qp_table(s, pict, s->cur_pic_ptr, FF_MPV_QSCALE_TYPE_MPEG1);
+ ff_print_debug_info(s, s->cur_pic.ptr, pict);
+ ff_mpv_export_qp_table(s, pict, s->cur_pic.ptr, FF_MPV_QSCALE_TYPE_MPEG1);
got_picture = 1;
- } else if (s->last_pic_ptr) {
- if ((ret = av_frame_ref(pict, s->last_pic_ptr->f)) < 0)
+ } else if (s->last_pic.ptr) {
+ if ((ret = av_frame_ref(pict, s->last_pic.ptr->f)) < 0)
return ret;
- ff_print_debug_info(s, s->last_pic_ptr, pict);
- ff_mpv_export_qp_table(s, pict, s->last_pic_ptr, FF_MPV_QSCALE_TYPE_MPEG1);
+ ff_print_debug_info(s, s->last_pic.ptr, pict);
+ ff_mpv_export_qp_table(s, pict, s->last_pic.ptr, FF_MPV_QSCALE_TYPE_MPEG1);
got_picture = 1;
}
@@ -1628,10 +1628,10 @@ int ff_rv34_decode_frame(AVCodecContext *avctx, AVFrame *pict,
/* no supplementary picture */
if (buf_size == 0) {
/* special case for last picture */
- if (s->next_pic_ptr) {
- if ((ret = av_frame_ref(pict, s->next_pic_ptr->f)) < 0)
+ if (s->next_pic.ptr) {
+ if ((ret = av_frame_ref(pict, s->next_pic.ptr->f)) < 0)
return ret;
- s->next_pic_ptr = NULL;
+ s->next_pic.ptr = NULL;
*got_picture_ptr = 1;
}
@@ -1654,7 +1654,7 @@ int ff_rv34_decode_frame(AVCodecContext *avctx, AVFrame *pict,
av_log(avctx, AV_LOG_ERROR, "First slice header is incorrect\n");
return AVERROR_INVALIDDATA;
}
- if ((!s->last_pic_ptr || !s->last_pic_ptr->f->data[0]) &&
+ if ((!s->last_pic.ptr || !s->last_pic.ptr->f->data[0]) &&
si.type == AV_PICTURE_TYPE_B) {
av_log(avctx, AV_LOG_ERROR, "Invalid decoder state: B-frame without "
"reference data.\n");
@@ -1667,7 +1667,7 @@ int ff_rv34_decode_frame(AVCodecContext *avctx, AVFrame *pict,
/* first slice */
if (si.start == 0) {
- if (s->mb_num_left > 0 && s->cur_pic_ptr) {
+ if (s->mb_num_left > 0 && s->cur_pic.ptr) {
av_log(avctx, AV_LOG_ERROR, "New frame but still %d MB left.\n",
s->mb_num_left);
if (!s->context_reinit)
@@ -1792,7 +1792,7 @@ int ff_rv34_decode_frame(AVCodecContext *avctx, AVFrame *pict,
break;
}
- if (s->cur_pic_ptr) {
+ if (s->cur_pic.ptr) {
if (last) {
if(r->loop_filter)
r->loop_filter(r, s->mb_height - 1);
@@ -1809,7 +1809,7 @@ int ff_rv34_decode_frame(AVCodecContext *avctx, AVFrame *pict,
ff_er_frame_end(&s->er, NULL);
ff_mpv_frame_end(s);
s->mb_num_left = 0;
- ff_thread_report_progress(&s->cur_pic_ptr->tf, INT_MAX, 0);
+ ff_thread_report_progress(&s->cur_pic.ptr->tf, INT_MAX, 0);
return AVERROR_INVALIDDATA;
}
}
diff --git a/libavcodec/snowenc.c b/libavcodec/snowenc.c
index 1ed7581fea..8d6dabae65 100644
--- a/libavcodec/snowenc.c
+++ b/libavcodec/snowenc.c
@@ -62,6 +62,7 @@ typedef struct SnowEncContext {
MECmpContext mecc;
MpegEncContext m; // needed for motion estimation, should not be used for anything else, the idea is to eventually make the motion estimation independent of MpegEncContext, so this will be removed then (FIXME/XXX)
+ MPVPicture cur_pic, last_pic;
#define ME_CACHE_SIZE 1024
unsigned me_cache[ME_CACHE_SIZE];
unsigned me_cache_generation;
@@ -1834,9 +1835,9 @@ static int encode_frame(AVCodecContext *avctx, AVPacket *pkt,
if (ret < 0)
return ret;
- mpv->cur_pic_ptr = &mpv->cur_pic;
- mpv->cur_pic.f = s->current_picture;
- mpv->cur_pic.f->pts = pict->pts;
+ mpv->cur_pic.ptr = &enc->cur_pic;
+ mpv->cur_pic.ptr->f = s->current_picture;
+ mpv->cur_pic.ptr->f->pts = pict->pts;
if(pic->pict_type == AV_PICTURE_TYPE_P){
int block_width = (width +15)>>4;
int block_height= (height+15)>>4;
@@ -1846,9 +1847,9 @@ static int encode_frame(AVCodecContext *avctx, AVPacket *pkt,
av_assert0(s->last_picture[0]->data[0]);
mpv->avctx = s->avctx;
- mpv->last_pic.f = s->last_picture[0];
+ mpv->last_pic.ptr = &enc->last_pic;
+ mpv->last_pic.ptr->f = s->last_picture[0];
mpv-> new_pic = s->input_picture;
- mpv->last_pic_ptr = &mpv->last_pic;
mpv->linesize = stride;
mpv->uvlinesize = s->current_picture->linesize[1];
mpv->width = width;
@@ -2043,9 +2044,9 @@ redo_frame:
mpv->frame_bits = 8 * (s->c.bytestream - s->c.bytestream_start);
mpv->p_tex_bits = mpv->frame_bits - mpv->misc_bits - mpv->mv_bits;
mpv->total_bits += 8*(s->c.bytestream - s->c.bytestream_start);
- mpv->cur_pic.display_picture_number =
- mpv->cur_pic.coded_picture_number = avctx->frame_num;
- mpv->cur_pic.f->quality = pic->quality;
+ enc->cur_pic.display_picture_number =
+ enc->cur_pic.coded_picture_number = avctx->frame_num;
+ enc->cur_pic.f->quality = pic->quality;
if (enc->pass1_rc)
if (ff_rate_estimate_qscale(mpv, 0) < 0)
return -1;
diff --git a/libavcodec/svq1enc.c b/libavcodec/svq1enc.c
index c75ab1800a..9631fa243d 100644
--- a/libavcodec/svq1enc.c
+++ b/libavcodec/svq1enc.c
@@ -60,6 +60,7 @@ typedef struct SVQ1EncContext {
* else, the idea is to make the motion estimation eventually independent
* of MpegEncContext, so this will be removed then. */
MpegEncContext m;
+ MPVPicture cur_pic, last_pic;
AVCodecContext *avctx;
MECmpContext mecc;
HpelDSPContext hdsp;
@@ -326,8 +327,8 @@ static int svq1_encode_plane(SVQ1EncContext *s, int plane,
if (s->pict_type == AV_PICTURE_TYPE_P) {
s->m.avctx = s->avctx;
- s->m.cur_pic_ptr = &s->m.cur_pic;
- s->m.last_pic_ptr = &s->m.last_pic;
+ s->m.cur_pic.ptr = &s->cur_pic;
+ s->m.last_pic.ptr = &s->last_pic;
s->m.last_pic.data[0] = ref_plane;
s->m.linesize =
s->m.last_pic.linesize[0] =
diff --git a/libavcodec/vaapi_mpeg2.c b/libavcodec/vaapi_mpeg2.c
index 389540fd0c..328a2f7db3 100644
--- a/libavcodec/vaapi_mpeg2.c
+++ b/libavcodec/vaapi_mpeg2.c
@@ -42,12 +42,12 @@ static inline int mpeg2_get_is_frame_start(const MpegEncContext *s)
static int vaapi_mpeg2_start_frame(AVCodecContext *avctx, av_unused const uint8_t *buffer, av_unused uint32_t size)
{
const MpegEncContext *s = avctx->priv_data;
- VAAPIDecodePicture *pic = s->cur_pic_ptr->hwaccel_picture_private;
+ VAAPIDecodePicture *pic = s->cur_pic.ptr->hwaccel_picture_private;
VAPictureParameterBufferMPEG2 pic_param;
VAIQMatrixBufferMPEG2 iq_matrix;
int i, err;
- pic->output_surface = ff_vaapi_get_surface_id(s->cur_pic_ptr->f);
+ pic->output_surface = ff_vaapi_get_surface_id(s->cur_pic.ptr->f);
pic_param = (VAPictureParameterBufferMPEG2) {
.horizontal_size = s->width,
@@ -73,10 +73,10 @@ static int vaapi_mpeg2_start_frame(AVCodecContext *avctx, av_unused const uint8_
switch (s->pict_type) {
case AV_PICTURE_TYPE_B:
- pic_param.backward_reference_picture = ff_vaapi_get_surface_id(s->next_pic.f);
+ pic_param.backward_reference_picture = ff_vaapi_get_surface_id(s->next_pic.ptr->f);
// fall-through
case AV_PICTURE_TYPE_P:
- pic_param.forward_reference_picture = ff_vaapi_get_surface_id(s->last_pic.f);
+ pic_param.forward_reference_picture = ff_vaapi_get_surface_id(s->last_pic.ptr->f);
break;
}
@@ -115,7 +115,7 @@ fail:
static int vaapi_mpeg2_end_frame(AVCodecContext *avctx)
{
MpegEncContext *s = avctx->priv_data;
- VAAPIDecodePicture *pic = s->cur_pic_ptr->hwaccel_picture_private;
+ VAAPIDecodePicture *pic = s->cur_pic.ptr->hwaccel_picture_private;
int ret;
ret = ff_vaapi_decode_issue(avctx, pic);
@@ -131,7 +131,7 @@ fail:
static int vaapi_mpeg2_decode_slice(AVCodecContext *avctx, const uint8_t *buffer, uint32_t size)
{
const MpegEncContext *s = avctx->priv_data;
- VAAPIDecodePicture *pic = s->cur_pic_ptr->hwaccel_picture_private;
+ VAAPIDecodePicture *pic = s->cur_pic.ptr->hwaccel_picture_private;
VASliceParameterBufferMPEG2 slice_param;
GetBitContext gb;
uint32_t quantiser_scale_code, intra_slice_flag, macroblock_offset;
diff --git a/libavcodec/vaapi_mpeg4.c b/libavcodec/vaapi_mpeg4.c
index e227bee113..76602c544a 100644
--- a/libavcodec/vaapi_mpeg4.c
+++ b/libavcodec/vaapi_mpeg4.c
@@ -49,11 +49,11 @@ static int vaapi_mpeg4_start_frame(AVCodecContext *avctx, av_unused const uint8_
{
Mpeg4DecContext *ctx = avctx->priv_data;
MpegEncContext *s = &ctx->m;
- VAAPIDecodePicture *pic = s->cur_pic_ptr->hwaccel_picture_private;
+ VAAPIDecodePicture *pic = s->cur_pic.ptr->hwaccel_picture_private;
VAPictureParameterBufferMPEG4 pic_param;
int i, err;
- pic->output_surface = ff_vaapi_get_surface_id(s->cur_pic_ptr->f);
+ pic->output_surface = ff_vaapi_get_surface_id(s->cur_pic.ptr->f);
pic_param = (VAPictureParameterBufferMPEG4) {
.vop_width = s->width,
@@ -78,7 +78,7 @@ static int vaapi_mpeg4_start_frame(AVCodecContext *avctx, av_unused const uint8_
.vop_fields.bits = {
.vop_coding_type = s->pict_type - AV_PICTURE_TYPE_I,
.backward_reference_vop_coding_type =
- s->pict_type == AV_PICTURE_TYPE_B ? s->next_pic.f->pict_type - AV_PICTURE_TYPE_I : 0,
+ s->pict_type == AV_PICTURE_TYPE_B ? s->next_pic.ptr->f->pict_type - AV_PICTURE_TYPE_I : 0,
.vop_rounding_type = s->no_rounding,
.intra_dc_vlc_thr = mpeg4_get_intra_dc_vlc_thr(ctx),
.top_field_first = s->top_field_first,
@@ -100,9 +100,9 @@ static int vaapi_mpeg4_start_frame(AVCodecContext *avctx, av_unused const uint8_
}
if (s->pict_type == AV_PICTURE_TYPE_B)
- pic_param.backward_reference_picture = ff_vaapi_get_surface_id(s->next_pic.f);
+ pic_param.backward_reference_picture = ff_vaapi_get_surface_id(s->next_pic.ptr->f);
if (s->pict_type != AV_PICTURE_TYPE_I)
- pic_param.forward_reference_picture = ff_vaapi_get_surface_id(s->last_pic.f);
+ pic_param.forward_reference_picture = ff_vaapi_get_surface_id(s->last_pic.ptr->f);
err = ff_vaapi_decode_make_param_buffer(avctx, pic,
VAPictureParameterBufferType,
@@ -139,7 +139,7 @@ fail:
static int vaapi_mpeg4_end_frame(AVCodecContext *avctx)
{
MpegEncContext *s = avctx->priv_data;
- VAAPIDecodePicture *pic = s->cur_pic_ptr->hwaccel_picture_private;
+ VAAPIDecodePicture *pic = s->cur_pic.ptr->hwaccel_picture_private;
int ret;
ret = ff_vaapi_decode_issue(avctx, pic);
@@ -155,7 +155,7 @@ fail:
static int vaapi_mpeg4_decode_slice(AVCodecContext *avctx, const uint8_t *buffer, uint32_t size)
{
MpegEncContext *s = avctx->priv_data;
- VAAPIDecodePicture *pic = s->cur_pic_ptr->hwaccel_picture_private;
+ VAAPIDecodePicture *pic = s->cur_pic.ptr->hwaccel_picture_private;
VASliceParameterBufferMPEG4 slice_param;
int err;
diff --git a/libavcodec/vaapi_vc1.c b/libavcodec/vaapi_vc1.c
index ef914cf4b2..8aedad7828 100644
--- a/libavcodec/vaapi_vc1.c
+++ b/libavcodec/vaapi_vc1.c
@@ -253,11 +253,11 @@ static int vaapi_vc1_start_frame(AVCodecContext *avctx, av_unused const uint8_t
{
const VC1Context *v = avctx->priv_data;
const MpegEncContext *s = &v->s;
- VAAPIDecodePicture *pic = s->cur_pic_ptr->hwaccel_picture_private;
+ VAAPIDecodePicture *pic = s->cur_pic.ptr->hwaccel_picture_private;
VAPictureParameterBufferVC1 pic_param;
int err;
- pic->output_surface = ff_vaapi_get_surface_id(s->cur_pic_ptr->f);
+ pic->output_surface = ff_vaapi_get_surface_id(s->cur_pic.ptr->f);
pic_param = (VAPictureParameterBufferVC1) {
.forward_reference_picture = VA_INVALID_ID,
@@ -374,10 +374,12 @@ static int vaapi_vc1_start_frame(AVCodecContext *avctx, av_unused const uint8_t
switch (s->pict_type) {
case AV_PICTURE_TYPE_B:
- pic_param.backward_reference_picture = ff_vaapi_get_surface_id(s->next_pic.f);
+ if (s->next_pic.ptr)
+ pic_param.backward_reference_picture = ff_vaapi_get_surface_id(s->next_pic.ptr->f);
// fall-through
case AV_PICTURE_TYPE_P:
- pic_param.forward_reference_picture = ff_vaapi_get_surface_id(s->last_pic.f);
+ if (s->last_pic.ptr)
+ pic_param.forward_reference_picture = ff_vaapi_get_surface_id(s->last_pic.ptr->f);
break;
}
@@ -450,7 +452,7 @@ static int vaapi_vc1_end_frame(AVCodecContext *avctx)
{
VC1Context *v = avctx->priv_data;
MpegEncContext *s = &v->s;
- VAAPIDecodePicture *pic = s->cur_pic_ptr->hwaccel_picture_private;
+ VAAPIDecodePicture *pic = s->cur_pic.ptr->hwaccel_picture_private;
int ret;
ret = ff_vaapi_decode_issue(avctx, pic);
@@ -465,7 +467,7 @@ static int vaapi_vc1_decode_slice(AVCodecContext *avctx, const uint8_t *buffer,
{
const VC1Context *v = avctx->priv_data;
const MpegEncContext *s = &v->s;
- VAAPIDecodePicture *pic = s->cur_pic_ptr->hwaccel_picture_private;
+ VAAPIDecodePicture *pic = s->cur_pic.ptr->hwaccel_picture_private;
VASliceParameterBufferVC1 slice_param;
int mb_height;
int err;
diff --git a/libavcodec/vc1.c b/libavcodec/vc1.c
index 643232653c..987e77fcc7 100644
--- a/libavcodec/vc1.c
+++ b/libavcodec/vc1.c
@@ -856,7 +856,7 @@ int ff_vc1_parse_frame_header_adv(VC1Context *v, GetBitContext* gb)
v->s.pict_type = (v->fptype & 1) ? AV_PICTURE_TYPE_BI : AV_PICTURE_TYPE_B;
else
v->s.pict_type = (v->fptype & 1) ? AV_PICTURE_TYPE_P : AV_PICTURE_TYPE_I;
- v->s.cur_pic_ptr->f->pict_type = v->s.pict_type;
+ v->s.cur_pic.ptr->f->pict_type = v->s.pict_type;
if (!v->pic_header_flag)
goto parse_common_info;
}
diff --git a/libavcodec/vc1_block.c b/libavcodec/vc1_block.c
index 9cb9fd27bf..b880f978d1 100644
--- a/libavcodec/vc1_block.c
+++ b/libavcodec/vc1_block.c
@@ -59,9 +59,9 @@ static inline void init_block_index(VC1Context *v)
MpegEncContext *s = &v->s;
ff_init_block_index(s);
if (v->field_mode && !(v->second_field ^ v->tff)) {
- s->dest[0] += s->cur_pic_ptr->f->linesize[0];
- s->dest[1] += s->cur_pic_ptr->f->linesize[1];
- s->dest[2] += s->cur_pic_ptr->f->linesize[2];
+ s->dest[0] += s->cur_pic.ptr->f->linesize[0];
+ s->dest[1] += s->cur_pic.ptr->f->linesize[1];
+ s->dest[2] += s->cur_pic.ptr->f->linesize[2];
}
}
@@ -2106,7 +2106,7 @@ static int vc1_decode_b_mb_intfi(VC1Context *v)
if (bmvtype == BMV_TYPE_DIRECT) {
dmv_x[0] = dmv_y[0] = pred_flag[0] = 0;
dmv_x[1] = dmv_y[1] = pred_flag[0] = 0;
- if (!s->next_pic_ptr->field_picture) {
+ if (!s->next_pic.ptr->field_picture) {
av_log(s->avctx, AV_LOG_ERROR, "Mixed field/frame direct mode not supported\n");
return AVERROR_INVALIDDATA;
}
@@ -2272,7 +2272,7 @@ static int vc1_decode_b_mb_intfr(VC1Context *v)
direct = v->direct_mb_plane[mb_pos];
if (direct) {
- if (s->next_pic_ptr->field_picture)
+ if (s->next_pic.ptr->field_picture)
av_log(s->avctx, AV_LOG_WARNING, "Mixed frame/field direct mode not supported\n");
s->mv[0][0][0] = s->cur_pic.motion_val[0][s->block_index[0]][0] = scale_mv(s->next_pic.motion_val[1][s->block_index[0]][0], v->bfraction, 0, s->quarter_sample);
s->mv[0][0][1] = s->cur_pic.motion_val[0][s->block_index[0]][1] = scale_mv(s->next_pic.motion_val[1][s->block_index[0]][1], v->bfraction, 0, s->quarter_sample);
@@ -2969,7 +2969,7 @@ void ff_vc1_decode_blocks(VC1Context *v)
v->s.esc3_level_length = 0;
if (v->x8_type) {
- ff_intrax8_decode_picture(&v->x8, &v->s.cur_pic,
+ ff_intrax8_decode_picture(&v->x8, v->s.cur_pic.ptr,
&v->s.gb, &v->s.mb_x, &v->s.mb_y,
2 * v->pq + v->halfpq, v->pq * !v->pquantizer,
v->s.loop_filter, v->s.low_delay);
diff --git a/libavcodec/vc1_mc.c b/libavcodec/vc1_mc.c
index fad9a4c370..9adb71c7ad 100644
--- a/libavcodec/vc1_mc.c
+++ b/libavcodec/vc1_mc.c
@@ -187,8 +187,8 @@ void ff_vc1_mc_1mv(VC1Context *v, int dir)
!v->s.last_pic.data[0])
return;
- linesize = s->cur_pic_ptr->f->linesize[0];
- uvlinesize = s->cur_pic_ptr->f->linesize[1];
+ linesize = s->cur_pic.ptr->f->linesize[0];
+ uvlinesize = s->cur_pic.ptr->f->linesize[1];
mx = s->mv[dir][0][0];
my = s->mv[dir][0][1];
@@ -467,7 +467,7 @@ void ff_vc1_mc_4mv_luma(VC1Context *v, int n, int dir, int avg)
!v->s.last_pic.data[0])
return;
- linesize = s->cur_pic_ptr->f->linesize[0];
+ linesize = s->cur_pic.ptr->f->linesize[0];
mx = s->mv[dir][n][0];
my = s->mv[dir][n][1];
@@ -669,7 +669,7 @@ void ff_vc1_mc_4mv_chroma(VC1Context *v, int dir)
s->cur_pic.motion_val[1][s->block_index[0] + v->blocks_off][0] = tx;
s->cur_pic.motion_val[1][s->block_index[0] + v->blocks_off][1] = ty;
- uvlinesize = s->cur_pic_ptr->f->linesize[1];
+ uvlinesize = s->cur_pic.ptr->f->linesize[1];
uvmx = (tx + ((tx & 3) == 3)) >> 1;
uvmy = (ty + ((ty & 3) == 3)) >> 1;
@@ -856,7 +856,7 @@ void ff_vc1_mc_4mv_chroma4(VC1Context *v, int dir, int dir2, int avg)
if (CONFIG_GRAY && s->avctx->flags & AV_CODEC_FLAG_GRAY)
return;
- uvlinesize = s->cur_pic_ptr->f->linesize[1];
+ uvlinesize = s->cur_pic.ptr->f->linesize[1];
for (i = 0; i < 4; i++) {
int d = i < 2 ? dir: dir2;
@@ -1015,8 +1015,8 @@ void ff_vc1_interp_mc(VC1Context *v)
if (!v->field_mode && !v->s.next_pic.data[0])
return;
- linesize = s->cur_pic_ptr->f->linesize[0];
- uvlinesize = s->cur_pic_ptr->f->linesize[1];
+ linesize = s->cur_pic.ptr->f->linesize[0];
+ uvlinesize = s->cur_pic.ptr->f->linesize[1];
mx = s->mv[1][0][0];
my = s->mv[1][0][1];
diff --git a/libavcodec/vc1_pred.c b/libavcodec/vc1_pred.c
index 9141290d26..6e260fa053 100644
--- a/libavcodec/vc1_pred.c
+++ b/libavcodec/vc1_pred.c
@@ -719,7 +719,7 @@ void ff_vc1_pred_b_mv(VC1Context *v, int dmv_x[2], int dmv_y[2],
s->cur_pic.motion_val[1][xy][1] = 0;
return;
}
- if (direct && s->next_pic_ptr->field_picture)
+ if (direct && s->next_pic.ptr->field_picture)
av_log(s->avctx, AV_LOG_WARNING, "Mixed frame/field direct mode not supported\n");
s->mv[0][0][0] = scale_mv(s->next_pic.motion_val[1][xy][0], v->bfraction, 0, s->quarter_sample);
diff --git a/libavcodec/vc1dec.c b/libavcodec/vc1dec.c
index 36a47502f5..9b912bec1f 100644
--- a/libavcodec/vc1dec.c
+++ b/libavcodec/vc1dec.c
@@ -322,7 +322,7 @@ static int vc1_decode_sprites(VC1Context *v, GetBitContext* gb)
return AVERROR_UNKNOWN;
}
- if (v->two_sprites && (!s->last_pic_ptr || !s->last_pic.data[0])) {
+ if (v->two_sprites && (!s->last_pic.ptr || !s->last_pic.data[0])) {
av_log(avctx, AV_LOG_WARNING, "Need two sprites, only got one\n");
v->two_sprites = 0;
}
@@ -340,7 +340,7 @@ static void vc1_sprite_flush(AVCodecContext *avctx)
{
VC1Context *v = avctx->priv_data;
MpegEncContext *s = &v->s;
- MPVPicture *f = &s->cur_pic;
+ MPVWorkPicture *f = &s->cur_pic;
int plane, i;
/* Windows Media Image codecs have a convergence interval of two keyframes.
@@ -837,10 +837,10 @@ static int vc1_decode_frame(AVCodecContext *avctx, AVFrame *pict,
/* no supplementary picture */
if (buf_size == 0 || (buf_size == 4 && AV_RB32(buf) == VC1_CODE_ENDOFSEQ)) {
/* special case for last picture */
- if (s->low_delay == 0 && s->next_pic_ptr) {
- if ((ret = av_frame_ref(pict, s->next_pic_ptr->f)) < 0)
+ if (s->low_delay == 0 && s->next_pic.ptr) {
+ if ((ret = av_frame_ref(pict, s->next_pic.ptr->f)) < 0)
return ret;
- s->next_pic_ptr = NULL;
+ s->next_pic.ptr = NULL;
*got_frame = 1;
}
@@ -1047,7 +1047,7 @@ static int vc1_decode_frame(AVCodecContext *avctx, AVFrame *pict,
}
/* skip B-frames if we don't have reference frames */
- if (!s->last_pic_ptr && s->pict_type == AV_PICTURE_TYPE_B) {
+ if (!s->last_pic.ptr && s->pict_type == AV_PICTURE_TYPE_B) {
av_log(v->s.avctx, AV_LOG_DEBUG, "Skipping B frame without reference frames\n");
goto end;
}
@@ -1061,21 +1061,21 @@ static int vc1_decode_frame(AVCodecContext *avctx, AVFrame *pict,
goto err;
}
- v->s.cur_pic_ptr->field_picture = v->field_mode;
- v->s.cur_pic_ptr->f->flags |= AV_FRAME_FLAG_INTERLACED * (v->fcm != PROGRESSIVE);
- v->s.cur_pic_ptr->f->flags |= AV_FRAME_FLAG_TOP_FIELD_FIRST * !!v->tff;
- v->last_interlaced = v->s.last_pic_ptr ? v->s.last_pic_ptr->f->flags & AV_FRAME_FLAG_INTERLACED : 0;
- v->next_interlaced = v->s.next_pic_ptr ? v->s.next_pic_ptr->f->flags & AV_FRAME_FLAG_INTERLACED : 0;
+ v->s.cur_pic.ptr->field_picture = v->field_mode;
+ v->s.cur_pic.ptr->f->flags |= AV_FRAME_FLAG_INTERLACED * (v->fcm != PROGRESSIVE);
+ v->s.cur_pic.ptr->f->flags |= AV_FRAME_FLAG_TOP_FIELD_FIRST * !!v->tff;
+ v->last_interlaced = v->s.last_pic.ptr ? v->s.last_pic.ptr->f->flags & AV_FRAME_FLAG_INTERLACED : 0;
+ v->next_interlaced = v->s.next_pic.ptr ? v->s.next_pic.ptr->f->flags & AV_FRAME_FLAG_INTERLACED : 0;
// process pulldown flags
- s->cur_pic_ptr->f->repeat_pict = 0;
+ s->cur_pic.ptr->f->repeat_pict = 0;
// Pulldown flags are only valid when 'broadcast' has been set.
if (v->rff) {
// repeat field
- s->cur_pic_ptr->f->repeat_pict = 1;
+ s->cur_pic.ptr->f->repeat_pict = 1;
} else if (v->rptfrm) {
// repeat frames
- s->cur_pic_ptr->f->repeat_pict = v->rptfrm * 2;
+ s->cur_pic.ptr->f->repeat_pict = v->rptfrm * 2;
}
if (avctx->hwaccel) {
@@ -1137,7 +1137,7 @@ static int vc1_decode_frame(AVCodecContext *avctx, AVFrame *pict,
ret = AVERROR_INVALIDDATA;
goto err;
}
- v->s.cur_pic_ptr->f->pict_type = v->s.pict_type;
+ v->s.cur_pic.ptr->f->pict_type = v->s.pict_type;
ret = hwaccel->start_frame(avctx, buf_start_second_field,
(buf + buf_size) - buf_start_second_field);
@@ -1355,16 +1355,16 @@ image:
*got_frame = 1;
} else {
if (s->pict_type == AV_PICTURE_TYPE_B || s->low_delay) {
- if ((ret = av_frame_ref(pict, s->cur_pic_ptr->f)) < 0)
+ if ((ret = av_frame_ref(pict, s->cur_pic.ptr->f)) < 0)
goto err;
if (!v->field_mode)
- ff_print_debug_info(s, s->cur_pic_ptr, pict);
+ ff_print_debug_info(s, s->cur_pic.ptr, pict);
*got_frame = 1;
- } else if (s->last_pic_ptr) {
- if ((ret = av_frame_ref(pict, s->last_pic_ptr->f)) < 0)
+ } else if (s->last_pic.ptr) {
+ if ((ret = av_frame_ref(pict, s->last_pic.ptr->f)) < 0)
goto err;
if (!v->field_mode)
- ff_print_debug_info(s, s->last_pic_ptr, pict);
+ ff_print_debug_info(s, s->last_pic.ptr, pict);
*got_frame = 1;
}
}
diff --git a/libavcodec/vdpau.c b/libavcodec/vdpau.c
index f46bfa2bdf..0dd5641603 100644
--- a/libavcodec/vdpau.c
+++ b/libavcodec/vdpau.c
@@ -370,7 +370,7 @@ int ff_vdpau_common_end_frame(AVCodecContext *avctx, AVFrame *frame,
int ff_vdpau_mpeg_end_frame(AVCodecContext *avctx)
{
MpegEncContext *s = avctx->priv_data;
- MPVPicture *pic = s->cur_pic_ptr;
+ MPVPicture *pic = s->cur_pic.ptr;
struct vdpau_picture_context *pic_ctx = pic->hwaccel_picture_private;
int val;
diff --git a/libavcodec/vdpau_mpeg12.c b/libavcodec/vdpau_mpeg12.c
index abd8cb19af..1ce0bfaa07 100644
--- a/libavcodec/vdpau_mpeg12.c
+++ b/libavcodec/vdpau_mpeg12.c
@@ -35,7 +35,7 @@ static int vdpau_mpeg_start_frame(AVCodecContext *avctx,
const uint8_t *buffer, uint32_t size)
{
MpegEncContext * const s = avctx->priv_data;
- MPVPicture *pic = s->cur_pic_ptr;
+ MPVPicture *pic = s->cur_pic.ptr;
struct vdpau_picture_context *pic_ctx = pic->hwaccel_picture_private;
VdpPictureInfoMPEG1Or2 *info = &pic_ctx->info.mpeg;
VdpVideoSurface ref;
@@ -47,12 +47,12 @@ static int vdpau_mpeg_start_frame(AVCodecContext *avctx,
switch (s->pict_type) {
case AV_PICTURE_TYPE_B:
- ref = ff_vdpau_get_surface_id(s->next_pic.f);
+ ref = ff_vdpau_get_surface_id(s->next_pic.ptr->f);
assert(ref != VDP_INVALID_HANDLE);
info->backward_reference = ref;
/* fall through to forward prediction */
case AV_PICTURE_TYPE_P:
- ref = ff_vdpau_get_surface_id(s->last_pic.f);
+ ref = ff_vdpau_get_surface_id(s->last_pic.ptr->f);
info->forward_reference = ref;
}
@@ -87,7 +87,7 @@ static int vdpau_mpeg_decode_slice(AVCodecContext *avctx,
const uint8_t *buffer, uint32_t size)
{
MpegEncContext * const s = avctx->priv_data;
- MPVPicture *pic = s->cur_pic_ptr;
+ MPVPicture *pic = s->cur_pic.ptr;
struct vdpau_picture_context *pic_ctx = pic->hwaccel_picture_private;
int val;
diff --git a/libavcodec/vdpau_mpeg4.c b/libavcodec/vdpau_mpeg4.c
index e2766835f6..40af8655cc 100644
--- a/libavcodec/vdpau_mpeg4.c
+++ b/libavcodec/vdpau_mpeg4.c
@@ -34,7 +34,7 @@ static int vdpau_mpeg4_start_frame(AVCodecContext *avctx,
{
Mpeg4DecContext *ctx = avctx->priv_data;
MpegEncContext * const s = &ctx->m;
- MPVPicture *pic = s->cur_pic_ptr;
+ MPVPicture *pic = s->cur_pic.ptr;
struct vdpau_picture_context *pic_ctx = pic->hwaccel_picture_private;
VdpPictureInfoMPEG4Part2 *info = &pic_ctx->info.mpeg4;
VdpVideoSurface ref;
@@ -47,13 +47,13 @@ static int vdpau_mpeg4_start_frame(AVCodecContext *avctx,
switch (s->pict_type) {
case AV_PICTURE_TYPE_B:
- ref = ff_vdpau_get_surface_id(s->next_pic.f);
+ ref = ff_vdpau_get_surface_id(s->next_pic.ptr->f);
assert(ref != VDP_INVALID_HANDLE);
info->backward_reference = ref;
info->vop_coding_type = 2;
/* fall-through */
case AV_PICTURE_TYPE_P:
- ref = ff_vdpau_get_surface_id(s->last_pic.f);
+ ref = ff_vdpau_get_surface_id(s->last_pic.ptr->f);
assert(ref != VDP_INVALID_HANDLE);
info->forward_reference = ref;
}
diff --git a/libavcodec/vdpau_vc1.c b/libavcodec/vdpau_vc1.c
index 9ed1665cad..d02a454bb8 100644
--- a/libavcodec/vdpau_vc1.c
+++ b/libavcodec/vdpau_vc1.c
@@ -36,7 +36,7 @@ static int vdpau_vc1_start_frame(AVCodecContext *avctx,
{
VC1Context * const v = avctx->priv_data;
MpegEncContext * const s = &v->s;
- MPVPicture *pic = s->cur_pic_ptr;
+ MPVPicture *pic = s->cur_pic.ptr;
struct vdpau_picture_context *pic_ctx = pic->hwaccel_picture_private;
VdpPictureInfoVC1 *info = &pic_ctx->info.vc1;
VdpVideoSurface ref;
@@ -47,15 +47,15 @@ static int vdpau_vc1_start_frame(AVCodecContext *avctx,
switch (s->pict_type) {
case AV_PICTURE_TYPE_B:
- if (s->next_pic_ptr) {
- ref = ff_vdpau_get_surface_id(s->next_pic.f);
+ if (s->next_pic.ptr) {
+ ref = ff_vdpau_get_surface_id(s->next_pic.ptr->f);
assert(ref != VDP_INVALID_HANDLE);
info->backward_reference = ref;
}
/* fall-through */
case AV_PICTURE_TYPE_P:
- if (s->last_pic_ptr) {
- ref = ff_vdpau_get_surface_id(s->last_pic.f);
+ if (s->last_pic.ptr) {
+ ref = ff_vdpau_get_surface_id(s->last_pic.ptr->f);
assert(ref != VDP_INVALID_HANDLE);
info->forward_reference = ref;
}
@@ -104,7 +104,7 @@ static int vdpau_vc1_decode_slice(AVCodecContext *avctx,
{
VC1Context * const v = avctx->priv_data;
MpegEncContext * const s = &v->s;
- MPVPicture *pic = s->cur_pic_ptr;
+ MPVPicture *pic = s->cur_pic.ptr;
struct vdpau_picture_context *pic_ctx = pic->hwaccel_picture_private;
int val;
diff --git a/libavcodec/videotoolbox.c b/libavcodec/videotoolbox.c
index 7807047aa6..a8ea8ff7ff 100644
--- a/libavcodec/videotoolbox.c
+++ b/libavcodec/videotoolbox.c
@@ -1108,7 +1108,7 @@ static int videotoolbox_mpeg_decode_slice(AVCodecContext *avctx,
static int videotoolbox_mpeg_end_frame(AVCodecContext *avctx)
{
MpegEncContext *s = avctx->priv_data;
- AVFrame *frame = s->cur_pic_ptr->f;
+ AVFrame *frame = s->cur_pic.ptr->f;
return ff_videotoolbox_common_end_frame(avctx, frame);
}
diff --git a/libavcodec/wmv2dec.c b/libavcodec/wmv2dec.c
index 432d6f7223..bb3829dcd6 100644
--- a/libavcodec/wmv2dec.c
+++ b/libavcodec/wmv2dec.c
@@ -330,7 +330,7 @@ int ff_wmv2_decode_secondary_picture_header(MpegEncContext *s)
s->esc3_run_length = 0;
if (w->j_type) {
- ff_intrax8_decode_picture(&w->x8, &s->cur_pic,
+ ff_intrax8_decode_picture(&w->x8, s->cur_pic.ptr,
&s->gb, &s->mb_x, &s->mb_y,
2 * s->qscale, (s->qscale - 1) | 1,
s->loop_filter, s->low_delay);
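
Several hunks above (e.g. the vaapi_vc1.c and nvdec ones) add `ptr ? ptr->f : NULL` guards before dereferencing reference pictures, since the new `MPVWorkPicture` slot may legitimately be empty. A self-contained miniature of that guard pattern, using hypothetical types rather than FFmpeg's actual structs:

```c
#include <stddef.h>

/* Hypothetical miniature of the guard pattern: a picture slot now holds
 * a pointer that may be NULL, so callers propagate NULL to the lookup
 * helper instead of dereferencing unconditionally. */
typedef struct Frame   { int surface_id; } Frame;
typedef struct WorkPic { Frame *f; }       WorkPic;
typedef struct Pic     { WorkPic *ptr; }   Pic;

static int get_surface_id(const Frame *f)
{
    return f ? f->surface_id : -1; /* -1 stands in for an "invalid" id */
}

static int ref_surface(const Pic *pic)
{
    /* The guarded form from the patches: only touch ->f when ptr is set. */
    return get_surface_id(pic->ptr ? pic->ptr->f : NULL);
}
```

The point of the guard is that a missing reference (e.g. a B-frame decoded without its forward reference) degrades to the "invalid" sentinel instead of a NULL dereference.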
--
2.40.1
* [FFmpeg-devel] [PATCH v2 44/71] avcodec/error_resilience: Deduplicate cleanup code
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (41 preceding siblings ...)
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 43/71] avcodec/mpegpicture: Split MPVPicture into WorkPicture and ordinary Pic Andreas Rheinhardt
@ 2024-05-11 20:51 ` Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 45/71] avcodec/mpegvideo_enc: Factor setting length of B frame chain out Andreas Rheinhardt
` (27 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:51 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/error_resilience.c | 20 ++++++--------------
1 file changed, 6 insertions(+), 14 deletions(-)
diff --git a/libavcodec/error_resilience.c b/libavcodec/error_resilience.c
index efbacb8760..66d03987b6 100644
--- a/libavcodec/error_resilience.c
+++ b/libavcodec/error_resilience.c
@@ -948,19 +948,10 @@ void ff_er_frame_end(ERContext *s, int *decode_error_flags)
s->ref_index[i] = av_calloc(s->mb_stride * s->mb_height, 4 * sizeof(uint8_t));
s->motion_val_base[i] = av_calloc(size + 4, 2 * sizeof(uint16_t));
if (!s->ref_index[i] || !s->motion_val_base[i])
- break;
+ goto cleanup;
s->cur_pic.ref_index[i] = s->ref_index[i];
s->cur_pic.motion_val[i] = s->motion_val_base[i] + 4;
}
- if (i < 2) {
- for (i = 0; i < 2; i++) {
- av_freep(&s->ref_index[i]);
- av_freep(&s->motion_val_base[i]);
- s->cur_pic.ref_index[i] = NULL;
- s->cur_pic.motion_val[i] = NULL;
- }
- return;
- }
}
if (s->avctx->debug & FF_DEBUG_ER) {
@@ -1344,14 +1335,15 @@ void ff_er_frame_end(ERContext *s, int *decode_error_flags)
s->mbintra_table[mb_xy] = 1;
}
+ memset(&s->cur_pic, 0, sizeof(ERPicture));
+ memset(&s->last_pic, 0, sizeof(ERPicture));
+ memset(&s->next_pic, 0, sizeof(ERPicture));
+
+cleanup:
for (i = 0; i < 2; i++) {
av_freep(&s->ref_index[i]);
av_freep(&s->motion_val_base[i]);
s->cur_pic.ref_index[i] = NULL;
s->cur_pic.motion_val[i] = NULL;
}
-
- memset(&s->cur_pic, 0, sizeof(ERPicture));
- memset(&s->last_pic, 0, sizeof(ERPicture));
- memset(&s->next_pic, 0, sizeof(ERPicture));
}
--
2.40.1
* [FFmpeg-devel] [PATCH v2 45/71] avcodec/mpegvideo_enc: Factor setting length of B frame chain out
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (42 preceding siblings ...)
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 44/71] avcodec/error_resilience: Deduplicate cleanup code Andreas Rheinhardt
@ 2024-05-11 20:51 ` Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 46/71] avcodec/mpegvideo_enc: Return early when getting length of B frame chain Andreas Rheinhardt
` (26 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:51 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
It already avoids a goto and will be useful in the future
to be able to specify each function's tasks and obligations.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpegvideo_enc.c | 33 +++++++++++++++++++++++++--------
1 file changed, 25 insertions(+), 8 deletions(-)
diff --git a/libavcodec/mpegvideo_enc.c b/libavcodec/mpegvideo_enc.c
index cd25cd3221..025204f395 100644
--- a/libavcodec/mpegvideo_enc.c
+++ b/libavcodec/mpegvideo_enc.c
@@ -1472,13 +1472,15 @@ fail:
return best_b_count;
}
-static int select_input_picture(MpegEncContext *s)
+/**
+ * Determines whether an input picture is discarded or not
+ * and if not determines the length of the next chain of B frames
+ * and puts these pictures (including the P frame) into
+ * reordered_input_picture.
+ */
+static int set_bframe_chain_length(MpegEncContext *s)
{
- int i, ret;
-
- for (int i = 1; i <= MAX_B_FRAMES; i++)
- s->reordered_input_picture[i - 1] = s->reordered_input_picture[i];
- s->reordered_input_picture[MAX_B_FRAMES] = NULL;
+ int i;
/* set next picture type & ordering */
if (!s->reordered_input_picture[0] && s->input_picture[0]) {
@@ -1491,7 +1493,7 @@ static int select_input_picture(MpegEncContext *s)
ff_vbv_update(s, 0);
- goto no_output_pic;
+ return 0;
}
}
@@ -1598,7 +1600,22 @@ static int select_input_picture(MpegEncContext *s)
}
}
}
-no_output_pic:
+
+ return 0;
+}
+
+static int select_input_picture(MpegEncContext *s)
+{
+ int ret;
+
+ for (int i = 1; i <= MAX_B_FRAMES; i++)
+ s->reordered_input_picture[i - 1] = s->reordered_input_picture[i];
+ s->reordered_input_picture[MAX_B_FRAMES] = NULL;
+
+ ret = set_bframe_chain_length(s);
+ if (ret < 0)
+ return ret;
+
av_frame_unref(s->new_pic);
if (s->reordered_input_picture[0]) {
--
2.40.1
* [FFmpeg-devel] [PATCH v2 46/71] avcodec/mpegvideo_enc: Return early when getting length of B frame chain
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (43 preceding siblings ...)
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 45/71] avcodec/mpegvideo_enc: Factor setting length of B frame chain out Andreas Rheinhardt
@ 2024-05-11 20:51 ` Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 47/71] avcodec/mpegvideo_enc: Reindentation Andreas Rheinhardt
` (25 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:51 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
Possible now that this is a function of its own.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpegvideo_enc.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/libavcodec/mpegvideo_enc.c b/libavcodec/mpegvideo_enc.c
index 025204f395..610067eaef 100644
--- a/libavcodec/mpegvideo_enc.c
+++ b/libavcodec/mpegvideo_enc.c
@@ -1482,8 +1482,11 @@ static int set_bframe_chain_length(MpegEncContext *s)
{
int i;
+ /* Either nothing to do or can't do anything */
+ if (s->reordered_input_picture[0] || !s->input_picture[0])
+ return 0;
+
/* set next picture type & ordering */
- if (!s->reordered_input_picture[0] && s->input_picture[0]) {
if (s->frame_skip_threshold || s->frame_skip_factor) {
if (s->picture_in_gop_number < s->gop_size &&
s->next_pic.ptr &&
@@ -1599,7 +1602,6 @@ static int set_bframe_chain_length(MpegEncContext *s)
s->coded_picture_number++;
}
}
- }
return 0;
}
--
2.40.1
* [FFmpeg-devel] [PATCH v2 47/71] avcodec/mpegvideo_enc: Reindentation
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (44 preceding siblings ...)
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 46/71] avcodec/mpegvideo_enc: Return early when getting length of B frame chain Andreas Rheinhardt
@ 2024-05-11 20:51 ` Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 48/71] avcodec/mpeg12dec: Don't initialize inter tables for IPU Andreas Rheinhardt
` (24 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:51 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
Also try to use loop-scoped variables for iterators.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpegvideo_enc.c | 193 ++++++++++++++++++-------------------
1 file changed, 96 insertions(+), 97 deletions(-)
diff --git a/libavcodec/mpegvideo_enc.c b/libavcodec/mpegvideo_enc.c
index 610067eaef..c9dc9959df 100644
--- a/libavcodec/mpegvideo_enc.c
+++ b/libavcodec/mpegvideo_enc.c
@@ -1480,129 +1480,128 @@ fail:
*/
static int set_bframe_chain_length(MpegEncContext *s)
{
- int i;
-
/* Either nothing to do or can't do anything */
if (s->reordered_input_picture[0] || !s->input_picture[0])
return 0;
/* set next picture type & ordering */
- if (s->frame_skip_threshold || s->frame_skip_factor) {
- if (s->picture_in_gop_number < s->gop_size &&
- s->next_pic.ptr &&
- skip_check(s, s->input_picture[0], s->next_pic.ptr)) {
- // FIXME check that the gop check above is +-1 correct
- ff_mpeg_unref_picture(s->input_picture[0]);
+ if (s->frame_skip_threshold || s->frame_skip_factor) {
+ if (s->picture_in_gop_number < s->gop_size &&
+ s->next_pic.ptr &&
+ skip_check(s, s->input_picture[0], s->next_pic.ptr)) {
+ // FIXME check that the gop check above is +-1 correct
+ ff_mpeg_unref_picture(s->input_picture[0]);
- ff_vbv_update(s, 0);
+ ff_vbv_update(s, 0);
- return 0;
- }
+ return 0;
}
+ }
- if (/*s->picture_in_gop_number >= s->gop_size ||*/
- !s->next_pic.ptr || s->intra_only) {
- s->reordered_input_picture[0] = s->input_picture[0];
- s->reordered_input_picture[0]->f->pict_type = AV_PICTURE_TYPE_I;
- s->reordered_input_picture[0]->coded_picture_number =
- s->coded_picture_number++;
- } else {
- int b_frames = 0;
-
- if (s->avctx->flags & AV_CODEC_FLAG_PASS2) {
- for (i = 0; i < s->max_b_frames + 1; i++) {
- int pict_num = s->input_picture[0]->display_picture_number + i;
-
- if (pict_num >= s->rc_context.num_entries)
- break;
- if (!s->input_picture[i]) {
- s->rc_context.entry[pict_num - 1].new_pict_type = AV_PICTURE_TYPE_P;
- break;
- }
+ if (/*s->picture_in_gop_number >= s->gop_size ||*/
+ !s->next_pic.ptr || s->intra_only) {
+ s->reordered_input_picture[0] = s->input_picture[0];
+ s->reordered_input_picture[0]->f->pict_type = AV_PICTURE_TYPE_I;
+ s->reordered_input_picture[0]->coded_picture_number =
+ s->coded_picture_number++;
+ } else {
+ int b_frames = 0;
- s->input_picture[i]->f->pict_type =
- s->rc_context.entry[pict_num].new_pict_type;
- }
- }
+ if (s->avctx->flags & AV_CODEC_FLAG_PASS2) {
+ for (int i = 0; i < s->max_b_frames + 1; i++) {
+ int pict_num = s->input_picture[0]->display_picture_number + i;
- if (s->b_frame_strategy == 0) {
- b_frames = s->max_b_frames;
- while (b_frames && !s->input_picture[b_frames])
- b_frames--;
- } else if (s->b_frame_strategy == 1) {
- for (i = 1; i < s->max_b_frames + 1; i++) {
- if (s->input_picture[i] &&
- s->input_picture[i]->b_frame_score == 0) {
- s->input_picture[i]->b_frame_score =
- get_intra_count(s,
- s->input_picture[i ]->f->data[0],
- s->input_picture[i - 1]->f->data[0],
- s->linesize) + 1;
- }
- }
- for (i = 0; i < s->max_b_frames + 1; i++) {
- if (!s->input_picture[i] ||
- s->input_picture[i]->b_frame_score - 1 >
- s->mb_num / s->b_sensitivity)
- break;
+ if (pict_num >= s->rc_context.num_entries)
+ break;
+ if (!s->input_picture[i]) {
+ s->rc_context.entry[pict_num - 1].new_pict_type = AV_PICTURE_TYPE_P;
+ break;
}
- b_frames = FFMAX(0, i - 1);
+ s->input_picture[i]->f->pict_type =
+ s->rc_context.entry[pict_num].new_pict_type;
+ }
+ }
- /* reset scores */
- for (i = 0; i < b_frames + 1; i++) {
- s->input_picture[i]->b_frame_score = 0;
- }
- } else if (s->b_frame_strategy == 2) {
- b_frames = estimate_best_b_count(s);
- if (b_frames < 0) {
- ff_mpeg_unref_picture(s->input_picture[0]);
- return b_frames;
+ if (s->b_frame_strategy == 0) {
+ b_frames = s->max_b_frames;
+ while (b_frames && !s->input_picture[b_frames])
+ b_frames--;
+ } else if (s->b_frame_strategy == 1) {
+ int i;
+ for (i = 1; i < s->max_b_frames + 1; i++) {
+ if (s->input_picture[i] &&
+ s->input_picture[i]->b_frame_score == 0) {
+ s->input_picture[i]->b_frame_score =
+ get_intra_count(s,
+ s->input_picture[i ]->f->data[0],
+ s->input_picture[i - 1]->f->data[0],
+ s->linesize) + 1;
}
}
+ for (i = 0; i < s->max_b_frames + 1; i++) {
+ if (!s->input_picture[i] ||
+ s->input_picture[i]->b_frame_score - 1 >
+ s->mb_num / s->b_sensitivity)
+ break;
+ }
- emms_c();
+ b_frames = FFMAX(0, i - 1);
- for (i = b_frames - 1; i >= 0; i--) {
- int type = s->input_picture[i]->f->pict_type;
- if (type && type != AV_PICTURE_TYPE_B)
- b_frames = i;
+ /* reset scores */
+ for (i = 0; i < b_frames + 1; i++) {
+ s->input_picture[i]->b_frame_score = 0;
}
- if (s->input_picture[b_frames]->f->pict_type == AV_PICTURE_TYPE_B &&
- b_frames == s->max_b_frames) {
- av_log(s->avctx, AV_LOG_ERROR,
- "warning, too many B-frames in a row\n");
+ } else if (s->b_frame_strategy == 2) {
+ b_frames = estimate_best_b_count(s);
+ if (b_frames < 0) {
+ ff_mpeg_unref_picture(s->input_picture[0]);
+ return b_frames;
}
+ }
- if (s->picture_in_gop_number + b_frames >= s->gop_size) {
- if ((s->mpv_flags & FF_MPV_FLAG_STRICT_GOP) &&
- s->gop_size > s->picture_in_gop_number) {
- b_frames = s->gop_size - s->picture_in_gop_number - 1;
- } else {
- if (s->avctx->flags & AV_CODEC_FLAG_CLOSED_GOP)
- b_frames = 0;
- s->input_picture[b_frames]->f->pict_type = AV_PICTURE_TYPE_I;
- }
- }
+ emms_c();
- if ((s->avctx->flags & AV_CODEC_FLAG_CLOSED_GOP) && b_frames &&
- s->input_picture[b_frames]->f->pict_type == AV_PICTURE_TYPE_I)
- b_frames--;
+ for (int i = b_frames - 1; i >= 0; i--) {
+ int type = s->input_picture[i]->f->pict_type;
+ if (type && type != AV_PICTURE_TYPE_B)
+ b_frames = i;
+ }
+ if (s->input_picture[b_frames]->f->pict_type == AV_PICTURE_TYPE_B &&
+ b_frames == s->max_b_frames) {
+ av_log(s->avctx, AV_LOG_ERROR,
+ "warning, too many B-frames in a row\n");
+ }
- s->reordered_input_picture[0] = s->input_picture[b_frames];
- if (s->reordered_input_picture[0]->f->pict_type != AV_PICTURE_TYPE_I)
- s->reordered_input_picture[0]->f->pict_type = AV_PICTURE_TYPE_P;
- s->reordered_input_picture[0]->coded_picture_number =
- s->coded_picture_number++;
- for (i = 0; i < b_frames; i++) {
- s->reordered_input_picture[i + 1] = s->input_picture[i];
- s->reordered_input_picture[i + 1]->f->pict_type =
- AV_PICTURE_TYPE_B;
- s->reordered_input_picture[i + 1]->coded_picture_number =
- s->coded_picture_number++;
+ if (s->picture_in_gop_number + b_frames >= s->gop_size) {
+ if ((s->mpv_flags & FF_MPV_FLAG_STRICT_GOP) &&
+ s->gop_size > s->picture_in_gop_number) {
+ b_frames = s->gop_size - s->picture_in_gop_number - 1;
+ } else {
+ if (s->avctx->flags & AV_CODEC_FLAG_CLOSED_GOP)
+ b_frames = 0;
+ s->input_picture[b_frames]->f->pict_type = AV_PICTURE_TYPE_I;
}
}
+ if ((s->avctx->flags & AV_CODEC_FLAG_CLOSED_GOP) && b_frames &&
+ s->input_picture[b_frames]->f->pict_type == AV_PICTURE_TYPE_I)
+ b_frames--;
+
+ s->reordered_input_picture[0] = s->input_picture[b_frames];
+ if (s->reordered_input_picture[0]->f->pict_type != AV_PICTURE_TYPE_I)
+ s->reordered_input_picture[0]->f->pict_type = AV_PICTURE_TYPE_P;
+ s->reordered_input_picture[0]->coded_picture_number =
+ s->coded_picture_number++;
+ for (int i = 0; i < b_frames; i++) {
+ s->reordered_input_picture[i + 1] = s->input_picture[i];
+ s->reordered_input_picture[i + 1]->f->pict_type =
+ AV_PICTURE_TYPE_B;
+ s->reordered_input_picture[i + 1]->coded_picture_number =
+ s->coded_picture_number++;
+ }
+ }
+
return 0;
}
--
2.40.1
* [FFmpeg-devel] [PATCH v2 48/71] avcodec/mpeg12dec: Don't initialize inter tables for IPU
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (45 preceding siblings ...)
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 47/71] avcodec/mpegvideo_enc: Reindentation Andreas Rheinhardt
@ 2024-05-11 20:51 ` Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 49/71] avcodec/mpeg12dec: Only initialize IDCT " Andreas Rheinhardt
` (23 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:51 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
IPU is intra-only.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpeg12dec.c | 16 ++--------------
1 file changed, 2 insertions(+), 14 deletions(-)
diff --git a/libavcodec/mpeg12dec.c b/libavcodec/mpeg12dec.c
index e3f2dd8af7..097e4ba19a 100644
--- a/libavcodec/mpeg12dec.c
+++ b/libavcodec/mpeg12dec.c
@@ -2751,13 +2751,8 @@ static int ipu_decode_frame(AVCodecContext *avctx, AVFrame *frame,
m->intra_vlc_format = !!(s->flags & 0x20);
m->alternate_scan = !!(s->flags & 0x10);
- if (s->flags & 0x10) {
- ff_init_scantable(m->idsp.idct_permutation, &m->inter_scantable, ff_alternate_vertical_scan);
- ff_init_scantable(m->idsp.idct_permutation, &m->intra_scantable, ff_alternate_vertical_scan);
- } else {
- ff_init_scantable(m->idsp.idct_permutation, &m->inter_scantable, ff_zigzag_direct);
- ff_init_scantable(m->idsp.idct_permutation, &m->intra_scantable, ff_zigzag_direct);
- }
+ ff_init_scantable(m->idsp.idct_permutation, &m->intra_scantable,
+ s->flags & 0x10 ? ff_alternate_vertical_scan : ff_zigzag_direct);
m->last_dc[0] = m->last_dc[1] = m->last_dc[2] = 1 << (7 + (s->flags & 3));
m->qscale = 1;
@@ -2846,13 +2841,6 @@ static av_cold int ipu_decode_init(AVCodecContext *avctx)
m->chroma_intra_matrix[j] = v;
}
- for (int i = 0; i < 64; i++) {
- int j = m->idsp.idct_permutation[i];
- int v = ff_mpeg1_default_non_intra_matrix[i];
- m->inter_matrix[j] = v;
- m->chroma_inter_matrix[j] = v;
- }
-
return 0;
}
--
2.40.1
* [FFmpeg-devel] [PATCH v2 49/71] avcodec/mpeg12dec: Only initialize IDCT for IPU
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (46 preceding siblings ...)
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 48/71] avcodec/mpeg12dec: Don't initialize inter tables for IPU Andreas Rheinhardt
@ 2024-05-11 20:51 ` Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 50/71] avcodec/mpeg12dec: Remove write-only assignment Andreas Rheinhardt
` (22 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:51 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
This is all that is used. This is in preparation for further
commits that will extend ff_mpv_decode_init() in a way
that will make it possible to fail and require cleanup.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpeg12dec.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/libavcodec/mpeg12dec.c b/libavcodec/mpeg12dec.c
index 097e4ba19a..3cd706de36 100644
--- a/libavcodec/mpeg12dec.c
+++ b/libavcodec/mpeg12dec.c
@@ -2830,8 +2830,9 @@ static av_cold int ipu_decode_init(AVCodecContext *avctx)
MpegEncContext *m = &s->m;
avctx->pix_fmt = AV_PIX_FMT_YUV420P;
+ m->avctx = avctx;
- ff_mpv_decode_init(m, avctx);
+ ff_idctdsp_init(&m->idsp, avctx);
ff_mpeg12_init_vlcs();
for (int i = 0; i < 64; i++) {
--
2.40.1
_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
To unsubscribe, visit link above, or email
ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".
^ permalink raw reply [flat|nested] 75+ messages in thread
* [FFmpeg-devel] [PATCH v2 50/71] avcodec/mpeg12dec: Remove write-only assignment
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (47 preceding siblings ...)
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 49/71] avcodec/mpeg12dec: Only initialize IDCT " Andreas Rheinhardt
@ 2024-05-11 20:51 ` Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 51/71] avcodec/mpeg12dec: Set out_format only once Andreas Rheinhardt
` (21 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:51 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpeg12dec.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/libavcodec/mpeg12dec.c b/libavcodec/mpeg12dec.c
index 3cd706de36..e573d3cdff 100644
--- a/libavcodec/mpeg12dec.c
+++ b/libavcodec/mpeg12dec.c
@@ -2788,8 +2788,6 @@ static int ipu_decode_frame(AVCodecContext *avctx, AVFrame *frame,
m->intra_scantable.permutated,
m->last_dc, s->block[n],
n, m->qscale);
- if (ret >= 0)
- m->block_last_index[n] = ret;
} else {
ret = mpeg2_decode_block_intra(m, s->block[n], n);
}
--
2.40.1
_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
To unsubscribe, visit link above, or email
ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".
^ permalink raw reply [flat|nested] 75+ messages in thread
* [FFmpeg-devel] [PATCH v2 51/71] avcodec/mpeg12dec: Set out_format only once
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (48 preceding siblings ...)
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 50/71] avcodec/mpeg12dec: Remove write-only assignment Andreas Rheinhardt
@ 2024-05-11 20:51 ` Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 52/71] avformat/riff: Declare VCR2 to be MPEG-2 Andreas Rheinhardt
` (20 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:51 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpeg12dec.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/libavcodec/mpeg12dec.c b/libavcodec/mpeg12dec.c
index e573d3cdff..0e2012b324 100644
--- a/libavcodec/mpeg12dec.c
+++ b/libavcodec/mpeg12dec.c
@@ -794,6 +794,8 @@ static av_cold int mpeg_decode_init(AVCodecContext *avctx)
Mpeg1Context *s = avctx->priv_data;
MpegEncContext *s2 = &s->mpeg_enc_ctx;
+ s2->out_format = FMT_MPEG1;
+
if ( avctx->codec_tag != AV_RL32("VCR2")
&& avctx->codec_tag != AV_RL32("BW10"))
avctx->coded_width = avctx->coded_height = 0; // do not trust dimensions from input
@@ -1859,7 +1861,6 @@ static int mpeg1_decode_sequence(AVCodecContext *avctx,
s->chroma_format = 1;
s->codec_id =
s->avctx->codec_id = AV_CODEC_ID_MPEG1VIDEO;
- s->out_format = FMT_MPEG1;
if (s->avctx->flags & AV_CODEC_FLAG_LOW_DELAY)
s->low_delay = 1;
@@ -1877,7 +1878,6 @@ static int vcr2_init_sequence(AVCodecContext *avctx)
int i, v, ret;
/* start new MPEG-1 context decoding */
- s->out_format = FMT_MPEG1;
if (s->context_initialized)
ff_mpv_common_end(s);
--
2.40.1
_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
To unsubscribe, visit link above, or email
ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".
^ permalink raw reply [flat|nested] 75+ messages in thread
* [FFmpeg-devel] [PATCH v2 52/71] avformat/riff: Declare VCR2 to be MPEG-2
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (49 preceding siblings ...)
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 51/71] avcodec/mpeg12dec: Set out_format only once Andreas Rheinhardt
@ 2024-05-11 20:51 ` Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 53/71] avcodec/mpegvideo_dec: Add close function for mpegvideo-decoders Andreas Rheinhardt
` (19 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:51 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
This brings it in line with mpeg12dec.c.
(This entry was added before the MPEG2VIDEO codec ID
existed.)
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavformat/riff.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/libavformat/riff.c b/libavformat/riff.c
index 306dc3b47a..ca81b4837a 100644
--- a/libavformat/riff.c
+++ b/libavformat/riff.c
@@ -161,7 +161,7 @@ const AVCodecTag ff_codec_bmp_tags[] = {
{ AV_CODEC_ID_MPEG2VIDEO, MKTAG('M', 'P', 'E', 'G') },
{ AV_CODEC_ID_MPEG1VIDEO, MKTAG('P', 'I', 'M', '1') },
{ AV_CODEC_ID_MPEG2VIDEO, MKTAG('P', 'I', 'M', '2') },
- { AV_CODEC_ID_MPEG1VIDEO, MKTAG('V', 'C', 'R', '2') },
+ { AV_CODEC_ID_MPEG2VIDEO, MKTAG('V', 'C', 'R', '2') },
{ AV_CODEC_ID_MPEG1VIDEO, MKTAG( 1 , 0 , 0 , 16) },
{ AV_CODEC_ID_MPEG2VIDEO, MKTAG( 2 , 0 , 0 , 16) },
{ AV_CODEC_ID_MPEG4, MKTAG( 4 , 0 , 0 , 16) },
--
2.40.1
* [FFmpeg-devel] [PATCH v2 53/71] avcodec/mpegvideo_dec: Add close function for mpegvideo-decoders
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (50 preceding siblings ...)
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 52/71] avformat/riff: Declare VCR2 to be MPEG-2 Andreas Rheinhardt
@ 2024-05-11 20:51 ` Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 54/71] avcodec/mpegpicture: Make MPVPicture refcounted Andreas Rheinhardt
` (18 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:51 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
Currently identical to the H.261 and H.263 close functions
(which it replaces). It will be extended in future commits.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/flvdec.c | 4 ++--
libavcodec/h261dec.c | 11 +----------
libavcodec/h263dec.c | 12 ++----------
libavcodec/h263dec.h | 1 -
libavcodec/intelh263dec.c | 2 +-
libavcodec/mpeg12dec.c | 3 +--
libavcodec/mpeg4videodec.c | 2 +-
libavcodec/mpegvideo_dec.c | 8 ++++++++
libavcodec/mpegvideodec.h | 1 +
libavcodec/msmpeg4dec.c | 9 +++++----
libavcodec/rv10.c | 12 ++----------
libavcodec/rv30.c | 1 +
libavcodec/rv34.c | 8 +++-----
libavcodec/rv40.c | 1 +
libavcodec/vc1dec.c | 18 ++++++++++++------
libavcodec/wmv2dec.c | 3 ++-
16 files changed, 43 insertions(+), 53 deletions(-)
diff --git a/libavcodec/flvdec.c b/libavcodec/flvdec.c
index 8baaed06a8..1bb1b12917 100644
--- a/libavcodec/flvdec.c
+++ b/libavcodec/flvdec.c
@@ -24,7 +24,7 @@
#include "flvdec.h"
#include "h263dec.h"
#include "mpegvideo.h"
-#include "mpegvideodata.h"
+#include "mpegvideodec.h"
int ff_flv_decode_picture_header(MpegEncContext *s)
{
@@ -118,8 +118,8 @@ const FFCodec ff_flv_decoder = {
.p.id = AV_CODEC_ID_FLV1,
.priv_data_size = sizeof(MpegEncContext),
.init = ff_h263_decode_init,
- .close = ff_h263_decode_end,
FF_CODEC_DECODE_CB(ff_h263_decode_frame),
+ .close = ff_mpv_decode_close,
.p.capabilities = AV_CODEC_CAP_DRAW_HORIZ_BAND | AV_CODEC_CAP_DR1,
.caps_internal = FF_CODEC_CAP_SKIP_FRAME_FILL_PARAM,
.p.max_lowres = 3,
diff --git a/libavcodec/h261dec.c b/libavcodec/h261dec.c
index 9acfd984ee..d8d0dcf3cf 100644
--- a/libavcodec/h261dec.c
+++ b/libavcodec/h261dec.c
@@ -660,15 +660,6 @@ static int h261_decode_frame(AVCodecContext *avctx, AVFrame *pict,
return get_consumed_bytes(s, buf_size);
}
-static av_cold int h261_decode_end(AVCodecContext *avctx)
-{
- H261DecContext *const h = avctx->priv_data;
- MpegEncContext *s = &h->s;
-
- ff_mpv_common_end(s);
- return 0;
-}
-
const FFCodec ff_h261_decoder = {
.p.name = "h261",
CODEC_LONG_NAME("H.261"),
@@ -676,8 +667,8 @@ const FFCodec ff_h261_decoder = {
.p.id = AV_CODEC_ID_H261,
.priv_data_size = sizeof(H261DecContext),
.init = h261_decode_init,
- .close = h261_decode_end,
FF_CODEC_DECODE_CB(h261_decode_frame),
+ .close = ff_mpv_decode_close,
.p.capabilities = AV_CODEC_CAP_DR1,
.p.max_lowres = 3,
};
diff --git a/libavcodec/h263dec.c b/libavcodec/h263dec.c
index 4fe4a30000..b8db4ffc98 100644
--- a/libavcodec/h263dec.c
+++ b/libavcodec/h263dec.c
@@ -159,14 +159,6 @@ av_cold int ff_h263_decode_init(AVCodecContext *avctx)
return 0;
}
-av_cold int ff_h263_decode_end(AVCodecContext *avctx)
-{
- MpegEncContext *s = avctx->priv_data;
-
- ff_mpv_common_end(s);
- return 0;
-}
-
/**
* Return the number of bytes consumed for building the current frame.
*/
@@ -702,8 +694,8 @@ const FFCodec ff_h263_decoder = {
.p.id = AV_CODEC_ID_H263,
.priv_data_size = sizeof(MpegEncContext),
.init = ff_h263_decode_init,
- .close = ff_h263_decode_end,
FF_CODEC_DECODE_CB(ff_h263_decode_frame),
+ .close = ff_mpv_decode_close,
.p.capabilities = AV_CODEC_CAP_DRAW_HORIZ_BAND | AV_CODEC_CAP_DR1 |
AV_CODEC_CAP_DELAY,
.caps_internal = FF_CODEC_CAP_SKIP_FRAME_FILL_PARAM,
@@ -719,8 +711,8 @@ const FFCodec ff_h263p_decoder = {
.p.id = AV_CODEC_ID_H263P,
.priv_data_size = sizeof(MpegEncContext),
.init = ff_h263_decode_init,
- .close = ff_h263_decode_end,
FF_CODEC_DECODE_CB(ff_h263_decode_frame),
+ .close = ff_mpv_decode_close,
.p.capabilities = AV_CODEC_CAP_DRAW_HORIZ_BAND | AV_CODEC_CAP_DR1 |
AV_CODEC_CAP_DELAY,
.caps_internal = FF_CODEC_CAP_SKIP_FRAME_FILL_PARAM,
diff --git a/libavcodec/h263dec.h b/libavcodec/h263dec.h
index a01acc0834..633d4aa577 100644
--- a/libavcodec/h263dec.h
+++ b/libavcodec/h263dec.h
@@ -47,7 +47,6 @@ int ff_h263_decode_motion(MpegEncContext * s, int pred, int f_code);
int ff_h263_decode_init(AVCodecContext *avctx);
int ff_h263_decode_frame(AVCodecContext *avctx, AVFrame *frame,
int *got_frame, AVPacket *avpkt);
-int ff_h263_decode_end(AVCodecContext *avctx);
void ff_h263_decode_init_vlc(void);
int ff_h263_decode_picture_header(MpegEncContext *s);
int ff_h263_decode_gob_header(MpegEncContext *s);
diff --git a/libavcodec/intelh263dec.c b/libavcodec/intelh263dec.c
index 5d34892ef7..d4051c36f1 100644
--- a/libavcodec/intelh263dec.c
+++ b/libavcodec/intelh263dec.c
@@ -132,8 +132,8 @@ const FFCodec ff_h263i_decoder = {
.p.id = AV_CODEC_ID_H263I,
.priv_data_size = sizeof(MpegEncContext),
.init = ff_h263_decode_init,
- .close = ff_h263_decode_end,
FF_CODEC_DECODE_CB(ff_h263_decode_frame),
+ .close = ff_mpv_decode_close,
.p.capabilities = AV_CODEC_CAP_DRAW_HORIZ_BAND | AV_CODEC_CAP_DR1,
.caps_internal = FF_CODEC_CAP_SKIP_FRAME_FILL_PARAM,
};
diff --git a/libavcodec/mpeg12dec.c b/libavcodec/mpeg12dec.c
index 0e2012b324..cf8493ba51 100644
--- a/libavcodec/mpeg12dec.c
+++ b/libavcodec/mpeg12dec.c
@@ -2595,9 +2595,8 @@ static av_cold int mpeg_decode_end(AVCodecContext *avctx)
{
Mpeg1Context *s = avctx->priv_data;
- ff_mpv_common_end(&s->mpeg_enc_ctx);
av_buffer_unref(&s->a53_buf_ref);
- return 0;
+ return ff_mpv_decode_close(avctx);
}
const FFCodec ff_mpeg1video_decoder = {
diff --git a/libavcodec/mpeg4videodec.c b/libavcodec/mpeg4videodec.c
index 6cdab62b46..4dcb967f7d 100644
--- a/libavcodec/mpeg4videodec.c
+++ b/libavcodec/mpeg4videodec.c
@@ -3860,8 +3860,8 @@ const FFCodec ff_mpeg4_decoder = {
.p.id = AV_CODEC_ID_MPEG4,
.priv_data_size = sizeof(Mpeg4DecContext),
.init = decode_init,
- .close = ff_h263_decode_end,
FF_CODEC_DECODE_CB(ff_h263_decode_frame),
+ .close = ff_mpv_decode_close,
.p.capabilities = AV_CODEC_CAP_DRAW_HORIZ_BAND | AV_CODEC_CAP_DR1 |
AV_CODEC_CAP_DELAY | AV_CODEC_CAP_FRAME_THREADS,
.caps_internal = FF_CODEC_CAP_SKIP_FRAME_FILL_PARAM,
diff --git a/libavcodec/mpegvideo_dec.c b/libavcodec/mpegvideo_dec.c
index 71a6c0ad67..c8952f4831 100644
--- a/libavcodec/mpegvideo_dec.c
+++ b/libavcodec/mpegvideo_dec.c
@@ -173,6 +173,14 @@ do {\
return 0;
}
+int ff_mpv_decode_close(AVCodecContext *avctx)
+{
+ MpegEncContext *s = avctx->priv_data;
+
+ ff_mpv_common_end(s);
+ return 0;
+}
+
int ff_mpv_common_frame_size_change(MpegEncContext *s)
{
int err = 0;
diff --git a/libavcodec/mpegvideodec.h b/libavcodec/mpegvideodec.h
index 4259d5a02d..35e9081d2c 100644
--- a/libavcodec/mpegvideodec.h
+++ b/libavcodec/mpegvideodec.h
@@ -63,6 +63,7 @@ int ff_mpv_export_qp_table(const MpegEncContext *s, AVFrame *f,
int ff_mpeg_update_thread_context(AVCodecContext *dst, const AVCodecContext *src);
void ff_mpeg_draw_horiz_band(MpegEncContext *s, int y, int h);
void ff_mpeg_flush(AVCodecContext *avctx);
+int ff_mpv_decode_close(AVCodecContext *avctx);
void ff_print_debug_info(const MpegEncContext *s, const MPVPicture *p, AVFrame *pict);
diff --git a/libavcodec/msmpeg4dec.c b/libavcodec/msmpeg4dec.c
index a7b3fc4603..20d735a152 100644
--- a/libavcodec/msmpeg4dec.c
+++ b/libavcodec/msmpeg4dec.c
@@ -28,6 +28,7 @@
#include "codec_internal.h"
#include "mpegutils.h"
#include "mpegvideo.h"
+#include "mpegvideodec.h"
#include "msmpeg4.h"
#include "msmpeg4dec.h"
#include "libavutil/imgutils.h"
@@ -848,8 +849,8 @@ const FFCodec ff_msmpeg4v1_decoder = {
.p.id = AV_CODEC_ID_MSMPEG4V1,
.priv_data_size = sizeof(MpegEncContext),
.init = ff_msmpeg4_decode_init,
- .close = ff_h263_decode_end,
FF_CODEC_DECODE_CB(ff_h263_decode_frame),
+ .close = ff_mpv_decode_close,
.p.capabilities = AV_CODEC_CAP_DRAW_HORIZ_BAND | AV_CODEC_CAP_DR1,
.caps_internal = FF_CODEC_CAP_SKIP_FRAME_FILL_PARAM,
.p.max_lowres = 3,
@@ -862,8 +863,8 @@ const FFCodec ff_msmpeg4v2_decoder = {
.p.id = AV_CODEC_ID_MSMPEG4V2,
.priv_data_size = sizeof(MpegEncContext),
.init = ff_msmpeg4_decode_init,
- .close = ff_h263_decode_end,
FF_CODEC_DECODE_CB(ff_h263_decode_frame),
+ .close = ff_mpv_decode_close,
.p.capabilities = AV_CODEC_CAP_DRAW_HORIZ_BAND | AV_CODEC_CAP_DR1,
.caps_internal = FF_CODEC_CAP_SKIP_FRAME_FILL_PARAM,
.p.max_lowres = 3,
@@ -876,8 +877,8 @@ const FFCodec ff_msmpeg4v3_decoder = {
.p.id = AV_CODEC_ID_MSMPEG4V3,
.priv_data_size = sizeof(MpegEncContext),
.init = ff_msmpeg4_decode_init,
- .close = ff_h263_decode_end,
FF_CODEC_DECODE_CB(ff_h263_decode_frame),
+ .close = ff_mpv_decode_close,
.p.capabilities = AV_CODEC_CAP_DRAW_HORIZ_BAND | AV_CODEC_CAP_DR1,
.caps_internal = FF_CODEC_CAP_SKIP_FRAME_FILL_PARAM,
.p.max_lowres = 3,
@@ -890,8 +891,8 @@ const FFCodec ff_wmv1_decoder = {
.p.id = AV_CODEC_ID_WMV1,
.priv_data_size = sizeof(MpegEncContext),
.init = ff_msmpeg4_decode_init,
- .close = ff_h263_decode_end,
FF_CODEC_DECODE_CB(ff_h263_decode_frame),
+ .close = ff_mpv_decode_close,
.p.capabilities = AV_CODEC_CAP_DRAW_HORIZ_BAND | AV_CODEC_CAP_DR1,
.caps_internal = FF_CODEC_CAP_SKIP_FRAME_FILL_PARAM,
.p.max_lowres = 3,
diff --git a/libavcodec/rv10.c b/libavcodec/rv10.c
index 201e7ed6d0..6ece6a5a25 100644
--- a/libavcodec/rv10.c
+++ b/libavcodec/rv10.c
@@ -416,14 +416,6 @@ static av_cold int rv10_decode_init(AVCodecContext *avctx)
return 0;
}
-static av_cold int rv10_decode_end(AVCodecContext *avctx)
-{
- MpegEncContext *s = avctx->priv_data;
-
- ff_mpv_common_end(s);
- return 0;
-}
-
static int rv10_decode_packet(AVCodecContext *avctx, const uint8_t *buf,
int buf_size, int buf_size2, int whole_size)
{
@@ -666,8 +658,8 @@ const FFCodec ff_rv10_decoder = {
.p.id = AV_CODEC_ID_RV10,
.priv_data_size = sizeof(RVDecContext),
.init = rv10_decode_init,
- .close = rv10_decode_end,
FF_CODEC_DECODE_CB(rv10_decode_frame),
+ .close = ff_mpv_decode_close,
.p.capabilities = AV_CODEC_CAP_DR1,
.p.max_lowres = 3,
};
@@ -679,8 +671,8 @@ const FFCodec ff_rv20_decoder = {
.p.id = AV_CODEC_ID_RV20,
.priv_data_size = sizeof(RVDecContext),
.init = rv10_decode_init,
- .close = rv10_decode_end,
FF_CODEC_DECODE_CB(rv10_decode_frame),
+ .close = ff_mpv_decode_close,
.p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_DELAY,
.flush = ff_mpeg_flush,
.p.max_lowres = 3,
diff --git a/libavcodec/rv30.c b/libavcodec/rv30.c
index 9c8bb966e9..5e1dd01aa1 100644
--- a/libavcodec/rv30.c
+++ b/libavcodec/rv30.c
@@ -302,6 +302,7 @@ const FFCodec ff_rv30_decoder = {
FF_CODEC_DECODE_CB(ff_rv34_decode_frame),
.p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_DELAY |
AV_CODEC_CAP_FRAME_THREADS,
+ .caps_internal = FF_CODEC_CAP_INIT_CLEANUP,
.flush = ff_mpeg_flush,
UPDATE_THREAD_CONTEXT(ff_rv34_decode_update_thread_context),
};
diff --git a/libavcodec/rv34.c b/libavcodec/rv34.c
index d935c261b5..90296d7de3 100644
--- a/libavcodec/rv34.c
+++ b/libavcodec/rv34.c
@@ -1520,10 +1520,9 @@ av_cold int ff_rv34_decode_init(AVCodecContext *avctx)
ff_h264_pred_init(&r->h, AV_CODEC_ID_RV40, 8, 1);
- if ((ret = rv34_decoder_alloc(r)) < 0) {
- ff_mpv_common_end(&r->s);
+ ret = rv34_decoder_alloc(r);
+ if (ret < 0)
return ret;
- }
ff_thread_once(&init_static_once, rv34_init_tables);
@@ -1821,8 +1820,7 @@ av_cold int ff_rv34_decode_end(AVCodecContext *avctx)
{
RV34DecContext *r = avctx->priv_data;
- ff_mpv_common_end(&r->s);
rv34_decoder_free(r);
- return 0;
+ return ff_mpv_decode_close(avctx);
}
diff --git a/libavcodec/rv40.c b/libavcodec/rv40.c
index 536bbc9623..0a5136d129 100644
--- a/libavcodec/rv40.c
+++ b/libavcodec/rv40.c
@@ -580,6 +580,7 @@ const FFCodec ff_rv40_decoder = {
FF_CODEC_DECODE_CB(ff_rv34_decode_frame),
.p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_DELAY |
AV_CODEC_CAP_FRAME_THREADS,
+ .caps_internal = FF_CODEC_CAP_INIT_CLEANUP,
.flush = ff_mpeg_flush,
UPDATE_THREAD_CONTEXT(ff_rv34_decode_update_thread_context),
};
diff --git a/libavcodec/vc1dec.c b/libavcodec/vc1dec.c
index 9b912bec1f..a20facc899 100644
--- a/libavcodec/vc1dec.c
+++ b/libavcodec/vc1dec.c
@@ -449,6 +449,8 @@ static enum AVPixelFormat vc1_get_format(AVCodecContext *avctx)
return ff_get_format(avctx, vc1_hwaccel_pixfmt_list_420);
}
+static void vc1_decode_reset(AVCodecContext *avctx);
+
av_cold int ff_vc1_decode_init(AVCodecContext *avctx)
{
VC1Context *const v = avctx->priv_data;
@@ -477,7 +479,7 @@ av_cold int ff_vc1_decode_init(AVCodecContext *avctx)
ret = vc1_decode_init_alloc_tables(v);
if (ret < 0) {
- ff_vc1_decode_end(avctx);
+ vc1_decode_reset(avctx);
return ret;
}
return 0;
@@ -774,10 +776,7 @@ static av_cold int vc1_decode_init(AVCodecContext *avctx)
return 0;
}
-/** Close a VC1/WMV3 decoder
- * @warning Initial try at using MpegEncContext stuff
- */
-av_cold int ff_vc1_decode_end(AVCodecContext *avctx)
+static av_cold void vc1_decode_reset(AVCodecContext *avctx)
{
VC1Context *v = avctx->priv_data;
int i;
@@ -803,9 +802,16 @@ av_cold int ff_vc1_decode_end(AVCodecContext *avctx)
av_freep(&v->is_intra_base); // FIXME use v->mb_type[]
av_freep(&v->luma_mv_base);
ff_intrax8_common_end(&v->x8);
- return 0;
}
+/**
+ * Close a MSS2/VC1/WMV3 decoder
+ */
+av_cold int ff_vc1_decode_end(AVCodecContext *avctx)
+{
+ vc1_decode_reset(avctx);
+ return ff_mpv_decode_close(avctx);
+}
/** Decode a VC1/WMV3 frame
* @todo TODO: Handle VC-1 IDUs (Transport level?)
diff --git a/libavcodec/wmv2dec.c b/libavcodec/wmv2dec.c
index bb3829dcd6..5c91006169 100644
--- a/libavcodec/wmv2dec.c
+++ b/libavcodec/wmv2dec.c
@@ -27,6 +27,7 @@
#include "mathops.h"
#include "mpegutils.h"
#include "mpegvideo.h"
+#include "mpegvideodec.h"
#include "msmpeg4.h"
#include "msmpeg4_vc1_data.h"
#include "msmpeg4dec.h"
@@ -584,7 +585,7 @@ static av_cold int wmv2_decode_end(AVCodecContext *avctx)
WMV2DecContext *const w = avctx->priv_data;
ff_intrax8_common_end(&w->x8);
- return ff_h263_decode_end(avctx);
+ return ff_mpv_decode_close(avctx);
}
const FFCodec ff_wmv2_decoder = {
--
2.40.1
* [FFmpeg-devel] [PATCH v2 54/71] avcodec/mpegpicture: Make MPVPicture refcounted
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (51 preceding siblings ...)
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 53/71] avcodec/mpegvideo_dec: Add close function for mpegvideo-decoders Andreas Rheinhardt
@ 2024-05-11 20:51 ` Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 55/71] avcodec/mpeg4videoenc: Avoid branch for writing stuffing Andreas Rheinhardt
` (17 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:51 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
Up until now, an initialized MpegEncContext had an array of
MPVPictures (way more than were ever needed), and the MPVPicture*
pointers contained in the MPVWorkPictures, as well as those in the
input_picture and reordered_input_picture arrays (used by the
encoder), pointed into this array. Several of these pointers could
point to the same slot, and because there was no reference counting
involved, one had to check for aliasing before unreferencing.
Furthermore, given that these pointers were not ownership pointers,
they were often simply reset without unreferencing
the slot (happened e.g. for the RV30 and RV40 decoders), or
moved without resetting the source pointer (happened
for the encoders, where the entries in the input_picture
and reordered_input_picture arrays were not reset).
Actually releasing these pictures was instead performed by looping
over the whole array and checking which of the entries needed
to be kept. Given that the array had far too many slots (36),
this meant that more than 30 MPVPictures were unnecessarily
unreferenced in every ff_mpv_frame_start(); something similar
happened for the encoder.
This commit changes this by making the MPVPictures refcounted
via the RefStruct API. The MPVPictures themselves are part of a pool
so that this does not entail constant allocations; instead,
the amount of allocations actually goes down, because the
earlier code used such a large array of MPVPictures (36 entries) and
allocated an AVFrame for every one of these on every
ff_mpv_common_init(). In fact, the pool is only freed when closing
the codec, so that reinitializations don't lead to new allocations
(this avoids having to sync the pool in update_thread_context).
Making MPVPictures refcounted also has another key benefit:
It makes it possible to directly share them across threads
(when using frame-threaded decoding), eliminating ugly code
with underlying av_frame_ref()'s; sharing these pictures
can't fail any more.
The pool is allocated in ff_mpv_decode_init() for decoders,
which therefore can fail now. This, together with the fact that
the pool is not unreferenced in ff_mpv_common_end(), also made it
necessary to mark several mpegvideo decoders with the
FF_CODEC_CAP_INIT_CLEANUP flag.
This also means that there is no good reason any more for
ff_mpv_common_frame_size_change() to exist.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
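The pool-plus-refcount scheme described in the commit message can be sketched in miniature as follows. The names and structure here are purely illustrative and deliberately simpler than FFmpeg's actual RefStruct API (which uses atomic refcounts and opaque callbacks); this only shows why shared ownership removes the aliasing checks:

```c
#include <assert.h>
#include <stdlib.h>

/* Minimal refcounted picture pool, illustrative names only. */

typedef struct PoolEntry {
    int refcount;            /* 0 means "free, available for reuse" */
    struct PoolEntry *next;  /* freelist link while unused */
    int payload;             /* stands in for the picture data */
} PoolEntry;

typedef struct Pool {
    PoolEntry *free_list;
} Pool;

/* Get an entry with refcount 1, reusing a recycled one if possible. */
static PoolEntry *pool_get(Pool *pool)
{
    PoolEntry *e = pool->free_list;
    if (e)
        pool->free_list = e->next;
    else
        e = calloc(1, sizeof(*e));
    if (!e)
        return NULL;
    e->refcount = 1;
    return e;
}

/* Acquire an additional reference; sharing cannot fail. */
static PoolEntry *pool_ref(PoolEntry *e)
{
    e->refcount++;
    return e;
}

/* Drop one reference; the entry is recycled when the count hits zero. */
static void pool_unref(Pool *pool, PoolEntry **ep)
{
    PoolEntry *e = *ep;
    *ep = NULL;
    if (e && --e->refcount == 0) {
        e->next = pool->free_list;
        pool->free_list = e;
    }
}

/* Free all recycled entries; done once when closing the codec. */
static void pool_uninit(Pool *pool)
{
    while (pool->free_list) {
        PoolEntry *e = pool->free_list;
        pool->free_list = e->next;
        free(e);
    }
}
```

With this shape, cur_pic, last_pic and next_pic each hold their own reference, so releasing one slot never requires aliasing checks against the others, and recycling avoids per-frame allocations.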
libavcodec/flvdec.c | 3 +-
libavcodec/h261dec.c | 5 +-
libavcodec/h263dec.c | 14 ++--
libavcodec/intelh263dec.c | 3 +-
libavcodec/mjpegenc.c | 2 +-
libavcodec/mpeg12dec.c | 11 ++-
libavcodec/mpeg4videodec.c | 3 +-
libavcodec/mpegpicture.c | 157 ++++++++++++-------------------------
libavcodec/mpegpicture.h | 14 ++--
libavcodec/mpegvideo.c | 21 +----
libavcodec/mpegvideo.h | 2 +-
libavcodec/mpegvideo_dec.c | 72 +++++------------
libavcodec/mpegvideo_enc.c | 74 +++++++++--------
libavcodec/mpegvideodec.h | 4 +-
libavcodec/msmpeg4dec.c | 12 ++-
libavcodec/rv10.c | 8 +-
libavcodec/rv34.c | 6 +-
libavcodec/svq1enc.c | 3 -
libavcodec/vc1dec.c | 8 +-
19 files changed, 175 insertions(+), 247 deletions(-)
diff --git a/libavcodec/flvdec.c b/libavcodec/flvdec.c
index 1bb1b12917..f4bfd99417 100644
--- a/libavcodec/flvdec.c
+++ b/libavcodec/flvdec.c
@@ -121,6 +121,7 @@ const FFCodec ff_flv_decoder = {
FF_CODEC_DECODE_CB(ff_h263_decode_frame),
.close = ff_mpv_decode_close,
.p.capabilities = AV_CODEC_CAP_DRAW_HORIZ_BAND | AV_CODEC_CAP_DR1,
- .caps_internal = FF_CODEC_CAP_SKIP_FRAME_FILL_PARAM,
+ .caps_internal = FF_CODEC_CAP_INIT_CLEANUP |
+ FF_CODEC_CAP_SKIP_FRAME_FILL_PARAM,
.p.max_lowres = 3,
};
diff --git a/libavcodec/h261dec.c b/libavcodec/h261dec.c
index d8d0dcf3cf..392f1aef1d 100644
--- a/libavcodec/h261dec.c
+++ b/libavcodec/h261dec.c
@@ -84,10 +84,13 @@ static av_cold int h261_decode_init(AVCodecContext *avctx)
static AVOnce init_static_once = AV_ONCE_INIT;
H261DecContext *const h = avctx->priv_data;
MpegEncContext *const s = &h->s;
+ int ret;
s->private_ctx = &h->common;
// set defaults
- ff_mpv_decode_init(s, avctx);
+ ret = ff_mpv_decode_init(s, avctx);
+ if (ret < 0)
+ return ret;
s->out_format = FMT_H261;
s->low_delay = 1;
diff --git a/libavcodec/h263dec.c b/libavcodec/h263dec.c
index b8db4ffc98..b9762be9c9 100644
--- a/libavcodec/h263dec.c
+++ b/libavcodec/h263dec.c
@@ -95,7 +95,9 @@ av_cold int ff_h263_decode_init(AVCodecContext *avctx)
s->out_format = FMT_H263;
// set defaults
- ff_mpv_decode_init(s, avctx);
+ ret = ff_mpv_decode_init(s, avctx);
+ if (ret < 0)
+ return ret;
s->quant_precision = 5;
s->decode_mb = ff_h263_decode_mb;
@@ -427,7 +429,7 @@ int ff_h263_decode_frame(AVCodecContext *avctx, AVFrame *pict,
if (s->low_delay == 0 && s->next_pic.ptr) {
if ((ret = av_frame_ref(pict, s->next_pic.ptr->f)) < 0)
return ret;
- s->next_pic.ptr = NULL;
+ ff_mpv_unref_picture(&s->next_pic);
*got_frame = 1;
} else if (s->skipped_last_frame && s->cur_pic.ptr) {
@@ -439,7 +441,7 @@ int ff_h263_decode_frame(AVCodecContext *avctx, AVFrame *pict,
* returned picture would be reused */
if ((ret = ff_decode_frame_props(avctx, pict)) < 0)
return ret;
- s->cur_pic.ptr = NULL;
+ ff_mpv_unref_picture(&s->cur_pic);
*got_frame = 1;
}
@@ -698,7 +700,8 @@ const FFCodec ff_h263_decoder = {
.close = ff_mpv_decode_close,
.p.capabilities = AV_CODEC_CAP_DRAW_HORIZ_BAND | AV_CODEC_CAP_DR1 |
AV_CODEC_CAP_DELAY,
- .caps_internal = FF_CODEC_CAP_SKIP_FRAME_FILL_PARAM,
+ .caps_internal = FF_CODEC_CAP_INIT_CLEANUP |
+ FF_CODEC_CAP_SKIP_FRAME_FILL_PARAM,
.flush = ff_mpeg_flush,
.p.max_lowres = 3,
.hw_configs = h263_hw_config_list,
@@ -715,7 +718,8 @@ const FFCodec ff_h263p_decoder = {
.close = ff_mpv_decode_close,
.p.capabilities = AV_CODEC_CAP_DRAW_HORIZ_BAND | AV_CODEC_CAP_DR1 |
AV_CODEC_CAP_DELAY,
- .caps_internal = FF_CODEC_CAP_SKIP_FRAME_FILL_PARAM,
+ .caps_internal = FF_CODEC_CAP_INIT_CLEANUP |
+ FF_CODEC_CAP_SKIP_FRAME_FILL_PARAM,
.flush = ff_mpeg_flush,
.p.max_lowres = 3,
.hw_configs = h263_hw_config_list,
diff --git a/libavcodec/intelh263dec.c b/libavcodec/intelh263dec.c
index d4051c36f1..4efae7938c 100644
--- a/libavcodec/intelh263dec.c
+++ b/libavcodec/intelh263dec.c
@@ -135,5 +135,6 @@ const FFCodec ff_h263i_decoder = {
FF_CODEC_DECODE_CB(ff_h263_decode_frame),
.close = ff_mpv_decode_close,
.p.capabilities = AV_CODEC_CAP_DRAW_HORIZ_BAND | AV_CODEC_CAP_DR1,
- .caps_internal = FF_CODEC_CAP_SKIP_FRAME_FILL_PARAM,
+ .caps_internal = FF_CODEC_CAP_INIT_CLEANUP |
+ FF_CODEC_CAP_SKIP_FRAME_FILL_PARAM,
};
diff --git a/libavcodec/mjpegenc.c b/libavcodec/mjpegenc.c
index b6de50edce..9d4c3a4f41 100644
--- a/libavcodec/mjpegenc.c
+++ b/libavcodec/mjpegenc.c
@@ -80,7 +80,7 @@ static av_cold void init_uni_ac_vlc(const uint8_t huff_size_ac[256],
static void mjpeg_encode_picture_header(MpegEncContext *s)
{
- ff_mjpeg_encode_picture_header(s->avctx, &s->pb, s->picture->f, s->mjpeg_ctx,
+ ff_mjpeg_encode_picture_header(s->avctx, &s->pb, s->cur_pic.ptr->f, s->mjpeg_ctx,
s->intra_scantable.permutated, 0,
s->intra_matrix, s->chroma_intra_matrix,
s->slice_context_count > 1);
diff --git a/libavcodec/mpeg12dec.c b/libavcodec/mpeg12dec.c
index cf8493ba51..0d5540fd2f 100644
--- a/libavcodec/mpeg12dec.c
+++ b/libavcodec/mpeg12dec.c
@@ -793,13 +793,16 @@ static av_cold int mpeg_decode_init(AVCodecContext *avctx)
{
Mpeg1Context *s = avctx->priv_data;
MpegEncContext *s2 = &s->mpeg_enc_ctx;
+ int ret;
s2->out_format = FMT_MPEG1;
if ( avctx->codec_tag != AV_RL32("VCR2")
&& avctx->codec_tag != AV_RL32("BW10"))
avctx->coded_width = avctx->coded_height = 0; // do not trust dimensions from input
- ff_mpv_decode_init(s2, avctx);
+ ret = ff_mpv_decode_init(s2, avctx);
+ if (ret < 0)
+ return ret;
ff_mpeg12_init_vlcs();
@@ -2529,7 +2532,7 @@ static int mpeg_decode_frame(AVCodecContext *avctx, AVFrame *picture,
if (ret < 0)
return ret;
- s2->next_pic.ptr = NULL;
+ ff_mpv_unref_picture(&s2->next_pic);
*got_output = 1;
}
@@ -2552,14 +2555,14 @@ static int mpeg_decode_frame(AVCodecContext *avctx, AVFrame *picture,
}
s->extradata_decoded = 1;
if (ret < 0 && (avctx->err_recognition & AV_EF_EXPLODE)) {
- s2->cur_pic.ptr = NULL;
+ ff_mpv_unref_picture(&s2->cur_pic);
return ret;
}
}
ret = decode_chunks(avctx, picture, got_output, buf, buf_size);
if (ret<0 || *got_output) {
- s2->cur_pic.ptr = NULL;
+ ff_mpv_unref_picture(&s2->cur_pic);
if (s->timecode_frame_start != -1 && *got_output) {
char tcbuf[AV_TIMECODE_STR_SIZE];
diff --git a/libavcodec/mpeg4videodec.c b/libavcodec/mpeg4videodec.c
index 4dcb967f7d..f4d83dc2aa 100644
--- a/libavcodec/mpeg4videodec.c
+++ b/libavcodec/mpeg4videodec.c
@@ -3864,7 +3864,8 @@ const FFCodec ff_mpeg4_decoder = {
.close = ff_mpv_decode_close,
.p.capabilities = AV_CODEC_CAP_DRAW_HORIZ_BAND | AV_CODEC_CAP_DR1 |
AV_CODEC_CAP_DELAY | AV_CODEC_CAP_FRAME_THREADS,
- .caps_internal = FF_CODEC_CAP_SKIP_FRAME_FILL_PARAM,
+ .caps_internal = FF_CODEC_CAP_INIT_CLEANUP |
+ FF_CODEC_CAP_SKIP_FRAME_FILL_PARAM,
.flush = ff_mpeg_flush,
.p.max_lowres = 3,
.p.profiles = NULL_IF_CONFIG_SMALL(ff_mpeg4_video_profiles),
diff --git a/libavcodec/mpegpicture.c b/libavcodec/mpegpicture.c
index 9d5a24523f..95255b893e 100644
--- a/libavcodec/mpegpicture.c
+++ b/libavcodec/mpegpicture.c
@@ -30,8 +30,14 @@
#include "refstruct.h"
#include "threadframe.h"
-static void av_noinline free_picture_tables(MPVPicture *pic)
+static void mpv_pic_reset(FFRefStructOpaque unused, void *obj)
{
+ MPVPicture *pic = obj;
+
+ ff_thread_release_ext_buffer(&pic->tf);
+
+ ff_refstruct_unref(&pic->hwaccel_picture_private);
+
ff_refstruct_unref(&pic->mbskip_table);
ff_refstruct_unref(&pic->qscale_table_base);
ff_refstruct_unref(&pic->mb_type_base);
@@ -39,16 +45,53 @@ static void av_noinline free_picture_tables(MPVPicture *pic)
for (int i = 0; i < 2; i++) {
ff_refstruct_unref(&pic->motion_val_base[i]);
ff_refstruct_unref(&pic->ref_index[i]);
+
+ pic->motion_val[i] = NULL;
}
+ pic->mb_type = NULL;
+ pic->qscale_table = NULL;
+
+ pic->mb_stride =
pic->mb_width =
pic->mb_height = 0;
+
+ pic->dummy = 0;
+ pic->field_picture = 0;
+ pic->b_frame_score = 0;
+ pic->reference = 0;
+ pic->shared = 0;
+ pic->display_picture_number = 0;
+ pic->coded_picture_number = 0;
+}
+
+static int av_cold mpv_pic_init(FFRefStructOpaque unused, void *obj)
+{
+ MPVPicture *pic = obj;
+
+ pic->f = av_frame_alloc();
+ if (!pic->f)
+ return AVERROR(ENOMEM);
+ pic->tf.f = pic->f;
+ return 0;
+}
+
+static void av_cold mpv_pic_free(FFRefStructOpaque unused, void *obj)
+{
+ MPVPicture *pic = obj;
+
+ av_frame_free(&pic->f);
+}
+
+av_cold FFRefStructPool *ff_mpv_alloc_pic_pool(void)
+{
+ return ff_refstruct_pool_alloc_ext(sizeof(MPVPicture), 0, NULL,
+ mpv_pic_init, mpv_pic_reset, mpv_pic_free, NULL);
}
void ff_mpv_unref_picture(MPVWorkPicture *pic)
{
- if (pic->ptr)
- ff_mpeg_unref_picture(pic->ptr);
+ ff_refstruct_unref(&pic->ptr);
memset(pic, 0, sizeof(*pic));
}
@@ -71,16 +114,18 @@ static void set_workpic_from_pic(MPVWorkPicture *wpic, const MPVPicture *pic)
void ff_mpv_replace_picture(MPVWorkPicture *dst, const MPVWorkPicture *src)
{
+ av_assert1(dst != src);
+ ff_refstruct_replace(&dst->ptr, src->ptr);
memcpy(dst, src, sizeof(*dst));
}
void ff_mpv_workpic_from_pic(MPVWorkPicture *wpic, MPVPicture *pic)
{
+ ff_refstruct_replace(&wpic->ptr, pic);
if (!pic) {
memset(wpic, 0, sizeof(*wpic));
return;
}
- wpic->ptr = pic;
set_workpic_from_pic(wpic, pic);
}
@@ -212,107 +257,3 @@ fail:
av_log(avctx, AV_LOG_ERROR, "Error allocating picture accessories.\n");
return ret;
}
-
-/**
- * Deallocate a picture; frees the picture tables in case they
- * need to be reallocated anyway.
- */
-void ff_mpeg_unref_picture(MPVPicture *pic)
-{
- pic->tf.f = pic->f;
- ff_thread_release_ext_buffer(&pic->tf);
-
- ff_refstruct_unref(&pic->hwaccel_picture_private);
-
- free_picture_tables(pic);
-
- pic->dummy = 0;
-
- pic->field_picture = 0;
- pic->b_frame_score = 0;
- pic->reference = 0;
- pic->shared = 0;
- pic->display_picture_number = 0;
- pic->coded_picture_number = 0;
-}
-
-static void update_picture_tables(MPVPicture *dst, const MPVPicture *src)
-{
- ff_refstruct_replace(&dst->mbskip_table, src->mbskip_table);
- ff_refstruct_replace(&dst->qscale_table_base, src->qscale_table_base);
- ff_refstruct_replace(&dst->mb_type_base, src->mb_type_base);
- for (int i = 0; i < 2; i++) {
- ff_refstruct_replace(&dst->motion_val_base[i], src->motion_val_base[i]);
- ff_refstruct_replace(&dst->ref_index[i], src->ref_index[i]);
- }
-
- dst->qscale_table = src->qscale_table;
- dst->mb_type = src->mb_type;
- for (int i = 0; i < 2; i++)
- dst->motion_val[i] = src->motion_val[i];
-
- dst->mb_width = src->mb_width;
- dst->mb_height = src->mb_height;
- dst->mb_stride = src->mb_stride;
-}
-
-int ff_mpeg_ref_picture(MPVPicture *dst, MPVPicture *src)
-{
- int ret;
-
- av_assert0(!dst->f->buf[0]);
- av_assert0(src->f->buf[0]);
-
- src->tf.f = src->f;
- dst->tf.f = dst->f;
- ret = ff_thread_ref_frame(&dst->tf, &src->tf);
- if (ret < 0)
- goto fail;
-
- update_picture_tables(dst, src);
-
- ff_refstruct_replace(&dst->hwaccel_picture_private,
- src->hwaccel_picture_private);
-
- dst->dummy = src->dummy;
- dst->field_picture = src->field_picture;
- dst->b_frame_score = src->b_frame_score;
- dst->reference = src->reference;
- dst->shared = src->shared;
- dst->display_picture_number = src->display_picture_number;
- dst->coded_picture_number = src->coded_picture_number;
-
- return 0;
-fail:
- ff_mpeg_unref_picture(dst);
- return ret;
-}
-
-int ff_find_unused_picture(AVCodecContext *avctx, MPVPicture *picture, int shared)
-{
- for (int i = 0; i < MAX_PICTURE_COUNT; i++)
- if (!picture[i].f->buf[0])
- return i;
-
- av_log(avctx, AV_LOG_FATAL,
- "Internal error, picture buffer overflow\n");
- /* We could return -1, but the codec would crash trying to draw into a
- * non-existing frame anyway. This is safer than waiting for a random crash.
- * Also the return of this is never useful, an encoder must only allocate
- * as much as allowed in the specification. This has no relationship to how
- * much libavcodec could allocate (and MAX_PICTURE_COUNT is always large
- * enough for such valid streams).
- * Plus, a decoder has to check stream validity and remove frames if too
- * many reference frames are around. Waiting for "OOM" is not correct at
- * all. Similarly, missing reference frames have to be replaced by
- * interpolated/MC frames, anything else is a bug in the codec ...
- */
- abort();
- return -1;
-}
-
-void av_cold ff_mpv_picture_free(MPVPicture *pic)
-{
- ff_mpeg_unref_picture(pic);
- av_frame_free(&pic->f);
-}
diff --git a/libavcodec/mpegpicture.h b/libavcodec/mpegpicture.h
index 7bf204dd5b..f6db4238b5 100644
--- a/libavcodec/mpegpicture.h
+++ b/libavcodec/mpegpicture.h
@@ -29,7 +29,6 @@
#include "threadframe.h"
#define MPV_MAX_PLANES 3
-#define MAX_PICTURE_COUNT 36
#define EDGE_WIDTH 16
typedef struct ScratchpadContext {
@@ -94,7 +93,7 @@ typedef struct MPVWorkPicture {
uint8_t *data[MPV_MAX_PLANES];
ptrdiff_t linesize[MPV_MAX_PLANES];
- MPVPicture *ptr;
+ MPVPicture *ptr; ///< RefStruct reference
int8_t *qscale_table;
@@ -109,6 +108,11 @@ typedef struct MPVWorkPicture {
int reference;
} MPVWorkPicture;
+/**
+ * Allocate a pool of MPVPictures.
+ */
+struct FFRefStructPool *ff_mpv_alloc_pic_pool(void);
+
/**
* Allocate an MPVPicture's accessories (but not the AVFrame's buffer itself)
* and set the MPVWorkPicture's fields.
@@ -129,14 +133,8 @@ int ff_mpv_pic_check_linesize(void *logctx, const struct AVFrame *f,
int ff_mpeg_framesize_alloc(AVCodecContext *avctx, MotionEstContext *me,
ScratchpadContext *sc, int linesize);
-int ff_mpeg_ref_picture(MPVPicture *dst, MPVPicture *src);
void ff_mpv_unref_picture(MPVWorkPicture *pic);
void ff_mpv_workpic_from_pic(MPVWorkPicture *wpic, MPVPicture *pic);
void ff_mpv_replace_picture(MPVWorkPicture *dst, const MPVWorkPicture *src);
-void ff_mpeg_unref_picture(MPVPicture *picture);
-
-void ff_mpv_picture_free(MPVPicture *pic);
-
-int ff_find_unused_picture(AVCodecContext *avctx, MPVPicture *picture, int shared);
#endif /* AVCODEC_MPEGPICTURE_H */
diff --git a/libavcodec/mpegvideo.c b/libavcodec/mpegvideo.c
index e062749291..42b4d7f395 100644
--- a/libavcodec/mpegvideo.c
+++ b/libavcodec/mpegvideo.c
@@ -701,7 +701,6 @@ static void clear_context(MpegEncContext *s)
s->bitstream_buffer = NULL;
s->allocated_bitstream_buffer_size = 0;
- s->picture = NULL;
s->p_field_mv_table_base = NULL;
for (int i = 0; i < 2; i++)
for (int j = 0; j < 2; j++)
@@ -726,10 +725,10 @@ static void clear_context(MpegEncContext *s)
*/
av_cold int ff_mpv_common_init(MpegEncContext *s)
{
- int i, ret;
int nb_slices = (HAVE_THREADS &&
s->avctx->active_thread_type & FF_THREAD_SLICE) ?
s->avctx->thread_count : 1;
+ int ret;
clear_context(s);
@@ -755,14 +754,6 @@ av_cold int ff_mpv_common_init(MpegEncContext *s)
if (ret)
return ret;
- if (!FF_ALLOCZ_TYPED_ARRAY(s->picture, MAX_PICTURE_COUNT))
- return AVERROR(ENOMEM);
- for (i = 0; i < MAX_PICTURE_COUNT; i++) {
- s->picture[i].f = av_frame_alloc();
- if (!s->picture[i].f)
- goto fail_nomem;
- }
-
if ((ret = ff_mpv_init_context_frame(s)))
goto fail;
@@ -789,8 +780,6 @@ av_cold int ff_mpv_common_init(MpegEncContext *s)
// }
return 0;
- fail_nomem:
- ret = AVERROR(ENOMEM);
fail:
ff_mpv_common_end(s);
return ret;
@@ -830,11 +819,9 @@ void ff_mpv_common_end(MpegEncContext *s)
av_freep(&s->bitstream_buffer);
s->allocated_bitstream_buffer_size = 0;
- if (s->picture) {
- for (int i = 0; i < MAX_PICTURE_COUNT; i++)
- ff_mpv_picture_free(&s->picture[i]);
- }
- av_freep(&s->picture);
+ ff_mpv_unref_picture(&s->last_pic);
+ ff_mpv_unref_picture(&s->cur_pic);
+ ff_mpv_unref_picture(&s->next_pic);
s->context_initialized = 0;
s->context_reinit = 0;
diff --git a/libavcodec/mpegvideo.h b/libavcodec/mpegvideo.h
index 3e2c98b039..4339a145aa 100644
--- a/libavcodec/mpegvideo.h
+++ b/libavcodec/mpegvideo.h
@@ -128,7 +128,7 @@ typedef struct MpegEncContext {
int mb_num; ///< number of MBs of a picture
ptrdiff_t linesize; ///< line size, in bytes, may be different from width
ptrdiff_t uvlinesize; ///< line size, for chroma in bytes, may be different from width
- MPVPicture *picture; ///< main picture buffer
+ struct FFRefStructPool *picture_pool; ///< Pool for MPVPictures
MPVPicture **input_picture;///< next pictures on display order for encoding
MPVPicture **reordered_input_picture; ///< pointer to the next pictures in coded order for encoding
diff --git a/libavcodec/mpegvideo_dec.c b/libavcodec/mpegvideo_dec.c
index c8952f4831..d596f94df3 100644
--- a/libavcodec/mpegvideo_dec.c
+++ b/libavcodec/mpegvideo_dec.c
@@ -38,11 +38,12 @@
#include "mpegvideo.h"
#include "mpegvideodec.h"
#include "mpeg4videodec.h"
+#include "refstruct.h"
#include "thread.h"
#include "threadframe.h"
#include "wmv2dec.h"
-void ff_mpv_decode_init(MpegEncContext *s, AVCodecContext *avctx)
+int ff_mpv_decode_init(MpegEncContext *s, AVCodecContext *avctx)
{
ff_mpv_common_defaults(s);
@@ -57,6 +58,14 @@ void ff_mpv_decode_init(MpegEncContext *s, AVCodecContext *avctx)
ff_mpv_idct_init(s);
ff_h264chroma_init(&s->h264chroma, 8); //for lowres
+
+ if (!s->picture_pool && // VC-1 can call this multiple times
+ ff_thread_sync_ref(avctx, offsetof(MpegEncContext, picture_pool))) {
+ s->picture_pool = ff_mpv_alloc_pic_pool();
+ if (!s->picture_pool)
+ return AVERROR(ENOMEM);
+ }
+ return 0;
}
int ff_mpeg_update_thread_context(AVCodecContext *dst,
@@ -103,26 +112,9 @@ int ff_mpeg_update_thread_context(AVCodecContext *dst,
s->coded_picture_number = s1->coded_picture_number;
s->picture_number = s1->picture_number;
- av_assert0(!s->picture || s->picture != s1->picture);
- if (s->picture)
- for (int i = 0; i < MAX_PICTURE_COUNT; i++) {
- ff_mpeg_unref_picture(&s->picture[i]);
- if (s1->picture && s1->picture[i].f->buf[0] &&
- (ret = ff_mpeg_ref_picture(&s->picture[i], &s1->picture[i])) < 0)
- return ret;
- }
-
-#define UPDATE_PICTURE(pic)\
-do {\
- if (s->picture && s1->picture && s1->pic.ptr && s1->pic.ptr->f->buf[0]) {\
- ff_mpv_workpic_from_pic(&s->pic, &s->picture[s1->pic.ptr - s1->picture]);\
- } else\
- ff_mpv_unref_picture(&s->pic);\
-} while (0)
-
- UPDATE_PICTURE(cur_pic);
- UPDATE_PICTURE(last_pic);
- UPDATE_PICTURE(next_pic);
+ ff_mpv_replace_picture(&s->cur_pic, &s1->cur_pic);
+ ff_mpv_replace_picture(&s->last_pic, &s1->last_pic);
+ ff_mpv_replace_picture(&s->next_pic, &s1->next_pic);
s->linesize = s1->linesize;
s->uvlinesize = s1->uvlinesize;
@@ -177,6 +169,7 @@ int ff_mpv_decode_close(AVCodecContext *avctx)
{
MpegEncContext *s = avctx->priv_data;
+ ff_refstruct_pool_uninit(&s->picture_pool);
ff_mpv_common_end(s);
return 0;
}
@@ -190,9 +183,9 @@ int ff_mpv_common_frame_size_change(MpegEncContext *s)
ff_mpv_free_context_frame(s);
- s->last_pic.ptr =
- s->next_pic.ptr =
- s->cur_pic.ptr = NULL;
+ ff_mpv_unref_picture(&s->last_pic);
+ ff_mpv_unref_picture(&s->next_pic);
+ ff_mpv_unref_picture(&s->cur_pic);
if ((s->width || s->height) &&
(err = av_image_check_size(s->width, s->height, 0, s->avctx)) < 0)
@@ -228,14 +221,12 @@ int ff_mpv_common_frame_size_change(MpegEncContext *s)
static int alloc_picture(MpegEncContext *s, MPVWorkPicture *dst, int reference)
{
AVCodecContext *avctx = s->avctx;
- int idx = ff_find_unused_picture(s->avctx, s->picture, 0);
- MPVPicture *pic;
+ MPVPicture *pic = ff_refstruct_pool_get(s->picture_pool);
int ret;
- if (idx < 0)
- return idx;
+ if (!pic)
+ return AVERROR(ENOMEM);
- pic = &s->picture[idx];
dst->ptr = pic;
pic->tf.f = pic->f;
@@ -368,22 +359,7 @@ int ff_mpv_frame_start(MpegEncContext *s, AVCodecContext *avctx)
return AVERROR_BUG;
}
- /* mark & release old frames */
- if (s->pict_type != AV_PICTURE_TYPE_B && s->last_pic.ptr &&
- s->last_pic.ptr != s->next_pic.ptr &&
- s->last_pic.ptr->f->buf[0]) {
- ff_mpeg_unref_picture(s->last_pic.ptr);
- }
-
- /* release non reference/forgotten frames */
- for (int i = 0; i < MAX_PICTURE_COUNT; i++) {
- if (!s->picture[i].reference ||
- (&s->picture[i] != s->last_pic.ptr &&
- &s->picture[i] != s->next_pic.ptr)) {
- ff_mpeg_unref_picture(&s->picture[i]);
- }
- }
-
+ ff_mpv_unref_picture(&s->cur_pic);
ret = alloc_picture(s, &s->cur_pic,
s->pict_type != AV_PICTURE_TYPE_B && !s->droppable);
if (ret < 0)
@@ -495,12 +471,6 @@ void ff_mpeg_flush(AVCodecContext *avctx)
{
MpegEncContext *const s = avctx->priv_data;
- if (!s->picture)
- return;
-
- for (int i = 0; i < MAX_PICTURE_COUNT; i++)
- ff_mpeg_unref_picture(&s->picture[i]);
-
ff_mpv_unref_picture(&s->cur_pic);
ff_mpv_unref_picture(&s->last_pic);
ff_mpv_unref_picture(&s->next_pic);
diff --git a/libavcodec/mpegvideo_enc.c b/libavcodec/mpegvideo_enc.c
index c9dc9959df..df2ff67ed2 100644
--- a/libavcodec/mpegvideo_enc.c
+++ b/libavcodec/mpegvideo_enc.c
@@ -75,6 +75,7 @@
#include "wmv2enc.h"
#include "rv10enc.h"
#include "packet_internal.h"
+#include "refstruct.h"
#include <limits.h>
#include "sp5x.h"
@@ -821,7 +822,8 @@ av_cold int ff_mpv_encode_init(AVCodecContext *avctx)
!FF_ALLOCZ_TYPED_ARRAY(s->q_inter_matrix16, 32) ||
!FF_ALLOCZ_TYPED_ARRAY(s->input_picture, MAX_B_FRAMES + 1) ||
!FF_ALLOCZ_TYPED_ARRAY(s->reordered_input_picture, MAX_B_FRAMES + 1) ||
- !(s->new_pic = av_frame_alloc()))
+ !(s->new_pic = av_frame_alloc()) ||
+ !(s->picture_pool = ff_mpv_alloc_pic_pool()))
return AVERROR(ENOMEM);
/* Allocate MV tables; the MV and MB tables will be copied
@@ -992,7 +994,14 @@ av_cold int ff_mpv_encode_end(AVCodecContext *avctx)
ff_rate_control_uninit(&s->rc_context);
ff_mpv_common_end(s);
+ ff_refstruct_pool_uninit(&s->picture_pool);
+ if (s->input_picture && s->reordered_input_picture) {
+ for (int i = 0; i < MAX_B_FRAMES + 1; i++) {
+ ff_refstruct_unref(&s->input_picture[i]);
+ ff_refstruct_unref(&s->reordered_input_picture[i]);
+ }
+ }
for (i = 0; i < FF_ARRAY_ELEMS(s->tmp_frames); i++)
av_frame_free(&s->tmp_frames[i]);
@@ -1131,12 +1140,14 @@ static int load_input_picture(MpegEncContext *s, const AVFrame *pic_arg)
{
MPVPicture *pic = NULL;
int64_t pts;
- int i, display_picture_number = 0, ret;
+ int display_picture_number = 0, ret;
int encoding_delay = s->max_b_frames ? s->max_b_frames
: (s->low_delay ? 0 : 1);
int flush_offset = 1;
int direct = 1;
+ av_assert1(!s->input_picture[0]);
+
if (pic_arg) {
pts = pic_arg->pts;
display_picture_number = s->input_picture_number++;
@@ -1182,16 +1193,13 @@ static int load_input_picture(MpegEncContext *s, const AVFrame *pic_arg)
ff_dlog(s->avctx, "%d %d %"PTRDIFF_SPECIFIER" %"PTRDIFF_SPECIFIER"\n", pic_arg->linesize[0],
pic_arg->linesize[1], s->linesize, s->uvlinesize);
- i = ff_find_unused_picture(s->avctx, s->picture, direct);
- if (i < 0)
- return i;
-
- pic = &s->picture[i];
- pic->reference = 3;
+ pic = ff_refstruct_pool_get(s->picture_pool);
+ if (!pic)
+ return AVERROR(ENOMEM);
if (direct) {
if ((ret = av_frame_ref(pic->f, pic_arg)) < 0)
- return ret;
+ goto fail;
pic->shared = 1;
} else {
ret = prepare_picture(s, pic->f, pic_arg);
@@ -1241,17 +1249,17 @@ static int load_input_picture(MpegEncContext *s, const AVFrame *pic_arg)
pic->display_picture_number = display_picture_number;
pic->f->pts = pts; // we set this here to avoid modifying pic_arg
- } else {
- /* Flushing: When we have not received enough input frames,
- * ensure s->input_picture[0] contains the first picture */
+ } else if (!s->reordered_input_picture[1]) {
+ /* Flushing: When the above check is true, the encoder is about to run
+ * out of frames to encode. Check if there are input_pictures left;
+ * if so, ensure s->input_picture[0] contains the first picture.
+ * A flush_offset != 1 will only happen if we did not receive enough
+ * input frames. */
for (flush_offset = 0; flush_offset < encoding_delay + 1; flush_offset++)
if (s->input_picture[flush_offset])
break;
- if (flush_offset <= 1)
- flush_offset = 1;
- else
- encoding_delay = encoding_delay - flush_offset + 1;
+ encoding_delay -= flush_offset - 1;
}
/* shift buffer entries */
@@ -1262,7 +1270,7 @@ static int load_input_picture(MpegEncContext *s, const AVFrame *pic_arg)
return 0;
fail:
- ff_mpeg_unref_picture(pic);
+ ff_refstruct_unref(&pic);
return ret;
}
@@ -1475,8 +1483,10 @@ fail:
/**
* Determines whether an input picture is discarded or not
* and if not determines the length of the next chain of B frames
- * and puts these pictures (including the P frame) into
+ * and moves these pictures (including the P frame) into
* reordered_input_picture.
+ * input_picture[0] is always NULL when exiting this function, even on error;
+ * reordered_input_picture[0] is always NULL when exiting this function on error.
*/
static int set_bframe_chain_length(MpegEncContext *s)
{
@@ -1490,7 +1500,7 @@ static int set_bframe_chain_length(MpegEncContext *s)
s->next_pic.ptr &&
skip_check(s, s->input_picture[0], s->next_pic.ptr)) {
// FIXME check that the gop check above is +-1 correct
- ff_mpeg_unref_picture(s->input_picture[0]);
+ ff_refstruct_unref(&s->input_picture[0]);
ff_vbv_update(s, 0);
@@ -1501,6 +1511,7 @@ static int set_bframe_chain_length(MpegEncContext *s)
if (/*s->picture_in_gop_number >= s->gop_size ||*/
!s->next_pic.ptr || s->intra_only) {
s->reordered_input_picture[0] = s->input_picture[0];
+ s->input_picture[0] = NULL;
s->reordered_input_picture[0]->f->pict_type = AV_PICTURE_TYPE_I;
s->reordered_input_picture[0]->coded_picture_number =
s->coded_picture_number++;
@@ -1555,7 +1566,7 @@ static int set_bframe_chain_length(MpegEncContext *s)
} else if (s->b_frame_strategy == 2) {
b_frames = estimate_best_b_count(s);
if (b_frames < 0) {
- ff_mpeg_unref_picture(s->input_picture[0]);
+ ff_refstruct_unref(&s->input_picture[0]);
return b_frames;
}
}
@@ -1589,12 +1600,14 @@ static int set_bframe_chain_length(MpegEncContext *s)
b_frames--;
s->reordered_input_picture[0] = s->input_picture[b_frames];
+ s->input_picture[b_frames] = NULL;
if (s->reordered_input_picture[0]->f->pict_type != AV_PICTURE_TYPE_I)
s->reordered_input_picture[0]->f->pict_type = AV_PICTURE_TYPE_P;
s->reordered_input_picture[0]->coded_picture_number =
s->coded_picture_number++;
for (int i = 0; i < b_frames; i++) {
s->reordered_input_picture[i + 1] = s->input_picture[i];
+ s->input_picture[i] = NULL;
s->reordered_input_picture[i + 1]->f->pict_type =
AV_PICTURE_TYPE_B;
s->reordered_input_picture[i + 1]->coded_picture_number =
@@ -1609,11 +1622,14 @@ static int select_input_picture(MpegEncContext *s)
{
int ret;
+ av_assert1(!s->reordered_input_picture[0]);
+
for (int i = 1; i <= MAX_B_FRAMES; i++)
s->reordered_input_picture[i - 1] = s->reordered_input_picture[i];
s->reordered_input_picture[MAX_B_FRAMES] = NULL;
ret = set_bframe_chain_length(s);
+ av_assert1(!s->input_picture[0]);
if (ret < 0)
return ret;
@@ -1643,6 +1659,7 @@ static int select_input_picture(MpegEncContext *s)
}
}
s->cur_pic.ptr = s->reordered_input_picture[0];
+ s->reordered_input_picture[0] = NULL;
av_assert1(s->mb_width == s->buffer_pools.alloc_mb_width);
av_assert1(s->mb_height == s->buffer_pools.alloc_mb_height);
av_assert1(s->mb_stride == s->buffer_pools.alloc_mb_stride);
@@ -1657,7 +1674,7 @@ static int select_input_picture(MpegEncContext *s)
}
return 0;
fail:
- ff_mpeg_unref_picture(s->reordered_input_picture[0]);
+ ff_refstruct_unref(&s->reordered_input_picture[0]);
return ret;
}
@@ -1720,13 +1737,6 @@ static void update_noise_reduction(MpegEncContext *s)
static void frame_start(MpegEncContext *s)
{
- /* mark & release old frames */
- if (s->pict_type != AV_PICTURE_TYPE_B && s->last_pic.ptr &&
- s->last_pic.ptr != s->next_pic.ptr &&
- s->last_pic.ptr->f->buf[0]) {
- ff_mpv_unref_picture(&s->last_pic);
- }
-
s->cur_pic.ptr->f->pict_type = s->pict_type;
if (s->pict_type != AV_PICTURE_TYPE_B) {
@@ -1747,6 +1757,8 @@ int ff_mpv_encode_picture(AVCodecContext *avctx, AVPacket *pkt,
int i, stuffing_count, ret;
int context_count = s->slice_context_count;
+ ff_mpv_unref_picture(&s->cur_pic);
+
s->vbv_ignore_qmax = 0;
s->picture_in_gop_number++;
@@ -1973,11 +1985,7 @@ vbv_retry:
s->frame_bits = 0;
}
- /* release non-reference frames */
- for (i = 0; i < MAX_PICTURE_COUNT; i++) {
- if (!s->picture[i].reference)
- ff_mpeg_unref_picture(&s->picture[i]);
- }
+ ff_mpv_unref_picture(&s->cur_pic);
av_assert1((s->frame_bits & 7) == 0);
diff --git a/libavcodec/mpegvideodec.h b/libavcodec/mpegvideodec.h
index 35e9081d2c..6100364715 100644
--- a/libavcodec/mpegvideodec.h
+++ b/libavcodec/mpegvideodec.h
@@ -44,8 +44,10 @@
* Initialize the given MpegEncContext for decoding.
* the changed fields will not depend upon
* the prior state of the MpegEncContext.
+ *
+ * Also initialize the picture pool.
*/
-void ff_mpv_decode_init(MpegEncContext *s, AVCodecContext *avctx);
+int ff_mpv_decode_init(MpegEncContext *s, AVCodecContext *avctx);
int ff_mpv_common_frame_size_change(MpegEncContext *s);
diff --git a/libavcodec/msmpeg4dec.c b/libavcodec/msmpeg4dec.c
index 20d735a152..4143c46c15 100644
--- a/libavcodec/msmpeg4dec.c
+++ b/libavcodec/msmpeg4dec.c
@@ -852,7 +852,8 @@ const FFCodec ff_msmpeg4v1_decoder = {
FF_CODEC_DECODE_CB(ff_h263_decode_frame),
.close = ff_mpv_decode_close,
.p.capabilities = AV_CODEC_CAP_DRAW_HORIZ_BAND | AV_CODEC_CAP_DR1,
- .caps_internal = FF_CODEC_CAP_SKIP_FRAME_FILL_PARAM,
+ .caps_internal = FF_CODEC_CAP_INIT_CLEANUP |
+ FF_CODEC_CAP_SKIP_FRAME_FILL_PARAM,
.p.max_lowres = 3,
};
@@ -866,7 +867,8 @@ const FFCodec ff_msmpeg4v2_decoder = {
FF_CODEC_DECODE_CB(ff_h263_decode_frame),
.close = ff_mpv_decode_close,
.p.capabilities = AV_CODEC_CAP_DRAW_HORIZ_BAND | AV_CODEC_CAP_DR1,
- .caps_internal = FF_CODEC_CAP_SKIP_FRAME_FILL_PARAM,
+ .caps_internal = FF_CODEC_CAP_INIT_CLEANUP |
+ FF_CODEC_CAP_SKIP_FRAME_FILL_PARAM,
.p.max_lowres = 3,
};
@@ -880,7 +882,8 @@ const FFCodec ff_msmpeg4v3_decoder = {
FF_CODEC_DECODE_CB(ff_h263_decode_frame),
.close = ff_mpv_decode_close,
.p.capabilities = AV_CODEC_CAP_DRAW_HORIZ_BAND | AV_CODEC_CAP_DR1,
- .caps_internal = FF_CODEC_CAP_SKIP_FRAME_FILL_PARAM,
+ .caps_internal = FF_CODEC_CAP_INIT_CLEANUP |
+ FF_CODEC_CAP_SKIP_FRAME_FILL_PARAM,
.p.max_lowres = 3,
};
@@ -894,6 +897,7 @@ const FFCodec ff_wmv1_decoder = {
FF_CODEC_DECODE_CB(ff_h263_decode_frame),
.close = ff_mpv_decode_close,
.p.capabilities = AV_CODEC_CAP_DRAW_HORIZ_BAND | AV_CODEC_CAP_DR1,
- .caps_internal = FF_CODEC_CAP_SKIP_FRAME_FILL_PARAM,
+ .caps_internal = FF_CODEC_CAP_INIT_CLEANUP |
+ FF_CODEC_CAP_SKIP_FRAME_FILL_PARAM,
.p.max_lowres = 3,
};
diff --git a/libavcodec/rv10.c b/libavcodec/rv10.c
index 6ece6a5a25..3dcee0a065 100644
--- a/libavcodec/rv10.c
+++ b/libavcodec/rv10.c
@@ -364,7 +364,9 @@ static av_cold int rv10_decode_init(AVCodecContext *avctx)
avctx->coded_height, 0, avctx)) < 0)
return ret;
- ff_mpv_decode_init(s, avctx);
+ ret = ff_mpv_decode_init(s, avctx);
+ if (ret < 0)
+ return ret;
s->out_format = FMT_H263;
@@ -645,7 +647,7 @@ static int rv10_decode_frame(AVCodecContext *avctx, AVFrame *pict,
}
// so we can detect if frame_end was not called (find some nicer solution...)
- s->cur_pic.ptr = NULL;
+ ff_mpv_unref_picture(&s->cur_pic);
}
return avpkt->size;
@@ -662,6 +664,7 @@ const FFCodec ff_rv10_decoder = {
.close = ff_mpv_decode_close,
.p.capabilities = AV_CODEC_CAP_DR1,
.p.max_lowres = 3,
+ .caps_internal = FF_CODEC_CAP_INIT_CLEANUP,
};
const FFCodec ff_rv20_decoder = {
@@ -676,4 +679,5 @@ const FFCodec ff_rv20_decoder = {
.p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_DELAY,
.flush = ff_mpeg_flush,
.p.max_lowres = 3,
+ .caps_internal = FF_CODEC_CAP_INIT_CLEANUP,
};
diff --git a/libavcodec/rv34.c b/libavcodec/rv34.c
index 90296d7de3..94d5ee30df 100644
--- a/libavcodec/rv34.c
+++ b/libavcodec/rv34.c
@@ -1508,7 +1508,9 @@ av_cold int ff_rv34_decode_init(AVCodecContext *avctx)
MpegEncContext *s = &r->s;
int ret;
- ff_mpv_decode_init(s, avctx);
+ ret = ff_mpv_decode_init(s, avctx);
+ if (ret < 0)
+ return ret;
s->out_format = FMT_H263;
avctx->pix_fmt = AV_PIX_FMT_YUV420P;
@@ -1630,7 +1632,7 @@ int ff_rv34_decode_frame(AVCodecContext *avctx, AVFrame *pict,
if (s->next_pic.ptr) {
if ((ret = av_frame_ref(pict, s->next_pic.ptr->f)) < 0)
return ret;
- s->next_pic.ptr = NULL;
+ ff_mpv_unref_picture(&s->next_pic);
*got_picture_ptr = 1;
}
diff --git a/libavcodec/svq1enc.c b/libavcodec/svq1enc.c
index 9631fa243d..9c9be8c6b3 100644
--- a/libavcodec/svq1enc.c
+++ b/libavcodec/svq1enc.c
@@ -60,7 +60,6 @@ typedef struct SVQ1EncContext {
* else, the idea is to make the motion estimation eventually independent
* of MpegEncContext, so this will be removed then. */
MpegEncContext m;
- MPVPicture cur_pic, last_pic;
AVCodecContext *avctx;
MECmpContext mecc;
HpelDSPContext hdsp;
@@ -327,8 +326,6 @@ static int svq1_encode_plane(SVQ1EncContext *s, int plane,
if (s->pict_type == AV_PICTURE_TYPE_P) {
s->m.avctx = s->avctx;
- s->m.cur_pic.ptr = &s->cur_pic;
- s->m.last_pic.ptr = &s->last_pic;
s->m.last_pic.data[0] = ref_plane;
s->m.linesize =
s->m.last_pic.linesize[0] =
diff --git a/libavcodec/vc1dec.c b/libavcodec/vc1dec.c
index a20facc899..17da7ed7cd 100644
--- a/libavcodec/vc1dec.c
+++ b/libavcodec/vc1dec.c
@@ -461,7 +461,9 @@ av_cold int ff_vc1_decode_init(AVCodecContext *avctx)
if (ret < 0)
return ret;
- ff_mpv_decode_init(s, avctx);
+ ret = ff_mpv_decode_init(s, avctx);
+ if (ret < 0)
+ return ret;
avctx->pix_fmt = vc1_get_format(avctx);
@@ -846,7 +848,7 @@ static int vc1_decode_frame(AVCodecContext *avctx, AVFrame *pict,
if (s->low_delay == 0 && s->next_pic.ptr) {
if ((ret = av_frame_ref(pict, s->next_pic.ptr->f)) < 0)
return ret;
- s->next_pic.ptr = NULL;
+ ff_mpv_unref_picture(&s->next_pic);
*got_frame = 1;
}
@@ -997,7 +999,7 @@ static int vc1_decode_frame(AVCodecContext *avctx, AVFrame *pict,
if (s->context_initialized &&
(s->width != avctx->coded_width ||
s->height != avctx->coded_height)) {
- ff_vc1_decode_end(avctx);
+ vc1_decode_reset(avctx);
}
if (!s->context_initialized) {
--
2.40.1
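For context on the ff_refstruct_unref() conversions above: the ownership convention being adopted (release a reference and null the owner's pointer in one step, making double frees impossible) can be modeled in miniature. The sketch below is a toy refcounted object, not FFmpeg's actual refstruct API; all names here are illustrative.

```c
#include <assert.h>
#include <stdlib.h>

/* Toy refcounted object -- a stand-in for an ff_refstruct pool entry. */
typedef struct {
    int refcount;
} RefObj;

/* Hand out a new reference (refcount 1). */
static RefObj *obj_get(void)
{
    RefObj *o = malloc(sizeof(*o));
    if (o)
        o->refcount = 1;
    return o;
}

/* Analogous to ff_refstruct_unref(): release the reference and null
 * the caller's pointer, so a second call is a safe no-op. */
static void obj_unref(RefObj **op)
{
    RefObj *o = *op;
    if (!o)
        return;
    *op = NULL;
    if (--o->refcount == 0)
        free(o);
}
```

This is why the patch pairs every ownership transfer (e.g. `s->reordered_input_picture[0] = s->input_picture[0];`) with nulling the source pointer: each picture reference then has exactly one owner at all times.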
_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
To unsubscribe, visit link above, or email
ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".
^ permalink raw reply [flat|nested] 75+ messages in thread
* [FFmpeg-devel] [PATCH v2 55/71] avcodec/mpeg4videoenc: Avoid branch for writing stuffing
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (52 preceding siblings ...)
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 54/71] avcodec/mpegpicture: Make MPVPicture refcounted Andreas Rheinhardt
@ 2024-05-11 20:51 ` Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 56/71] avcodec/mpeg4videoenc: Simplify writing startcodes Andreas Rheinhardt
` (16 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:51 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpeg4videoenc.c | 8 +++-----
1 file changed, 3 insertions(+), 5 deletions(-)
diff --git a/libavcodec/mpeg4videoenc.c b/libavcodec/mpeg4videoenc.c
index 2f4b1a1d52..76960c2ced 100644
--- a/libavcodec/mpeg4videoenc.c
+++ b/libavcodec/mpeg4videoenc.c
@@ -862,11 +862,9 @@ void ff_mpeg4_encode_mb(MpegEncContext *s, int16_t block[6][64],
*/
void ff_mpeg4_stuffing(PutBitContext *pbc)
{
- int length;
- put_bits(pbc, 1, 0);
- length = (-put_bits_count(pbc)) & 7;
- if (length)
- put_bits(pbc, length, (1 << length) - 1);
+ int length = 8 - (put_bits_count(pbc) & 7);
+
+ put_bits(pbc, length, (1 << (length - 1)) - 1);
}
/* must be called before writing the header */
--
2.40.1
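The equivalence of the two stuffing variants can be checked in isolation. The sketch below models only the emitted bit pattern (it is not FFmpeg's PutBitContext API): for every bit position within a byte, the old code (a 0 bit, then pad with 1-bits to the next byte boundary) and the new single put_bits() call produce the same number of bits with the same value.

```c
#include <assert.h>

/* Old variant: one 0 bit, then ((-(bitcount+1)) & 7) one-bits.
 * Returns the MSB-first value of the stuffing run; *len gets its length. */
static unsigned old_stuffing(unsigned bitcount, unsigned *len)
{
    unsigned rem = (0u - (bitcount + 1)) & 7; /* 1-bits after the 0 bit */
    *len = 1 + rem;
    return (1u << rem) - 1;                   /* leading 0, then rem ones */
}

/* New variant: a single write of 1..8 bits, value "0 followed by ones". */
static unsigned new_stuffing(unsigned bitcount, unsigned *len)
{
    unsigned length = 8 - (bitcount & 7);
    *len = length;
    return (1u << (length - 1)) - 1;
}
```

For example, at bitcount % 8 == 7 both emit a single 0 bit, and at bitcount % 8 == 0 both emit the byte 0x7F.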
* [FFmpeg-devel] [PATCH v2 56/71] avcodec/mpeg4videoenc: Simplify writing startcodes
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (53 preceding siblings ...)
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 55/71] avcodec/mpeg4videoenc: Avoid branch for writing stuffing Andreas Rheinhardt
@ 2024-05-11 20:51 ` Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 57/71] avcodec/mpegpicture: Use ThreadProgress instead of ThreadFrame API Andreas Rheinhardt
` (15 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:51 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpeg4videoenc.c | 21 +++++++--------------
1 file changed, 7 insertions(+), 14 deletions(-)
diff --git a/libavcodec/mpeg4videoenc.c b/libavcodec/mpeg4videoenc.c
index 76960c2ced..583ea9de6f 100644
--- a/libavcodec/mpeg4videoenc.c
+++ b/libavcodec/mpeg4videoenc.c
@@ -883,8 +883,7 @@ static void mpeg4_encode_gop_header(MpegEncContext *s)
int64_t hours, minutes, seconds;
int64_t time;
- put_bits(&s->pb, 16, 0);
- put_bits(&s->pb, 16, GOP_STARTCODE);
+ put_bits32(&s->pb, GOP_STARTCODE);
time = s->cur_pic.ptr->f->pts;
if (s->reordered_input_picture[1])
@@ -933,13 +932,11 @@ static void mpeg4_encode_visual_object_header(MpegEncContext *s)
// FIXME levels
- put_bits(&s->pb, 16, 0);
- put_bits(&s->pb, 16, VOS_STARTCODE);
+ put_bits32(&s->pb, VOS_STARTCODE);
put_bits(&s->pb, 8, profile_and_level_indication);
- put_bits(&s->pb, 16, 0);
- put_bits(&s->pb, 16, VISUAL_OBJ_STARTCODE);
+ put_bits32(&s->pb, VISUAL_OBJ_STARTCODE);
put_bits(&s->pb, 1, 1);
put_bits(&s->pb, 4, vo_ver_id);
@@ -966,10 +963,8 @@ static void mpeg4_encode_vol_header(MpegEncContext *s,
vo_type = SIMPLE_VO_TYPE;
}
- put_bits(&s->pb, 16, 0);
- put_bits(&s->pb, 16, 0x100 + vo_number); /* video obj */
- put_bits(&s->pb, 16, 0);
- put_bits(&s->pb, 16, 0x120 + vol_number); /* video obj layer */
+ put_bits32(&s->pb, 0x100 + vo_number); /* video obj */
+ put_bits32(&s->pb, 0x120 + vol_number); /* video obj layer */
put_bits(&s->pb, 1, 0); /* random access vol */
put_bits(&s->pb, 8, vo_type); /* video obj type indication */
@@ -1046,8 +1041,7 @@ static void mpeg4_encode_vol_header(MpegEncContext *s,
/* user data */
if (!(s->avctx->flags & AV_CODEC_FLAG_BITEXACT)) {
- put_bits(&s->pb, 16, 0);
- put_bits(&s->pb, 16, 0x1B2); /* user_data */
+ put_bits32(&s->pb, USER_DATA_STARTCODE);
ff_put_string(&s->pb, LIBAVCODEC_IDENT, 0);
}
}
@@ -1071,8 +1065,7 @@ int ff_mpeg4_encode_picture_header(MpegEncContext *s)
s->partitioned_frame = s->data_partitioning && s->pict_type != AV_PICTURE_TYPE_B;
- put_bits(&s->pb, 16, 0); /* vop header */
- put_bits(&s->pb, 16, VOP_STARTCODE); /* vop header */
+ put_bits32(&s->pb, VOP_STARTCODE); /* vop header */
put_bits(&s->pb, 2, s->pict_type - 1); /* pict type: I = 0 , P = 1 */
time_div = FFUDIV(s->time, s->avctx->time_base.den);
--
2.40.1
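The simplification is valid because all the rewritten startcodes have the form 0x000001xx, so writing 16 zero bits followed by the 16-bit code is bit-identical to one 32-bit write of the code. The toy MSB-first bit writer below (an illustrative stand-in, not FFmpeg's PutBitContext) makes that concrete; 0x1B3 is used as the GOP startcode value for the demonstration.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Minimal MSB-first bit writer. */
typedef struct {
    uint8_t buf[16];
    size_t  bitpos;
} BitWriter;

static void bw_put(BitWriter *bw, unsigned nbits, uint32_t value)
{
    for (unsigned i = nbits; i-- > 0; ) {
        unsigned bit = (value >> i) & 1;
        bw->buf[bw->bitpos >> 3] |= bit << (7 - (bw->bitpos & 7));
        bw->bitpos++;
    }
}
```

Writing `bw_put(16, 0); bw_put(16, 0x1B3)` and `bw_put(32, 0x1B3)` both yield the bytes 00 00 01 B3.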
* [FFmpeg-devel] [PATCH v2 57/71] avcodec/mpegpicture: Use ThreadProgress instead of ThreadFrame API
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (54 preceding siblings ...)
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 56/71] avcodec/mpeg4videoenc: Simplify writing startcodes Andreas Rheinhardt
@ 2024-05-11 20:51 ` Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 58/71] avcodec/mpegpicture: Avoid loop and branch when setting motion_val Andreas Rheinhardt
` (14 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:51 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
Given that MPVPictures are already directly shared between threads
in case of frame-threaded decoding, one can simply use them to
pass decoding progress information between threads. This avoids
one level of indirection; it also avoids allocations (of the
ThreadFrameProgress structure) in case of frame threading and
makes ff_thread_release_ext_buffer() decoder-only
(actually, H.264-decoder-only).
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/error_resilience.c | 13 ++++++++----
libavcodec/error_resilience.h | 1 +
libavcodec/mpeg4videodec.c | 8 ++++----
libavcodec/mpeg_er.c | 2 +-
libavcodec/mpegpicture.c | 19 ++++++++++++------
libavcodec/mpegpicture.h | 7 ++++---
libavcodec/mpegvideo_dec.c | 25 +++++++++++++-----------
libavcodec/mpegvideo_enc.c | 2 +-
libavcodec/mpv_reconstruct_mb_template.c | 8 ++++----
libavcodec/pthread_frame.c | 5 -----
libavcodec/rv34.c | 16 +++++++--------
11 files changed, 59 insertions(+), 47 deletions(-)
diff --git a/libavcodec/error_resilience.c b/libavcodec/error_resilience.c
index 66d03987b6..56844d5084 100644
--- a/libavcodec/error_resilience.c
+++ b/libavcodec/error_resilience.c
@@ -34,6 +34,7 @@
#include "mpegutils.h"
#include "mpegvideo.h"
#include "threadframe.h"
+#include "threadprogress.h"
/**
* @param stride the number of MVs to get to the next row
@@ -409,8 +410,12 @@ static void guess_mv(ERContext *s)
set_mv_strides(s, &mot_step, &mot_stride);
num_avail = 0;
- if (s->last_pic.motion_val[0])
- ff_thread_await_progress(s->last_pic.tf, mb_height-1, 0);
+ if (s->last_pic.motion_val[0]) {
+ if (s->last_pic.tf)
+ ff_thread_await_progress(s->last_pic.tf, mb_height-1, 0);
+ else
+ ff_thread_progress_await(s->last_pic.progress, mb_height - 1);
+ }
for (i = 0; i < mb_width * mb_height; i++) {
const int mb_xy = s->mb_index2xy[i];
int f = 0;
@@ -763,7 +768,7 @@ static int is_intra_more_likely(ERContext *s)
if (s->avctx->codec_id == AV_CODEC_ID_H264) {
// FIXME
} else {
- ff_thread_await_progress(s->last_pic.tf, mb_y, 0);
+ ff_thread_progress_await(s->last_pic.progress, mb_y);
}
is_intra_likely += s->sad(NULL, last_mb_ptr, mb_ptr,
linesize[0], 16);
@@ -1198,7 +1203,7 @@ void ff_er_frame_end(ERContext *s, int *decode_error_flags)
int time_pb = s->pb_time;
av_assert0(s->avctx->codec_id != AV_CODEC_ID_H264);
- ff_thread_await_progress(s->next_pic.tf, mb_y, 0);
+ ff_thread_progress_await(s->next_pic.progress, mb_y);
s->mv[0][0][0] = s->next_pic.motion_val[0][xy][0] * time_pb / time_pp;
s->mv[0][0][1] = s->next_pic.motion_val[0][xy][1] * time_pb / time_pp;
diff --git a/libavcodec/error_resilience.h b/libavcodec/error_resilience.h
index 1346639c3c..a1b9b9ec1a 100644
--- a/libavcodec/error_resilience.h
+++ b/libavcodec/error_resilience.h
@@ -40,6 +40,7 @@
typedef struct ERPicture {
AVFrame *f;
const struct ThreadFrame *tf;
+ const struct ThreadProgress *progress;
// it is the caller's responsibility to allocate these buffers
int16_t (*motion_val[2])[2];
diff --git a/libavcodec/mpeg4videodec.c b/libavcodec/mpeg4videodec.c
index f4d83dc2aa..450278eee7 100644
--- a/libavcodec/mpeg4videodec.c
+++ b/libavcodec/mpeg4videodec.c
@@ -45,7 +45,7 @@
#include "internal.h"
#include "profiles.h"
#include "qpeldsp.h"
-#include "threadframe.h"
+#include "threadprogress.h"
#include "xvididct.h"
#include "unary.h"
@@ -1811,7 +1811,7 @@ static int mpeg4_decode_mb(MpegEncContext *s, int16_t block[6][64])
s->last_mv[i][1][1] = 0;
}
- ff_thread_await_progress(&s->next_pic.ptr->tf, s->mb_y, 0);
+ ff_thread_progress_await(&s->next_pic.ptr->progress, s->mb_y);
}
/* if we skipped it in the future P-frame than skip it now too */
@@ -2016,10 +2016,10 @@ end:
if (s->pict_type == AV_PICTURE_TYPE_B) {
const int delta = s->mb_x + 1 == s->mb_width ? 2 : 1;
- ff_thread_await_progress(&s->next_pic.ptr->tf,
+ ff_thread_progress_await(&s->next_pic.ptr->progress,
(s->mb_x + delta >= s->mb_width)
? FFMIN(s->mb_y + 1, s->mb_height - 1)
- : s->mb_y, 0);
+ : s->mb_y);
if (s->next_pic.mbskip_table[xy + delta])
return SLICE_OK;
}
diff --git a/libavcodec/mpeg_er.c b/libavcodec/mpeg_er.c
index f9421ec91f..e7b3197bb1 100644
--- a/libavcodec/mpeg_er.c
+++ b/libavcodec/mpeg_er.c
@@ -34,7 +34,7 @@ static void set_erpic(ERPicture *dst, const MPVPicture *src)
}
dst->f = src->f;
- dst->tf = &src->tf;
+ dst->progress = &src->progress;
for (i = 0; i < 2; i++) {
dst->motion_val[i] = src->motion_val[i];
diff --git a/libavcodec/mpegpicture.c b/libavcodec/mpegpicture.c
index 95255b893e..ea5d54c670 100644
--- a/libavcodec/mpegpicture.c
+++ b/libavcodec/mpegpicture.c
@@ -28,13 +28,13 @@
#include "motion_est.h"
#include "mpegpicture.h"
#include "refstruct.h"
-#include "threadframe.h"
static void mpv_pic_reset(FFRefStructOpaque unused, void *obj)
{
MPVPicture *pic = obj;
- ff_thread_release_ext_buffer(&pic->tf);
+ av_frame_unref(pic->f);
+ ff_thread_progress_reset(&pic->progress);
ff_refstruct_unref(&pic->hwaccel_picture_private);
@@ -65,14 +65,18 @@ static void mpv_pic_reset(FFRefStructOpaque unused, void *obj)
pic->coded_picture_number = 0;
}
-static int av_cold mpv_pic_init(FFRefStructOpaque unused, void *obj)
+static int av_cold mpv_pic_init(FFRefStructOpaque opaque, void *obj)
{
MPVPicture *pic = obj;
+ int ret, init_progress = (uintptr_t)opaque.nc;
+
+ ret = ff_thread_progress_init(&pic->progress, init_progress);
+ if (ret < 0)
+ return ret;
pic->f = av_frame_alloc();
if (!pic->f)
return AVERROR(ENOMEM);
- pic->tf.f = pic->f;
return 0;
}
@@ -80,12 +84,15 @@ static void av_cold mpv_pic_free(FFRefStructOpaque unused, void *obj)
{
MPVPicture *pic = obj;
+ ff_thread_progress_destroy(&pic->progress);
av_frame_free(&pic->f);
}
-av_cold FFRefStructPool *ff_mpv_alloc_pic_pool(void)
+av_cold FFRefStructPool *ff_mpv_alloc_pic_pool(int init_progress)
{
- return ff_refstruct_pool_alloc_ext(sizeof(MPVPicture), 0, NULL,
+ return ff_refstruct_pool_alloc_ext(sizeof(MPVPicture),
+ FF_REFSTRUCT_POOL_FLAG_FREE_ON_INIT_ERROR,
+ (void*)(uintptr_t)init_progress,
mpv_pic_init, mpv_pic_reset, mpv_pic_free, NULL);
}
diff --git a/libavcodec/mpegpicture.h b/libavcodec/mpegpicture.h
index f6db4238b5..f9633e11db 100644
--- a/libavcodec/mpegpicture.h
+++ b/libavcodec/mpegpicture.h
@@ -26,7 +26,7 @@
#include "avcodec.h"
#include "motion_est.h"
-#include "threadframe.h"
+#include "threadprogress.h"
#define MPV_MAX_PLANES 3
#define EDGE_WIDTH 16
@@ -55,7 +55,6 @@ typedef struct BufferPoolContext {
*/
typedef struct MPVPicture {
struct AVFrame *f;
- ThreadFrame tf;
int8_t *qscale_table_base;
int8_t *qscale_table;
@@ -87,6 +86,8 @@ typedef struct MPVPicture {
int display_picture_number;
int coded_picture_number;
+
+ ThreadProgress progress;
} MPVPicture;
typedef struct MPVWorkPicture {
@@ -111,7 +112,7 @@ typedef struct MPVWorkPicture {
/**
* Allocate a pool of MPVPictures.
*/
-struct FFRefStructPool *ff_mpv_alloc_pic_pool(void);
+struct FFRefStructPool *ff_mpv_alloc_pic_pool(int init_progress);
/**
* Allocate an MPVPicture's accessories (but not the AVFrame's buffer itself)
diff --git a/libavcodec/mpegvideo_dec.c b/libavcodec/mpegvideo_dec.c
index d596f94df3..b7f72ad460 100644
--- a/libavcodec/mpegvideo_dec.c
+++ b/libavcodec/mpegvideo_dec.c
@@ -40,11 +40,13 @@
#include "mpeg4videodec.h"
#include "refstruct.h"
#include "thread.h"
-#include "threadframe.h"
+#include "threadprogress.h"
#include "wmv2dec.h"
int ff_mpv_decode_init(MpegEncContext *s, AVCodecContext *avctx)
{
+ enum ThreadingStatus thread_status;
+
ff_mpv_common_defaults(s);
s->avctx = avctx;
@@ -59,9 +61,12 @@ int ff_mpv_decode_init(MpegEncContext *s, AVCodecContext *avctx)
ff_mpv_idct_init(s);
ff_h264chroma_init(&s->h264chroma, 8); //for lowres
- if (!s->picture_pool && // VC-1 can call this multiple times
- ff_thread_sync_ref(avctx, offsetof(MpegEncContext, picture_pool))) {
- s->picture_pool = ff_mpv_alloc_pic_pool();
+ if (s->picture_pool) // VC-1 can call this multiple times
+ return 0;
+
+ thread_status = ff_thread_sync_ref(avctx, offsetof(MpegEncContext, picture_pool));
+ if (thread_status != FF_THREAD_IS_COPY) {
+ s->picture_pool = ff_mpv_alloc_pic_pool(thread_status != FF_THREAD_NO_FRAME_THREADING);
if (!s->picture_pool)
return AVERROR(ENOMEM);
}
@@ -229,7 +234,6 @@ static int alloc_picture(MpegEncContext *s, MPVWorkPicture *dst, int reference)
dst->ptr = pic;
- pic->tf.f = pic->f;
pic->reference = reference;
/* WM Image / Screen codecs allocate internal buffers with different
@@ -237,8 +241,8 @@ static int alloc_picture(MpegEncContext *s, MPVWorkPicture *dst, int reference)
if (avctx->codec_id != AV_CODEC_ID_WMV3IMAGE &&
avctx->codec_id != AV_CODEC_ID_VC1IMAGE &&
avctx->codec_id != AV_CODEC_ID_MSS2) {
- ret = ff_thread_get_ext_buffer(avctx, &pic->tf,
- reference ? AV_GET_BUFFER_FLAG_REF : 0);
+ ret = ff_thread_get_buffer(avctx, pic->f,
+ reference ? AV_GET_BUFFER_FLAG_REF : 0);
} else {
pic->f->width = avctx->width;
pic->f->height = avctx->height;
@@ -281,8 +285,7 @@ static int av_cold alloc_dummy_frame(MpegEncContext *s, MPVWorkPicture *dst)
pic = dst->ptr;
pic->dummy = 1;
- ff_thread_report_progress(&pic->tf, INT_MAX, 0);
- ff_thread_report_progress(&pic->tf, INT_MAX, 1);
+ ff_thread_progress_report(&pic->progress, INT_MAX);
return 0;
}
@@ -418,7 +421,7 @@ void ff_mpv_frame_end(MpegEncContext *s)
emms_c();
if (s->cur_pic.reference)
- ff_thread_report_progress(&s->cur_pic.ptr->tf, INT_MAX, 0);
+ ff_thread_progress_report(&s->cur_pic.ptr->progress, INT_MAX);
}
void ff_print_debug_info(const MpegEncContext *s, const MPVPicture *p, AVFrame *pict)
@@ -484,7 +487,7 @@ void ff_mpeg_flush(AVCodecContext *avctx)
void ff_mpv_report_decode_progress(MpegEncContext *s)
{
if (s->pict_type != AV_PICTURE_TYPE_B && !s->partitioned_frame && !s->er.error_occurred)
- ff_thread_report_progress(&s->cur_pic.ptr->tf, s->mb_y, 0);
+ ff_thread_progress_report(&s->cur_pic.ptr->progress, s->mb_y);
}
diff --git a/libavcodec/mpegvideo_enc.c b/libavcodec/mpegvideo_enc.c
index df2ff67ed2..85ed52d9ad 100644
--- a/libavcodec/mpegvideo_enc.c
+++ b/libavcodec/mpegvideo_enc.c
@@ -823,7 +823,7 @@ av_cold int ff_mpv_encode_init(AVCodecContext *avctx)
!FF_ALLOCZ_TYPED_ARRAY(s->input_picture, MAX_B_FRAMES + 1) ||
!FF_ALLOCZ_TYPED_ARRAY(s->reordered_input_picture, MAX_B_FRAMES + 1) ||
!(s->new_pic = av_frame_alloc()) ||
- !(s->picture_pool = ff_mpv_alloc_pic_pool()))
+ !(s->picture_pool = ff_mpv_alloc_pic_pool(0)))
return AVERROR(ENOMEM);
/* Allocate MV tables; the MV and MB tables will be copied
diff --git a/libavcodec/mpv_reconstruct_mb_template.c b/libavcodec/mpv_reconstruct_mb_template.c
index 549c55ffad..9aacf380a1 100644
--- a/libavcodec/mpv_reconstruct_mb_template.c
+++ b/libavcodec/mpv_reconstruct_mb_template.c
@@ -124,12 +124,12 @@ void mpv_reconstruct_mb_internal(MpegEncContext *s, int16_t block[12][64],
if (HAVE_THREADS && is_mpeg12 != DEFINITELY_MPEG12 &&
s->avctx->active_thread_type & FF_THREAD_FRAME) {
if (s->mv_dir & MV_DIR_FORWARD) {
- ff_thread_await_progress(&s->last_pic.ptr->tf,
- lowest_referenced_row(s, 0), 0);
+ ff_thread_progress_await(&s->last_pic.ptr->progress,
+ lowest_referenced_row(s, 0));
}
if (s->mv_dir & MV_DIR_BACKWARD) {
- ff_thread_await_progress(&s->next_pic.ptr->tf,
- lowest_referenced_row(s, 1), 0);
+ ff_thread_progress_await(&s->next_pic.ptr->progress,
+ lowest_referenced_row(s, 1));
}
}
diff --git a/libavcodec/pthread_frame.c b/libavcodec/pthread_frame.c
index 67f09c1f48..fd7819f52d 100644
--- a/libavcodec/pthread_frame.c
+++ b/libavcodec/pthread_frame.c
@@ -985,11 +985,6 @@ int ff_thread_get_ext_buffer(AVCodecContext *avctx, ThreadFrame *f, int flags)
int ret;
f->owner[0] = f->owner[1] = avctx;
- /* Hint: It is possible for this function to be called with codecs
- * that don't support frame threading at all, namely in case
- * a frame-threaded decoder shares code with codecs that are not.
- * This currently affects non-MPEG-4 mpegvideo codecs.
- * The following check will always be true for them. */
if (!(avctx->active_thread_type & FF_THREAD_FRAME))
return ff_get_buffer(avctx, f->f, flags);
diff --git a/libavcodec/rv34.c b/libavcodec/rv34.c
index 94d5ee30df..d3816df059 100644
--- a/libavcodec/rv34.c
+++ b/libavcodec/rv34.c
@@ -43,7 +43,7 @@
#include "qpeldsp.h"
#include "rectangle.h"
#include "thread.h"
-#include "threadframe.h"
+#include "threadprogress.h"
#include "rv34vlc.h"
#include "rv34data.h"
@@ -719,8 +719,8 @@ static inline void rv34_mc(RV34DecContext *r, const int block_type,
if (HAVE_THREADS && (s->avctx->active_thread_type & FF_THREAD_FRAME)) {
/* wait for the referenced mb row to be finished */
int mb_row = s->mb_y + ((yoff + my + 5 + 8 * height) >> 4);
- const ThreadFrame *f = dir ? &s->next_pic.ptr->tf : &s->last_pic.ptr->tf;
- ff_thread_await_progress(f, mb_row, 0);
+ const ThreadProgress *p = dir ? &s->next_pic.ptr->progress : &s->last_pic.ptr->progress;
+ ff_thread_progress_await(p, mb_row);
}
dxy = ly*4 + lx;
@@ -899,7 +899,7 @@ static int rv34_decode_mv(RV34DecContext *r, int block_type)
//surprisingly, it uses motion scheme from next reference frame
/* wait for the current mb row to be finished */
if (HAVE_THREADS && (s->avctx->active_thread_type & FF_THREAD_FRAME))
- ff_thread_await_progress(&s->next_pic.ptr->tf, FFMAX(0, s->mb_y-1), 0);
+ ff_thread_progress_await(&s->next_pic.ptr->progress, FFMAX(0, s->mb_y-1));
next_bt = s->next_pic.mb_type[s->mb_x + s->mb_y * s->mb_stride];
if(IS_INTRA(next_bt) || IS_SKIP(next_bt)){
@@ -1483,8 +1483,8 @@ static int rv34_decode_slice(RV34DecContext *r, int end, const uint8_t* buf, int
r->loop_filter(r, s->mb_y - 2);
if (HAVE_THREADS && (s->avctx->active_thread_type & FF_THREAD_FRAME))
- ff_thread_report_progress(&s->cur_pic.ptr->tf,
- s->mb_y - 2, 0);
+ ff_thread_progress_report(&s->cur_pic.ptr->progress,
+ s->mb_y - 2);
}
if(s->mb_x == s->resync_mb_x)
@@ -1582,7 +1582,7 @@ static int finish_frame(AVCodecContext *avctx, AVFrame *pict)
s->mb_num_left = 0;
if (HAVE_THREADS && (s->avctx->active_thread_type & FF_THREAD_FRAME))
- ff_thread_report_progress(&s->cur_pic.ptr->tf, INT_MAX, 0);
+ ff_thread_progress_report(&s->cur_pic.ptr->progress, INT_MAX);
if (s->pict_type == AV_PICTURE_TYPE_B) {
if ((ret = av_frame_ref(pict, s->cur_pic.ptr->f)) < 0)
@@ -1810,7 +1810,7 @@ int ff_rv34_decode_frame(AVCodecContext *avctx, AVFrame *pict,
ff_er_frame_end(&s->er, NULL);
ff_mpv_frame_end(s);
s->mb_num_left = 0;
- ff_thread_report_progress(&s->cur_pic.ptr->tf, INT_MAX, 0);
+ ff_thread_progress_report(&s->cur_pic.ptr->progress, INT_MAX);
return AVERROR_INVALIDDATA;
}
}
--
2.40.1
* [FFmpeg-devel] [PATCH v2 58/71] avcodec/mpegpicture: Avoid loop and branch when setting motion_val
@ 2024-05-11 20:51 ` Andreas Rheinhardt
From: Andreas Rheinhardt @ 2024-05-11 20:51 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpegpicture.c | 11 ++++-------
1 file changed, 4 insertions(+), 7 deletions(-)
diff --git a/libavcodec/mpegpicture.c b/libavcodec/mpegpicture.c
index ea5d54c670..9308fce97c 100644
--- a/libavcodec/mpegpicture.c
+++ b/libavcodec/mpegpicture.c
@@ -223,6 +223,7 @@ static int alloc_picture_tables(BufferPoolContext *pools, MPVPicture *pic,
for (int i = 0; i < 2; i++) {
GET_BUFFER(ref_index,, [i]);
GET_BUFFER(motion_val, _base, [i]);
+ pic->motion_val[i] = pic->motion_val_base[i] + 4;
}
}
#undef GET_BUFFER
@@ -231,6 +232,9 @@ static int alloc_picture_tables(BufferPoolContext *pools, MPVPicture *pic,
pic->mb_height = mb_height;
pic->mb_stride = pools->alloc_mb_stride;
+ pic->qscale_table = pic->qscale_table_base + 2 * pic->mb_stride + 1;
+ pic->mb_type = pic->mb_type_base + 2 * pic->mb_stride + 1;
+
return 0;
}
@@ -250,13 +254,6 @@ int ff_mpv_alloc_pic_accessories(AVCodecContext *avctx, MPVWorkPicture *wpic,
if (ret < 0)
goto fail;
- pic->qscale_table = pic->qscale_table_base + 2 * pic->mb_stride + 1;
- pic->mb_type = pic->mb_type_base + 2 * pic->mb_stride + 1;
-
- if (pic->motion_val_base[0]) {
- for (int i = 0; i < 2; i++)
- pic->motion_val[i] = pic->motion_val_base[i] + 4;
- }
set_workpic_from_pic(wpic, pic);
return 0;
--
2.40.1
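The patch above moves the `motion_val = motion_val_base + 4` setup next to the allocation, making the base/derived-pointer pattern unconditional. In isolation the pattern looks like this — a minimal sketch with hypothetical names (`MVTable`, `mv_table_alloc`), not FFmpeg's actual allocator:

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical sketch of the base/derived pointer split used for
 * motion_val: the allocation owns a small guard area, and the working
 * pointer starts past it, so code may index a few entries "before"
 * position 0 without leaving the allocation. */
typedef struct {
    int16_t (*motion_val_base)[2]; /* owns the allocation           */
    int16_t (*motion_val)[2];      /* base + guard, used for access */
} MVTable;

static int mv_table_alloc(MVTable *t, size_t entries, size_t guard)
{
    t->motion_val_base = calloc(entries + guard, sizeof(*t->motion_val_base));
    if (!t->motion_val_base)
        return -1;
    t->motion_val = t->motion_val_base + guard; /* guard is 4 in the patch */
    return 0;
}

static void mv_table_free(MVTable *t)
{
    free(t->motion_val_base);
    t->motion_val_base = t->motion_val = NULL;
}
```

Setting the derived pointer at allocation time means later code never has to re-check whether the base was allocated — which is exactly the loop and branch the patch removes.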
* [FFmpeg-devel] [PATCH v2 59/71] avcodec/mpegpicture: Use union for b_scratchpad and rd_scratchpad
@ 2024-05-11 20:51 ` Andreas Rheinhardt
From: Andreas Rheinhardt @ 2024-05-11 20:51 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
These pointers point to the same buffers, so one can just
use a union for them and avoid synchronising one of them.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpegpicture.c | 1 -
libavcodec/mpegpicture.h | 6 ++++--
libavcodec/mpegvideo.c | 1 -
3 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/libavcodec/mpegpicture.c b/libavcodec/mpegpicture.c
index 9308fce97c..43d35934c8 100644
--- a/libavcodec/mpegpicture.c
+++ b/libavcodec/mpegpicture.c
@@ -175,7 +175,6 @@ int ff_mpeg_framesize_alloc(AVCodecContext *avctx, MotionEstContext *me,
sc->linesize = linesizeabs;
me->temp = me->scratchpad;
- sc->rd_scratchpad = me->scratchpad;
sc->b_scratchpad = me->scratchpad;
sc->obmc_scratchpad = me->scratchpad + 16;
diff --git a/libavcodec/mpegpicture.h b/libavcodec/mpegpicture.h
index f9633e11db..ddb3ac5a2b 100644
--- a/libavcodec/mpegpicture.h
+++ b/libavcodec/mpegpicture.h
@@ -33,9 +33,11 @@
typedef struct ScratchpadContext {
uint8_t *edge_emu_buffer; ///< temporary buffer for if MVs point to out-of-frame data
- uint8_t *rd_scratchpad; ///< scratchpad for rate distortion mb decision
uint8_t *obmc_scratchpad;
- uint8_t *b_scratchpad; ///< scratchpad used for writing into write only buffers
+ union {
+ uint8_t *b_scratchpad; ///< scratchpad used for writing into write only buffers
+ uint8_t *rd_scratchpad; ///< scratchpad for rate distortion mb decision
+ };
int linesize; ///< linesize that the buffers in this context have been allocated for
} ScratchpadContext;
diff --git a/libavcodec/mpegvideo.c b/libavcodec/mpegvideo.c
index 42b4d7f395..36947c6e31 100644
--- a/libavcodec/mpegvideo.c
+++ b/libavcodec/mpegvideo.c
@@ -440,7 +440,6 @@ static void free_duplicate_context(MpegEncContext *s)
av_freep(&s->sc.edge_emu_buffer);
av_freep(&s->me.scratchpad);
s->me.temp =
- s->sc.rd_scratchpad =
s->sc.b_scratchpad =
s->sc.obmc_scratchpad = NULL;
s->sc.linesize = 0;
--
2.40.1
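The aliasing trick can be demonstrated standalone. Anonymous struct/union members are standard C since C11; the struct below is a reduced stand-in for the real `ScratchpadContext`, not its actual definition:

```c
#include <stdint.h>

/* Two names for the same scratch pointer: because the union members
 * share storage, assigning through either one updates both, so only a
 * single value ever needs to be kept in sync. */
typedef struct {
    uint8_t *edge_emu_buffer;
    union {
        uint8_t *b_scratchpad;  /* "write-only buffer" uses          */
        uint8_t *rd_scratchpad; /* rate-distortion mb decision uses  */
    };
    int linesize;
} ScratchSketch;
```

Because the members occupy the same offset, the `s->sc.rd_scratchpad = ...` assignment removed from `free_duplicate_context()` becomes redundant rather than missing.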
* [FFmpeg-devel] [PATCH v2 60/71] avcodec/mpegpicture: Avoid MotionEstContext in ff_mpeg_framesize_alloc()
@ 2024-05-11 20:51 ` Andreas Rheinhardt
From: Andreas Rheinhardt @ 2024-05-11 20:51 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
Only set the ScratchpadContext and let the users that need it
(i.e. encoders) set the MotionEstContext stuff themselves.
Also add an explicit pointer to ScratchpadContext to point
to the allocated buffer so that none of the other scratchpad
pointers is singled out as being used for the allocations.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpegpicture.c | 18 +++++++-----------
libavcodec/mpegpicture.h | 8 ++++----
libavcodec/mpegvideo.c | 10 +++-------
libavcodec/mpegvideo_dec.c | 5 ++---
libavcodec/mpegvideo_enc.c | 7 +++++--
libavcodec/svq1enc.c | 6 +++---
6 files changed, 24 insertions(+), 30 deletions(-)
diff --git a/libavcodec/mpegpicture.c b/libavcodec/mpegpicture.c
index 43d35934c8..cde060aa1f 100644
--- a/libavcodec/mpegpicture.c
+++ b/libavcodec/mpegpicture.c
@@ -25,7 +25,6 @@
#include "libavutil/imgutils.h"
#include "avcodec.h"
-#include "motion_est.h"
#include "mpegpicture.h"
#include "refstruct.h"
@@ -136,8 +135,8 @@ void ff_mpv_workpic_from_pic(MPVWorkPicture *wpic, MPVPicture *pic)
set_workpic_from_pic(wpic, pic);
}
-int ff_mpeg_framesize_alloc(AVCodecContext *avctx, MotionEstContext *me,
- ScratchpadContext *sc, int linesize)
+int ff_mpv_framesize_alloc(AVCodecContext *avctx,
+ ScratchpadContext *sc, int linesize)
{
# define EMU_EDGE_HEIGHT (4 * 70)
int linesizeabs = FFABS(linesize);
@@ -158,7 +157,7 @@ int ff_mpeg_framesize_alloc(AVCodecContext *avctx, MotionEstContext *me,
return AVERROR(ENOMEM);
av_freep(&sc->edge_emu_buffer);
- av_freep(&me->scratchpad);
+ av_freep(&sc->scratchpad_buf);
// edge emu needs blocksize + filter length - 1
// (= 17x17 for halfpel / 21x21 for H.264)
@@ -167,16 +166,14 @@ int ff_mpeg_framesize_alloc(AVCodecContext *avctx, MotionEstContext *me,
// linesize * interlaced * MBsize
// we also use this buffer for encoding in encode_mb_internal() needig an additional 32 lines
if (!FF_ALLOCZ_TYPED_ARRAY(sc->edge_emu_buffer, alloc_size * EMU_EDGE_HEIGHT) ||
- !FF_ALLOCZ_TYPED_ARRAY(me->scratchpad, alloc_size * 4 * 16 * 2)) {
+ !FF_ALLOCZ_TYPED_ARRAY(sc->scratchpad_buf, alloc_size * 4 * 16 * 2)) {
sc->linesize = 0;
av_freep(&sc->edge_emu_buffer);
return AVERROR(ENOMEM);
}
sc->linesize = linesizeabs;
- me->temp = me->scratchpad;
- sc->b_scratchpad = me->scratchpad;
- sc->obmc_scratchpad = me->scratchpad + 16;
+ sc->obmc_scratchpad = sc->scratchpad_buf + 16;
return 0;
}
@@ -238,14 +235,13 @@ static int alloc_picture_tables(BufferPoolContext *pools, MPVPicture *pic,
}
int ff_mpv_alloc_pic_accessories(AVCodecContext *avctx, MPVWorkPicture *wpic,
- MotionEstContext *me, ScratchpadContext *sc,
+ ScratchpadContext *sc,
BufferPoolContext *pools, int mb_height)
{
MPVPicture *pic = wpic->ptr;
int ret;
- ret = ff_mpeg_framesize_alloc(avctx, me, sc,
- pic->f->linesize[0]);
+ ret = ff_mpv_framesize_alloc(avctx, sc, pic->f->linesize[0]);
if (ret < 0)
goto fail;
diff --git a/libavcodec/mpegpicture.h b/libavcodec/mpegpicture.h
index ddb3ac5a2b..86504fe8ca 100644
--- a/libavcodec/mpegpicture.h
+++ b/libavcodec/mpegpicture.h
@@ -25,7 +25,6 @@
#include <stdint.h>
#include "avcodec.h"
-#include "motion_est.h"
#include "threadprogress.h"
#define MPV_MAX_PLANES 3
@@ -35,6 +34,7 @@ typedef struct ScratchpadContext {
uint8_t *edge_emu_buffer; ///< temporary buffer for if MVs point to out-of-frame data
uint8_t *obmc_scratchpad;
union {
+ uint8_t *scratchpad_buf; ///< the other *_scratchpad point into this buffer
uint8_t *b_scratchpad; ///< scratchpad used for writing into write only buffers
uint8_t *rd_scratchpad; ///< scratchpad for rate distortion mb decision
};
@@ -121,7 +121,7 @@ struct FFRefStructPool *ff_mpv_alloc_pic_pool(int init_progress);
* and set the MPVWorkPicture's fields.
*/
int ff_mpv_alloc_pic_accessories(AVCodecContext *avctx, MPVWorkPicture *pic,
- MotionEstContext *me, ScratchpadContext *sc,
+ ScratchpadContext *sc,
BufferPoolContext *pools, int mb_height);
/**
@@ -133,8 +133,8 @@ int ff_mpv_alloc_pic_accessories(AVCodecContext *avctx, MPVWorkPicture *pic,
int ff_mpv_pic_check_linesize(void *logctx, const struct AVFrame *f,
ptrdiff_t *linesizep, ptrdiff_t *uvlinesizep);
-int ff_mpeg_framesize_alloc(AVCodecContext *avctx, MotionEstContext *me,
- ScratchpadContext *sc, int linesize);
+int ff_mpv_framesize_alloc(AVCodecContext *avctx,
+ ScratchpadContext *sc, int linesize);
void ff_mpv_unref_picture(MPVWorkPicture *pic);
void ff_mpv_workpic_from_pic(MPVWorkPicture *wpic, MPVPicture *pic);
diff --git a/libavcodec/mpegvideo.c b/libavcodec/mpegvideo.c
index 36947c6e31..89d19a743a 100644
--- a/libavcodec/mpegvideo.c
+++ b/libavcodec/mpegvideo.c
@@ -438,9 +438,8 @@ static void free_duplicate_context(MpegEncContext *s)
return;
av_freep(&s->sc.edge_emu_buffer);
- av_freep(&s->me.scratchpad);
- s->me.temp =
- s->sc.b_scratchpad =
+ av_freep(&s->sc.scratchpad_buf);
+ s->me.temp = s->me.scratchpad =
s->sc.obmc_scratchpad = NULL;
s->sc.linesize = 0;
@@ -465,8 +464,6 @@ static void backup_duplicate_context(MpegEncContext *bak, MpegEncContext *src)
{
#define COPY(a) bak->a = src->a
COPY(sc);
- COPY(me.scratchpad);
- COPY(me.temp);
COPY(me.map);
COPY(me.score_map);
COPY(blocks);
@@ -500,8 +497,7 @@ int ff_update_duplicate_context(MpegEncContext *dst, const MpegEncContext *src)
// exchange uv
FFSWAP(void *, dst->pblocks[4], dst->pblocks[5]);
}
- ret = ff_mpeg_framesize_alloc(dst->avctx, &dst->me,
- &dst->sc, dst->linesize);
+ ret = ff_mpv_framesize_alloc(dst->avctx, &dst->sc, dst->linesize);
if (ret < 0) {
av_log(dst->avctx, AV_LOG_ERROR, "failed to allocate context "
"scratch buffers.\n");
diff --git a/libavcodec/mpegvideo_dec.c b/libavcodec/mpegvideo_dec.c
index b7f72ad460..9d2b7671e3 100644
--- a/libavcodec/mpegvideo_dec.c
+++ b/libavcodec/mpegvideo_dec.c
@@ -155,8 +155,7 @@ int ff_mpeg_update_thread_context(AVCodecContext *dst,
}
// linesize-dependent scratch buffer allocation
- ret = ff_mpeg_framesize_alloc(s->avctx, &s->me,
- &s->sc, s1->linesize);
+ ret = ff_mpv_framesize_alloc(s->avctx, &s->sc, s1->linesize);
if (ret < 0) {
av_log(s->avctx, AV_LOG_ERROR, "Failed to allocate context "
"scratch buffers.\n");
@@ -264,7 +263,7 @@ static int alloc_picture(MpegEncContext *s, MPVWorkPicture *dst, int reference)
av_assert1(s->mb_height == s->buffer_pools.alloc_mb_height ||
FFALIGN(s->mb_height, 2) == s->buffer_pools.alloc_mb_height);
av_assert1(s->mb_stride == s->buffer_pools.alloc_mb_stride);
- ret = ff_mpv_alloc_pic_accessories(s->avctx, dst, &s->me, &s->sc,
+ ret = ff_mpv_alloc_pic_accessories(s->avctx, dst, &s->sc,
&s->buffer_pools, s->mb_height);
if (ret < 0)
goto fail;
diff --git a/libavcodec/mpegvideo_enc.c b/libavcodec/mpegvideo_enc.c
index 85ed52d9ad..aac1fad535 100644
--- a/libavcodec/mpegvideo_enc.c
+++ b/libavcodec/mpegvideo_enc.c
@@ -1663,12 +1663,13 @@ static int select_input_picture(MpegEncContext *s)
av_assert1(s->mb_width == s->buffer_pools.alloc_mb_width);
av_assert1(s->mb_height == s->buffer_pools.alloc_mb_height);
av_assert1(s->mb_stride == s->buffer_pools.alloc_mb_stride);
- ret = ff_mpv_alloc_pic_accessories(s->avctx, &s->cur_pic, &s->me,
+ ret = ff_mpv_alloc_pic_accessories(s->avctx, &s->cur_pic,
&s->sc, &s->buffer_pools, s->mb_height);
if (ret < 0) {
ff_mpv_unref_picture(&s->cur_pic);
return ret;
}
+ s->me.temp = s->me.scratchpad = s->sc.scratchpad_buf;
s->picture_number = s->cur_pic.ptr->display_picture_number;
}
@@ -3616,9 +3617,11 @@ static int encode_picture(MpegEncContext *s)
s->mb_intra=0; //for the rate distortion & bit compare functions
for(i=1; i<context_count; i++){
- ret = ff_update_duplicate_context(s->thread_context[i], s);
+ MpegEncContext *const slice = s->thread_context[i];
+ ret = ff_update_duplicate_context(slice, s);
if (ret < 0)
return ret;
+ slice->me.temp = slice->me.scratchpad = slice->sc.scratchpad_buf;
}
/* Estimate motion for every MB */
diff --git a/libavcodec/svq1enc.c b/libavcodec/svq1enc.c
index 9c9be8c6b3..35413b8afd 100644
--- a/libavcodec/svq1enc.c
+++ b/libavcodec/svq1enc.c
@@ -543,15 +543,15 @@ static av_cold int svq1_encode_end(AVCodecContext *avctx)
s->rd_total / (double)(avctx->width * avctx->height *
avctx->frame_num));
- s->m.mb_type = NULL;
- ff_mpv_common_end(&s->m);
-
av_freep(&s->m.me.scratchpad);
av_freep(&s->m.me.map);
av_freep(&s->mb_type);
av_freep(&s->dummy);
av_freep(&s->scratchbuf);
+ s->m.mb_type = NULL;
+ ff_mpv_common_end(&s->m);
+
for (i = 0; i < 3; i++) {
av_freep(&s->motion_val8[i]);
av_freep(&s->motion_val16[i]);
--
2.40.1
* [FFmpeg-devel] [PATCH v2 61/71] avcodec/mpegvideo_enc: Unify initializing PutBitContexts
@ 2024-05-11 20:51 ` Andreas Rheinhardt
From: Andreas Rheinhardt @ 2024-05-11 20:51 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
This also rids us of the requirement to preserve the PutBitContext
in ff_update_duplicate_context().
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpegvideo.c | 1 -
libavcodec/mpegvideo_enc.c | 42 +++++++++++++++++---------------------
2 files changed, 19 insertions(+), 24 deletions(-)
diff --git a/libavcodec/mpegvideo.c b/libavcodec/mpegvideo.c
index 89d19a743a..93df8a315d 100644
--- a/libavcodec/mpegvideo.c
+++ b/libavcodec/mpegvideo.c
@@ -471,7 +471,6 @@ static void backup_duplicate_context(MpegEncContext *bak, MpegEncContext *src)
COPY(start_mb_y);
COPY(end_mb_y);
COPY(me.map_generation);
- COPY(pb);
COPY(dct_error_sum);
COPY(dct_count[0]);
COPY(dct_count[1]);
diff --git a/libavcodec/mpegvideo_enc.c b/libavcodec/mpegvideo_enc.c
index aac1fad535..195d1e3465 100644
--- a/libavcodec/mpegvideo_enc.c
+++ b/libavcodec/mpegvideo_enc.c
@@ -84,7 +84,7 @@
#define QMAT_SHIFT_MMX 16
#define QMAT_SHIFT 21
-static int encode_picture(MpegEncContext *s);
+static int encode_picture(MpegEncContext *s, const AVPacket *pkt);
static int dct_quantize_refine(MpegEncContext *s, int16_t *block, int16_t *weight, int16_t *orig, int n, int qscale);
static int sse_mb(MpegEncContext *s);
static void denoise_dct_c(MpegEncContext *s, int16_t *block);
@@ -1669,7 +1669,6 @@ static int select_input_picture(MpegEncContext *s)
ff_mpv_unref_picture(&s->cur_pic);
return ret;
}
- s->me.temp = s->me.scratchpad = s->sc.scratchpad_buf;
s->picture_number = s->cur_pic.ptr->display_picture_number;
}
@@ -1755,7 +1754,7 @@ int ff_mpv_encode_picture(AVCodecContext *avctx, AVPacket *pkt,
const AVFrame *pic_arg, int *got_packet)
{
MpegEncContext *s = avctx->priv_data;
- int i, stuffing_count, ret;
+ int stuffing_count, ret;
int context_count = s->slice_context_count;
ff_mpv_unref_picture(&s->cur_pic);
@@ -1791,21 +1790,11 @@ int ff_mpv_encode_picture(AVCodecContext *avctx, AVPacket *pkt,
s->prev_mb_info = s->last_mb_info = s->mb_info_size = 0;
}
- for (i = 0; i < context_count; i++) {
- int start_y = s->thread_context[i]->start_mb_y;
- int end_y = s->thread_context[i]-> end_mb_y;
- int h = s->mb_height;
- uint8_t *start = pkt->data + (size_t)(((int64_t) pkt->size) * start_y / h);
- uint8_t *end = pkt->data + (size_t)(((int64_t) pkt->size) * end_y / h);
-
- init_put_bits(&s->thread_context[i]->pb, start, end - start);
- }
-
s->pict_type = s->new_pic->pict_type;
//emms_c();
frame_start(s);
vbv_retry:
- ret = encode_picture(s);
+ ret = encode_picture(s, pkt);
if (growing_buffer) {
av_assert0(s->pb.buf == avctx->internal->byte_buffer);
pkt->data = s->pb.buf;
@@ -1849,10 +1838,6 @@ vbv_retry:
s->time_base = s->last_time_base;
s->last_non_b_time = s->time - s->pp_time;
}
- for (i = 0; i < context_count; i++) {
- PutBitContext *pb = &s->thread_context[i]->pb;
- init_put_bits(pb, pb->buf, pb->buf_end - pb->buf);
- }
s->vbv_ignore_qmax = 1;
av_log(avctx, AV_LOG_VERBOSE, "reencoding frame due to VBV\n");
goto vbv_retry;
@@ -3564,7 +3549,7 @@ static void set_frame_distances(MpegEncContext * s){
}
}
-static int encode_picture(MpegEncContext *s)
+static int encode_picture(MpegEncContext *s, const AVPacket *pkt)
{
int i, ret;
int bits;
@@ -3616,12 +3601,23 @@ static int encode_picture(MpegEncContext *s)
return -1;
s->mb_intra=0; //for the rate distortion & bit compare functions
- for(i=1; i<context_count; i++){
+ for (int i = 0; i < context_count; i++) {
MpegEncContext *const slice = s->thread_context[i];
- ret = ff_update_duplicate_context(slice, s);
- if (ret < 0)
- return ret;
+ uint8_t *start, *end;
+ int h;
+
+ if (i) {
+ ret = ff_update_duplicate_context(slice, s);
+ if (ret < 0)
+ return ret;
+ }
slice->me.temp = slice->me.scratchpad = slice->sc.scratchpad_buf;
+
+ h = s->mb_height;
+ start = pkt->data + (size_t)(((int64_t) pkt->size) * slice->start_mb_y / h);
+ end = pkt->data + (size_t)(((int64_t) pkt->size) * slice-> end_mb_y / h);
+
+ init_put_bits(&s->thread_context[i]->pb, start, end - start);
}
/* Estimate motion for every MB */
--
2.40.1
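Hoisting the `init_put_bits()` setup into `encode_picture()` keeps the per-slice buffer partitioning in one place. The arithmetic itself — each slice context gets a span of the packet proportional to its share of macroblock rows — can be sketched separately (a hypothetical helper, not FFmpeg API):

```c
#include <stdint.h>
#include <stddef.h>

/* Compute the byte span of data[0..size) assigned to the slice
 * covering macroblock rows [start_y, end_y) out of mb_height total.
 * The multiply is widened to 64 bits first, as in the patch, so
 * size * start_y cannot overflow int for large packets. */
static void slice_span(uint8_t *data, int size, int start_y, int end_y,
                       int mb_height, uint8_t **start, uint8_t **end)
{
    *start = data + (size_t)((int64_t)size * start_y / mb_height);
    *end   = data + (size_t)((int64_t)size * end_y   / mb_height);
}
```

Because one slice's `end_y` is the next slice's `start_y`, consecutive spans are contiguous and non-overlapping, so the slice outputs can later be concatenated without gaps.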
* [FFmpeg-devel] [PATCH v2 62/71] avcodec/mpeg12enc: Simplify writing startcodes
@ 2024-05-11 20:51 ` Andreas Rheinhardt
From: Andreas Rheinhardt @ 2024-05-11 20:51 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpeg12enc.c | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/libavcodec/mpeg12enc.c b/libavcodec/mpeg12enc.c
index 304cfb9046..ba56f0c37a 100644
--- a/libavcodec/mpeg12enc.c
+++ b/libavcodec/mpeg12enc.c
@@ -271,11 +271,10 @@ static av_cold int encode_init(AVCodecContext *avctx)
return 0;
}
-static void put_header(MpegEncContext *s, int header)
+static void put_header(MpegEncContext *s, uint32_t header)
{
align_put_bits(&s->pb);
- put_bits(&s->pb, 16, header >> 16);
- put_sbits(&s->pb, 16, header);
+ put_bits32(&s->pb, header);
}
/* put sequence header if needed */
--
2.40.1
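The change writes a startcode with one 32-bit emission instead of two 16-bit halves. On a byte-aligned output (which `align_put_bits()` guarantees here) the two forms are byte-for-byte identical, which a standalone big-endian writer makes easy to check — a sketch, not FFmpeg's `PutBitContext`:

```c
#include <stdint.h>

/* Store a 32-bit value big-endian -- one call replacing two 16-bit
 * stores of the high and low halves, as in the put_header() change. */
static void put_be32(uint8_t *buf, uint32_t v)
{
    buf[0] = v >> 24;
    buf[1] = v >> 16;
    buf[2] = v >>  8;
    buf[3] = v;
}

static void put_be16(uint8_t *buf, uint16_t v)
{
    buf[0] = v >> 8;
    buf[1] = v;
}
```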
* [FFmpeg-devel] [PATCH v2 63/71] avcodec/mpegvideo_dec, rv34: Simplify check for "does pic exist?"
@ 2024-05-11 20:51 ` Andreas Rheinhardt
From: Andreas Rheinhardt @ 2024-05-11 20:51 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
The days in which an MPVPicture* could be set while the corresponding frame
was blank are over; this allows some checks to be simplified.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpegvideo_dec.c | 10 +++++-----
libavcodec/rv34.c | 3 +--
2 files changed, 6 insertions(+), 7 deletions(-)
diff --git a/libavcodec/mpegvideo_dec.c b/libavcodec/mpegvideo_dec.c
index 9d2b7671e3..f840dc9ffc 100644
--- a/libavcodec/mpegvideo_dec.c
+++ b/libavcodec/mpegvideo_dec.c
@@ -312,9 +312,10 @@ int ff_mpv_alloc_dummy_frames(MpegEncContext *s)
AVCodecContext *avctx = s->avctx;
int ret;
- if ((!s->last_pic.ptr || !s->last_pic.ptr->f->buf[0]) &&
- (s->pict_type != AV_PICTURE_TYPE_I)) {
- if (s->pict_type == AV_PICTURE_TYPE_B && s->next_pic.ptr && s->next_pic.ptr->f->buf[0])
+ av_assert1(!s->last_pic.ptr || s->last_pic.ptr->f->buf[0]);
+ av_assert1(!s->next_pic.ptr || s->next_pic.ptr->f->buf[0]);
+ if (!s->last_pic.ptr && s->pict_type != AV_PICTURE_TYPE_I) {
+ if (s->pict_type == AV_PICTURE_TYPE_B && s->next_pic.ptr)
av_log(avctx, AV_LOG_DEBUG,
"allocating dummy last picture for B frame\n");
else if (s->codec_id != AV_CODEC_ID_H261 /* H.261 has no keyframes */ &&
@@ -332,8 +333,7 @@ int ff_mpv_alloc_dummy_frames(MpegEncContext *s)
color_frame(s->last_pic.ptr->f, luma_val);
}
}
- if ((!s->next_pic.ptr || !s->next_pic.ptr->f->buf[0]) &&
- s->pict_type == AV_PICTURE_TYPE_B) {
+ if (!s->next_pic.ptr && s->pict_type == AV_PICTURE_TYPE_B) {
/* Allocate a dummy frame */
ret = alloc_dummy_frame(s, &s->next_pic);
if (ret < 0)
diff --git a/libavcodec/rv34.c b/libavcodec/rv34.c
index d3816df059..f667023266 100644
--- a/libavcodec/rv34.c
+++ b/libavcodec/rv34.c
@@ -1655,8 +1655,7 @@ int ff_rv34_decode_frame(AVCodecContext *avctx, AVFrame *pict,
av_log(avctx, AV_LOG_ERROR, "First slice header is incorrect\n");
return AVERROR_INVALIDDATA;
}
- if ((!s->last_pic.ptr || !s->last_pic.ptr->f->data[0]) &&
- si.type == AV_PICTURE_TYPE_B) {
+ if (!s->last_pic.ptr && si.type == AV_PICTURE_TYPE_B) {
av_log(avctx, AV_LOG_ERROR, "Invalid decoder state: B-frame without "
"reference data.\n");
faulty_b = 1;
--
2.40.1
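The simplification rests on an invariant — a set picture pointer always has an allocated frame — which the new `av_assert1()` calls document. In miniature, with hypothetical types and plain `assert()` standing in for `av_assert1`:

```c
#include <assert.h>
#include <stddef.h>

typedef struct { void *buf; } FrameSketch;
typedef struct { FrameSketch *f; } PicSketch;

/* Under the invariant "p set => p->f->buf allocated", the old test
 * (!p || !p->f->buf) collapses to just !p. */
static int pic_missing(const PicSketch *p)
{
    assert(!p || p->f->buf);   /* the invariant the patch asserts */
    return !p;
}
```

Asserting the invariant (rather than branching on it) keeps the cheap check in debug builds while the release-mode code path only tests the pointer.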
* [FFmpeg-devel] [PATCH v2 64/71] avcodec/mpegvideo_dec: Don't sync encoder-only coded_picture_number
@ 2024-05-11 20:51 ` Andreas Rheinhardt
From: Andreas Rheinhardt @ 2024-05-11 20:51 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpegvideo.c | 1 -
libavcodec/mpegvideo_dec.c | 1 -
2 files changed, 2 deletions(-)
diff --git a/libavcodec/mpegvideo.c b/libavcodec/mpegvideo.c
index 93df8a315d..8a8ff2fbd9 100644
--- a/libavcodec/mpegvideo.c
+++ b/libavcodec/mpegvideo.c
@@ -520,7 +520,6 @@ void ff_mpv_common_defaults(MpegEncContext *s)
s->progressive_sequence = 1;
s->picture_structure = PICT_FRAME;
- s->coded_picture_number = 0;
s->picture_number = 0;
s->f_code = 1;
diff --git a/libavcodec/mpegvideo_dec.c b/libavcodec/mpegvideo_dec.c
index f840dc9ffc..0a50cfac5b 100644
--- a/libavcodec/mpegvideo_dec.c
+++ b/libavcodec/mpegvideo_dec.c
@@ -114,7 +114,6 @@ int ff_mpeg_update_thread_context(AVCodecContext *dst,
s->quarter_sample = s1->quarter_sample;
- s->coded_picture_number = s1->coded_picture_number;
s->picture_number = s1->picture_number;
ff_mpv_replace_picture(&s->cur_pic, &s1->cur_pic);
--
2.40.1
* [FFmpeg-devel] [PATCH v2 65/71] avcodec/mpeg12dec: Pass Mpeg1Context* in mpeg_field_start()
@ 2024-05-11 20:51 ` Andreas Rheinhardt
From: Andreas Rheinhardt @ 2024-05-11 20:51 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
Avoids a cast.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpeg12dec.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/libavcodec/mpeg12dec.c b/libavcodec/mpeg12dec.c
index 0d5540fd2f..52f98986f6 100644
--- a/libavcodec/mpeg12dec.c
+++ b/libavcodec/mpeg12dec.c
@@ -1278,10 +1278,10 @@ static int mpeg_decode_picture_coding_extension(Mpeg1Context *s1)
return 0;
}
-static int mpeg_field_start(MpegEncContext *s, const uint8_t *buf, int buf_size)
+static int mpeg_field_start(Mpeg1Context *s1, const uint8_t *buf, int buf_size)
{
+ MpegEncContext *s = &s1->mpeg_enc_ctx;
AVCodecContext *avctx = s->avctx;
- Mpeg1Context *s1 = (Mpeg1Context *) s;
int ret;
if (!(avctx->flags2 & AV_CODEC_FLAG2_CHUNKS)) {
@@ -2460,7 +2460,7 @@ static int decode_chunks(AVCodecContext *avctx, AVFrame *picture,
if (s->first_slice) {
skip_frame = 0;
s->first_slice = 0;
- if ((ret = mpeg_field_start(s2, buf, buf_size)) < 0)
+ if ((ret = mpeg_field_start(s, buf, buf_size)) < 0)
return ret;
}
if (!s2->cur_pic.ptr) {
--
2.40.1
* [FFmpeg-devel] [PATCH v2 66/71] avcodec/mpeg12dec: Don't initialize inter_scantable
@ 2024-05-11 20:51 ` Andreas Rheinhardt
From: Andreas Rheinhardt @ 2024-05-11 20:51 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
MPEG-1/2 only needs one scantable and therefore all code
already uses the intra one. So stop initializing
the inter one altogether.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpeg12dec.c | 11 ++++-------
1 file changed, 4 insertions(+), 7 deletions(-)
diff --git a/libavcodec/mpeg12dec.c b/libavcodec/mpeg12dec.c
index 52f98986f6..55e3a31e95 100644
--- a/libavcodec/mpeg12dec.c
+++ b/libavcodec/mpeg12dec.c
@@ -1256,13 +1256,10 @@ static int mpeg_decode_picture_coding_extension(Mpeg1Context *s1)
s->chroma_420_type = get_bits1(&s->gb);
s->progressive_frame = get_bits1(&s->gb);
- if (s->alternate_scan) {
- ff_init_scantable(s->idsp.idct_permutation, &s->inter_scantable, ff_alternate_vertical_scan);
- ff_init_scantable(s->idsp.idct_permutation, &s->intra_scantable, ff_alternate_vertical_scan);
- } else {
- ff_init_scantable(s->idsp.idct_permutation, &s->inter_scantable, ff_zigzag_direct);
- ff_init_scantable(s->idsp.idct_permutation, &s->intra_scantable, ff_zigzag_direct);
- }
+ // We only initialize intra_scantable, as both scantables always coincide
+ // and all code therefore only uses the intra one.
+ ff_init_scantable(s->idsp.idct_permutation, &s->intra_scantable,
+ s->alternate_scan ? ff_alternate_vertical_scan : ff_zigzag_direct);
/* composite display not parsed */
ff_dlog(s->avctx, "intra_dc_precision=%d\n", s->intra_dc_precision);
--
2.40.1
* [FFmpeg-devel] [PATCH v2 67/71] avcodec/mpegvideo: Remove pblocks
@ 2024-05-11 20:51 ` Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 68/71] avcodec/mpegvideo: Use enum for msmpeg4_version Andreas Rheinhardt
` (4 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:51 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
It was added in a579db0c4fe026d49c71d1ff64a2d1d07c152d68
due to XvMC, but today it is only used to swap U and V
for VCR2, an MPEG-2 variant with U and V swapped.
This can be done in a simpler fashion, namely by
swapping the U and V pointers of the corresponding
MPVWorkPictures.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpeg12dec.c | 20 ++++++++++++++++----
libavcodec/mpegvideo.c | 22 ++--------------------
libavcodec/mpegvideo.h | 1 -
3 files changed, 18 insertions(+), 25 deletions(-)
diff --git a/libavcodec/mpeg12dec.c b/libavcodec/mpeg12dec.c
index 55e3a31e95..139e825216 100644
--- a/libavcodec/mpeg12dec.c
+++ b/libavcodec/mpeg12dec.c
@@ -535,14 +535,14 @@ static int mpeg_decode_mb(MpegEncContext *s, int16_t block[12][64])
if (s->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
for (i = 0; i < mb_block_count; i++)
- if ((ret = mpeg2_decode_block_intra(s, *s->pblocks[i], i)) < 0)
+ if ((ret = mpeg2_decode_block_intra(s, s->block[i], i)) < 0)
return ret;
} else {
for (i = 0; i < 6; i++) {
ret = ff_mpeg1_decode_block_intra(&s->gb,
s->intra_matrix,
s->intra_scantable.permutated,
- s->last_dc, *s->pblocks[i],
+ s->last_dc, s->block[i],
i, s->qscale);
if (ret < 0) {
av_log(s->avctx, AV_LOG_ERROR, "ac-tex damaged at %d %d\n",
@@ -760,7 +760,7 @@ static int mpeg_decode_mb(MpegEncContext *s, int16_t block[12][64])
for (i = 0; i < mb_block_count; i++) {
if (cbp & (1 << 11)) {
- if ((ret = mpeg2_decode_block_non_intra(s, *s->pblocks[i], i)) < 0)
+ if ((ret = mpeg2_decode_block_non_intra(s, s->block[i], i)) < 0)
return ret;
} else {
s->block_last_index[i] = -1;
@@ -770,7 +770,7 @@ static int mpeg_decode_mb(MpegEncContext *s, int16_t block[12][64])
} else {
for (i = 0; i < 6; i++) {
if (cbp & 32) {
- if ((ret = mpeg1_decode_block_inter(s, *s->pblocks[i], i)) < 0)
+ if ((ret = mpeg1_decode_block_inter(s, s->block[i], i)) < 0)
return ret;
} else {
s->block_last_index[i] = -1;
@@ -1279,6 +1279,7 @@ static int mpeg_field_start(Mpeg1Context *s1, const uint8_t *buf, int buf_size)
{
MpegEncContext *s = &s1->mpeg_enc_ctx;
AVCodecContext *avctx = s->avctx;
+ int second_field = 0;
int ret;
if (!(avctx->flags2 & AV_CODEC_FLAG2_CHUNKS)) {
@@ -1362,6 +1363,7 @@ static int mpeg_field_start(Mpeg1Context *s1, const uint8_t *buf, int buf_size)
if (HAVE_THREADS && (avctx->active_thread_type & FF_THREAD_FRAME))
ff_thread_finish_setup(avctx);
} else { // second field
+ second_field = 1;
if (!s->cur_pic.ptr) {
av_log(s->avctx, AV_LOG_ERROR, "first field missing\n");
return AVERROR_INVALIDDATA;
@@ -1389,6 +1391,16 @@ static int mpeg_field_start(Mpeg1Context *s1, const uint8_t *buf, int buf_size)
if (avctx->hwaccel) {
if ((ret = FF_HW_CALL(avctx, start_frame, buf, buf_size)) < 0)
return ret;
+ } else if (s->codec_tag == MKTAG('V', 'C', 'R', '2')) {
+ // Exchange UV
+ FFSWAP(uint8_t*, s->cur_pic.data[1], s->cur_pic.data[2]);
+ FFSWAP(ptrdiff_t, s->cur_pic.linesize[1], s->cur_pic.linesize[2]);
+ if (!second_field) {
+ FFSWAP(uint8_t*, s->next_pic.data[1], s->next_pic.data[2]);
+ FFSWAP(ptrdiff_t, s->next_pic.linesize[1], s->next_pic.linesize[2]);
+ FFSWAP(uint8_t*, s->last_pic.data[1], s->last_pic.data[2]);
+ FFSWAP(ptrdiff_t, s->last_pic.linesize[1], s->last_pic.linesize[2]);
+ }
}
return 0;
diff --git a/libavcodec/mpegvideo.c b/libavcodec/mpegvideo.c
index 8a8ff2fbd9..c5ed4701d0 100644
--- a/libavcodec/mpegvideo.c
+++ b/libavcodec/mpegvideo.c
@@ -365,8 +365,6 @@ av_cold void ff_mpv_idct_init(MpegEncContext *s)
static int init_duplicate_context(MpegEncContext *s)
{
- int i;
-
if (s->encoding) {
s->me.map = av_mallocz(2 * ME_MAP_SIZE * sizeof(*s->me.map));
if (!s->me.map)
@@ -382,15 +380,6 @@ static int init_duplicate_context(MpegEncContext *s)
return AVERROR(ENOMEM);
s->block = s->blocks[0];
- for (i = 0; i < 12; i++) {
- s->pblocks[i] = &s->block[i];
- }
-
- if (s->avctx->codec_tag == AV_RL32("VCR2")) {
- // exchange uv
- FFSWAP(void *, s->pblocks[4], s->pblocks[5]);
- }
-
if (s->out_format == FMT_H263) {
int mb_height = s->msmpeg4_version == 6 /* VC-1 like */ ?
FFALIGN(s->mb_height, 2) : s->mb_height;
@@ -484,18 +473,12 @@ static void backup_duplicate_context(MpegEncContext *bak, MpegEncContext *src)
int ff_update_duplicate_context(MpegEncContext *dst, const MpegEncContext *src)
{
MpegEncContext bak;
- int i, ret;
+ int ret;
// FIXME copy only needed parts
backup_duplicate_context(&bak, dst);
memcpy(dst, src, sizeof(MpegEncContext));
backup_duplicate_context(dst, &bak);
- for (i = 0; i < 12; i++) {
- dst->pblocks[i] = &dst->block[i];
- }
- if (dst->avctx->codec_tag == AV_RL32("VCR2")) {
- // exchange uv
- FFSWAP(void *, dst->pblocks[4], dst->pblocks[5]);
- }
+
ret = ff_mpv_framesize_alloc(dst->avctx, &dst->sc, dst->linesize);
if (ret < 0) {
av_log(dst->avctx, AV_LOG_ERROR, "failed to allocate context "
@@ -682,7 +665,6 @@ static void clear_context(MpegEncContext *s)
s->dct_error_sum = NULL;
s->block = NULL;
s->blocks = NULL;
- memset(s->pblocks, 0, sizeof(s->pblocks));
s->ac_val_base = NULL;
s->ac_val[0] =
s->ac_val[1] =
diff --git a/libavcodec/mpegvideo.h b/libavcodec/mpegvideo.h
index 4339a145aa..5af74acf95 100644
--- a/libavcodec/mpegvideo.h
+++ b/libavcodec/mpegvideo.h
@@ -466,7 +466,6 @@ typedef struct MpegEncContext {
int rtp_payload_size;
uint8_t *ptr_lastgob;
- int16_t (*pblocks[12])[64];
int16_t (*block)[64]; ///< points to one of the following blocks
int16_t (*blocks)[12][64]; // for HQ mode we need to keep the best block
--
2.40.1
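The pointer-swap approach the patch adopts can be shown in isolation. A hedged sketch — `MyPicture` and `swap_uv` are simplified illustrations, not FFmpeg's actual `MPVWorkPicture`, though the `FFSWAP` macro matches the one in libavutil:

```c
#include <stdint.h>
#include <stddef.h>

/* Same shape as FFmpeg's FFSWAP from libavutil/macros.h. */
#define MY_FFSWAP(type, a, b) do { type SWAP_tmp = b; b = a; a = SWAP_tmp; } while (0)

/* Simplified picture: data[0] = Y, data[1] = U, data[2] = V. */
typedef struct {
    uint8_t  *data[3];
    ptrdiff_t linesize[3];
} MyPicture;

/* Exchange the chroma planes, as done for the VCR2 variant where
 * U and V are stored swapped relative to standard MPEG-2. */
static void swap_uv(MyPicture *pic)
{
    MY_FFSWAP(uint8_t *, pic->data[1], pic->data[2]);
    MY_FFSWAP(ptrdiff_t, pic->linesize[1], pic->linesize[2]);
}
```

Swapping the plane pointers (and their linesizes) once per frame replaces the per-block `pblocks` indirection entirely, which is why the array and its three setup sites can be deleted.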
* [FFmpeg-devel] [PATCH v2 68/71] avcodec/mpegvideo: Use enum for msmpeg4_version
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (65 preceding siblings ...)
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 67/71] avcodec/mpegvideo: Remove pblocks Andreas Rheinhardt
@ 2024-05-11 20:51 ` Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 69/71] avcodec/ituh263enc: Remove redundant check Andreas Rheinhardt
` (3 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:51 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
Improves readability.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/h263dec.c | 34 ++++++------
libavcodec/motion_est.c | 8 +--
libavcodec/mpegvideo.c | 6 +--
libavcodec/mpegvideo.h | 10 +++-
libavcodec/mpegvideo_enc.c | 28 ++++++----
libavcodec/mpv_reconstruct_mb_template.c | 2 +-
libavcodec/msmpeg4.c | 17 +++---
libavcodec/msmpeg4dec.c | 68 ++++++++++++------------
libavcodec/msmpeg4enc.c | 37 ++++++-------
libavcodec/vc1dec.c | 2 +-
10 files changed, 114 insertions(+), 98 deletions(-)
diff --git a/libavcodec/h263dec.c b/libavcodec/h263dec.c
index b9762be9c9..eee7978452 100644
--- a/libavcodec/h263dec.c
+++ b/libavcodec/h263dec.c
@@ -113,23 +113,23 @@ av_cold int ff_h263_decode_init(AVCodecContext *avctx)
break;
case AV_CODEC_ID_MSMPEG4V1:
s->h263_pred = 1;
- s->msmpeg4_version = 1;
+ s->msmpeg4_version = MSMP4_V1;
break;
case AV_CODEC_ID_MSMPEG4V2:
s->h263_pred = 1;
- s->msmpeg4_version = 2;
+ s->msmpeg4_version = MSMP4_V2;
break;
case AV_CODEC_ID_MSMPEG4V3:
s->h263_pred = 1;
- s->msmpeg4_version = 3;
+ s->msmpeg4_version = MSMP4_V3;
break;
case AV_CODEC_ID_WMV1:
s->h263_pred = 1;
- s->msmpeg4_version = 4;
+ s->msmpeg4_version = MSMP4_WMV1;
break;
case AV_CODEC_ID_WMV2:
s->h263_pred = 1;
- s->msmpeg4_version = 5;
+ s->msmpeg4_version = MSMP4_WMV2;
break;
case AV_CODEC_ID_H263I:
break;
@@ -227,7 +227,7 @@ static int decode_slice(MpegEncContext *s)
for (; s->mb_y < s->mb_height; s->mb_y++) {
/* per-row end of slice checks */
- if (s->msmpeg4_version) {
+ if (s->msmpeg4_version != MSMP4_UNUSED) {
if (s->resync_mb_y + s->slice_height == s->mb_y) {
ff_er_add_slice(&s->er, s->resync_mb_x, s->resync_mb_y,
s->mb_x - 1, s->mb_y, ER_MB_END);
@@ -236,7 +236,7 @@ static int decode_slice(MpegEncContext *s)
}
}
- if (s->msmpeg4_version == 1) {
+ if (s->msmpeg4_version == MSMP4_V1) {
s->last_dc[0] =
s->last_dc[1] =
s->last_dc[2] = 128;
@@ -375,12 +375,12 @@ static int decode_slice(MpegEncContext *s)
}
// handle formats which don't have unique end markers
- if (s->msmpeg4_version || (s->workaround_bugs & FF_BUG_NO_PADDING)) { // FIXME perhaps solve this more cleanly
+ if (s->msmpeg4_version != MSMP4_UNUSED || (s->workaround_bugs & FF_BUG_NO_PADDING)) { // FIXME perhaps solve this more cleanly
int left = get_bits_left(&s->gb);
int max_extra = 7;
/* no markers in M$ crap */
- if (s->msmpeg4_version && s->pict_type == AV_PICTURE_TYPE_I)
+ if (s->msmpeg4_version != MSMP4_UNUSED && s->pict_type == AV_PICTURE_TYPE_I)
max_extra += 17;
/* buggy padding but the frame should still end approximately at
@@ -474,10 +474,12 @@ retry:
return ret;
/* let's go :-) */
- if (CONFIG_WMV2_DECODER && s->msmpeg4_version == 5) {
+ if (CONFIG_WMV2_DECODER && s->msmpeg4_version == MSMP4_WMV2) {
ret = ff_wmv2_decode_picture_header(s);
- } else if (CONFIG_MSMPEG4DEC && s->msmpeg4_version) {
+#if CONFIG_MSMPEG4DEC
+ } else if (s->msmpeg4_version != MSMP4_UNUSED) {
ret = ff_msmpeg4_decode_picture_header(s);
+#endif
} else if (CONFIG_MPEG4_DECODER && avctx->codec_id == AV_CODEC_ID_MPEG4) {
ret = ff_mpeg4_decode_picture_header(avctx->priv_data, &s->gb, 0, 0);
s->skipped_last_frame = (ret == FRAME_SKIPPED);
@@ -583,13 +585,15 @@ retry:
/* the second part of the wmv2 header contains the MB skip bits which
* are stored in current_picture->mb_type which is not available before
* ff_mpv_frame_start() */
- if (CONFIG_WMV2_DECODER && s->msmpeg4_version == 5) {
+#if CONFIG_WMV2_DECODER
+ if (s->msmpeg4_version == MSMP4_WMV2) {
ret = ff_wmv2_decode_secondary_picture_header(s);
if (ret < 0)
return ret;
if (ret == 1)
goto frame_end;
}
+#endif
/* decode each macroblock */
s->mb_x = 0;
@@ -597,7 +601,7 @@ retry:
slice_ret = decode_slice(s);
while (s->mb_y < s->mb_height) {
- if (s->msmpeg4_version) {
+ if (s->msmpeg4_version != MSMP4_UNUSED) {
if (s->slice_height == 0 || s->mb_x != 0 || slice_ret < 0 ||
(s->mb_y % s->slice_height) != 0 || get_bits_left(&s->gb) < 0)
break;
@@ -609,14 +613,14 @@ retry:
s->er.error_occurred = 1;
}
- if (s->msmpeg4_version < 4 && s->h263_pred)
+ if (s->msmpeg4_version < MSMP4_WMV1 && s->h263_pred)
ff_mpeg4_clean_buffers(s);
if (decode_slice(s) < 0)
slice_ret = AVERROR_INVALIDDATA;
}
- if (s->msmpeg4_version && s->msmpeg4_version < 4 &&
+ if (s->msmpeg4_version != MSMP4_UNUSED && s->msmpeg4_version < MSMP4_WMV1 &&
s->pict_type == AV_PICTURE_TYPE_I)
if (!CONFIG_MSMPEG4DEC ||
ff_msmpeg4_decode_ext_header(s, buf_size) < 0)
diff --git a/libavcodec/motion_est.c b/libavcodec/motion_est.c
index fcef47a623..162472d693 100644
--- a/libavcodec/motion_est.c
+++ b/libavcodec/motion_est.c
@@ -1609,7 +1609,7 @@ int ff_get_best_fcode(MpegEncContext * s, const int16_t (*mv_table)[2], int type
int best_fcode=-1;
int best_score=-10000000;
- if(s->msmpeg4_version)
+ if (s->msmpeg4_version != MSMP4_UNUSED)
range= FFMIN(range, 16);
else if(s->codec_id == AV_CODEC_ID_MPEG2VIDEO && s->avctx->strict_std_compliance >= FF_COMPLIANCE_NORMAL)
range= FFMIN(range, 256);
@@ -1660,9 +1660,9 @@ void ff_fix_long_p_mvs(MpegEncContext * s, int type)
int y, range;
av_assert0(s->pict_type==AV_PICTURE_TYPE_P);
- range = (((s->out_format == FMT_MPEG1 || s->msmpeg4_version) ? 8 : 16) << f_code);
+ range = (((s->out_format == FMT_MPEG1 || s->msmpeg4_version != MSMP4_UNUSED) ? 8 : 16) << f_code);
- av_assert0(range <= 16 || !s->msmpeg4_version);
+ av_assert0(range <= 16 || s->msmpeg4_version == MSMP4_UNUSED);
av_assert0(range <=256 || !(s->codec_id == AV_CODEC_ID_MPEG2VIDEO && s->avctx->strict_std_compliance >= FF_COMPLIANCE_NORMAL));
if(c->avctx->me_range && range > c->avctx->me_range) range= c->avctx->me_range;
@@ -1709,7 +1709,7 @@ void ff_fix_long_mvs(MpegEncContext * s, uint8_t *field_select_table, int field_
int y, h_range, v_range;
// RAL: 8 in MPEG-1, 16 in MPEG-4
- int range = (((s->out_format == FMT_MPEG1 || s->msmpeg4_version) ? 8 : 16) << f_code);
+ int range = (((s->out_format == FMT_MPEG1 || s->msmpeg4_version != MSMP4_UNUSED) ? 8 : 16) << f_code);
if(c->avctx->me_range && range > c->avctx->me_range) range= c->avctx->me_range;
diff --git a/libavcodec/mpegvideo.c b/libavcodec/mpegvideo.c
index c5ed4701d0..4fe9350c40 100644
--- a/libavcodec/mpegvideo.c
+++ b/libavcodec/mpegvideo.c
@@ -381,7 +381,7 @@ static int init_duplicate_context(MpegEncContext *s)
s->block = s->blocks[0];
if (s->out_format == FMT_H263) {
- int mb_height = s->msmpeg4_version == 6 /* VC-1 like */ ?
+ int mb_height = s->msmpeg4_version == MSMP4_VC1 ?
FFALIGN(s->mb_height, 2) : s->mb_height;
int y_size = s->b8_stride * (2 * mb_height + 1);
int c_size = s->mb_stride * (mb_height + 1);
@@ -535,7 +535,7 @@ int ff_mpv_init_context_frame(MpegEncContext *s)
/* VC-1 can change from being progressive to interlaced on a per-frame
* basis. We therefore allocate certain buffers so big that they work
* in both instances. */
- mb_height = s->msmpeg4_version == 6 /* VC-1 like*/ ?
+ mb_height = s->msmpeg4_version == MSMP4_VC1 ?
FFALIGN(s->mb_height, 2) : s->mb_height;
s->mb_width = (s->width + 15) / 16;
@@ -602,7 +602,7 @@ int ff_mpv_init_context_frame(MpegEncContext *s)
}
}
- if (s->msmpeg4_version >= 3) {
+ if (s->msmpeg4_version >= MSMP4_V3) {
s->coded_block_base = av_mallocz(y_size);
if (!s->coded_block_base)
return AVERROR(ENOMEM);
diff --git a/libavcodec/mpegvideo.h b/libavcodec/mpegvideo.h
index 5af74acf95..8351f806b4 100644
--- a/libavcodec/mpegvideo.h
+++ b/libavcodec/mpegvideo.h
@@ -418,7 +418,15 @@ typedef struct MpegEncContext {
int slice_height; ///< in macroblocks
int first_slice_line; ///< used in MPEG-4 too to handle resync markers
int flipflop_rounding;
- int msmpeg4_version; ///< 0=not msmpeg4, 1=mp41, 2=mp42, 3=mp43/divx3 4=wmv1/7 5=wmv2/8
+ enum {
+ MSMP4_UNUSED,
+ MSMP4_V1,
+ MSMP4_V2,
+ MSMP4_V3,
+ MSMP4_WMV1,
+ MSMP4_WMV2,
+ MSMP4_VC1, ///< for VC1 (image), WMV3 (image) and MSS2.
+ } msmpeg4_version;
int per_mb_rl_table;
int esc3_level_length;
int esc3_run_length;
diff --git a/libavcodec/mpegvideo_enc.c b/libavcodec/mpegvideo_enc.c
index 195d1e3465..58f68ef5f3 100644
--- a/libavcodec/mpegvideo_enc.c
+++ b/libavcodec/mpegvideo_enc.c
@@ -754,7 +754,7 @@ av_cold int ff_mpv_encode_init(AVCodecContext *avctx)
s->out_format = FMT_H263;
s->h263_pred = 1;
s->unrestricted_mv = 1;
- s->msmpeg4_version = 2;
+ s->msmpeg4_version = MSMP4_V2;
avctx->delay = 0;
s->low_delay = 1;
break;
@@ -762,7 +762,7 @@ av_cold int ff_mpv_encode_init(AVCodecContext *avctx)
s->out_format = FMT_H263;
s->h263_pred = 1;
s->unrestricted_mv = 1;
- s->msmpeg4_version = 3;
+ s->msmpeg4_version = MSMP4_V3;
s->flipflop_rounding = 1;
avctx->delay = 0;
s->low_delay = 1;
@@ -771,7 +771,7 @@ av_cold int ff_mpv_encode_init(AVCodecContext *avctx)
s->out_format = FMT_H263;
s->h263_pred = 1;
s->unrestricted_mv = 1;
- s->msmpeg4_version = 4;
+ s->msmpeg4_version = MSMP4_WMV1;
s->flipflop_rounding = 1;
avctx->delay = 0;
s->low_delay = 1;
@@ -780,7 +780,7 @@ av_cold int ff_mpv_encode_init(AVCodecContext *avctx)
s->out_format = FMT_H263;
s->h263_pred = 1;
s->unrestricted_mv = 1;
- s->msmpeg4_version = 5;
+ s->msmpeg4_version = MSMP4_WMV2;
s->flipflop_rounding = 1;
avctx->delay = 0;
s->low_delay = 1;
@@ -916,8 +916,10 @@ av_cold int ff_mpv_encode_init(AVCodecContext *avctx)
if (CONFIG_H263_ENCODER && s->out_format == FMT_H263) {
ff_h263_encode_init(s);
- if (CONFIG_MSMPEG4ENC && s->msmpeg4_version)
+#if CONFIG_MSMPEG4ENC
+ if (s->msmpeg4_version != MSMP4_UNUSED)
ff_msmpeg4_encode_init(s);
+#endif
}
/* init q matrix */
@@ -3456,9 +3458,12 @@ static int encode_thread(AVCodecContext *c, void *arg){
}
}
+#if CONFIG_MSMPEG4ENC
//not beautiful here but we must write it before flushing so it has to be here
- if (CONFIG_MSMPEG4ENC && s->msmpeg4_version && s->msmpeg4_version<4 && s->pict_type == AV_PICTURE_TYPE_I)
+ if (s->msmpeg4_version != MSMP4_UNUSED && s->msmpeg4_version < MSMP4_WMV1 &&
+ s->pict_type == AV_PICTURE_TYPE_I)
ff_msmpeg4_encode_ext_header(s);
+#endif
write_slice_end(s);
@@ -3561,7 +3566,7 @@ static int encode_picture(MpegEncContext *s, const AVPacket *pkt)
/* we need to initialize some time vars before we can encode B-frames */
// RAL: Condition added for MPEG1VIDEO
- if (s->out_format == FMT_MPEG1 || (s->h263_pred && !s->msmpeg4_version))
+ if (s->out_format == FMT_MPEG1 || (s->h263_pred && s->msmpeg4_version == MSMP4_UNUSED))
set_frame_distances(s);
if(CONFIG_MPEG4_ENCODER && s->codec_id == AV_CODEC_ID_MPEG4)
ff_set_mpeg4_time(s);
@@ -3571,8 +3576,7 @@ static int encode_picture(MpegEncContext *s, const AVPacket *pkt)
// s->lambda= s->cur_pic.ptr->quality; //FIXME qscale / ... stuff for ME rate distortion
if(s->pict_type==AV_PICTURE_TYPE_I){
- if(s->msmpeg4_version >= 3) s->no_rounding=1;
- else s->no_rounding=0;
+ s->no_rounding = s->msmpeg4_version >= MSMP4_V3;
}else if(s->pict_type!=AV_PICTURE_TYPE_B){
if(s->flipflop_rounding || s->codec_id == AV_CODEC_ID_H263P || s->codec_id == AV_CODEC_ID_MPEG4)
s->no_rounding ^= 1;
@@ -3654,7 +3658,7 @@ static int encode_picture(MpegEncContext *s, const AVPacket *pkt)
s->pict_type= AV_PICTURE_TYPE_I;
for(i=0; i<s->mb_stride*s->mb_height; i++)
s->mb_type[i]= CANDIDATE_MB_TYPE_INTRA;
- if(s->msmpeg4_version >= 3)
+ if (s->msmpeg4_version >= MSMP4_V3)
s->no_rounding=1;
ff_dlog(s, "Scene change detected, encoding as I Frame %"PRId64" %"PRId64"\n",
s->mb_var_sum, s->mc_mb_var_sum);
@@ -3798,8 +3802,10 @@ static int encode_picture(MpegEncContext *s, const AVPacket *pkt)
case FMT_H263:
if (CONFIG_WMV2_ENCODER && s->codec_id == AV_CODEC_ID_WMV2)
ff_wmv2_encode_picture_header(s);
- else if (CONFIG_MSMPEG4ENC && s->msmpeg4_version)
+#if CONFIG_MSMPEG4ENC
+ else if (s->msmpeg4_version != MSMP4_UNUSED)
ff_msmpeg4_encode_picture_header(s);
+#endif
else if (CONFIG_MPEG4_ENCODER && s->h263_pred) {
ret = ff_mpeg4_encode_picture_header(s);
if (ret < 0)
diff --git a/libavcodec/mpv_reconstruct_mb_template.c b/libavcodec/mpv_reconstruct_mb_template.c
index 9aacf380a1..6ad353ddfd 100644
--- a/libavcodec/mpv_reconstruct_mb_template.c
+++ b/libavcodec/mpv_reconstruct_mb_template.c
@@ -173,7 +173,7 @@ void mpv_reconstruct_mb_internal(MpegEncContext *s, int16_t block[12][64],
}
/* add dct residue */
- if (!(IS_MPEG12(s) || s->msmpeg4_version ||
+ if (!(IS_MPEG12(s) || s->msmpeg4_version != MSMP4_UNUSED ||
(s->codec_id == AV_CODEC_ID_MPEG4 && !s->mpeg_quant)))
#endif /* !IS_ENCODER */
{
diff --git a/libavcodec/msmpeg4.c b/libavcodec/msmpeg4.c
index f7ebb8ba89..50fd581a83 100644
--- a/libavcodec/msmpeg4.c
+++ b/libavcodec/msmpeg4.c
@@ -120,12 +120,12 @@ av_cold void ff_msmpeg4_common_init(MpegEncContext *s)
static AVOnce init_static_once = AV_ONCE_INIT;
switch(s->msmpeg4_version){
- case 1:
- case 2:
+ case MSMP4_V1:
+ case MSMP4_V2:
s->y_dc_scale_table=
s->c_dc_scale_table= ff_mpeg1_dc_scale_table;
break;
- case 3:
+ case MSMP4_V3:
if(s->workaround_bugs){
s->y_dc_scale_table= ff_old_ff_y_dc_scale_table;
s->c_dc_scale_table= ff_wmv1_c_dc_scale_table;
@@ -134,14 +134,14 @@ av_cold void ff_msmpeg4_common_init(MpegEncContext *s)
s->c_dc_scale_table= ff_mpeg4_c_dc_scale_table;
}
break;
- case 4:
- case 5:
+ case MSMP4_WMV1:
+ case MSMP4_WMV2:
s->y_dc_scale_table= ff_wmv1_y_dc_scale_table;
s->c_dc_scale_table= ff_wmv1_c_dc_scale_table;
break;
}
- if(s->msmpeg4_version>=4){
+ if (s->msmpeg4_version >= MSMP4_WMV1) {
ff_init_scantable(s->idsp.idct_permutation, &s->intra_scantable, ff_wmv1_scantable[1]);
ff_init_scantable(s->idsp.idct_permutation, &s->inter_scantable, ff_wmv1_scantable[0]);
ff_permute_scantable(s->permutated_intra_h_scantable, ff_wmv1_scantable[2],
@@ -218,9 +218,8 @@ int ff_msmpeg4_pred_dc(MpegEncContext *s, int n,
b = dc_val[ - 1 - wrap];
c = dc_val[ - wrap];
- if(s->first_slice_line && (n&2)==0 && s->msmpeg4_version<4){
+ if (s->first_slice_line && !(n & 2) && s->msmpeg4_version < MSMP4_WMV1)
b=c=1024;
- }
/* XXX: the following solution consumes divisions, but it does not
necessitate to modify mpegvideo.c. The problem comes from the
@@ -259,7 +258,7 @@ int ff_msmpeg4_pred_dc(MpegEncContext *s, int n,
#endif
/* XXX: WARNING: they did not choose the same test as MPEG-4. This
is very important ! */
- if(s->msmpeg4_version>3){
+ if (s->msmpeg4_version > MSMP4_V3) {
if(s->inter_intra_pred){
uint8_t *dest;
int wrap;
diff --git a/libavcodec/msmpeg4dec.c b/libavcodec/msmpeg4dec.c
index 4143c46c15..209e1fe1b2 100644
--- a/libavcodec/msmpeg4dec.c
+++ b/libavcodec/msmpeg4dec.c
@@ -125,7 +125,7 @@ static int msmpeg4v12_decode_mb(MpegEncContext *s, int16_t block[6][64])
}
}
- if(s->msmpeg4_version==2)
+ if (s->msmpeg4_version == MSMP4_V2)
code = get_vlc2(&s->gb, v2_mb_type_vlc, V2_MB_TYPE_VLC_BITS, 1);
else
code = get_vlc2(&s->gb, ff_h263_inter_MCBPC_vlc, INTER_MCBPC_VLC_BITS, 2);
@@ -139,7 +139,7 @@ static int msmpeg4v12_decode_mb(MpegEncContext *s, int16_t block[6][64])
cbp = code & 0x3;
} else {
s->mb_intra = 1;
- if(s->msmpeg4_version==2)
+ if (s->msmpeg4_version == MSMP4_V2)
cbp = get_vlc2(&s->gb, v2_intra_cbpc_vlc, V2_INTRA_CBPC_VLC_BITS, 1);
else
cbp = get_vlc2(&s->gb, ff_h263_intra_MCBPC_vlc, INTRA_MCBPC_VLC_BITS, 2);
@@ -159,7 +159,8 @@ static int msmpeg4v12_decode_mb(MpegEncContext *s, int16_t block[6][64])
}
cbp|= cbpy<<2;
- if(s->msmpeg4_version==1 || (cbp&3) != 3) cbp^= 0x3C;
+ if (s->msmpeg4_version == MSMP4_V1 || (cbp&3) != 3)
+ cbp ^= 0x3C;
ff_h263_pred_motion(s, 0, 0, &mx, &my);
mx= msmpeg4v2_decode_motion(s, mx, 1);
@@ -172,7 +173,7 @@ static int msmpeg4v12_decode_mb(MpegEncContext *s, int16_t block[6][64])
*mb_type_ptr = MB_TYPE_L0 | MB_TYPE_16x16;
} else {
int v;
- if(s->msmpeg4_version==2){
+ if (s->msmpeg4_version == MSMP4_V2) {
s->ac_pred = get_bits1(&s->gb);
v = get_vlc2(&s->gb, ff_h263_cbpy_vlc, CBPY_VLC_BITS, 1);
if (v < 0) {
@@ -366,16 +367,16 @@ av_cold int ff_msmpeg4_decode_init(AVCodecContext *avctx)
ff_msmpeg4_common_init(s);
- switch(s->msmpeg4_version){
- case 1:
- case 2:
+ switch (s->msmpeg4_version) {
+ case MSMP4_V1:
+ case MSMP4_V2:
s->decode_mb= msmpeg4v12_decode_mb;
break;
- case 3:
- case 4:
+ case MSMP4_V3:
+ case MSMP4_WMV1:
s->decode_mb= msmpeg4v34_decode_mb;
break;
- case 5:
+ case MSMP4_WMV2:
break;
}
@@ -398,7 +399,7 @@ int ff_msmpeg4_decode_picture_header(MpegEncContext * s)
if (get_bits_left(&s->gb) * 8LL < (s->width+15)/16 * ((s->height+15)/16))
return AVERROR_INVALIDDATA;
- if(s->msmpeg4_version==1){
+ if (s->msmpeg4_version == MSMP4_V1) {
int start_code = get_bits_long(&s->gb, 32);
if(start_code!=0x00000100){
av_log(s->avctx, AV_LOG_ERROR, "invalid startcode\n");
@@ -422,7 +423,7 @@ int ff_msmpeg4_decode_picture_header(MpegEncContext * s)
if (s->pict_type == AV_PICTURE_TYPE_I) {
code = get_bits(&s->gb, 5);
- if(s->msmpeg4_version==1){
+ if (s->msmpeg4_version == MSMP4_V1) {
if(code==0 || code>s->mb_height){
av_log(s->avctx, AV_LOG_ERROR, "invalid slice height %d\n", code);
return -1;
@@ -440,20 +441,20 @@ int ff_msmpeg4_decode_picture_header(MpegEncContext * s)
}
switch(s->msmpeg4_version){
- case 1:
- case 2:
+ case MSMP4_V1:
+ case MSMP4_V2:
s->rl_chroma_table_index = 2;
s->rl_table_index = 2;
s->dc_table_index = 0; //not used
break;
- case 3:
+ case MSMP4_V3:
s->rl_chroma_table_index = decode012(&s->gb);
s->rl_table_index = decode012(&s->gb);
s->dc_table_index = get_bits1(&s->gb);
break;
- case 4:
+ case MSMP4_WMV1:
ff_msmpeg4_decode_ext_header(s, (2+5+5+17+7)/8);
if(s->bit_rate > MBAC_BITRATE) s->per_mb_rl_table= get_bits1(&s->gb);
@@ -479,9 +480,9 @@ int ff_msmpeg4_decode_picture_header(MpegEncContext * s)
s->slice_height);
} else {
switch(s->msmpeg4_version){
- case 1:
- case 2:
- if(s->msmpeg4_version==1)
+ case MSMP4_V1:
+ case MSMP4_V2:
+ if (s->msmpeg4_version == MSMP4_V1)
s->use_skip_mb_code = 1;
else
s->use_skip_mb_code = get_bits1(&s->gb);
@@ -490,7 +491,7 @@ int ff_msmpeg4_decode_picture_header(MpegEncContext * s)
s->dc_table_index = 0; //not used
s->mv_table_index = 0;
break;
- case 3:
+ case MSMP4_V3:
s->use_skip_mb_code = get_bits1(&s->gb);
s->rl_table_index = decode012(&s->gb);
s->rl_chroma_table_index = s->rl_table_index;
@@ -499,7 +500,7 @@ int ff_msmpeg4_decode_picture_header(MpegEncContext * s)
s->mv_table_index = get_bits1(&s->gb);
break;
- case 4:
+ case MSMP4_WMV1:
s->use_skip_mb_code = get_bits1(&s->gb);
if(s->bit_rate > MBAC_BITRATE) s->per_mb_rl_table= get_bits1(&s->gb);
@@ -545,13 +546,13 @@ int ff_msmpeg4_decode_picture_header(MpegEncContext * s)
int ff_msmpeg4_decode_ext_header(MpegEncContext * s, int buf_size)
{
int left= buf_size*8 - get_bits_count(&s->gb);
- int length= s->msmpeg4_version>=3 ? 17 : 16;
+ int length = s->msmpeg4_version >= MSMP4_V3 ? 17 : 16;
/* the alt_bitstream reader could read over the end so we need to check it */
if(left>=length && left<length+8)
{
skip_bits(&s->gb, 5); /* fps */
s->bit_rate= get_bits(&s->gb, 11)*1024;
- if(s->msmpeg4_version>=3)
+ if (s->msmpeg4_version >= MSMP4_V3)
s->flipflop_rounding= get_bits1(&s->gb);
else
s->flipflop_rounding= 0;
@@ -559,7 +560,7 @@ int ff_msmpeg4_decode_ext_header(MpegEncContext * s, int buf_size)
else if(left<length+8)
{
s->flipflop_rounding= 0;
- if(s->msmpeg4_version != 2)
+ if (s->msmpeg4_version != MSMP4_V2)
av_log(s->avctx, AV_LOG_ERROR, "ext header missing, %d left\n", left);
}
else
@@ -574,7 +575,7 @@ static int msmpeg4_decode_dc(MpegEncContext * s, int n, int *dir_ptr)
{
int level, pred;
- if(s->msmpeg4_version<=2){
+ if (s->msmpeg4_version <= MSMP4_V2) {
if (n < 4) {
level = get_vlc2(&s->gb, v2_dc_lum_vlc, MSMP4_DC_VLC_BITS, 3);
} else {
@@ -600,7 +601,7 @@ static int msmpeg4_decode_dc(MpegEncContext * s, int n, int *dir_ptr)
}
}
- if(s->msmpeg4_version==1){
+ if (s->msmpeg4_version == MSMP4_V1) {
int32_t *dc_val;
pred = msmpeg4v1_pred_dc(s, n, &dc_val);
level += pred;
@@ -658,7 +659,7 @@ int ff_msmpeg4_decode_block(MpegEncContext * s, int16_t * block,
}
block[0] = level;
- run_diff = s->msmpeg4_version >= 4;
+ run_diff = s->msmpeg4_version >= MSMP4_WMV1;
i = 0;
if (!coded) {
goto not_coded;
@@ -678,7 +679,7 @@ int ff_msmpeg4_decode_block(MpegEncContext * s, int16_t * block,
i = -1;
rl = &ff_rl_table[3 + s->rl_table_index];
- if(s->msmpeg4_version==2)
+ if (s->msmpeg4_version == MSMP4_V2)
run_diff = 0;
else
run_diff = 1;
@@ -700,12 +701,13 @@ int ff_msmpeg4_decode_block(MpegEncContext * s, int16_t * block,
int cache;
cache= GET_CACHE(re, &s->gb);
/* escape */
- if (s->msmpeg4_version==1 || (cache&0x80000000)==0) {
- if (s->msmpeg4_version==1 || (cache&0x40000000)==0) {
+ if (s->msmpeg4_version == MSMP4_V1 || (cache&0x80000000)==0) {
+ if (s->msmpeg4_version == MSMP4_V1 || (cache&0x40000000)==0) {
/* third escape */
- if(s->msmpeg4_version!=1) LAST_SKIP_BITS(re, &s->gb, 2);
+ if (s->msmpeg4_version != MSMP4_V1)
+ LAST_SKIP_BITS(re, &s->gb, 2);
UPDATE_CACHE(re, &s->gb);
- if(s->msmpeg4_version<=3){
+ if (s->msmpeg4_version <= MSMP4_V3) {
last= SHOW_UBITS(re, &s->gb, 1); SKIP_CACHE(re, &s->gb, 1);
run= SHOW_UBITS(re, &s->gb, 6); SKIP_CACHE(re, &s->gb, 6);
level= SHOW_SBITS(re, &s->gb, 8);
@@ -804,7 +806,7 @@ int ff_msmpeg4_decode_block(MpegEncContext * s, int16_t * block,
i = 63; /* XXX: not optimal */
}
}
- if(s->msmpeg4_version>=4 && i>0) i=63; //FIXME/XXX optimize
+ if (s->msmpeg4_version >= MSMP4_WMV1 && i > 0) i=63; //FIXME/XXX optimize
s->block_last_index[n] = i;
return 0;
diff --git a/libavcodec/msmpeg4enc.c b/libavcodec/msmpeg4enc.c
index 5e6bc231d4..642a0ff100 100644
--- a/libavcodec/msmpeg4enc.c
+++ b/libavcodec/msmpeg4enc.c
@@ -141,7 +141,7 @@ av_cold void ff_msmpeg4_encode_init(MpegEncContext *s)
static AVOnce init_static_once = AV_ONCE_INIT;
ff_msmpeg4_common_init(s);
- if (s->msmpeg4_version >= 4) {
+ if (s->msmpeg4_version >= MSMP4_WMV1) {
s->min_qcoeff = -255;
s->max_qcoeff = 255;
}
@@ -226,7 +226,7 @@ void ff_msmpeg4_encode_picture_header(MpegEncContext * s)
put_bits(&s->pb, 2, s->pict_type - 1);
put_bits(&s->pb, 5, s->qscale);
- if(s->msmpeg4_version<=2){
+ if (s->msmpeg4_version <= MSMP4_V2) {
s->rl_table_index = 2;
s->rl_chroma_table_index = 2;
}
@@ -235,7 +235,7 @@ void ff_msmpeg4_encode_picture_header(MpegEncContext * s)
s->mv_table_index = 1; /* only if P-frame */
s->use_skip_mb_code = 1; /* only if P-frame */
s->per_mb_rl_table = 0;
- if(s->msmpeg4_version==4)
+ if (s->msmpeg4_version == MSMP4_WMV1)
s->inter_intra_pred= (s->width*s->height < 320*240 && s->bit_rate<=II_BITRATE && s->pict_type==AV_PICTURE_TYPE_P);
ff_dlog(s, "%d %"PRId64" %d %d %d\n", s->pict_type, s->bit_rate,
s->inter_intra_pred, s->width, s->height);
@@ -244,13 +244,13 @@ void ff_msmpeg4_encode_picture_header(MpegEncContext * s)
s->slice_height= s->mb_height/1;
put_bits(&s->pb, 5, 0x16 + s->mb_height/s->slice_height);
- if(s->msmpeg4_version==4){
+ if (s->msmpeg4_version == MSMP4_WMV1) {
ff_msmpeg4_encode_ext_header(s);
if(s->bit_rate>MBAC_BITRATE)
put_bits(&s->pb, 1, s->per_mb_rl_table);
}
- if(s->msmpeg4_version>2){
+ if (s->msmpeg4_version > MSMP4_V2) {
if(!s->per_mb_rl_table){
ff_msmpeg4_code012(&s->pb, s->rl_chroma_table_index);
ff_msmpeg4_code012(&s->pb, s->rl_table_index);
@@ -261,10 +261,10 @@ void ff_msmpeg4_encode_picture_header(MpegEncContext * s)
} else {
put_bits(&s->pb, 1, s->use_skip_mb_code);
- if(s->msmpeg4_version==4 && s->bit_rate>MBAC_BITRATE)
+ if (s->msmpeg4_version == MSMP4_WMV1 && s->bit_rate > MBAC_BITRATE)
put_bits(&s->pb, 1, s->per_mb_rl_table);
- if(s->msmpeg4_version>2){
+ if (s->msmpeg4_version > MSMP4_V2) {
if(!s->per_mb_rl_table)
ff_msmpeg4_code012(&s->pb, s->rl_table_index);
@@ -298,7 +298,7 @@ FF_ENABLE_DEPRECATION_WARNINGS
put_bits(&s->pb, 11, FFMIN(s->bit_rate / 1024, 2047));
- if (s->msmpeg4_version >= 3)
+ if (s->msmpeg4_version >= MSMP4_V3)
put_bits(&s->pb, 1, s->flipflop_rounding);
else
av_assert0(!s->flipflop_rounding);
@@ -340,7 +340,7 @@ void ff_msmpeg4_encode_motion(MpegEncContext * s,
void ff_msmpeg4_handle_slices(MpegEncContext *s){
if (s->mb_x == 0) {
if (s->slice_height && (s->mb_y % s->slice_height) == 0) {
- if(s->msmpeg4_version < 4){
+ if (s->msmpeg4_version < MSMP4_WMV1) {
ff_mpeg4_clean_buffers(s);
}
s->first_slice_line = 1;
@@ -410,7 +410,7 @@ void ff_msmpeg4_encode_mb(MpegEncContext * s,
if (s->use_skip_mb_code)
put_bits(&s->pb, 1, 0); /* mb coded */
- if(s->msmpeg4_version<=2){
+ if (s->msmpeg4_version <= MSMP4_V2) {
put_bits(&s->pb,
ff_v2_mb_type[cbp&3][1],
ff_v2_mb_type[cbp&3][0]);
@@ -452,7 +452,7 @@ void ff_msmpeg4_encode_mb(MpegEncContext * s,
int val = (s->block_last_index[i] >= 1);
cbp |= val << (5 - i);
}
- if(s->msmpeg4_version<=2){
+ if (s->msmpeg4_version <= MSMP4_V2) {
if (s->pict_type == AV_PICTURE_TYPE_I) {
put_bits(&s->pb,
ff_v2_intra_cbpc[cbp&3][1], ff_v2_intra_cbpc[cbp&3][0]);
@@ -524,7 +524,7 @@ static void msmpeg4_encode_dc(MpegEncContext * s, int level, int n, int *dir_ptr
/* do the prediction */
level -= pred;
- if(s->msmpeg4_version<=2){
+ if (s->msmpeg4_version <= MSMP4_V2) {
if (n < 4) {
put_bits(&s->pb,
ff_v2_dc_lum_table[level + 256][1],
@@ -575,20 +575,17 @@ void ff_msmpeg4_encode_block(MpegEncContext * s, int16_t * block, int n)
} else {
rl = &ff_rl_table[3 + s->rl_chroma_table_index];
}
- run_diff = s->msmpeg4_version>=4;
+ run_diff = s->msmpeg4_version >= MSMP4_WMV1;
scantable= s->intra_scantable.permutated;
} else {
i = 0;
rl = &ff_rl_table[3 + s->rl_table_index];
- if(s->msmpeg4_version<=2)
- run_diff = 0;
- else
- run_diff = 1;
+ run_diff = s->msmpeg4_version > MSMP4_V2;
scantable= s->inter_scantable.permutated;
}
/* recalculate block_last_index for M$ wmv1 */
- if (s->msmpeg4_version >= 4 && s->block_last_index[n] > 0) {
+ if (s->msmpeg4_version >= MSMP4_WMV1 && s->block_last_index[n] > 0) {
for(last_index=63; last_index>=0; last_index--){
if(block[scantable[last_index]]) break;
}
@@ -634,7 +631,7 @@ void ff_msmpeg4_encode_block(MpegEncContext * s, int16_t * block, int n)
if (run1 < 0)
goto esc3;
code = get_rl_index(rl, last, run1+1, level);
- if (s->msmpeg4_version == 4 && code == rl->n)
+ if (s->msmpeg4_version == MSMP4_WMV1 && code == rl->n)
goto esc3;
code = get_rl_index(rl, last, run1, level);
if (code == rl->n) {
@@ -642,7 +639,7 @@ void ff_msmpeg4_encode_block(MpegEncContext * s, int16_t * block, int n)
/* third escape */
put_bits(&s->pb, 1, 0);
put_bits(&s->pb, 1, last);
- if(s->msmpeg4_version>=4){
+ if (s->msmpeg4_version >= MSMP4_WMV1) {
if(s->esc3_level_length==0){
s->esc3_level_length=8;
s->esc3_run_length= 6;
diff --git a/libavcodec/vc1dec.c b/libavcodec/vc1dec.c
index 17da7ed7cd..4b31860c3f 100644
--- a/libavcodec/vc1dec.c
+++ b/libavcodec/vc1dec.c
@@ -610,7 +610,7 @@ av_cold void ff_vc1_init_common(VC1Context *v)
s->out_format = FMT_H263;
s->h263_pred = 1;
- s->msmpeg4_version = 6;
+ s->msmpeg4_version = MSMP4_VC1;
ff_vc1dsp_init(&v->vc1dsp);
--
2.40.1
_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
To unsubscribe, visit link above, or email
ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".
^ permalink raw reply [flat|nested] 75+ messages in thread
* [FFmpeg-devel] [PATCH v2 69/71] avcodec/ituh263enc: Remove redundant check
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (66 preceding siblings ...)
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 68/71] avcodec/mpegvideo: Use enum for msmpeg4_version Andreas Rheinhardt
@ 2024-05-11 20:51 ` Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 70/71] avcodec/mpegvideo_enc: Binarize reference Andreas Rheinhardt
` (2 subsequent siblings)
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:51 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
It is redundant due to the identical check in ff_mpv_encode_init().
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/ituh263enc.c | 5 -----
1 file changed, 5 deletions(-)
diff --git a/libavcodec/ituh263enc.c b/libavcodec/ituh263enc.c
index b7c9f124a9..4b3c55896f 100644
--- a/libavcodec/ituh263enc.c
+++ b/libavcodec/ituh263enc.c
@@ -881,11 +881,6 @@ av_cold void ff_h263_encode_init(MpegEncContext *s)
s->c_dc_scale_table= ff_mpeg1_dc_scale_table;
}
- if (s->lmin > s->lmax) {
- av_log(s->avctx, AV_LOG_WARNING, "Clipping lmin value to %d\n", s->lmax);
- s->lmin = s->lmax;
- }
-
ff_thread_once(&init_static_once, h263_encode_init_static);
}
--
2.40.1
* [FFmpeg-devel] [PATCH v2 70/71] avcodec/mpegvideo_enc: Binarize reference
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (67 preceding siblings ...)
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 69/71] avcodec/ituh263enc: Remove redundant check Andreas Rheinhardt
@ 2024-05-11 20:51 ` Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 71/71] avcodec/vc1_pred: Fix indentation Andreas Rheinhardt
2024-06-11 20:59 ` [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:51 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
The H.264 decoder used the reference field to store its picture_structure;
yet it has not used mpegvideo any more since commit
2c541554076cc8a72e7145d4da30389ca763f32f, and commit
629259bdb58061b7b7c1ae4cdc44599f6c0bb050 later removed the last remnants.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/mpegvideo_enc.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/libavcodec/mpegvideo_enc.c b/libavcodec/mpegvideo_enc.c
index 58f68ef5f3..401ba8ca5a 100644
--- a/libavcodec/mpegvideo_enc.c
+++ b/libavcodec/mpegvideo_enc.c
@@ -1639,8 +1639,7 @@ static int select_input_picture(MpegEncContext *s)
if (s->reordered_input_picture[0]) {
s->reordered_input_picture[0]->reference =
- s->reordered_input_picture[0]->f->pict_type !=
- AV_PICTURE_TYPE_B ? 3 : 0;
+ s->reordered_input_picture[0]->f->pict_type != AV_PICTURE_TYPE_B;
if (s->reordered_input_picture[0]->shared || s->avctx->rc_buffer_size) {
// input is a shared pix, so we can't modify it -> allocate a new
--
2.40.1
* [FFmpeg-devel] [PATCH v2 71/71] avcodec/vc1_pred: Fix indentation
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (68 preceding siblings ...)
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 70/71] avcodec/mpegvideo_enc: Binarize reference Andreas Rheinhardt
@ 2024-05-11 20:51 ` Andreas Rheinhardt
2024-06-11 20:59 ` [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-05-11 20:51 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
Forgotten after 41f974205317f6f40e72c9689374ab52d490dc6a.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
libavcodec/vc1_pred.c | 26 +++++++++++++-------------
1 file changed, 13 insertions(+), 13 deletions(-)
diff --git a/libavcodec/vc1_pred.c b/libavcodec/vc1_pred.c
index 6e260fa053..87d9b6d6dc 100644
--- a/libavcodec/vc1_pred.c
+++ b/libavcodec/vc1_pred.c
@@ -719,19 +719,19 @@ void ff_vc1_pred_b_mv(VC1Context *v, int dmv_x[2], int dmv_y[2],
s->cur_pic.motion_val[1][xy][1] = 0;
return;
}
- if (direct && s->next_pic.ptr->field_picture)
- av_log(s->avctx, AV_LOG_WARNING, "Mixed frame/field direct mode not supported\n");
-
- s->mv[0][0][0] = scale_mv(s->next_pic.motion_val[1][xy][0], v->bfraction, 0, s->quarter_sample);
- s->mv[0][0][1] = scale_mv(s->next_pic.motion_val[1][xy][1], v->bfraction, 0, s->quarter_sample);
- s->mv[1][0][0] = scale_mv(s->next_pic.motion_val[1][xy][0], v->bfraction, 1, s->quarter_sample);
- s->mv[1][0][1] = scale_mv(s->next_pic.motion_val[1][xy][1], v->bfraction, 1, s->quarter_sample);
-
- /* Pullback predicted motion vectors as specified in 8.4.5.4 */
- s->mv[0][0][0] = av_clip(s->mv[0][0][0], -60 - (s->mb_x << 6), (s->mb_width << 6) - 4 - (s->mb_x << 6));
- s->mv[0][0][1] = av_clip(s->mv[0][0][1], -60 - (s->mb_y << 6), (s->mb_height << 6) - 4 - (s->mb_y << 6));
- s->mv[1][0][0] = av_clip(s->mv[1][0][0], -60 - (s->mb_x << 6), (s->mb_width << 6) - 4 - (s->mb_x << 6));
- s->mv[1][0][1] = av_clip(s->mv[1][0][1], -60 - (s->mb_y << 6), (s->mb_height << 6) - 4 - (s->mb_y << 6));
+ if (direct && s->next_pic.ptr->field_picture)
+ av_log(s->avctx, AV_LOG_WARNING, "Mixed frame/field direct mode not supported\n");
+
+ s->mv[0][0][0] = scale_mv(s->next_pic.motion_val[1][xy][0], v->bfraction, 0, s->quarter_sample);
+ s->mv[0][0][1] = scale_mv(s->next_pic.motion_val[1][xy][1], v->bfraction, 0, s->quarter_sample);
+ s->mv[1][0][0] = scale_mv(s->next_pic.motion_val[1][xy][0], v->bfraction, 1, s->quarter_sample);
+ s->mv[1][0][1] = scale_mv(s->next_pic.motion_val[1][xy][1], v->bfraction, 1, s->quarter_sample);
+
+ /* Pullback predicted motion vectors as specified in 8.4.5.4 */
+ s->mv[0][0][0] = av_clip(s->mv[0][0][0], -60 - (s->mb_x << 6), (s->mb_width << 6) - 4 - (s->mb_x << 6));
+ s->mv[0][0][1] = av_clip(s->mv[0][0][1], -60 - (s->mb_y << 6), (s->mb_height << 6) - 4 - (s->mb_y << 6));
+ s->mv[1][0][0] = av_clip(s->mv[1][0][0], -60 - (s->mb_x << 6), (s->mb_width << 6) - 4 - (s->mb_x << 6));
+ s->mv[1][0][1] = av_clip(s->mv[1][0][1], -60 - (s->mb_y << 6), (s->mb_height << 6) - 4 - (s->mb_y << 6));
if (direct) {
s->cur_pic.motion_val[0][xy][0] = s->mv[0][0][0];
s->cur_pic.motion_val[0][xy][1] = s->mv[0][0][1];
--
2.40.1
* Re: [FFmpeg-devel] [PATCH v2 40/71] avcodec/mpegvideo_enc: Move copying properties to alloc_picture()
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 40/71] avcodec/mpegvideo_enc: Move copying properties " Andreas Rheinhardt
@ 2024-05-12 19:55 ` Michael Niedermayer
2024-06-08 14:03 ` [FFmpeg-devel] [PATCH v3 " Andreas Rheinhardt
0 siblings, 1 reply; 75+ messages in thread
From: Michael Niedermayer @ 2024-05-12 19:55 UTC (permalink / raw)
To: FFmpeg development discussions and patches
On Sat, May 11, 2024 at 10:51:04PM +0200, Andreas Rheinhardt wrote:
> This way said function sets everything (except for the actual
> contents of the frame's data). Also rename it to prepare_picture()
> given its new role.
>
> Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
> ---
> libavcodec/mpegvideo_enc.c | 22 +++++++++++-----------
> 1 file changed, 11 insertions(+), 11 deletions(-)
changes output for:
make -j32 && ./ffmpeg -y -i mm-short.mpg -qmin 1 -qmax 8 -vtag mx3p -vcodec mpeg2video -r 25 -pix_fmt yuv422p -minrate 30000k -maxrate 30000k -b 30000k -g 1 -flags +ildct+low_delay -dc 10 -ps 1 -qmin 1 -qmax 8 -top 1 -bufsize 1200000 -lmin QP2LAMBDA -t 2 -an -bitexact /tmp/file-qp1-hq-old.mpg
md5sum /tmp/file-qp1-hq-old.mpg /tmp/file-qp1-hq.mpg
dee0a5b29d2fbb63060ed43668db0df0 /tmp/file-qp1-hq-old.mpg
86c3ed0ec7a948e32888a444475439ae /tmp/file-qp1-hq.mpg
did not investigate why
thx
[...]
--
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
Into a blind darkness they enter who follow after the Ignorance,
they as if into a greater darkness enter who devote themselves
to the Knowledge alone. -- Isha Upanishad
* [FFmpeg-devel] [PATCH v3 40/71] avcodec/mpegvideo_enc: Move copying properties to alloc_picture()
2024-05-12 19:55 ` Michael Niedermayer
@ 2024-06-08 14:03 ` Andreas Rheinhardt
0 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-06-08 14:03 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Andreas Rheinhardt
This way said function sets everything (except for the actual
contents of the frame's data). Also rename it to prepare_picture()
given its new role.
Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
Now calling av_frame_copy_props() after setting the frame's dimensions
to the actually valid (i.e. unpadded) dimensions. (The earlier code
did it before this block for aesthetic reasons.) It matters because
av_frame_copy_props() only copies pan&scan side data when the dimensions
match.
And once again: Thanks for testing.
libavcodec/mpegvideo_enc.c | 22 +++++++++++-----------
1 file changed, 11 insertions(+), 11 deletions(-)
diff --git a/libavcodec/mpegvideo_enc.c b/libavcodec/mpegvideo_enc.c
index f54ec93cf2..3e86737d26 100644
--- a/libavcodec/mpegvideo_enc.c
+++ b/libavcodec/mpegvideo_enc.c
@@ -1091,7 +1091,11 @@ static int get_intra_count(MpegEncContext *s, const uint8_t *src,
return acc;
}
-static int alloc_picture(MpegEncContext *s, AVFrame *f)
+/**
+ * Allocates new buffers for an AVFrame and copies the properties
+ * from another AVFrame.
+ */
+static int prepare_picture(MpegEncContext *s, AVFrame *f, const AVFrame *props_frame)
{
AVCodecContext *avctx = s->avctx;
int ret;
@@ -1116,6 +1120,10 @@ static int alloc_picture(MpegEncContext *s, AVFrame *f)
f->width = avctx->width;
f->height = avctx->height;
+ ret = av_frame_copy_props(f, props_frame);
+ if (ret < 0)
+ return ret;
+
return 0;
}
@@ -1186,14 +1194,9 @@ static int load_input_picture(MpegEncContext *s, const AVFrame *pic_arg)
return ret;
pic->shared = 1;
} else {
- ret = alloc_picture(s, pic->f);
+ ret = prepare_picture(s, pic->f, pic_arg);
if (ret < 0)
goto fail;
- ret = av_frame_copy_props(pic->f, pic_arg);
- if (ret < 0) {
- ff_mpeg_unref_picture(pic);
- return ret;
- }
for (int i = 0; i < 3; i++) {
ptrdiff_t src_stride = pic_arg->linesize[i];
@@ -1607,11 +1610,8 @@ no_output_pic:
// input is a shared pix, so we can't modify it -> allocate a new
// one & ensure that the shared one is reuseable
av_frame_move_ref(s->new_pic, s->reordered_input_picture[0]->f);
- ret = alloc_picture(s, s->reordered_input_picture[0]->f);
- if (ret < 0)
- goto fail;
- ret = av_frame_copy_props(s->reordered_input_picture[0]->f, s->new_pic);
+ ret = prepare_picture(s, s->reordered_input_picture[0]->f, s->new_pic);
if (ret < 0)
goto fail;
} else {
--
2.40.1
* Re: [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
` (69 preceding siblings ...)
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 71/71] avcodec/vc1_pred: Fix indentation Andreas Rheinhardt
@ 2024-06-11 20:59 ` Andreas Rheinhardt
70 siblings, 0 replies; 75+ messages in thread
From: Andreas Rheinhardt @ 2024-06-11 20:59 UTC (permalink / raw)
To: ffmpeg-devel
Andreas Rheinhardt:
> Happens on init_pass2() failure.
>
> Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
> ---
> libavcodec/ratecontrol.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/libavcodec/ratecontrol.c b/libavcodec/ratecontrol.c
> index 9ee08ecb88..27017d7976 100644
> --- a/libavcodec/ratecontrol.c
> +++ b/libavcodec/ratecontrol.c
> @@ -694,6 +694,7 @@ av_cold void ff_rate_control_uninit(MpegEncContext *s)
> emms_c();
>
> av_expr_free(rcc->rc_eq_eval);
> + rcc->rc_eq_eval = NULL;
> av_freep(&rcc->entry);
> }
>
Will apply this patchset tomorrow unless there are objections.
- Andreas
* Re: [FFmpeg-devel] [PATCH v2 43/71] avcodec/mpegpicture: Split MPVPicture into WorkPicture and ordinary Pic
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 43/71] avcodec/mpegpicture: Split MPVPicture into WorkPicture and ordinary Pic Andreas Rheinhardt
@ 2024-06-23 22:28 ` Michael Niedermayer
0 siblings, 0 replies; 75+ messages in thread
From: Michael Niedermayer @ 2024-06-23 22:28 UTC (permalink / raw)
To: FFmpeg development discussions and patches
On Sat, May 11, 2024 at 10:51:07PM +0200, Andreas Rheinhardt wrote:
> There are two types of MPVPictures: Three (cur_pic, last_pic, next_pic)
> that are directly part of MpegEncContext and an array of MPVPictures
> that are separately allocated and are mostly accessed via pointers
> (cur|last|next)_pic_ptr; they are also used to store AVFrames in the
> encoder (necessary due to B-frames). As the name implies, each of the
> former is directly associated with one of the _ptr pointers:
> They actually share the same underlying buffers, but the ones
> that are part of the context can have their data pointers offset
> and their linesize doubled for field pictures.
>
> Up until now, each of these had their own references; in particular,
> there was an underlying av_frame_ref() to sync cur_pic and cur_pic_ptr
> etc. This is wasteful.
>
> This commit changes this relationship: cur_pic, last_pic and next_pic
> now become MPVWorkPictures; this structure does not have an AVFrame
> at all any more, but only the cached values of data and linesize.
> It also contains a pointer to the corresponding MPVPicture, establishing
> a more natural relationship between the two.
> This already means that creating the context-pictures from the pointers
> can no longer fail.
>
> What has not been changed is the fact that the MPVPicture* pointers
> are not ownership pointers and that the MPVPictures are part of an
> array of MPVPictures that is owned by a single AVCodecContext.
> Doing so will be done in a later commit.
>
> Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
> ---
> libavcodec/d3d12va_mpeg2.c | 10 +-
> libavcodec/d3d12va_vc1.c | 10 +-
> libavcodec/dxva2_mpeg2.c | 16 +--
> libavcodec/dxva2_vc1.c | 20 ++--
> libavcodec/h261dec.c | 7 +-
> libavcodec/h263dec.c | 33 +++---
> libavcodec/ituh263dec.c | 4 +-
> libavcodec/mpeg12dec.c | 56 ++++-----
> libavcodec/mpeg12enc.c | 14 +--
> libavcodec/mpeg4videodec.c | 4 +-
> libavcodec/mpeg4videoenc.c | 4 +-
> libavcodec/mpeg_er.c | 6 +-
> libavcodec/mpegpicture.c | 56 ++++++---
> libavcodec/mpegpicture.h | 30 ++++-
> libavcodec/mpegvideo.c | 11 --
> libavcodec/mpegvideo.h | 9 +-
> libavcodec/mpegvideo_dec.c | 143 +++++++++--------------
> libavcodec/mpegvideo_enc.c | 99 ++++++----------
> libavcodec/mpegvideo_motion.c | 8 +-
> libavcodec/mpv_reconstruct_mb_template.c | 4 +-
> libavcodec/mss2.c | 2 +-
> libavcodec/nvdec_mpeg12.c | 6 +-
> libavcodec/nvdec_mpeg4.c | 6 +-
> libavcodec/nvdec_vc1.c | 6 +-
> libavcodec/ratecontrol.c | 10 +-
> libavcodec/rv10.c | 28 ++---
> libavcodec/rv34.c | 38 +++---
> libavcodec/snowenc.c | 17 +--
> libavcodec/svq1enc.c | 5 +-
> libavcodec/vaapi_mpeg2.c | 12 +-
> libavcodec/vaapi_mpeg4.c | 14 +--
> libavcodec/vaapi_vc1.c | 14 ++-
> libavcodec/vc1.c | 2 +-
> libavcodec/vc1_block.c | 12 +-
> libavcodec/vc1_mc.c | 14 +--
> libavcodec/vc1_pred.c | 2 +-
> libavcodec/vc1dec.c | 40 +++----
> libavcodec/vdpau.c | 2 +-
> libavcodec/vdpau_mpeg12.c | 8 +-
> libavcodec/vdpau_mpeg4.c | 6 +-
> libavcodec/vdpau_vc1.c | 12 +-
> libavcodec/videotoolbox.c | 2 +-
> libavcodec/wmv2dec.c | 2 +-
> 43 files changed, 386 insertions(+), 418 deletions(-)
[...]
after this the linesize for the last field picture grows exponentially
s->last_pic.linesize[i] *= 2;
libavcodec/mpeg12dec.c:1304:41: runtime error: signed integer overflow: 4611686018427387904 * 2 cannot be represented in type 'long'
issue: 69732/clusterfuzz-testcase-minimized-ffmpeg_AV_CODEC_ID_MPEGVIDEO_fuzzer-5123551179374592
something like this:
L0 40 0x62e0001f8400 0x613000000780
L1 40 0x62a0001fe200 0x613000000780
L2 40 0x62a000204200 0x613000000780
L0 80 0x62e0001f8400 0x613000000780
L1 80 0x62a0001fe200 0x613000000780
L2 80 0x62a000204200 0x613000000780
L0 100 0x62e0001f8400 0x613000000780
L1 100 0x62a0001fe200 0x613000000780
L2 100 0x62a000204200 0x613000000780
L0 200 0x62e0001f8400 0x613000000780
L1 200 0x62a0001fe200 0x613000000780
L2 200 0x62a000204200 0x613000000780
L0 400 0x62e0001f8400 0x613000000780
L1 400 0x62a0001fe200 0x613000000780
L2 400 0x62a000204200 0x613000000780
L0 800 0x62e0001f8400 0x613000000780
L1 800 0x62a0001fe200 0x613000000780
L2 800 0x62a000204200 0x613000000780
L0 1000 0x62e0001f8400 0x613000000780
L1 1000 0x62a0001fe200 0x613000000780
L2 1000 0x62a000204200 0x613000000780
L0 2000 0x62e0001f8400 0x613000000780
L1 2000 0x62a0001fe200 0x613000000780
L2 2000 0x62a000204200 0x613000000780
L0 4000 0x62e0001f8400 0x613000000780
L1 4000 0x62a0001fe200 0x613000000780
L2 4000 0x62a000204200 0x613000000780
L0 8000 0x62e0001f8400 0x613000000780
L1 8000 0x62a0001fe200 0x613000000780
L2 8000 0x62a000204200 0x613000000780
L0 10000 0x62e0001f8400 0x613000000780
L1 10000 0x62a0001fe200 0x613000000780
L2 10000 0x62a000204200 0x613000000780
L0 20000 0x62e0001f8400 0x613000000780
L1 20000 0x62a0001fe200 0x613000000780
L2 20000 0x62a000204200 0x613000000780
L0 40000 0x62e0001f8400 0x613000000780
L1 40000 0x62a0001fe200 0x613000000780
L2 40000 0x62a000204200 0x613000000780
L0 80000 0x62e0001f8400 0x613000000780
L1 80000 0x62a0001fe200 0x613000000780
L2 80000 0x62a000204200 0x613000000780
L0 100000 0x62e0001f8400 0x613000000780
L1 100000 0x62a0001fe200 0x613000000780
L2 100000 0x62a000204200 0x613000000780
L0 200000 0x62e0001f8400 0x613000000780
L1 200000 0x62a0001fe200 0x613000000780
L2 200000 0x62a000204200 0x613000000780
L0 400000 0x62e0001f8400 0x613000000780
L1 400000 0x62a0001fe200 0x613000000780
L2 400000 0x62a000204200 0x613000000780
L0 800000 0x62e0001f8400 0x613000000780
L1 800000 0x62a0001fe200 0x613000000780
L2 800000 0x62a000204200 0x613000000780
L0 1000000 0x62e0001f8400 0x613000000780
L1 1000000 0x62a0001fe200 0x613000000780
L2 1000000 0x62a000204200 0x613000000780
L0 2000000 0x62e0001f8400 0x613000000780
L1 2000000 0x62a0001fe200 0x613000000780
L2 2000000 0x62a000204200 0x613000000780
L0 4000000 0x62e0001f8400 0x613000000780
L1 4000000 0x62a0001fe200 0x613000000780
L2 4000000 0x62a000204200 0x613000000780
L0 8000000 0x62e0001f8400 0x613000000780
L1 8000000 0x62a0001fe200 0x613000000780
L2 8000000 0x62a000204200 0x613000000780
L0 10000000 0x62e0001f8400 0x613000000780
L1 10000000 0x62a0001fe200 0x613000000780
L2 10000000 0x62a000204200 0x613000000780
L0 20000000 0x62e0001f8400 0x613000000780
L1 20000000 0x62a0001fe200 0x613000000780
L2 20000000 0x62a000204200 0x613000000780
L0 40000000 0x62e0001f8400 0x613000000780
L1 40000000 0x62a0001fe200 0x613000000780
L2 40000000 0x62a000204200 0x613000000780
L0 80000000 0x62e0001f8400 0x613000000780
L1 80000000 0x62a0001fe200 0x613000000780
L2 80000000 0x62a000204200 0x613000000780
L0 100000000 0x62e0001f8400 0x613000000780
L1 100000000 0x62a0001fe200 0x613000000780
L2 100000000 0x62a000204200 0x613000000780
L0 200000000 0x62e0001f8400 0x613000000780
L1 200000000 0x62a0001fe200 0x613000000780
L2 200000000 0x62a000204200 0x613000000780
L0 400000000 0x62e0001f8400 0x613000000780
L1 400000000 0x62a0001fe200 0x613000000780
L2 400000000 0x62a000204200 0x613000000780
L0 800000000 0x62e0001f8400 0x613000000780
L1 800000000 0x62a0001fe200 0x613000000780
L2 800000000 0x62a000204200 0x613000000780
L0 1000000000 0x62e0001f8400 0x613000000780
L1 1000000000 0x62a0001fe200 0x613000000780
L2 1000000000 0x62a000204200 0x613000000780
L0 2000000000 0x62e0001f8400 0x613000000780
L1 2000000000 0x62a0001fe200 0x613000000780
L2 2000000000 0x62a000204200 0x613000000780
L0 4000000000 0x62e0001f8400 0x613000000780
L1 4000000000 0x62a0001fe200 0x613000000780
L2 4000000000 0x62a000204200 0x613000000780
L0 8000000000 0x62e0001f8400 0x613000000780
L1 8000000000 0x62a0001fe200 0x613000000780
L2 8000000000 0x62a000204200 0x613000000780
L0 10000000000 0x62e0001f8400 0x613000000780
L1 10000000000 0x62a0001fe200 0x613000000780
L2 10000000000 0x62a000204200 0x613000000780
L0 20000000000 0x62e0001f8400 0x613000000780
L1 20000000000 0x62a0001fe200 0x613000000780
L2 20000000000 0x62a000204200 0x613000000780
L0 40000000000 0x62e0001f8400 0x613000000780
L1 40000000000 0x62a0001fe200 0x613000000780
L2 40000000000 0x62a000204200 0x613000000780
L0 80000000000 0x62e0001f8400 0x613000000780
L1 80000000000 0x62a0001fe200 0x613000000780
L2 80000000000 0x62a000204200 0x613000000780
L0 100000000000 0x62e0001f8400 0x613000000780
L1 100000000000 0x62a0001fe200 0x613000000780
L2 100000000000 0x62a000204200 0x613000000780
L0 200000000000 0x62e0001f8400 0x613000000780
L1 200000000000 0x62a0001fe200 0x613000000780
L2 200000000000 0x62a000204200 0x613000000780
L0 400000000000 0x62e0001f8400 0x613000000780
L1 400000000000 0x62a0001fe200 0x613000000780
L2 400000000000 0x62a000204200 0x613000000780
L0 800000000000 0x62e0001f8400 0x613000000780
L1 800000000000 0x62a0001fe200 0x613000000780
L2 800000000000 0x62a000204200 0x613000000780
L0 1000000000000 0x62e0001f8400 0x613000000780
L1 1000000000000 0x62a0001fe200 0x613000000780
L2 1000000000000 0x62a000204200 0x613000000780
L0 2000000000000 0x62e0001f8400 0x613000000780
L1 2000000000000 0x62a0001fe200 0x613000000780
L2 2000000000000 0x62a000204200 0x613000000780
L0 4000000000000 0x62e0001f8400 0x613000000780
L1 4000000000000 0x62a0001fe200 0x613000000780
L2 4000000000000 0x62a000204200 0x613000000780
L0 8000000000000 0x62e0001f8400 0x613000000780
L1 8000000000000 0x62a0001fe200 0x613000000780
L2 8000000000000 0x62a000204200 0x613000000780
L0 10000000000000 0x62e0001f8400 0x613000000780
L1 10000000000000 0x62a0001fe200 0x613000000780
L2 10000000000000 0x62a000204200 0x613000000780
L0 20000000000000 0x62e0001f8400 0x613000000780
L1 20000000000000 0x62a0001fe200 0x613000000780
L2 20000000000000 0x62a000204200 0x613000000780
L0 40000000000000 0x62e0001f8400 0x613000000780
L1 40000000000000 0x62a0001fe200 0x613000000780
L2 40000000000000 0x62a000204200 0x613000000780
L0 80000000000000 0x62e0001f8400 0x613000000780
L1 80000000000000 0x62a0001fe200 0x613000000780
L2 80000000000000 0x62a000204200 0x613000000780
L0 100000000000000 0x62e0001f8400 0x613000000780
L1 100000000000000 0x62a0001fe200 0x613000000780
L2 100000000000000 0x62a000204200 0x613000000780
L0 200000000000000 0x62e0001f8400 0x613000000780
L1 200000000000000 0x62a0001fe200 0x613000000780
L2 200000000000000 0x62a000204200 0x613000000780
L0 400000000000000 0x62e0001f8400 0x613000000780
L1 400000000000000 0x62a0001fe200 0x613000000780
L2 400000000000000 0x62a000204200 0x613000000780
L0 800000000000000 0x62e0001f8400 0x613000000780
L1 800000000000000 0x62a0001fe200 0x613000000780
L2 800000000000000 0x62a000204200 0x613000000780
L0 1000000000000000 0x62e0001f8400 0x613000000780
L1 1000000000000000 0x62a0001fe200 0x613000000780
L2 1000000000000000 0x62a000204200 0x613000000780
L0 2000000000000000 0x62e0001f8400 0x613000000780
L1 2000000000000000 0x62a0001fe200 0x613000000780
L2 2000000000000000 0x62a000204200 0x613000000780
L0 4000000000000000 0x62e0001f8400 0x613000000780
[...]
--
Michael GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB
The greatest way to live with honor in this world is to be what we pretend
to be. -- Socrates
Thread overview: 75+ messages
2024-05-11 20:23 [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 02/71] avcodec/ratecontrol: Pass RCContext directly in ff_rate_control_uninit() Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 03/71] avcodec/ratecontrol: Don't call ff_rate_control_uninit() ourselves Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 04/71] avcodec/mpegvideo, ratecontrol: Remove write-only skip_count Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 05/71] avcodec/ratecontrol: Avoid padding in RateControlEntry Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 06/71] avcodec/get_buffer: Remove redundant check Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 07/71] avcodec/mpegpicture: Store linesize in ScratchpadContext Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 08/71] avcodec/mpegvideo_dec: Sync linesize and uvlinesize between threads Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 09/71] avcodec/mpegvideo_dec: Factor allocating dummy frames out Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 10/71] avcodec/mpegpicture: Mark dummy frames as such Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 11/71] avcodec/mpeg12dec: Allocate dummy frames for non-I fields Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 12/71] avcodec/mpegvideo_motion: Remove dead checks for existence of reference Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 13/71] avcodec/mpegvideo_motion: Optimize check away Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 14/71] " Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 15/71] avcodec/mpegvideo_motion: Avoid constant function argument Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 16/71] avcodec/msmpeg4enc: Only calculate coded_cbp when used Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 17/71] avcodec/mpegvideo: Only allocate coded_block when needed Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 18/71] avcodec/mpegvideo: Don't reset coded_block unnecessarily Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 19/71] avcodec/mpegvideo: Only allocate cbp_table, pred_dir_table when needed Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 20/71] avcodec/mpegpicture: Always reset motion val buffer Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 21/71] avcodec/mpegpicture: Always reset mbskip_table Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 22/71] avcodec/mpegvideo: Redo aligning mb_height for VC-1 Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 23/71] avcodec/mpegvideo, mpegpicture: Add buffer pool Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 24/71] avcodec/mpegpicture: Reindent after the previous commit Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 25/71] avcodec/mpegpicture: Use RefStruct-pool API Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 26/71] avcodec/h263: Move encoder-only part out of ff_h263_update_motion_val() Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 27/71] avcodec/h263, mpeg(picture|video): Only allocate mbskip_table for MPEG-4 Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 28/71] avcodec/mpegvideo: Reindent after the previous commit Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 29/71] avcodec/h263: Move setting mbskip_table to decoder/encoders Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 30/71] avcodec/mpegvideo: Restrict resetting mbskip_table to MPEG-4 decoder Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 31/71] avcodec/mpegvideo: Shorten variable names Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 32/71] avcodec/mpegpicture: Reduce value of MAX_PLANES define Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 33/71] avcodec/mpegpicture: Cache AVFrame.data and linesize values Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 34/71] avcodec/rv30, rv34, rv40: Avoid indirection Andreas Rheinhardt
2024-05-11 20:50 ` [FFmpeg-devel] [PATCH v2 35/71] avcodec/mpegvideo: Add const where appropriate Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 36/71] avcodec/vc1_pred: Remove unused function parameter Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 37/71] avcodec/mpegpicture: Improve error messages and code Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 38/71] avcodec/mpegpicture: Split ff_alloc_picture() into check and alloc part Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 39/71] avcodec/mpegvideo_enc: Pass AVFrame*, not Picture* to alloc_picture() Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 40/71] avcodec/mpegvideo_enc: Move copying properties " Andreas Rheinhardt
2024-05-12 19:55 ` Michael Niedermayer
2024-06-08 14:03 ` [FFmpeg-devel] [PATCH v3 " Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 41/71] avcodec/mpegpicture: Rename Picture->MPVPicture Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 42/71] avcodec/vc1_mc: Don't check AVFrame INTERLACE flags Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 43/71] avcodec/mpegpicture: Split MPVPicture into WorkPicture and ordinary Pic Andreas Rheinhardt
2024-06-23 22:28 ` Michael Niedermayer
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 44/71] avcodec/error_resilience: Deduplicate cleanup code Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 45/71] avcodec/mpegvideo_enc: Factor setting length of B frame chain out Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 46/71] avcodec/mpegvideo_enc: Return early when getting length of B frame chain Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 47/71] avcodec/mpegvideo_enc: Reindentation Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 48/71] avcodec/mpeg12dec: Don't initialize inter tables for IPU Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 49/71] avcodec/mpeg12dec: Only initialize IDCT " Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 50/71] avcodec/mpeg12dec: Remove write-only assignment Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 51/71] avcodec/mpeg12dec: Set out_format only once Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 52/71] avformat/riff: Declare VCR2 to be MPEG-2 Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 53/71] avcodec/mpegvideo_dec: Add close function for mpegvideo-decoders Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 54/71] avcodec/mpegpicture: Make MPVPicture refcounted Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 55/71] avcodec/mpeg4videoenc: Avoid branch for writing stuffing Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 56/71] avcodec/mpeg4videoenc: Simplify writing startcodes Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 57/71] avcodec/mpegpicture: Use ThreadProgress instead of ThreadFrame API Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 58/71] avcodec/mpegpicture: Avoid loop and branch when setting motion_val Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 59/71] avcodec/mpegpicture: Use union for b_scratchpad and rd_scratchpad Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 60/71] avcodec/mpegpicture: Avoid MotionEstContext in ff_mpeg_framesize_alloc() Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 61/71] avcodec/mpegvideo_enc: Unify initializing PutBitContexts Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 62/71] avcodec/mpeg12enc: Simplify writing startcodes Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 63/71] avcodec/mpegvideo_dec, rv34: Simplify check for "does pic exist?" Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 64/71] avcodec/mpegvideo_dec: Don't sync encoder-only coded_picture_number Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 65/71] avcodec/mpeg12dec: Pass Mpeg1Context* in mpeg_field_start() Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 66/71] avcodec/mpeg12dec: Don't initialize inter_scantable Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 67/71] avcodec/mpegvideo: Remove pblocks Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 68/71] avcodec/mpegvideo: Use enum for msmpeg4_version Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 69/71] avcodec/ituh263enc: Remove redundant check Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 70/71] avcodec/mpegvideo_enc: Binarize reference Andreas Rheinhardt
2024-05-11 20:51 ` [FFmpeg-devel] [PATCH v2 71/71] avcodec/vc1_pred: Fix indentation Andreas Rheinhardt
2024-06-11 20:59 ` [FFmpeg-devel] [PATCH v2 01/71] avcodec/ratecontrol: Fix double free on error Andreas Rheinhardt
Git Inbox Mirror of the ffmpeg-devel mailing list - see https://ffmpeg.org/mailman/listinfo/ffmpeg-devel