Git Inbox Mirror of the ffmpeg-devel mailing list - see https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
From: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
To: ffmpeg-devel@ffmpeg.org
Subject: Re: [FFmpeg-devel] [PATCH 01/77] avcodec/mpegvideo(_enc)?: Mark init, close functions as av_cold
Date: Wed, 19 Mar 2025 22:20:22 +0100
Message-ID: <AS8P250MB07446BD9DA9523CB9EA6DDD88FD92@AS8P250MB0744.EURP250.PROD.OUTLOOK.COM> (raw)
In-Reply-To: <AS8P250MB0744A8DBC25ABEC4F82344B58FD92@AS8P250MB0744.EURP250.PROD.OUTLOOK.COM>

[-- Attachment #1: Type: text/plain, Size: 348 bytes --]

Andreas Rheinhardt:
> First part of a patchset; the second part will be sent separately
> because the complete set crosses the ML thresholds ("Message body is too
> big: 1731572 bytes with a limit of 1000 KB"). A complete branch can be
> found here: https://github.com/mkver/FFmpeg/tree/mpvenc
> 
> - Andreas

And here is the remainder.

- Andreas

[-- Attachment #2: 0063-avcodec-mpegvideoenc-Add-MPVEncContext.patch --]
[-- Type: text/x-patch, Size: 636549 bytes --]

From 4eec2a3654509741e7a46fb788e4b3004a5f05cc Mon Sep 17 00:00:00 2001
From: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Date: Wed, 19 Mar 2025 08:11:01 +0100
Subject: [PATCH 63/77] avcodec/mpegvideoenc: Add MPVEncContext

Many of the fields of MpegEncContext (which is also used by decoders)
are actually only used by encoders. Therefore this commit adds
a new encoder-only structure and moves all of the encoder-only
fields to it except for those which require more explicit
synchronisation between the main slice context and the other
slice contexts. This synchronisation is currently mainly provided
by ff_update_duplicate_context(), which simply copies most of
the main slice context over the other slice contexts. Fields
which are moved to the new MPVEncContext no longer participate
in this (which is desired, because it is horrible and, for the
fields under b) below, wasteful), which means that some fields can
only be moved once explicit synchronisation code is added in later commits.

More explicitly, this commit moves the following fields:
a) Fields not copied by ff_update_duplicate_context():
dct_error_sum and dct_count; the former does not need synchronisation,
the latter is synchronised in merge_context_after_encode().
b) Fields which do not change after initialisation (these fields
could also be put into MPVMainEncContext at the cost of
an indirection to access them): lambda_table, adaptive_quant,
{luma,chroma}_elim_threshold, new_pic, fdsp, mpvencdsp, pdsp,
{p,b_forw,b_back,b_bidir_forw,b_bidir_back,b_direct,b_field}_mv_table,
[pb]_field_select_table, mb_{type,var,mean}, mc_mb_var, {min,max}_qcoeff,
{inter,intra}_quant_bias, ac_esc_length, the *_vlc_length fields,
the q_{intra,inter,chroma_intra}_matrix{,16}, dct_offset, mb_info,
mjpeg_ctx, rtp_mode, rtp_payload_size, encode_mb, all function
pointers, mpv_flags, quantizer_noise_shaping,
frame_reconstruction_bitfield, error_rate and intra_penalty.
c) Fields which are already (re)set explicitly: The PutBitContexts
pb, tex_pb, pb2; dquant, skipdct, encoding_error, the statistics
fields {mv,i_tex,p_tex,misc,last}_bits and i_count; last_mv_dir,
esc_pos (reset when writing the header).
d) Fields only used by encoders that do not support slice
threading, for which synchronisation doesn't matter: esc3_level_length
and the remaining mb_info fields.
e) coded_score: This field is only really used when FF_MPV_FLAG_CBP_RD
is set (which implies trellis) and even then it is only used for
non-intra blocks. For these blocks, dct_quantize_trellis_c() either
sets coded_score[n] or returns a last_non_zero value of -1
in which case coded_score will be reset in encode_mb_internal().
Therefore no old values are ever used.

The MotionEstContext has not been moved yet.
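
The embedding pattern above (visible throughout the diff as accesses changing
from ctx->m.avctx to ctx->m.c.avctx) can be sketched as follows. This is a
minimal, hypothetical illustration; the struct and field names are simplified
stand-ins, not the real MpegEncContext/MPVEncContext layout:

```c
#include <stddef.h>

/* Shared state, used by decoders and encoders alike
 * (plays the role of MpegEncContext in this sketch). */
typedef struct SharedCtx {
    void *avctx;
    int   mb_width, mb_height, mb_num;
} SharedCtx;

/* Encoder-only context (plays the role of MPVEncContext):
 * it embeds the shared context as its first member "c". */
typedef struct EncCtx {
    SharedCtx c;             /* shared part, reached as s->c.<field> */
    int   rtp_payload_size;  /* examples of encoder-only state       */
    int   intra_quant_bias;
} EncCtx;

/* Encoder code now reaches shared state through the embedded member. */
static int enc_mb_count(const EncCtx *s)
{
    return s->c.mb_width * s->c.mb_height;
}
```

Because the shared part is embedded at offset 0 rather than referenced through
a pointer, encoder-only fields gain no extra indirection, and fields moved into
the outer struct automatically drop out of any wholesale copy of the embedded
shared context between slice contexts.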

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
 libavcodec/aarch64/me_cmp_init_aarch64.c    |   58 +-
 libavcodec/arm/me_cmp_init_arm.c            |   11 +-
 libavcodec/dnxhdenc.c                       |  283 ++-
 libavcodec/dnxhdenc.h                       |    4 +-
 libavcodec/flvenc.c                         |   32 +-
 libavcodec/h261enc.c                        |  104 +-
 libavcodec/h261enc.h                        |    4 +-
 libavcodec/h263enc.h                        |   26 +-
 libavcodec/ituh263enc.c                     |  248 ++-
 libavcodec/me_cmp.c                         |  100 +-
 libavcodec/me_cmp.h                         |    6 +-
 libavcodec/mips/me_cmp_mips.h               |   32 +-
 libavcodec/mips/me_cmp_msa.c                |   28 +-
 libavcodec/mips/mpegvideo_mips.h            |    3 +-
 libavcodec/mips/mpegvideoenc_init_mips.c    |    2 +-
 libavcodec/mips/mpegvideoenc_mmi.c          |    4 +-
 libavcodec/mips/mpegvideoencdsp_init_mips.c |    1 +
 libavcodec/mips/pixblockdsp_init_mips.c     |    1 +
 libavcodec/mips/pixblockdsp_mips.h          |    3 +-
 libavcodec/mjpegenc.c                       |   86 +-
 libavcodec/mjpegenc.h                       |    8 +-
 libavcodec/motion_est.c                     |  499 ++---
 libavcodec/motion_est.h                     |   20 +-
 libavcodec/motion_est_template.c            |  104 +-
 libavcodec/mpeg12enc.c                      |  460 ++--
 libavcodec/mpeg12enc.h                      |   10 +-
 libavcodec/mpeg4videoenc.c                  |  505 +++--
 libavcodec/mpeg4videoenc.h                  |   12 +-
 libavcodec/mpegvideo.c                      |    3 -
 libavcodec/mpegvideo.h                      |  123 +-
 libavcodec/mpegvideo_dec.c                  |    2 +-
 libavcodec/mpegvideo_enc.c                  | 2105 ++++++++++---------
 libavcodec/mpegvideoenc.h                   |  155 +-
 libavcodec/msmpeg4enc.c                     |  156 +-
 libavcodec/msmpeg4enc.h                     |    8 +-
 libavcodec/ppc/me_cmp.c                     |   20 +-
 libavcodec/ratecontrol.c                    |  253 ++-
 libavcodec/riscv/me_cmp_init.c              |   44 +-
 libavcodec/rv10enc.c                        |   16 +-
 libavcodec/rv20enc.c                        |   36 +-
 libavcodec/snow_dwt.c                       |   14 +-
 libavcodec/snow_dwt.h                       |    6 +-
 libavcodec/snowenc.c                        |  128 +-
 libavcodec/speedhqenc.c                     |   28 +-
 libavcodec/speedhqenc.h                     |    6 +-
 libavcodec/svq1enc.c                        |  126 +-
 libavcodec/wmv2enc.c                        |   66 +-
 libavcodec/x86/me_cmp.asm                   |   16 +-
 libavcodec/x86/me_cmp_init.c                |   62 +-
 libavcodec/x86/mpegvideoenc.c               |    9 +-
 libavcodec/x86/mpegvideoenc_template.c      |   38 +-
 tests/checkasm/motion.c                     |    2 +-
 52 files changed, 3046 insertions(+), 3030 deletions(-)

diff --git a/libavcodec/aarch64/me_cmp_init_aarch64.c b/libavcodec/aarch64/me_cmp_init_aarch64.c
index fa2724403d..dac6676886 100644
--- a/libavcodec/aarch64/me_cmp_init_aarch64.c
+++ b/libavcodec/aarch64/me_cmp_init_aarch64.c
@@ -21,66 +21,66 @@
 #include "config.h"
 #include "libavutil/attributes.h"
 #include "libavutil/aarch64/cpu.h"
-#include "libavcodec/mpegvideo.h"
+#include "libavcodec/mpegvideoenc.h"
 
-int ff_pix_abs16_neon(MpegEncContext *s, const uint8_t *blk1, const uint8_t *blk2,
+int ff_pix_abs16_neon(MPVEncContext *s, const uint8_t *blk1, const uint8_t *blk2,
                       ptrdiff_t stride, int h);
-int ff_pix_abs16_xy2_neon(MpegEncContext *s, const uint8_t *blk1, const uint8_t *blk2,
+int ff_pix_abs16_xy2_neon(MPVEncContext *s, const uint8_t *blk1, const uint8_t *blk2,
                           ptrdiff_t stride, int h);
-int ff_pix_abs16_x2_neon(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_pix_abs16_x2_neon(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                          ptrdiff_t stride, int h);
-int ff_pix_abs16_y2_neon(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_pix_abs16_y2_neon(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                          ptrdiff_t stride, int h);
-int ff_pix_abs8_neon(MpegEncContext *s, const uint8_t *blk1, const uint8_t *blk2,
+int ff_pix_abs8_neon(MPVEncContext *s, const uint8_t *blk1, const uint8_t *blk2,
                      ptrdiff_t stride, int h);
 
-int sse16_neon(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int sse16_neon(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                ptrdiff_t stride, int h);
-int sse8_neon(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int sse8_neon(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
               ptrdiff_t stride, int h);
-int sse4_neon(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int sse4_neon(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
               ptrdiff_t stride, int h);
 
-int vsad16_neon(MpegEncContext *c, const uint8_t *s1, const uint8_t *s2,
+int vsad16_neon(MPVEncContext *c, const uint8_t *s1, const uint8_t *s2,
                 ptrdiff_t stride, int h);
-int vsad_intra16_neon(MpegEncContext *c, const uint8_t *s, const uint8_t *dummy,
+int vsad_intra16_neon(MPVEncContext *c, const uint8_t *s, const uint8_t *dummy,
                       ptrdiff_t stride, int h) ;
-int vsad_intra8_neon(MpegEncContext *c, const uint8_t *s, const uint8_t *dummy,
+int vsad_intra8_neon(MPVEncContext *c, const uint8_t *s, const uint8_t *dummy,
                      ptrdiff_t stride, int h) ;
-int vsse16_neon(MpegEncContext *c, const uint8_t *s1, const uint8_t *s2,
+int vsse16_neon(MPVEncContext *c, const uint8_t *s1, const uint8_t *s2,
                 ptrdiff_t stride, int h);
-int vsse_intra16_neon(MpegEncContext *c, const uint8_t *s, const uint8_t *dummy,
+int vsse_intra16_neon(MPVEncContext *c, const uint8_t *s, const uint8_t *dummy,
                       ptrdiff_t stride, int h);
 int nsse16_neon(int multiplier, const uint8_t *s, const uint8_t *s2,
                 ptrdiff_t stride, int h);
-int nsse16_neon_wrapper(MpegEncContext *c, const uint8_t *s1, const uint8_t *s2,
+int nsse16_neon_wrapper(MPVEncContext *c, const uint8_t *s1, const uint8_t *s2,
                         ptrdiff_t stride, int h);
-int pix_median_abs16_neon(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int pix_median_abs16_neon(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                           ptrdiff_t stride, int h);
-int pix_median_abs8_neon(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int pix_median_abs8_neon(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                          ptrdiff_t stride, int h);
-int ff_pix_abs8_x2_neon(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_pix_abs8_x2_neon(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                         ptrdiff_t stride, int h);
-int ff_pix_abs8_y2_neon(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_pix_abs8_y2_neon(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                         ptrdiff_t stride, int h);
-int ff_pix_abs8_xy2_neon(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_pix_abs8_xy2_neon(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                          ptrdiff_t stride, int h);
 
 int nsse8_neon(int multiplier, const uint8_t *s, const uint8_t *s2,
                ptrdiff_t stride, int h);
-int nsse8_neon_wrapper(MpegEncContext *c, const uint8_t *s1, const uint8_t *s2,
+int nsse8_neon_wrapper(MPVEncContext *c, const uint8_t *s1, const uint8_t *s2,
                        ptrdiff_t stride, int h);
 
-int vsse8_neon(MpegEncContext *c, const uint8_t *s1, const uint8_t *s2,
+int vsse8_neon(MPVEncContext *c, const uint8_t *s1, const uint8_t *s2,
                ptrdiff_t stride, int h);
 
-int vsse_intra8_neon(MpegEncContext *c, const uint8_t *s, const uint8_t *dummy,
+int vsse_intra8_neon(MPVEncContext *c, const uint8_t *s, const uint8_t *dummy,
                      ptrdiff_t stride, int h);
 
 #if HAVE_DOTPROD
-int sse16_neon_dotprod(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int sse16_neon_dotprod(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                        ptrdiff_t stride, int h);
-int vsse_intra16_neon_dotprod(MpegEncContext *c, const uint8_t *s1, const uint8_t *s2,
+int vsse_intra16_neon_dotprod(MPVEncContext *c, const uint8_t *s1, const uint8_t *s2,
                               ptrdiff_t stride, int h);
 #endif
 
@@ -129,20 +129,20 @@ av_cold void ff_me_cmp_init_aarch64(MECmpContext *c, AVCodecContext *avctx)
 #endif
 }
 
-int nsse16_neon_wrapper(MpegEncContext *c, const uint8_t *s1, const uint8_t *s2,
+int nsse16_neon_wrapper(MPVEncContext *c, const uint8_t *s1, const uint8_t *s2,
                         ptrdiff_t stride, int h)
 {
     if (c)
-        return nsse16_neon(c->avctx->nsse_weight, s1, s2, stride, h);
+        return nsse16_neon(c->c.avctx->nsse_weight, s1, s2, stride, h);
     else
         return nsse16_neon(8, s1, s2, stride, h);
 }
 
-int nsse8_neon_wrapper(MpegEncContext *c, const uint8_t *s1, const uint8_t *s2,
+int nsse8_neon_wrapper(MPVEncContext *c, const uint8_t *s1, const uint8_t *s2,
                        ptrdiff_t stride, int h)
 {
     if (c)
-        return nsse8_neon(c->avctx->nsse_weight, s1, s2, stride, h);
+        return nsse8_neon(c->c.avctx->nsse_weight, s1, s2, stride, h);
     else
         return nsse8_neon(8, s1, s2, stride, h);
 }
diff --git a/libavcodec/arm/me_cmp_init_arm.c b/libavcodec/arm/me_cmp_init_arm.c
index 8c556f1755..a47e2bc4fa 100644
--- a/libavcodec/arm/me_cmp_init_arm.c
+++ b/libavcodec/arm/me_cmp_init_arm.c
@@ -23,19 +23,18 @@
 #include "libavutil/arm/cpu.h"
 #include "libavcodec/avcodec.h"
 #include "libavcodec/me_cmp.h"
-#include "libavcodec/mpegvideo.h"
 
-int ff_pix_abs16_armv6(MpegEncContext *s, const uint8_t *blk1, const uint8_t *blk2,
+int ff_pix_abs16_armv6(MPVEncContext *s, const uint8_t *blk1, const uint8_t *blk2,
                        ptrdiff_t stride, int h);
-int ff_pix_abs16_x2_armv6(MpegEncContext *s, const uint8_t *blk1, const uint8_t *blk2,
+int ff_pix_abs16_x2_armv6(MPVEncContext *s, const uint8_t *blk1, const uint8_t *blk2,
                           ptrdiff_t stride, int h);
-int ff_pix_abs16_y2_armv6(MpegEncContext *s, const uint8_t *blk1, const uint8_t *blk2,
+int ff_pix_abs16_y2_armv6(MPVEncContext *s, const uint8_t *blk1, const uint8_t *blk2,
                           ptrdiff_t stride, int h);
 
-int ff_pix_abs8_armv6(MpegEncContext *s, const uint8_t *blk1, const uint8_t *blk2,
+int ff_pix_abs8_armv6(MPVEncContext *s, const uint8_t *blk1, const uint8_t *blk2,
                       ptrdiff_t stride, int h);
 
-int ff_sse16_armv6(MpegEncContext *s, const uint8_t *blk1, const uint8_t *blk2,
+int ff_sse16_armv6(MPVEncContext *s, const uint8_t *blk1, const uint8_t *blk2,
                    ptrdiff_t stride, int h);
 
 av_cold void ff_me_cmp_init_arm(MECmpContext *c, AVCodecContext *avctx)
diff --git a/libavcodec/dnxhdenc.c b/libavcodec/dnxhdenc.c
index 6009e392f8..a8f8ab3cd9 100644
--- a/libavcodec/dnxhdenc.c
+++ b/libavcodec/dnxhdenc.c
@@ -117,12 +117,12 @@ void dnxhd_10bit_get_pixels_8x4_sym(int16_t *restrict block,
     memcpy(block + 4 * 8, pixels + 3 * line_size, 8 * sizeof(*block));
 }
 
-static int dnxhd_10bit_dct_quantize_444(MpegEncContext *ctx, int16_t *block,
+static int dnxhd_10bit_dct_quantize_444(MPVEncContext *ctx, int16_t *block,
                                         int n, int qscale, int *overflow)
 {
     int i, j, level, last_non_zero, start_i;
     const int *qmat;
-    const uint8_t *scantable= ctx->intra_scantable.scantable;
+    const uint8_t *scantable = ctx->c.intra_scantable.scantable;
     int bias;
     int max = 0;
     unsigned int threshold1, threshold2;
@@ -169,17 +169,17 @@ static int dnxhd_10bit_dct_quantize_444(MpegEncContext *ctx, int16_t *block,
     *overflow = ctx->max_qcoeff < max; //overflow might have happened
 
     /* we need this permutation so that we correct the IDCT, we only permute the !=0 elements */
-    if (ctx->idsp.perm_type != FF_IDCT_PERM_NONE)
-        ff_block_permute(block, ctx->idsp.idct_permutation,
+    if (ctx->c.idsp.perm_type != FF_IDCT_PERM_NONE)
+        ff_block_permute(block, ctx->c.idsp.idct_permutation,
                          scantable, last_non_zero);
 
     return last_non_zero;
 }
 
-static int dnxhd_10bit_dct_quantize(MpegEncContext *ctx, int16_t *block,
+static int dnxhd_10bit_dct_quantize(MPVEncContext *ctx, int16_t *block,
                                     int n, int qscale, int *overflow)
 {
-    const uint8_t *scantable= ctx->intra_scantable.scantable;
+    const uint8_t *scantable = ctx->c.intra_scantable.scantable;
     const int *qmat = n<4 ? ctx->q_intra_matrix[qscale] : ctx->q_chroma_intra_matrix[qscale];
     int last_non_zero = 0;
     int i;
@@ -200,8 +200,8 @@ static int dnxhd_10bit_dct_quantize(MpegEncContext *ctx, int16_t *block,
     }
 
     /* we need this permutation so that we correct the IDCT, we only permute the !=0 elements */
-    if (ctx->idsp.perm_type != FF_IDCT_PERM_NONE)
-        ff_block_permute(block, ctx->idsp.idct_permutation,
+    if (ctx->c.idsp.perm_type != FF_IDCT_PERM_NONE)
+        ff_block_permute(block, ctx->c.idsp.idct_permutation,
                          scantable, last_non_zero);
 
     return last_non_zero;
@@ -266,34 +266,33 @@ static av_cold int dnxhd_init_qmat(DNXHDEncContext *ctx, int lbias, int cbias)
 {
     // init first elem to 1 to avoid div by 0 in convert_matrix
     uint16_t weight_matrix[64] = { 1, }; // convert_matrix needs uint16_t*
-    int qscale, i;
     const uint8_t *luma_weight_table   = ctx->cid_table->luma_weight;
     const uint8_t *chroma_weight_table = ctx->cid_table->chroma_weight;
 
-    if (!FF_ALLOCZ_TYPED_ARRAY(ctx->qmatrix_l,   ctx->m.avctx->qmax + 1) ||
-        !FF_ALLOCZ_TYPED_ARRAY(ctx->qmatrix_c,   ctx->m.avctx->qmax + 1) ||
-        !FF_ALLOCZ_TYPED_ARRAY(ctx->qmatrix_l16, ctx->m.avctx->qmax + 1) ||
-        !FF_ALLOCZ_TYPED_ARRAY(ctx->qmatrix_c16, ctx->m.avctx->qmax + 1))
+    if (!FF_ALLOCZ_TYPED_ARRAY(ctx->qmatrix_l,   ctx->m.c.avctx->qmax + 1) ||
+        !FF_ALLOCZ_TYPED_ARRAY(ctx->qmatrix_c,   ctx->m.c.avctx->qmax + 1) ||
+        !FF_ALLOCZ_TYPED_ARRAY(ctx->qmatrix_l16, ctx->m.c.avctx->qmax + 1) ||
+        !FF_ALLOCZ_TYPED_ARRAY(ctx->qmatrix_c16, ctx->m.c.avctx->qmax + 1))
         return AVERROR(ENOMEM);
 
     if (ctx->bit_depth == 8) {
-        for (i = 1; i < 64; i++) {
-            int j = ctx->m.idsp.idct_permutation[ff_zigzag_direct[i]];
+        for (int i = 1; i < 64; i++) {
+            int j = ctx->m.c.idsp.idct_permutation[ff_zigzag_direct[i]];
             weight_matrix[j] = ctx->cid_table->luma_weight[i];
         }
         ff_convert_matrix(&ctx->m, ctx->qmatrix_l, ctx->qmatrix_l16,
                           weight_matrix, ctx->intra_quant_bias, 1,
-                          ctx->m.avctx->qmax, 1);
-        for (i = 1; i < 64; i++) {
-            int j = ctx->m.idsp.idct_permutation[ff_zigzag_direct[i]];
+                          ctx->m.c.avctx->qmax, 1);
+        for (int i = 1; i < 64; i++) {
+            int j = ctx->m.c.idsp.idct_permutation[ff_zigzag_direct[i]];
             weight_matrix[j] = ctx->cid_table->chroma_weight[i];
         }
         ff_convert_matrix(&ctx->m, ctx->qmatrix_c, ctx->qmatrix_c16,
                           weight_matrix, ctx->intra_quant_bias, 1,
-                          ctx->m.avctx->qmax, 1);
+                          ctx->m.c.avctx->qmax, 1);
 
-        for (qscale = 1; qscale <= ctx->m.avctx->qmax; qscale++) {
-            for (i = 0; i < 64; i++) {
+        for (int qscale = 1; qscale <= ctx->m.c.avctx->qmax; qscale++) {
+            for (int i = 0; i < 64; i++) {
                 ctx->qmatrix_l[qscale][i]      <<= 2;
                 ctx->qmatrix_c[qscale][i]      <<= 2;
                 ctx->qmatrix_l16[qscale][0][i] <<= 2;
@@ -304,8 +303,8 @@ static av_cold int dnxhd_init_qmat(DNXHDEncContext *ctx, int lbias, int cbias)
         }
     } else {
         // 10-bit
-        for (qscale = 1; qscale <= ctx->m.avctx->qmax; qscale++) {
-            for (i = 1; i < 64; i++) {
+        for (int qscale = 1; qscale <= ctx->m.c.avctx->qmax; qscale++) {
+            for (int i = 1; i < 64; i++) {
                 int j = ff_zigzag_direct[i];
 
                 /* The quantization formula from the VC-3 standard is:
@@ -337,12 +336,12 @@ static av_cold int dnxhd_init_qmat(DNXHDEncContext *ctx, int lbias, int cbias)
 
 static av_cold int dnxhd_init_rc(DNXHDEncContext *ctx)
 {
-    if (!FF_ALLOCZ_TYPED_ARRAY(ctx->mb_rc, (ctx->m.avctx->qmax + 1) * ctx->m.mb_num))
+    if (!FF_ALLOCZ_TYPED_ARRAY(ctx->mb_rc, (ctx->m.c.avctx->qmax + 1) * ctx->m.c.mb_num))
         return AVERROR(ENOMEM);
 
-    if (ctx->m.avctx->mb_decision != FF_MB_DECISION_RD) {
-        if (!FF_ALLOCZ_TYPED_ARRAY(ctx->mb_cmp,     ctx->m.mb_num) ||
-            !FF_ALLOCZ_TYPED_ARRAY(ctx->mb_cmp_tmp, ctx->m.mb_num))
+    if (ctx->m.c.avctx->mb_decision != FF_MB_DECISION_RD) {
+        if (!FF_ALLOCZ_TYPED_ARRAY(ctx->mb_cmp,     ctx->m.c.mb_num) ||
+            !FF_ALLOCZ_TYPED_ARRAY(ctx->mb_cmp_tmp, ctx->m.c.mb_num))
             return AVERROR(ENOMEM);
     }
     ctx->frame_bits = (ctx->coding_unit_size -
@@ -414,21 +413,21 @@ static av_cold int dnxhd_encode_init(AVCodecContext *avctx)
     ctx->cid_table = ff_dnxhd_get_cid_table(ctx->cid);
     av_assert0(ctx->cid_table);
 
-    ctx->m.avctx    = avctx;
-    ctx->m.mb_intra = 1;
-    ctx->m.h263_aic = 1;
+    ctx->m.c.avctx    = avctx;
+    ctx->m.c.mb_intra = 1;
+    ctx->m.c.h263_aic = 1;
 
     avctx->bits_per_raw_sample = ctx->bit_depth;
 
-    ff_blockdsp_init(&ctx->m.bdsp);
+    ff_blockdsp_init(&ctx->m.c.bdsp);
     ff_fdctdsp_init(&ctx->m.fdsp, avctx);
-    ff_mpv_idct_init(&ctx->m);
+    ff_mpv_idct_init(&ctx->m.c);
     ff_mpegvideoencdsp_init(&ctx->m.mpvencdsp, avctx);
     ff_pixblockdsp_init(&ctx->m.pdsp, avctx);
     ff_dct_encode_init(&ctx->m);
 
     if (ctx->profile != AV_PROFILE_DNXHD)
-        ff_videodsp_init(&ctx->m.vdsp, ctx->bit_depth);
+        ff_videodsp_init(&ctx->m.c.vdsp, ctx->bit_depth);
 
     if (ctx->is_444 || ctx->profile == AV_PROFILE_DNXHR_HQX) {
         ctx->m.dct_quantize     = dnxhd_10bit_dct_quantize_444;
@@ -445,12 +444,12 @@ static av_cold int dnxhd_encode_init(AVCodecContext *avctx)
 
     ff_dnxhdenc_init(ctx);
 
-    ctx->m.mb_height = (avctx->height + 15) / 16;
-    ctx->m.mb_width  = (avctx->width  + 15) / 16;
+    ctx->m.c.mb_height = (avctx->height + 15) / 16;
+    ctx->m.c.mb_width  = (avctx->width  + 15) / 16;
 
     if (avctx->flags & AV_CODEC_FLAG_INTERLACED_DCT) {
         ctx->interlaced   = 1;
-        ctx->m.mb_height /= 2;
+        ctx->m.c.mb_height /= 2;
     }
 
     if (ctx->interlaced && ctx->profile != AV_PROFILE_DNXHD) {
@@ -459,7 +458,7 @@ static av_cold int dnxhd_encode_init(AVCodecContext *avctx)
         return AVERROR(EINVAL);
     }
 
-    ctx->m.mb_num = ctx->m.mb_height * ctx->m.mb_width;
+    ctx->m.c.mb_num = ctx->m.c.mb_height * ctx->m.c.mb_width;
 
     if (ctx->cid_table->frame_size == DNXHD_VARIABLE) {
         ctx->frame_size = ff_dnxhd_get_hr_frame_size(ctx->cid,
@@ -471,8 +470,8 @@ static av_cold int dnxhd_encode_init(AVCodecContext *avctx)
         ctx->coding_unit_size = ctx->cid_table->coding_unit_size;
     }
 
-    if (ctx->m.mb_height > 68)
-        ctx->data_offset = 0x170 + (ctx->m.mb_height << 2);
+    if (ctx->m.c.mb_height > 68)
+        ctx->data_offset = 0x170 + (ctx->m.c.mb_height << 2);
     else
         ctx->data_offset = 0x280;
 
@@ -490,10 +489,10 @@ static av_cold int dnxhd_encode_init(AVCodecContext *avctx)
     if ((ret = dnxhd_init_rc(ctx)) < 0)
         return ret;
 
-    if (!FF_ALLOCZ_TYPED_ARRAY(ctx->slice_size, ctx->m.mb_height) ||
-        !FF_ALLOCZ_TYPED_ARRAY(ctx->slice_offs, ctx->m.mb_height) ||
-        !FF_ALLOCZ_TYPED_ARRAY(ctx->mb_bits,    ctx->m.mb_num)    ||
-        !FF_ALLOCZ_TYPED_ARRAY(ctx->mb_qscale,  ctx->m.mb_num))
+    if (!FF_ALLOCZ_TYPED_ARRAY(ctx->slice_size, ctx->m.c.mb_height) ||
+        !FF_ALLOCZ_TYPED_ARRAY(ctx->slice_offs, ctx->m.c.mb_height) ||
+        !FF_ALLOCZ_TYPED_ARRAY(ctx->mb_bits,    ctx->m.c.mb_num)    ||
+        !FF_ALLOCZ_TYPED_ARRAY(ctx->mb_qscale,  ctx->m.c.mb_num))
         return AVERROR(ENOMEM);
 
     if (avctx->active_thread_type == FF_THREAD_SLICE) {
@@ -548,8 +547,8 @@ static int dnxhd_write_header(AVCodecContext *avctx, uint8_t *buf)
     buf[0x5f] = 0x01; // UDL
 
     buf[0x167] = 0x02; // reserved
-    AV_WB16(buf + 0x16a, ctx->m.mb_height * 4 + 4); // MSIPS
-    AV_WB16(buf + 0x16c, ctx->m.mb_height); // Ns
+    AV_WB16(buf + 0x16a, ctx->m.c.mb_height * 4 + 4); // MSIPS
+    AV_WB16(buf + 0x16c, ctx->m.c.mb_height); // Ns
     buf[0x16f] = 0x10; // reserved
 
     ctx->msip = buf + 0x170;
@@ -577,11 +576,11 @@ void dnxhd_encode_block(PutBitContext *pb, DNXHDEncContext *ctx,
     int last_non_zero = 0;
     int slevel, i, j;
 
-    dnxhd_encode_dc(pb, ctx, block[0] - ctx->m.last_dc[n]);
-    ctx->m.last_dc[n] = block[0];
+    dnxhd_encode_dc(pb, ctx, block[0] - ctx->m.c.last_dc[n]);
+    ctx->m.c.last_dc[n] = block[0];
 
     for (i = 1; i <= last_index; i++) {
-        j = ctx->m.intra_scantable.permutated[i];
+        j = ctx->m.c.intra_scantable.permutated[i];
         slevel = block[j];
         if (slevel) {
             int run_level = i - last_non_zero - 1;
@@ -613,7 +612,7 @@ void dnxhd_unquantize_c(DNXHDEncContext *ctx, int16_t *block, int n,
     }
 
     for (i = 1; i <= last_index; i++) {
-        int j = ctx->m.intra_scantable.permutated[i];
+        int j = ctx->m.c.intra_scantable.permutated[i];
         level = block[j];
         if (level) {
             if (level < 0) {
@@ -661,7 +660,7 @@ int dnxhd_calc_ac_bits(DNXHDEncContext *ctx, int16_t *block, int last_index)
     int bits = 0;
     int i, j, level;
     for (i = 1; i <= last_index; i++) {
-        j = ctx->m.intra_scantable.permutated[i];
+        j = ctx->m.c.intra_scantable.permutated[i];
         level = block[j];
         if (level) {
             int run_level = i - last_non_zero - 1;
@@ -680,36 +679,36 @@ void dnxhd_get_blocks(DNXHDEncContext *ctx, int mb_x, int mb_y)
     const int bw = 1 << bs;
     int dct_y_offset = ctx->dct_y_offset;
     int dct_uv_offset = ctx->dct_uv_offset;
-    int linesize = ctx->m.linesize;
-    int uvlinesize = ctx->m.uvlinesize;
+    int linesize = ctx->m.c.linesize;
+    int uvlinesize = ctx->m.c.uvlinesize;
     const uint8_t *ptr_y = ctx->thread[0]->src[0] +
-                           ((mb_y << 4) * ctx->m.linesize) + (mb_x << bs + 1);
+                           ((mb_y << 4) * ctx->m.c.linesize) + (mb_x << bs + 1);
     const uint8_t *ptr_u = ctx->thread[0]->src[1] +
-                           ((mb_y << 4) * ctx->m.uvlinesize) + (mb_x << bs + ctx->is_444);
+                           ((mb_y << 4) * ctx->m.c.uvlinesize) + (mb_x << bs + ctx->is_444);
     const uint8_t *ptr_v = ctx->thread[0]->src[2] +
-                           ((mb_y << 4) * ctx->m.uvlinesize) + (mb_x << bs + ctx->is_444);
+                           ((mb_y << 4) * ctx->m.c.uvlinesize) + (mb_x << bs + ctx->is_444);
     PixblockDSPContext *pdsp = &ctx->m.pdsp;
-    VideoDSPContext *vdsp = &ctx->m.vdsp;
+    VideoDSPContext *vdsp = &ctx->m.c.vdsp;
 
-    if (ctx->bit_depth != 10 && vdsp->emulated_edge_mc && ((mb_x << 4) + 16 > ctx->m.avctx->width ||
-                                                           (mb_y << 4) + 16 > ctx->m.avctx->height)) {
-        int y_w = ctx->m.avctx->width  - (mb_x << 4);
-        int y_h = ctx->m.avctx->height - (mb_y << 4);
+    if (ctx->bit_depth != 10 && vdsp->emulated_edge_mc && ((mb_x << 4) + 16 > ctx->m.c.avctx->width ||
+                                                           (mb_y << 4) + 16 > ctx->m.c.avctx->height)) {
+        int y_w = ctx->m.c.avctx->width  - (mb_x << 4);
+        int y_h = ctx->m.c.avctx->height - (mb_y << 4);
         int uv_w = (y_w + 1) / 2;
         int uv_h = y_h;
         linesize = 16;
         uvlinesize = 8;
 
         vdsp->emulated_edge_mc(&ctx->edge_buf_y[0], ptr_y,
-                               linesize, ctx->m.linesize,
+                               linesize, ctx->m.c.linesize,
                                linesize, 16,
                                0, 0, y_w, y_h);
         vdsp->emulated_edge_mc(&ctx->edge_buf_uv[0][0], ptr_u,
-                               uvlinesize, ctx->m.uvlinesize,
+                               uvlinesize, ctx->m.c.uvlinesize,
                                uvlinesize, 16,
                                0, 0, uv_w, uv_h);
         vdsp->emulated_edge_mc(&ctx->edge_buf_uv[1][0], ptr_v,
-                               uvlinesize, ctx->m.uvlinesize,
+                               uvlinesize, ctx->m.c.uvlinesize,
                                uvlinesize, 16,
                                0, 0, uv_w, uv_h);
 
@@ -718,25 +717,25 @@ void dnxhd_get_blocks(DNXHDEncContext *ctx, int mb_x, int mb_y)
         ptr_y = &ctx->edge_buf_y[0];
         ptr_u = &ctx->edge_buf_uv[0][0];
         ptr_v = &ctx->edge_buf_uv[1][0];
-    } else if (ctx->bit_depth == 10 && vdsp->emulated_edge_mc && ((mb_x << 4) + 16 > ctx->m.avctx->width ||
-                                                                  (mb_y << 4) + 16 > ctx->m.avctx->height)) {
-        int y_w = ctx->m.avctx->width  - (mb_x << 4);
-        int y_h = ctx->m.avctx->height - (mb_y << 4);
+    } else if (ctx->bit_depth == 10 && vdsp->emulated_edge_mc && ((mb_x << 4) + 16 > ctx->m.c.avctx->width ||
+                                                                  (mb_y << 4) + 16 > ctx->m.c.avctx->height)) {
+        int y_w = ctx->m.c.avctx->width  - (mb_x << 4);
+        int y_h = ctx->m.c.avctx->height - (mb_y << 4);
         int uv_w = ctx->is_444 ? y_w : (y_w + 1) / 2;
         int uv_h = y_h;
         linesize = 32;
         uvlinesize = 16 + 16 * ctx->is_444;
 
         vdsp->emulated_edge_mc(&ctx->edge_buf_y[0], ptr_y,
-                               linesize, ctx->m.linesize,
+                               linesize, ctx->m.c.linesize,
                                linesize / 2, 16,
                                0, 0, y_w, y_h);
         vdsp->emulated_edge_mc(&ctx->edge_buf_uv[0][0], ptr_u,
-                               uvlinesize, ctx->m.uvlinesize,
+                               uvlinesize, ctx->m.c.uvlinesize,
                                uvlinesize / 2, 16,
                                0, 0, uv_w, uv_h);
         vdsp->emulated_edge_mc(&ctx->edge_buf_uv[1][0], ptr_v,
-                               uvlinesize, ctx->m.uvlinesize,
+                               uvlinesize, ctx->m.c.uvlinesize,
                                uvlinesize / 2, 16,
                                0, 0, uv_w, uv_h);
 
@@ -753,7 +752,7 @@ void dnxhd_get_blocks(DNXHDEncContext *ctx, int mb_x, int mb_y)
         pdsp->get_pixels(ctx->blocks[2], ptr_u,      uvlinesize);
         pdsp->get_pixels(ctx->blocks[3], ptr_v,      uvlinesize);
 
-        if (mb_y + 1 == ctx->m.mb_height && ctx->m.avctx->height == 1080) {
+        if (mb_y + 1 == ctx->m.c.mb_height && ctx->m.c.avctx->height == 1080) {
             if (ctx->interlaced) {
                 ctx->get_pixels_8x4_sym(ctx->blocks[4],
                                         ptr_y + dct_y_offset,
@@ -768,10 +767,10 @@ void dnxhd_get_blocks(DNXHDEncContext *ctx, int mb_x, int mb_y)
                                         ptr_v + dct_uv_offset,
                                         uvlinesize);
             } else {
-                ctx->m.bdsp.clear_block(ctx->blocks[4]);
-                ctx->m.bdsp.clear_block(ctx->blocks[5]);
-                ctx->m.bdsp.clear_block(ctx->blocks[6]);
-                ctx->m.bdsp.clear_block(ctx->blocks[7]);
+                ctx->m.c.bdsp.clear_block(ctx->blocks[4]);
+                ctx->m.c.bdsp.clear_block(ctx->blocks[5]);
+                ctx->m.c.bdsp.clear_block(ctx->blocks[6]);
+                ctx->m.c.bdsp.clear_block(ctx->blocks[7]);
             }
         } else {
             pdsp->get_pixels(ctx->blocks[4],
@@ -819,17 +818,17 @@ static int dnxhd_calc_bits_thread(AVCodecContext *avctx, void *arg,
                                   int jobnr, int threadnr)
 {
     DNXHDEncContext *ctx = avctx->priv_data;
-    int mb_y = jobnr, mb_x;
+    int mb_y = jobnr;
     int qscale = ctx->qscale;
     LOCAL_ALIGNED_16(int16_t, block, [64]);
     ctx = ctx->thread[threadnr];
 
-    ctx->m.last_dc[0] =
-    ctx->m.last_dc[1] =
-    ctx->m.last_dc[2] = 1 << (ctx->bit_depth + 2);
+    ctx->m.c.last_dc[0] =
+    ctx->m.c.last_dc[1] =
+    ctx->m.c.last_dc[2] = 1 << (ctx->bit_depth + 2);
 
-    for (mb_x = 0; mb_x < ctx->m.mb_width; mb_x++) {
-        unsigned mb = mb_y * ctx->m.mb_width + mb_x;
+    for (int mb_x = 0; mb_x < ctx->m.c.mb_width; mb_x++) {
+        unsigned mb = mb_y * ctx->m.c.mb_width + mb_x;
         int ssd     = 0;
         int ac_bits = 0;
         int dc_bits = 0;
@@ -848,7 +847,7 @@ static int dnxhd_calc_bits_thread(AVCodecContext *avctx, void *arg,
                                              qscale, &overflow);
             ac_bits   += dnxhd_calc_ac_bits(ctx, block, last_index);
 
-            diff = block[0] - ctx->m.last_dc[n];
+            diff = block[0] - ctx->m.c.last_dc[n];
             if (diff < 0)
                 nbits = av_log2_16bit(-2 * diff);
             else
@@ -857,16 +856,16 @@ static int dnxhd_calc_bits_thread(AVCodecContext *avctx, void *arg,
             av_assert1(nbits < ctx->bit_depth + 4);
             dc_bits += ctx->cid_table->dc_bits[nbits] + nbits;
 
-            ctx->m.last_dc[n] = block[0];
+            ctx->m.c.last_dc[n] = block[0];
 
             if (avctx->mb_decision == FF_MB_DECISION_RD || !RC_VARIANCE) {
                 dnxhd_unquantize_c(ctx, block, i, qscale, last_index);
-                ctx->m.idsp.idct(block);
+                ctx->m.c.idsp.idct(block);
                 ssd += dnxhd_ssd_block(block, src_block);
             }
         }
-        ctx->mb_rc[(qscale * ctx->m.mb_num) + mb].ssd  = ssd;
-        ctx->mb_rc[(qscale * ctx->m.mb_num) + mb].bits = ac_bits + dc_bits + 12 +
+        ctx->mb_rc[(qscale * ctx->m.c.mb_num) + mb].ssd  = ssd;
+        ctx->mb_rc[(qscale * ctx->m.c.mb_num) + mb].bits = ac_bits + dc_bits + 12 +
                                      (1 + ctx->is_444) * 8 * ctx->vlc_bits[0];
     }
     return 0;
@@ -877,16 +876,16 @@ static int dnxhd_encode_thread(AVCodecContext *avctx, void *arg,
 {
     DNXHDEncContext *ctx = avctx->priv_data;
     PutBitContext pb0, *const pb = &pb0;
-    int mb_y = jobnr, mb_x;
+    int mb_y = jobnr;
     ctx = ctx->thread[threadnr];
     init_put_bits(pb, (uint8_t *)arg + ctx->data_offset + ctx->slice_offs[jobnr],
                   ctx->slice_size[jobnr]);
 
-    ctx->m.last_dc[0] =
-    ctx->m.last_dc[1] =
-    ctx->m.last_dc[2] = 1 << (ctx->bit_depth + 2);
-    for (mb_x = 0; mb_x < ctx->m.mb_width; mb_x++) {
-        unsigned mb = mb_y * ctx->m.mb_width + mb_x;
+    ctx->m.c.last_dc[0] =
+    ctx->m.c.last_dc[1] =
+    ctx->m.c.last_dc[2] = 1 << (ctx->bit_depth + 2);
+    for (int mb_x = 0; mb_x < ctx->m.c.mb_width; mb_x++) {
+        unsigned mb = mb_y * ctx->m.c.mb_width + mb_x;
         int qscale = ctx->mb_qscale[mb];
         int i;
 
@@ -912,14 +911,12 @@ static int dnxhd_encode_thread(AVCodecContext *avctx, void *arg,
 
 static void dnxhd_setup_threads_slices(DNXHDEncContext *ctx)
 {
-    int mb_y, mb_x;
-    int offset = 0;
-    for (mb_y = 0; mb_y < ctx->m.mb_height; mb_y++) {
+    for (int mb_y = 0, offset = 0; mb_y < ctx->m.c.mb_height; mb_y++) {
         int thread_size;
         ctx->slice_offs[mb_y] = offset;
         ctx->slice_size[mb_y] = 0;
-        for (mb_x = 0; mb_x < ctx->m.mb_width; mb_x++) {
-            unsigned mb = mb_y * ctx->m.mb_width + mb_x;
+        for (int mb_x = 0; mb_x < ctx->m.c.mb_width; mb_x++) {
+            unsigned mb = mb_y * ctx->m.c.mb_width + mb_x;
             ctx->slice_size[mb_y] += ctx->mb_bits[mb];
         }
         ctx->slice_size[mb_y]   = (ctx->slice_size[mb_y] + 31U) & ~31U;
@@ -933,28 +930,28 @@ static int dnxhd_mb_var_thread(AVCodecContext *avctx, void *arg,
                                int jobnr, int threadnr)
 {
     DNXHDEncContext *ctx = avctx->priv_data;
-    int mb_y = jobnr, mb_x, x, y;
-    int partial_last_row = (mb_y == ctx->m.mb_height - 1) &&
+    int mb_y = jobnr, x, y;
+    int partial_last_row = (mb_y == ctx->m.c.mb_height - 1) &&
                            ((avctx->height >> ctx->interlaced) & 0xF);
 
     ctx = ctx->thread[threadnr];
     if (ctx->bit_depth == 8) {
-        const uint8_t *pix = ctx->thread[0]->src[0] + ((mb_y << 4) * ctx->m.linesize);
-        for (mb_x = 0; mb_x < ctx->m.mb_width; ++mb_x, pix += 16) {
-            unsigned mb = mb_y * ctx->m.mb_width + mb_x;
+        const uint8_t *pix = ctx->thread[0]->src[0] + ((mb_y << 4) * ctx->m.c.linesize);
+        for (int mb_x = 0; mb_x < ctx->m.c.mb_width; ++mb_x, pix += 16) {
+            unsigned mb = mb_y * ctx->m.c.mb_width + mb_x;
             int sum;
             int varc;
 
             if (!partial_last_row && mb_x * 16 <= avctx->width - 16 && (avctx->width % 16) == 0) {
-                sum  = ctx->m.mpvencdsp.pix_sum(pix, ctx->m.linesize);
-                varc = ctx->m.mpvencdsp.pix_norm1(pix, ctx->m.linesize);
+                sum  = ctx->m.mpvencdsp.pix_sum(pix, ctx->m.c.linesize);
+                varc = ctx->m.mpvencdsp.pix_norm1(pix, ctx->m.c.linesize);
             } else {
                 int bw = FFMIN(avctx->width - 16 * mb_x, 16);
                 int bh = FFMIN((avctx->height >> ctx->interlaced) - 16 * mb_y, 16);
                 sum = varc = 0;
                 for (y = 0; y < bh; y++) {
                     for (x = 0; x < bw; x++) {
-                        uint8_t val = pix[x + y * ctx->m.linesize];
+                        uint8_t val = pix[x + y * ctx->m.c.linesize];
                         sum  += val;
                         varc += val * val;
                     }
@@ -966,11 +963,11 @@ static int dnxhd_mb_var_thread(AVCodecContext *avctx, void *arg,
             ctx->mb_cmp[mb].mb    = mb;
         }
     } else { // 10-bit
-        const int linesize = ctx->m.linesize >> 1;
-        for (mb_x = 0; mb_x < ctx->m.mb_width; ++mb_x) {
+        const int linesize = ctx->m.c.linesize >> 1;
+        for (int mb_x = 0; mb_x < ctx->m.c.mb_width; ++mb_x) {
             const uint16_t *pix = (const uint16_t *)ctx->thread[0]->src[0] +
                                      ((mb_y << 4) * linesize) + (mb_x << 4);
-            unsigned mb  = mb_y * ctx->m.mb_width + mb_x;
+            unsigned mb  = mb_y * ctx->m.c.mb_width + mb_x;
             int sum = 0;
             int sqsum = 0;
             int bw = FFMIN(avctx->width - 16 * mb_x, 16);
@@ -1001,12 +998,11 @@ static int dnxhd_encode_rdo(AVCodecContext *avctx, DNXHDEncContext *ctx)
 {
     int lambda, up_step, down_step;
     int last_lower = INT_MAX, last_higher = 0;
-    int x, y, q;
 
-    for (q = 1; q < avctx->qmax; q++) {
+    for (int q = 1; q < avctx->qmax; q++) {
         ctx->qscale = q;
         avctx->execute2(avctx, dnxhd_calc_bits_thread,
-                        NULL, NULL, ctx->m.mb_height);
+                        NULL, NULL, ctx->m.c.mb_height);
     }
     up_step = down_step = 2 << LAMBDA_FRAC_BITS;
     lambda  = ctx->lambda;
@@ -1018,14 +1014,14 @@ static int dnxhd_encode_rdo(AVCodecContext *avctx, DNXHDEncContext *ctx)
             lambda++;
             end = 1; // need to set final qscales/bits
         }
-        for (y = 0; y < ctx->m.mb_height; y++) {
-            for (x = 0; x < ctx->m.mb_width; x++) {
+        for (int y = 0; y < ctx->m.c.mb_height; y++) {
+            for (int x = 0; x < ctx->m.c.mb_width; x++) {
                 unsigned min = UINT_MAX;
                 int qscale = 1;
-                int mb     = y * ctx->m.mb_width + x;
+                int mb     = y * ctx->m.c.mb_width + x;
                 int rc = 0;
-                for (q = 1; q < avctx->qmax; q++) {
-                    int i = (q*ctx->m.mb_num) + mb;
+                for (int q = 1; q < avctx->qmax; q++) {
+                    int i = (q*ctx->m.c.mb_num) + mb;
                     unsigned score = ctx->mb_rc[i].bits * lambda +
                                      ((unsigned) ctx->mb_rc[i].ssd << LAMBDA_FRAC_BITS);
                     if (score < min) {
@@ -1082,18 +1078,17 @@ static int dnxhd_find_qscale(DNXHDEncContext *ctx)
     int last_higher = 0;
     int last_lower = INT_MAX;
     int qscale;
-    int x, y;
 
     qscale = ctx->qscale;
     for (;;) {
         bits = 0;
         ctx->qscale = qscale;
         // XXX avoid recalculating bits
-        ctx->m.avctx->execute2(ctx->m.avctx, dnxhd_calc_bits_thread,
-                               NULL, NULL, ctx->m.mb_height);
-        for (y = 0; y < ctx->m.mb_height; y++) {
-            for (x = 0; x < ctx->m.mb_width; x++)
-                bits += ctx->mb_rc[(qscale*ctx->m.mb_num) + (y*ctx->m.mb_width+x)].bits;
+        ctx->m.c.avctx->execute2(ctx->m.c.avctx, dnxhd_calc_bits_thread,
+                               NULL, NULL, ctx->m.c.mb_height);
+        for (int y = 0; y < ctx->m.c.mb_height; y++) {
+            for (int x = 0; x < ctx->m.c.mb_width; x++)
+                bits += ctx->mb_rc[(qscale*ctx->m.c.mb_num) + (y*ctx->m.c.mb_width+x)].bits;
             bits = (bits+31)&~31; // padding
             if (bits > ctx->frame_bits)
                 break;
@@ -1122,7 +1117,7 @@ static int dnxhd_find_qscale(DNXHDEncContext *ctx)
             else
                 qscale += up_step++;
             down_step = 1;
-            if (qscale >= ctx->m.avctx->qmax)
+            if (qscale >= ctx->m.c.avctx->qmax)
                 return AVERROR(EINVAL);
         }
     }
@@ -1189,24 +1184,24 @@ static void radix_sort(RCCMPEntry *data, RCCMPEntry *tmp, int size)
 static int dnxhd_encode_fast(AVCodecContext *avctx, DNXHDEncContext *ctx)
 {
     int max_bits = 0;
-    int ret, x, y;
+    int ret;
     if ((ret = dnxhd_find_qscale(ctx)) < 0)
         return ret;
-    for (y = 0; y < ctx->m.mb_height; y++) {
-        for (x = 0; x < ctx->m.mb_width; x++) {
-            int mb = y * ctx->m.mb_width + x;
-            int rc = (ctx->qscale * ctx->m.mb_num ) + mb;
+    for (int y = 0; y < ctx->m.c.mb_height; y++) {
+        for (int x = 0; x < ctx->m.c.mb_width; x++) {
+            int mb = y * ctx->m.c.mb_width + x;
+            int rc = (ctx->qscale * ctx->m.c.mb_num ) + mb;
             int delta_bits;
             ctx->mb_qscale[mb] = ctx->qscale;
             ctx->mb_bits[mb] = ctx->mb_rc[rc].bits;
             max_bits += ctx->mb_rc[rc].bits;
             if (!RC_VARIANCE) {
                 delta_bits = ctx->mb_rc[rc].bits -
-                             ctx->mb_rc[rc + ctx->m.mb_num].bits;
+                             ctx->mb_rc[rc + ctx->m.c.mb_num].bits;
                 ctx->mb_cmp[mb].mb = mb;
                 ctx->mb_cmp[mb].value =
                     delta_bits ? ((ctx->mb_rc[rc].ssd -
-                                   ctx->mb_rc[rc + ctx->m.mb_num].ssd) * 100) /
+                                   ctx->mb_rc[rc + ctx->m.c.mb_num].ssd) * 100) /
                                   delta_bits
                                : INT_MIN; // avoid increasing qscale
             }
@@ -1216,17 +1211,17 @@ static int dnxhd_encode_fast(AVCodecContext *avctx, DNXHDEncContext *ctx)
     if (!ret) {
         if (RC_VARIANCE)
             avctx->execute2(avctx, dnxhd_mb_var_thread,
-                            NULL, NULL, ctx->m.mb_height);
-        radix_sort(ctx->mb_cmp, ctx->mb_cmp_tmp, ctx->m.mb_num);
+                            NULL, NULL, ctx->m.c.mb_height);
+        radix_sort(ctx->mb_cmp, ctx->mb_cmp_tmp, ctx->m.c.mb_num);
 retry:
-        for (x = 0; x < ctx->m.mb_num && max_bits > ctx->frame_bits; x++) {
+        for (int x = 0; x < ctx->m.c.mb_num && max_bits > ctx->frame_bits; x++) {
             int mb = ctx->mb_cmp[x].mb;
-            int rc = (ctx->qscale * ctx->m.mb_num ) + mb;
+            int rc = (ctx->qscale * ctx->m.c.mb_num ) + mb;
             max_bits -= ctx->mb_rc[rc].bits -
-                        ctx->mb_rc[rc + ctx->m.mb_num].bits;
+                        ctx->mb_rc[rc + ctx->m.c.mb_num].bits;
             if (ctx->mb_qscale[mb] < 255)
                 ctx->mb_qscale[mb]++;
-            ctx->mb_bits[mb]   = ctx->mb_rc[rc + ctx->m.mb_num].bits;
+            ctx->mb_bits[mb]   = ctx->mb_rc[rc + ctx->m.c.mb_num].bits;
         }
 
         if (max_bits > ctx->frame_bits)
@@ -1237,13 +1232,11 @@ retry:
 
 static void dnxhd_load_picture(DNXHDEncContext *ctx, const AVFrame *frame)
 {
-    int i;
-
-    for (i = 0; i < ctx->m.avctx->thread_count; i++) {
-        ctx->thread[i]->m.linesize    = frame->linesize[0] << ctx->interlaced;
-        ctx->thread[i]->m.uvlinesize  = frame->linesize[1] << ctx->interlaced;
-        ctx->thread[i]->dct_y_offset  = ctx->m.linesize  *8;
-        ctx->thread[i]->dct_uv_offset = ctx->m.uvlinesize*8;
+    for (int i = 0; i < ctx->m.c.avctx->thread_count; i++) {
+        ctx->thread[i]->m.c.linesize    = frame->linesize[0] << ctx->interlaced;
+        ctx->thread[i]->m.c.uvlinesize  = frame->linesize[1] << ctx->interlaced;
+        ctx->thread[i]->dct_y_offset    = ctx->m.c.linesize  *8;
+        ctx->thread[i]->dct_uv_offset   = ctx->m.c.uvlinesize*8;
     }
 
     ctx->cur_field = (frame->flags & AV_FRAME_FLAG_INTERLACED) &&
@@ -1286,13 +1279,13 @@ encode_coding_unit:
     dnxhd_setup_threads_slices(ctx);
 
     offset = 0;
-    for (i = 0; i < ctx->m.mb_height; i++) {
+    for (i = 0; i < ctx->m.c.mb_height; i++) {
         AV_WB32(ctx->msip + i * 4, offset);
         offset += ctx->slice_size[i];
         av_assert1(!(ctx->slice_size[i] & 3));
     }
 
-    avctx->execute2(avctx, dnxhd_encode_thread, buf, NULL, ctx->m.mb_height);
+    avctx->execute2(avctx, dnxhd_encode_thread, buf, NULL, ctx->m.c.mb_height);
 
     av_assert1(ctx->data_offset + offset + 4 <= ctx->coding_unit_size);
     memset(buf + ctx->data_offset + offset, 0,
diff --git a/libavcodec/dnxhdenc.h b/libavcodec/dnxhdenc.h
index 00d486babd..7540607cdc 100644
--- a/libavcodec/dnxhdenc.h
+++ b/libavcodec/dnxhdenc.h
@@ -28,7 +28,7 @@
 
 #include "libavutil/mem_internal.h"
 
-#include "mpegvideo.h"
+#include "mpegvideoenc.h"
 #include "dnxhddata.h"
 
 typedef struct RCCMPEntry {
@@ -43,7 +43,7 @@ typedef struct RCEntry {
 
 typedef struct DNXHDEncContext {
     AVClass *class;
-    MpegEncContext m; ///< Used for quantization dsp functions
+    MPVEncContext m; ///< Used for quantization dsp functions
 
     int cid;
     int profile;
diff --git a/libavcodec/flvenc.c b/libavcodec/flvenc.c
index b4a30fe558..df1a650222 100644
--- a/libavcodec/flvenc.c
+++ b/libavcodec/flvenc.c
@@ -25,42 +25,42 @@
 
 int ff_flv_encode_picture_header(MPVMainEncContext *const m)
 {
-    MpegEncContext *const s = &m->s;
+    MPVEncContext *const s = &m->s;
     int format;
 
     align_put_bits(&s->pb);
 
     put_bits(&s->pb, 17, 1);
     /* 0: H.263 escape codes 1: 11-bit escape codes */
-    put_bits(&s->pb, 5, (s->h263_flv - 1));
+    put_bits(&s->pb, 5, (s->c.h263_flv - 1));
     put_bits(&s->pb, 8,
-             (((int64_t) s->picture_number * 30 * s->avctx->time_base.num) /   // FIXME use timestamp
-              s->avctx->time_base.den) & 0xff);   /* TemporalReference */
-    if (s->width == 352 && s->height == 288)
+             (((int64_t) s->c.picture_number * 30 * s->c.avctx->time_base.num) /   // FIXME use timestamp
+              s->c.avctx->time_base.den) & 0xff);   /* TemporalReference */
+    if (s->c.width == 352 && s->c.height == 288)
         format = 2;
-    else if (s->width == 176 && s->height == 144)
+    else if (s->c.width == 176 && s->c.height == 144)
         format = 3;
-    else if (s->width == 128 && s->height == 96)
+    else if (s->c.width == 128 && s->c.height == 96)
         format = 4;
-    else if (s->width == 320 && s->height == 240)
+    else if (s->c.width == 320 && s->c.height == 240)
         format = 5;
-    else if (s->width == 160 && s->height == 120)
+    else if (s->c.width == 160 && s->c.height == 120)
         format = 6;
-    else if (s->width <= 255 && s->height <= 255)
+    else if (s->c.width <= 255 && s->c.height <= 255)
         format = 0;   /* use 1 byte width & height */
     else
         format = 1;   /* use 2 bytes width & height */
     put_bits(&s->pb, 3, format);   /* PictureSize */
     if (format == 0) {
-        put_bits(&s->pb, 8, s->width);
-        put_bits(&s->pb, 8, s->height);
+        put_bits(&s->pb, 8, s->c.width);
+        put_bits(&s->pb, 8, s->c.height);
     } else if (format == 1) {
-        put_bits(&s->pb, 16, s->width);
-        put_bits(&s->pb, 16, s->height);
+        put_bits(&s->pb, 16, s->c.width);
+        put_bits(&s->pb, 16, s->c.height);
     }
-    put_bits(&s->pb, 2, s->pict_type == AV_PICTURE_TYPE_P);   /* PictureType */
+    put_bits(&s->pb, 2, s->c.pict_type == AV_PICTURE_TYPE_P);   /* PictureType */
     put_bits(&s->pb, 1, 1);   /* DeblockingFlag: on */
-    put_bits(&s->pb, 5, s->qscale);   /* Quantizer */
+    put_bits(&s->pb, 5, s->c.qscale);   /* Quantizer */
     put_bits(&s->pb, 1, 0);   /* ExtraInformation */
 
     return 0;
diff --git a/libavcodec/h261enc.c b/libavcodec/h261enc.c
index da8736a78c..7c3c8752df 100644
--- a/libavcodec/h261enc.c
+++ b/libavcodec/h261enc.c
@@ -69,20 +69,20 @@ typedef struct H261EncContext {
 static int h261_encode_picture_header(MPVMainEncContext *const m)
 {
     H261EncContext *const h = (H261EncContext *)m;
-    MpegEncContext *const s = &h->s.s;
+    MPVEncContext *const s = &h->s.s;
     int temp_ref;
 
     align_put_bits(&s->pb);
 
     put_bits(&s->pb, 20, 0x10); /* PSC */
 
-    temp_ref = s->picture_number * 30000LL * s->avctx->time_base.num /
-               (1001LL * s->avctx->time_base.den);   // FIXME maybe this should use a timestamp
+    temp_ref = s->c.picture_number * 30000LL * s->c.avctx->time_base.num /
+               (1001LL * s->c.avctx->time_base.den);   // FIXME maybe this should use a timestamp
     put_sbits(&s->pb, 5, temp_ref); /* TemporalReference */
 
     put_bits(&s->pb, 1, 0); /* split screen off */
     put_bits(&s->pb, 1, 0); /* camera  off */
-    put_bits(&s->pb, 1, s->pict_type == AV_PICTURE_TYPE_I); /* freeze picture release on/off */
+    put_bits(&s->pb, 1, s->c.pict_type == AV_PICTURE_TYPE_I); /* freeze picture release on/off */
 
     put_bits(&s->pb, 1, h->format); /* 0 == QCIF, 1 == CIF */
 
@@ -91,7 +91,7 @@ static int h261_encode_picture_header(MPVMainEncContext *const m)
 
     put_bits(&s->pb, 1, 0); /* no PEI */
     h->gob_number = h->format - 1;
-    s->mb_skip_run = 0;
+    s->c.mb_skip_run = 0;
 
     return 0;
 }
@@ -99,7 +99,7 @@ static int h261_encode_picture_header(MPVMainEncContext *const m)
 /**
  * Encode a group of blocks header.
  */
-static void h261_encode_gob_header(MpegEncContext *s, int mb_line)
+static void h261_encode_gob_header(MPVEncContext *const s, int mb_line)
 {
     H261EncContext *const h = (H261EncContext *)s;
     if (h->format == H261_QCIF) {
@@ -109,38 +109,38 @@ static void h261_encode_gob_header(MpegEncContext *s, int mb_line)
     }
     put_bits(&s->pb, 16, 1);            /* GBSC */
     put_bits(&s->pb, 4, h->gob_number); /* GN */
-    put_bits(&s->pb, 5, s->qscale);     /* GQUANT */
+    put_bits(&s->pb, 5, s->c.qscale);     /* GQUANT */
     put_bits(&s->pb, 1, 0);             /* no GEI */
-    s->mb_skip_run = 0;
-    s->last_mv[0][0][0] = 0;
-    s->last_mv[0][0][1] = 0;
+    s->c.mb_skip_run = 0;
+    s->c.last_mv[0][0][0] = 0;
+    s->c.last_mv[0][0][1] = 0;
 }
 
-void ff_h261_reorder_mb_index(MpegEncContext *s)
+void ff_h261_reorder_mb_index(MPVEncContext *const s)
 {
     const H261EncContext *const h = (H261EncContext*)s;
-    int index = s->mb_x + s->mb_y * s->mb_width;
+    int index = s->c.mb_x + s->c.mb_y * s->c.mb_width;
 
     if (index % 11 == 0) {
         if (index % 33 == 0)
             h261_encode_gob_header(s, 0);
-        s->last_mv[0][0][0] = 0;
-        s->last_mv[0][0][1] = 0;
+        s->c.last_mv[0][0][0] = 0;
+        s->c.last_mv[0][0][1] = 0;
     }
 
     /* for CIF the GOB's are fragmented in the middle of a scanline
      * that's why we need to adjust the x and y index of the macroblocks */
     if (h->format == H261_CIF) {
-        s->mb_x  = index % 11;
+        s->c.mb_x  = index % 11;
         index   /= 11;
-        s->mb_y  = index % 3;
+        s->c.mb_y  = index % 3;
         index   /= 3;
-        s->mb_x += 11 * (index % 2);
+        s->c.mb_x += 11 * (index % 2);
         index   /= 2;
-        s->mb_y += 3 * index;
+        s->c.mb_y += 3 * index;
 
-        ff_init_block_index(s);
-        ff_update_block_index(s, 8, 0, 1);
+        ff_init_block_index(&s->c);
+        ff_update_block_index(&s->c, 8, 0, 1);
     }
 }
 
@@ -150,12 +150,12 @@ static void h261_encode_motion(PutBitContext *pb, int val)
                  h261_mv_codes[MV_TAB_OFFSET + val][0]);
 }
 
-static inline int get_cbp(MpegEncContext *s, int16_t block[6][64])
+static inline int get_cbp(const int block_last_index[6])
 {
     int i, cbp;
     cbp = 0;
     for (i = 0; i < 6; i++)
-        if (s->block_last_index[i] >= 0)
+        if (block_last_index[i] >= 0)
             cbp |= 1 << (5 - i);
     return cbp;
 }
@@ -167,10 +167,10 @@ static inline int get_cbp(MpegEncContext *s, int16_t block[6][64])
  */
 static void h261_encode_block(H261EncContext *h, int16_t *block, int n)
 {
-    MpegEncContext *const s = &h->s.s;
+    MPVEncContext *const s = &h->s.s;
     int level, run, i, j, last_index, last_non_zero;
 
-    if (s->mb_intra) {
+    if (s->c.mb_intra) {
         /* DC coef */
         level = block[0];
         /* 255 cannot be represented, so we clamp */
@@ -189,7 +189,7 @@ static void h261_encode_block(H261EncContext *h, int16_t *block, int n)
             put_bits(&s->pb, 8, level);
         i = 1;
     } else if ((block[0] == 1 || block[0] == -1) &&
-               (s->block_last_index[n] > -1)) {
+               (s->c.block_last_index[n] > -1)) {
         // special case
         put_bits(&s->pb, 2, block[0] > 0 ? 2 : 3);
         i = 1;
@@ -198,10 +198,10 @@ static void h261_encode_block(H261EncContext *h, int16_t *block, int n)
     }
 
     /* AC coefs */
-    last_index    = s->block_last_index[n];
+    last_index    = s->c.block_last_index[n];
     last_non_zero = i - 1;
     for (; i <= last_index; i++) {
-        j     = s->intra_scantable.permutated[i];
+        j     = s->c.intra_scantable.permutated[i];
         level = block[j];
         if (level) {
             run    = i - last_non_zero - 1;
@@ -225,7 +225,7 @@ static void h261_encode_block(H261EncContext *h, int16_t *block, int n)
         put_bits(&s->pb, 2, 0x2); // EOB
 }
 
-static void h261_encode_mb(MpegEncContext *const s, int16_t block[6][64],
+static void h261_encode_mb(MPVEncContext *const s, int16_t block[6][64],
                            int motion_x, int motion_y)
 {
     /* The following is only allowed because this encoder
@@ -238,36 +238,36 @@ static void h261_encode_mb(MpegEncContext *const s, int16_t block[6][64],
 
     com->mtype = 0;
 
-    if (!s->mb_intra) {
+    if (!s->c.mb_intra) {
         /* compute cbp */
-        cbp = get_cbp(s, block);
+        cbp = get_cbp(s->c.block_last_index);
 
         /* mvd indicates if this block is motion compensated */
         mvd = motion_x | motion_y;
 
         if ((cbp | mvd) == 0) {
             /* skip macroblock */
-            s->mb_skip_run++;
-            s->last_mv[0][0][0] = 0;
-            s->last_mv[0][0][1] = 0;
-            s->qscale -= s->dquant;
+            s->c.mb_skip_run++;
+            s->c.last_mv[0][0][0] = 0;
+            s->c.last_mv[0][0][1] = 0;
+            s->c.qscale -= s->dquant;
             return;
         }
     }
 
     /* MB is not skipped, encode MBA */
     put_bits(&s->pb,
-             ff_h261_mba_bits[s->mb_skip_run],
-             ff_h261_mba_code[s->mb_skip_run]);
-    s->mb_skip_run = 0;
+             ff_h261_mba_bits[s->c.mb_skip_run],
+             ff_h261_mba_code[s->c.mb_skip_run]);
+    s->c.mb_skip_run = 0;
 
     /* calculate MTYPE */
-    if (!s->mb_intra) {
+    if (!s->c.mb_intra) {
         com->mtype++;
 
-        if (mvd || s->loop_filter)
+        if (mvd || s->c.loop_filter)
             com->mtype += 3;
-        if (s->loop_filter)
+        if (s->c.loop_filter)
             com->mtype += 3;
         if (cbp)
             com->mtype++;
@@ -277,7 +277,7 @@ static void h261_encode_mb(MpegEncContext *const s, int16_t block[6][64],
     if (s->dquant && cbp) {
         com->mtype++;
     } else
-        s->qscale -= s->dquant;
+        s->c.qscale -= s->dquant;
 
     put_bits(&s->pb,
              ff_h261_mtype_bits[com->mtype],
@@ -286,15 +286,15 @@ static void h261_encode_mb(MpegEncContext *const s, int16_t block[6][64],
     com->mtype = ff_h261_mtype_map[com->mtype];
 
     if (IS_QUANT(com->mtype)) {
-        ff_set_qscale(s, s->qscale + s->dquant);
-        put_bits(&s->pb, 5, s->qscale);
+        ff_set_qscale(&s->c, s->c.qscale + s->dquant);
+        put_bits(&s->pb, 5, s->c.qscale);
     }
 
     if (IS_16X16(com->mtype)) {
-        mv_diff_x       = (motion_x >> 1) - s->last_mv[0][0][0];
-        mv_diff_y       = (motion_y >> 1) - s->last_mv[0][0][1];
-        s->last_mv[0][0][0] = (motion_x >> 1);
-        s->last_mv[0][0][1] = (motion_y >> 1);
+        mv_diff_x       = (motion_x >> 1) - s->c.last_mv[0][0][0];
+        mv_diff_y       = (motion_y >> 1) - s->c.last_mv[0][0][1];
+        s->c.last_mv[0][0][0] = (motion_x >> 1);
+        s->c.last_mv[0][0][1] = (motion_y >> 1);
         h261_encode_motion(&s->pb, mv_diff_x);
         h261_encode_motion(&s->pb, mv_diff_y);
     }
@@ -310,8 +310,8 @@ static void h261_encode_mb(MpegEncContext *const s, int16_t block[6][64],
         h261_encode_block(h, block[i], i);
 
     if (!IS_16X16(com->mtype)) {
-        s->last_mv[0][0][0] = 0;
-        s->last_mv[0][0][1] = 0;
+        s->c.last_mv[0][0][0] = 0;
+        s->c.last_mv[0][0][1] = 0;
     }
 }
 
@@ -356,7 +356,7 @@ static av_cold int h261_encode_init(AVCodecContext *avctx)
 {
     static AVOnce init_static_once = AV_ONCE_INIT;
     H261EncContext *const h = avctx->priv_data;
-    MpegEncContext *const s = &h->s.s;
+    MPVEncContext *const s = &h->s.s;
 
     if (avctx->width == 176 && avctx->height == 144) {
         h->format = H261_QCIF;
@@ -369,7 +369,7 @@ static av_cold int h261_encode_init(AVCodecContext *avctx)
                avctx->width, avctx->height);
         return AVERROR(EINVAL);
     }
-    s->private_ctx = &h->common;
+    s->c.private_ctx = &h->common;
     h->s.encode_picture_header = h261_encode_picture_header;
     s->encode_mb               = h261_encode_mb;
 
@@ -377,7 +377,7 @@ static av_cold int h261_encode_init(AVCodecContext *avctx)
     s->max_qcoeff       = 127;
     s->ac_esc_length    = H261_ESC_LEN;
 
-    s->me.mv_penalty = mv_penalty;
+    s->c.me.mv_penalty = mv_penalty;
 
     s->intra_ac_vlc_length      = s->inter_ac_vlc_length      = uni_h261_rl_len;
     s->intra_ac_vlc_last_length = s->inter_ac_vlc_last_length = uni_h261_rl_len_last;
diff --git a/libavcodec/h261enc.h b/libavcodec/h261enc.h
index 79cdd31c2f..77f072a5e7 100644
--- a/libavcodec/h261enc.h
+++ b/libavcodec/h261enc.h
@@ -28,8 +28,8 @@
 #ifndef AVCODEC_H261ENC_H
 #define AVCODEC_H261ENC_H
 
-#include "mpegvideo.h"
+typedef struct MPVEncContext MPVEncContext;
 
-void ff_h261_reorder_mb_index(MpegEncContext *s);
+void ff_h261_reorder_mb_index(MPVEncContext *s);
 
 #endif
diff --git a/libavcodec/h263enc.h b/libavcodec/h263enc.h
index dd9caa7969..1f459a332c 100644
--- a/libavcodec/h263enc.h
+++ b/libavcodec/h263enc.h
@@ -27,22 +27,22 @@
 const uint8_t (*ff_h263_get_mv_penalty(void))[MAX_DMV*2+1];
 
 void ff_h263_encode_init(MPVMainEncContext *m);
-void ff_h263_encode_gob_header(MpegEncContext * s, int mb_line);
-void ff_h263_encode_mba(MpegEncContext *s);
+void ff_h263_encode_gob_header(MPVEncContext *s, int mb_line);
+void ff_h263_encode_mba(MPVEncContext *s);
 
-void ff_clean_h263_qscales(MpegEncContext *s);
+void ff_clean_h263_qscales(MPVEncContext *s);
 
 void ff_h263_encode_motion(PutBitContext *pb, int val, int f_code);
-void ff_h263_update_mb(MpegEncContext *s);
+void ff_h263_update_mb(MPVEncContext *s);
 
-static inline void ff_h263_encode_motion_vector(MpegEncContext * s,
+static inline void ff_h263_encode_motion_vector(MPVEncContext *s,
                                                 int x, int y, int f_code)
 {
     ff_h263_encode_motion(&s->pb, x, f_code);
     ff_h263_encode_motion(&s->pb, y, f_code);
 }
 
-static inline int get_p_cbp(MpegEncContext * s,
+static inline int get_p_cbp(MPVEncContext *const s,
                       int16_t block[6][64],
                       int motion_x, int motion_y){
     int cbp;
@@ -51,8 +51,8 @@ static inline int get_p_cbp(MpegEncContext * s,
         int best_cbpy_score = INT_MAX;
         int best_cbpc_score = INT_MAX;
         int cbpc = (-1), cbpy = (-1);
-        const int offset = (s->mv_type == MV_TYPE_16X16 ? 0 : 16) + (s->dquant ? 8 : 0);
-        const int lambda = s->lambda2 >> (FF_LAMBDA_SHIFT - 6);
+        const int offset = (s->c.mv_type == MV_TYPE_16X16 ? 0 : 16) + (s->dquant ? 8 : 0);
+        const int lambda = s->c.lambda2 >> (FF_LAMBDA_SHIFT - 6);
 
         for (int i = 0; i < 4; i++) {
             int score = ff_h263_inter_MCBPC_bits[i + offset] * lambda;
@@ -78,21 +78,21 @@ static inline int get_p_cbp(MpegEncContext * s,
             }
         }
         cbp = cbpc + 4 * cbpy;
-        if (!(motion_x | motion_y | s->dquant) && s->mv_type == MV_TYPE_16X16) {
+        if (!(motion_x | motion_y | s->dquant) && s->c.mv_type == MV_TYPE_16X16) {
             if (best_cbpy_score + best_cbpc_score + 2 * lambda >= 0)
                 cbp= 0;
         }
 
         for (int i = 0; i < 6; i++) {
-            if (s->block_last_index[i] >= 0 && !((cbp >> (5 - i)) & 1)) {
-                s->block_last_index[i] = -1;
-                s->bdsp.clear_block(s->block[i]);
+            if (s->c.block_last_index[i] >= 0 && !((cbp >> (5 - i)) & 1)) {
+                s->c.block_last_index[i] = -1;
+                s->c.bdsp.clear_block(s->c.block[i]);
             }
         }
     } else {
         cbp = 0;
         for (int i = 0; i < 6; i++) {
-            if (s->block_last_index[i] >= 0)
+            if (s->c.block_last_index[i] >= 0)
                 cbp |= 1 << (5 - i);
         }
     }
diff --git a/libavcodec/ituh263enc.c b/libavcodec/ituh263enc.c
index 876e178070..6bd7b6a6cd 100644
--- a/libavcodec/ituh263enc.c
+++ b/libavcodec/ituh263enc.c
@@ -223,19 +223,19 @@ av_const int ff_h263_aspect_to_info(AVRational aspect){
 
 static int h263_encode_picture_header(MPVMainEncContext *const m)
 {
-    MpegEncContext *const s = &m->s;
+    MPVEncContext *const s = &m->s;
     int format, coded_frame_rate, coded_frame_rate_base, i, temp_ref;
     int best_clock_code=1;
     int best_divisor=60;
     int best_error= INT_MAX;
     int custom_pcf;
 
-    if(s->h263_plus){
+    if(s->c.h263_plus){
         for(i=0; i<2; i++){
             int div, error;
-            div= (s->avctx->time_base.num*1800000LL + 500LL*s->avctx->time_base.den) / ((1000LL+i)*s->avctx->time_base.den);
+            div= (s->c.avctx->time_base.num*1800000LL + 500LL*s->c.avctx->time_base.den) / ((1000LL+i)*s->c.avctx->time_base.den);
             div= av_clip(div, 1, 127);
-            error= FFABS(s->avctx->time_base.num*1800000LL - (1000LL+i)*s->avctx->time_base.den*div);
+            error= FFABS(s->c.avctx->time_base.num*1800000LL - (1000LL+i)*s->c.avctx->time_base.den*div);
             if(error < best_error){
                 best_error= error;
                 best_divisor= div;
@@ -250,8 +250,8 @@ static int h263_encode_picture_header(MPVMainEncContext *const m)
     align_put_bits(&s->pb);
 
     put_bits(&s->pb, 22, 0x20); /* PSC */
-    temp_ref= s->picture_number * (int64_t)coded_frame_rate * s->avctx->time_base.num / //FIXME use timestamp
-                         (coded_frame_rate_base * (int64_t)s->avctx->time_base.den);
+    temp_ref= s->c.picture_number * (int64_t)coded_frame_rate * s->c.avctx->time_base.num / //FIXME use timestamp
+                         (coded_frame_rate_base * (int64_t)s->c.avctx->time_base.den);
     put_sbits(&s->pb, 8, temp_ref); /* TemporalReference */
 
     put_bits(&s->pb, 1, 1);     /* marker */
@@ -260,19 +260,19 @@ static int h263_encode_picture_header(MPVMainEncContext *const m)
     put_bits(&s->pb, 1, 0);     /* camera  off */
     put_bits(&s->pb, 1, 0);     /* freeze picture release off */
 
-    format = ff_match_2uint16(ff_h263_format, FF_ARRAY_ELEMS(ff_h263_format), s->width, s->height);
-    if (!s->h263_plus) {
+    format = ff_match_2uint16(ff_h263_format, FF_ARRAY_ELEMS(ff_h263_format), s->c.width, s->c.height);
+    if (!s->c.h263_plus) {
         /* H.263v1 */
         put_bits(&s->pb, 3, format);
-        put_bits(&s->pb, 1, (s->pict_type == AV_PICTURE_TYPE_P));
+        put_bits(&s->pb, 1, (s->c.pict_type == AV_PICTURE_TYPE_P));
         /* By now UMV IS DISABLED ON H.263v1, since the restrictions
         of H.263v1 UMV implies to check the predicted MV after
         calculation of the current MB to see if we're on the limits */
         put_bits(&s->pb, 1, 0);         /* Unrestricted Motion Vector: off */
         put_bits(&s->pb, 1, 0);         /* SAC: off */
-        put_bits(&s->pb, 1, s->obmc);   /* Advanced Prediction */
+        put_bits(&s->pb, 1, s->c.obmc);   /* Advanced Prediction */
         put_bits(&s->pb, 1, 0);         /* only I/P-frames, no PB-frame */
-        put_bits(&s->pb, 5, s->qscale);
+        put_bits(&s->pb, 5, s->c.qscale);
         put_bits(&s->pb, 1, 0);         /* Continuous Presence Multipoint mode: off */
     } else {
         int ufep=1;
@@ -287,24 +287,24 @@ static int h263_encode_picture_header(MPVMainEncContext *const m)
             put_bits(&s->pb, 3, format);
 
         put_bits(&s->pb,1, custom_pcf);
-        put_bits(&s->pb,1, s->umvplus); /* Unrestricted Motion Vector */
+        put_bits(&s->pb,1, s->c.umvplus); /* Unrestricted Motion Vector */
         put_bits(&s->pb,1,0); /* SAC: off */
-        put_bits(&s->pb,1,s->obmc); /* Advanced Prediction Mode */
-        put_bits(&s->pb,1,s->h263_aic); /* Advanced Intra Coding */
-        put_bits(&s->pb,1,s->loop_filter); /* Deblocking Filter */
-        put_bits(&s->pb,1,s->h263_slice_structured); /* Slice Structured */
+        put_bits(&s->pb,1,s->c.obmc); /* Advanced Prediction Mode */
+        put_bits(&s->pb,1,s->c.h263_aic); /* Advanced Intra Coding */
+        put_bits(&s->pb,1,s->c.loop_filter); /* Deblocking Filter */
+        put_bits(&s->pb,1,s->c.h263_slice_structured); /* Slice Structured */
         put_bits(&s->pb,1,0); /* Reference Picture Selection: off */
         put_bits(&s->pb,1,0); /* Independent Segment Decoding: off */
-        put_bits(&s->pb,1,s->alt_inter_vlc); /* Alternative Inter VLC */
-        put_bits(&s->pb,1,s->modified_quant); /* Modified Quantization: */
+        put_bits(&s->pb,1,s->c.alt_inter_vlc); /* Alternative Inter VLC */
+        put_bits(&s->pb,1,s->c.modified_quant); /* Modified Quantization: */
         put_bits(&s->pb,1,1); /* "1" to prevent start code emulation */
         put_bits(&s->pb,3,0); /* Reserved */
 
-        put_bits(&s->pb, 3, s->pict_type == AV_PICTURE_TYPE_P);
+        put_bits(&s->pb, 3, s->c.pict_type == AV_PICTURE_TYPE_P);
 
         put_bits(&s->pb,1,0); /* Reference Picture Resampling: off */
         put_bits(&s->pb,1,0); /* Reduced-Resolution Update: off */
-        put_bits(&s->pb,1,s->no_rounding); /* Rounding Type */
+        put_bits(&s->pb,1,s->c.no_rounding); /* Rounding Type */
         put_bits(&s->pb,2,0); /* Reserved */
         put_bits(&s->pb,1,1); /* "1" to prevent start code emulation */
 
@@ -313,15 +313,15 @@ static int h263_encode_picture_header(MPVMainEncContext *const m)
 
         if (format == 8) {
             /* Custom Picture Format (CPFMT) */
-            unsigned aspect_ratio_info = ff_h263_aspect_to_info(s->avctx->sample_aspect_ratio);
+            unsigned aspect_ratio_info = ff_h263_aspect_to_info(s->c.avctx->sample_aspect_ratio);
 
             put_bits(&s->pb,4, aspect_ratio_info);
-            put_bits(&s->pb,9,(s->width >> 2) - 1);
+            put_bits(&s->pb,9,(s->c.width >> 2) - 1);
             put_bits(&s->pb,1,1); /* "1" to prevent start code emulation */
-            put_bits(&s->pb,9,(s->height >> 2));
+            put_bits(&s->pb,9,(s->c.height >> 2));
             if (aspect_ratio_info == FF_ASPECT_EXTENDED){
-                put_bits(&s->pb, 8, s->avctx->sample_aspect_ratio.num);
-                put_bits(&s->pb, 8, s->avctx->sample_aspect_ratio.den);
+                put_bits(&s->pb, 8, s->c.avctx->sample_aspect_ratio.num);
+                put_bits(&s->pb, 8, s->c.avctx->sample_aspect_ratio.den);
             }
         }
         if (custom_pcf) {
@@ -333,22 +333,22 @@ static int h263_encode_picture_header(MPVMainEncContext *const m)
         }
 
         /* Unlimited Unrestricted Motion Vectors Indicator (UUI) */
-        if (s->umvplus)
+        if (s->c.umvplus)
 //            put_bits(&s->pb,1,1); /* Limited according tables of Annex D */
 //FIXME check actual requested range
             put_bits(&s->pb,2,1); /* unlimited */
-        if(s->h263_slice_structured)
+        if(s->c.h263_slice_structured)
             put_bits(&s->pb,2,0); /* no weird submodes */
 
-        put_bits(&s->pb, 5, s->qscale);
+        put_bits(&s->pb, 5, s->c.qscale);
     }
 
     put_bits(&s->pb, 1, 0);     /* no PEI */
 
-    if(s->h263_slice_structured){
+    if(s->c.h263_slice_structured){
         put_bits(&s->pb, 1, 1);
 
-        av_assert1(s->mb_x == 0 && s->mb_y == 0);
+        av_assert1(s->c.mb_x == 0 && s->c.mb_y == 0);
         ff_h263_encode_mba(s);
 
         put_bits(&s->pb, 1, 1);
@@ -360,50 +360,51 @@ static int h263_encode_picture_header(MPVMainEncContext *const m)
 /**
  * Encode a group of blocks header.
  */
-void ff_h263_encode_gob_header(MpegEncContext * s, int mb_line)
+void ff_h263_encode_gob_header(MPVEncContext *const s, int mb_line)
 {
     put_bits(&s->pb, 17, 1); /* GBSC */
 
-    if(s->h263_slice_structured){
+    if(s->c.h263_slice_structured){
         put_bits(&s->pb, 1, 1);
 
         ff_h263_encode_mba(s);
 
-        if(s->mb_num > 1583)
+        if(s->c.mb_num > 1583)
             put_bits(&s->pb, 1, 1);
-        put_bits(&s->pb, 5, s->qscale); /* GQUANT */
+        put_bits(&s->pb, 5, s->c.qscale); /* GQUANT */
         put_bits(&s->pb, 1, 1);
-        put_bits(&s->pb, 2, s->pict_type == AV_PICTURE_TYPE_I); /* GFID */
+        put_bits(&s->pb, 2, s->c.pict_type == AV_PICTURE_TYPE_I); /* GFID */
     }else{
-        int gob_number= mb_line / s->gob_index;
+        int gob_number= mb_line / s->c.gob_index;
 
         put_bits(&s->pb, 5, gob_number); /* GN */
-        put_bits(&s->pb, 2, s->pict_type == AV_PICTURE_TYPE_I); /* GFID */
-        put_bits(&s->pb, 5, s->qscale); /* GQUANT */
+        put_bits(&s->pb, 2, s->c.pict_type == AV_PICTURE_TYPE_I); /* GFID */
+        put_bits(&s->pb, 5, s->c.qscale); /* GQUANT */
     }
 }
 
 /**
  * modify qscale so that encoding is actually possible in H.263 (limit difference to -2..2)
  */
-void ff_clean_h263_qscales(MpegEncContext *s){
-    int i;
-    int8_t * const qscale_table = s->cur_pic.qscale_table;
+void ff_clean_h263_qscales(MPVEncContext *const s)
+{
+    int8_t * const qscale_table = s->c.cur_pic.qscale_table;
 
-    for(i=1; i<s->mb_num; i++){
-        if(qscale_table[ s->mb_index2xy[i] ] - qscale_table[ s->mb_index2xy[i-1] ] >2)
-            qscale_table[ s->mb_index2xy[i] ]= qscale_table[ s->mb_index2xy[i-1] ]+2;
+    for (int i = 1; i < s->c.mb_num; i++) {
+        if (qscale_table[ s->c.mb_index2xy[i] ] - qscale_table[ s->c.mb_index2xy[i-1] ] > 2)
+            qscale_table[ s->c.mb_index2xy[i] ] = qscale_table[ s->c.mb_index2xy[i-1] ] + 2;
     }
-    for(i=s->mb_num-2; i>=0; i--){
-        if(qscale_table[ s->mb_index2xy[i] ] - qscale_table[ s->mb_index2xy[i+1] ] >2)
-            qscale_table[ s->mb_index2xy[i] ]= qscale_table[ s->mb_index2xy[i+1] ]+2;
+    for (int i = s->c.mb_num - 2; i >= 0; i--) {
+        if (qscale_table[ s->c.mb_index2xy[i] ] - qscale_table[ s->c.mb_index2xy[i+1] ] > 2)
+            qscale_table[ s->c.mb_index2xy[i] ] = qscale_table[ s->c.mb_index2xy[i+1] ] + 2;
     }
 
-    if(s->codec_id != AV_CODEC_ID_H263P){
-        for(i=1; i<s->mb_num; i++){
-            int mb_xy= s->mb_index2xy[i];
+    if (s->c.codec_id != AV_CODEC_ID_H263P) {
+        for (int i = 1; i < s->c.mb_num; i++) {
+            int mb_xy = s->c.mb_index2xy[i];
 
-            if(qscale_table[mb_xy] != qscale_table[s->mb_index2xy[i-1]] && (s->mb_type[mb_xy]&CANDIDATE_MB_TYPE_INTER4V)){
+            if (qscale_table[mb_xy] != qscale_table[s->c.mb_index2xy[i - 1]] &&
+                (s->mb_type[mb_xy] & CANDIDATE_MB_TYPE_INTER4V)) {
                 s->mb_type[mb_xy]|= CANDIDATE_MB_TYPE_INTER;
             }
         }
@@ -417,13 +418,13 @@ static const int dquant_code[5]= {1,0,9,2,3};
  * @param block the 8x8 block
  * @param n block index (0-3 are luma, 4-5 are chroma)
  */
-static void h263_encode_block(MpegEncContext * s, int16_t * block, int n)
+static void h263_encode_block(MPVEncContext *const s, int16_t block[], int n)
 {
     int level, run, last, i, j, last_index, last_non_zero, sign, slevel, code;
     const RLTable *rl;
 
     rl = &ff_h263_rl_inter;
-    if (s->mb_intra && !s->h263_aic) {
+    if (s->c.mb_intra && !s->c.h263_aic) {
         /* DC coef */
         level = block[0];
         /* 255 cannot be represented, so we clamp */
@@ -443,19 +444,19 @@ static void h263_encode_block(MpegEncContext * s, int16_t * block, int n)
         i = 1;
     } else {
         i = 0;
-        if (s->h263_aic && s->mb_intra)
+        if (s->c.h263_aic && s->c.mb_intra)
             rl = &ff_rl_intra_aic;
 
-        if(s->alt_inter_vlc && !s->mb_intra){
+        if(s->c.alt_inter_vlc && !s->c.mb_intra){
             int aic_vlc_bits=0;
             int inter_vlc_bits=0;
             int wrong_pos=-1;
             int aic_code;
 
-            last_index = s->block_last_index[n];
+            last_index = s->c.block_last_index[n];
             last_non_zero = i - 1;
             for (; i <= last_index; i++) {
-                j = s->intra_scantable.permutated[i];
+                j = s->c.intra_scantable.permutated[i];
                 level = block[j];
                 if (level) {
                     run = i - last_non_zero - 1;
@@ -486,10 +487,10 @@ static void h263_encode_block(MpegEncContext * s, int16_t * block, int n)
     }
 
     /* AC coefs */
-    last_index = s->block_last_index[n];
+    last_index = s->c.block_last_index[n];
     last_non_zero = i - 1;
     for (; i <= last_index; i++) {
-        j = s->intra_scantable.permutated[i];
+        j = s->c.intra_scantable.permutated[i];
         level = block[j];
         if (level) {
             run = i - last_non_zero - 1;
@@ -503,7 +504,7 @@ static void h263_encode_block(MpegEncContext * s, int16_t * block, int n)
             code = get_rl_index(rl, last, run, level);
             put_bits(&s->pb, rl->table_vlc[code][1], rl->table_vlc[code][0]);
             if (code == rl->n) {
-              if(!CONFIG_FLV_ENCODER || s->h263_flv <= 1){
+              if(!CONFIG_FLV_ENCODER || s->c.h263_flv <= 1){
                 put_bits(&s->pb, 1, last);
                 put_bits(&s->pb, 6, run);
 
@@ -565,22 +566,22 @@ static void h263p_encode_umotion(PutBitContext *pb, int val)
     }
 }
 
-static int h263_pred_dc(MpegEncContext * s, int n, int16_t **dc_val_ptr)
+static int h263_pred_dc(MPVEncContext *const s, int n, int16_t **dc_val_ptr)
 {
     int x, y, wrap, a, c, pred_dc;
     int16_t *dc_val;
 
     /* find prediction */
     if (n < 4) {
-        x = 2 * s->mb_x + (n & 1);
-        y = 2 * s->mb_y + ((n & 2) >> 1);
-        wrap = s->b8_stride;
-        dc_val = s->dc_val[0];
+        x = 2 * s->c.mb_x + (n & 1);
+        y = 2 * s->c.mb_y + ((n & 2) >> 1);
+        wrap = s->c.b8_stride;
+        dc_val = s->c.dc_val[0];
     } else {
-        x = s->mb_x;
-        y = s->mb_y;
-        wrap = s->mb_stride;
-        dc_val = s->dc_val[n - 4 + 1];
+        x = s->c.mb_x;
+        y = s->c.mb_y;
+        wrap = s->c.mb_stride;
+        dc_val = s->c.dc_val[n - 4 + 1];
     }
     /* B C
      * A X
@@ -589,9 +590,9 @@ static int h263_pred_dc(MpegEncContext * s, int n, int16_t **dc_val_ptr)
     c = dc_val[(x) + (y - 1) * wrap];
 
     /* No prediction outside GOB boundary */
-    if (s->first_slice_line && n != 3) {
+    if (s->c.first_slice_line && n != 3) {
         if (n != 2) c = 1024;
-        if (n != 1 && s->mb_x == s->resync_mb_x) a = 1024;
+        if (n != 1 && s->c.mb_x == s->c.resync_mb_x) a = 1024;
     }
     /* just DC prediction */
     if (a != 1024 && c != 1024)
@@ -606,7 +607,7 @@ static int h263_pred_dc(MpegEncContext * s, int n, int16_t **dc_val_ptr)
     return pred_dc;
 }
 
-static void h263_encode_mb(MpegEncContext *const s,
+static void h263_encode_mb(MPVEncContext *const s,
                            int16_t block[][64],
                            int motion_x, int motion_y)
 {
@@ -614,13 +615,13 @@ static void h263_encode_mb(MpegEncContext *const s,
     int16_t pred_dc;
     int16_t rec_intradc[6];
     int16_t *dc_ptr[6];
-    const int interleaved_stats = s->avctx->flags & AV_CODEC_FLAG_PASS1;
+    const int interleaved_stats = s->c.avctx->flags & AV_CODEC_FLAG_PASS1;
 
-    if (!s->mb_intra) {
+    if (!s->c.mb_intra) {
         /* compute cbp */
         cbp= get_p_cbp(s, block, motion_x, motion_y);
 
-        if ((cbp | motion_x | motion_y | s->dquant | (s->mv_type - MV_TYPE_16X16)) == 0) {
+        if ((cbp | motion_x | motion_y | s->dquant | (s->c.mv_type - MV_TYPE_16X16)) == 0) {
             /* skip macroblock */
             put_bits(&s->pb, 1, 1);
             if(interleaved_stats){
@@ -634,10 +635,10 @@ static void h263_encode_mb(MpegEncContext *const s,
 
         cbpc = cbp & 3;
         cbpy = cbp >> 2;
-        if(s->alt_inter_vlc==0 || cbpc!=3)
+        if(s->c.alt_inter_vlc==0 || cbpc!=3)
             cbpy ^= 0xF;
         if(s->dquant) cbpc+= 8;
-        if(s->mv_type==MV_TYPE_16X16){
+        if(s->c.mv_type==MV_TYPE_16X16){
             put_bits(&s->pb,
                     ff_h263_inter_MCBPC_bits[cbpc],
                     ff_h263_inter_MCBPC_code[cbpc]);
@@ -651,9 +652,9 @@ static void h263_encode_mb(MpegEncContext *const s,
             }
 
             /* motion vectors: 16x16 mode */
-            ff_h263_pred_motion(s, 0, 0, &pred_x, &pred_y);
+            ff_h263_pred_motion(&s->c, 0, 0, &pred_x, &pred_y);
 
-            if (!s->umvplus) {
+            if (!s->c.umvplus) {
                 ff_h263_encode_motion_vector(s, motion_x - pred_x,
                                                 motion_y - pred_y, 1);
             }
@@ -678,11 +679,11 @@ static void h263_encode_mb(MpegEncContext *const s,
 
             for(i=0; i<4; i++){
                 /* motion vectors: 8x8 mode*/
-                ff_h263_pred_motion(s, i, 0, &pred_x, &pred_y);
+                ff_h263_pred_motion(&s->c, i, 0, &pred_x, &pred_y);
 
-                motion_x = s->cur_pic.motion_val[0][s->block_index[i]][0];
-                motion_y = s->cur_pic.motion_val[0][s->block_index[i]][1];
-                if (!s->umvplus) {
+                motion_x = s->c.cur_pic.motion_val[0][s->c.block_index[i]][0];
+                motion_y = s->c.cur_pic.motion_val[0][s->c.block_index[i]][1];
+                if (!s->c.umvplus) {
                     ff_h263_encode_motion_vector(s, motion_x - pred_x,
                                                     motion_y - pred_y, 1);
                 }
@@ -700,17 +701,14 @@ static void h263_encode_mb(MpegEncContext *const s,
             s->mv_bits+= get_bits_diff(s);
         }
     } else {
-        av_assert2(s->mb_intra);
+        av_assert2(s->c.mb_intra);
 
         cbp = 0;
-        if (s->h263_aic) {
+        if (s->c.h263_aic) {
             /* Predict DC */
             for(i=0; i<6; i++) {
                 int16_t level = block[i][0];
-                int scale;
-
-                if(i<4) scale= s->y_dc_scale;
-                else    scale= s->c_dc_scale;
+                int scale = i < 4 ? s->c.y_dc_scale : s->c.c_dc_scale;
 
                 pred_dc = h263_pred_dc(s, i, &dc_ptr[i]);
                 level -= pred_dc;
@@ -720,7 +718,7 @@ static void h263_encode_mb(MpegEncContext *const s,
                 else
                     level = (level - (scale>>1))/scale;
 
-                if(!s->modified_quant){
+                if (!s->c.modified_quant) {
                     if (level < -127)
                         level = -127;
                     else if (level > 127)
@@ -743,20 +741,20 @@ static void h263_encode_mb(MpegEncContext *const s,
                 /* Update AC/DC tables */
                 *dc_ptr[i] = rec_intradc[i];
                 /* AIC can change CBP */
-                if (s->block_last_index[i] > 0 ||
-                    (s->block_last_index[i] == 0 && level !=0))
+                if (s->c.block_last_index[i] > 0 ||
+                    (s->c.block_last_index[i] == 0 && level != 0))
                     cbp |= 1 << (5 - i);
             }
         }else{
             for(i=0; i<6; i++) {
                 /* compute cbp */
-                if (s->block_last_index[i] >= 1)
+                if (s->c.block_last_index[i] >= 1)
                     cbp |= 1 << (5 - i);
             }
         }
 
         cbpc = cbp & 3;
-        if (s->pict_type == AV_PICTURE_TYPE_I) {
+        if (s->c.pict_type == AV_PICTURE_TYPE_I) {
             if(s->dquant) cbpc+=4;
             put_bits(&s->pb,
                 ff_h263_intra_MCBPC_bits[cbpc],
@@ -768,7 +766,7 @@ static void h263_encode_mb(MpegEncContext *const s,
                 ff_h263_inter_MCBPC_bits[cbpc + 4],
                 ff_h263_inter_MCBPC_code[cbpc + 4]);
         }
-        if (s->h263_aic) {
+        if (s->c.h263_aic) {
             /* XXX: currently, we do not try to use ac prediction */
             put_bits(&s->pb, 1, 0);     /* no AC prediction */
         }
@@ -787,14 +785,12 @@ static void h263_encode_mb(MpegEncContext *const s,
         h263_encode_block(s, block[i], i);
 
         /* Update INTRADC for decoding */
-        if (s->h263_aic && s->mb_intra) {
+        if (s->c.h263_aic && s->c.mb_intra)
             block[i][0] = rec_intradc[i];
-
-        }
     }
 
     if(interleaved_stats){
-        if (!s->mb_intra) {
+        if (!s->c.mb_intra) {
             s->p_tex_bits+= get_bits_diff(s);
         }else{
             s->i_tex_bits+= get_bits_diff(s);
@@ -803,54 +799,54 @@ static void h263_encode_mb(MpegEncContext *const s,
     }
 }
 
-void ff_h263_update_mb(MpegEncContext *s)
+void ff_h263_update_mb(MPVEncContext *const s)
 {
-    const int mb_xy = s->mb_y * s->mb_stride + s->mb_x;
+    const int mb_xy = s->c.mb_y * s->c.mb_stride + s->c.mb_x;
 
-    if (s->cur_pic.mbskip_table)
-        s->cur_pic.mbskip_table[mb_xy] = s->mb_skipped;
+    if (s->c.cur_pic.mbskip_table)
+        s->c.cur_pic.mbskip_table[mb_xy] = s->c.mb_skipped;
 
-    if (s->mv_type == MV_TYPE_8X8)
-        s->cur_pic.mb_type[mb_xy] = MB_TYPE_FORWARD_MV | MB_TYPE_8x8;
-    else if(s->mb_intra)
-        s->cur_pic.mb_type[mb_xy] = MB_TYPE_INTRA;
+    if (s->c.mv_type == MV_TYPE_8X8)
+        s->c.cur_pic.mb_type[mb_xy] = MB_TYPE_FORWARD_MV | MB_TYPE_8x8;
+    else if(s->c.mb_intra)
+        s->c.cur_pic.mb_type[mb_xy] = MB_TYPE_INTRA;
     else
-        s->cur_pic.mb_type[mb_xy] = MB_TYPE_FORWARD_MV | MB_TYPE_16x16;
+        s->c.cur_pic.mb_type[mb_xy] = MB_TYPE_FORWARD_MV | MB_TYPE_16x16;
 
-    ff_h263_update_motion_val(s);
+    ff_h263_update_motion_val(&s->c);
 }
 
 av_cold void ff_h263_encode_init(MPVMainEncContext *const m)
 {
-    MpegEncContext *const s = &m->s;
+    MPVEncContext *const s = &m->s;
 
-    s->me.mv_penalty = ff_h263_get_mv_penalty(); // FIXME exact table for MSMPEG4 & H.263+
+    s->c.me.mv_penalty = ff_h263_get_mv_penalty(); // FIXME exact table for MSMPEG4 & H.263+
 
-    ff_h263dsp_init(&s->h263dsp);
+    ff_h263dsp_init(&s->c.h263dsp);
 
-    if (s->codec_id == AV_CODEC_ID_MPEG4)
+    if (s->c.codec_id == AV_CODEC_ID_MPEG4)
         return;
 
     s->intra_ac_vlc_length     =s->inter_ac_vlc_length     = uni_h263_inter_rl_len;
     s->intra_ac_vlc_last_length=s->inter_ac_vlc_last_length= uni_h263_inter_rl_len + 128*64;
-    if(s->h263_aic){
+    if (s->c.h263_aic) {
         s->intra_ac_vlc_length     = uni_h263_intra_aic_rl_len;
         s->intra_ac_vlc_last_length= uni_h263_intra_aic_rl_len + 128*64;
 
-        s->y_dc_scale_table =
-        s->c_dc_scale_table = ff_aic_dc_scale_table;
+        s->c.y_dc_scale_table =
+        s->c.c_dc_scale_table = ff_aic_dc_scale_table;
     }
     s->ac_esc_length= 7+1+6+8;
 
-    if (s->modified_quant)
-        s->chroma_qscale_table = ff_h263_chroma_qscale_table;
+    if (s->c.modified_quant)
+        s->c.chroma_qscale_table = ff_h263_chroma_qscale_table;
 
     // use fcodes >1 only for MPEG-4 & H.263 & H.263+ FIXME
-    switch(s->codec_id){
+    switch(s->c.codec_id){
     case AV_CODEC_ID_H263P:
-        if(s->umvplus)
+        if (s->c.umvplus)
             m->fcode_tab = umv_fcode_tab + MAX_MV;
-        if(s->modified_quant){
+        if (s->c.modified_quant) {
             s->min_qcoeff= -2047;
             s->max_qcoeff=  2047;
         }else{
@@ -861,7 +857,7 @@ av_cold void ff_h263_encode_init(MPVMainEncContext *const m)
         // Note for MPEG-4 & H.263 the dc-scale table will be set per frame as needed later
     case AV_CODEC_ID_FLV1:
         m->encode_picture_header = ff_flv_encode_picture_header;
-        if (s->h263_flv > 1) {
+        if (s->c.h263_flv > 1) {
             s->min_qcoeff= -1023;
             s->max_qcoeff=  1023;
         } else {
@@ -880,14 +876,14 @@ av_cold void ff_h263_encode_init(MPVMainEncContext *const m)
         s->encode_mb = h263_encode_mb;
 }
 
-void ff_h263_encode_mba(MpegEncContext *s)
+void ff_h263_encode_mba(MPVEncContext *const s)
 {
     int i, mb_pos;
 
     for(i=0; i<6; i++){
-        if(s->mb_num-1 <= ff_mba_max[i]) break;
+        if(s->c.mb_num-1 <= ff_mba_max[i]) break;
     }
-    mb_pos= s->mb_x + s->mb_width*s->mb_y;
+    mb_pos= s->c.mb_x + s->c.mb_width*s->c.mb_y;
     put_bits(&s->pb, ff_mba_length[i], mb_pos);
 }
 
@@ -895,7 +891,7 @@ void ff_h263_encode_mba(MpegEncContext *s)
 #define VE AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM
 static const AVOption h263_options[] = {
     { "obmc",         "use overlapped block motion compensation.", OFFSET(obmc), AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, VE },
-    { "mb_info",      "emit macroblock info for RFC 2190 packetization, the parameter value is the maximum payload size", OFFSET(mb_info), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, INT_MAX, VE },
+    { "mb_info",      "emit macroblock info for RFC 2190 packetization, the parameter value is the maximum payload size", FF_MPV_OFFSET(mb_info), AV_OPT_TYPE_INT, { .i64 = 0 }, 0, INT_MAX, VE },
     FF_MPV_COMMON_OPTS
     FF_MPV_COMMON_MOTION_EST_OPTS
     { NULL },
diff --git a/libavcodec/me_cmp.c b/libavcodec/me_cmp.c
index f3e2f2482e..09a830d15e 100644
--- a/libavcodec/me_cmp.c
+++ b/libavcodec/me_cmp.c
@@ -69,7 +69,7 @@ const uint32_t ff_square_tab[512] = {
     57600, 58081, 58564, 59049, 59536, 60025, 60516, 61009, 61504, 62001, 62500, 63001, 63504, 64009, 64516, 65025,
 };
 
-static int sse4_c(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+static int sse4_c(MPVEncContext *unused, const uint8_t *pix1, const uint8_t *pix2,
                   ptrdiff_t stride, int h)
 {
     int s = 0, i;
@@ -86,7 +86,7 @@ static int sse4_c(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
     return s;
 }
 
-static int sse8_c(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+static int sse8_c(MPVEncContext *unused, const uint8_t *pix1, const uint8_t *pix2,
                   ptrdiff_t stride, int h)
 {
     int s = 0, i;
@@ -107,7 +107,7 @@ static int sse8_c(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
     return s;
 }
 
-static int sse16_c(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+static int sse16_c(MPVEncContext *unused, const uint8_t *pix1, const uint8_t *pix2,
                    ptrdiff_t stride, int h)
 {
     int s = 0, i;
@@ -149,7 +149,7 @@ static int sum_abs_dctelem_c(const int16_t *block)
 #define avg2(a, b) (((a) + (b) + 1) >> 1)
 #define avg4(a, b, c, d) (((a) + (b) + (c) + (d) + 2) >> 2)
 
-static inline int pix_abs16_c(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+static inline int pix_abs16_c(MPVEncContext *unused, const uint8_t *pix1, const uint8_t *pix2,
                               ptrdiff_t stride, int h)
 {
     int s = 0, i;
@@ -177,7 +177,7 @@ static inline int pix_abs16_c(MpegEncContext *v, const uint8_t *pix1, const uint
     return s;
 }
 
-static inline int pix_median_abs16_c(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+static inline int pix_median_abs16_c(MPVEncContext *unused, const uint8_t *pix1, const uint8_t *pix2,
                              ptrdiff_t stride, int h)
 {
     int s = 0, i, j;
@@ -216,7 +216,7 @@ static inline int pix_median_abs16_c(MpegEncContext *v, const uint8_t *pix1, con
     return s;
 }
 
-static int pix_abs16_x2_c(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+static int pix_abs16_x2_c(MPVEncContext *unused, const uint8_t *pix1, const uint8_t *pix2,
                           ptrdiff_t stride, int h)
 {
     int s = 0, i;
@@ -244,7 +244,7 @@ static int pix_abs16_x2_c(MpegEncContext *v, const uint8_t *pix1, const uint8_t
     return s;
 }
 
-static int pix_abs16_y2_c(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+static int pix_abs16_y2_c(MPVEncContext *unused, const uint8_t *pix1, const uint8_t *pix2,
                           ptrdiff_t stride, int h)
 {
     int s = 0, i;
@@ -274,7 +274,7 @@ static int pix_abs16_y2_c(MpegEncContext *v, const uint8_t *pix1, const uint8_t
     return s;
 }
 
-static int pix_abs16_xy2_c(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+static int pix_abs16_xy2_c(MPVEncContext *unused, const uint8_t *pix1, const uint8_t *pix2,
                            ptrdiff_t stride, int h)
 {
     int s = 0, i;
@@ -304,7 +304,7 @@ static int pix_abs16_xy2_c(MpegEncContext *v, const uint8_t *pix1, const uint8_t
     return s;
 }
 
-static inline int pix_abs8_c(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+static inline int pix_abs8_c(MPVEncContext *unused, const uint8_t *pix1, const uint8_t *pix2,
                              ptrdiff_t stride, int h)
 {
     int s = 0, i;
@@ -324,7 +324,7 @@ static inline int pix_abs8_c(MpegEncContext *v, const uint8_t *pix1, const uint8
     return s;
 }
 
-static inline int pix_median_abs8_c(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+static inline int pix_median_abs8_c(MPVEncContext *unused, const uint8_t *pix1, const uint8_t *pix2,
                              ptrdiff_t stride, int h)
 {
     int s = 0, i, j;
@@ -355,7 +355,7 @@ static inline int pix_median_abs8_c(MpegEncContext *v, const uint8_t *pix1, cons
     return s;
 }
 
-static int pix_abs8_x2_c(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+static int pix_abs8_x2_c(MPVEncContext *unused, const uint8_t *pix1, const uint8_t *pix2,
                          ptrdiff_t stride, int h)
 {
     int s = 0, i;
@@ -375,7 +375,7 @@ static int pix_abs8_x2_c(MpegEncContext *v, const uint8_t *pix1, const uint8_t *
     return s;
 }
 
-static int pix_abs8_y2_c(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+static int pix_abs8_y2_c(MPVEncContext *unused, const uint8_t *pix1, const uint8_t *pix2,
                          ptrdiff_t stride, int h)
 {
     int s = 0, i;
@@ -397,7 +397,7 @@ static int pix_abs8_y2_c(MpegEncContext *v, const uint8_t *pix1, const uint8_t *
     return s;
 }
 
-static int pix_abs8_xy2_c(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+static int pix_abs8_xy2_c(MPVEncContext *unused, const uint8_t *pix1, const uint8_t *pix2,
                           ptrdiff_t stride, int h)
 {
     int s = 0, i;
@@ -419,7 +419,7 @@ static int pix_abs8_xy2_c(MpegEncContext *v, const uint8_t *pix1, const uint8_t
     return s;
 }
 
-static int nsse16_c(MpegEncContext *c, const uint8_t *s1, const uint8_t *s2,
+static int nsse16_c(MPVEncContext *const c, const uint8_t *s1, const uint8_t *s2,
                     ptrdiff_t stride, int h)
 {
     int score1 = 0, score2 = 0, x, y;
@@ -439,12 +439,12 @@ static int nsse16_c(MpegEncContext *c, const uint8_t *s1, const uint8_t *s2,
     }
 
     if (c)
-        return score1 + FFABS(score2) * c->avctx->nsse_weight;
+        return score1 + FFABS(score2) * c->c.avctx->nsse_weight;
     else
         return score1 + FFABS(score2) * 8;
 }
 
-static int nsse8_c(MpegEncContext *c, const uint8_t *s1, const uint8_t *s2,
+static int nsse8_c(MPVEncContext *const c, const uint8_t *s1, const uint8_t *s2,
                    ptrdiff_t stride, int h)
 {
     int score1 = 0, score2 = 0, x, y;
@@ -464,12 +464,12 @@ static int nsse8_c(MpegEncContext *c, const uint8_t *s1, const uint8_t *s2,
     }
 
     if (c)
-        return score1 + FFABS(score2) * c->avctx->nsse_weight;
+        return score1 + FFABS(score2) * c->c.avctx->nsse_weight;
     else
         return score1 + FFABS(score2) * 8;
 }
 
-static int zero_cmp(MpegEncContext *s, const uint8_t *a, const uint8_t *b,
+static int zero_cmp(MPVEncContext *s, const uint8_t *a, const uint8_t *b,
                     ptrdiff_t stride, int h)
 {
     return 0;
@@ -546,7 +546,7 @@ av_cold int ff_set_cmp(const MECmpContext *c, me_cmp_func *cmp, int type, int mp
 
 #define BUTTERFLYA(x, y) (FFABS((x) + (y)) + FFABS((x) - (y)))
 
-static int hadamard8_diff8x8_c(MpegEncContext *s, const uint8_t *dst,
+static int hadamard8_diff8x8_c(MPVEncContext *unused, const uint8_t *dst,
                                const uint8_t *src, ptrdiff_t stride, int h)
 {
     int i, temp[64], sum = 0;
@@ -596,7 +596,7 @@ static int hadamard8_diff8x8_c(MpegEncContext *s, const uint8_t *dst,
     return sum;
 }
 
-static int hadamard8_intra8x8_c(MpegEncContext *s, const uint8_t *src,
+static int hadamard8_intra8x8_c(MPVEncContext *unused, const uint8_t *src,
                                 const uint8_t *dummy, ptrdiff_t stride, int h)
 {
     int i, temp[64], sum = 0;
@@ -646,7 +646,7 @@ static int hadamard8_intra8x8_c(MpegEncContext *s, const uint8_t *src,
     return sum;
 }
 
-static int dct_sad8x8_c(MpegEncContext *s, const uint8_t *src1,
+static int dct_sad8x8_c(MPVEncContext *const s, const uint8_t *src1,
                         const uint8_t *src2, ptrdiff_t stride, int h)
 {
     LOCAL_ALIGNED_16(int16_t, temp, [64]);
@@ -685,7 +685,7 @@ static int dct_sad8x8_c(MpegEncContext *s, const uint8_t *src1,
         DST(7, (a4 >> 2) - a7);                         \
     }
 
-static int dct264_sad8x8_c(MpegEncContext *s, const uint8_t *src1,
+static int dct264_sad8x8_c(MPVEncContext *const s, const uint8_t *src1,
                            const uint8_t *src2, ptrdiff_t stride, int h)
 {
     int16_t dct[8][8];
@@ -710,7 +710,7 @@ static int dct264_sad8x8_c(MpegEncContext *s, const uint8_t *src1,
 }
 #endif
 
-static int dct_max8x8_c(MpegEncContext *s, const uint8_t *src1,
+static int dct_max8x8_c(MPVEncContext *const s, const uint8_t *src1,
                         const uint8_t *src2, ptrdiff_t stride, int h)
 {
     LOCAL_ALIGNED_16(int16_t, temp, [64]);
@@ -725,22 +725,22 @@ static int dct_max8x8_c(MpegEncContext *s, const uint8_t *src1,
     return sum;
 }
 
-static int quant_psnr8x8_c(MpegEncContext *s, const uint8_t *src1,
+static int quant_psnr8x8_c(MPVEncContext *const s, const uint8_t *src1,
                            const uint8_t *src2, ptrdiff_t stride, int h)
 {
     LOCAL_ALIGNED_16(int16_t, temp, [64 * 2]);
     int16_t *const bak = temp + 64;
     int sum = 0, i;
 
-    s->mb_intra = 0;
+    s->c.mb_intra = 0;
 
     s->pdsp.diff_pixels_unaligned(temp, src1, src2, stride);
 
     memcpy(bak, temp, 64 * sizeof(int16_t));
 
-    s->block_last_index[0 /* FIXME */] =
-        s->dct_quantize(s, temp, 0 /* FIXME */, s->qscale, &i);
-    s->dct_unquantize_inter(s, temp, 0, s->qscale);
+    s->c.block_last_index[0 /* FIXME */] =
+        s->dct_quantize(s, temp, 0 /* FIXME */, s->c.qscale, &i);
+    s->c.dct_unquantize_inter(&s->c, temp, 0, s->c.qscale);
     ff_simple_idct_int16_8bit(temp); // FIXME
 
     for (i = 0; i < 64; i++)
@@ -749,10 +749,10 @@ static int quant_psnr8x8_c(MpegEncContext *s, const uint8_t *src1,
     return sum;
 }
 
-static int rd8x8_c(MpegEncContext *s, const uint8_t *src1, const uint8_t *src2,
+static int rd8x8_c(MPVEncContext *const s, const uint8_t *src1, const uint8_t *src2,
                    ptrdiff_t stride, int h)
 {
-    const uint8_t *scantable = s->intra_scantable.permutated;
+    const uint8_t *scantable = s->c.intra_scantable.permutated;
     LOCAL_ALIGNED_16(int16_t, temp, [64]);
     LOCAL_ALIGNED_16(uint8_t, lsrc1, [64]);
     LOCAL_ALIGNED_16(uint8_t, lsrc2, [64]);
@@ -765,13 +765,13 @@ static int rd8x8_c(MpegEncContext *s, const uint8_t *src1, const uint8_t *src2,
 
     s->pdsp.diff_pixels(temp, lsrc1, lsrc2, 8);
 
-    s->block_last_index[0 /* FIXME */] =
+    s->c.block_last_index[0 /* FIXME */] =
     last                               =
-        s->dct_quantize(s, temp, 0 /* FIXME */, s->qscale, &i);
+        s->dct_quantize(s, temp, 0 /* FIXME */, s->c.qscale, &i);
 
     bits = 0;
 
-    if (s->mb_intra) {
+    if (s->c.mb_intra) {
         start_i     = 1;
         length      = s->intra_ac_vlc_length;
         last_length = s->intra_ac_vlc_last_length;
@@ -811,23 +811,23 @@ static int rd8x8_c(MpegEncContext *s, const uint8_t *src1, const uint8_t *src2,
     }
 
     if (last >= 0) {
-        if (s->mb_intra)
-            s->dct_unquantize_intra(s, temp, 0, s->qscale);
+        if (s->c.mb_intra)
+            s->c.dct_unquantize_intra(&s->c, temp, 0, s->c.qscale);
         else
-            s->dct_unquantize_inter(s, temp, 0, s->qscale);
+            s->c.dct_unquantize_inter(&s->c, temp, 0, s->c.qscale);
     }
 
-    s->idsp.idct_add(lsrc2, 8, temp);
+    s->c.idsp.idct_add(lsrc2, 8, temp);
 
     distortion = s->sse_cmp[1](NULL, lsrc2, lsrc1, 8, 8);
 
-    return distortion + ((bits * s->qscale * s->qscale * 109 + 64) >> 7);
+    return distortion + ((bits * s->c.qscale * s->c.qscale * 109 + 64) >> 7);
 }
 
-static int bit8x8_c(MpegEncContext *s, const uint8_t *src1, const uint8_t *src2,
+static int bit8x8_c(MPVEncContext *const s, const uint8_t *src1, const uint8_t *src2,
                     ptrdiff_t stride, int h)
 {
-    const uint8_t *scantable = s->intra_scantable.permutated;
+    const uint8_t *scantable = s->c.intra_scantable.permutated;
     LOCAL_ALIGNED_16(int16_t, temp, [64]);
     int i, last, run, bits, level, start_i;
     const int esc_length = s->ac_esc_length;
@@ -835,13 +835,13 @@ static int bit8x8_c(MpegEncContext *s, const uint8_t *src1, const uint8_t *src2,
 
     s->pdsp.diff_pixels_unaligned(temp, src1, src2, stride);
 
-    s->block_last_index[0 /* FIXME */] =
+    s->c.block_last_index[0 /* FIXME */] =
     last                               =
-        s->dct_quantize(s, temp, 0 /* FIXME */, s->qscale, &i);
+        s->dct_quantize(s, temp, 0 /* FIXME */, s->c.qscale, &i);
 
     bits = 0;
 
-    if (s->mb_intra) {
+    if (s->c.mb_intra) {
         start_i     = 1;
         length      = s->intra_ac_vlc_length;
         last_length = s->intra_ac_vlc_last_length;
@@ -884,7 +884,7 @@ static int bit8x8_c(MpegEncContext *s, const uint8_t *src1, const uint8_t *src2,
 }
 
 #define VSAD_INTRA(size)                                                \
-static int vsad_intra ## size ## _c(MpegEncContext *c,                  \
+static int vsad_intra ## size ## _c(MPVEncContext *unused,              \
                                     const uint8_t *s, const uint8_t *dummy, \
                                     ptrdiff_t stride, int h)            \
 {                                                                       \
@@ -906,7 +906,7 @@ VSAD_INTRA(8)
 VSAD_INTRA(16)
 
 #define VSAD(size)                                                             \
-static int vsad ## size ## _c(MpegEncContext *c,                               \
+static int vsad ## size ## _c(MPVEncContext *unused,                           \
                               const uint8_t *s1, const uint8_t *s2,            \
                               ptrdiff_t stride, int h)                               \
 {                                                                              \
@@ -926,7 +926,7 @@ VSAD(16)
 
 #define SQ(a) ((a) * (a))
 #define VSSE_INTRA(size)                                                \
-static int vsse_intra ## size ## _c(MpegEncContext *c,                  \
+static int vsse_intra ## size ## _c(MPVEncContext *unused,              \
                                     const uint8_t *s, const uint8_t *dummy, \
                                     ptrdiff_t stride, int h)            \
 {                                                                       \
@@ -948,8 +948,8 @@ VSSE_INTRA(8)
 VSSE_INTRA(16)
 
 #define VSSE(size)                                                             \
-static int vsse ## size ## _c(MpegEncContext *c, const uint8_t *s1, const uint8_t *s2, \
-                              ptrdiff_t stride, int h)                         \
+static int vsse ## size ## _c(MPVEncContext *unused, const uint8_t *s1,        \
+                              const uint8_t *s2, ptrdiff_t stride, int h)      \
 {                                                                              \
     int score = 0, x, y;                                                       \
                                                                                \
@@ -966,8 +966,8 @@ VSSE(8)
 VSSE(16)
 
 #define WRAPPER8_16_SQ(name8, name16)                                   \
-static int name16(MpegEncContext *s, const uint8_t *dst, const uint8_t *src, \
-                  ptrdiff_t stride, int h)                              \
+static int name16(MPVEncContext *const s, const uint8_t *dst,           \
+                  const uint8_t *src, ptrdiff_t stride, int h)          \
 {                                                                       \
     int score = 0;                                                      \
                                                                         \
diff --git a/libavcodec/me_cmp.h b/libavcodec/me_cmp.h
index 0857ed03e2..f1dbcd5146 100644
--- a/libavcodec/me_cmp.h
+++ b/libavcodec/me_cmp.h
@@ -41,13 +41,13 @@ EXTERN const uint32_t ff_square_tab[512];
  * !future video codecs might need functions with less strict alignment
  */
 
-struct MpegEncContext;
+typedef struct MPVEncContext MPVEncContext;
 /* Motion estimation:
  * h is limited to { width / 2, width, 2 * width },
  * but never larger than 16 and never smaller than 2.
  * Although currently h < 4 is not used as functions with
  * width < 8 are neither used nor implemented. */
-typedef int (*me_cmp_func)(struct MpegEncContext *c,
+typedef int (*me_cmp_func)(MPVEncContext *c,
                            const uint8_t *blk1 /* align width (8 or 16) */,
                            const uint8_t *blk2 /* align 1 */, ptrdiff_t stride,
                            int h);
@@ -86,7 +86,7 @@ void ff_me_cmp_init_mips(MECmpContext *c, AVCodecContext *avctx);
  * Fill the function pointer array cmp[6] with me_cmp_funcs from
  * c based upon type. If mpvenc is not set, an error is returned
  * if the type of comparison functions requires an initialized
- * MpegEncContext.
+ * MPVEncContext.
  */
 int ff_set_cmp(const MECmpContext *c, me_cmp_func *cmp,
                int type, int mpvenc);
diff --git a/libavcodec/mips/me_cmp_mips.h b/libavcodec/mips/me_cmp_mips.h
index 72b7de70b4..7e2c926d3a 100644
--- a/libavcodec/mips/me_cmp_mips.h
+++ b/libavcodec/mips/me_cmp_mips.h
@@ -21,38 +21,38 @@
 #ifndef AVCODEC_MIPS_ME_CMP_MIPS_H
 #define AVCODEC_MIPS_ME_CMP_MIPS_H
 
-#include "../mpegvideo.h"
+#include "../mpegvideoenc.h"
 #include "libavcodec/bit_depth_template.c"
 
-int ff_hadamard8_diff8x8_msa(MpegEncContext *s, const uint8_t *dst, const uint8_t *src,
+int ff_hadamard8_diff8x8_msa(MPVEncContext *s, const uint8_t *dst, const uint8_t *src,
                              ptrdiff_t stride, int h);
-int ff_hadamard8_intra8x8_msa(MpegEncContext *s, const uint8_t *dst, const uint8_t *src,
+int ff_hadamard8_intra8x8_msa(MPVEncContext *s, const uint8_t *dst, const uint8_t *src,
                               ptrdiff_t stride, int h);
-int ff_hadamard8_diff16_msa(MpegEncContext *s, const uint8_t *dst, const uint8_t *src,
+int ff_hadamard8_diff16_msa(MPVEncContext *s, const uint8_t *dst, const uint8_t *src,
                             ptrdiff_t stride, int h);
-int ff_hadamard8_intra16_msa(MpegEncContext *s, const uint8_t *dst, const uint8_t *src,
+int ff_hadamard8_intra16_msa(MPVEncContext *s, const uint8_t *dst, const uint8_t *src,
                              ptrdiff_t stride, int h);
-int ff_pix_abs16_msa(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_pix_abs16_msa(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                      ptrdiff_t stride, int h);
-int ff_pix_abs16_x2_msa(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_pix_abs16_x2_msa(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                         ptrdiff_t stride, int h);
-int ff_pix_abs16_y2_msa(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_pix_abs16_y2_msa(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                         ptrdiff_t stride, int h);
-int ff_pix_abs16_xy2_msa(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_pix_abs16_xy2_msa(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                          ptrdiff_t stride, int h);
-int ff_pix_abs8_msa(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_pix_abs8_msa(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                     ptrdiff_t stride, int h);
-int ff_pix_abs8_x2_msa(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_pix_abs8_x2_msa(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                        ptrdiff_t stride, int h);
-int ff_pix_abs8_y2_msa(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_pix_abs8_y2_msa(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                        ptrdiff_t stride, int h);
-int ff_pix_abs8_xy2_msa(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_pix_abs8_xy2_msa(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                         ptrdiff_t stride, int h);
-int ff_sse16_msa(MpegEncContext *v, const uint8_t *pu8Src, const uint8_t *pu8Ref,
+int ff_sse16_msa(MPVEncContext *v, const uint8_t *pu8Src, const uint8_t *pu8Ref,
                  ptrdiff_t stride, int i32Height);
-int ff_sse8_msa(MpegEncContext *v, const uint8_t *pu8Src, const uint8_t *pu8Ref,
+int ff_sse8_msa(MPVEncContext *v, const uint8_t *pu8Src, const uint8_t *pu8Ref,
                 ptrdiff_t stride, int i32Height);
-int ff_sse4_msa(MpegEncContext *v, const uint8_t *pu8Src, const uint8_t *pu8Ref,
+int ff_sse4_msa(MPVEncContext *v, const uint8_t *pu8Src, const uint8_t *pu8Ref,
                 ptrdiff_t stride, int i32Height);
 void ff_add_pixels8_msa(const uint8_t *restrict pixels, int16_t *block,
                         ptrdiff_t stride);
diff --git a/libavcodec/mips/me_cmp_msa.c b/libavcodec/mips/me_cmp_msa.c
index 351494161f..8ecc6352e6 100644
--- a/libavcodec/mips/me_cmp_msa.c
+++ b/libavcodec/mips/me_cmp_msa.c
@@ -732,79 +732,79 @@ static int32_t hadamard_intra_8x8_msa(const uint8_t *src, int32_t src_stride,
     return sum_res;
 }
 
-int ff_pix_abs16_msa(MpegEncContext *v, const uint8_t *src, const uint8_t *ref,
+int ff_pix_abs16_msa(MPVEncContext *v, const uint8_t *src, const uint8_t *ref,
                      ptrdiff_t stride, int height)
 {
     return sad_16width_msa(src, stride, ref, stride, height);
 }
 
-int ff_pix_abs8_msa(MpegEncContext *v, const uint8_t *src, const uint8_t *ref,
+int ff_pix_abs8_msa(MPVEncContext *v, const uint8_t *src, const uint8_t *ref,
                     ptrdiff_t stride, int height)
 {
     return sad_8width_msa(src, stride, ref, stride, height);
 }
 
-int ff_pix_abs16_x2_msa(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_pix_abs16_x2_msa(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                         ptrdiff_t stride, int h)
 {
     return sad_horiz_bilinear_filter_16width_msa(pix1, stride, pix2, stride, h);
 }
 
-int ff_pix_abs16_y2_msa(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_pix_abs16_y2_msa(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                         ptrdiff_t stride, int h)
 {
     return sad_vert_bilinear_filter_16width_msa(pix1, stride, pix2, stride, h);
 }
 
-int ff_pix_abs16_xy2_msa(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_pix_abs16_xy2_msa(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                          ptrdiff_t stride, int h)
 {
     return sad_hv_bilinear_filter_16width_msa(pix1, stride, pix2, stride, h);
 }
 
-int ff_pix_abs8_x2_msa(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_pix_abs8_x2_msa(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                        ptrdiff_t stride, int h)
 {
     return sad_horiz_bilinear_filter_8width_msa(pix1, stride, pix2, stride, h);
 }
 
-int ff_pix_abs8_y2_msa(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_pix_abs8_y2_msa(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                        ptrdiff_t stride, int h)
 {
     return sad_vert_bilinear_filter_8width_msa(pix1, stride, pix2, stride, h);
 }
 
-int ff_pix_abs8_xy2_msa(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_pix_abs8_xy2_msa(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                         ptrdiff_t stride, int h)
 {
     return sad_hv_bilinear_filter_8width_msa(pix1, stride, pix2, stride, h);
 }
 
-int ff_sse16_msa(MpegEncContext *v, const uint8_t *src, const uint8_t *ref,
+int ff_sse16_msa(MPVEncContext *v, const uint8_t *src, const uint8_t *ref,
                  ptrdiff_t stride, int height)
 {
     return sse_16width_msa(src, stride, ref, stride, height);
 }
 
-int ff_sse8_msa(MpegEncContext *v, const uint8_t *src, const uint8_t *ref,
+int ff_sse8_msa(MPVEncContext *v, const uint8_t *src, const uint8_t *ref,
                 ptrdiff_t stride, int height)
 {
     return sse_8width_msa(src, stride, ref, stride, height);
 }
 
-int ff_sse4_msa(MpegEncContext *v, const uint8_t *src, const uint8_t *ref,
+int ff_sse4_msa(MPVEncContext *v, const uint8_t *src, const uint8_t *ref,
                 ptrdiff_t stride, int height)
 {
     return sse_4width_msa(src, stride, ref, stride, height);
 }
 
-int ff_hadamard8_diff8x8_msa(MpegEncContext *s, const uint8_t *dst, const uint8_t *src,
+int ff_hadamard8_diff8x8_msa(MPVEncContext *s, const uint8_t *dst, const uint8_t *src,
                              ptrdiff_t stride, int h)
 {
     return hadamard_diff_8x8_msa(src, stride, dst, stride);
 }
 
-int ff_hadamard8_intra8x8_msa(MpegEncContext *s, const uint8_t *src, const uint8_t *dummy,
+int ff_hadamard8_intra8x8_msa(MPVEncContext *s, const uint8_t *src, const uint8_t *dummy,
                               ptrdiff_t stride, int h)
 {
     return hadamard_intra_8x8_msa(src, stride, dummy, stride);
@@ -812,7 +812,7 @@ int ff_hadamard8_intra8x8_msa(MpegEncContext *s, const uint8_t *src, const uint8
 
 /* Hadamard Transform functions */
 #define WRAPPER8_16_SQ(name8, name16)                      \
-int name16(MpegEncContext *s, const uint8_t *dst, const uint8_t *src,  \
+int name16(MPVEncContext *s, const uint8_t *dst, const uint8_t *src,  \
            ptrdiff_t stride, int h)                        \
 {                                                          \
     int score = 0;                                         \
diff --git a/libavcodec/mips/mpegvideo_mips.h b/libavcodec/mips/mpegvideo_mips.h
index 760d7b3295..72ffed6985 100644
--- a/libavcodec/mips/mpegvideo_mips.h
+++ b/libavcodec/mips/mpegvideo_mips.h
@@ -22,6 +22,7 @@
 #define AVCODEC_MIPS_MPEGVIDEO_MIPS_H
 
 #include "libavcodec/mpegvideo.h"
+#include "libavcodec/mpegvideoenc.h"
 
 void ff_dct_unquantize_h263_intra_mmi(MpegEncContext *s, int16_t *block,
         int n, int qscale);
@@ -33,6 +34,6 @@ void ff_dct_unquantize_mpeg1_inter_mmi(MpegEncContext *s, int16_t *block,
         int n, int qscale);
 void ff_dct_unquantize_mpeg2_intra_mmi(MpegEncContext *s, int16_t *block,
         int n, int qscale);
-void ff_denoise_dct_mmi(MpegEncContext *s, int16_t *block);
+void ff_denoise_dct_mmi(MPVEncContext *s, int16_t *block);
 
 #endif /* AVCODEC_MIPS_MPEGVIDEO_MIPS_H */
diff --git a/libavcodec/mips/mpegvideoenc_init_mips.c b/libavcodec/mips/mpegvideoenc_init_mips.c
index 5ef0664937..7831973eb8 100644
--- a/libavcodec/mips/mpegvideoenc_init_mips.c
+++ b/libavcodec/mips/mpegvideoenc_init_mips.c
@@ -23,7 +23,7 @@
 #include "libavcodec/mpegvideoenc.h"
 #include "mpegvideo_mips.h"
 
-av_cold void ff_mpvenc_dct_init_mips(MpegEncContext *s)
+av_cold void ff_mpvenc_dct_init_mips(MPVEncContext *s)
 {
     int cpu_flags = av_get_cpu_flags();
 
diff --git a/libavcodec/mips/mpegvideoenc_mmi.c b/libavcodec/mips/mpegvideoenc_mmi.c
index 65da155e9f..085be3b0ec 100644
--- a/libavcodec/mips/mpegvideoenc_mmi.c
+++ b/libavcodec/mips/mpegvideoenc_mmi.c
@@ -25,9 +25,9 @@
 #include "mpegvideo_mips.h"
 #include "libavutil/mips/mmiutils.h"
 
-void ff_denoise_dct_mmi(MpegEncContext *s, int16_t *block)
+void ff_denoise_dct_mmi(MPVEncContext *s, int16_t *block)
 {
-    const int intra = s->mb_intra;
+    const int intra = s->c.mb_intra;
     int *sum = s->dct_error_sum[intra];
     uint16_t *offset = s->dct_offset[intra];
     double ftmp[8];
diff --git a/libavcodec/mips/mpegvideoencdsp_init_mips.c b/libavcodec/mips/mpegvideoencdsp_init_mips.c
index 3efbeec34a..24a17b91db 100644
--- a/libavcodec/mips/mpegvideoencdsp_init_mips.c
+++ b/libavcodec/mips/mpegvideoencdsp_init_mips.c
@@ -21,6 +21,7 @@
 #include "libavutil/attributes.h"
 #include "libavutil/mips/cpu.h"
 #include "libavcodec/bit_depth_template.c"
+#include "libavcodec/mpegvideoencdsp.h"
 #include "h263dsp_mips.h"
 
 av_cold void ff_mpegvideoencdsp_init_mips(MpegvideoEncDSPContext *c,
diff --git a/libavcodec/mips/pixblockdsp_init_mips.c b/libavcodec/mips/pixblockdsp_init_mips.c
index 2e2d70953b..00f189d558 100644
--- a/libavcodec/mips/pixblockdsp_init_mips.c
+++ b/libavcodec/mips/pixblockdsp_init_mips.c
@@ -20,6 +20,7 @@
  */
 
 #include "libavutil/mips/cpu.h"
+#include "libavcodec/pixblockdsp.h"
 #include "pixblockdsp_mips.h"
 
 void ff_pixblockdsp_init_mips(PixblockDSPContext *c, AVCodecContext *avctx,
diff --git a/libavcodec/mips/pixblockdsp_mips.h b/libavcodec/mips/pixblockdsp_mips.h
index 7fd137cd09..fc387ea427 100644
--- a/libavcodec/mips/pixblockdsp_mips.h
+++ b/libavcodec/mips/pixblockdsp_mips.h
@@ -22,7 +22,8 @@
 #ifndef AVCODEC_MIPS_PIXBLOCKDSP_MIPS_H
 #define AVCODEC_MIPS_PIXBLOCKDSP_MIPS_H
 
-#include "../mpegvideo.h"
+#include <stdint.h>
+#include <stddef.h>
 
 void ff_diff_pixels_msa(int16_t *restrict block, const uint8_t *src1,
                         const uint8_t *src2, ptrdiff_t stride);
diff --git a/libavcodec/mjpegenc.c b/libavcodec/mjpegenc.c
index a6f202da0a..668065011c 100644
--- a/libavcodec/mjpegenc.c
+++ b/libavcodec/mjpegenc.c
@@ -61,8 +61,8 @@ typedef struct MJpegHuffmanCode {
 
 /* The following is the private context of MJPEG/AMV decoder.
  * Note that when using slice threading only the main thread's
- * MpegEncContext is followed by a MjpegContext; the other threads
- * can access this shared context via MpegEncContext.mjpeg. */
+ * MPVEncContext is followed by a MjpegContext; the other threads
+ * can access this shared context via MPVEncContext.mjpeg. */
 typedef struct MJPEGEncContext {
     MPVMainEncContext mpeg;
     MJpegContext   mjpeg;
@@ -92,22 +92,22 @@ static av_cold void init_uni_ac_vlc(const uint8_t huff_size_ac[256],
     }
 }
 
-static void mjpeg_encode_picture_header(MpegEncContext *s)
+static void mjpeg_encode_picture_header(MPVEncContext *const s)
 {
-    ff_mjpeg_encode_picture_header(s->avctx, &s->pb, s->cur_pic.ptr->f, s->mjpeg_ctx,
-                                   s->intra_scantable.permutated, 0,
-                                   s->intra_matrix, s->chroma_intra_matrix,
-                                   s->slice_context_count > 1);
+    ff_mjpeg_encode_picture_header(s->c.avctx, &s->pb, s->c.cur_pic.ptr->f, s->mjpeg_ctx,
+                                   s->c.intra_scantable.permutated, 0,
+                                   s->c.intra_matrix, s->c.chroma_intra_matrix,
+                                   s->c.slice_context_count > 1);
 
     s->esc_pos = put_bytes_count(&s->pb, 0);
-    for (int i = 1; i < s->slice_context_count; i++)
-        s->thread_context[i]->esc_pos = 0;
+    for (int i = 1; i < s->c.slice_context_count; i++)
+        s->c.enc_contexts[i]->esc_pos = 0;
 }
 
 static int mjpeg_amv_encode_picture_header(MPVMainEncContext *const m)
 {
     MJPEGEncContext *const m2 = (MJPEGEncContext*)m;
-    MpegEncContext *const s = &m->s;
+    MPVEncContext *const s = &m->s;
     av_assert2(s->mjpeg_ctx == &m2->mjpeg);
     /* s->huffman == HUFFMAN_TABLE_OPTIMAL can only be true for MJPEG. */
     if (!CONFIG_MJPEG_ENCODER || m2->mjpeg.huffman != HUFFMAN_TABLE_OPTIMAL)
@@ -120,11 +120,11 @@ static int mjpeg_amv_encode_picture_header(MPVMainEncContext *const m)
 /**
  * Encodes and outputs the entire frame in the JPEG format.
  *
- * @param s The MpegEncContext.
+ * @param main The MPVMainEncContext.
  */
 static void mjpeg_encode_picture_frame(MPVMainEncContext *const main)
 {
-    MpegEncContext *const s = &main->s;
+    MPVEncContext *const s = &main->s;
     int nbits, code, table_id;
     MJpegContext *m = s->mjpeg_ctx;
     uint8_t  *huff_size[4] = { m->huff_size_dc_luminance,
@@ -232,14 +232,14 @@ static void mjpeg_build_optimal_huffman(MJpegContext *m)
  *
  * Header + values + stuffing.
  *
- * @param s The MpegEncContext.
+ * @param s The MPVEncContext.
  * @return int Error code, 0 if successful.
  */
-int ff_mjpeg_encode_stuffing(MpegEncContext *s)
+int ff_mjpeg_encode_stuffing(MPVEncContext *const s)
 {
     MJpegContext *const m = s->mjpeg_ctx;
     PutBitContext *pbc = &s->pb;
-    int mb_y = s->mb_y - !s->mb_x;
+    int mb_y = s->c.mb_y - !s->c.mb_x;
     int ret;
 
 #if CONFIG_MJPEG_ENCODER
@@ -267,19 +267,19 @@ int ff_mjpeg_encode_stuffing(MpegEncContext *s)
     ret = ff_mpv_reallocate_putbitbuffer(s, put_bits_count(&s->pb) / 8 + 100,
                                             put_bits_count(&s->pb) / 4 + 1000);
     if (ret < 0) {
-        av_log(s->avctx, AV_LOG_ERROR, "Buffer reallocation failed\n");
+        av_log(s->c.avctx, AV_LOG_ERROR, "Buffer reallocation failed\n");
         goto fail;
     }
 
     ff_mjpeg_escape_FF(pbc, s->esc_pos);
 
-    if (s->slice_context_count > 1 && mb_y < s->mb_height - 1)
+    if (s->c.slice_context_count > 1 && mb_y < s->c.mb_height - 1)
         put_marker(pbc, RST0 + (mb_y&7));
     s->esc_pos = put_bytes_count(pbc, 0);
 
 fail:
     for (int i = 0; i < 3; i++)
-        s->last_dc[i] = 128 << s->intra_dc_precision;
+        s->c.last_dc[i] = 128 << s->c.intra_dc_precision;
 
     return ret;
 }
@@ -287,14 +287,14 @@ fail:
 static int alloc_huffman(MJPEGEncContext *const m2)
 {
     MJpegContext   *const m = &m2->mjpeg;
-    MpegEncContext *const s = &m2->mpeg.s;
+    MPVEncContext *const s = &m2->mpeg.s;
     static const char blocks_per_mb[] = {
         [CHROMA_420] = 6, [CHROMA_422] = 8, [CHROMA_444] = 12
     };
     size_t num_blocks, num_codes;
 
     // Make sure we have enough space to hold this frame.
-    num_blocks = s->mb_num * blocks_per_mb[s->chroma_format];
+    num_blocks = s->c.mb_num * blocks_per_mb[s->c.chroma_format];
     num_codes = num_blocks * 64;
 
     m->huff_buffer = av_malloc_array(num_codes,
@@ -358,11 +358,11 @@ static void mjpeg_encode_coef(MJpegContext *s, uint8_t table_id, int val, int ru
 /**
  * Add the block's data into the JPEG buffer.
  *
- * @param s The MpegEncContext that contains the JPEG buffer.
+ * @param s The MPVEncContext that contains the JPEG buffer.
  * @param block The block.
  * @param n The block's index or number.
  */
-static void record_block(MpegEncContext *s, int16_t *block, int n)
+static void record_block(MPVEncContext *const s, int16_t block[], int n)
 {
     int i, j, table_id;
     int component, dc, last_index, val, run;
@@ -372,20 +372,20 @@ static void record_block(MpegEncContext *s, int16_t *block, int n)
     component = (n <= 3 ? 0 : (n&1) + 1);
     table_id = (n <= 3 ? 0 : 1);
     dc = block[0]; /* overflow is impossible */
-    val = dc - s->last_dc[component];
+    val = dc - s->c.last_dc[component];
 
     mjpeg_encode_coef(m, table_id, val, 0);
 
-    s->last_dc[component] = dc;
+    s->c.last_dc[component] = dc;
 
     /* AC coefs */
 
     run = 0;
-    last_index = s->block_last_index[n];
+    last_index = s->c.block_last_index[n];
     table_id |= 2;
 
     for(i=1;i<=last_index;i++) {
-        j = s->intra_scantable.permutated[i];
+        j = s->c.intra_scantable.permutated[i];
         val = block[j];
 
         if (val == 0) {
@@ -405,7 +405,7 @@ static void record_block(MpegEncContext *s, int16_t *block, int n)
         mjpeg_encode_code(m, table_id, 0);
 }
 
-static void encode_block(MpegEncContext *s, int16_t *block, int n)
+static void encode_block(MPVEncContext *const s, int16_t block[], int n)
 {
     int mant, nbits, code, i, j;
     int component, dc, run, last_index, val;
@@ -416,7 +416,7 @@ static void encode_block(MpegEncContext *s, int16_t *block, int n)
     /* DC coef */
     component = (n <= 3 ? 0 : (n&1) + 1);
     dc = block[0]; /* overflow is impossible */
-    val = dc - s->last_dc[component];
+    val = dc - s->c.last_dc[component];
     if (n < 4) {
         ff_mjpeg_encode_dc(&s->pb, val, m->huff_size_dc_luminance, m->huff_code_dc_luminance);
         huff_size_ac = m->huff_size_ac_luminance;
@@ -426,14 +426,14 @@ static void encode_block(MpegEncContext *s, int16_t *block, int n)
         huff_size_ac = m->huff_size_ac_chrominance;
         huff_code_ac = m->huff_code_ac_chrominance;
     }
-    s->last_dc[component] = dc;
+    s->c.last_dc[component] = dc;
 
     /* AC coefs */
 
     run = 0;
-    last_index = s->block_last_index[n];
+    last_index = s->c.block_last_index[n];
     for(i=1;i<=last_index;i++) {
-        j = s->intra_scantable.permutated[i];
+        j = s->c.intra_scantable.permutated[i];
         val = block[j];
         if (val == 0) {
             run++;
@@ -463,10 +463,10 @@ static void encode_block(MpegEncContext *s, int16_t *block, int n)
         put_bits(&s->pb, huff_size_ac[0], huff_code_ac[0]);
 }
 
-static void mjpeg_record_mb(MpegEncContext *const s, int16_t block[][64],
+static void mjpeg_record_mb(MPVEncContext *const s, int16_t block[][64],
                             int unused_x, int unused_y)
 {
-    if (s->chroma_format == CHROMA_444) {
+    if (s->c.chroma_format == CHROMA_444) {
         record_block(s, block[0], 0);
         record_block(s, block[2], 2);
         record_block(s, block[4], 4);
@@ -474,7 +474,7 @@ static void mjpeg_record_mb(MpegEncContext *const s, int16_t block[][64],
         record_block(s, block[5], 5);
         record_block(s, block[9], 9);
 
-        if (16*s->mb_x+8 < s->width) {
+        if (16*s->c.mb_x+8 < s->c.width) {
             record_block(s, block[1],   1);
             record_block(s, block[3],   3);
             record_block(s, block[6],   6);
@@ -485,7 +485,7 @@ static void mjpeg_record_mb(MpegEncContext *const s, int16_t block[][64],
     } else {
         for (int i = 0; i < 5; i++)
             record_block(s, block[i], i);
-        if (s->chroma_format == CHROMA_420) {
+        if (s->c.chroma_format == CHROMA_420) {
             record_block(s, block[5], 5);
         } else {
             record_block(s, block[6], 6);
@@ -495,10 +495,10 @@ static void mjpeg_record_mb(MpegEncContext *const s, int16_t block[][64],
     }
 }
 
-static void mjpeg_encode_mb(MpegEncContext *const s, int16_t block[][64],
+static void mjpeg_encode_mb(MPVEncContext *const s, int16_t block[][64],
                             int unused_x, int unused_y)
 {
-    if (s->chroma_format == CHROMA_444) {
+    if (s->c.chroma_format == CHROMA_444) {
         encode_block(s, block[0], 0);
         encode_block(s, block[2], 2);
         encode_block(s, block[4], 4);
@@ -506,7 +506,7 @@ static void mjpeg_encode_mb(MpegEncContext *const s, int16_t block[][64],
         encode_block(s, block[5], 5);
         encode_block(s, block[9], 9);
 
-        if (16 * s->mb_x + 8 < s->width) {
+        if (16 * s->c.mb_x + 8 < s->c.width) {
             encode_block(s, block[1], 1);
             encode_block(s, block[3], 3);
             encode_block(s, block[6], 6);
@@ -517,7 +517,7 @@ static void mjpeg_encode_mb(MpegEncContext *const s, int16_t block[][64],
     } else {
         for (int i = 0; i < 5; i++)
             encode_block(s, block[i], i);
-        if (s->chroma_format == CHROMA_420) {
+        if (s->c.chroma_format == CHROMA_420) {
             encode_block(s, block[5], 5);
         } else {
             encode_block(s, block[6], 6);
@@ -533,7 +533,7 @@ static av_cold int mjpeg_encode_init(AVCodecContext *avctx)
 {
     MJPEGEncContext *const m2 = avctx->priv_data;
     MJpegContext    *const m  = &m2->mjpeg;
-    MpegEncContext  *const s  = &m2->mpeg.s;
+    MPVEncContext  *const s  = &m2->mpeg.s;
     int ret;
 
     s->mjpeg_ctx = m;
@@ -597,7 +597,7 @@ static av_cold int mjpeg_encode_init(AVCodecContext *avctx)
     // Buffers start out empty.
     m->huff_ncode = 0;
 
-    if (s->slice_context_count > 1)
+    if (s->c.slice_context_count > 1)
         m->huffman = HUFFMAN_TABLE_DEFAULT;
 
     if (m->huffman == HUFFMAN_TABLE_OPTIMAL) {
@@ -615,7 +615,7 @@ static av_cold int mjpeg_encode_init(AVCodecContext *avctx)
 static int amv_encode_picture(AVCodecContext *avctx, AVPacket *pkt,
                               const AVFrame *pic_arg, int *got_packet)
 {
-    MpegEncContext *s = avctx->priv_data;
+    MPVEncContext *const s = avctx->priv_data;
     AVFrame *pic;
     int i, ret;
     int chroma_v_shift = 1; /* AMV is 420-only */
@@ -635,7 +635,7 @@ static int amv_encode_picture(AVCodecContext *avctx, AVPacket *pkt,
     //picture should be flipped upside-down
     for(i=0; i < 3; i++) {
         int vsample = i ? 2 >> chroma_v_shift : 2;
-        pic->data[i] += pic->linesize[i] * (vsample * s->height / V_MAX - 1);
+        pic->data[i] += pic->linesize[i] * (vsample * s->c.height / V_MAX - 1);
         pic->linesize[i] *= -1;
     }
     ret = ff_mpv_encode_picture(avctx, pkt, pic, got_packet);
diff --git a/libavcodec/mjpegenc.h b/libavcodec/mjpegenc.h
index ceacba9893..92feed28b4 100644
--- a/libavcodec/mjpegenc.h
+++ b/libavcodec/mjpegenc.h
@@ -56,9 +56,9 @@ typedef struct MJpegContext {
     uint8_t huff_size_ac_chrominance[256];  ///< AC chrominance Huffman table size.
     uint16_t huff_code_ac_chrominance[256]; ///< AC chrominance Huffman table codes.
 
-    /** Storage for AC luminance VLC (in MpegEncContext) */
+    /** Storage for AC luminance VLC */
     uint8_t uni_ac_vlc_len[64 * 64 * 2];
-    /** Storage for AC chrominance VLC (in MpegEncContext) */
+    /** Storage for AC chrominance VLC */
     uint8_t uni_chroma_ac_vlc_len[64 * 64 * 2];
 
     // Default DC tables have exactly 12 values
@@ -92,8 +92,8 @@ static inline void put_marker(PutBitContext *p, enum JpegMarker code)
     put_bits(p, 8, code);
 }
 
-typedef struct MpegEncContext MpegEncContext;
+typedef struct MPVEncContext MPVEncContext;
 
-int  ff_mjpeg_encode_stuffing(MpegEncContext *s);
+int ff_mjpeg_encode_stuffing(MPVEncContext *s);
 
 #endif /* AVCODEC_MJPEGENC_H */
diff --git a/libavcodec/motion_est.c b/libavcodec/motion_est.c
index ffad3dbc79..923bf5687b 100644
--- a/libavcodec/motion_est.c
+++ b/libavcodec/motion_est.c
@@ -46,7 +46,7 @@
 #define ME_MAP_SHIFT 3
 #define ME_MAP_MV_BITS 11
 
-static int sad_hpel_motion_search(MpegEncContext * s,
+static int sad_hpel_motion_search(MPVEncContext *const s,
                                   int *mx_ptr, int *my_ptr, int dmin,
                                   int src_index, int ref_index,
                                   int size, int h);
@@ -106,10 +106,10 @@ static int get_flags(MotionEstContext *c, int direct, int chroma){
            + (chroma ? FLAG_CHROMA : 0);
 }
 
-static av_always_inline int cmp_direct_inline(MpegEncContext *s, const int x, const int y, const int subx, const int suby,
+static av_always_inline int cmp_direct_inline(MPVEncContext *const s, const int x, const int y, const int subx, const int suby,
                       const int size, const int h, int ref_index, int src_index,
                       me_cmp_func cmp_func, me_cmp_func chroma_cmp_func, int qpel){
-    MotionEstContext * const c= &s->me;
+    MotionEstContext *const c = &s->c.me;
     const int stride= c->stride;
     const int hx = subx + x * (1 << (1 + qpel));
     const int hy = suby + y * (1 << (1 + qpel));
@@ -119,10 +119,10 @@ static av_always_inline int cmp_direct_inline(MpegEncContext *s, const int x, co
     //FIXME check chroma 4mv, (no crashes ...)
         av_assert2(x >= c->xmin && hx <= c->xmax<<(qpel+1) && y >= c->ymin && hy <= c->ymax<<(qpel+1));
         if(x >= c->xmin && hx <= c->xmax<<(qpel+1) && y >= c->ymin && hy <= c->ymax<<(qpel+1)){
-            const int time_pp= s->pp_time;
-            const int time_pb= s->pb_time;
+            const int time_pp = s->c.pp_time;
+            const int time_pb = s->c.pb_time;
             const int mask= 2*qpel+1;
-            if(s->mv_type==MV_TYPE_8X8){
+            if (s->c.mv_type == MV_TYPE_8X8) {
                 int i;
                 for(i=0; i<4; i++){
                     int fx = c->direct_basis_mv[i][0] + hx;
@@ -159,14 +159,14 @@ static av_always_inline int cmp_direct_inline(MpegEncContext *s, const int x, co
                     c->qpel_avg[1][bxy](c->temp     + 8*stride, ref[8] + (bx>>2) + (by>>2)*stride     + 8*stride, stride);
                     c->qpel_avg[1][bxy](c->temp + 8 + 8*stride, ref[8] + (bx>>2) + (by>>2)*stride + 8 + 8*stride, stride);
                 }else{
-                    av_assert2((fx>>1) + 16*s->mb_x >= -16);
-                    av_assert2((fy>>1) + 16*s->mb_y >= -16);
-                    av_assert2((fx>>1) + 16*s->mb_x <= s->width);
-                    av_assert2((fy>>1) + 16*s->mb_y <= s->height);
-                    av_assert2((bx>>1) + 16*s->mb_x >= -16);
-                    av_assert2((by>>1) + 16*s->mb_y >= -16);
-                    av_assert2((bx>>1) + 16*s->mb_x <= s->width);
-                    av_assert2((by>>1) + 16*s->mb_y <= s->height);
+                    av_assert2((fx>>1) + 16*s->c.mb_x >= -16);
+                    av_assert2((fy>>1) + 16*s->c.mb_y >= -16);
+                    av_assert2((fx>>1) + 16*s->c.mb_x <= s->c.width);
+                    av_assert2((fy>>1) + 16*s->c.mb_y <= s->c.height);
+                    av_assert2((bx>>1) + 16*s->c.mb_x >= -16);
+                    av_assert2((by>>1) + 16*s->c.mb_y >= -16);
+                    av_assert2((bx>>1) + 16*s->c.mb_x <= s->c.width);
+                    av_assert2((by>>1) + 16*s->c.mb_y <= s->c.height);
 
                     c->hpel_put[0][fxy](c->temp, ref[0] + (fx>>1) + (fy>>1)*stride, stride, 16);
                     c->hpel_avg[0][bxy](c->temp, ref[8] + (bx>>1) + (by>>1)*stride, stride, 16);
@@ -178,10 +178,10 @@ static av_always_inline int cmp_direct_inline(MpegEncContext *s, const int x, co
     return d;
 }
 
-static av_always_inline int cmp_inline(MpegEncContext *s, const int x, const int y, const int subx, const int suby,
+static av_always_inline int cmp_inline(MPVEncContext *const s, const int x, const int y, const int subx, const int suby,
                       const int size, const int h, int ref_index, int src_index,
                       me_cmp_func cmp_func, me_cmp_func chroma_cmp_func, int qpel, int chroma){
-    MotionEstContext * const c= &s->me;
+    MotionEstContext *const c = &s->c.me;
     const int stride= c->stride;
     const int uvstride= c->uvstride;
     const int dxy= subx + (suby<<(1+qpel)); //FIXME log2_subpel?
@@ -230,13 +230,13 @@ static av_always_inline int cmp_inline(MpegEncContext *s, const int x, const int
     return d;
 }
 
-static int cmp_simple(MpegEncContext *s, const int x, const int y,
+static int cmp_simple(MPVEncContext *const s, const int x, const int y,
                       int ref_index, int src_index,
                       me_cmp_func cmp_func, me_cmp_func chroma_cmp_func){
     return cmp_inline(s,x,y,0,0,0,16,ref_index,src_index, cmp_func, chroma_cmp_func, 0, 0);
 }
 
-static int cmp_fpel_internal(MpegEncContext *s, const int x, const int y,
+static int cmp_fpel_internal(MPVEncContext *const s, const int x, const int y,
                       const int size, const int h, int ref_index, int src_index,
                       me_cmp_func cmp_func, me_cmp_func chroma_cmp_func, const int flags){
     if(flags&FLAG_DIRECT){
@@ -246,7 +246,7 @@ static int cmp_fpel_internal(MpegEncContext *s, const int x, const int y,
     }
 }
 
-static int cmp_internal(MpegEncContext *s, const int x, const int y, const int subx, const int suby,
+static int cmp_internal(MPVEncContext *const s, const int x, const int y, const int subx, const int suby,
                       const int size, const int h, int ref_index, int src_index,
                       me_cmp_func cmp_func, me_cmp_func chroma_cmp_func, const int flags){
     if(flags&FLAG_DIRECT){
@@ -259,7 +259,7 @@ static int cmp_internal(MpegEncContext *s, const int x, const int y, const int s
 /** @brief compares a block (either a full macroblock or a partition thereof)
     against a proposed motion-compensated prediction of that block
  */
-static av_always_inline int cmp(MpegEncContext *s, const int x, const int y, const int subx, const int suby,
+static av_always_inline int cmp(MPVEncContext *const s, const int x, const int y, const int subx, const int suby,
                       const int size, const int h, int ref_index, int src_index,
                       me_cmp_func cmp_func, me_cmp_func chroma_cmp_func, const int flags){
     if(av_builtin_constant_p(flags) && av_builtin_constant_p(h) && av_builtin_constant_p(size)
@@ -274,7 +274,7 @@ static av_always_inline int cmp(MpegEncContext *s, const int x, const int y, con
     }
 }
 
-static int cmp_hpel(MpegEncContext *s, const int x, const int y, const int subx, const int suby,
+static int cmp_hpel(MPVEncContext *const s, const int x, const int y, const int subx, const int suby,
                       const int size, const int h, int ref_index, int src_index,
                       me_cmp_func cmp_func, me_cmp_func chroma_cmp_func, const int flags){
     if(flags&FLAG_DIRECT){
@@ -284,7 +284,7 @@ static int cmp_hpel(MpegEncContext *s, const int x, const int y, const int subx,
     }
 }
 
-static int cmp_qpel(MpegEncContext *s, const int x, const int y, const int subx, const int suby,
+static int cmp_qpel(MPVEncContext *const s, const int x, const int y, const int subx, const int suby,
                       const int size, const int h, int ref_index, int src_index,
                       me_cmp_func cmp_func, me_cmp_func chroma_cmp_func, const int flags){
     if(flags&FLAG_DIRECT){
@@ -296,7 +296,7 @@ static int cmp_qpel(MpegEncContext *s, const int x, const int y, const int subx,
 
 #include "motion_est_template.c"
 
-static int zero_cmp(MpegEncContext *s, const uint8_t *a, const uint8_t *b,
+static int zero_cmp(MPVEncContext *const s, const uint8_t *a, const uint8_t *b,
                     ptrdiff_t stride, int h)
 {
     return 0;
@@ -367,32 +367,32 @@ av_cold int ff_me_init(MotionEstContext *c, AVCodecContext *avctx,
     return 0;
 }
 
-void ff_me_init_pic(MpegEncContext *s)
+void ff_me_init_pic(MPVEncContext *const s)
 {
-    MotionEstContext * const c= &s->me;
+    MotionEstContext * const c= &s->c.me;
 
-/*FIXME s->no_rounding b_type*/
+/*FIXME s->c.no_rounding b_type*/
     if (c->avctx->flags & AV_CODEC_FLAG_QPEL) {
-        c->qpel_avg = s->qdsp.avg_qpel_pixels_tab;
-        if (s->no_rounding)
-            c->qpel_put = s->qdsp.put_no_rnd_qpel_pixels_tab;
+        c->qpel_avg = s->c.qdsp.avg_qpel_pixels_tab;
+        if (s->c.no_rounding)
+            c->qpel_put = s->c.qdsp.put_no_rnd_qpel_pixels_tab;
         else
-            c->qpel_put = s->qdsp.put_qpel_pixels_tab;
+            c->qpel_put = s->c.qdsp.put_qpel_pixels_tab;
     }
-    c->hpel_avg = s->hdsp.avg_pixels_tab;
-    if (s->no_rounding)
-        c->hpel_put = s->hdsp.put_no_rnd_pixels_tab;
+    c->hpel_avg = s->c.hdsp.avg_pixels_tab;
+    if (s->c.no_rounding)
+        c->hpel_put = s->c.hdsp.put_no_rnd_pixels_tab;
     else
-        c->hpel_put = s->hdsp.put_pixels_tab;
+        c->hpel_put = s->c.hdsp.put_pixels_tab;
 
-    if(s->linesize){
-        c->stride  = s->linesize;
-        c->uvstride= s->uvlinesize;
+    if (s->c.linesize) {
+        c->stride   = s->c.linesize;
+        c->uvstride = s->c.uvlinesize;
     }else{
-        c->stride  = 16*s->mb_width + 32;
-        c->uvstride=  8*s->mb_width + 16;
+        c->stride   = 16*s->c.mb_width + 32;
+        c->uvstride =  8*s->c.mb_width + 16;
     }
-    if (s->codec_id != AV_CODEC_ID_SNOW) {
+    if (s->c.codec_id != AV_CODEC_ID_SNOW) {
         c->hpel_put[2][0]= c->hpel_put[2][1]=
         c->hpel_put[2][2]= c->hpel_put[2][3]= zero_hpel;
     }
@@ -405,12 +405,12 @@ void ff_me_init_pic(MpegEncContext *s)
     COPY3_IF_LT(dminh, d, dx, x, dy, y)\
 }
 
-static int sad_hpel_motion_search(MpegEncContext * s,
+static int sad_hpel_motion_search(MPVEncContext *const s,
                                   int *mx_ptr, int *my_ptr, int dmin,
                                   int src_index, int ref_index,
                                   int size, int h)
 {
-    MotionEstContext * const c= &s->me;
+    MotionEstContext *const c = &s->c.me;
     const int penalty_factor= c->sub_penalty_factor;
     int mx, my, dminh;
     const uint8_t *pix, *ptr;
@@ -510,58 +510,58 @@ static int sad_hpel_motion_search(MpegEncContext * s,
     return dminh;
 }
 
-static inline void set_p_mv_tables(MpegEncContext * s, int mx, int my, int mv4)
+static inline void set_p_mv_tables(MPVEncContext *const s, int mx, int my, int mv4)
 {
-    const int xy= s->mb_x + s->mb_y*s->mb_stride;
+    const int xy = s->c.mb_x + s->c.mb_y * s->c.mb_stride;
 
     s->p_mv_table[xy][0] = mx;
     s->p_mv_table[xy][1] = my;
 
     /* has already been set to the 4 MV if 4MV is done */
     if(mv4){
-        int mot_xy= s->block_index[0];
-
-        s->cur_pic.motion_val[0][mot_xy    ][0] = mx;
-        s->cur_pic.motion_val[0][mot_xy    ][1] = my;
-        s->cur_pic.motion_val[0][mot_xy + 1][0] = mx;
-        s->cur_pic.motion_val[0][mot_xy + 1][1] = my;
-
-        mot_xy += s->b8_stride;
-        s->cur_pic.motion_val[0][mot_xy    ][0] = mx;
-        s->cur_pic.motion_val[0][mot_xy    ][1] = my;
-        s->cur_pic.motion_val[0][mot_xy + 1][0] = mx;
-        s->cur_pic.motion_val[0][mot_xy + 1][1] = my;
+        int mot_xy = s->c.block_index[0];
+
+        s->c.cur_pic.motion_val[0][mot_xy    ][0] = mx;
+        s->c.cur_pic.motion_val[0][mot_xy    ][1] = my;
+        s->c.cur_pic.motion_val[0][mot_xy + 1][0] = mx;
+        s->c.cur_pic.motion_val[0][mot_xy + 1][1] = my;
+
+        mot_xy += s->c.b8_stride;
+        s->c.cur_pic.motion_val[0][mot_xy    ][0] = mx;
+        s->c.cur_pic.motion_val[0][mot_xy    ][1] = my;
+        s->c.cur_pic.motion_val[0][mot_xy + 1][0] = mx;
+        s->c.cur_pic.motion_val[0][mot_xy + 1][1] = my;
     }
 }
 
 /**
  * get fullpel ME search limits.
  */
-static inline void get_limits(MpegEncContext *s, int x, int y, int bframe)
+static inline void get_limits(MPVEncContext *const s, int x, int y, int bframe)
 {
-    MotionEstContext * const c= &s->me;
+    MotionEstContext *const c = &s->c.me;
     int range= c->avctx->me_range >> (1 + !!(c->flags&FLAG_QPEL));
     int max_range = MAX_MV >> (1 + !!(c->flags&FLAG_QPEL));
 /*
     if(c->avctx->me_range) c->range= c->avctx->me_range >> 1;
     else                   c->range= 16;
 */
-    if (s->unrestricted_mv) {
+    if (s->c.unrestricted_mv) {
         c->xmin = - x - 16;
         c->ymin = - y - 16;
-        c->xmax = - x + s->width;
-        c->ymax = - y + s->height;
-    } else if (!(av_builtin_constant_p(bframe) && bframe) && s->out_format == FMT_H261){
+        c->xmax = - x + s->c.width;
+        c->ymax = - y + s->c.height;
+    } else if (!(av_builtin_constant_p(bframe) && bframe) && s->c.out_format == FMT_H261){
         // Search range of H.261 is different from other codec standards
         c->xmin = (x > 15) ? - 15 : 0;
         c->ymin = (y > 15) ? - 15 : 0;
-        c->xmax = (x < s->mb_width * 16 - 16) ? 15 : 0;
-        c->ymax = (y < s->mb_height * 16 - 16) ? 15 : 0;
+        c->xmax = (x < s->c.mb_width * 16 - 16) ? 15 : 0;
+        c->ymax = (y < s->c.mb_height * 16 - 16) ? 15 : 0;
     } else {
         c->xmin = - x;
         c->ymin = - y;
-        c->xmax = - x + s->mb_width *16 - 16;
-        c->ymax = - y + s->mb_height*16 - 16;
+        c->xmax = - x + s->c.mb_width *16 - 16;
+        c->ymax = - y + s->c.mb_height*16 - 16;
     }
     if(!range || range > max_range)
         range = max_range;
@@ -584,9 +584,9 @@ static inline void init_mv4_ref(MotionEstContext *c){
     c->src[3][0] = c->src[2][0] + 8;
 }
 
-static inline int h263_mv4_search(MpegEncContext *s, int mx, int my, int shift)
+static inline int h263_mv4_search(MPVEncContext *const s, int mx, int my, int shift)
 {
-    MotionEstContext * const c= &s->me;
+    MotionEstContext *const c = &s->c.me;
     const int size= 1;
     const int h=8;
     int block;
@@ -595,7 +595,7 @@ static inline int h263_mv4_search(MpegEncContext *s, int mx, int my, int shift)
     int same=1;
     const int stride= c->stride;
     const uint8_t *mv_penalty = c->current_mv_penalty;
-    int safety_clipping= s->unrestricted_mv && (s->width&15) && (s->height&15);
+    int safety_clipping = s->c.unrestricted_mv && (s->c.width&15) && (s->c.height&15);
 
     init_mv4_ref(c);
 
@@ -604,28 +604,28 @@ static inline int h263_mv4_search(MpegEncContext *s, int mx, int my, int shift)
         int pred_x4, pred_y4;
         int dmin4;
         static const int off[4]= {2, 1, 1, -1};
-        const int mot_stride = s->b8_stride;
-        const int mot_xy = s->block_index[block];
+        const int mot_stride = s->c.b8_stride;
+        const int mot_xy = s->c.block_index[block];
 
         if(safety_clipping){
-            c->xmax = - 16*s->mb_x + s->width  - 8*(block &1);
-            c->ymax = - 16*s->mb_y + s->height - 8*(block>>1);
+            c->xmax = - 16*s->c.mb_x + s->c.width  - 8*(block &1);
+            c->ymax = - 16*s->c.mb_y + s->c.height - 8*(block>>1);
         }
 
-        P_LEFT[0] = s->cur_pic.motion_val[0][mot_xy - 1][0];
-        P_LEFT[1] = s->cur_pic.motion_val[0][mot_xy - 1][1];
+        P_LEFT[0] = s->c.cur_pic.motion_val[0][mot_xy - 1][0];
+        P_LEFT[1] = s->c.cur_pic.motion_val[0][mot_xy - 1][1];
 
         if (P_LEFT[0] > c->xmax * (1 << shift)) P_LEFT[0] = c->xmax * (1 << shift);
 
         /* special case for first line */
-        if (s->first_slice_line && block<2) {
+        if (s->c.first_slice_line && block < 2) {
             c->pred_x= pred_x4= P_LEFT[0];
             c->pred_y= pred_y4= P_LEFT[1];
         } else {
-            P_TOP[0]      = s->cur_pic.motion_val[0][mot_xy - mot_stride             ][0];
-            P_TOP[1]      = s->cur_pic.motion_val[0][mot_xy - mot_stride             ][1];
-            P_TOPRIGHT[0] = s->cur_pic.motion_val[0][mot_xy - mot_stride + off[block]][0];
-            P_TOPRIGHT[1] = s->cur_pic.motion_val[0][mot_xy - mot_stride + off[block]][1];
+            P_TOP[0]      = s->c.cur_pic.motion_val[0][mot_xy - mot_stride             ][0];
+            P_TOP[1]      = s->c.cur_pic.motion_val[0][mot_xy - mot_stride             ][1];
+            P_TOPRIGHT[0] = s->c.cur_pic.motion_val[0][mot_xy - mot_stride + off[block]][0];
+            P_TOPRIGHT[1] = s->c.cur_pic.motion_val[0][mot_xy - mot_stride + off[block]][1];
             if (P_TOP[1]      > c->ymax * (1 << shift)) P_TOP[1]      = c->ymax * (1 << shift);
             if (P_TOPRIGHT[0] < c->xmin * (1 << shift)) P_TOPRIGHT[0] = c->xmin * (1 << shift);
             if (P_TOPRIGHT[0] > c->xmax * (1 << shift)) P_TOPRIGHT[0] = c->xmax * (1 << shift);
@@ -641,7 +641,7 @@ static inline int h263_mv4_search(MpegEncContext *s, int mx, int my, int shift)
         P_MV1[1]= my;
         if(safety_clipping)
             for(i=1; i<10; i++){
-                if (s->first_slice_line && block<2 && i>1 && i<9)
+                if (s->c.first_slice_line && block < 2 && i > 1 && i < 9)
                     continue;
                 if (i>4 && i<9)
                     continue;
@@ -657,7 +657,7 @@ static inline int h263_mv4_search(MpegEncContext *s, int mx, int my, int shift)
             int dxy;
             const int offset= ((block&1) + (block>>1)*stride)*8;
             uint8_t *dest_y = c->scratchpad + offset;
-            if(s->quarter_sample){
+            if (s->c.quarter_sample) {
                 const uint8_t *ref = c->ref[block][0] + (mx4>>2) + (my4>>2)*stride;
                 dxy = ((my4 & 3) << 2) | (mx4 & 3);
 
@@ -672,7 +672,7 @@ static inline int h263_mv4_search(MpegEncContext *s, int mx, int my, int shift)
         }else
             dmin_sum+= dmin4;
 
-        if(s->quarter_sample){
+        if (s->c.quarter_sample) {
             mx4_sum+= mx4/2;
             my4_sum+= my4/2;
         }else{
@@ -680,8 +680,8 @@ static inline int h263_mv4_search(MpegEncContext *s, int mx, int my, int shift)
             my4_sum+= my4;
         }
 
-        s->cur_pic.motion_val[0][s->block_index[block]][0] = mx4;
-        s->cur_pic.motion_val[0][s->block_index[block]][1] = my4;
+        s->c.cur_pic.motion_val[0][s->c.block_index[block]][0] = mx4;
+        s->c.cur_pic.motion_val[0][s->c.block_index[block]][1] = my4;
 
         if(mx4 != mx || my4 != my) same=0;
     }
@@ -692,7 +692,7 @@ static inline int h263_mv4_search(MpegEncContext *s, int mx, int my, int shift)
     if (c->me_sub_cmp[0] != c->mb_cmp[0]) {
         dmin_sum += c->mb_cmp[0](s,
                                  s->new_pic->data[0] +
-                                 s->mb_x * 16 + s->mb_y * 16 * stride,
+                                 s->c.mb_x * 16 + s->c.mb_y * 16 * stride,
                                  c->scratchpad, stride, 16);
     }
 
@@ -705,13 +705,13 @@ static inline int h263_mv4_search(MpegEncContext *s, int mx, int my, int shift)
         my= ff_h263_round_chroma(my4_sum);
         dxy = ((my & 1) << 1) | (mx & 1);
 
-        offset= (s->mb_x*8 + (mx>>1)) + (s->mb_y*8 + (my>>1))*s->uvlinesize;
+        offset = (s->c.mb_x*8 + (mx>>1)) + (s->c.mb_y*8 + (my>>1))*s->c.uvlinesize;
 
-        c->hpel_put[1][dxy](c->scratchpad    , s->last_pic.data[1] + offset, s->uvlinesize, 8);
-        c->hpel_put[1][dxy](c->scratchpad + 8, s->last_pic.data[2] + offset, s->uvlinesize, 8);
+        c->hpel_put[1][dxy](c->scratchpad    , s->c.last_pic.data[1] + offset, s->c.uvlinesize, 8);
+        c->hpel_put[1][dxy](c->scratchpad + 8, s->c.last_pic.data[2] + offset, s->c.uvlinesize, 8);
 
-        dmin_sum += c->mb_cmp[1](s, s->new_pic->data[1] + s->mb_x * 8 + s->mb_y * 8 * s->uvlinesize, c->scratchpad,     s->uvlinesize, 8);
-        dmin_sum += c->mb_cmp[1](s, s->new_pic->data[2] + s->mb_x * 8 + s->mb_y * 8 * s->uvlinesize, c->scratchpad + 8, s->uvlinesize, 8);
+        dmin_sum += c->mb_cmp[1](s, s->new_pic->data[1] + s->c.mb_x * 8 + s->c.mb_y * 8 * s->c.uvlinesize, c->scratchpad,     s->c.uvlinesize, 8);
+        dmin_sum += c->mb_cmp[1](s, s->new_pic->data[2] + s->c.mb_x * 8 + s->c.mb_y * 8 * s->c.uvlinesize, c->scratchpad + 8, s->c.uvlinesize, 8);
     }
 
     c->pred_x= mx;
@@ -719,7 +719,7 @@ static inline int h263_mv4_search(MpegEncContext *s, int mx, int my, int shift)
 
     switch(c->avctx->mb_cmp&0xFF){
     /*case FF_CMP_SSE:
-        return dmin_sum+ 32*s->qscale*s->qscale;*/
+        return dmin_sum+ 32*s->c.qscale*s->c.qscale;*/
     case FF_CMP_RD:
         return dmin_sum;
     default:
@@ -727,33 +727,34 @@ static inline int h263_mv4_search(MpegEncContext *s, int mx, int my, int shift)
     }
 }
 
-static inline void init_interlaced_ref(MpegEncContext *s, int ref_index){
-    MotionEstContext * const c= &s->me;
+static inline void init_interlaced_ref(MPVEncContext *const s, int ref_index)
+{
+    MotionEstContext *const c = &s->c.me;
 
-    c->ref[1+ref_index][0] = c->ref[0+ref_index][0] + s->linesize;
-    c->src[1][0] = c->src[0][0] + s->linesize;
+    c->ref[1+ref_index][0] = c->ref[0+ref_index][0] + s->c.linesize;
+    c->src[1][0] = c->src[0][0] + s->c.linesize;
     if(c->flags & FLAG_CHROMA){
-        c->ref[1+ref_index][1] = c->ref[0+ref_index][1] + s->uvlinesize;
-        c->ref[1+ref_index][2] = c->ref[0+ref_index][2] + s->uvlinesize;
-        c->src[1][1] = c->src[0][1] + s->uvlinesize;
-        c->src[1][2] = c->src[0][2] + s->uvlinesize;
+        c->ref[1+ref_index][1] = c->ref[0+ref_index][1] + s->c.uvlinesize;
+        c->ref[1+ref_index][2] = c->ref[0+ref_index][2] + s->c.uvlinesize;
+        c->src[1][1] = c->src[0][1] + s->c.uvlinesize;
+        c->src[1][2] = c->src[0][2] + s->c.uvlinesize;
     }
 }
 
-static int interlaced_search(MpegEncContext *s, int ref_index,
+static int interlaced_search(MPVEncContext *const s, int ref_index,
                              int16_t (*mv_tables[2][2])[2], uint8_t *field_select_tables[2], int mx, int my, int user_field_select)
 {
-    MotionEstContext * const c= &s->me;
+    MotionEstContext *const c = &s->c.me;
     const int size=0;
     const int h=8;
     int block;
     int P[10][2];
     const uint8_t * const mv_penalty = c->current_mv_penalty;
     int same=1;
-    const int stride= 2*s->linesize;
+    const int stride = 2*s->c.linesize;
     int dmin_sum= 0;
-    const int mot_stride= s->mb_stride;
-    const int xy= s->mb_x + s->mb_y*mot_stride;
+    const int mot_stride = s->c.mb_stride;
+    const int xy = s->c.mb_x + s->c.mb_y*mot_stride;
 
     c->ymin>>=1;
     c->ymax>>=1;
@@ -784,7 +785,7 @@ static int interlaced_search(MpegEncContext *s, int ref_index,
             c->pred_x= P_LEFT[0];
             c->pred_y= P_LEFT[1];
 
-            if(!s->first_slice_line){
+            if (!s->c.first_slice_line) {
                 P_TOP[0]      = mv_table[xy - mot_stride][0];
                 P_TOP[1]      = mv_table[xy - mot_stride][1];
                 P_TOPRIGHT[0] = mv_table[xy - mot_stride + 1][0];
@@ -850,7 +851,7 @@ static int interlaced_search(MpegEncContext *s, int ref_index,
 
     switch(c->avctx->mb_cmp&0xFF){
     /*case FF_CMP_SSE:
-        return dmin_sum+ 32*s->qscale*s->qscale;*/
+        return dmin_sum+ 32*s->c.qscale*s->c.qscale;*/
     case FF_CMP_RD:
         return dmin_sum;
     default:
@@ -883,57 +884,57 @@ static inline int get_penalty_factor(int lambda, int lambda2, int type){
     }
 }
 
-void ff_estimate_p_frame_motion(MpegEncContext * s,
+void ff_estimate_p_frame_motion(MPVEncContext *const s,
                                 int mb_x, int mb_y)
 {
-    MotionEstContext * const c= &s->me;
+    MotionEstContext *const c = &s->c.me;
     const uint8_t *pix, *ppix;
     int sum, mx = 0, my = 0, dmin = 0;
     int varc;            ///< the variance of the block (sum of squared (p[y][x]-average))
     int vard;            ///< sum of squared differences with the estimated motion vector
     int P[10][2];
-    const int shift= 1+s->quarter_sample;
+    const int shift = 1 + s->c.quarter_sample;
     int mb_type=0;
 
-    init_ref(c, s->new_pic->data, s->last_pic.data, NULL, 16*mb_x, 16*mb_y, 0);
+    init_ref(c, s->new_pic->data, s->c.last_pic.data, NULL, 16*mb_x, 16*mb_y, 0);
 
-    av_assert0(s->quarter_sample==0 || s->quarter_sample==1);
-    av_assert0(s->linesize == c->stride);
-    av_assert0(s->uvlinesize == c->uvstride);
+    av_assert0(s->c.quarter_sample==0 || s->c.quarter_sample==1);
+    av_assert0(s->c.linesize == c->stride);
+    av_assert0(s->c.uvlinesize == c->uvstride);
 
-    c->penalty_factor    = get_penalty_factor(s->lambda, s->lambda2, c->avctx->me_cmp);
-    c->sub_penalty_factor= get_penalty_factor(s->lambda, s->lambda2, c->avctx->me_sub_cmp);
-    c->mb_penalty_factor = get_penalty_factor(s->lambda, s->lambda2, c->avctx->mb_cmp);
-    c->current_mv_penalty= c->mv_penalty[s->f_code] + MAX_DMV;
+    c->penalty_factor     = get_penalty_factor(s->c.lambda, s->c.lambda2, c->avctx->me_cmp);
+    c->sub_penalty_factor = get_penalty_factor(s->c.lambda, s->c.lambda2, c->avctx->me_sub_cmp);
+    c->mb_penalty_factor  = get_penalty_factor(s->c.lambda, s->c.lambda2, c->avctx->mb_cmp);
+    c->current_mv_penalty = c->mv_penalty[s->c.f_code] + MAX_DMV;
 
     get_limits(s, 16*mb_x, 16*mb_y, 0);
     c->skip=0;
 
     /* intra / predictive decision */
     pix = c->src[0][0];
-    sum  = s->mpvencdsp.pix_sum(pix, s->linesize);
-    varc = s->mpvencdsp.pix_norm1(pix, s->linesize) -
+    sum  = s->mpvencdsp.pix_sum(pix, s->c.linesize);
+    varc = s->mpvencdsp.pix_norm1(pix, s->c.linesize) -
            (((unsigned) sum * sum) >> 8) + 500;
 
-    s->mb_mean[s->mb_stride * mb_y + mb_x] = (sum+128)>>8;
-    s->mb_var [s->mb_stride * mb_y + mb_x] = (varc+128)>>8;
+    s->mb_mean[s->c.mb_stride * mb_y + mb_x] = (sum  + 128) >> 8;
+    s->mb_var [s->c.mb_stride * mb_y + mb_x] = (varc + 128) >> 8;
     c->mb_var_sum_temp += (varc+128)>>8;
 
     if (c->motion_est != FF_ME_ZERO) {
-        const int mot_stride = s->b8_stride;
-        const int mot_xy = s->block_index[0];
+        const int mot_stride = s->c.b8_stride;
+        const int mot_xy = s->c.block_index[0];
 
-        P_LEFT[0] = s->cur_pic.motion_val[0][mot_xy - 1][0];
-        P_LEFT[1] = s->cur_pic.motion_val[0][mot_xy - 1][1];
+        P_LEFT[0] = s->c.cur_pic.motion_val[0][mot_xy - 1][0];
+        P_LEFT[1] = s->c.cur_pic.motion_val[0][mot_xy - 1][1];
 
         if (P_LEFT[0] > (c->xmax << shift))
             P_LEFT[0] =  c->xmax << shift;
 
-        if (!s->first_slice_line) {
-            P_TOP[0]      = s->cur_pic.motion_val[0][mot_xy - mot_stride    ][0];
-            P_TOP[1]      = s->cur_pic.motion_val[0][mot_xy - mot_stride    ][1];
-            P_TOPRIGHT[0] = s->cur_pic.motion_val[0][mot_xy - mot_stride + 2][0];
-            P_TOPRIGHT[1] = s->cur_pic.motion_val[0][mot_xy - mot_stride + 2][1];
+        if (!s->c.first_slice_line) {
+            P_TOP[0]      = s->c.cur_pic.motion_val[0][mot_xy - mot_stride    ][0];
+            P_TOP[1]      = s->c.cur_pic.motion_val[0][mot_xy - mot_stride    ][1];
+            P_TOPRIGHT[0] = s->c.cur_pic.motion_val[0][mot_xy - mot_stride + 2][0];
+            P_TOPRIGHT[1] = s->c.cur_pic.motion_val[0][mot_xy - mot_stride + 2][1];
             if (P_TOP[1] > (c->ymax << shift))
                 P_TOP[1] =  c->ymax << shift;
             if (P_TOPRIGHT[0] < (c->xmin * (1 << shift)))
@@ -944,7 +945,7 @@ void ff_estimate_p_frame_motion(MpegEncContext * s,
             P_MEDIAN[0] = mid_pred(P_LEFT[0], P_TOP[0], P_TOPRIGHT[0]);
             P_MEDIAN[1] = mid_pred(P_LEFT[1], P_TOP[1], P_TOPRIGHT[1]);
 
-            if (s->out_format == FMT_H263) {
+            if (s->c.out_format == FMT_H263) {
                 c->pred_x = P_MEDIAN[0];
                 c->pred_y = P_MEDIAN[1];
             } else { /* MPEG-1 at least */
@@ -959,22 +960,22 @@ void ff_estimate_p_frame_motion(MpegEncContext * s,
     }
 
     /* At this point (mx,my) are full-pell and the relative displacement */
-    ppix = c->ref[0][0] + (my * s->linesize) + mx;
+    ppix = c->ref[0][0] + (my * s->c.linesize) + mx;
 
-    vard = c->sse(NULL, pix, ppix, s->linesize, 16);
+    vard = c->sse(NULL, pix, ppix, s->c.linesize, 16);
 
-    s->mc_mb_var[s->mb_stride * mb_y + mb_x] = (vard+128)>>8;
+    s->mc_mb_var[s->c.mb_stride * mb_y + mb_x] = (vard+128)>>8;
     c->mc_mb_var_sum_temp += (vard+128)>>8;
 
     if (c->avctx->mb_decision > FF_MB_DECISION_SIMPLE) {
-        int p_score= FFMIN(vard, varc-500+(s->lambda2>>FF_LAMBDA_SHIFT)*100);
-        int i_score= varc-500+(s->lambda2>>FF_LAMBDA_SHIFT)*20;
+        int p_score = FFMIN(vard, varc - 500 + (s->c.lambda2 >> FF_LAMBDA_SHIFT)*100);
+        int i_score = varc - 500 + (s->c.lambda2 >> FF_LAMBDA_SHIFT)*20;
         c->scene_change_score+= ff_sqrt(p_score) - ff_sqrt(i_score);
 
         if (vard*2 + 200*256 > varc && !s->intra_penalty)
             mb_type|= CANDIDATE_MB_TYPE_INTRA;
-        if (varc*2 + 200*256 > vard || s->qscale > 24){
-//        if (varc*2 + 200*256 + 50*(s->lambda2>>FF_LAMBDA_SHIFT) > vard){
+        if (varc*2 + 200*256 > vard || s->c.qscale > 24){
+//        if (varc*2 + 200*256 + 50*(s->c.lambda2>>FF_LAMBDA_SHIFT) > vard){
             mb_type|= CANDIDATE_MB_TYPE_INTER;
             c->sub_motion_search(s, &mx, &my, dmin, 0, 0, 0, 16);
             if (s->mpv_flags & FF_MPV_FLAG_MV0)
@@ -994,7 +995,7 @@ void ff_estimate_p_frame_motion(MpegEncContext * s,
             set_p_mv_tables(s, mx, my, 1);
         if ((c->avctx->flags & AV_CODEC_FLAG_INTERLACED_ME)
            && !c->skip){ //FIXME varc/d checks
-            if(interlaced_search(s, 0, s->p_field_mv_table, s->p_field_select_table, mx, my, 0) < INT_MAX)
+            if(interlaced_search(s, 0, s->c.p_field_mv_table, s->p_field_select_table, mx, my, 0) < INT_MAX)
                 mb_type |= CANDIDATE_MB_TYPE_INTER_I;
         }
     }else{
@@ -1015,7 +1016,7 @@ void ff_estimate_p_frame_motion(MpegEncContext * s,
         }
         if ((c->avctx->flags & AV_CODEC_FLAG_INTERLACED_ME)
            && !c->skip){ //FIXME varc/d checks
-            int dmin_i= interlaced_search(s, 0, s->p_field_mv_table, s->p_field_select_table, mx, my, 0);
+            int dmin_i= interlaced_search(s, 0, s->c.p_field_mv_table, s->p_field_select_table, mx, my, 0);
             if(dmin_i < dmin){
                 mb_type = CANDIDATE_MB_TYPE_INTER_I;
                 dmin= dmin_i;
@@ -1032,46 +1033,46 @@ void ff_estimate_p_frame_motion(MpegEncContext * s,
             mean*= 0x01010101;
 
             for(i=0; i<16; i++){
-                *(uint32_t*)(&c->scratchpad[i*s->linesize+ 0]) = mean;
-                *(uint32_t*)(&c->scratchpad[i*s->linesize+ 4]) = mean;
-                *(uint32_t*)(&c->scratchpad[i*s->linesize+ 8]) = mean;
-                *(uint32_t*)(&c->scratchpad[i*s->linesize+12]) = mean;
+                *(uint32_t*)(&c->scratchpad[i*s->c.linesize+ 0]) = mean;
+                *(uint32_t*)(&c->scratchpad[i*s->c.linesize+ 4]) = mean;
+                *(uint32_t*)(&c->scratchpad[i*s->c.linesize+ 8]) = mean;
+                *(uint32_t*)(&c->scratchpad[i*s->c.linesize+12]) = mean;
             }
 
-            intra_score= c->mb_cmp[0](s, c->scratchpad, pix, s->linesize, 16);
+            intra_score= c->mb_cmp[0](s, c->scratchpad, pix, s->c.linesize, 16);
         }
         intra_score += c->mb_penalty_factor*16 + s->intra_penalty;
 
         if(intra_score < dmin){
             mb_type= CANDIDATE_MB_TYPE_INTRA;
-            s->cur_pic.mb_type[mb_y*s->mb_stride + mb_x] = CANDIDATE_MB_TYPE_INTRA; //FIXME cleanup
+            s->c.cur_pic.mb_type[mb_y*s->c.mb_stride + mb_x] = CANDIDATE_MB_TYPE_INTRA; //FIXME cleanup
         }else
-            s->cur_pic.mb_type[mb_y*s->mb_stride + mb_x] = 0;
+            s->c.cur_pic.mb_type[mb_y*s->c.mb_stride + mb_x] = 0;
 
         {
-            int p_score= FFMIN(vard, varc-500+(s->lambda2>>FF_LAMBDA_SHIFT)*100);
-            int i_score= varc-500+(s->lambda2>>FF_LAMBDA_SHIFT)*20;
+            int p_score = FFMIN(vard, varc-500+(s->c.lambda2>>FF_LAMBDA_SHIFT)*100);
+            int i_score = varc-500+(s->c.lambda2>>FF_LAMBDA_SHIFT)*20;
             c->scene_change_score+= ff_sqrt(p_score) - ff_sqrt(i_score);
         }
     }
 
-    s->mb_type[mb_y*s->mb_stride + mb_x]= mb_type;
+    s->mb_type[mb_y*s->c.mb_stride + mb_x] = mb_type;
 }
 
-int ff_pre_estimate_p_frame_motion(MpegEncContext * s,
+int ff_pre_estimate_p_frame_motion(MPVEncContext *const s,
                                     int mb_x, int mb_y)
 {
-    MotionEstContext * const c= &s->me;
+    MotionEstContext *const c = &s->c.me;
     int mx, my, dmin;
     int P[10][2];
-    const int shift= 1+s->quarter_sample;
-    const int xy= mb_x + mb_y*s->mb_stride;
-    init_ref(c, s->new_pic->data, s->last_pic.data, NULL, 16*mb_x, 16*mb_y, 0);
+    const int shift = 1 + s->c.quarter_sample;
+    const int xy    = mb_x + mb_y*s->c.mb_stride;
+    init_ref(c, s->new_pic->data, s->c.last_pic.data, NULL, 16*mb_x, 16*mb_y, 0);
 
-    av_assert0(s->quarter_sample==0 || s->quarter_sample==1);
+    av_assert0(s->c.quarter_sample==0 || s->c.quarter_sample==1);
 
-    c->pre_penalty_factor    = get_penalty_factor(s->lambda, s->lambda2, c->avctx->me_pre_cmp);
-    c->current_mv_penalty= c->mv_penalty[s->f_code] + MAX_DMV;
+    c->pre_penalty_factor = get_penalty_factor(s->c.lambda, s->c.lambda2, c->avctx->me_pre_cmp);
+    c->current_mv_penalty = c->mv_penalty[s->c.f_code] + MAX_DMV;
 
     get_limits(s, 16*mb_x, 16*mb_y, 0);
     c->skip=0;
@@ -1082,16 +1083,16 @@ int ff_pre_estimate_p_frame_motion(MpegEncContext * s,
     if(P_LEFT[0]       < (c->xmin<<shift)) P_LEFT[0]       = (c->xmin<<shift);
 
     /* special case for first line */
-    if (s->first_slice_line) {
+    if (s->c.first_slice_line) {
         c->pred_x= P_LEFT[0];
         c->pred_y= P_LEFT[1];
         P_TOP[0]= P_TOPRIGHT[0]= P_MEDIAN[0]=
         P_TOP[1]= P_TOPRIGHT[1]= P_MEDIAN[1]= 0; //FIXME
     } else {
-        P_TOP[0]      = s->p_mv_table[xy + s->mb_stride    ][0];
-        P_TOP[1]      = s->p_mv_table[xy + s->mb_stride    ][1];
-        P_TOPRIGHT[0] = s->p_mv_table[xy + s->mb_stride - 1][0];
-        P_TOPRIGHT[1] = s->p_mv_table[xy + s->mb_stride - 1][1];
+        P_TOP[0]      = s->p_mv_table[xy + s->c.mb_stride    ][0];
+        P_TOP[1]      = s->p_mv_table[xy + s->c.mb_stride    ][1];
+        P_TOPRIGHT[0] = s->p_mv_table[xy + s->c.mb_stride - 1][0];
+        P_TOPRIGHT[1] = s->p_mv_table[xy + s->c.mb_stride - 1][1];
         if(P_TOP[1]      < (c->ymin<<shift)) P_TOP[1]     = (c->ymin<<shift);
         if(P_TOPRIGHT[0] > (c->xmax<<shift)) P_TOPRIGHT[0]= (c->xmax<<shift);
         if(P_TOPRIGHT[1] < (c->ymin<<shift)) P_TOPRIGHT[1]= (c->ymin<<shift);
@@ -1111,14 +1112,14 @@ int ff_pre_estimate_p_frame_motion(MpegEncContext * s,
     return dmin;
 }
 
-static int estimate_motion_b(MpegEncContext *s, int mb_x, int mb_y,
+static int estimate_motion_b(MPVEncContext *const s, int mb_x, int mb_y,
                              int16_t (*mv_table)[2], int ref_index, int f_code)
 {
-    MotionEstContext * const c= &s->me;
+    MotionEstContext * const c= &s->c.me;
     int mx = 0, my = 0, dmin = 0;
     int P[10][2];
-    const int shift= 1+s->quarter_sample;
-    const int mot_stride = s->mb_stride;
+    const int shift= 1+s->c.quarter_sample;
+    const int mot_stride = s->c.mb_stride;
     const int mot_xy = mb_y*mot_stride + mb_x;
     const uint8_t * const mv_penalty = c->mv_penalty[f_code] + MAX_DMV;
     int mv_scale;
@@ -1134,7 +1135,7 @@ static int estimate_motion_b(MpegEncContext *s, int mb_x, int mb_y,
         if (P_LEFT[0] > (c->xmax << shift)) P_LEFT[0] = (c->xmax << shift);
 
         /* special case for first line */
-        if (!s->first_slice_line) {
+        if (!s->c.first_slice_line) {
             P_TOP[0]      = mv_table[mot_xy - mot_stride    ][0];
             P_TOP[1]      = mv_table[mot_xy - mot_stride    ][1];
             P_TOPRIGHT[0] = mv_table[mot_xy - mot_stride + 1][0];
@@ -1150,9 +1151,9 @@ static int estimate_motion_b(MpegEncContext *s, int mb_x, int mb_y,
         c->pred_y = P_LEFT[1];
 
         if(mv_table == s->b_forw_mv_table){
-            mv_scale= (s->pb_time<<16) / (s->pp_time<<shift);
+            mv_scale= (s->c.pb_time<<16) / (s->c.pp_time<<shift);
         }else{
-            mv_scale = ((s->pb_time - s->pp_time) * (1 << 16)) / (s->pp_time<<shift);
+            mv_scale = ((s->c.pb_time - s->c.pp_time) * (1 << 16)) / (s->c.pp_time<<shift);
         }
 
         dmin = ff_epzs_motion_search(s, &mx, &my, P, 0, ref_index, s->p_mv_table, mv_scale, 0, 16);
@@ -1163,14 +1164,14 @@ static int estimate_motion_b(MpegEncContext *s, int mb_x, int mb_y,
     if(c->avctx->me_sub_cmp != c->avctx->mb_cmp && !c->skip)
         dmin= get_mb_score(s, mx, my, 0, ref_index, 0, 16, 1);
 
-//    s->mb_type[mb_y*s->mb_width + mb_x]= mb_type;
+//    s->mb_type[mb_y*s->c.mb_width + mb_x]= mb_type;
     mv_table[mot_xy][0]= mx;
     mv_table[mot_xy][1]= my;
 
     return dmin;
 }
 
-static inline int check_bidir_mv(MpegEncContext * s,
+static inline int check_bidir_mv(MPVEncContext *const s,
                    int motion_fx, int motion_fy,
                    int motion_bx, int motion_by,
                    int pred_fx, int pred_fy,
@@ -1180,9 +1181,9 @@ static inline int check_bidir_mv(MpegEncContext * s,
     //FIXME optimize?
     //FIXME better f_code prediction (max mv & distance)
     //FIXME pointers
-    MotionEstContext * const c= &s->me;
-    const uint8_t * const mv_penalty_f = c->mv_penalty[s->f_code] + MAX_DMV; // f_code of the prev frame
-    const uint8_t * const mv_penalty_b = c->mv_penalty[s->b_code] + MAX_DMV; // f_code of the prev frame
+    MotionEstContext * const c= &s->c.me;
+    const uint8_t * const mv_penalty_f = c->mv_penalty[s->c.f_code] + MAX_DMV; // f_code of the prev frame
+    const uint8_t * const mv_penalty_b = c->mv_penalty[s->c.b_code] + MAX_DMV; // f_code of the prev frame
     int stride= c->stride;
     uint8_t *dest_y = c->scratchpad;
     const uint8_t *ptr;
@@ -1193,34 +1194,34 @@ static inline int check_bidir_mv(MpegEncContext * s,
     const uint8_t *const *ref_data  = c->ref[0];
     const uint8_t *const *ref2_data = c->ref[2];
 
-    if(s->quarter_sample){
+    if(s->c.quarter_sample){
         dxy = ((motion_fy & 3) << 2) | (motion_fx & 3);
         src_x = motion_fx >> 2;
         src_y = motion_fy >> 2;
 
         ptr = ref_data[0] + (src_y * stride) + src_x;
-        s->qdsp.put_qpel_pixels_tab[0][dxy](dest_y, ptr, stride);
+        s->c.qdsp.put_qpel_pixels_tab[0][dxy](dest_y, ptr, stride);
 
         dxy = ((motion_by & 3) << 2) | (motion_bx & 3);
         src_x = motion_bx >> 2;
         src_y = motion_by >> 2;
 
         ptr = ref2_data[0] + (src_y * stride) + src_x;
-        s->qdsp.avg_qpel_pixels_tab[size][dxy](dest_y, ptr, stride);
+        s->c.qdsp.avg_qpel_pixels_tab[size][dxy](dest_y, ptr, stride);
     }else{
         dxy = ((motion_fy & 1) << 1) | (motion_fx & 1);
         src_x = motion_fx >> 1;
         src_y = motion_fy >> 1;
 
         ptr = ref_data[0] + (src_y * stride) + src_x;
-        s->hdsp.put_pixels_tab[size][dxy](dest_y    , ptr    , stride, h);
+        s->c.hdsp.put_pixels_tab[size][dxy](dest_y    , ptr    , stride, h);
 
         dxy = ((motion_by & 1) << 1) | (motion_bx & 1);
         src_x = motion_bx >> 1;
         src_y = motion_by >> 1;
 
         ptr = ref2_data[0] + (src_y * stride) + src_x;
-        s->hdsp.avg_pixels_tab[size][dxy](dest_y    , ptr    , stride, h);
+        s->c.hdsp.avg_pixels_tab[size][dxy](dest_y    , ptr    , stride, h);
     }
 
     fbmin = (mv_penalty_f[motion_fx-pred_fx] + mv_penalty_f[motion_fy-pred_fy])*c->mb_penalty_factor
@@ -1235,10 +1236,10 @@ static inline int check_bidir_mv(MpegEncContext * s,
 }
 
 /* refine the bidir vectors in hq mode and return the score in both lq & hq mode*/
-static inline int bidir_refine(MpegEncContext * s, int mb_x, int mb_y)
+static inline int bidir_refine(MPVEncContext *const s, int mb_x, int mb_y)
 {
-    MotionEstContext * const c= &s->me;
-    const int mot_stride = s->mb_stride;
+    MotionEstContext * const c= &s->c.me;
+    const int mot_stride = s->c.mb_stride;
     const int xy = mb_y *mot_stride + mb_x;
     int fbmin;
     int pred_fx= s->b_bidir_forw_mv_table[xy-1][0];
@@ -1382,16 +1383,16 @@ CHECK_BIDIR(-(a),-(b),-(c),-(d))
     return fbmin;
 }
 
-static inline int direct_search(MpegEncContext * s, int mb_x, int mb_y)
+static inline int direct_search(MPVEncContext *const s, int mb_x, int mb_y)
 {
-    MotionEstContext * const c= &s->me;
+    MotionEstContext * const c= &s->c.me;
     int P[10][2];
-    const int mot_stride = s->mb_stride;
+    const int mot_stride = s->c.mb_stride;
     const int mot_xy = mb_y*mot_stride + mb_x;
-    const int shift= 1+s->quarter_sample;
+    const int shift= 1+s->c.quarter_sample;
     int dmin, i;
-    const int time_pp= s->pp_time;
-    const int time_pb= s->pb_time;
+    const int time_pp= s->c.pp_time;
+    const int time_pb= s->c.pb_time;
     int mx, my, xmin, xmax, ymin, ymax;
     int16_t (*mv_table)[2]= s->b_direct_mv_table;
 
@@ -1399,18 +1400,18 @@ static inline int direct_search(MpegEncContext * s, int mb_x, int mb_y)
     ymin= xmin=(-32)>>shift;
     ymax= xmax=   31>>shift;
 
-    if (IS_8X8(s->next_pic.mb_type[mot_xy])) {
-        s->mv_type= MV_TYPE_8X8;
+    if (IS_8X8(s->c.next_pic.mb_type[mot_xy])) {
+        s->c.mv_type= MV_TYPE_8X8;
     }else{
-        s->mv_type= MV_TYPE_16X16;
+        s->c.mv_type= MV_TYPE_16X16;
     }
 
     for(i=0; i<4; i++){
-        int index= s->block_index[i];
+        int index= s->c.block_index[i];
         int min, max;
 
-        c->co_located_mv[i][0] = s->next_pic.motion_val[0][index][0];
-        c->co_located_mv[i][1] = s->next_pic.motion_val[0][index][1];
+        c->co_located_mv[i][0] = s->c.next_pic.motion_val[0][index][0];
+        c->co_located_mv[i][1] = s->c.next_pic.motion_val[0][index][1];
         c->direct_basis_mv[i][0]= c->co_located_mv[i][0]*time_pb/time_pp + ((i& 1)<<(shift+3));
         c->direct_basis_mv[i][1]= c->co_located_mv[i][1]*time_pb/time_pp + ((i>>1)<<(shift+3));
 //        c->direct_basis_mv[1][i][0]= c->co_located_mv[i][0]*(time_pb - time_pp)/time_pp + ((i &1)<<(shift+3);
@@ -1420,17 +1421,17 @@ static inline int direct_search(MpegEncContext * s, int mb_x, int mb_y)
         min= FFMIN(c->direct_basis_mv[i][0], c->direct_basis_mv[i][0] - c->co_located_mv[i][0])>>shift;
         max+= 16*mb_x + 1; // +-1 is for the simpler rounding
         min+= 16*mb_x - 1;
-        xmax= FFMIN(xmax, s->width - max);
+        xmax= FFMIN(xmax, s->c.width - max);
         xmin= FFMAX(xmin, - 16     - min);
 
         max= FFMAX(c->direct_basis_mv[i][1], c->direct_basis_mv[i][1] - c->co_located_mv[i][1])>>shift;
         min= FFMIN(c->direct_basis_mv[i][1], c->direct_basis_mv[i][1] - c->co_located_mv[i][1])>>shift;
         max+= 16*mb_y + 1; // +-1 is for the simpler rounding
         min+= 16*mb_y - 1;
-        ymax= FFMIN(ymax, s->height - max);
+        ymax= FFMIN(ymax, s->c.height - max);
         ymin= FFMAX(ymin, - 16      - min);
 
-        if(s->mv_type == MV_TYPE_16X16) break;
+        if(s->c.mv_type == MV_TYPE_16X16) break;
     }
 
     av_assert2(xmax <= 15 && ymax <= 15 && xmin >= -16 && ymin >= -16);
@@ -1455,7 +1456,7 @@ static inline int direct_search(MpegEncContext * s, int mb_x, int mb_y)
     P_LEFT[1] = av_clip(mv_table[mot_xy - 1][1], ymin * (1 << shift), ymax << shift);
 
     /* special case for first line */
-    if (!s->first_slice_line) { //FIXME maybe allow this over thread boundary as it is clipped
+    if (!s->c.first_slice_line) { //FIXME maybe allow this over thread boundary as it is clipped
         P_TOP[0]      = av_clip(mv_table[mot_xy - mot_stride    ][0], xmin * (1 << shift), xmax << shift);
         P_TOP[1]      = av_clip(mv_table[mot_xy - mot_stride    ][1], ymin * (1 << shift), ymax << shift);
         P_TOPRIGHT[0] = av_clip(mv_table[mot_xy - mot_stride + 1][0], xmin * (1 << shift), xmax << shift);
@@ -1484,47 +1485,47 @@ static inline int direct_search(MpegEncContext * s, int mb_x, int mb_y)
     return dmin;
 }
 
-void ff_estimate_b_frame_motion(MpegEncContext * s,
+void ff_estimate_b_frame_motion(MPVEncContext *const s,
                              int mb_x, int mb_y)
 {
-    MotionEstContext * const c= &s->me;
+    MotionEstContext * const c= &s->c.me;
     int fmin, bmin, dmin, fbmin, bimin, fimin;
     int type=0;
-    const int xy = mb_y*s->mb_stride + mb_x;
-    init_ref(c, s->new_pic->data, s->last_pic.data,
-             s->next_pic.data, 16 * mb_x, 16 * mb_y, 2);
+    const int xy = mb_y*s->c.mb_stride + mb_x;
+    init_ref(c, s->new_pic->data, s->c.last_pic.data,
+             s->c.next_pic.data, 16 * mb_x, 16 * mb_y, 2);
 
     get_limits(s, 16*mb_x, 16*mb_y, 1);
 
     c->skip=0;
 
-    if (s->codec_id == AV_CODEC_ID_MPEG4 && s->next_pic.mbskip_table[xy]) {
+    if (s->c.codec_id == AV_CODEC_ID_MPEG4 && s->c.next_pic.mbskip_table[xy]) {
         int score= direct_search(s, mb_x, mb_y); //FIXME just check 0,0
 
         score= ((unsigned)(score*score + 128*256))>>16;
         c->mc_mb_var_sum_temp += score;
-        s->mc_mb_var[mb_y*s->mb_stride + mb_x] = score; //FIXME use SSE
-        s->mb_type[mb_y*s->mb_stride + mb_x]= CANDIDATE_MB_TYPE_DIRECT0;
+        s->mc_mb_var[mb_y*s->c.mb_stride + mb_x] = score; //FIXME use SSE
+        s->mb_type[mb_y*s->c.mb_stride + mb_x]= CANDIDATE_MB_TYPE_DIRECT0;
 
         return;
     }
 
-    c->penalty_factor    = get_penalty_factor(s->lambda, s->lambda2, c->avctx->me_cmp);
-    c->sub_penalty_factor= get_penalty_factor(s->lambda, s->lambda2, c->avctx->me_sub_cmp);
-    c->mb_penalty_factor = get_penalty_factor(s->lambda, s->lambda2, c->avctx->mb_cmp);
+    c->penalty_factor    = get_penalty_factor(s->c.lambda, s->c.lambda2, c->avctx->me_cmp);
+    c->sub_penalty_factor= get_penalty_factor(s->c.lambda, s->c.lambda2, c->avctx->me_sub_cmp);
+    c->mb_penalty_factor = get_penalty_factor(s->c.lambda, s->c.lambda2, c->avctx->mb_cmp);
 
-    if (s->codec_id == AV_CODEC_ID_MPEG4)
+    if (s->c.codec_id == AV_CODEC_ID_MPEG4)
         dmin= direct_search(s, mb_x, mb_y);
     else
         dmin= INT_MAX;
 
 // FIXME penalty stuff for non-MPEG-4
     c->skip=0;
-    fmin = estimate_motion_b(s, mb_x, mb_y, s->b_forw_mv_table, 0, s->f_code) +
+    fmin = estimate_motion_b(s, mb_x, mb_y, s->b_forw_mv_table, 0, s->c.f_code) +
            3 * c->mb_penalty_factor;
 
     c->skip=0;
-    bmin = estimate_motion_b(s, mb_x, mb_y, s->b_back_mv_table, 2, s->b_code) +
+    bmin = estimate_motion_b(s, mb_x, mb_y, s->b_back_mv_table, 2, s->c.b_code) +
            2 * c->mb_penalty_factor;
     ff_dlog(c->avctx, " %d %d ", s->b_forw_mv_table[xy][0], s->b_forw_mv_table[xy][1]);
 
@@ -1535,11 +1536,11 @@ void ff_estimate_b_frame_motion(MpegEncContext * s,
     if (c->avctx->flags & AV_CODEC_FLAG_INTERLACED_ME) {
 //FIXME mb type penalty
         c->skip=0;
-        c->current_mv_penalty= c->mv_penalty[s->f_code] + MAX_DMV;
+        c->current_mv_penalty= c->mv_penalty[s->c.f_code] + MAX_DMV;
         fimin= interlaced_search(s, 0,
                                  s->b_field_mv_table[0], s->b_field_select_table[0],
                                  s->b_forw_mv_table[xy][0], s->b_forw_mv_table[xy][1], 0);
-        c->current_mv_penalty= c->mv_penalty[s->b_code] + MAX_DMV;
+        c->current_mv_penalty= c->mv_penalty[s->c.b_code] + MAX_DMV;
         bimin= interlaced_search(s, 2,
                                  s->b_field_mv_table[1], s->b_field_select_table[1],
                                  s->b_back_mv_table[xy][0], s->b_back_mv_table[xy][1], 0);
@@ -1573,7 +1574,7 @@ void ff_estimate_b_frame_motion(MpegEncContext * s,
 
         score= ((unsigned)(score*score + 128*256))>>16;
         c->mc_mb_var_sum_temp += score;
-        s->mc_mb_var[mb_y*s->mb_stride + mb_x] = score; //FIXME use SSE
+        s->mc_mb_var[mb_y*s->c.mb_stride + mb_x] = score; //FIXME use SSE
     }
 
     if(c->avctx->mb_decision > FF_MB_DECISION_SIMPLE){
@@ -1587,39 +1588,39 @@ void ff_estimate_b_frame_motion(MpegEncContext * s,
         }
          //FIXME something smarter
         if(dmin>256*256*16) type&= ~CANDIDATE_MB_TYPE_DIRECT; //do not try direct mode if it is invalid for this MB
-        if (s->codec_id == AV_CODEC_ID_MPEG4 && type&CANDIDATE_MB_TYPE_DIRECT &&
+        if (s->c.codec_id == AV_CODEC_ID_MPEG4 && type&CANDIDATE_MB_TYPE_DIRECT &&
             s->mpv_flags & FF_MPV_FLAG_MV0 && *(uint32_t*)s->b_direct_mv_table[xy])
             type |= CANDIDATE_MB_TYPE_DIRECT0;
     }
 
-    s->mb_type[mb_y*s->mb_stride + mb_x]= type;
+    s->mb_type[mb_y*s->c.mb_stride + mb_x]= type;
 }
 
 /* find best f_code for ME which do unlimited searches */
 int ff_get_best_fcode(MPVMainEncContext *const m, const int16_t (*mv_table)[2], int type)
 {
-    MpegEncContext *const s = &m->s;
-    MotionEstContext *const c = &s->me;
+    MPVEncContext *const s = &m->s;
+    MotionEstContext *const c = &s->c.me;
 
     if (c->motion_est != FF_ME_ZERO) {
         int score[8];
-        int i, y, range = c->avctx->me_range ? c->avctx->me_range : (INT_MAX/2);
+        int i, range = c->avctx->me_range ? c->avctx->me_range : (INT_MAX/2);
         const uint8_t * fcode_tab = m->fcode_tab;
         int best_fcode=-1;
         int best_score=-10000000;
 
-        if (s->msmpeg4_version != MSMP4_UNUSED)
+        if (s->c.msmpeg4_version != MSMP4_UNUSED)
             range= FFMIN(range, 16);
-        else if (s->codec_id == AV_CODEC_ID_MPEG2VIDEO &&
+        else if (s->c.codec_id == AV_CODEC_ID_MPEG2VIDEO &&
                  c->avctx->strict_std_compliance >= FF_COMPLIANCE_NORMAL)
             range= FFMIN(range, 256);
 
-        for(i=0; i<8; i++) score[i]= s->mb_num*(8-i);
+        for(i=0; i<8; i++) score[i]= s->c.mb_num*(8-i);
 
-        for(y=0; y<s->mb_height; y++){
+        for (int y = 0; y < s->c.mb_height; y++) {
             int x;
-            int xy= y*s->mb_stride;
-            for(x=0; x<s->mb_width; x++, xy++){
+            int xy= y*s->c.mb_stride;
+            for(x=0; x<s->c.mb_width; x++, xy++){
                 if(s->mb_type[xy] & type){
                     int mx= mv_table[xy][0];
                     int my= mv_table[xy][1];
@@ -1631,7 +1632,7 @@ int ff_get_best_fcode(MPVMainEncContext *const m, const int16_t (*mv_table)[2],
                         continue;
 
                     for(j=0; j<fcode && j<8; j++){
-                        if (s->pict_type == AV_PICTURE_TYPE_B ||
+                        if (s->c.pict_type == AV_PICTURE_TYPE_B ||
                             s->mc_mb_var[xy] < s->mb_var[xy])
                             score[j]-= 170;
                     }
@@ -1652,42 +1653,42 @@ int ff_get_best_fcode(MPVMainEncContext *const m, const int16_t (*mv_table)[2],
     }
 }
 
-void ff_fix_long_p_mvs(MpegEncContext * s, int type)
+void ff_fix_long_p_mvs(MPVEncContext *const s, int type)
 {
-    MotionEstContext * const c= &s->me;
-    const int f_code= s->f_code;
+    MotionEstContext * const c= &s->c.me;
+    const int f_code= s->c.f_code;
     int y, range;
-    av_assert0(s->pict_type==AV_PICTURE_TYPE_P);
+    av_assert0(s->c.pict_type==AV_PICTURE_TYPE_P);
 
-    range = (((s->out_format == FMT_MPEG1 || s->msmpeg4_version != MSMP4_UNUSED) ? 8 : 16) << f_code);
+    range = (((s->c.out_format == FMT_MPEG1 || s->c.msmpeg4_version != MSMP4_UNUSED) ? 8 : 16) << f_code);
 
-    av_assert0(range <= 16 || s->msmpeg4_version == MSMP4_UNUSED);
-    av_assert0(range <=256 || !(s->codec_id == AV_CODEC_ID_MPEG2VIDEO && c->avctx->strict_std_compliance >= FF_COMPLIANCE_NORMAL));
+    av_assert0(range <= 16 || s->c.msmpeg4_version == MSMP4_UNUSED);
+    av_assert0(range <=256 || !(s->c.codec_id == AV_CODEC_ID_MPEG2VIDEO && c->avctx->strict_std_compliance >= FF_COMPLIANCE_NORMAL));
 
     if(c->avctx->me_range && range > c->avctx->me_range) range= c->avctx->me_range;
 
     if (c->avctx->flags & AV_CODEC_FLAG_4MV) {
-        const int wrap= s->b8_stride;
+        const int wrap= s->c.b8_stride;
 
         /* clip / convert to intra 8x8 type MVs */
-        for(y=0; y<s->mb_height; y++){
+        for(y=0; y<s->c.mb_height; y++){
             int xy= y*2*wrap;
-            int i= y*s->mb_stride;
+            int i= y*s->c.mb_stride;
             int x;
 
-            for(x=0; x<s->mb_width; x++){
+            for(x=0; x<s->c.mb_width; x++){
                 if(s->mb_type[i]&CANDIDATE_MB_TYPE_INTER4V){
                     int block;
                     for(block=0; block<4; block++){
                         int off= (block& 1) + (block>>1)*wrap;
-                        int mx = s->cur_pic.motion_val[0][ xy + off ][0];
-                        int my = s->cur_pic.motion_val[0][ xy + off ][1];
+                        int mx = s->c.cur_pic.motion_val[0][ xy + off ][0];
+                        int my = s->c.cur_pic.motion_val[0][ xy + off ][1];
 
                         if(   mx >=range || mx <-range
                            || my >=range || my <-range){
                             s->mb_type[i] &= ~CANDIDATE_MB_TYPE_INTER4V;
                             s->mb_type[i] |= type;
-                            s->cur_pic.mb_type[i] = type;
+                            s->c.cur_pic.mb_type[i] = type;
                         }
                     }
                 }
@@ -1701,14 +1702,14 @@ void ff_fix_long_p_mvs(MpegEncContext * s, int type)
 /**
  * @param truncate 1 for truncation, 0 for using intra
  */
-void ff_fix_long_mvs(MpegEncContext * s, uint8_t *field_select_table, int field_select,
+void ff_fix_long_mvs(MPVEncContext *const s, uint8_t *field_select_table, int field_select,
                      int16_t (*mv_table)[2], int f_code, int type, int truncate)
 {
-    MotionEstContext * const c= &s->me;
+    MotionEstContext * const c= &s->c.me;
     int y, h_range, v_range;
 
     // RAL: 8 in MPEG-1, 16 in MPEG-4
-    int range = (((s->out_format == FMT_MPEG1 || s->msmpeg4_version != MSMP4_UNUSED) ? 8 : 16) << f_code);
+    int range = (((s->c.out_format == FMT_MPEG1 || s->c.msmpeg4_version != MSMP4_UNUSED) ? 8 : 16) << f_code);
 
     if(c->avctx->me_range && range > c->avctx->me_range) range= c->avctx->me_range;
 
@@ -1716,10 +1717,10 @@ void ff_fix_long_mvs(MpegEncContext * s, uint8_t *field_select_table, int field_
     v_range= field_select_table ? range>>1 : range;
 
     /* clip / convert to intra 16x16 type MVs */
-    for(y=0; y<s->mb_height; y++){
+    for(y=0; y<s->c.mb_height; y++){
         int x;
-        int xy= y*s->mb_stride;
-        for(x=0; x<s->mb_width; x++){
+        int xy= y*s->c.mb_stride;
+        for(x=0; x<s->c.mb_width; x++){
             if (s->mb_type[xy] & type){    // RAL: "type" test added...
                 if (!field_select_table || field_select_table[xy] == field_select) {
                     if(   mv_table[xy][0] >=h_range || mv_table[xy][0] <-h_range
diff --git a/libavcodec/motion_est.h b/libavcodec/motion_est.h
index 5fa96161c6..16975abfe1 100644
--- a/libavcodec/motion_est.h
+++ b/libavcodec/motion_est.h
@@ -28,7 +28,7 @@
 #include "me_cmp.h"
 #include "qpeldsp.h"
 
-struct MpegEncContext;
+typedef struct MPVEncContext MPVEncContext;
 typedef struct MPVMainEncContext MPVMainEncContext;
 
 #if ARCH_IA64 // Limit static arrays to avoid gcc failing "short data segment overflowed"
@@ -100,7 +100,7 @@ typedef struct MotionEstContext {
     qpel_mc_func(*qpel_avg)[16];
     const uint8_t (*mv_penalty)[MAX_DMV * 2 + 1]; ///< bit amount needed to encode a MV
     const uint8_t *current_mv_penalty;
-    int (*sub_motion_search)(struct MpegEncContext *s,
+    int (*sub_motion_search)(MPVEncContext *s,
                              int *mx_ptr, int *my_ptr, int dmin,
                              int src_index, int ref_index,
                              int size, int h);
@@ -122,27 +122,27 @@ static inline int ff_h263_round_chroma(int x)
 int ff_me_init(MotionEstContext *c, struct AVCodecContext *avctx,
                const struct MECmpContext *mecc, int mpvenc);
 
-void ff_me_init_pic(struct MpegEncContext *s);
+void ff_me_init_pic(MPVEncContext *s);
 
-void ff_estimate_p_frame_motion(struct MpegEncContext *s, int mb_x, int mb_y);
-void ff_estimate_b_frame_motion(struct MpegEncContext *s, int mb_x, int mb_y);
+void ff_estimate_p_frame_motion(MPVEncContext *s, int mb_x, int mb_y);
+void ff_estimate_b_frame_motion(MPVEncContext *s, int mb_x, int mb_y);
 
-int ff_pre_estimate_p_frame_motion(struct MpegEncContext *s,
+int ff_pre_estimate_p_frame_motion(MPVEncContext *s,
                                    int mb_x, int mb_y);
 
-int ff_epzs_motion_search(struct MpegEncContext *s, int *mx_ptr, int *my_ptr,
+int ff_epzs_motion_search(MPVEncContext *s, int *mx_ptr, int *my_ptr,
                           int P[10][2], int src_index, int ref_index,
                           const int16_t (*last_mv)[2], int ref_mv_scale,
                           int size, int h);
 
-int ff_get_mb_score(struct MpegEncContext *s, int mx, int my, int src_index,
+int ff_get_mb_score(MPVEncContext *s, int mx, int my, int src_index,
                     int ref_index, int size, int h, int add_rate);
 
 int ff_get_best_fcode(MPVMainEncContext *m,
                       const int16_t (*mv_table)[2], int type);
 
-void ff_fix_long_p_mvs(struct MpegEncContext *s, int type);
-void ff_fix_long_mvs(struct MpegEncContext *s, uint8_t *field_select_table,
+void ff_fix_long_p_mvs(MPVEncContext *s, int type);
+void ff_fix_long_mvs(MPVEncContext *s, uint8_t *field_select_table,
                      int field_select, int16_t (*mv_table)[2], int f_code,
                      int type, int truncate);
 
diff --git a/libavcodec/motion_est_template.c b/libavcodec/motion_est_template.c
index 5498f9c982..7c7e645625 100644
--- a/libavcodec/motion_est_template.c
+++ b/libavcodec/motion_est_template.c
@@ -25,7 +25,7 @@
  */
 
 #include "libavutil/qsort.h"
-#include "mpegvideo.h"
+#include "mpegvideoenc.h"
 
 //Let us hope gcc will remove the unused vars ...(gcc 3.2.2 seems to do it ...)
 #define LOAD_COMMON\
@@ -47,12 +47,12 @@
     COPY3_IF_LT(dmin, d, bx, hx, by, hy)\
 }
 
-static int hpel_motion_search(MpegEncContext * s,
+static int hpel_motion_search(MPVEncContext *const s,
                                   int *mx_ptr, int *my_ptr, int dmin,
                                   int src_index, int ref_index,
                                   int size, int h)
 {
-    MotionEstContext * const c= &s->me;
+    MotionEstContext * const c= &s->c.me;
     const int mx = *mx_ptr;
     const int my = *my_ptr;
     const int penalty_factor= c->sub_penalty_factor;
@@ -152,7 +152,7 @@ static int hpel_motion_search(MpegEncContext * s,
     return dmin;
 }
 
-static int no_sub_motion_search(MpegEncContext * s,
+static int no_sub_motion_search(MPVEncContext *const s,
           int *mx_ptr, int *my_ptr, int dmin,
                                   int src_index, int ref_index,
                                   int size, int h)
@@ -162,11 +162,11 @@ static int no_sub_motion_search(MpegEncContext * s,
     return dmin;
 }
 
-static inline int get_mb_score(MpegEncContext *s, int mx, int my,
+static inline int get_mb_score(MPVEncContext *const s, int mx, int my,
                                int src_index, int ref_index, int size,
                                int h, int add_rate)
 {
-    MotionEstContext * const c= &s->me;
+    MotionEstContext * const c= &s->c.me;
     const int penalty_factor= c->mb_penalty_factor;
     const int flags= c->mb_flags;
     const int qpel= flags & FLAG_QPEL;
@@ -189,7 +189,7 @@ static inline int get_mb_score(MpegEncContext *s, int mx, int my,
     return d;
 }
 
-int ff_get_mb_score(MpegEncContext *s, int mx, int my, int src_index,
+int ff_get_mb_score(MPVEncContext *const s, int mx, int my, int src_index,
                     int ref_index, int size, int h, int add_rate)
 {
     return get_mb_score(s, mx, my, src_index, ref_index, size, h, add_rate);
@@ -204,12 +204,12 @@ int ff_get_mb_score(MpegEncContext *s, int mx, int my, int src_index,
     COPY3_IF_LT(dmin, d, bx, hx, by, hy)\
 }
 
-static int qpel_motion_search(MpegEncContext * s,
+static int qpel_motion_search(MPVEncContext *const s,
                                   int *mx_ptr, int *my_ptr, int dmin,
                                   int src_index, int ref_index,
                                   int size, int h)
 {
-    MotionEstContext * const c= &s->me;
+    MotionEstContext * const c= &s->c.me;
     const int mx = *mx_ptr;
     const int my = *my_ptr;
     const int penalty_factor= c->sub_penalty_factor;
@@ -256,7 +256,7 @@ static int qpel_motion_search(MpegEncContext * s,
         int best_pos[8][2];
 
         memset(best, 64, sizeof(int)*8);
-        if(s->me.dia_size>=2){
+        if(s->c.me.dia_size>=2){
             const int tl= score_map[(index-(1<<ME_MAP_SHIFT)-1)&(ME_MAP_SIZE-1)];
             const int bl= score_map[(index+(1<<ME_MAP_SHIFT)-1)&(ME_MAP_SIZE-1)];
             const int tr= score_map[(index-(1<<ME_MAP_SHIFT)+1)&(ME_MAP_SIZE-1)];
@@ -403,21 +403,21 @@ static int qpel_motion_search(MpegEncContext * s,
 }
 
 #define check(x,y,S,v)\
-if( (x)<(xmin<<(S)) ) av_log(NULL, AV_LOG_ERROR, "%d %d %d %d %d xmin" #v, xmin, (x), (y), s->mb_x, s->mb_y);\
-if( (x)>(xmax<<(S)) ) av_log(NULL, AV_LOG_ERROR, "%d %d %d %d %d xmax" #v, xmax, (x), (y), s->mb_x, s->mb_y);\
-if( (y)<(ymin<<(S)) ) av_log(NULL, AV_LOG_ERROR, "%d %d %d %d %d ymin" #v, ymin, (x), (y), s->mb_x, s->mb_y);\
-if( (y)>(ymax<<(S)) ) av_log(NULL, AV_LOG_ERROR, "%d %d %d %d %d ymax" #v, ymax, (x), (y), s->mb_x, s->mb_y);\
+if( (x)<(xmin<<(S)) ) av_log(NULL, AV_LOG_ERROR, "%d %d %d %d %d xmin" #v, xmin, (x), (y), s->c.mb_x, s->c.mb_y);\
+if( (x)>(xmax<<(S)) ) av_log(NULL, AV_LOG_ERROR, "%d %d %d %d %d xmax" #v, xmax, (x), (y), s->c.mb_x, s->c.mb_y);\
+if( (y)<(ymin<<(S)) ) av_log(NULL, AV_LOG_ERROR, "%d %d %d %d %d ymin" #v, ymin, (x), (y), s->c.mb_x, s->c.mb_y);\
+if( (y)>(ymax<<(S)) ) av_log(NULL, AV_LOG_ERROR, "%d %d %d %d %d ymax" #v, ymax, (x), (y), s->c.mb_x, s->c.mb_y);\
 
 #define LOAD_COMMON2\
     uint32_t *map= c->map;\
     const int qpel= flags&FLAG_QPEL;\
     const int shift= 1+qpel;\
 
-static av_always_inline int small_diamond_search(MpegEncContext * s, int *best, int dmin,
+static av_always_inline int small_diamond_search(MPVEncContext *const s, int *best, int dmin,
                                        int src_index, int ref_index, const int penalty_factor,
                                        int size, int h, int flags)
 {
-    MotionEstContext * const c= &s->me;
+    MotionEstContext * const c= &s->c.me;
     me_cmp_func cmpf, chroma_cmpf;
     int next_dir=-1;
     LOAD_COMMON
@@ -454,11 +454,11 @@ static av_always_inline int small_diamond_search(MpegEncContext * s, int *best,
     }
 }
 
-static int funny_diamond_search(MpegEncContext * s, int *best, int dmin,
+static int funny_diamond_search(MPVEncContext *const s, int *best, int dmin,
                                        int src_index, int ref_index, const int penalty_factor,
                                        int size, int h, int flags)
 {
-    MotionEstContext * const c= &s->me;
+    MotionEstContext * const c= &s->c.me;
     me_cmp_func cmpf, chroma_cmpf;
     int dia_size;
     LOAD_COMMON
@@ -496,11 +496,11 @@ static int funny_diamond_search(MpegEncContext * s, int *best, int dmin,
     return dmin;
 }
 
-static int hex_search(MpegEncContext * s, int *best, int dmin,
+static int hex_search(MPVEncContext *const s, int *best, int dmin,
                                        int src_index, int ref_index, const int penalty_factor,
                                        int size, int h, int flags, int dia_size)
 {
-    MotionEstContext * const c= &s->me;
+    MotionEstContext * const c= &s->c.me;
     me_cmp_func cmpf, chroma_cmpf;
     LOAD_COMMON
     LOAD_COMMON2
@@ -530,11 +530,11 @@ static int hex_search(MpegEncContext * s, int *best, int dmin,
     return dmin;
 }
 
-static int l2s_dia_search(MpegEncContext * s, int *best, int dmin,
+static int l2s_dia_search(MPVEncContext *const s, int *best, int dmin,
                                        int src_index, int ref_index, const int penalty_factor,
                                        int size, int h, int flags)
 {
-    MotionEstContext * const c= &s->me;
+    MotionEstContext * const c= &s->c.me;
     me_cmp_func cmpf, chroma_cmpf;
     LOAD_COMMON
     LOAD_COMMON2
@@ -568,11 +568,11 @@ static int l2s_dia_search(MpegEncContext * s, int *best, int dmin,
     return dmin;
 }
 
-static int umh_search(MpegEncContext * s, int *best, int dmin,
+static int umh_search(MPVEncContext *const s, int *best, int dmin,
                                        int src_index, int ref_index, const int penalty_factor,
                                        int size, int h, int flags)
 {
-    MotionEstContext * const c= &s->me;
+    MotionEstContext * const c= &s->c.me;
     me_cmp_func cmpf, chroma_cmpf;
     LOAD_COMMON
     LOAD_COMMON2
@@ -615,11 +615,11 @@ static int umh_search(MpegEncContext * s, int *best, int dmin,
     return hex_search(s, best, dmin, src_index, ref_index, penalty_factor, size, h, flags, 2);
 }
 
-static int full_search(MpegEncContext * s, int *best, int dmin,
+static int full_search(MPVEncContext *const s, int *best, int dmin,
                                        int src_index, int ref_index, const int penalty_factor,
                                        int size, int h, int flags)
 {
-    MotionEstContext * const c= &s->me;
+    MotionEstContext * const c= &s->c.me;
     me_cmp_func cmpf, chroma_cmpf;
     LOAD_COMMON
     LOAD_COMMON2
@@ -678,11 +678,11 @@ static int full_search(MpegEncContext * s, int *best, int dmin,
 }
 
 #define MAX_SAB_SIZE ME_MAP_SIZE
-static int sab_diamond_search(MpegEncContext * s, int *best, int dmin,
+static int sab_diamond_search(MPVEncContext *const s, int *best, int dmin,
                                        int src_index, int ref_index, const int penalty_factor,
                                        int size, int h, int flags)
 {
-    MotionEstContext * const c= &s->me;
+    MotionEstContext * const c= &s->c.me;
     me_cmp_func cmpf, chroma_cmpf;
     Minima minima[MAX_SAB_SIZE];
     const int minima_count= FFABS(c->dia_size);
@@ -768,11 +768,11 @@ static int sab_diamond_search(MpegEncContext * s, int *best, int dmin,
     return dmin;
 }
 
-static int var_diamond_search(MpegEncContext * s, int *best, int dmin,
+static int var_diamond_search(MPVEncContext *const s, int *best, int dmin,
                                        int src_index, int ref_index, const int penalty_factor,
                                        int size, int h, int flags)
 {
-    MotionEstContext * const c= &s->me;
+    MotionEstContext * const c= &s->c.me;
     me_cmp_func cmpf, chroma_cmpf;
     int dia_size;
     LOAD_COMMON
@@ -829,10 +829,10 @@ static int var_diamond_search(MpegEncContext * s, int *best, int dmin,
     return dmin;
 }
 
-static av_always_inline int diamond_search(MpegEncContext * s, int *best, int dmin,
+static av_always_inline int diamond_search(MPVEncContext *const s, int *best, int dmin,
                                        int src_index, int ref_index, const int penalty_factor,
                                        int size, int h, int flags){
-    MotionEstContext * const c= &s->me;
+    MotionEstContext * const c= &s->c.me;
     if(c->dia_size==-1)
         return funny_diamond_search(s, best, dmin, src_index, ref_index, penalty_factor, size, h, flags);
     else if(c->dia_size<-1)
@@ -857,11 +857,11 @@ static av_always_inline int diamond_search(MpegEncContext * s, int *best, int dm
    it takes fewer iterations. And it increases the chance that we find the
    optimal mv.
  */
-static av_always_inline int epzs_motion_search_internal(MpegEncContext * s, int *mx_ptr, int *my_ptr,
+static av_always_inline int epzs_motion_search_internal(MPVEncContext *const s, int *mx_ptr, int *my_ptr,
                              int P[10][2], int src_index, int ref_index, const int16_t (*last_mv)[2],
                              int ref_mv_scale, int flags, int size, int h)
 {
-    MotionEstContext * const c= &s->me;
+    MotionEstContext * const c= &s->c.me;
     int best[2]={0, 0};      /**< x and y coordinates of the best motion vector.
                                i.e. the difference between the position of the
                                block currently being encoded and the position of
@@ -871,8 +871,8 @@ static av_always_inline int epzs_motion_search_internal(MpegEncContext * s, int
                                corresponding to the mv stored in best[]. */
     unsigned map_generation;
     int penalty_factor;
-    const int ref_mv_stride= s->mb_stride; //pass as arg  FIXME
-    const int ref_mv_xy = s->mb_x + s->mb_y * ref_mv_stride; // add to last_mv before passing FIXME
+    const int ref_mv_stride= s->c.mb_stride; //pass as arg  FIXME
+    const int ref_mv_xy = s->c.mb_x + s->c.mb_y * ref_mv_stride; // add to last_mv before passing FIXME
     me_cmp_func cmpf, chroma_cmpf;
 
     LOAD_COMMON
@@ -896,12 +896,12 @@ static av_always_inline int epzs_motion_search_internal(MpegEncContext * s, int
     score_map[0]= dmin;
 
     //FIXME precalc first term below?
-    if ((s->pict_type == AV_PICTURE_TYPE_B && !(c->flags & FLAG_DIRECT)) ||
+    if ((s->c.pict_type == AV_PICTURE_TYPE_B && !(c->flags & FLAG_DIRECT)) ||
         s->mpv_flags & FF_MPV_FLAG_MV0)
         dmin += (mv_penalty[pred_x] + mv_penalty[pred_y])*penalty_factor;
 
     /* first line */
-    if (s->first_slice_line) {
+    if (s->c.first_slice_line) {
         CHECK_MV(P_LEFT[0]>>shift, P_LEFT[1]>>shift)
         CHECK_CLIPPED_MV((last_mv[ref_mv_xy][0]*ref_mv_scale + (1<<15))>>16,
                         (last_mv[ref_mv_xy][1]*ref_mv_scale + (1<<15))>>16)
@@ -930,13 +930,13 @@ static av_always_inline int epzs_motion_search_internal(MpegEncContext * s, int
         if(c->pre_pass){
             CHECK_CLIPPED_MV((last_mv[ref_mv_xy-1][0]*ref_mv_scale + (1<<15))>>16,
                             (last_mv[ref_mv_xy-1][1]*ref_mv_scale + (1<<15))>>16)
-            if(!s->first_slice_line)
+            if(!s->c.first_slice_line)
                 CHECK_CLIPPED_MV((last_mv[ref_mv_xy-ref_mv_stride][0]*ref_mv_scale + (1<<15))>>16,
                                 (last_mv[ref_mv_xy-ref_mv_stride][1]*ref_mv_scale + (1<<15))>>16)
         }else{
             CHECK_CLIPPED_MV((last_mv[ref_mv_xy+1][0]*ref_mv_scale + (1<<15))>>16,
                             (last_mv[ref_mv_xy+1][1]*ref_mv_scale + (1<<15))>>16)
-            if(s->mb_y+1<s->end_mb_y)  //FIXME replace at least with last_slice_line
+            if(s->c.mb_y+1<s->c.end_mb_y)  //FIXME replace at least with last_slice_line
                 CHECK_CLIPPED_MV((last_mv[ref_mv_xy+ref_mv_stride][0]*ref_mv_scale + (1<<15))>>16,
                                 (last_mv[ref_mv_xy+ref_mv_stride][1]*ref_mv_scale + (1<<15))>>16)
         }
@@ -944,10 +944,10 @@ static av_always_inline int epzs_motion_search_internal(MpegEncContext * s, int
 
     if(c->avctx->last_predictor_count){
         const int count= c->avctx->last_predictor_count;
-        const int xstart= FFMAX(0, s->mb_x - count);
-        const int ystart= FFMAX(0, s->mb_y - count);
-        const int xend= FFMIN(s->mb_width , s->mb_x + count + 1);
-        const int yend= FFMIN(s->mb_height, s->mb_y + count + 1);
+        const int xstart= FFMAX(0, s->c.mb_x - count);
+        const int ystart= FFMAX(0, s->c.mb_y - count);
+        const int xend= FFMIN(s->c.mb_width , s->c.mb_x + count + 1);
+        const int yend= FFMIN(s->c.mb_height, s->c.mb_y + count + 1);
         int mb_y;
 
         for(mb_y=ystart; mb_y<yend; mb_y++){
@@ -974,12 +974,12 @@ static av_always_inline int epzs_motion_search_internal(MpegEncContext * s, int
 }
 
 //this function is dedicated to the brain damaged gcc
-int ff_epzs_motion_search(MpegEncContext *s, int *mx_ptr, int *my_ptr,
+int ff_epzs_motion_search(MPVEncContext *const s, int *mx_ptr, int *my_ptr,
                           int P[10][2], int src_index, int ref_index,
                           const int16_t (*last_mv)[2], int ref_mv_scale,
                           int size, int h)
 {
-    MotionEstContext * const c= &s->me;
+    MotionEstContext * const c= &s->c.me;
 //FIXME convert other functions in the same way if faster
     if(c->flags==0 && h==16 && size==0){
         return epzs_motion_search_internal(s, mx_ptr, my_ptr, P, src_index, ref_index, last_mv, ref_mv_scale, 0, 0, 16);
@@ -990,19 +990,19 @@ int ff_epzs_motion_search(MpegEncContext *s, int *mx_ptr, int *my_ptr,
     }
 }
 
-static int epzs_motion_search2(MpegEncContext * s,
+static int epzs_motion_search2(MPVEncContext *const s,
                              int *mx_ptr, int *my_ptr, int P[10][2],
                              int src_index, int ref_index, const int16_t (*last_mv)[2],
                              int ref_mv_scale, const int size)
 {
-    MotionEstContext * const c= &s->me;
+    MotionEstContext * const c= &s->c.me;
     int best[2]={0, 0};
     int d, dmin;
     unsigned map_generation;
     const int penalty_factor= c->penalty_factor;
     const int h=8;
-    const int ref_mv_stride= s->mb_stride;
-    const int ref_mv_xy= s->mb_x + s->mb_y *ref_mv_stride;
+    const int ref_mv_stride= s->c.mb_stride;
+    const int ref_mv_xy= s->c.mb_x + s->c.mb_y *ref_mv_stride;
     me_cmp_func cmpf, chroma_cmpf;
     LOAD_COMMON
     int flags= c->flags;
@@ -1016,7 +1016,7 @@ static int epzs_motion_search2(MpegEncContext * s,
     dmin = 1000000;
 
     /* first line */
-    if (s->first_slice_line) {
+    if (s->c.first_slice_line) {
         CHECK_MV(P_LEFT[0]>>shift, P_LEFT[1]>>shift)
         CHECK_CLIPPED_MV((last_mv[ref_mv_xy][0]*ref_mv_scale + (1<<15))>>16,
                         (last_mv[ref_mv_xy][1]*ref_mv_scale + (1<<15))>>16)
@@ -1034,7 +1034,7 @@ static int epzs_motion_search2(MpegEncContext * s,
     if(dmin>64*4){
         CHECK_CLIPPED_MV((last_mv[ref_mv_xy+1][0]*ref_mv_scale + (1<<15))>>16,
                         (last_mv[ref_mv_xy+1][1]*ref_mv_scale + (1<<15))>>16)
-        if(s->mb_y+1<s->end_mb_y)  //FIXME replace at least with last_slice_line
+        if(s->c.mb_y+1<s->c.end_mb_y)  //FIXME replace at least with last_slice_line
             CHECK_CLIPPED_MV((last_mv[ref_mv_xy+ref_mv_stride][0]*ref_mv_scale + (1<<15))>>16,
                             (last_mv[ref_mv_xy+ref_mv_stride][1]*ref_mv_scale + (1<<15))>>16)
     }
diff --git a/libavcodec/mpeg12enc.c b/libavcodec/mpeg12enc.c
index ae87f28d66..5a91f9fff1 100644
--- a/libavcodec/mpeg12enc.c
+++ b/libavcodec/mpeg12enc.c
@@ -137,7 +137,7 @@ av_cold void ff_mpeg1_init_uni_ac_vlc(const int8_t max_level[],
 }
 
 #if CONFIG_MPEG1VIDEO_ENCODER || CONFIG_MPEG2VIDEO_ENCODER
-static void put_header(MpegEncContext *s, uint32_t header)
+static void put_header(MPVEncContext *const s, uint32_t header)
 {
     align_put_bits(&s->pb);
     put_bits32(&s->pb, header);
@@ -146,16 +146,16 @@ static void put_header(MpegEncContext *s, uint32_t header)
 /* put sequence header if needed */
 static void mpeg1_encode_sequence_header(MPEG12EncContext *mpeg12)
 {
-    MpegEncContext *const s = &mpeg12->mpeg.s;
+    MPVEncContext *const s = &mpeg12->mpeg.s;
     unsigned int vbv_buffer_size, fps, v;
     int constraint_parameter_flag;
     AVRational framerate = ff_mpeg12_frame_rate_tab[mpeg12->frame_rate_index];
     uint64_t time_code;
     int64_t best_aspect_error = INT64_MAX;
-    AVRational aspect_ratio = s->avctx->sample_aspect_ratio;
+    AVRational aspect_ratio = s->c.avctx->sample_aspect_ratio;
     int aspect_ratio_info;
 
-    if (!(s->cur_pic.ptr->f->flags & AV_FRAME_FLAG_KEY))
+    if (!(s->c.cur_pic.ptr->f->flags & AV_FRAME_FLAG_KEY))
         return;
 
     if (aspect_ratio.num == 0 || aspect_ratio.den == 0)
@@ -164,15 +164,15 @@ static void mpeg1_encode_sequence_header(MPEG12EncContext *mpeg12)
     /* MPEG-1 header repeated every GOP */
     put_header(s, SEQ_START_CODE);
 
-    put_sbits(&s->pb, 12, s->width  & 0xFFF);
-    put_sbits(&s->pb, 12, s->height & 0xFFF);
+    put_sbits(&s->pb, 12, s->c.width  & 0xFFF);
+    put_sbits(&s->pb, 12, s->c.height & 0xFFF);
 
     for (int i = 1; i < 15; i++) {
         int64_t error = aspect_ratio.num * (1LL<<32) / aspect_ratio.den;
-        if (s->codec_id == AV_CODEC_ID_MPEG1VIDEO || i <= 1)
+        if (s->c.codec_id == AV_CODEC_ID_MPEG1VIDEO || i <= 1)
             error -= (1LL<<32) / ff_mpeg1_aspect[i];
         else
-            error -= (1LL<<32)*ff_mpeg2_aspect[i].num * s->height / s->width / ff_mpeg2_aspect[i].den;
+            error -= (1LL<<32)*ff_mpeg2_aspect[i].num * s->c.height / s->c.width / ff_mpeg2_aspect[i].den;
 
         error = FFABS(error);
 
@@ -185,16 +185,16 @@ static void mpeg1_encode_sequence_header(MPEG12EncContext *mpeg12)
     put_bits(&s->pb, 4, aspect_ratio_info);
     put_bits(&s->pb, 4, mpeg12->frame_rate_index);
 
-    if (s->avctx->rc_max_rate) {
-        v = (s->avctx->rc_max_rate + 399) / 400;
-        if (v > 0x3ffff && s->codec_id == AV_CODEC_ID_MPEG1VIDEO)
+    if (s->c.avctx->rc_max_rate) {
+        v = (s->c.avctx->rc_max_rate + 399) / 400;
+        if (v > 0x3ffff && s->c.codec_id == AV_CODEC_ID_MPEG1VIDEO)
             v = 0x3ffff;
     } else {
         v = 0x3FFFF;
     }
 
-    if (s->avctx->rc_buffer_size)
-        vbv_buffer_size = s->avctx->rc_buffer_size;
+    if (s->c.avctx->rc_buffer_size)
+        vbv_buffer_size = s->c.avctx->rc_buffer_size;
     else
         /* VBV calculation: Scaled so that a VCD has the proper
          * VBV size of 40 kilobytes */
@@ -206,48 +206,48 @@ static void mpeg1_encode_sequence_header(MPEG12EncContext *mpeg12)
     put_sbits(&s->pb, 10, vbv_buffer_size);
 
     constraint_parameter_flag =
-        s->width  <= 768                                    &&
-        s->height <= 576                                    &&
-        s->mb_width * s->mb_height                 <= 396   &&
-        s->mb_width * s->mb_height * framerate.num <= 396 * 25 * framerate.den &&
+        s->c.width  <= 768                                    &&
+        s->c.height <= 576                                    &&
+        s->c.mb_width * s->c.mb_height                 <= 396   &&
+        s->c.mb_width * s->c.mb_height * framerate.num <= 396 * 25 * framerate.den &&
         framerate.num <= framerate.den * 30                 &&
-        s->avctx->me_range                                  &&
-        s->avctx->me_range < 128                            &&
+        s->c.avctx->me_range                                  &&
+        s->c.avctx->me_range < 128                            &&
         vbv_buffer_size <= 20                               &&
         v <= 1856000 / 400                                  &&
-        s->codec_id == AV_CODEC_ID_MPEG1VIDEO;
+        s->c.codec_id == AV_CODEC_ID_MPEG1VIDEO;
 
     put_bits(&s->pb, 1, constraint_parameter_flag);
 
-    ff_write_quant_matrix(&s->pb, s->avctx->intra_matrix);
-    ff_write_quant_matrix(&s->pb, s->avctx->inter_matrix);
+    ff_write_quant_matrix(&s->pb, s->c.avctx->intra_matrix);
+    ff_write_quant_matrix(&s->pb, s->c.avctx->inter_matrix);
 
-    if (s->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
+    if (s->c.codec_id == AV_CODEC_ID_MPEG2VIDEO) {
         const AVFrameSideData *side_data;
-        int width = s->width;
-        int height = s->height;
+        int width = s->c.width;
+        int height = s->c.height;
         int use_seq_disp_ext;
 
         put_header(s, EXT_START_CODE);
         put_bits(&s->pb, 4, 1);                 // seq ext
 
-        put_bits(&s->pb, 1, s->avctx->profile == AV_PROFILE_MPEG2_422); // escx 1 for 4:2:2 profile
+        put_bits(&s->pb, 1, s->c.avctx->profile == AV_PROFILE_MPEG2_422); // escx 1 for 4:2:2 profile
 
-        put_bits(&s->pb, 3, s->avctx->profile); // profile
-        put_bits(&s->pb, 4, s->avctx->level);   // level
+        put_bits(&s->pb, 3, s->c.avctx->profile); // profile
+        put_bits(&s->pb, 4, s->c.avctx->level);   // level
 
-        put_bits(&s->pb, 1, s->progressive_sequence);
-        put_bits(&s->pb, 2, s->chroma_format);
-        put_bits(&s->pb, 2, s->width  >> 12);
-        put_bits(&s->pb, 2, s->height >> 12);
+        put_bits(&s->pb, 1, s->c.progressive_sequence);
+        put_bits(&s->pb, 2, s->c.chroma_format);
+        put_bits(&s->pb, 2, s->c.width  >> 12);
+        put_bits(&s->pb, 2, s->c.height >> 12);
         put_bits(&s->pb, 12, v >> 18);          // bitrate ext
         put_bits(&s->pb, 1, 1);                 // marker
         put_bits(&s->pb, 8, vbv_buffer_size >> 10); // vbv buffer ext
-        put_bits(&s->pb, 1, s->low_delay);
+        put_bits(&s->pb, 1, s->c.low_delay);
         put_bits(&s->pb, 2, mpeg12->frame_rate_ext.num-1); // frame_rate_ext_n
         put_bits(&s->pb, 5, mpeg12->frame_rate_ext.den-1); // frame_rate_ext_d
 
-        side_data = av_frame_get_side_data(s->cur_pic.ptr->f, AV_FRAME_DATA_PANSCAN);
+        side_data = av_frame_get_side_data(s->c.cur_pic.ptr->f, AV_FRAME_DATA_PANSCAN);
         if (side_data) {
             const AVPanScan *pan_scan = (AVPanScan *)side_data->data;
             if (pan_scan->width && pan_scan->height) {
@@ -256,11 +256,11 @@ static void mpeg1_encode_sequence_header(MPEG12EncContext *mpeg12)
             }
         }
 
-        use_seq_disp_ext = (width != s->width ||
-                            height != s->height ||
-                            s->avctx->color_primaries != AVCOL_PRI_UNSPECIFIED ||
-                            s->avctx->color_trc != AVCOL_TRC_UNSPECIFIED ||
-                            s->avctx->colorspace != AVCOL_SPC_UNSPECIFIED ||
+        use_seq_disp_ext = (width != s->c.width ||
+                            height != s->c.height ||
+                            s->c.avctx->color_primaries != AVCOL_PRI_UNSPECIFIED ||
+                            s->c.avctx->color_trc != AVCOL_TRC_UNSPECIFIED ||
+                            s->c.avctx->colorspace != AVCOL_SPC_UNSPECIFIED ||
                             mpeg12->video_format != VIDEO_FORMAT_UNSPECIFIED);
 
         if (mpeg12->seq_disp_ext == 1 ||
@@ -269,9 +269,9 @@ static void mpeg1_encode_sequence_header(MPEG12EncContext *mpeg12)
             put_bits(&s->pb, 4, 2);                         // sequence display extension
             put_bits(&s->pb, 3, mpeg12->video_format);      // video_format
             put_bits(&s->pb, 1, 1);                         // colour_description
-            put_bits(&s->pb, 8, s->avctx->color_primaries); // colour_primaries
-            put_bits(&s->pb, 8, s->avctx->color_trc);       // transfer_characteristics
-            put_bits(&s->pb, 8, s->avctx->colorspace);      // matrix_coefficients
+            put_bits(&s->pb, 8, s->c.avctx->color_primaries); // colour_primaries
+            put_bits(&s->pb, 8, s->c.avctx->color_trc);       // transfer_characteristics
+            put_bits(&s->pb, 8, s->c.avctx->colorspace);      // matrix_coefficients
             put_bits(&s->pb, 14, width);                    // display_horizontal_size
             put_bits(&s->pb, 1, 1);                         // marker_bit
             put_bits(&s->pb, 14, height);                   // display_vertical_size
@@ -284,10 +284,10 @@ static void mpeg1_encode_sequence_header(MPEG12EncContext *mpeg12)
     /* time code: we must convert from the real frame rate to a
      * fake MPEG frame rate in case of low frame rate */
     fps       = (framerate.num + framerate.den / 2) / framerate.den;
-    time_code = s->cur_pic.ptr->coded_picture_number +
+    time_code = s->c.cur_pic.ptr->coded_picture_number +
                 mpeg12->timecode_frame_start;
 
-    mpeg12->gop_picture_number = s->cur_pic.ptr->coded_picture_number;
+    mpeg12->gop_picture_number = s->c.cur_pic.ptr->coded_picture_number;
 
     av_assert0(mpeg12->drop_frame_timecode == !!(mpeg12->tc.flags & AV_TIMECODE_FLAG_DROPFRAME));
     if (mpeg12->drop_frame_timecode)
@@ -298,12 +298,12 @@ static void mpeg1_encode_sequence_header(MPEG12EncContext *mpeg12)
     put_bits(&s->pb, 1, 1);
     put_bits(&s->pb, 6, (uint32_t)((time_code / fps) % 60));
     put_bits(&s->pb, 6, (uint32_t)((time_code % fps)));
-    put_bits(&s->pb, 1, !!(s->avctx->flags & AV_CODEC_FLAG_CLOSED_GOP) ||
+    put_bits(&s->pb, 1, !!(s->c.avctx->flags & AV_CODEC_FLAG_CLOSED_GOP) ||
                         mpeg12->mpeg.intra_only || !mpeg12->gop_picture_number);
     put_bits(&s->pb, 1, 0);                     // broken link
 }
 
-static inline void encode_mb_skip_run(MpegEncContext *s, int run)
+static inline void encode_mb_skip_run(MPVEncContext *const s, int run)
 {
     while (run >= 33) {
         put_bits(&s->pb, 11, 0x008);
@@ -313,20 +313,20 @@ static inline void encode_mb_skip_run(MpegEncContext *s, int run)
              ff_mpeg12_mbAddrIncrTable[run][0]);
 }
 
-static av_always_inline void put_qscale(MpegEncContext *s)
+static av_always_inline void put_qscale(MPVEncContext *const s)
 {
-    put_bits(&s->pb, 5, s->qscale);
+    put_bits(&s->pb, 5, s->c.qscale);
 }
 
-void ff_mpeg1_encode_slice_header(MpegEncContext *s)
+void ff_mpeg1_encode_slice_header(MPVEncContext *const s)
 {
-    if (s->codec_id == AV_CODEC_ID_MPEG2VIDEO && s->height > 2800) {
-        put_header(s, SLICE_MIN_START_CODE + (s->mb_y & 127));
+    if (s->c.codec_id == AV_CODEC_ID_MPEG2VIDEO && s->c.height > 2800) {
+        put_header(s, SLICE_MIN_START_CODE + (s->c.mb_y & 127));
         /* slice_vertical_position_extension */
-        put_bits(&s->pb, 3, s->mb_y >> 7);
+        put_bits(&s->pb, 3, s->c.mb_y >> 7);
     } else {
-        av_assert1(s->mb_y <= SLICE_MAX_START_CODE - SLICE_MIN_START_CODE);
-        put_header(s, SLICE_MIN_START_CODE + s->mb_y);
+        av_assert1(s->c.mb_y <= SLICE_MAX_START_CODE - SLICE_MIN_START_CODE);
+        put_header(s, SLICE_MIN_START_CODE + s->c.mb_y);
     }
     put_qscale(s);
     /* slice extra information */
@@ -336,7 +336,7 @@ void ff_mpeg1_encode_slice_header(MpegEncContext *s)
 static int mpeg1_encode_picture_header(MPVMainEncContext *const m)
 {
     MPEG12EncContext *const mpeg12 = (MPEG12EncContext*)m;
-    MpegEncContext *const s = &m->s;
+    MPVEncContext *const s = &m->s;
     const AVFrameSideData *side_data;
 
     mpeg1_encode_sequence_header(mpeg12);
@@ -345,74 +345,74 @@ static int mpeg1_encode_picture_header(MPVMainEncContext *const m)
     put_header(s, PICTURE_START_CODE);
     /* temporal reference */
 
-    // RAL: s->picture_number instead of s->fake_picture_number
+    // RAL: s->c.picture_number instead of s->fake_picture_number
     put_bits(&s->pb, 10,
-             (s->picture_number - mpeg12->gop_picture_number) & 0x3ff);
-    put_bits(&s->pb, 3, s->pict_type);
+             (s->c.picture_number - mpeg12->gop_picture_number) & 0x3ff);
+    put_bits(&s->pb, 3, s->c.pict_type);
 
     m->vbv_delay_pos = put_bytes_count(&s->pb, 0);
     put_bits(&s->pb, 16, 0xFFFF);               /* vbv_delay */
 
     // RAL: Forward f_code also needed for B-frames
-    if (s->pict_type == AV_PICTURE_TYPE_P ||
-        s->pict_type == AV_PICTURE_TYPE_B) {
+    if (s->c.pict_type == AV_PICTURE_TYPE_P ||
+        s->c.pict_type == AV_PICTURE_TYPE_B) {
         put_bits(&s->pb, 1, 0);                 /* half pel coordinates */
-        if (s->codec_id == AV_CODEC_ID_MPEG1VIDEO)
-            put_bits(&s->pb, 3, s->f_code);     /* forward_f_code */
+        if (s->c.codec_id == AV_CODEC_ID_MPEG1VIDEO)
+            put_bits(&s->pb, 3, s->c.f_code);     /* forward_f_code */
         else
             put_bits(&s->pb, 3, 7);             /* forward_f_code */
     }
 
     // RAL: Backward f_code necessary for B-frames
-    if (s->pict_type == AV_PICTURE_TYPE_B) {
+    if (s->c.pict_type == AV_PICTURE_TYPE_B) {
         put_bits(&s->pb, 1, 0);                 /* half pel coordinates */
-        if (s->codec_id == AV_CODEC_ID_MPEG1VIDEO)
-            put_bits(&s->pb, 3, s->b_code);     /* backward_f_code */
+        if (s->c.codec_id == AV_CODEC_ID_MPEG1VIDEO)
+            put_bits(&s->pb, 3, s->c.b_code);     /* backward_f_code */
         else
             put_bits(&s->pb, 3, 7);             /* backward_f_code */
     }
 
     put_bits(&s->pb, 1, 0);                     /* extra bit picture */
 
-    s->frame_pred_frame_dct = 1;
-    if (s->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
+    s->c.frame_pred_frame_dct = 1;
+    if (s->c.codec_id == AV_CODEC_ID_MPEG2VIDEO) {
         put_header(s, EXT_START_CODE);
         put_bits(&s->pb, 4, 8);                 /* pic ext */
-        if (s->pict_type == AV_PICTURE_TYPE_P ||
-            s->pict_type == AV_PICTURE_TYPE_B) {
-            put_bits(&s->pb, 4, s->f_code);
-            put_bits(&s->pb, 4, s->f_code);
+        if (s->c.pict_type == AV_PICTURE_TYPE_P ||
+            s->c.pict_type == AV_PICTURE_TYPE_B) {
+            put_bits(&s->pb, 4, s->c.f_code);
+            put_bits(&s->pb, 4, s->c.f_code);
         } else {
             put_bits(&s->pb, 8, 255);
         }
-        if (s->pict_type == AV_PICTURE_TYPE_B) {
-            put_bits(&s->pb, 4, s->b_code);
-            put_bits(&s->pb, 4, s->b_code);
+        if (s->c.pict_type == AV_PICTURE_TYPE_B) {
+            put_bits(&s->pb, 4, s->c.b_code);
+            put_bits(&s->pb, 4, s->c.b_code);
         } else {
             put_bits(&s->pb, 8, 255);
         }
-        put_bits(&s->pb, 2, s->intra_dc_precision);
+        put_bits(&s->pb, 2, s->c.intra_dc_precision);
 
-        av_assert0(s->picture_structure == PICT_FRAME);
-        put_bits(&s->pb, 2, s->picture_structure);
-        if (s->progressive_sequence)
+        av_assert0(s->c.picture_structure == PICT_FRAME);
+        put_bits(&s->pb, 2, s->c.picture_structure);
+        if (s->c.progressive_sequence)
             put_bits(&s->pb, 1, 0);             /* no repeat */
         else
-            put_bits(&s->pb, 1, !!(s->cur_pic.ptr->f->flags & AV_FRAME_FLAG_TOP_FIELD_FIRST));
+            put_bits(&s->pb, 1, !!(s->c.cur_pic.ptr->f->flags & AV_FRAME_FLAG_TOP_FIELD_FIRST));
         /* XXX: optimize the generation of this flag with entropy measures */
-        s->frame_pred_frame_dct = s->progressive_sequence;
-
-        put_bits(&s->pb, 1, s->frame_pred_frame_dct);
-        put_bits(&s->pb, 1, s->concealment_motion_vectors);
-        put_bits(&s->pb, 1, s->q_scale_type);
-        put_bits(&s->pb, 1, s->intra_vlc_format);
-        put_bits(&s->pb, 1, s->alternate_scan);
-        put_bits(&s->pb, 1, s->repeat_first_field);
-        s->progressive_frame = s->progressive_sequence;
+        s->c.frame_pred_frame_dct = s->c.progressive_sequence;
+
+        put_bits(&s->pb, 1, s->c.frame_pred_frame_dct);
+        put_bits(&s->pb, 1, s->c.concealment_motion_vectors);
+        put_bits(&s->pb, 1, s->c.q_scale_type);
+        put_bits(&s->pb, 1, s->c.intra_vlc_format);
+        put_bits(&s->pb, 1, s->c.alternate_scan);
+        put_bits(&s->pb, 1, s->c.repeat_first_field);
+        s->c.progressive_frame = s->c.progressive_sequence;
         /* chroma_420_type */
-        put_bits(&s->pb, 1, s->chroma_format ==
-                            CHROMA_420 ? s->progressive_frame : 0);
-        put_bits(&s->pb, 1, s->progressive_frame);
+        put_bits(&s->pb, 1, s->c.chroma_format ==
+                            CHROMA_420 ? s->c.progressive_frame : 0);
+        put_bits(&s->pb, 1, s->c.progressive_frame);
         put_bits(&s->pb, 1, 0);                 /* composite_display_flag */
     }
     if (mpeg12->scan_offset) {
@@ -422,7 +422,7 @@ static int mpeg1_encode_picture_header(MPVMainEncContext *const m)
         for (i = 0; i < sizeof(svcd_scan_offset_placeholder); i++)
             put_bits(&s->pb, 8, svcd_scan_offset_placeholder[i]);
     }
-    side_data = av_frame_get_side_data(s->cur_pic.ptr->f,
+    side_data = av_frame_get_side_data(s->c.cur_pic.ptr->f,
                                        AV_FRAME_DATA_STEREO3D);
     if (side_data) {
         const AVStereo3D *stereo = (AVStereo3D *)side_data->data;
@@ -460,7 +460,7 @@ static int mpeg1_encode_picture_header(MPVMainEncContext *const m)
     }
 
     if (CONFIG_MPEG2VIDEO_ENCODER && mpeg12->a53_cc) {
-        side_data = av_frame_get_side_data(s->cur_pic.ptr->f,
+        side_data = av_frame_get_side_data(s->c.cur_pic.ptr->f,
             AV_FRAME_DATA_A53_CC);
         if (side_data) {
             if (side_data->size <= A53_MAX_CC_COUNT * 3 && side_data->size % 3 == 0) {
@@ -476,33 +476,33 @@ static int mpeg1_encode_picture_header(MPVMainEncContext *const m)
 
                 put_bits(&s->pb, 8, 0xff);                  // marker_bits
             } else {
-                av_log(s->avctx, AV_LOG_WARNING,
+                av_log(s->c.avctx, AV_LOG_WARNING,
                     "Closed Caption size (%"SIZE_SPECIFIER") can not exceed "
                     "93 bytes and must be a multiple of 3\n", side_data->size);
             }
         }
     }
 
-    s->mb_y = 0;
+    s->c.mb_y = 0;
     ff_mpeg1_encode_slice_header(s);
 
     return 0;
 }
 
-static inline void put_mb_modes(MpegEncContext *s, int n, int bits,
+static inline void put_mb_modes(MPVEncContext *const s, int n, int bits,
                                 int has_mv, int field_motion)
 {
     put_bits(&s->pb, n, bits);
-    if (!s->frame_pred_frame_dct) {
+    if (!s->c.frame_pred_frame_dct) {
         if (has_mv)
             /* motion_type: frame/field */
             put_bits(&s->pb, 2, 2 - field_motion);
-        put_bits(&s->pb, 1, s->interlaced_dct);
+        put_bits(&s->pb, 1, s->c.interlaced_dct);
     }
 }
 
 // RAL: Parameter added: f_or_b_code
-static void mpeg1_encode_motion(MpegEncContext *s, int val, int f_or_b_code)
+static void mpeg1_encode_motion(MPVEncContext *const s, int val, int f_or_b_code)
 {
     if (val == 0) {
         /* zero vector, corresponds to ff_mpeg12_mbMotionVectorTable[0] */
@@ -539,7 +539,7 @@ static void mpeg1_encode_motion(MpegEncContext *s, int val, int f_or_b_code)
     }
 }
 
-static inline void encode_dc(MpegEncContext *s, int diff, int component)
+static inline void encode_dc(MPVEncContext *const s, int diff, int component)
 {
     unsigned int diff_u = diff + 255;
     if (diff_u >= 511) {
@@ -573,23 +573,23 @@ static inline void encode_dc(MpegEncContext *s, int diff, int component)
     }
 }
 
-static void mpeg1_encode_block(MpegEncContext *s, const int16_t *block, int n)
+static void mpeg1_encode_block(MPVEncContext *const s, const int16_t block[], int n)
 {
     int alevel, level, last_non_zero, dc, diff, i, j, run, last_index, sign;
     int code, component;
     const uint16_t (*table_vlc)[2] = ff_mpeg1_vlc_table;
 
-    last_index = s->block_last_index[n];
+    last_index = s->c.block_last_index[n];
 
     /* DC coef */
-    if (s->mb_intra) {
+    if (s->c.mb_intra) {
         component = (n <= 3 ? 0 : (n & 1) + 1);
         dc        = block[0];                   /* overflow is impossible */
-        diff      = dc - s->last_dc[component];
+        diff      = dc - s->c.last_dc[component];
         encode_dc(s, diff, component);
-        s->last_dc[component] = dc;
+        s->c.last_dc[component] = dc;
         i = 1;
-        if (s->intra_vlc_format)
+        if (s->c.intra_vlc_format)
             table_vlc = ff_mpeg2_vlc_table;
     } else {
         /* encode the first coefficient: needs to be done here because
@@ -610,7 +610,7 @@ static void mpeg1_encode_block(MpegEncContext *s, const int16_t *block, int n)
     last_non_zero = i - 1;
 
     for (; i <= last_index; i++) {
-        j     = s->intra_scantable.permutated[i];
+        j     = s->c.intra_scantable.permutated[i];
         level = block[j];
 
 next_coef:
@@ -634,7 +634,7 @@ next_coef:
                 put_bits(&s->pb, 6, 0x01);
                 /* escape: only clip in this case */
                 put_bits(&s->pb, 6, run);
-                if (s->codec_id == AV_CODEC_ID_MPEG1VIDEO) {
+                if (s->c.codec_id == AV_CODEC_ID_MPEG1VIDEO) {
                     if (alevel < 128) {
                         put_sbits(&s->pb, 8, level);
                     } else {
@@ -654,55 +654,55 @@ next_coef:
     put_bits(&s->pb, table_vlc[112][1], table_vlc[112][0]);
 }
 
-static av_always_inline void mpeg1_encode_mb_internal(MpegEncContext *s,
+static av_always_inline void mpeg1_encode_mb_internal(MPVEncContext *const s,
                                                       const int16_t block[8][64],
                                                       int motion_x, int motion_y,
                                                       int mb_block_count,
                                                       int chroma_y_shift)
 {
 /* MPEG-1 is always 420. */
-#define IS_MPEG1(s) (chroma_y_shift == 1 && (s)->codec_id == AV_CODEC_ID_MPEG1VIDEO)
+#define IS_MPEG1(s) (chroma_y_shift == 1 && (s)->c.codec_id == AV_CODEC_ID_MPEG1VIDEO)
     int i, cbp;
-    const int mb_x     = s->mb_x;
-    const int mb_y     = s->mb_y;
-    const int first_mb = mb_x == s->resync_mb_x && mb_y == s->resync_mb_y;
+    const int mb_x     = s->c.mb_x;
+    const int mb_y     = s->c.mb_y;
+    const int first_mb = mb_x == s->c.resync_mb_x && mb_y == s->c.resync_mb_y;
 
     /* compute cbp */
     cbp = 0;
     for (i = 0; i < mb_block_count; i++)
-        if (s->block_last_index[i] >= 0)
+        if (s->c.block_last_index[i] >= 0)
             cbp |= 1 << (mb_block_count - 1 - i);
 
-    if (cbp == 0 && !first_mb && s->mv_type == MV_TYPE_16X16 &&
-        (mb_x != s->mb_width - 1 ||
-         (mb_y != s->end_mb_y - 1 && IS_MPEG1(s))) &&
-        ((s->pict_type == AV_PICTURE_TYPE_P && (motion_x | motion_y) == 0) ||
-         (s->pict_type == AV_PICTURE_TYPE_B && s->mv_dir == s->last_mv_dir &&
-          (((s->mv_dir & MV_DIR_FORWARD)
-            ? ((s->mv[0][0][0] - s->last_mv[0][0][0]) |
-               (s->mv[0][0][1] - s->last_mv[0][0][1])) : 0) |
-           ((s->mv_dir & MV_DIR_BACKWARD)
-            ? ((s->mv[1][0][0] - s->last_mv[1][0][0]) |
-               (s->mv[1][0][1] - s->last_mv[1][0][1])) : 0)) == 0))) {
-        s->mb_skip_run++;
-        s->qscale -= s->dquant;
+    if (cbp == 0 && !first_mb && s->c.mv_type == MV_TYPE_16X16 &&
+        (mb_x != s->c.mb_width - 1 ||
+         (mb_y != s->c.end_mb_y - 1 && IS_MPEG1(s))) &&
+        ((s->c.pict_type == AV_PICTURE_TYPE_P && (motion_x | motion_y) == 0) ||
+         (s->c.pict_type == AV_PICTURE_TYPE_B && s->c.mv_dir == s->last_mv_dir &&
+          (((s->c.mv_dir & MV_DIR_FORWARD)
+            ? ((s->c.mv[0][0][0] - s->c.last_mv[0][0][0]) |
+               (s->c.mv[0][0][1] - s->c.last_mv[0][0][1])) : 0) |
+           ((s->c.mv_dir & MV_DIR_BACKWARD)
+            ? ((s->c.mv[1][0][0] - s->c.last_mv[1][0][0]) |
+               (s->c.mv[1][0][1] - s->c.last_mv[1][0][1])) : 0)) == 0))) {
+        s->c.mb_skip_run++;
+        s->c.qscale -= s->dquant;
         s->misc_bits++;
         s->last_bits++;
-        if (s->pict_type == AV_PICTURE_TYPE_P) {
-            s->last_mv[0][0][0] =
-            s->last_mv[0][0][1] =
-            s->last_mv[0][1][0] =
-            s->last_mv[0][1][1] = 0;
+        if (s->c.pict_type == AV_PICTURE_TYPE_P) {
+            s->c.last_mv[0][0][0] =
+            s->c.last_mv[0][0][1] =
+            s->c.last_mv[0][1][0] =
+            s->c.last_mv[0][1][1] = 0;
         }
     } else {
         if (first_mb) {
-            av_assert0(s->mb_skip_run == 0);
-            encode_mb_skip_run(s, s->mb_x);
+            av_assert0(s->c.mb_skip_run == 0);
+            encode_mb_skip_run(s, s->c.mb_x);
         } else {
-            encode_mb_skip_run(s, s->mb_skip_run);
+            encode_mb_skip_run(s, s->c.mb_skip_run);
         }
 
-        if (s->pict_type == AV_PICTURE_TYPE_I) {
+        if (s->c.pict_type == AV_PICTURE_TYPE_I) {
             if (s->dquant && cbp) {
                 /* macroblock_type: macroblock_quant = 1 */
                 put_mb_modes(s, 2, 1, 0, 0);
@@ -710,23 +710,23 @@ static av_always_inline void mpeg1_encode_mb_internal(MpegEncContext *s,
             } else {
                 /* macroblock_type: macroblock_quant = 0 */
                 put_mb_modes(s, 1, 1, 0, 0);
-                s->qscale -= s->dquant;
+                s->c.qscale -= s->dquant;
             }
             s->misc_bits += get_bits_diff(s);
             s->i_count++;
-        } else if (s->mb_intra) {
+        } else if (s->c.mb_intra) {
             if (s->dquant && cbp) {
                 put_mb_modes(s, 6, 0x01, 0, 0);
                 put_qscale(s);
             } else {
                 put_mb_modes(s, 5, 0x03, 0, 0);
-                s->qscale -= s->dquant;
+                s->c.qscale -= s->dquant;
             }
             s->misc_bits += get_bits_diff(s);
             s->i_count++;
-            memset(s->last_mv, 0, sizeof(s->last_mv));
-        } else if (s->pict_type == AV_PICTURE_TYPE_P) {
-            if (s->mv_type == MV_TYPE_16X16) {
+            memset(s->c.last_mv, 0, sizeof(s->c.last_mv));
+        } else if (s->c.pict_type == AV_PICTURE_TYPE_P) {
+            if (s->c.mv_type == MV_TYPE_16X16) {
                 if (cbp != 0) {
                     if ((motion_x | motion_y) == 0) {
                         if (s->dquant) {
@@ -748,34 +748,34 @@ static av_always_inline void mpeg1_encode_mb_internal(MpegEncContext *s,
                         s->misc_bits += get_bits_diff(s);
                         // RAL: f_code parameter added
                         mpeg1_encode_motion(s,
-                                            motion_x - s->last_mv[0][0][0],
-                                            s->f_code);
+                                            motion_x - s->c.last_mv[0][0][0],
+                                            s->c.f_code);
                         // RAL: f_code parameter added
                         mpeg1_encode_motion(s,
-                                            motion_y - s->last_mv[0][0][1],
-                                            s->f_code);
+                                            motion_y - s->c.last_mv[0][0][1],
+                                            s->c.f_code);
                         s->mv_bits += get_bits_diff(s);
                     }
                 } else {
                     put_bits(&s->pb, 3, 1);         /* motion only */
-                    if (!s->frame_pred_frame_dct)
+                    if (!s->c.frame_pred_frame_dct)
                         put_bits(&s->pb, 2, 2);     /* motion_type: frame */
                     s->misc_bits += get_bits_diff(s);
                     // RAL: f_code parameter added
                     mpeg1_encode_motion(s,
-                                        motion_x - s->last_mv[0][0][0],
-                                        s->f_code);
+                                        motion_x - s->c.last_mv[0][0][0],
+                                        s->c.f_code);
                     // RAL: f_code parameter added
                     mpeg1_encode_motion(s,
-                                        motion_y - s->last_mv[0][0][1],
-                                        s->f_code);
-                    s->qscale  -= s->dquant;
+                                        motion_y - s->c.last_mv[0][0][1],
+                                        s->c.f_code);
+                    s->c.qscale  -= s->dquant;
                     s->mv_bits += get_bits_diff(s);
                 }
-                s->last_mv[0][1][0] = s->last_mv[0][0][0] = motion_x;
-                s->last_mv[0][1][1] = s->last_mv[0][0][1] = motion_y;
+                s->c.last_mv[0][1][0] = s->c.last_mv[0][0][0] = motion_x;
+                s->c.last_mv[0][1][1] = s->c.last_mv[0][0][1] = motion_y;
             } else {
-                av_assert2(!s->frame_pred_frame_dct && s->mv_type == MV_TYPE_FIELD);
+                av_assert2(!s->c.frame_pred_frame_dct && s->c.mv_type == MV_TYPE_FIELD);
 
                 if (cbp) {
                     if (s->dquant) {
@@ -787,19 +787,19 @@ static av_always_inline void mpeg1_encode_mb_internal(MpegEncContext *s,
                 } else {
                     put_bits(&s->pb, 3, 1);             /* motion only */
                     put_bits(&s->pb, 2, 1);             /* motion_type: field */
-                    s->qscale -= s->dquant;
+                    s->c.qscale -= s->dquant;
                 }
                 s->misc_bits += get_bits_diff(s);
                 for (i = 0; i < 2; i++) {
-                    put_bits(&s->pb, 1, s->field_select[0][i]);
+                    put_bits(&s->pb, 1, s->c.field_select[0][i]);
                     mpeg1_encode_motion(s,
-                                        s->mv[0][i][0] - s->last_mv[0][i][0],
-                                        s->f_code);
+                                        s->c.mv[0][i][0] - s->c.last_mv[0][i][0],
+                                        s->c.f_code);
                     mpeg1_encode_motion(s,
-                                        s->mv[0][i][1] - (s->last_mv[0][i][1] >> 1),
-                                        s->f_code);
-                    s->last_mv[0][i][0] = s->mv[0][i][0];
-                    s->last_mv[0][i][1] = 2 * s->mv[0][i][1];
+                                        s->c.mv[0][i][1] - (s->c.last_mv[0][i][1] >> 1),
+                                        s->c.f_code);
+                    s->c.last_mv[0][i][0] = s->c.mv[0][i][0];
+                    s->c.last_mv[0][i][1] = 2 * s->c.mv[0][i][1];
                 }
                 s->mv_bits += get_bits_diff(s);
             }
@@ -816,91 +816,91 @@ static av_always_inline void mpeg1_encode_mb_internal(MpegEncContext *s,
                 }
             }
         } else {
-            if (s->mv_type == MV_TYPE_16X16) {
+            if (s->c.mv_type == MV_TYPE_16X16) {
                 if (cbp) {                      // With coded bloc pattern
                     if (s->dquant) {
-                        if (s->mv_dir == MV_DIR_FORWARD)
+                        if (s->c.mv_dir == MV_DIR_FORWARD)
                             put_mb_modes(s, 6, 3, 1, 0);
                         else
-                            put_mb_modes(s, 8 - s->mv_dir, 2, 1, 0);
+                            put_mb_modes(s, 8 - s->c.mv_dir, 2, 1, 0);
                         put_qscale(s);
                     } else {
-                        put_mb_modes(s, 5 - s->mv_dir, 3, 1, 0);
+                        put_mb_modes(s, 5 - s->c.mv_dir, 3, 1, 0);
                     }
                 } else {                        // No coded bloc pattern
-                    put_bits(&s->pb, 5 - s->mv_dir, 2);
-                    if (!s->frame_pred_frame_dct)
+                    put_bits(&s->pb, 5 - s->c.mv_dir, 2);
+                    if (!s->c.frame_pred_frame_dct)
                         put_bits(&s->pb, 2, 2); /* motion_type: frame */
-                    s->qscale -= s->dquant;
+                    s->c.qscale -= s->dquant;
                 }
                 s->misc_bits += get_bits_diff(s);
-                if (s->mv_dir & MV_DIR_FORWARD) {
+                if (s->c.mv_dir & MV_DIR_FORWARD) {
                     mpeg1_encode_motion(s,
-                                        s->mv[0][0][0] - s->last_mv[0][0][0],
-                                        s->f_code);
+                                        s->c.mv[0][0][0] - s->c.last_mv[0][0][0],
+                                        s->c.f_code);
                     mpeg1_encode_motion(s,
-                                        s->mv[0][0][1] - s->last_mv[0][0][1],
-                                        s->f_code);
-                    s->last_mv[0][0][0] =
-                    s->last_mv[0][1][0] = s->mv[0][0][0];
-                    s->last_mv[0][0][1] =
-                    s->last_mv[0][1][1] = s->mv[0][0][1];
+                                        s->c.mv[0][0][1] - s->c.last_mv[0][0][1],
+                                        s->c.f_code);
+                    s->c.last_mv[0][0][0] =
+                    s->c.last_mv[0][1][0] = s->c.mv[0][0][0];
+                    s->c.last_mv[0][0][1] =
+                    s->c.last_mv[0][1][1] = s->c.mv[0][0][1];
                 }
-                if (s->mv_dir & MV_DIR_BACKWARD) {
+                if (s->c.mv_dir & MV_DIR_BACKWARD) {
                     mpeg1_encode_motion(s,
-                                        s->mv[1][0][0] - s->last_mv[1][0][0],
-                                        s->b_code);
+                                        s->c.mv[1][0][0] - s->c.last_mv[1][0][0],
+                                        s->c.b_code);
                     mpeg1_encode_motion(s,
-                                        s->mv[1][0][1] - s->last_mv[1][0][1],
-                                        s->b_code);
-                    s->last_mv[1][0][0] =
-                    s->last_mv[1][1][0] = s->mv[1][0][0];
-                    s->last_mv[1][0][1] =
-                    s->last_mv[1][1][1] = s->mv[1][0][1];
+                                        s->c.mv[1][0][1] - s->c.last_mv[1][0][1],
+                                        s->c.b_code);
+                    s->c.last_mv[1][0][0] =
+                    s->c.last_mv[1][1][0] = s->c.mv[1][0][0];
+                    s->c.last_mv[1][0][1] =
+                    s->c.last_mv[1][1][1] = s->c.mv[1][0][1];
                 }
             } else {
-                av_assert2(s->mv_type == MV_TYPE_FIELD);
-                av_assert2(!s->frame_pred_frame_dct);
+                av_assert2(s->c.mv_type == MV_TYPE_FIELD);
+                av_assert2(!s->c.frame_pred_frame_dct);
                 if (cbp) {                      // With coded bloc pattern
                     if (s->dquant) {
-                        if (s->mv_dir == MV_DIR_FORWARD)
+                        if (s->c.mv_dir == MV_DIR_FORWARD)
                             put_mb_modes(s, 6, 3, 1, 1);
                         else
-                            put_mb_modes(s, 8 - s->mv_dir, 2, 1, 1);
+                            put_mb_modes(s, 8 - s->c.mv_dir, 2, 1, 1);
                         put_qscale(s);
                     } else {
-                        put_mb_modes(s, 5 - s->mv_dir, 3, 1, 1);
+                        put_mb_modes(s, 5 - s->c.mv_dir, 3, 1, 1);
                     }
                 } else {                        // No coded bloc pattern
-                    put_bits(&s->pb, 5 - s->mv_dir, 2);
+                    put_bits(&s->pb, 5 - s->c.mv_dir, 2);
                     put_bits(&s->pb, 2, 1);     /* motion_type: field */
-                    s->qscale -= s->dquant;
+                    s->c.qscale -= s->dquant;
                 }
                 s->misc_bits += get_bits_diff(s);
-                if (s->mv_dir & MV_DIR_FORWARD) {
+                if (s->c.mv_dir & MV_DIR_FORWARD) {
                     for (i = 0; i < 2; i++) {
-                        put_bits(&s->pb, 1, s->field_select[0][i]);
+                        put_bits(&s->pb, 1, s->c.field_select[0][i]);
                         mpeg1_encode_motion(s,
-                                            s->mv[0][i][0] - s->last_mv[0][i][0],
-                                            s->f_code);
+                                            s->c.mv[0][i][0] - s->c.last_mv[0][i][0],
+                                            s->c.f_code);
                         mpeg1_encode_motion(s,
-                                            s->mv[0][i][1] - (s->last_mv[0][i][1] >> 1),
-                                            s->f_code);
-                        s->last_mv[0][i][0] = s->mv[0][i][0];
-                        s->last_mv[0][i][1] = s->mv[0][i][1] * 2;
+                                            s->c.mv[0][i][1] - (s->c.last_mv[0][i][1] >> 1),
+                                            s->c.f_code);
+                        s->c.last_mv[0][i][0] = s->c.mv[0][i][0];
+                        s->c.last_mv[0][i][1] = s->c.mv[0][i][1] * 2;
                     }
                 }
-                if (s->mv_dir & MV_DIR_BACKWARD) {
+                if (s->c.mv_dir & MV_DIR_BACKWARD) {
                     for (i = 0; i < 2; i++) {
-                        put_bits(&s->pb, 1, s->field_select[1][i]);
+                        put_bits(&s->pb, 1, s->c.field_select[1][i]);
                         mpeg1_encode_motion(s,
-                                            s->mv[1][i][0] - s->last_mv[1][i][0],
-                                            s->b_code);
+                                            s->c.mv[1][i][0] - s->c.last_mv[1][i][0],
+                                            s->c.b_code);
                         mpeg1_encode_motion(s,
-                                            s->mv[1][i][1] - (s->last_mv[1][i][1] >> 1),
-                                            s->b_code);
-                        s->last_mv[1][i][0] = s->mv[1][i][0];
-                        s->last_mv[1][i][1] = s->mv[1][i][1] * 2;
+                                            s->c.mv[1][i][1] - (s->c.last_mv[1][i][1] >> 1),
+                                            s->c.b_code);
+                        s->c.last_mv[1][i][0] = s->c.mv[1][i][0];
+                        s->c.last_mv[1][i][1] = s->c.mv[1][i][1] * 2;
                     }
                 }
             }
@@ -921,20 +921,20 @@ static av_always_inline void mpeg1_encode_mb_internal(MpegEncContext *s,
         for (i = 0; i < mb_block_count; i++)
             if (cbp & (1 << (mb_block_count - 1 - i)))
                 mpeg1_encode_block(s, block[i], i);
-        s->mb_skip_run = 0;
-        if (s->mb_intra)
+        s->c.mb_skip_run = 0;
+        if (s->c.mb_intra)
             s->i_tex_bits += get_bits_diff(s);
         else
             s->p_tex_bits += get_bits_diff(s);
     }
 }
 
-static void mpeg12_encode_mb(MpegEncContext *s, int16_t block[][64],
+static void mpeg12_encode_mb(MPVEncContext *const s, int16_t block[][64],
                              int motion_x, int motion_y)
 {
-    if (!s->mb_intra)
-        s->last_dc[0] = s->last_dc[1] = s->last_dc[2] = 128 << s->intra_dc_precision;
-    if (s->chroma_format == CHROMA_420)
+    if (!s->c.mb_intra)
+        s->c.last_dc[0] = s->c.last_dc[1] = s->c.last_dc[2] = 128 << s->c.intra_dc_precision;
+    if (s->c.chroma_format == CHROMA_420)
         mpeg1_encode_mb_internal(s, block, motion_x, motion_y, 6, 1);
     else
         mpeg1_encode_mb_internal(s, block, motion_x, motion_y, 8, 0);
@@ -1048,7 +1048,7 @@ static av_cold int encode_init(AVCodecContext *avctx)
     static AVOnce init_static_once = AV_ONCE_INIT;
     MPEG12EncContext *const mpeg12 = avctx->priv_data;
     MPVMainEncContext *const m = &mpeg12->mpeg;
-    MpegEncContext    *const s = &m->s;
+    MPVEncContext    *const s = &m->s;
     int ret;
     int max_size = avctx->codec_id == AV_CODEC_ID_MPEG2VIDEO ? 16383 : 4095;
 
@@ -1071,7 +1071,7 @@ static av_cold int encode_init(AVCodecContext *avctx)
         }
     }
 
-    if (s->q_scale_type == 1) {
+    if (s->c.q_scale_type == 1) {
         if (avctx->qmax > 28) {
             av_log(avctx, AV_LOG_ERROR,
                    "non linear quant only supports qmax <= 28 currently\n");
@@ -1113,7 +1113,7 @@ static av_cold int encode_init(AVCodecContext *avctx)
     m->encode_picture_header = mpeg1_encode_picture_header;
     s->encode_mb             = mpeg12_encode_mb;
 
-    s->me.mv_penalty = mv_penalty;
+    s->c.me.mv_penalty = mv_penalty;
     m->fcode_tab     = fcode_tab + MAX_MV;
     if (avctx->codec_id == AV_CODEC_ID_MPEG1VIDEO) {
         s->min_qcoeff = -255;
@@ -1121,9 +1121,9 @@ static av_cold int encode_init(AVCodecContext *avctx)
     } else {
         s->min_qcoeff = -2047;
         s->max_qcoeff = 2047;
-        s->mpeg_quant = 1;
+        s->c.mpeg_quant = 1;
     }
-    if (s->intra_vlc_format) {
+    if (s->c.intra_vlc_format) {
         s->intra_ac_vlc_length      =
         s->intra_ac_vlc_last_length = uni_mpeg2_ac_vlc_len;
     } else {
@@ -1138,7 +1138,7 @@ static av_cold int encode_init(AVCodecContext *avctx)
         return ret;
 
     if (avctx->codec_id == AV_CODEC_ID_MPEG1VIDEO &&
-        s->thread_context[s->slice_context_count - 1]->start_mb_y >
+        s->c.thread_context[s->c.slice_context_count - 1]->start_mb_y >
             SLICE_MAX_START_CODE - SLICE_MIN_START_CODE) {
         // MPEG-1 slices must not start at a MB row number that would make
         // their start code > SLICE_MAX_START_CODE. So make the last slice
@@ -1148,15 +1148,15 @@ static av_cold int encode_init(AVCodecContext *avctx)
                       "the case in which there is no work to do for some "
                       "slice contexts.");
         const int mb_height = SLICE_MAX_START_CODE - SLICE_MIN_START_CODE;
-        const int nb_slices = s->slice_context_count - 1;
+        const int nb_slices = s->c.slice_context_count - 1;
 
-        s->thread_context[nb_slices]->start_mb_y = mb_height;
+        s->c.thread_context[nb_slices]->start_mb_y = mb_height;
 
         av_assert1(nb_slices >= 1);
         for (int i = 0; i < nb_slices; i++) {
-            s->thread_context[i]->start_mb_y =
+            s->c.thread_context[i]->start_mb_y =
                 (mb_height * (i    ) + nb_slices / 2) / nb_slices;
-            s->thread_context[i]->end_mb_y   =
+            s->c.thread_context[i]->end_mb_y   =
                 (mb_height * (i + 1) + nb_slices / 2) / nb_slices;
         }
     }
@@ -1229,9 +1229,9 @@ static const AVOption mpeg1_options[] = {
 static const AVOption mpeg2_options[] = {
     COMMON_OPTS
     { "intra_vlc",        "Use MPEG-2 intra VLC table.",
-      FF_MPV_OFFSET(intra_vlc_format),    AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, VE },
-    { "non_linear_quant", "Use nonlinear quantizer.",    FF_MPV_OFFSET(q_scale_type),   AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, VE },
-    { "alternate_scan",   "Enable alternate scantable.", FF_MPV_OFFSET(alternate_scan), AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, VE },
+      FF_MPV_OFFSET(c.intra_vlc_format),    AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, VE },
+    { "non_linear_quant", "Use nonlinear quantizer.",    FF_MPV_OFFSET(c.q_scale_type),   AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, VE },
+    { "alternate_scan",   "Enable alternate scantable.", FF_MPV_OFFSET(c.alternate_scan), AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, VE },
     { "a53cc", "Use A53 Closed Captions (if available)", OFFSET(a53_cc),         AV_OPT_TYPE_BOOL, { .i64 = 1 }, 0, 1, VE },
     { "seq_disp_ext",     "Write sequence_display_extension blocks.", OFFSET(seq_disp_ext), AV_OPT_TYPE_INT, { .i64 = -1 }, -1, 1, VE, .unit = "seq_disp_ext" },
     {     "auto",   NULL, 0, AV_OPT_TYPE_CONST,  {.i64 = -1},  0, 0, VE, .unit = "seq_disp_ext" },
diff --git a/libavcodec/mpeg12enc.h b/libavcodec/mpeg12enc.h
index 8ffa471a63..a8aeadbb3e 100644
--- a/libavcodec/mpeg12enc.h
+++ b/libavcodec/mpeg12enc.h
@@ -24,16 +24,16 @@
 
 #include <stdint.h>
 
-#include "mpegvideo.h"
+#include "mpegvideoenc.h"
 #include "mpegvideodata.h"
 
-void ff_mpeg1_encode_slice_header(MpegEncContext *s);
+void ff_mpeg1_encode_slice_header(MPVEncContext *s);
 
 // Must not be called before intra_dc_precision has been sanitized in ff_mpv_encode_init()
-static inline void ff_mpeg1_encode_init(MpegEncContext *s)
+static inline void ff_mpeg1_encode_init(MPVEncContext *s)
 {
-    s->y_dc_scale_table =
-    s->c_dc_scale_table = ff_mpeg12_dc_scale_table[s->intra_dc_precision];
+    s->c.y_dc_scale_table =
+    s->c.c_dc_scale_table = ff_mpeg12_dc_scale_table[s->c.intra_dc_precision];
 }
 
 #endif /* AVCODEC_MPEG12ENC_H */
diff --git a/libavcodec/mpeg4videoenc.c b/libavcodec/mpeg4videoenc.c
index ddb6958229..9f933b517e 100644
--- a/libavcodec/mpeg4videoenc.c
+++ b/libavcodec/mpeg4videoenc.c
@@ -86,7 +86,7 @@ static inline Mpeg4EncContext *mainctx_to_mpeg4(MPVMainEncContext *m)
  * Return the number of bits that encoding the 8x8 block in block would need.
  * @param[in]  block_last_index last index in scantable order that refers to a non zero element in block.
  */
-static inline int get_block_rate(MpegEncContext *s, int16_t block[64],
+static inline int get_block_rate(MPVEncContext *const s, int16_t block[64],
                                  int block_last_index, const uint8_t scantable[64])
 {
     int last = 0;
@@ -115,113 +115,113 @@ static inline int get_block_rate(MpegEncContext *s, int16_t block[64],
 
 /**
  * Restore the ac coefficients in block that have been changed by decide_ac_pred().
- * This function also restores s->block_last_index.
+ * This function also restores s->c.block_last_index.
  * @param[in,out] block MB coefficients, these will be restored
  * @param[in] dir ac prediction direction for each 8x8 block
  * @param[out] st scantable for each 8x8 block
  * @param[in] zigzag_last_index index referring to the last non zero coefficient in zigzag order
  */
-static inline void restore_ac_coeffs(MpegEncContext *s, int16_t block[6][64],
+static inline void restore_ac_coeffs(MPVEncContext *const s, int16_t block[6][64],
                                      const int dir[6], const uint8_t *st[6],
                                      const int zigzag_last_index[6])
 {
     int i, n;
-    memcpy(s->block_last_index, zigzag_last_index, sizeof(int) * 6);
+    memcpy(s->c.block_last_index, zigzag_last_index, sizeof(int) * 6);
 
     for (n = 0; n < 6; n++) {
-        int16_t *ac_val = &s->ac_val[0][0][0] + s->block_index[n] * 16;
+        int16_t *ac_val = &s->c.ac_val[0][0][0] + s->c.block_index[n] * 16;
 
-        st[n] = s->intra_scantable.permutated;
+        st[n] = s->c.intra_scantable.permutated;
         if (dir[n]) {
             /* top prediction */
             for (i = 1; i < 8; i++)
-                block[n][s->idsp.idct_permutation[i]] = ac_val[i + 8];
+                block[n][s->c.idsp.idct_permutation[i]] = ac_val[i + 8];
         } else {
             /* left prediction */
             for (i = 1; i < 8; i++)
-                block[n][s->idsp.idct_permutation[i << 3]] = ac_val[i];
+                block[n][s->c.idsp.idct_permutation[i << 3]] = ac_val[i];
         }
     }
 }
 
 /**
  * Return the optimal value (0 or 1) for the ac_pred element for the given MB in MPEG-4.
- * This function will also update s->block_last_index and s->ac_val.
+ * This function will also update s->c.block_last_index and s->c.ac_val.
  * @param[in,out] block MB coefficients, these will be updated if 1 is returned
  * @param[in] dir ac prediction direction for each 8x8 block
  * @param[out] st scantable for each 8x8 block
  * @param[out] zigzag_last_index index referring to the last non zero coefficient in zigzag order
  */
-static inline int decide_ac_pred(MpegEncContext *s, int16_t block[6][64],
+static inline int decide_ac_pred(MPVEncContext *const s, int16_t block[6][64],
                                  const int dir[6], const uint8_t *st[6],
                                  int zigzag_last_index[6])
 {
     int score = 0;
     int i, n;
-    const int8_t *const qscale_table = s->cur_pic.qscale_table;
+    const int8_t *const qscale_table = s->c.cur_pic.qscale_table;
 
-    memcpy(zigzag_last_index, s->block_last_index, sizeof(int) * 6);
+    memcpy(zigzag_last_index, s->c.block_last_index, sizeof(int) * 6);
 
     for (n = 0; n < 6; n++) {
         int16_t *ac_val, *ac_val1;
 
-        score -= get_block_rate(s, block[n], s->block_last_index[n],
-                                s->intra_scantable.permutated);
+        score -= get_block_rate(s, block[n], s->c.block_last_index[n],
+                                s->c.intra_scantable.permutated);
 
-        ac_val  = &s->ac_val[0][0][0] + s->block_index[n] * 16;
+        ac_val  = &s->c.ac_val[0][0][0] + s->c.block_index[n] * 16;
         ac_val1 = ac_val;
         if (dir[n]) {
-            const int xy = s->mb_x + s->mb_y * s->mb_stride - s->mb_stride;
+            const int xy = s->c.mb_x + s->c.mb_y * s->c.mb_stride - s->c.mb_stride;
             /* top prediction */
-            ac_val -= s->block_wrap[n] * 16;
-            if (s->mb_y == 0 || s->qscale == qscale_table[xy] || n == 2 || n == 3) {
+            ac_val -= s->c.block_wrap[n] * 16;
+            if (s->c.mb_y == 0 || s->c.qscale == qscale_table[xy] || n == 2 || n == 3) {
                 /* same qscale */
                 for (i = 1; i < 8; i++) {
-                    const int level = block[n][s->idsp.idct_permutation[i]];
-                    block[n][s->idsp.idct_permutation[i]] = level - ac_val[i + 8];
-                    ac_val1[i]     = block[n][s->idsp.idct_permutation[i << 3]];
+                    const int level = block[n][s->c.idsp.idct_permutation[i]];
+                    block[n][s->c.idsp.idct_permutation[i]] = level - ac_val[i + 8];
+                    ac_val1[i]     = block[n][s->c.idsp.idct_permutation[i << 3]];
                     ac_val1[i + 8] = level;
                 }
             } else {
                 /* different qscale, we must rescale */
                 for (i = 1; i < 8; i++) {
-                    const int level = block[n][s->idsp.idct_permutation[i]];
-                    block[n][s->idsp.idct_permutation[i]] = level - ROUNDED_DIV(ac_val[i + 8] * qscale_table[xy], s->qscale);
-                    ac_val1[i]     = block[n][s->idsp.idct_permutation[i << 3]];
+                    const int level = block[n][s->c.idsp.idct_permutation[i]];
+                    block[n][s->c.idsp.idct_permutation[i]] = level - ROUNDED_DIV(ac_val[i + 8] * qscale_table[xy], s->c.qscale);
+                    ac_val1[i]     = block[n][s->c.idsp.idct_permutation[i << 3]];
                     ac_val1[i + 8] = level;
                 }
             }
-            st[n] = s->permutated_intra_h_scantable;
+            st[n] = s->c.permutated_intra_h_scantable;
         } else {
-            const int xy = s->mb_x - 1 + s->mb_y * s->mb_stride;
+            const int xy = s->c.mb_x - 1 + s->c.mb_y * s->c.mb_stride;
             /* left prediction */
             ac_val -= 16;
-            if (s->mb_x == 0 || s->qscale == qscale_table[xy] || n == 1 || n == 3) {
+            if (s->c.mb_x == 0 || s->c.qscale == qscale_table[xy] || n == 1 || n == 3) {
                 /* same qscale */
                 for (i = 1; i < 8; i++) {
-                    const int level = block[n][s->idsp.idct_permutation[i << 3]];
-                    block[n][s->idsp.idct_permutation[i << 3]] = level - ac_val[i];
+                    const int level = block[n][s->c.idsp.idct_permutation[i << 3]];
+                    block[n][s->c.idsp.idct_permutation[i << 3]] = level - ac_val[i];
                     ac_val1[i]     = level;
-                    ac_val1[i + 8] = block[n][s->idsp.idct_permutation[i]];
+                    ac_val1[i + 8] = block[n][s->c.idsp.idct_permutation[i]];
                 }
             } else {
                 /* different qscale, we must rescale */
                 for (i = 1; i < 8; i++) {
-                    const int level = block[n][s->idsp.idct_permutation[i << 3]];
-                    block[n][s->idsp.idct_permutation[i << 3]] = level - ROUNDED_DIV(ac_val[i] * qscale_table[xy], s->qscale);
+                    const int level = block[n][s->c.idsp.idct_permutation[i << 3]];
+                    block[n][s->c.idsp.idct_permutation[i << 3]] = level - ROUNDED_DIV(ac_val[i] * qscale_table[xy], s->c.qscale);
                     ac_val1[i]     = level;
-                    ac_val1[i + 8] = block[n][s->idsp.idct_permutation[i]];
+                    ac_val1[i + 8] = block[n][s->c.idsp.idct_permutation[i]];
                 }
             }
-            st[n] = s->permutated_intra_v_scantable;
+            st[n] = s->c.permutated_intra_v_scantable;
         }
 
         for (i = 63; i > 0; i--)  // FIXME optimize
             if (block[n][st[n][i]])
                 break;
-        s->block_last_index[n] = i;
+        s->c.block_last_index[n] = i;
 
-        score += get_block_rate(s, block[n], s->block_last_index[n], st[n]);
+        score += get_block_rate(s, block[n], s->c.block_last_index[n], st[n]);
     }
 
     if (score < 0) {
@@ -235,39 +235,38 @@ static inline int decide_ac_pred(MpegEncContext *s, int16_t block[6][64],
 /**
  * modify mb_type & qscale so that encoding is actually possible in MPEG-4
  */
-void ff_clean_mpeg4_qscales(MpegEncContext *s)
+void ff_clean_mpeg4_qscales(MPVEncContext *const s)
 {
-    int i;
-    int8_t *const qscale_table = s->cur_pic.qscale_table;
+    int8_t *const qscale_table = s->c.cur_pic.qscale_table;
 
     ff_clean_h263_qscales(s);
 
-    if (s->pict_type == AV_PICTURE_TYPE_B) {
+    if (s->c.pict_type == AV_PICTURE_TYPE_B) {
         int odd = 0;
         /* ok, come on, this isn't funny anymore, there's more code for
          * handling this MPEG-4 mess than for the actual adaptive quantization */
 
-        for (i = 0; i < s->mb_num; i++) {
-            int mb_xy = s->mb_index2xy[i];
+        for (int i = 0; i < s->c.mb_num; i++) {
+            int mb_xy = s->c.mb_index2xy[i];
             odd += qscale_table[mb_xy] & 1;
         }
 
-        if (2 * odd > s->mb_num)
+        if (2 * odd > s->c.mb_num)
             odd = 1;
         else
             odd = 0;
 
-        for (i = 0; i < s->mb_num; i++) {
-            int mb_xy = s->mb_index2xy[i];
+        for (int i = 0; i < s->c.mb_num; i++) {
+            int mb_xy = s->c.mb_index2xy[i];
             if ((qscale_table[mb_xy] & 1) != odd)
                 qscale_table[mb_xy]++;
             if (qscale_table[mb_xy] > 31)
                 qscale_table[mb_xy] = 31;
         }
 
-        for (i = 1; i < s->mb_num; i++) {
-            int mb_xy = s->mb_index2xy[i];
-            if (qscale_table[mb_xy] != qscale_table[s->mb_index2xy[i - 1]] &&
+        for (int i = 1; i < s->c.mb_num; i++) {
+            int mb_xy = s->c.mb_index2xy[i];
+            if (qscale_table[mb_xy] != qscale_table[s->c.mb_index2xy[i - 1]] &&
                 (s->mb_type[mb_xy] & CANDIDATE_MB_TYPE_DIRECT)) {
                 s->mb_type[mb_xy] |= CANDIDATE_MB_TYPE_BIDIR;
             }
@@ -304,7 +303,7 @@ static inline int mpeg4_get_dc_length(int level, int n)
  * Encode an 8x8 block.
  * @param n block index (0-3 are luma, 4-5 are chroma)
  */
-static inline void mpeg4_encode_block(const MpegEncContext *s,
+static inline void mpeg4_encode_block(const MPVEncContext *const s,
                                       const int16_t *block, int n, int intra_dc,
                                       const uint8_t *scan_table, PutBitContext *dc_pb,
                                       PutBitContext *ac_pb)
@@ -312,9 +311,9 @@ static inline void mpeg4_encode_block(const MpegEncContext *s,
     int i, last_non_zero;
     const uint32_t *bits_tab;
     const uint8_t *len_tab;
-    const int last_index = s->block_last_index[n];
+    const int last_index = s->c.block_last_index[n];
 
-    if (s->mb_intra) {  // Note gcc (3.2.1 at least) will optimize this away
+    if (s->c.mb_intra) {  // Note gcc (3.2.1 at least) will optimize this away
         /* MPEG-4 based DC predictor */
         mpeg4_encode_dc(dc_pb, intra_dc, n);
         if (last_index < 1)
@@ -365,16 +364,16 @@ static inline void mpeg4_encode_block(const MpegEncContext *s,
     }
 }
 
-static int mpeg4_get_block_length(MpegEncContext *s,
+static int mpeg4_get_block_length(MPVEncContext *const s,
                                   const int16_t *block, int n,
                                   int intra_dc, const uint8_t *scan_table)
 {
     int i, last_non_zero;
     const uint8_t *len_tab;
-    const int last_index = s->block_last_index[n];
+    const int last_index = s->c.block_last_index[n];
     int len = 0;
 
-    if (s->mb_intra) {  // Note gcc (3.2.1 at least) will optimize this away
+    if (s->c.mb_intra) {  // Note gcc (3.2.1 at least) will optimize this away
         /* MPEG-4 based DC predictor */
         len += mpeg4_get_dc_length(intra_dc, n);
         if (last_index < 1)
@@ -419,7 +418,7 @@ static int mpeg4_get_block_length(MpegEncContext *s,
     return len;
 }
 
-static inline void mpeg4_encode_blocks(MpegEncContext *s,
+static inline void mpeg4_encode_blocks(MPVEncContext *const s,
                                        const int16_t block[6][64],
                                        const int intra_dc[6],
                                        const uint8_t * const *scan_table,
@@ -429,7 +428,7 @@ static inline void mpeg4_encode_blocks(MpegEncContext *s,
     int i;
 
     if (scan_table) {
-        if (s->avctx->flags2 & AV_CODEC_FLAG2_NO_OUTPUT) {
+        if (s->c.avctx->flags2 & AV_CODEC_FLAG2_NO_OUTPUT) {
             for (i = 0; i < 6; i++)
                 skip_put_bits(&s->pb,
                               mpeg4_get_block_length(s, block[i], i,
@@ -441,28 +440,28 @@ static inline void mpeg4_encode_blocks(MpegEncContext *s,
                                    intra_dc[i], scan_table[i], dc_pb, ac_pb);
         }
     } else {
-        if (s->avctx->flags2 & AV_CODEC_FLAG2_NO_OUTPUT) {
+        if (s->c.avctx->flags2 & AV_CODEC_FLAG2_NO_OUTPUT) {
             for (i = 0; i < 6; i++)
                 skip_put_bits(&s->pb,
                               mpeg4_get_block_length(s, block[i], i, 0,
-                                                     s->intra_scantable.permutated));
+                                                     s->c.intra_scantable.permutated));
         } else {
             /* encode each block */
             for (i = 0; i < 6; i++)
                 mpeg4_encode_block(s, block[i], i, 0,
-                                   s->intra_scantable.permutated, dc_pb, ac_pb);
+                                   s->c.intra_scantable.permutated, dc_pb, ac_pb);
         }
     }
 }
 
-static inline int get_b_cbp(MpegEncContext *s, int16_t block[6][64],
+static inline int get_b_cbp(MPVEncContext *const s, int16_t block[6][64],
                             int motion_x, int motion_y, int mb_type)
 {
     int cbp = 0, i;
 
     if (s->mpv_flags & FF_MPV_FLAG_CBP_RD) {
         int score        = 0;
-        const int lambda = s->lambda2 >> (FF_LAMBDA_SHIFT - 6);
+        const int lambda = s->c.lambda2 >> (FF_LAMBDA_SHIFT - 6);
 
         for (i = 0; i < 6; i++) {
             if (s->coded_score[i] < 0) {
@@ -482,14 +481,14 @@ static inline int get_b_cbp(MpegEncContext *s, int16_t block[6][64],
         }
 
         for (i = 0; i < 6; i++) {
-            if (s->block_last_index[i] >= 0 && ((cbp >> (5 - i)) & 1) == 0) {
-                s->block_last_index[i] = -1;
-                s->bdsp.clear_block(s->block[i]);
+            if (s->c.block_last_index[i] >= 0 && ((cbp >> (5 - i)) & 1) == 0) {
+                s->c.block_last_index[i] = -1;
+                s->c.bdsp.clear_block(s->c.block[i]);
             }
         }
     } else {
         for (i = 0; i < 6; i++) {
-            if (s->block_last_index[i] >= 0)
+            if (s->c.block_last_index[i] >= 0)
                 cbp |= 1 << (5 - i);
         }
     }
@@ -499,29 +498,29 @@ static inline int get_b_cbp(MpegEncContext *s, int16_t block[6][64],
 // FIXME this is duplicated to h263.c
 static const int dquant_code[5] = { 1, 0, 9, 2, 3 };
 
-static void mpeg4_encode_mb(MpegEncContext *const s, int16_t block[][64],
+static void mpeg4_encode_mb(MPVEncContext *const s, int16_t block[][64],
                             int motion_x, int motion_y)
 {
     int cbpc, cbpy, pred_x, pred_y;
-    PutBitContext *const pb2    = s->data_partitioning ? &s->pb2 : &s->pb;
-    PutBitContext *const tex_pb = s->data_partitioning && s->pict_type != AV_PICTURE_TYPE_B ? &s->tex_pb : &s->pb;
-    PutBitContext *const dc_pb  = s->data_partitioning && s->pict_type != AV_PICTURE_TYPE_I ? &s->pb2 : &s->pb;
-    const int interleaved_stats = (s->avctx->flags & AV_CODEC_FLAG_PASS1) && !s->data_partitioning ? 1 : 0;
+    PutBitContext *const pb2    = s->c.data_partitioning ? &s->pb2 : &s->pb;
+    PutBitContext *const tex_pb = s->c.data_partitioning && s->c.pict_type != AV_PICTURE_TYPE_B ? &s->tex_pb : &s->pb;
+    PutBitContext *const dc_pb  = s->c.data_partitioning && s->c.pict_type != AV_PICTURE_TYPE_I ? &s->pb2 : &s->pb;
+    const int interleaved_stats = (s->c.avctx->flags & AV_CODEC_FLAG_PASS1) && !s->c.data_partitioning ? 1 : 0;
 
-    if (!s->mb_intra) {
+    if (!s->c.mb_intra) {
         int i, cbp;
 
-        if (s->pict_type == AV_PICTURE_TYPE_B) {
+        if (s->c.pict_type == AV_PICTURE_TYPE_B) {
             /* convert from mv_dir to type */
             static const int mb_type_table[8] = { -1, 3, 2, 1, -1, -1, -1, 0 };
-            int mb_type = mb_type_table[s->mv_dir];
+            int mb_type = mb_type_table[s->c.mv_dir];
 
-            if (s->mb_x == 0) {
+            if (s->c.mb_x == 0) {
                 for (i = 0; i < 2; i++)
-                    s->last_mv[i][0][0] =
-                    s->last_mv[i][0][1] =
-                    s->last_mv[i][1][0] =
-                    s->last_mv[i][1][1] = 0;
+                    s->c.last_mv[i][0][0] =
+                    s->c.last_mv[i][0][1] =
+                    s->c.last_mv[i][1][0] =
+                    s->c.last_mv[i][1][1] = 0;
             }
 
             av_assert2(s->dquant >= -2 && s->dquant <= 2);
@@ -529,14 +528,14 @@ static void mpeg4_encode_mb(MpegEncContext *const s, int16_t block[][64],
             av_assert2(mb_type >= 0);
 
             /* nothing to do if this MB was skipped in the next P-frame */
-            if (s->next_pic.mbskip_table[s->mb_y * s->mb_stride + s->mb_x]) {  // FIXME avoid DCT & ...
-                s->mv[0][0][0] =
-                s->mv[0][0][1] =
-                s->mv[1][0][0] =
-                s->mv[1][0][1] = 0;
-                s->mv_dir  = MV_DIR_FORWARD;  // doesn't matter
-                s->qscale -= s->dquant;
-//                s->mb_skipped = 1;
+            if (s->c.next_pic.mbskip_table[s->c.mb_y * s->c.mb_stride + s->c.mb_x]) {  // FIXME avoid DCT & ...
+                s->c.mv[0][0][0] =
+                s->c.mv[0][0][1] =
+                s->c.mv[1][0][0] =
+                s->c.mv[1][0][1] = 0;
+                s->c.mv_dir  = MV_DIR_FORWARD;  // doesn't matter
+                s->c.qscale -= s->dquant;
+//                s->c.mb_skipped = 1;
 
                 return;
             }
@@ -568,71 +567,71 @@ static void mpeg4_encode_mb(MpegEncContext *const s, int16_t block[][64],
                 else
                     put_bits(&s->pb, 1, 0);
             } else
-                s->qscale -= s->dquant;
+                s->c.qscale -= s->dquant;
 
-            if (!s->progressive_sequence) {
+            if (!s->c.progressive_sequence) {
                 if (cbp)
-                    put_bits(&s->pb, 1, s->interlaced_dct);
+                    put_bits(&s->pb, 1, s->c.interlaced_dct);
                 if (mb_type)                  // not direct mode
-                    put_bits(&s->pb, 1, s->mv_type == MV_TYPE_FIELD);
+                    put_bits(&s->pb, 1, s->c.mv_type == MV_TYPE_FIELD);
             }
 
             if (interleaved_stats)
                 s->misc_bits += get_bits_diff(s);
 
             if (!mb_type) {
-                av_assert2(s->mv_dir & MV_DIRECT);
+                av_assert2(s->c.mv_dir & MV_DIRECT);
                 ff_h263_encode_motion_vector(s, motion_x, motion_y, 1);
             } else {
                 av_assert2(mb_type > 0 && mb_type < 4);
-                if (s->mv_type != MV_TYPE_FIELD) {
-                    if (s->mv_dir & MV_DIR_FORWARD) {
+                if (s->c.mv_type != MV_TYPE_FIELD) {
+                    if (s->c.mv_dir & MV_DIR_FORWARD) {
                         ff_h263_encode_motion_vector(s,
-                                                     s->mv[0][0][0] - s->last_mv[0][0][0],
-                                                     s->mv[0][0][1] - s->last_mv[0][0][1],
-                                                     s->f_code);
-                        s->last_mv[0][0][0] =
-                        s->last_mv[0][1][0] = s->mv[0][0][0];
-                        s->last_mv[0][0][1] =
-                        s->last_mv[0][1][1] = s->mv[0][0][1];
+                                                     s->c.mv[0][0][0] - s->c.last_mv[0][0][0],
+                                                     s->c.mv[0][0][1] - s->c.last_mv[0][0][1],
+                                                     s->c.f_code);
+                        s->c.last_mv[0][0][0] =
+                        s->c.last_mv[0][1][0] = s->c.mv[0][0][0];
+                        s->c.last_mv[0][0][1] =
+                        s->c.last_mv[0][1][1] = s->c.mv[0][0][1];
                     }
-                    if (s->mv_dir & MV_DIR_BACKWARD) {
+                    if (s->c.mv_dir & MV_DIR_BACKWARD) {
                         ff_h263_encode_motion_vector(s,
-                                                     s->mv[1][0][0] - s->last_mv[1][0][0],
-                                                     s->mv[1][0][1] - s->last_mv[1][0][1],
-                                                     s->b_code);
-                        s->last_mv[1][0][0] =
-                        s->last_mv[1][1][0] = s->mv[1][0][0];
-                        s->last_mv[1][0][1] =
-                        s->last_mv[1][1][1] = s->mv[1][0][1];
+                                                     s->c.mv[1][0][0] - s->c.last_mv[1][0][0],
+                                                     s->c.mv[1][0][1] - s->c.last_mv[1][0][1],
+                                                     s->c.b_code);
+                        s->c.last_mv[1][0][0] =
+                        s->c.last_mv[1][1][0] = s->c.mv[1][0][0];
+                        s->c.last_mv[1][0][1] =
+                        s->c.last_mv[1][1][1] = s->c.mv[1][0][1];
                     }
                 } else {
-                    if (s->mv_dir & MV_DIR_FORWARD) {
-                        put_bits(&s->pb, 1, s->field_select[0][0]);
-                        put_bits(&s->pb, 1, s->field_select[0][1]);
+                    if (s->c.mv_dir & MV_DIR_FORWARD) {
+                        put_bits(&s->pb, 1, s->c.field_select[0][0]);
+                        put_bits(&s->pb, 1, s->c.field_select[0][1]);
                     }
-                    if (s->mv_dir & MV_DIR_BACKWARD) {
-                        put_bits(&s->pb, 1, s->field_select[1][0]);
-                        put_bits(&s->pb, 1, s->field_select[1][1]);
+                    if (s->c.mv_dir & MV_DIR_BACKWARD) {
+                        put_bits(&s->pb, 1, s->c.field_select[1][0]);
+                        put_bits(&s->pb, 1, s->c.field_select[1][1]);
                     }
-                    if (s->mv_dir & MV_DIR_FORWARD) {
+                    if (s->c.mv_dir & MV_DIR_FORWARD) {
                         for (i = 0; i < 2; i++) {
                             ff_h263_encode_motion_vector(s,
-                                                         s->mv[0][i][0] - s->last_mv[0][i][0],
-                                                         s->mv[0][i][1] - s->last_mv[0][i][1] / 2,
-                                                         s->f_code);
-                            s->last_mv[0][i][0] = s->mv[0][i][0];
-                            s->last_mv[0][i][1] = s->mv[0][i][1] * 2;
+                                                         s->c.mv[0][i][0] - s->c.last_mv[0][i][0],
+                                                         s->c.mv[0][i][1] - s->c.last_mv[0][i][1] / 2,
+                                                         s->c.f_code);
+                            s->c.last_mv[0][i][0] = s->c.mv[0][i][0];
+                            s->c.last_mv[0][i][1] = s->c.mv[0][i][1] * 2;
                         }
                     }
-                    if (s->mv_dir & MV_DIR_BACKWARD) {
+                    if (s->c.mv_dir & MV_DIR_BACKWARD) {
                         for (i = 0; i < 2; i++) {
                             ff_h263_encode_motion_vector(s,
-                                                         s->mv[1][i][0] - s->last_mv[1][i][0],
-                                                         s->mv[1][i][1] - s->last_mv[1][i][1] / 2,
-                                                         s->b_code);
-                            s->last_mv[1][i][0] = s->mv[1][i][0];
-                            s->last_mv[1][i][1] = s->mv[1][i][1] * 2;
+                                                         s->c.mv[1][i][0] - s->c.last_mv[1][i][0],
+                                                         s->c.mv[1][i][1] - s->c.last_mv[1][i][1] / 2,
+                                                         s->c.b_code);
+                            s->c.last_mv[1][i][0] = s->c.mv[1][i][0];
+                            s->c.last_mv[1][i][1] = s->c.mv[1][i][1] * 2;
                         }
                     }
                 }
@@ -645,11 +644,11 @@ static void mpeg4_encode_mb(MpegEncContext *const s, int16_t block[][64],
 
             if (interleaved_stats)
                 s->p_tex_bits += get_bits_diff(s);
-        } else { /* s->pict_type==AV_PICTURE_TYPE_B */
+        } else { /* s->c.pict_type==AV_PICTURE_TYPE_B */
             cbp = get_p_cbp(s, block, motion_x, motion_y);
 
             if ((cbp | motion_x | motion_y | s->dquant) == 0 &&
-                s->mv_type == MV_TYPE_16X16) {
+                s->c.mv_type == MV_TYPE_16X16) {
                 const MPVMainEncContext *const m = slice_to_mainenc(s);
                 /* Check if the B-frames can skip it too, as we must skip it
                  * if we skip here why didn't they just compress
@@ -658,13 +657,13 @@ static void mpeg4_encode_mb(MpegEncContext *const s, int16_t block[][64],
                     int x, y, offset;
                     const uint8_t *p_pic;
 
-                    x = s->mb_x * 16;
-                    y = s->mb_y * 16;
+                    x = s->c.mb_x * 16;
+                    y = s->c.mb_y * 16;
 
-                    offset = x + y * s->linesize;
+                    offset = x + y * s->c.linesize;
                     p_pic  = s->new_pic->data[0] + offset;
 
-                    s->mb_skipped = 1;
+                    s->c.mb_skipped = 1;
                     for (int i = 0; i < m->max_b_frames; i++) {
                         const uint8_t *b_pic;
                         int diff;
@@ -677,29 +676,29 @@ static void mpeg4_encode_mb(MpegEncContext *const s, int16_t block[][64],
                         if (!pic->shared)
                             b_pic += INPLACE_OFFSET;
 
-                        if (x + 16 > s->width || y + 16 > s->height) {
+                        if (x + 16 > s->c.width || y + 16 > s->c.height) {
                             int x1, y1;
-                            int xe = FFMIN(16, s->width - x);
-                            int ye = FFMIN(16, s->height - y);
+                            int xe = FFMIN(16, s->c.width - x);
+                            int ye = FFMIN(16, s->c.height - y);
                             diff = 0;
                             for (y1 = 0; y1 < ye; y1++) {
                                 for (x1 = 0; x1 < xe; x1++) {
-                                    diff += FFABS(p_pic[x1 + y1 * s->linesize] - b_pic[x1 + y1 * s->linesize]);
+                                    diff += FFABS(p_pic[x1 + y1 * s->c.linesize] - b_pic[x1 + y1 * s->c.linesize]);
                                 }
                             }
                             diff = diff * 256 / (xe * ye);
                         } else {
-                            diff = s->sad_cmp[0](NULL, p_pic, b_pic, s->linesize, 16);
+                            diff = s->sad_cmp[0](NULL, p_pic, b_pic, s->c.linesize, 16);
                         }
-                        if (diff > s->qscale * 70) {  // FIXME check that 70 is optimal
-                            s->mb_skipped = 0;
+                        if (diff > s->c.qscale * 70) {  // FIXME check that 70 is optimal
+                            s->c.mb_skipped = 0;
                             break;
                         }
                     }
                 } else
-                    s->mb_skipped = 1;
+                    s->c.mb_skipped = 1;
 
-                if (s->mb_skipped == 1) {
+                if (s->c.mb_skipped == 1) {
                     /* skip macroblock */
                     put_bits(&s->pb, 1, 1);
 
@@ -716,7 +715,7 @@ static void mpeg4_encode_mb(MpegEncContext *const s, int16_t block[][64],
             cbpc  = cbp & 3;
             cbpy  = cbp >> 2;
             cbpy ^= 0xf;
-            if (s->mv_type == MV_TYPE_16X16) {
+            if (s->c.mv_type == MV_TYPE_16X16) {
                 if (s->dquant)
                     cbpc += 8;
                 put_bits(&s->pb,
@@ -727,9 +726,9 @@ static void mpeg4_encode_mb(MpegEncContext *const s, int16_t block[][64],
                 if (s->dquant)
                     put_bits(pb2, 2, dquant_code[s->dquant + 2]);
 
-                if (!s->progressive_sequence) {
+                if (!s->c.progressive_sequence) {
                     if (cbp)
-                        put_bits(pb2, 1, s->interlaced_dct);
+                        put_bits(pb2, 1, s->c.interlaced_dct);
                     put_bits(pb2, 1, 0);
                 }
 
@@ -737,13 +736,13 @@ static void mpeg4_encode_mb(MpegEncContext *const s, int16_t block[][64],
                     s->misc_bits += get_bits_diff(s);
 
                 /* motion vectors: 16x16 mode */
-                ff_h263_pred_motion(s, 0, 0, &pred_x, &pred_y);
+                ff_h263_pred_motion(&s->c, 0, 0, &pred_x, &pred_y);
 
                 ff_h263_encode_motion_vector(s,
                                              motion_x - pred_x,
                                              motion_y - pred_y,
-                                             s->f_code);
-            } else if (s->mv_type == MV_TYPE_FIELD) {
+                                             s->c.f_code);
+            } else if (s->c.mv_type == MV_TYPE_FIELD) {
                 if (s->dquant)
                     cbpc += 8;
                 put_bits(&s->pb,
@@ -754,50 +753,50 @@ static void mpeg4_encode_mb(MpegEncContext *const s, int16_t block[][64],
                 if (s->dquant)
                     put_bits(pb2, 2, dquant_code[s->dquant + 2]);
 
-                av_assert2(!s->progressive_sequence);
+                av_assert2(!s->c.progressive_sequence);
                 if (cbp)
-                    put_bits(pb2, 1, s->interlaced_dct);
+                    put_bits(pb2, 1, s->c.interlaced_dct);
                 put_bits(pb2, 1, 1);
 
                 if (interleaved_stats)
                     s->misc_bits += get_bits_diff(s);
 
                 /* motion vectors: 16x8 interlaced mode */
-                ff_h263_pred_motion(s, 0, 0, &pred_x, &pred_y);
+                ff_h263_pred_motion(&s->c, 0, 0, &pred_x, &pred_y);
                 pred_y /= 2;
 
-                put_bits(&s->pb, 1, s->field_select[0][0]);
-                put_bits(&s->pb, 1, s->field_select[0][1]);
+                put_bits(&s->pb, 1, s->c.field_select[0][0]);
+                put_bits(&s->pb, 1, s->c.field_select[0][1]);
 
                 ff_h263_encode_motion_vector(s,
-                                             s->mv[0][0][0] - pred_x,
-                                             s->mv[0][0][1] - pred_y,
-                                             s->f_code);
+                                             s->c.mv[0][0][0] - pred_x,
+                                             s->c.mv[0][0][1] - pred_y,
+                                             s->c.f_code);
                 ff_h263_encode_motion_vector(s,
-                                             s->mv[0][1][0] - pred_x,
-                                             s->mv[0][1][1] - pred_y,
-                                             s->f_code);
+                                             s->c.mv[0][1][0] - pred_x,
+                                             s->c.mv[0][1][1] - pred_y,
+                                             s->c.f_code);
             } else {
-                av_assert2(s->mv_type == MV_TYPE_8X8);
+                av_assert2(s->c.mv_type == MV_TYPE_8X8);
                 put_bits(&s->pb,
                          ff_h263_inter_MCBPC_bits[cbpc + 16],
                          ff_h263_inter_MCBPC_code[cbpc + 16]);
                 put_bits(pb2, ff_h263_cbpy_tab[cbpy][1], ff_h263_cbpy_tab[cbpy][0]);
 
-                if (!s->progressive_sequence && cbp)
-                    put_bits(pb2, 1, s->interlaced_dct);
+                if (!s->c.progressive_sequence && cbp)
+                    put_bits(pb2, 1, s->c.interlaced_dct);
 
                 if (interleaved_stats)
                     s->misc_bits += get_bits_diff(s);
 
                 for (i = 0; i < 4; i++) {
                     /* motion vectors: 8x8 mode*/
-                    ff_h263_pred_motion(s, i, 0, &pred_x, &pred_y);
+                    ff_h263_pred_motion(&s->c, i, 0, &pred_x, &pred_y);
 
                     ff_h263_encode_motion_vector(s,
-                                                 s->cur_pic.motion_val[0][s->block_index[i]][0] - pred_x,
-                                                 s->cur_pic.motion_val[0][s->block_index[i]][1] - pred_y,
-                                                 s->f_code);
+                                                 s->c.cur_pic.motion_val[0][s->c.block_index[i]][0] - pred_x,
+                                                 s->c.cur_pic.motion_val[0][s->c.block_index[i]][1] - pred_y,
+                                                 s->c.f_code);
                 }
             }
 
@@ -818,29 +817,29 @@ static void mpeg4_encode_mb(MpegEncContext *const s, int16_t block[][64],
         int i;
 
         for (int i = 0; i < 6; i++) {
-            int pred  = ff_mpeg4_pred_dc(s, i, &dir[i]);
-            int scale = i < 4 ? s->y_dc_scale : s->c_dc_scale;
+            int pred  = ff_mpeg4_pred_dc(&s->c, i, &dir[i]);
+            int scale = i < 4 ? s->c.y_dc_scale : s->c.c_dc_scale;
 
             pred = FASTDIV((pred + (scale >> 1)), scale);
             dc_diff[i] = block[i][0] - pred;
-            s->dc_val[0][s->block_index[i]] = av_clip_uintp2(block[i][0] * scale, 11);
+            s->c.dc_val[0][s->c.block_index[i]] = av_clip_uintp2(block[i][0] * scale, 11);
         }
 
-        if (s->avctx->flags & AV_CODEC_FLAG_AC_PRED) {
-            s->ac_pred = decide_ac_pred(s, block, dir, scan_table, zigzag_last_index);
+        if (s->c.avctx->flags & AV_CODEC_FLAG_AC_PRED) {
+            s->c.ac_pred = decide_ac_pred(s, block, dir, scan_table, zigzag_last_index);
         } else {
             for (i = 0; i < 6; i++)
-                scan_table[i] = s->intra_scantable.permutated;
+                scan_table[i] = s->c.intra_scantable.permutated;
         }
 
         /* compute cbp */
         cbp = 0;
         for (i = 0; i < 6; i++)
-            if (s->block_last_index[i] >= 1)
+            if (s->c.block_last_index[i] >= 1)
                 cbp |= 1 << (5 - i);
 
         cbpc = cbp & 3;
-        if (s->pict_type == AV_PICTURE_TYPE_I) {
+        if (s->c.pict_type == AV_PICTURE_TYPE_I) {
             if (s->dquant)
                 cbpc += 4;
             put_bits(&s->pb,
@@ -854,14 +853,14 @@ static void mpeg4_encode_mb(MpegEncContext *const s, int16_t block[][64],
                      ff_h263_inter_MCBPC_bits[cbpc + 4],
                      ff_h263_inter_MCBPC_code[cbpc + 4]);
         }
-        put_bits(pb2, 1, s->ac_pred);
+        put_bits(pb2, 1, s->c.ac_pred);
         cbpy = cbp >> 2;
         put_bits(pb2, ff_h263_cbpy_tab[cbpy][1], ff_h263_cbpy_tab[cbpy][0]);
         if (s->dquant)
             put_bits(dc_pb, 2, dquant_code[s->dquant + 2]);
 
-        if (!s->progressive_sequence)
-            put_bits(dc_pb, 1, s->interlaced_dct);
+        if (!s->c.progressive_sequence)
+            put_bits(dc_pb, 1, s->c.interlaced_dct);
 
         if (interleaved_stats)
             s->misc_bits += get_bits_diff(s);
@@ -874,7 +873,7 @@ static void mpeg4_encode_mb(MpegEncContext *const s, int16_t block[][64],
 
         /* restore ac coeffs & last_index stuff
          * if we messed them up with the prediction */
-        if (s->ac_pred)
+        if (s->c.ac_pred)
             restore_ac_coeffs(s, block, dir, scan_table, zigzag_last_index);
     }
 }
@@ -890,31 +889,31 @@ void ff_mpeg4_stuffing(PutBitContext *pbc)
 }
 
 /* must be called before writing the header */
-void ff_set_mpeg4_time(MpegEncContext *s)
+void ff_set_mpeg4_time(MPVEncContext *const s)
 {
-    if (s->pict_type == AV_PICTURE_TYPE_B) {
-        ff_mpeg4_init_direct_mv(s);
+    if (s->c.pict_type == AV_PICTURE_TYPE_B) {
+        ff_mpeg4_init_direct_mv(&s->c);
     } else {
-        s->last_time_base = s->time_base;
-        s->time_base      = FFUDIV(s->time, s->avctx->time_base.den);
+        s->c.last_time_base = s->c.time_base;
+        s->c.time_base      = FFUDIV(s->c.time, s->c.avctx->time_base.den);
     }
 }
 
 static void mpeg4_encode_gop_header(MPVMainEncContext *const m)
 {
-    MpegEncContext *const s = &m->s;
+    MPVEncContext *const s = &m->s;
     int64_t hours, minutes, seconds;
     int64_t time;
 
     put_bits32(&s->pb, GOP_STARTCODE);
 
-    time = s->cur_pic.ptr->f->pts;
+    time = s->c.cur_pic.ptr->f->pts;
     if (m->reordered_input_picture[1])
         time = FFMIN(time, m->reordered_input_picture[1]->f->pts);
-    time = time * s->avctx->time_base.num;
-    s->last_time_base = FFUDIV(time, s->avctx->time_base.den);
+    time = time * s->c.avctx->time_base.num;
+    s->c.last_time_base = FFUDIV(time, s->c.avctx->time_base.den);
 
-    seconds = FFUDIV(time, s->avctx->time_base.den);
+    seconds = FFUDIV(time, s->c.avctx->time_base.den);
     minutes = FFUDIV(seconds, 60); seconds = FFUMOD(seconds, 60);
     hours   = FFUDIV(minutes, 60); minutes = FFUMOD(minutes, 60);
     hours   = FFUMOD(hours  , 24);
@@ -924,7 +923,7 @@ static void mpeg4_encode_gop_header(MPVMainEncContext *const m)
     put_bits(&s->pb, 1, 1);
     put_bits(&s->pb, 6, seconds);
 
-    put_bits(&s->pb, 1, !!(s->avctx->flags & AV_CODEC_FLAG_CLOSED_GOP));
+    put_bits(&s->pb, 1, !!(s->c.avctx->flags & AV_CODEC_FLAG_CLOSED_GOP));
     put_bits(&s->pb, 1, 0);  // broken link == NO
 
     ff_mpeg4_stuffing(&s->pb);
@@ -932,20 +931,20 @@ static void mpeg4_encode_gop_header(MPVMainEncContext *const m)
 
 static void mpeg4_encode_visual_object_header(MPVMainEncContext *const m)
 {
-    MpegEncContext *const s = &m->s;
+    MPVEncContext *const s = &m->s;
     int profile_and_level_indication;
     int vo_ver_id;
 
-    if (s->avctx->profile != AV_PROFILE_UNKNOWN) {
-        profile_and_level_indication = s->avctx->profile << 4;
-    } else if (m->max_b_frames || s->quarter_sample) {
+    if (s->c.avctx->profile != AV_PROFILE_UNKNOWN) {
+        profile_and_level_indication = s->c.avctx->profile << 4;
+    } else if (m->max_b_frames || s->c.quarter_sample) {
         profile_and_level_indication = 0xF0;  // adv simple
     } else {
         profile_and_level_indication = 0x00;  // simple
     }
 
-    if (s->avctx->level != AV_LEVEL_UNKNOWN)
-        profile_and_level_indication |= s->avctx->level;
+    if (s->c.avctx->level != AV_LEVEL_UNKNOWN)
+        profile_and_level_indication |= s->c.avctx->level;
     else
         profile_and_level_indication |= 1;   // level 1
 
@@ -977,10 +976,10 @@ static void mpeg4_encode_vol_header(Mpeg4EncContext *const m4,
                                     int vo_number,
                                     int vol_number)
 {
-    MpegEncContext *const s = &m4->m.s;
+    MPVEncContext *const s = &m4->m.s;
     int vo_ver_id, vo_type, aspect_ratio_info;
 
-    if (m4->m.max_b_frames || s->quarter_sample) {
+    if (m4->m.max_b_frames || s->c.quarter_sample) {
         vo_ver_id  = 5;
         vo_type = ADV_SIMPLE_VO_TYPE;
     } else {
@@ -997,35 +996,35 @@ static void mpeg4_encode_vol_header(Mpeg4EncContext *const m4,
     put_bits(&s->pb, 4, vo_ver_id);     /* is obj layer ver id */
     put_bits(&s->pb, 3, 1);             /* is obj layer priority */
 
-    aspect_ratio_info = ff_h263_aspect_to_info(s->avctx->sample_aspect_ratio);
+    aspect_ratio_info = ff_h263_aspect_to_info(s->c.avctx->sample_aspect_ratio);
 
     put_bits(&s->pb, 4, aspect_ratio_info); /* aspect ratio info */
     if (aspect_ratio_info == FF_ASPECT_EXTENDED) {
-        av_reduce(&s->avctx->sample_aspect_ratio.num, &s->avctx->sample_aspect_ratio.den,
-                   s->avctx->sample_aspect_ratio.num,  s->avctx->sample_aspect_ratio.den, 255);
-        put_bits(&s->pb, 8, s->avctx->sample_aspect_ratio.num);
-        put_bits(&s->pb, 8, s->avctx->sample_aspect_ratio.den);
+        av_reduce(&s->c.avctx->sample_aspect_ratio.num, &s->c.avctx->sample_aspect_ratio.den,
+                   s->c.avctx->sample_aspect_ratio.num,  s->c.avctx->sample_aspect_ratio.den, 255);
+        put_bits(&s->pb, 8, s->c.avctx->sample_aspect_ratio.num);
+        put_bits(&s->pb, 8, s->c.avctx->sample_aspect_ratio.den);
     }
 
     put_bits(&s->pb, 1, 1);             /* vol control parameters= yes */
     put_bits(&s->pb, 2, 1);             /* chroma format YUV 420/YV12 */
-    put_bits(&s->pb, 1, s->low_delay);
+    put_bits(&s->pb, 1, s->c.low_delay);
     put_bits(&s->pb, 1, 0);             /* vbv parameters= no */
 
     put_bits(&s->pb, 2, RECT_SHAPE);    /* vol shape= rectangle */
     put_bits(&s->pb, 1, 1);             /* marker bit */
 
-    put_bits(&s->pb, 16, s->avctx->time_base.den);
+    put_bits(&s->pb, 16, s->c.avctx->time_base.den);
     if (m4->time_increment_bits < 1)
         m4->time_increment_bits = 1;
     put_bits(&s->pb, 1, 1);             /* marker bit */
     put_bits(&s->pb, 1, 0);             /* fixed vop rate=no */
     put_bits(&s->pb, 1, 1);             /* marker bit */
-    put_bits(&s->pb, 13, s->width);     /* vol width */
+    put_bits(&s->pb, 13, s->c.width);     /* vol width */
     put_bits(&s->pb, 1, 1);             /* marker bit */
-    put_bits(&s->pb, 13, s->height);    /* vol height */
+    put_bits(&s->pb, 13, s->c.height);    /* vol height */
     put_bits(&s->pb, 1, 1);             /* marker bit */
-    put_bits(&s->pb, 1, s->progressive_sequence ? 0 : 1);
+    put_bits(&s->pb, 1, s->c.progressive_sequence ? 0 : 1);
     put_bits(&s->pb, 1, 1);             /* obmc disable */
     if (vo_ver_id == 1)
         put_bits(&s->pb, 1, 0);       /* sprite enable */
@@ -1033,19 +1032,19 @@ static void mpeg4_encode_vol_header(Mpeg4EncContext *const m4,
         put_bits(&s->pb, 2, 0);       /* sprite enable */
 
     put_bits(&s->pb, 1, 0);             /* not 8 bit == false */
-    put_bits(&s->pb, 1, s->mpeg_quant); /* quant type = (0 = H.263 style) */
+    put_bits(&s->pb, 1, s->c.mpeg_quant); /* quant type = (0 = H.263 style) */
 
-    if (s->mpeg_quant) {
-        ff_write_quant_matrix(&s->pb, s->avctx->intra_matrix);
-        ff_write_quant_matrix(&s->pb, s->avctx->inter_matrix);
+    if (s->c.mpeg_quant) {
+        ff_write_quant_matrix(&s->pb, s->c.avctx->intra_matrix);
+        ff_write_quant_matrix(&s->pb, s->c.avctx->inter_matrix);
     }
 
     if (vo_ver_id != 1)
-        put_bits(&s->pb, 1, s->quarter_sample);
+        put_bits(&s->pb, 1, s->c.quarter_sample);
     put_bits(&s->pb, 1, 1);             /* complexity estimation disable */
     put_bits(&s->pb, 1, s->rtp_mode ? 0 : 1); /* resync marker disable */
-    put_bits(&s->pb, 1, s->data_partitioning ? 1 : 0);
-    if (s->data_partitioning)
+    put_bits(&s->pb, 1, s->c.data_partitioning ? 1 : 0);
+    if (s->c.data_partitioning)
         put_bits(&s->pb, 1, 0);         /* no rvlc */
 
     if (vo_ver_id != 1) {
@@ -1057,7 +1056,7 @@ static void mpeg4_encode_vol_header(Mpeg4EncContext *const m4,
     ff_mpeg4_stuffing(&s->pb);
 
     /* user data */
-    if (!(s->avctx->flags & AV_CODEC_FLAG_BITEXACT)) {
+    if (!(s->c.avctx->flags & AV_CODEC_FLAG_BITEXACT)) {
         put_bits32(&s->pb, USER_DATA_STARTCODE);
         ff_put_string(&s->pb, LIBAVCODEC_IDENT, 0);
     }
@@ -1067,32 +1066,32 @@ static void mpeg4_encode_vol_header(Mpeg4EncContext *const m4,
 static int mpeg4_encode_picture_header(MPVMainEncContext *const m)
 {
     Mpeg4EncContext *const m4 = mainctx_to_mpeg4(m);
-    MpegEncContext *const s = &m->s;
+    MPVEncContext *const s = &m->s;
     uint64_t time_incr;
     int64_t time_div, time_mod;
 
-    if (s->pict_type == AV_PICTURE_TYPE_I) {
-        if (!(s->avctx->flags & AV_CODEC_FLAG_GLOBAL_HEADER)) {
-            if (s->avctx->strict_std_compliance < FF_COMPLIANCE_VERY_STRICT)  // HACK, the reference sw is buggy
+    if (s->c.pict_type == AV_PICTURE_TYPE_I) {
+        if (!(s->c.avctx->flags & AV_CODEC_FLAG_GLOBAL_HEADER)) {
+            if (s->c.avctx->strict_std_compliance < FF_COMPLIANCE_VERY_STRICT)  // HACK, the reference sw is buggy
                 mpeg4_encode_visual_object_header(m);
-            if (s->avctx->strict_std_compliance < FF_COMPLIANCE_VERY_STRICT || s->picture_number == 0)  // HACK, the reference sw is buggy
+            if (s->c.avctx->strict_std_compliance < FF_COMPLIANCE_VERY_STRICT || s->c.picture_number == 0)  // HACK, the reference sw is buggy
                 mpeg4_encode_vol_header(m4, 0, 0);
         }
         mpeg4_encode_gop_header(m);
     }
 
-    s->partitioned_frame = s->data_partitioning && s->pict_type != AV_PICTURE_TYPE_B;
+    s->c.partitioned_frame = s->c.data_partitioning && s->c.pict_type != AV_PICTURE_TYPE_B;
 
     put_bits32(&s->pb, VOP_STARTCODE);      /* vop header */
-    put_bits(&s->pb, 2, s->pict_type - 1);  /* pict type: I = 0 , P = 1 */
+    put_bits(&s->pb, 2, s->c.pict_type - 1);  /* pict type: I = 0 , P = 1 */
 
-    time_div  = FFUDIV(s->time, s->avctx->time_base.den);
-    time_mod  = FFUMOD(s->time, s->avctx->time_base.den);
-    time_incr = time_div - s->last_time_base;
+    time_div  = FFUDIV(s->c.time, s->c.avctx->time_base.den);
+    time_mod  = FFUMOD(s->c.time, s->c.avctx->time_base.den);
+    time_incr = time_div - s->c.last_time_base;
 
     // This limits the frame duration to max 1 day
     if (time_incr > 3600*24) {
-        av_log(s->avctx, AV_LOG_ERROR, "time_incr %"PRIu64" too large\n", time_incr);
+        av_log(s->c.avctx, AV_LOG_ERROR, "time_incr %"PRIu64" too large\n", time_incr);
         return AVERROR(EINVAL);
     }
     while (time_incr--)
@@ -1104,22 +1103,22 @@ static int mpeg4_encode_picture_header(MPVMainEncContext *const m)
     put_bits(&s->pb, m4->time_increment_bits, time_mod); /* time increment */
     put_bits(&s->pb, 1, 1);                             /* marker */
     put_bits(&s->pb, 1, 1);                             /* vop coded */
-    if (s->pict_type == AV_PICTURE_TYPE_P) {
-        put_bits(&s->pb, 1, s->no_rounding);    /* rounding type */
+    if (s->c.pict_type == AV_PICTURE_TYPE_P) {
+        put_bits(&s->pb, 1, s->c.no_rounding);    /* rounding type */
     }
     put_bits(&s->pb, 3, 0);     /* intra dc VLC threshold */
-    if (!s->progressive_sequence) {
-        put_bits(&s->pb, 1, !!(s->cur_pic.ptr->f->flags & AV_FRAME_FLAG_TOP_FIELD_FIRST));
-        put_bits(&s->pb, 1, s->alternate_scan);
+    if (!s->c.progressive_sequence) {
+        put_bits(&s->pb, 1, !!(s->c.cur_pic.ptr->f->flags & AV_FRAME_FLAG_TOP_FIELD_FIRST));
+        put_bits(&s->pb, 1, s->c.alternate_scan);
     }
     // FIXME sprite stuff
 
-    put_bits(&s->pb, 5, s->qscale);
+    put_bits(&s->pb, 5, s->c.qscale);
 
-    if (s->pict_type != AV_PICTURE_TYPE_I)
-        put_bits(&s->pb, 3, s->f_code);  /* fcode_for */
-    if (s->pict_type == AV_PICTURE_TYPE_B)
-        put_bits(&s->pb, 3, s->b_code);  /* fcode_back */
+    if (s->c.pict_type != AV_PICTURE_TYPE_I)
+        put_bits(&s->pb, 3, s->c.f_code);  /* fcode_for */
+    if (s->c.pict_type == AV_PICTURE_TYPE_B)
+        put_bits(&s->pb, 3, s->c.b_code);  /* fcode_back */
 
     return 0;
 }
@@ -1294,7 +1293,7 @@ static av_cold int encode_init(AVCodecContext *avctx)
     static AVOnce init_static_once = AV_ONCE_INIT;
     Mpeg4EncContext *const m4 = avctx->priv_data;
     MPVMainEncContext *const m = &m4->m;
-    MpegEncContext  *const  s = &m->s;
+    MPVEncContext *const s = &m->s;
     int ret;
 
     if (avctx->width >= (1<<13) || avctx->height >= (1<<13)) {
@@ -1315,10 +1314,10 @@ static av_cold int encode_init(AVCodecContext *avctx)
     s->inter_ac_vlc_last_length = uni_mpeg4_inter_rl_len + 128 * 64;
     s->luma_dc_vlc_length       = uni_DCtab_lum_len;
     s->ac_esc_length            = 7 + 2 + 1 + 6 + 1 + 12 + 1;
-    s->y_dc_scale_table         = ff_mpeg4_y_dc_scale_table;
-    s->c_dc_scale_table         = ff_mpeg4_c_dc_scale_table;
+    s->c.y_dc_scale_table         = ff_mpeg4_y_dc_scale_table;
+    s->c.c_dc_scale_table         = ff_mpeg4_c_dc_scale_table;
 
-    ff_qpeldsp_init(&s->qdsp);
+    ff_qpeldsp_init(&s->c.qdsp);
     if ((ret = ff_mpv_encode_init(avctx)) < 0)
         return ret;
 
@@ -1335,23 +1334,23 @@ static av_cold int encode_init(AVCodecContext *avctx)
 
     m4->time_increment_bits     = av_log2(avctx->time_base.den - 1) + 1;
 
-    if (s->avctx->flags & AV_CODEC_FLAG_GLOBAL_HEADER) {
-        s->avctx->extradata = av_malloc(1024);
-        if (!s->avctx->extradata)
+    if (avctx->flags & AV_CODEC_FLAG_GLOBAL_HEADER) {
+        avctx->extradata = av_malloc(1024);
+        if (!avctx->extradata)
             return AVERROR(ENOMEM);
-        init_put_bits(&s->pb, s->avctx->extradata, 1024);
+        init_put_bits(&s->pb, avctx->extradata, 1024);
 
         mpeg4_encode_visual_object_header(m);
         mpeg4_encode_vol_header(m4, 0, 0);
 
 //            ff_mpeg4_stuffing(&s->pb); ?
         flush_put_bits(&s->pb);
-        s->avctx->extradata_size = put_bytes_output(&s->pb);
+        avctx->extradata_size = put_bytes_output(&s->pb);
     }
     return 0;
 }
 
-void ff_mpeg4_init_partitions(MpegEncContext *s)
+void ff_mpeg4_init_partitions(MPVEncContext *const s)
 {
     uint8_t *start = put_bits_ptr(&s->pb);
     uint8_t *end   = s->pb.buf_end;
@@ -1364,13 +1363,13 @@ void ff_mpeg4_init_partitions(MpegEncContext *s)
     init_put_bits(&s->pb2, start + pb_size + tex_size, pb_size);
 }
 
-void ff_mpeg4_merge_partitions(MpegEncContext *s)
+void ff_mpeg4_merge_partitions(MPVEncContext *const s)
 {
     const int pb2_len    = put_bits_count(&s->pb2);
     const int tex_pb_len = put_bits_count(&s->tex_pb);
     const int bits       = put_bits_count(&s->pb);
 
-    if (s->pict_type == AV_PICTURE_TYPE_I) {
+    if (s->c.pict_type == AV_PICTURE_TYPE_I) {
         put_bits(&s->pb, 19, DC_MARKER);
         s->misc_bits  += 19 + pb2_len + bits - s->last_bits;
         s->i_tex_bits += tex_pb_len;
@@ -1390,19 +1389,19 @@ void ff_mpeg4_merge_partitions(MpegEncContext *s)
     s->last_bits = put_bits_count(&s->pb);
 }
 
-void ff_mpeg4_encode_video_packet_header(MpegEncContext *s)
+void ff_mpeg4_encode_video_packet_header(MPVEncContext *const s)
 {
-    int mb_num_bits = av_log2(s->mb_num - 1) + 1;
+    int mb_num_bits = av_log2(s->c.mb_num - 1) + 1;
 
-    put_bits(&s->pb, ff_mpeg4_get_video_packet_prefix_length(s), 0);
+    put_bits(&s->pb, ff_mpeg4_get_video_packet_prefix_length(&s->c), 0);
     put_bits(&s->pb, 1, 1);
 
-    put_bits(&s->pb, mb_num_bits, s->mb_x + s->mb_y * s->mb_width);
-    put_bits(&s->pb, 5 /* quant_precision */, s->qscale);
+    put_bits(&s->pb, mb_num_bits, s->c.mb_x + s->c.mb_y * s->c.mb_width);
+    put_bits(&s->pb, 5 /* quant_precision */, s->c.qscale);
     put_bits(&s->pb, 1, 0); /* no HEC */
 }
 
-#define OFFSET(x) offsetof(MpegEncContext, x)
+#define OFFSET(x) offsetof(MPVEncContext, c.x)
 #define VE AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM
 static const AVOption options[] = {
     { "data_partitioning", "Use data partitioning.",      OFFSET(data_partitioning), AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, VE },
diff --git a/libavcodec/mpeg4videoenc.h b/libavcodec/mpeg4videoenc.h
index 0727be6750..815f16f073 100644
--- a/libavcodec/mpeg4videoenc.h
+++ b/libavcodec/mpeg4videoenc.h
@@ -27,14 +27,14 @@
 
 #include "put_bits.h"
 
-typedef struct MpegEncContext MpegEncContext;
+typedef struct MPVEncContext MPVEncContext;
 
-void ff_set_mpeg4_time(MpegEncContext *s);
+void ff_set_mpeg4_time(MPVEncContext *s);
 
-void ff_mpeg4_encode_video_packet_header(MpegEncContext *s);
+void ff_mpeg4_encode_video_packet_header(MPVEncContext *s);
 void ff_mpeg4_stuffing(PutBitContext *pbc);
-void ff_mpeg4_init_partitions(MpegEncContext *s);
-void ff_mpeg4_merge_partitions(MpegEncContext *s);
-void ff_clean_mpeg4_qscales(MpegEncContext *s);
+void ff_mpeg4_init_partitions(MPVEncContext *s);
+void ff_mpeg4_merge_partitions(MPVEncContext *s);
+void ff_clean_mpeg4_qscales(MPVEncContext *s);
 
 #endif
diff --git a/libavcodec/mpegvideo.c b/libavcodec/mpegvideo.c
index 126fefa1be..a65125cc13 100644
--- a/libavcodec/mpegvideo.c
+++ b/libavcodec/mpegvideo.c
@@ -435,9 +435,6 @@ static void backup_duplicate_context(MpegEncContext *bak, MpegEncContext *src)
     COPY(start_mb_y);
     COPY(end_mb_y);
     COPY(me.map_generation);
-    COPY(dct_error_sum);
-    COPY(dct_count[0]);
-    COPY(dct_count[1]);
     COPY(ac_val_base);
     COPY(ac_val[0]);
     COPY(ac_val[1]);
diff --git a/libavcodec/mpegvideo.h b/libavcodec/mpegvideo.h
index 85227fdb8f..adaa0cf2d0 100644
--- a/libavcodec/mpegvideo.h
+++ b/libavcodec/mpegvideo.h
@@ -30,18 +30,13 @@
 
 #include "blockdsp.h"
 #include "error_resilience.h"
-#include "fdctdsp.h"
 #include "get_bits.h"
 #include "h264chroma.h"
 #include "h263dsp.h"
 #include "hpeldsp.h"
 #include "idctdsp.h"
-#include "me_cmp.h"
 #include "motion_est.h"
 #include "mpegpicture.h"
-#include "mpegvideoencdsp.h"
-#include "pixblockdsp.h"
-#include "put_bits.h"
 #include "qpeldsp.h"
 #include "videodsp.h"
 
@@ -105,8 +100,6 @@ typedef struct MpegEncContext {
 
     enum AVCodecID codec_id;     /* see AV_CODEC_ID_xxx */
     int encoding;     ///< true if we are encoding (vs decoding)
-    int luma_elim_threshold;
-    int chroma_elim_threshold;
     int workaround_bugs;       ///< workaround bugs in encoders which cannot be detected automatically
     int codec_tag;             ///< internal codec_tag upper case converted from avctx codec_tag
     /* the following fields are managed internally by the encoder */
@@ -125,12 +118,12 @@ typedef struct MpegEncContext {
 
     BufferPoolContext buffer_pools;
 
-    /** bit output */
-    PutBitContext pb;
-
     int start_mb_y;            ///< start mb_y of this thread (so current thread should process start_mb_y <= row < end_mb_y)
     int end_mb_y;              ///< end   mb_y of this thread (so current thread should process start_mb_y <= row < end_mb_y)
-    struct MpegEncContext *thread_context[MAX_THREADS];
+    union {
+        struct MpegEncContext *thread_context[MAX_THREADS];
+        struct MPVEncContext  *enc_contexts[MAX_THREADS];
+    };
     int slice_context_count;   ///< number of used thread_contexts
 
     /**
@@ -145,12 +138,6 @@ typedef struct MpegEncContext {
      */
     MPVWorkPicture next_pic;
 
-    /**
-     * Reference to the source picture for encoding.
-     * note, linesize & data, might not match the source picture (for field pictures)
-     */
-    AVFrame *new_pic;
-
     /**
      * copy of the current picture structure.
      * note, linesize & data, might not match the current picture (for field pictures)
@@ -181,46 +168,24 @@ typedef struct MpegEncContext {
     int chroma_qscale;          ///< chroma QP
     unsigned int lambda;        ///< Lagrange multiplier used in rate distortion
     unsigned int lambda2;       ///< (lambda*lambda) >> FF_LAMBDA_SHIFT
-    int *lambda_table;
-    int adaptive_quant;         ///< use adaptive quantization
-    int dquant;                 ///< qscale difference to prev qscale
     int pict_type;              ///< AV_PICTURE_TYPE_I, AV_PICTURE_TYPE_P, AV_PICTURE_TYPE_B, ...
     int droppable;
-    int skipdct;                ///< skip dct and code zero residual
 
     /* motion compensation */
     int unrestricted_mv;        ///< mv can point outside of the coded picture
     int h263_long_vectors;      ///< use horrible H.263v1 long vector mode
 
     BlockDSPContext bdsp;
-    FDCTDSPContext fdsp;
     H264ChromaContext h264chroma;
     HpelDSPContext hdsp;
     IDCTDSPContext idsp;
-    MpegvideoEncDSPContext mpvencdsp;
-    PixblockDSPContext pdsp;
     QpelDSPContext qdsp;
     VideoDSPContext vdsp;
     H263DSPContext h263dsp;
     int f_code;                 ///< forward MV resolution
     int b_code;                 ///< backward MV resolution for B-frames (MPEG-4)
     int16_t (*p_field_mv_table_base)[2];
-    int16_t (*p_mv_table)[2];            ///< MV table (1MV per MB) P-frame encoding
-    int16_t (*b_forw_mv_table)[2];       ///< MV table (1MV per MB) forward mode B-frame encoding
-    int16_t (*b_back_mv_table)[2];       ///< MV table (1MV per MB) backward mode B-frame encoding
-    int16_t (*b_bidir_forw_mv_table)[2]; ///< MV table (1MV per MB) bidir mode B-frame encoding
-    int16_t (*b_bidir_back_mv_table)[2]; ///< MV table (1MV per MB) bidir mode B-frame encoding
-    int16_t (*b_direct_mv_table)[2];     ///< MV table (1MV per MB) direct mode B-frame encoding
     int16_t (*p_field_mv_table[2][2])[2];   ///< MV table (2MV per MB) interlaced P-frame encoding
-    int16_t (*b_field_mv_table[2][2][2])[2];///< MV table (4MV per MB) interlaced B-frame encoding
-    uint8_t (*p_field_select_table[2]);  ///< Only the first element is allocated
-    uint8_t (*b_field_select_table[2][2]); ///< allocated jointly with p_field_select_table
-
-    /* The following fields are encoder-only */
-    uint16_t *mb_var;           ///< Table for MB variances
-    uint16_t *mc_mb_var;        ///< Table for motion compensated MB variances
-    uint8_t *mb_mean;           ///< Table for MB luminance
-    uint64_t encoding_error[MPV_MAX_PLANES];
 
     int mv_dir;
 #define MV_DIR_FORWARD   1
@@ -251,7 +216,6 @@ typedef struct MpegEncContext {
     int mb_x, mb_y;
     int mb_skip_run;
     int mb_intra;
-    uint16_t *mb_type;  ///< Table for candidate MB types for encoding (defines in mpegvideoenc.h)
 
     int block_index[6]; ///< index to current MB in block based arrays with edges
     int block_wrap[6];
@@ -265,43 +229,6 @@ typedef struct MpegEncContext {
     uint16_t inter_matrix[64];
     uint16_t chroma_inter_matrix[64];
 
-    int intra_quant_bias;    ///< bias for the quantizer
-    int inter_quant_bias;    ///< bias for the quantizer
-    int min_qcoeff;          ///< minimum encodable coefficient
-    int max_qcoeff;          ///< maximum encodable coefficient
-    int ac_esc_length;       ///< num of bits needed to encode the longest esc
-    uint8_t *intra_ac_vlc_length;
-    uint8_t *intra_ac_vlc_last_length;
-    uint8_t *intra_chroma_ac_vlc_length;
-    uint8_t *intra_chroma_ac_vlc_last_length;
-    uint8_t *inter_ac_vlc_length;
-    uint8_t *inter_ac_vlc_last_length;
-    uint8_t *luma_dc_vlc_length;
-
-    int coded_score[12];
-
-    /** precomputed matrix (combine qscale and DCT renorm) */
-    int (*q_intra_matrix)[64];
-    int (*q_chroma_intra_matrix)[64];
-    int (*q_inter_matrix)[64];
-    /** identical to the above but for MMX & these are not permutated, second 64 entries are bias*/
-    uint16_t (*q_intra_matrix16)[2][64];
-    uint16_t (*q_chroma_intra_matrix16)[2][64];
-    uint16_t (*q_inter_matrix16)[2][64];
-
-    /* noise reduction */
-    int (*dct_error_sum)[64];
-    int dct_count[2];
-    uint16_t (*dct_offset)[64];
-
-    /* statistics, used for 2-pass encoding */
-    int mv_bits;
-    int i_tex_bits;
-    int p_tex_bits;
-    int i_count;
-    int misc_bits; ///< cbp, mb_type
-    int last_bits; ///< temp var used for calculating the above vars
-
     /* error concealment / resync */
     int resync_mb_x;                 ///< x position of last resync marker
     int resync_mb_y;                 ///< y position of last resync marker
@@ -311,10 +238,6 @@ typedef struct MpegEncContext {
     /* H.263 specific */
     int gob_index;
     int obmc;                       ///< overlapped block motion compensation
-    int mb_info;                    ///< interval for outputting info about mb offsets as side data
-    int prev_mb_info, last_mb_info;
-    uint8_t *mb_info_ptr;
-    int mb_info_size;
     int ehc_mode;
 
     /* H.263+ specific */
@@ -342,8 +265,6 @@ typedef struct MpegEncContext {
     int data_partitioning;           ///< data partitioning flag from header
     int partitioned_frame;           ///< is current frame partitioned
     int low_delay;                   ///< no reordering needed / has no B-frames
-    PutBitContext tex_pb;            ///< used for data partitioned VOPs
-    PutBitContext pb2;               ///< used for data partitioned VOPs
     int mpeg_quant;
     int padding_bug_score;             ///< used to detect the VERY common padding bug in MPEG-4
 
@@ -354,10 +275,6 @@ typedef struct MpegEncContext {
     int rv10_version; ///< RV10 version: 0 or 3
     int rv10_first_dc_coded[3];
 
-    /* MJPEG specific */
-    struct MJpegContext *mjpeg_ctx;
-    int esc_pos;
-
     /* MSMPEG4 specific */
     int slice_height;      ///< in macroblocks
     int first_slice_line;  ///< used in MPEG-4 too to handle resync markers
@@ -371,16 +288,12 @@ typedef struct MpegEncContext {
         MSMP4_WMV2,
         MSMP4_VC1,        ///< for VC1 (image), WMV3 (image) and MSS2.
     } msmpeg4_version;
-    int esc3_level_length;
     int inter_intra_pred;
     int mspel;
 
     /* decompression specific */
     GetBitContext gb;
 
-    /* MPEG-1 specific */
-    int last_mv_dir;         ///< last mv_dir, used for B-frame encoding
-
     /* MPEG-2-specific - I wished not to have to support this mess. */
     int progressive_sequence;
     int mpeg_f_code[2][2];
@@ -409,19 +322,9 @@ typedef struct MpegEncContext {
     int interlaced_dct;
     int first_field;         ///< is 1 for the first field of a field picture 0 otherwise
 
-    /* RTP specific */
-    int rtp_mode;
-    int rtp_payload_size;
-
-    uint8_t *ptr_lastgob;
-
     int16_t (*block)[64]; ///< points to one of the following blocks
     int16_t (*blocks)[12][64]; // for HQ mode we need to keep the best block
-    union {
     int (*decode_mb)(struct MpegEncContext *s, int16_t block[12][64]); // used by some codecs to avoid a switch()
-        void (*encode_mb)(struct MpegEncContext *s, int16_t block[][64],
-                          int motion_x, int motion_y);
-    };
 
 #define SLICE_OK         0
 #define SLICE_ERROR     -1
@@ -444,20 +347,6 @@ typedef struct MpegEncContext {
                            int16_t *block/*align 16*/, int n, int qscale);
     void (*dct_unquantize_inter)(struct MpegEncContext *s, // unquantizer to use (MPEG-4 can use both)
                            int16_t *block/*align 16*/, int n, int qscale);
-    int (*dct_quantize)(struct MpegEncContext *s, int16_t *block/*align 16*/, int n, int qscale, int *overflow);
-    void (*denoise_dct)(struct MpegEncContext *s, int16_t *block);
-
-    int mpv_flags;      ///< flags set by private options
-    int quantizer_noise_shaping;
-
-    me_cmp_func ildct_cmp[2]; ///< 0 = intra, 1 = non-intra
-    me_cmp_func n_sse_cmp[2]; ///< either SSE or NSSE cmp func
-    me_cmp_func sad_cmp[2];
-    me_cmp_func sse_cmp[2];
-    int (*sum_abs_dctelem)(const int16_t *block);
-
-    /// Bitfield containing information which frames to reconstruct.
-    int frame_reconstruction_bitfield;
 
     /* flag to indicate a reinitialization is required, e.g. after
      * a frame size change */
@@ -467,10 +356,6 @@ typedef struct MpegEncContext {
     unsigned slice_ctx_size;
 
     ERContext er;
-
-    int error_rate;
-
-    int intra_penalty;
 } MpegEncContext;
 
 
diff --git a/libavcodec/mpegvideo_dec.c b/libavcodec/mpegvideo_dec.c
index 2856dbfbd6..8c84b59c5e 100644
--- a/libavcodec/mpegvideo_dec.c
+++ b/libavcodec/mpegvideo_dec.c
@@ -136,7 +136,7 @@ int ff_mpeg_update_thread_context(AVCodecContext *dst,
 
     // MPEG-2/interlacing info
     memcpy(&s->progressive_sequence, &s1->progressive_sequence,
-           (char *) &s1->rtp_mode - (char *) &s1->progressive_sequence);
+           (char *) &s1->first_field + sizeof(s1->first_field) - (char *) &s1->progressive_sequence);
 
     return 0;
 }
diff --git a/libavcodec/mpegvideo_enc.c b/libavcodec/mpegvideo_enc.c
index 33241d6cb0..7061ad0719 100644
--- a/libavcodec/mpegvideo_enc.c
+++ b/libavcodec/mpegvideo_enc.c
@@ -84,13 +84,13 @@
 #define QMAT_SHIFT 21
 
 static int encode_picture(MPVMainEncContext *const s, const AVPacket *pkt);
-static int dct_quantize_refine(MpegEncContext *s, int16_t *block, int16_t *weight, int16_t *orig, int n, int qscale);
-static int sse_mb(MpegEncContext *s);
-static void denoise_dct_c(MpegEncContext *s, int16_t *block);
-static int dct_quantize_c(MpegEncContext *s,
+static int dct_quantize_refine(MPVEncContext *const s, int16_t *block, int16_t *weight, int16_t *orig, int n, int qscale);
+static int sse_mb(MPVEncContext *const s);
+static void denoise_dct_c(MPVEncContext *const s, int16_t *block);
+static int dct_quantize_c(MPVEncContext *const s,
                           int16_t *block, int n,
                           int qscale, int *overflow);
-static int dct_quantize_trellis_c(MpegEncContext *s, int16_t *block, int n, int qscale, int *overflow);
+static int dct_quantize_trellis_c(MPVEncContext *const s, int16_t *block, int n, int qscale, int *overflow);
 
 static uint8_t default_fcode_tab[MAX_MV * 2 + 1];
 
@@ -107,7 +107,7 @@ const AVClass ff_mpv_enc_class = {
     .version    = LIBAVUTIL_VERSION_INT,
 };
 
-void ff_convert_matrix(MpegEncContext *s, int (*qmat)[64],
+void ff_convert_matrix(MPVEncContext *const s, int (*qmat)[64],
                        uint16_t (*qmat16)[2][64],
                        const uint16_t *quant_matrix,
                        int bias, int qmin, int qmax, int intra)
@@ -120,7 +120,7 @@ void ff_convert_matrix(MpegEncContext *s, int (*qmat)[64],
         int i;
         int qscale2;
 
-        if (s->q_scale_type) qscale2 = ff_mpeg2_non_linear_qscale[qscale];
+        if (s->c.q_scale_type) qscale2 = ff_mpeg2_non_linear_qscale[qscale];
         else                 qscale2 = qscale << 1;
 
         if (fdsp->fdct == ff_jpeg_fdct_islow_8  ||
@@ -129,7 +129,7 @@ void ff_convert_matrix(MpegEncContext *s, int (*qmat)[64],
 #endif /* CONFIG_FAANDCT */
             fdsp->fdct == ff_jpeg_fdct_islow_10) {
             for (i = 0; i < 64; i++) {
-                const int j = s->idsp.idct_permutation[i];
+                const int j = s->c.idsp.idct_permutation[i];
                 int64_t den = (int64_t) qscale2 * quant_matrix[j];
                 /* 1 * 1 <= qscale2 * quant_matrix[j] <= 112 * 255
                  * Assume x = qscale2 * quant_matrix[j]
@@ -141,7 +141,7 @@ void ff_convert_matrix(MpegEncContext *s, int (*qmat)[64],
             }
         } else if (fdsp->fdct == ff_fdct_ifast) {
             for (i = 0; i < 64; i++) {
-                const int j = s->idsp.idct_permutation[i];
+                const int j = s->c.idsp.idct_permutation[i];
                 int64_t den = ff_aanscales[i] * (int64_t) qscale2 * quant_matrix[j];
                 /* 1247 * 1 * 1 <= ff_aanscales[i] * qscale2 * quant_matrix[j] <= 31521 * 112 * 255
                  * Assume x = ff_aanscales[i] * qscale2 * quant_matrix[j]
@@ -153,7 +153,7 @@ void ff_convert_matrix(MpegEncContext *s, int (*qmat)[64],
             }
         } else {
             for (i = 0; i < 64; i++) {
-                const int j = s->idsp.idct_permutation[i];
+                const int j = s->c.idsp.idct_permutation[i];
                 int64_t den = (int64_t) qscale2 * quant_matrix[j];
                 /* 1 * 1 <= qscale2 * quant_matrix[j] <= 112 * 255
                  * Assume x = qscale2 * quant_matrix[j]
@@ -188,7 +188,7 @@ void ff_convert_matrix(MpegEncContext *s, int (*qmat)[64],
         }
     }
     if (shift) {
-        av_log(s->avctx, AV_LOG_INFO,
+        av_log(s->c.avctx, AV_LOG_INFO,
                "Warning, QMAT_SHIFT is larger than %d, overflows possible\n",
                QMAT_SHIFT - shift);
     }
@@ -196,31 +196,31 @@ void ff_convert_matrix(MpegEncContext *s, int (*qmat)[64],
 
 static inline void update_qscale(MPVMainEncContext *const m)
 {
-    MpegEncContext *const s = &m->s;
+    MPVEncContext *const s = &m->s;
 
-    if (s->q_scale_type == 1 && 0) {
+    if (s->c.q_scale_type == 1 && 0) {
         int i;
         int bestdiff=INT_MAX;
         int best = 1;
 
         for (i = 0 ; i<FF_ARRAY_ELEMS(ff_mpeg2_non_linear_qscale); i++) {
-            int diff = FFABS((ff_mpeg2_non_linear_qscale[i]<<(FF_LAMBDA_SHIFT + 6)) - (int)s->lambda * 139);
-            if (ff_mpeg2_non_linear_qscale[i] < s->avctx->qmin ||
-                (ff_mpeg2_non_linear_qscale[i] > s->avctx->qmax && !m->vbv_ignore_qmax))
+            int diff = FFABS((ff_mpeg2_non_linear_qscale[i]<<(FF_LAMBDA_SHIFT + 6)) - (int)s->c.lambda * 139);
+            if (ff_mpeg2_non_linear_qscale[i] < s->c.avctx->qmin ||
+                (ff_mpeg2_non_linear_qscale[i] > s->c.avctx->qmax && !m->vbv_ignore_qmax))
                 continue;
             if (diff < bestdiff) {
                 bestdiff = diff;
                 best = i;
             }
         }
-        s->qscale = best;
+        s->c.qscale = best;
     } else {
-        s->qscale = (s->lambda * 139 + FF_LAMBDA_SCALE * 64) >>
+        s->c.qscale = (s->c.lambda * 139 + FF_LAMBDA_SCALE * 64) >>
                     (FF_LAMBDA_SHIFT + 7);
-        s->qscale = av_clip(s->qscale, s->avctx->qmin, m->vbv_ignore_qmax ? 31 : s->avctx->qmax);
+        s->c.qscale = av_clip(s->c.qscale, s->c.avctx->qmin, m->vbv_ignore_qmax ? 31 : s->c.avctx->qmax);
     }
 
-    s->lambda2 = (s->lambda * s->lambda + FF_LAMBDA_SCALE / 2) >>
+    s->c.lambda2 = (s->c.lambda * s->c.lambda + FF_LAMBDA_SCALE / 2) >>
                  FF_LAMBDA_SHIFT;
 }
 
@@ -238,34 +238,34 @@ void ff_write_quant_matrix(PutBitContext *pb, uint16_t *matrix)
 }
 
 /**
- * init s->cur_pic.qscale_table from s->lambda_table
+ * init s->c.cur_pic.qscale_table from s->lambda_table
  */
-static void init_qscale_tab(MpegEncContext *s)
+static void init_qscale_tab(MPVEncContext *const s)
 {
-    int8_t * const qscale_table = s->cur_pic.qscale_table;
+    int8_t * const qscale_table = s->c.cur_pic.qscale_table;
     int i;
 
-    for (i = 0; i < s->mb_num; i++) {
-        unsigned int lam = s->lambda_table[s->mb_index2xy[i]];
+    for (i = 0; i < s->c.mb_num; i++) {
+        unsigned int lam = s->lambda_table[s->c.mb_index2xy[i]];
         int qp = (lam * 139 + FF_LAMBDA_SCALE * 64) >> (FF_LAMBDA_SHIFT + 7);
-        qscale_table[s->mb_index2xy[i]] = av_clip(qp, s->avctx->qmin,
-                                                  s->avctx->qmax);
+        qscale_table[s->c.mb_index2xy[i]] = av_clip(qp, s->c.avctx->qmin,
+                                                  s->c.avctx->qmax);
     }
 }
 
-static void update_duplicate_context_after_me(MpegEncContext *dst,
-                                              const MpegEncContext *src)
+static void update_duplicate_context_after_me(MPVEncContext *const dst,
+                                              const MPVEncContext *const src)
 {
-#define COPY(a) dst->a= src->a
-    COPY(pict_type);
-    COPY(f_code);
-    COPY(b_code);
-    COPY(qscale);
-    COPY(lambda);
-    COPY(lambda2);
-    COPY(frame_pred_frame_dct); // FIXME don't set in encode_header
-    COPY(progressive_frame);    // FIXME don't set in encode_header
-    COPY(partitioned_frame);    // FIXME don't set in encode_header
+#define COPY(a) dst->a = src->a
+    COPY(c.pict_type);
+    COPY(c.f_code);
+    COPY(c.b_code);
+    COPY(c.qscale);
+    COPY(c.lambda);
+    COPY(c.lambda2);
+    COPY(c.frame_pred_frame_dct); // FIXME don't set in encode_header
+    COPY(c.progressive_frame);    // FIXME don't set in encode_header
+    COPY(c.partitioned_frame);    // FIXME don't set in encode_header
 #undef COPY
 }
 
@@ -276,26 +276,26 @@ static av_cold void mpv_encode_init_static(void)
 }
 
 /**
- * Set the given MpegEncContext to defaults for encoding.
+ * Set the given MPVEncContext to defaults for encoding.
  */
 static av_cold void mpv_encode_defaults(MPVMainEncContext *const m)
 {
-    MpegEncContext *const s = &m->s;
+    MPVEncContext *const s = &m->s;
     static AVOnce init_static_once = AV_ONCE_INIT;
 
-    ff_mpv_common_defaults(s);
+    ff_mpv_common_defaults(&s->c);
 
     if (!m->fcode_tab) {
         m->fcode_tab = default_fcode_tab + MAX_MV;
         ff_thread_once(&init_static_once, mpv_encode_init_static);
     }
-    if (!s->y_dc_scale_table) {
-        s->y_dc_scale_table =
-        s->c_dc_scale_table = ff_mpeg1_dc_scale_table;
+    if (!s->c.y_dc_scale_table) {
+        s->c.y_dc_scale_table =
+        s->c.c_dc_scale_table = ff_mpeg1_dc_scale_table;
     }
 }
 
-av_cold void ff_dct_encode_init(MpegEncContext *s)
+av_cold void ff_dct_encode_init(MPVEncContext *const s)
 {
     s->dct_quantize = dct_quantize_c;
     s->denoise_dct  = denoise_dct_c;
@@ -306,19 +306,19 @@ av_cold void ff_dct_encode_init(MpegEncContext *s)
     ff_dct_encode_init_x86(s);
 #endif
 
-    if (s->avctx->trellis)
+    if (s->c.avctx->trellis)
         s->dct_quantize  = dct_quantize_trellis_c;
 }
 
 static av_cold int me_cmp_init(MPVMainEncContext *const m, AVCodecContext *avctx)
 {
-    MpegEncContext *const s = &m->s;
+    MPVEncContext *const s = &m->s;
     MECmpContext mecc;
     me_cmp_func me_cmp[6];
     int ret;
 
     ff_me_cmp_init(&mecc, avctx);
-    ret = ff_me_init(&s->me, avctx, &mecc, 1);
+    ret = ff_me_init(&s->c.me, avctx, &mecc, 1);
     if (ret < 0)
         return ret;
     ret = ff_set_cmp(&mecc, me_cmp, m->frame_skip_cmp, 1);
@@ -355,8 +355,8 @@ static av_cold int me_cmp_init(MPVMainEncContext *const m, AVCodecContext *avctx
 #define ALLOCZ_ARRAYS(p, mult, numb) ((p) = av_calloc(numb, mult * sizeof(*(p))))
 static av_cold int init_matrices(MPVMainEncContext *const m, AVCodecContext *avctx)
 {
-    MpegEncContext *const s = &m->s;
-    const int nb_matrices = 1 + (s->out_format == FMT_MJPEG) + !m->intra_only;
+    MPVEncContext *const s = &m->s;
+    const int nb_matrices = 1 + (s->c.out_format == FMT_MJPEG) + !m->intra_only;
     const uint16_t *intra_matrix, *inter_matrix;
     int ret;
 
@@ -364,7 +364,7 @@ static av_cold int init_matrices(MPVMainEncContext *const m, AVCodecContext *avc
         !ALLOCZ_ARRAYS(s->q_intra_matrix16, 32, nb_matrices))
         return AVERROR(ENOMEM);
 
-    if (s->out_format == FMT_MJPEG) {
+    if (s->c.out_format == FMT_MJPEG) {
         s->q_chroma_intra_matrix   = s->q_intra_matrix   + 32;
         s->q_chroma_intra_matrix16 = s->q_intra_matrix16 + 32;
         // No need to set q_inter_matrix
@@ -380,11 +380,11 @@ static av_cold int init_matrices(MPVMainEncContext *const m, AVCodecContext *avc
         s->q_inter_matrix16 = s->q_intra_matrix16 + 32;
     }
 
-    if (CONFIG_MPEG4_ENCODER && s->codec_id == AV_CODEC_ID_MPEG4 &&
-        s->mpeg_quant) {
+    if (CONFIG_MPEG4_ENCODER && s->c.codec_id == AV_CODEC_ID_MPEG4 &&
+        s->c.mpeg_quant) {
         intra_matrix = ff_mpeg4_default_intra_matrix;
         inter_matrix = ff_mpeg4_default_non_intra_matrix;
-    } else if (s->out_format == FMT_H263 || s->out_format == FMT_H261) {
+    } else if (s->c.out_format == FMT_H263 || s->c.out_format == FMT_H261) {
         intra_matrix =
         inter_matrix = ff_mpeg1_default_non_intra_matrix;
     } else {
@@ -399,10 +399,10 @@ static av_cold int init_matrices(MPVMainEncContext *const m, AVCodecContext *avc
 
     /* init q matrix */
     for (int i = 0; i < 64; i++) {
-        int j = s->idsp.idct_permutation[i];
+        int j = s->c.idsp.idct_permutation[i];
 
-        s->intra_matrix[j] = s->chroma_intra_matrix[j] = intra_matrix[i];
-        s->inter_matrix[j] = inter_matrix[i];
+        s->c.intra_matrix[j] = s->c.chroma_intra_matrix[j] = intra_matrix[i];
+        s->c.inter_matrix[j] = inter_matrix[i];
     }
 
     /* precompute matrix */
@@ -411,11 +411,11 @@ static av_cold int init_matrices(MPVMainEncContext *const m, AVCodecContext *avc
         return ret;
 
     ff_convert_matrix(s, s->q_intra_matrix, s->q_intra_matrix16,
-                      s->intra_matrix, s->intra_quant_bias, avctx->qmin,
+                      s->c.intra_matrix, s->intra_quant_bias, avctx->qmin,
                       31, 1);
     if (s->q_inter_matrix)
         ff_convert_matrix(s, s->q_inter_matrix, s->q_inter_matrix16,
-                          s->inter_matrix, s->inter_quant_bias, avctx->qmin,
+                          s->c.inter_matrix, s->inter_quant_bias, avctx->qmin,
                           31, 0);
 
     return 0;
@@ -423,7 +423,7 @@ static av_cold int init_matrices(MPVMainEncContext *const m, AVCodecContext *avc
 
 static av_cold int init_buffers(MPVMainEncContext *const m, AVCodecContext *avctx)
 {
-    MpegEncContext *const s = &m->s;
+    MPVEncContext *const s = &m->s;
     // Align the following per-thread buffers to avoid false sharing.
     enum {
 #ifndef _MSC_VER
@@ -432,12 +432,12 @@ static av_cold int init_buffers(MPVMainEncContext *const m, AVCodecContext *avct
 #else
         ALIGN = 128,
 #endif
-        ME_MAP_ALLOC_SIZE = FFALIGN(2 * ME_MAP_SIZE * sizeof(*s->me.map), ALIGN),
+        ME_MAP_ALLOC_SIZE = FFALIGN(2 * ME_MAP_SIZE * sizeof(*s->c.me.map), ALIGN),
         DCT_ERROR_SIZE    = FFALIGN(2 * sizeof(*s->dct_error_sum), ALIGN),
     };
     static_assert(FFMAX(ME_MAP_ALLOC_SIZE, DCT_ERROR_SIZE) * MAX_THREADS + ALIGN - 1 <= SIZE_MAX,
                   "Need checks for potential overflow.");
-    unsigned nb_slices = s->slice_context_count, mv_table_size, mb_array_size;
+    unsigned nb_slices = s->c.slice_context_count, mv_table_size, mb_array_size;
     char *dct_error = NULL, *me_map;
     int has_b_frames = !!m->max_b_frames, nb_mv_tables = 1 + 5 * has_b_frames;
     int16_t (*mv_table)[2];
@@ -458,16 +458,16 @@ static av_cold int init_buffers(MPVMainEncContext *const m, AVCodecContext *avct
     me_map += FFALIGN((uintptr_t)me_map, ALIGN) - (uintptr_t)me_map;
 
     /* Allocate MB type table */
-    mb_array_size = s->mb_stride * s->mb_height;
+    mb_array_size = s->c.mb_stride * s->c.mb_height;
     s->mb_type = av_calloc(mb_array_size, 3 * sizeof(*s->mb_type) + sizeof(*s->mb_mean));
     if (!s->mb_type)
         return AVERROR(ENOMEM);
     if (!FF_ALLOCZ_TYPED_ARRAY(s->lambda_table, mb_array_size))
         return AVERROR(ENOMEM);
 
-    mv_table_size = (s->mb_height + 2) * s->mb_stride + 1;
-    if (s->codec_id == AV_CODEC_ID_MPEG4 ||
-        (s->avctx->flags & AV_CODEC_FLAG_INTERLACED_ME)) {
+    mv_table_size = (s->c.mb_height + 2) * s->c.mb_stride + 1;
+    if (s->c.codec_id == AV_CODEC_ID_MPEG4 ||
+        (s->c.avctx->flags & AV_CODEC_FLAG_INTERLACED_ME)) {
         nb_mv_tables += 8 * has_b_frames;
         if (!ALLOCZ_ARRAYS(s->p_field_select_table[0], 2 * (2 + 4 * has_b_frames), mv_table_size))
             return AVERROR(ENOMEM);
@@ -477,10 +477,10 @@ static av_cold int init_buffers(MPVMainEncContext *const m, AVCodecContext *avct
     if (!mv_table)
         return AVERROR(ENOMEM);
     m->mv_table_base = mv_table;
-    mv_table += s->mb_stride + 1;
+    mv_table += s->c.mb_stride + 1;
 
     for (unsigned i = 0; i < nb_slices; ++i) {
-        MpegEncContext *const s2 = s->thread_context[i];
+        MPVEncContext *const s2 = s->c.enc_contexts[i];
         int16_t (*tmp_mv_table)[2] = mv_table;
 
         if (dct_error) {
@@ -495,8 +495,8 @@ static av_cold int init_buffers(MPVMainEncContext *const m, AVCodecContext *avct
         s2->mb_mean      = (uint8_t*)(s2->mb_var + mb_array_size);
         s2->lambda_table = s->lambda_table;
 
-        s2->me.map       = (uint32_t*)me_map;
-        s2->me.score_map = s2->me.map + ME_MAP_SIZE;
+        s2->c.me.map     = (uint32_t*)me_map;
+        s2->c.me.score_map = s2->c.me.map + ME_MAP_SIZE;
         me_map          += ME_MAP_ALLOC_SIZE;
 
         s2->p_mv_table            = tmp_mv_table;
@@ -532,7 +532,7 @@ static av_cold int init_buffers(MPVMainEncContext *const m, AVCodecContext *avct
 av_cold int ff_mpv_encode_init(AVCodecContext *avctx)
 {
     MPVMainEncContext *const m = avctx->priv_data;
-    MpegEncContext    *const s = &m->s;
+    MPVEncContext    *const s = &m->s;
     AVCPBProperties *cpb_props;
     int i, ret;
 
@@ -541,24 +541,24 @@ av_cold int ff_mpv_encode_init(AVCodecContext *avctx)
     switch (avctx->pix_fmt) {
     case AV_PIX_FMT_YUVJ444P:
     case AV_PIX_FMT_YUV444P:
-        s->chroma_format = CHROMA_444;
+        s->c.chroma_format = CHROMA_444;
         break;
     case AV_PIX_FMT_YUVJ422P:
     case AV_PIX_FMT_YUV422P:
-        s->chroma_format = CHROMA_422;
+        s->c.chroma_format = CHROMA_422;
         break;
     case AV_PIX_FMT_YUVJ420P:
     case AV_PIX_FMT_YUV420P:
     default:
-        s->chroma_format = CHROMA_420;
+        s->c.chroma_format = CHROMA_420;
         break;
     }
 
     avctx->bits_per_raw_sample = av_clip(avctx->bits_per_raw_sample, 0, 8);
 
     m->bit_rate = avctx->bit_rate;
-    s->width    = avctx->width;
-    s->height   = avctx->height;
+    s->c.width    = avctx->width;
+    s->c.height   = avctx->height;
     if (avctx->gop_size > 600 &&
         avctx->strict_std_compliance > FF_COMPLIANCE_EXPERIMENTAL) {
         av_log(avctx, AV_LOG_WARNING,
@@ -567,7 +567,7 @@ av_cold int ff_mpv_encode_init(AVCodecContext *avctx)
         avctx->gop_size = 600;
     }
     m->gop_size     = avctx->gop_size;
-    s->avctx        = avctx;
+    s->c.avctx        = avctx;
     if (avctx->max_b_frames > MPVENC_MAX_B_FRAMES) {
         av_log(avctx, AV_LOG_ERROR, "Too many B-frames requested, maximum "
                "is " AV_STRINGIFY(MPVENC_MAX_B_FRAMES) ".\n");
@@ -578,30 +578,30 @@ av_cold int ff_mpv_encode_init(AVCodecContext *avctx)
         return AVERROR(EINVAL);
     }
     m->max_b_frames = avctx->max_b_frames;
-    s->codec_id     = avctx->codec->id;
+    s->c.codec_id     = avctx->codec->id;
     if (m->max_b_frames && !(avctx->codec->capabilities & AV_CODEC_CAP_DELAY)) {
         av_log(avctx, AV_LOG_ERROR, "B-frames not supported by codec\n");
         return AVERROR(EINVAL);
     }
 
-    s->quarter_sample     = (avctx->flags & AV_CODEC_FLAG_QPEL) != 0;
+    s->c.quarter_sample     = (avctx->flags & AV_CODEC_FLAG_QPEL) != 0;
     s->rtp_mode           = !!s->rtp_payload_size;
-    s->intra_dc_precision = avctx->intra_dc_precision;
+    s->c.intra_dc_precision = avctx->intra_dc_precision;
 
     // workaround some differences between how applications specify dc precision
-    if (s->intra_dc_precision < 0) {
-        s->intra_dc_precision += 8;
-    } else if (s->intra_dc_precision >= 8)
-        s->intra_dc_precision -= 8;
+    if (s->c.intra_dc_precision < 0) {
+        s->c.intra_dc_precision += 8;
+    } else if (s->c.intra_dc_precision >= 8)
+        s->c.intra_dc_precision -= 8;
 
-    if (s->intra_dc_precision < 0) {
+    if (s->c.intra_dc_precision < 0) {
         av_log(avctx, AV_LOG_ERROR,
                 "intra dc precision must be positive, note some applications use"
                 " 0 and some 8 as base meaning 8bit, the value must not be smaller than that\n");
         return AVERROR(EINVAL);
     }
 
-    if (s->intra_dc_precision > (avctx->codec_id == AV_CODEC_ID_MPEG2VIDEO ? 3 : 0)) {
+    if (s->c.intra_dc_precision > (avctx->codec_id == AV_CODEC_ID_MPEG2VIDEO ? 3 : 0)) {
         av_log(avctx, AV_LOG_ERROR, "intra dc precision too large\n");
         return AVERROR(EINVAL);
     }
@@ -626,7 +626,7 @@ av_cold int ff_mpv_encode_init(AVCodecContext *avctx)
                          (s->mpv_flags & FF_MPV_FLAG_QP_RD)) &&
                         !m->fixed_qscale;
 
-    s->loop_filter = !!(avctx->flags & AV_CODEC_FLAG_LOOP_FILTER);
+    s->c.loop_filter = !!(avctx->flags & AV_CODEC_FLAG_LOOP_FILTER);
 
     if (avctx->rc_max_rate && !avctx->rc_buffer_size) {
         switch(avctx->codec_id) {
@@ -700,27 +700,27 @@ av_cold int ff_mpv_encode_init(AVCodecContext *avctx)
             avctx->bit_rate_tolerance = INT_MAX;
     }
 
-    if ((avctx->flags & AV_CODEC_FLAG_4MV) && s->codec_id != AV_CODEC_ID_MPEG4 &&
-        s->codec_id != AV_CODEC_ID_H263 && s->codec_id != AV_CODEC_ID_H263P &&
-        s->codec_id != AV_CODEC_ID_FLV1) {
+    if ((avctx->flags & AV_CODEC_FLAG_4MV) && s->c.codec_id != AV_CODEC_ID_MPEG4 &&
+        s->c.codec_id != AV_CODEC_ID_H263 && s->c.codec_id != AV_CODEC_ID_H263P &&
+        s->c.codec_id != AV_CODEC_ID_FLV1) {
         av_log(avctx, AV_LOG_ERROR, "4MV not supported by codec\n");
         return AVERROR(EINVAL);
     }
 
-    if (s->obmc && avctx->mb_decision != FF_MB_DECISION_SIMPLE) {
+    if (s->c.obmc && avctx->mb_decision != FF_MB_DECISION_SIMPLE) {
         av_log(avctx, AV_LOG_ERROR,
                "OBMC is only supported with simple mb decision\n");
         return AVERROR(EINVAL);
     }
 
-    if (s->quarter_sample && s->codec_id != AV_CODEC_ID_MPEG4) {
+    if (s->c.quarter_sample && s->c.codec_id != AV_CODEC_ID_MPEG4) {
         av_log(avctx, AV_LOG_ERROR, "qpel not supported by codec\n");
         return AVERROR(EINVAL);
     }
 
-    if ((s->codec_id == AV_CODEC_ID_MPEG4 ||
-         s->codec_id == AV_CODEC_ID_H263  ||
-         s->codec_id == AV_CODEC_ID_H263P) &&
+    if ((s->c.codec_id == AV_CODEC_ID_MPEG4 ||
+         s->c.codec_id == AV_CODEC_ID_H263  ||
+         s->c.codec_id == AV_CODEC_ID_H263P) &&
         (avctx->sample_aspect_ratio.num > 255 ||
          avctx->sample_aspect_ratio.den > 255)) {
         av_log(avctx, AV_LOG_WARNING,
@@ -730,44 +730,44 @@ av_cold int ff_mpv_encode_init(AVCodecContext *avctx)
                    avctx->sample_aspect_ratio.num,  avctx->sample_aspect_ratio.den, 255);
     }
 
-    if ((s->codec_id == AV_CODEC_ID_H263  ||
-         s->codec_id == AV_CODEC_ID_H263P) &&
+    if ((s->c.codec_id == AV_CODEC_ID_H263  ||
+         s->c.codec_id == AV_CODEC_ID_H263P) &&
         (avctx->width  > 2048 ||
          avctx->height > 1152 )) {
         av_log(avctx, AV_LOG_ERROR, "H.263 does not support resolutions above 2048x1152\n");
         return AVERROR(EINVAL);
     }
-    if (s->codec_id == AV_CODEC_ID_FLV1 &&
+    if (s->c.codec_id == AV_CODEC_ID_FLV1 &&
         (avctx->width  > 65535 ||
          avctx->height > 65535 )) {
         av_log(avctx, AV_LOG_ERROR, "FLV does not support resolutions above 16bit\n");
         return AVERROR(EINVAL);
     }
-    if ((s->codec_id == AV_CODEC_ID_H263  ||
-         s->codec_id == AV_CODEC_ID_H263P ||
-         s->codec_id == AV_CODEC_ID_RV20) &&
+    if ((s->c.codec_id == AV_CODEC_ID_H263  ||
+         s->c.codec_id == AV_CODEC_ID_H263P ||
+         s->c.codec_id == AV_CODEC_ID_RV20) &&
         ((avctx->width &3) ||
          (avctx->height&3) )) {
         av_log(avctx, AV_LOG_ERROR, "width and height must be a multiple of 4\n");
         return AVERROR(EINVAL);
     }
 
-    if (s->codec_id == AV_CODEC_ID_RV10 &&
+    if (s->c.codec_id == AV_CODEC_ID_RV10 &&
         (avctx->width &15 ||
          avctx->height&15 )) {
         av_log(avctx, AV_LOG_ERROR, "width and height must be a multiple of 16\n");
         return AVERROR(EINVAL);
     }
 
-    if ((s->codec_id == AV_CODEC_ID_WMV1 ||
-         s->codec_id == AV_CODEC_ID_WMV2) &&
+    if ((s->c.codec_id == AV_CODEC_ID_WMV1 ||
+         s->c.codec_id == AV_CODEC_ID_WMV2) &&
          avctx->width & 1) {
         av_log(avctx, AV_LOG_ERROR, "width must be multiple of 2\n");
         return AVERROR(EINVAL);
     }
 
     if ((avctx->flags & (AV_CODEC_FLAG_INTERLACED_DCT | AV_CODEC_FLAG_INTERLACED_ME)) &&
-        s->codec_id != AV_CODEC_ID_MPEG4 && s->codec_id != AV_CODEC_ID_MPEG2VIDEO) {
+        s->c.codec_id != AV_CODEC_ID_MPEG4 && s->c.codec_id != AV_CODEC_ID_MPEG2VIDEO) {
         av_log(avctx, AV_LOG_ERROR, "interlacing not supported by codec\n");
         return AVERROR(EINVAL);
     }
@@ -792,7 +792,7 @@ av_cold int ff_mpv_encode_init(AVCodecContext *avctx)
     }
 
     if (avctx->flags & AV_CODEC_FLAG_LOW_DELAY) {
-        if (s->codec_id != AV_CODEC_ID_MPEG2VIDEO &&
+        if (s->c.codec_id != AV_CODEC_ID_MPEG2VIDEO &&
             avctx->strict_std_compliance >= FF_COMPLIANCE_NORMAL) {
             av_log(avctx, AV_LOG_ERROR,
                    "low delay forcing is only available for mpeg2, "
@@ -826,7 +826,7 @@ av_cold int ff_mpv_encode_init(AVCodecContext *avctx)
         //return -1;
     }
 
-    if (s->mpeg_quant || s->codec_id == AV_CODEC_ID_MPEG1VIDEO || s->codec_id == AV_CODEC_ID_MPEG2VIDEO || s->codec_id == AV_CODEC_ID_MJPEG || s->codec_id == AV_CODEC_ID_AMV || s->codec_id == AV_CODEC_ID_SPEEDHQ) {
+    if (s->c.mpeg_quant || s->c.codec_id == AV_CODEC_ID_MPEG1VIDEO || s->c.codec_id == AV_CODEC_ID_MPEG2VIDEO || s->c.codec_id == AV_CODEC_ID_MJPEG || s->c.codec_id == AV_CODEC_ID_AMV || s->c.codec_id == AV_CODEC_ID_SPEEDHQ) {
         // (a + x * 3 / 8) / x
         s->intra_quant_bias = 3 << (QUANT_BIAS_SHIFT - 3);
         s->inter_quant_bias = 0;
@@ -849,148 +849,148 @@ av_cold int ff_mpv_encode_init(AVCodecContext *avctx)
         s->rtp_mode   = 1;
         /* fallthrough */
     case AV_CODEC_ID_MPEG1VIDEO:
-        s->out_format = FMT_MPEG1;
-        s->low_delay  = !!(avctx->flags & AV_CODEC_FLAG_LOW_DELAY);
-        avctx->delay  = s->low_delay ? 0 : (m->max_b_frames + 1);
+        s->c.out_format = FMT_MPEG1;
+        s->c.low_delay  = !!(avctx->flags & AV_CODEC_FLAG_LOW_DELAY);
+        avctx->delay  = s->c.low_delay ? 0 : (m->max_b_frames + 1);
         ff_mpeg1_encode_init(s);
         break;
 #endif
 #if CONFIG_MJPEG_ENCODER || CONFIG_AMV_ENCODER
     case AV_CODEC_ID_MJPEG:
     case AV_CODEC_ID_AMV:
-        s->out_format = FMT_MJPEG;
+        s->c.out_format = FMT_MJPEG;
         m->intra_only = 1; /* force intra only for jpeg */
         avctx->delay = 0;
-        s->low_delay = 1;
+        s->c.low_delay = 1;
         break;
 #endif
     case AV_CODEC_ID_SPEEDHQ:
-        s->out_format = FMT_SPEEDHQ;
+        s->c.out_format = FMT_SPEEDHQ;
         m->intra_only = 1; /* force intra only for SHQ */
         avctx->delay = 0;
-        s->low_delay = 1;
+        s->c.low_delay = 1;
         break;
     case AV_CODEC_ID_H261:
-        s->out_format = FMT_H261;
+        s->c.out_format = FMT_H261;
         avctx->delay  = 0;
-        s->low_delay  = 1;
+        s->c.low_delay  = 1;
         s->rtp_mode   = 0; /* Sliced encoding not supported */
         break;
     case AV_CODEC_ID_H263:
         if (!CONFIG_H263_ENCODER)
             return AVERROR_ENCODER_NOT_FOUND;
         if (ff_match_2uint16(ff_h263_format, FF_ARRAY_ELEMS(ff_h263_format),
-                             s->width, s->height) == 8) {
+                             s->c.width, s->c.height) == 8) {
             av_log(avctx, AV_LOG_ERROR,
                    "The specified picture size of %dx%d is not valid for "
                    "the H.263 codec.\nValid sizes are 128x96, 176x144, "
                    "352x288, 704x576, and 1408x1152. "
-                   "Try H.263+.\n", s->width, s->height);
+                   "Try H.263+.\n", s->c.width, s->c.height);
             return AVERROR(EINVAL);
         }
-        s->out_format = FMT_H263;
+        s->c.out_format = FMT_H263;
         avctx->delay  = 0;
-        s->low_delay  = 1;
+        s->c.low_delay  = 1;
         break;
     case AV_CODEC_ID_H263P:
-        s->out_format = FMT_H263;
-        s->h263_plus  = 1;
+        s->c.out_format = FMT_H263;
+        s->c.h263_plus  = 1;
         /* Fx */
-        s->h263_aic        = (avctx->flags & AV_CODEC_FLAG_AC_PRED) ? 1 : 0;
-        s->modified_quant  = s->h263_aic;
-        s->loop_filter     = (avctx->flags & AV_CODEC_FLAG_LOOP_FILTER) ? 1 : 0;
-        s->unrestricted_mv = s->obmc || s->loop_filter || s->umvplus;
-        s->flipflop_rounding = 1;
+        s->c.h263_aic        = (avctx->flags & AV_CODEC_FLAG_AC_PRED) ? 1 : 0;
+        s->c.modified_quant  = s->c.h263_aic;
+        s->c.loop_filter     = (avctx->flags & AV_CODEC_FLAG_LOOP_FILTER) ? 1 : 0;
+        s->c.unrestricted_mv = s->c.obmc || s->c.loop_filter || s->c.umvplus;
+        s->c.flipflop_rounding = 1;
 
         /* /Fx */
         /* These are just to be sure */
         avctx->delay = 0;
-        s->low_delay = 1;
+        s->c.low_delay = 1;
         break;
     case AV_CODEC_ID_FLV1:
-        s->out_format      = FMT_H263;
-        s->h263_flv        = 2; /* format = 1; 11-bit codes */
-        s->unrestricted_mv = 1;
+        s->c.out_format      = FMT_H263;
+        s->c.h263_flv        = 2; /* format = 1; 11-bit codes */
+        s->c.unrestricted_mv = 1;
         s->rtp_mode  = 0; /* don't allow GOB */
         avctx->delay = 0;
-        s->low_delay = 1;
+        s->c.low_delay = 1;
         break;
 #if CONFIG_RV10_ENCODER
     case AV_CODEC_ID_RV10:
         m->encode_picture_header = ff_rv10_encode_picture_header;
-        s->out_format = FMT_H263;
+        s->c.out_format = FMT_H263;
         avctx->delay  = 0;
-        s->low_delay  = 1;
+        s->c.low_delay  = 1;
         break;
 #endif
 #if CONFIG_RV20_ENCODER
     case AV_CODEC_ID_RV20:
         m->encode_picture_header = ff_rv20_encode_picture_header;
-        s->out_format      = FMT_H263;
+        s->c.out_format      = FMT_H263;
         avctx->delay       = 0;
-        s->low_delay       = 1;
-        s->modified_quant  = 1;
-        s->h263_aic        = 1;
-        s->h263_plus       = 1;
-        s->loop_filter     = 1;
-        s->unrestricted_mv = 0;
+        s->c.low_delay       = 1;
+        s->c.modified_quant  = 1;
+        s->c.h263_aic        = 1;
+        s->c.h263_plus       = 1;
+        s->c.loop_filter     = 1;
+        s->c.unrestricted_mv = 0;
         break;
 #endif
     case AV_CODEC_ID_MPEG4:
-        s->out_format      = FMT_H263;
-        s->h263_pred       = 1;
-        s->unrestricted_mv = 1;
-        s->flipflop_rounding = 1;
-        s->low_delay       = m->max_b_frames ? 0 : 1;
-        avctx->delay       = s->low_delay ? 0 : (m->max_b_frames + 1);
+        s->c.out_format      = FMT_H263;
+        s->c.h263_pred       = 1;
+        s->c.unrestricted_mv = 1;
+        s->c.flipflop_rounding = 1;
+        s->c.low_delay       = m->max_b_frames ? 0 : 1;
+        avctx->delay       = s->c.low_delay ? 0 : (m->max_b_frames + 1);
         break;
     case AV_CODEC_ID_MSMPEG4V2:
-        s->out_format      = FMT_H263;
-        s->h263_pred       = 1;
-        s->unrestricted_mv = 1;
-        s->msmpeg4_version = MSMP4_V2;
+        s->c.out_format      = FMT_H263;
+        s->c.h263_pred       = 1;
+        s->c.unrestricted_mv = 1;
+        s->c.msmpeg4_version = MSMP4_V2;
         avctx->delay       = 0;
-        s->low_delay       = 1;
+        s->c.low_delay       = 1;
         break;
     case AV_CODEC_ID_MSMPEG4V3:
-        s->out_format        = FMT_H263;
-        s->h263_pred         = 1;
-        s->unrestricted_mv   = 1;
-        s->msmpeg4_version   = MSMP4_V3;
-        s->flipflop_rounding = 1;
+        s->c.out_format        = FMT_H263;
+        s->c.h263_pred         = 1;
+        s->c.unrestricted_mv   = 1;
+        s->c.msmpeg4_version   = MSMP4_V3;
+        s->c.flipflop_rounding = 1;
         avctx->delay         = 0;
-        s->low_delay         = 1;
+        s->c.low_delay         = 1;
         break;
     case AV_CODEC_ID_WMV1:
-        s->out_format        = FMT_H263;
-        s->h263_pred         = 1;
-        s->unrestricted_mv   = 1;
-        s->msmpeg4_version   = MSMP4_WMV1;
-        s->flipflop_rounding = 1;
+        s->c.out_format        = FMT_H263;
+        s->c.h263_pred         = 1;
+        s->c.unrestricted_mv   = 1;
+        s->c.msmpeg4_version   = MSMP4_WMV1;
+        s->c.flipflop_rounding = 1;
         avctx->delay         = 0;
-        s->low_delay         = 1;
+        s->c.low_delay         = 1;
         break;
     case AV_CODEC_ID_WMV2:
-        s->out_format        = FMT_H263;
-        s->h263_pred         = 1;
-        s->unrestricted_mv   = 1;
-        s->msmpeg4_version   = MSMP4_WMV2;
-        s->flipflop_rounding = 1;
+        s->c.out_format        = FMT_H263;
+        s->c.h263_pred         = 1;
+        s->c.unrestricted_mv   = 1;
+        s->c.msmpeg4_version   = MSMP4_WMV2;
+        s->c.flipflop_rounding = 1;
         avctx->delay         = 0;
-        s->low_delay         = 1;
+        s->c.low_delay         = 1;
         break;
     default:
         return AVERROR(EINVAL);
     }
 
-    avctx->has_b_frames = !s->low_delay;
+    avctx->has_b_frames = !s->c.low_delay;
 
-    s->encoding = 1;
+    s->c.encoding = 1;
 
-    s->progressive_frame    =
-    s->progressive_sequence = !(avctx->flags & (AV_CODEC_FLAG_INTERLACED_DCT |
-                                                AV_CODEC_FLAG_INTERLACED_ME) ||
-                                s->alternate_scan);
+    s->c.progressive_frame    =
+    s->c.progressive_sequence = !(avctx->flags & (AV_CODEC_FLAG_INTERLACED_DCT |
+                                                  AV_CODEC_FLAG_INTERLACED_ME) ||
+                                s->c.alternate_scan);
 
     if (avctx->flags & AV_CODEC_FLAG_PSNR || avctx->mb_decision == FF_MB_DECISION_RD ||
         m->frame_skip_threshold || m->frame_skip_factor) {
@@ -1009,8 +1009,10 @@ av_cold int ff_mpv_encode_init(AVCodecContext *avctx)
         m->lmin = m->lmax;
     }
 
-    /* init */
-    ff_mpv_idct_init(s);
+    /* ff_mpv_common_init() will copy (memdup) the contents of the main
+     * slice context to the other slice contexts, so we initialize various
+     * fields of it before calling ff_mpv_common_init(). */
+    ff_mpv_idct_init(&s->c);
     ff_fdctdsp_init(&s->fdsp, avctx);
     ff_mpegvideoencdsp_init(&s->mpvencdsp, avctx);
     ff_pixblockdsp_init(&s->pdsp, avctx);
@@ -1020,7 +1022,7 @@ av_cold int ff_mpv_encode_init(AVCodecContext *avctx)
 
     if (!(avctx->stats_out = av_mallocz(256))               ||
         !(s->new_pic = av_frame_alloc()) ||
-        !(s->picture_pool = ff_mpv_alloc_pic_pool(0)))
+        !(s->c.picture_pool = ff_mpv_alloc_pic_pool(0)))
         return AVERROR(ENOMEM);
 
     ret = init_matrices(m, avctx);
@@ -1029,35 +1031,36 @@ av_cold int ff_mpv_encode_init(AVCodecContext *avctx)
 
     ff_dct_encode_init(s);
 
-    if (s->mpeg_quant || s->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
-        s->dct_unquantize_intra = s->dct_unquantize_mpeg2_intra;
-        s->dct_unquantize_inter = s->dct_unquantize_mpeg2_inter;
-    } else if (s->out_format == FMT_H263 || s->out_format == FMT_H261) {
-        s->dct_unquantize_intra = s->dct_unquantize_h263_intra;
-        s->dct_unquantize_inter = s->dct_unquantize_h263_inter;
+    if (s->c.mpeg_quant || s->c.codec_id == AV_CODEC_ID_MPEG2VIDEO) {
+        s->c.dct_unquantize_intra = s->c.dct_unquantize_mpeg2_intra;
+        s->c.dct_unquantize_inter = s->c.dct_unquantize_mpeg2_inter;
+    } else if (s->c.out_format == FMT_H263 || s->c.out_format == FMT_H261) {
+        s->c.dct_unquantize_intra = s->c.dct_unquantize_h263_intra;
+        s->c.dct_unquantize_inter = s->c.dct_unquantize_h263_inter;
     } else {
-        s->dct_unquantize_intra = s->dct_unquantize_mpeg1_intra;
-        s->dct_unquantize_inter = s->dct_unquantize_mpeg1_inter;
+        s->c.dct_unquantize_intra = s->c.dct_unquantize_mpeg1_intra;
+        s->c.dct_unquantize_inter = s->c.dct_unquantize_mpeg1_inter;
     }
 
-    if (CONFIG_H263_ENCODER && s->out_format == FMT_H263) {
+    if (CONFIG_H263_ENCODER && s->c.out_format == FMT_H263) {
         ff_h263_encode_init(m);
 #if CONFIG_MSMPEG4ENC
-        if (s->msmpeg4_version != MSMP4_UNUSED)
+        if (s->c.msmpeg4_version != MSMP4_UNUSED)
             ff_msmpeg4_encode_init(m);
 #endif
     }
 
-    ret = ff_mpv_common_init(s);
+    s->c.slice_ctx_size = sizeof(*s);
+    ret = ff_mpv_common_init(&s->c);
     if (ret < 0)
         return ret;
 
-    if (s->slice_context_count > 1) {
-        for (int i = 0; i < s->slice_context_count; ++i) {
-            s->thread_context[i]->rtp_mode = 1;
+    if (s->c.slice_context_count > 1) {
+        for (int i = 0; i < s->c.slice_context_count; ++i) {
+            s->c.enc_contexts[i]->rtp_mode = 1;
 
             if (avctx->codec_id == AV_CODEC_ID_H263P)
-                s->thread_context[i]->h263_slice_structured = 1;
+                s->c.enc_contexts[i]->c.h263_slice_structured = 1;
         }
     }
 
@@ -1076,8 +1079,8 @@ av_cold int ff_mpv_encode_init(AVCodecContext *avctx)
                 return AVERROR(ENOMEM);
 
             m->tmp_frames[i]->format = AV_PIX_FMT_YUV420P;
-            m->tmp_frames[i]->width  = s->width  >> m->brd_scale;
-            m->tmp_frames[i]->height = s->height >> m->brd_scale;
+            m->tmp_frames[i]->width  = s->c.width  >> m->brd_scale;
+            m->tmp_frames[i]->height = s->c.height >> m->brd_scale;
 
             ret = av_frame_get_buffer(m->tmp_frames[i], 0);
             if (ret < 0)
@@ -1099,12 +1102,12 @@ av_cold int ff_mpv_encode_init(AVCodecContext *avctx)
 av_cold int ff_mpv_encode_end(AVCodecContext *avctx)
 {
     MPVMainEncContext *const m = avctx->priv_data;
-    MpegEncContext    *const s = &m->s;
+    MPVEncContext    *const s = &m->s;
 
     ff_rate_control_uninit(&m->rc_context);
 
-    ff_mpv_common_end(s);
-    av_refstruct_pool_uninit(&s->picture_pool);
+    ff_mpv_common_end(&s->c);
+    av_refstruct_pool_uninit(&s->c.picture_pool);
 
     for (int i = 0; i < MPVENC_MAX_B_FRAMES + 1; i++) {
         av_refstruct_unref(&m->input_picture[i]);
@@ -1133,88 +1136,88 @@ av_cold int ff_mpv_encode_end(AVCodecContext *avctx)
 }
 
 /* put block[] to dest[] */
-static inline void put_dct(MpegEncContext *s,
+static inline void put_dct(MPVEncContext *const s,
                            int16_t *block, int i, uint8_t *dest, int line_size, int qscale)
 {
-    s->dct_unquantize_intra(s, block, i, qscale);
-    s->idsp.idct_put(dest, line_size, block);
+    s->c.dct_unquantize_intra(&s->c, block, i, qscale);
+    s->c.idsp.idct_put(dest, line_size, block);
 }
 
-static inline void add_dequant_dct(MpegEncContext *s,
+static inline void add_dequant_dct(MPVEncContext *const s,
                            int16_t *block, int i, uint8_t *dest, int line_size, int qscale)
 {
-    if (s->block_last_index[i] >= 0) {
-        s->dct_unquantize_inter(s, block, i, qscale);
+    if (s->c.block_last_index[i] >= 0) {
+        s->c.dct_unquantize_inter(&s->c, block, i, qscale);
 
-        s->idsp.idct_add(dest, line_size, block);
+        s->c.idsp.idct_add(dest, line_size, block);
     }
 }
 
 /**
  * Performs dequantization and IDCT (if necessary)
  */
-static void mpv_reconstruct_mb(MpegEncContext *s, int16_t block[12][64])
+static void mpv_reconstruct_mb(MPVEncContext *const s, int16_t block[12][64])
 {
-    if (s->avctx->debug & FF_DEBUG_DCT_COEFF) {
+    if (s->c.avctx->debug & FF_DEBUG_DCT_COEFF) {
        /* print DCT coefficients */
-       av_log(s->avctx, AV_LOG_DEBUG, "DCT coeffs of MB at %dx%d:\n", s->mb_x, s->mb_y);
+       av_log(s->c.avctx, AV_LOG_DEBUG, "DCT coeffs of MB at %dx%d:\n", s->c.mb_x, s->c.mb_y);
        for (int i = 0; i < 6; i++) {
            for (int j = 0; j < 64; j++) {
-               av_log(s->avctx, AV_LOG_DEBUG, "%5d",
-                      block[i][s->idsp.idct_permutation[j]]);
+               av_log(s->c.avctx, AV_LOG_DEBUG, "%5d",
+                      block[i][s->c.idsp.idct_permutation[j]]);
            }
-           av_log(s->avctx, AV_LOG_DEBUG, "\n");
+           av_log(s->c.avctx, AV_LOG_DEBUG, "\n");
        }
     }
 
-    if ((1 << s->pict_type) & s->frame_reconstruction_bitfield) {
-        uint8_t *dest_y = s->dest[0], *dest_cb = s->dest[1], *dest_cr = s->dest[2];
+    if ((1 << s->c.pict_type) & s->frame_reconstruction_bitfield) {
+        uint8_t *dest_y = s->c.dest[0], *dest_cb = s->c.dest[1], *dest_cr = s->c.dest[2];
         int dct_linesize, dct_offset;
-        const int linesize   = s->cur_pic.linesize[0];
-        const int uvlinesize = s->cur_pic.linesize[1];
+        const int linesize   = s->c.cur_pic.linesize[0];
+        const int uvlinesize = s->c.cur_pic.linesize[1];
         const int block_size = 8;
 
-        dct_linesize = linesize << s->interlaced_dct;
-        dct_offset   = s->interlaced_dct ? linesize : linesize * block_size;
+        dct_linesize = linesize << s->c.interlaced_dct;
+        dct_offset   = s->c.interlaced_dct ? linesize : linesize * block_size;
 
-        if (!s->mb_intra) {
+        if (!s->c.mb_intra) {
             /* No MC, as that was already done otherwise */
-            add_dequant_dct(s, block[0], 0, dest_y                          , dct_linesize, s->qscale);
-            add_dequant_dct(s, block[1], 1, dest_y              + block_size, dct_linesize, s->qscale);
-            add_dequant_dct(s, block[2], 2, dest_y + dct_offset             , dct_linesize, s->qscale);
-            add_dequant_dct(s, block[3], 3, dest_y + dct_offset + block_size, dct_linesize, s->qscale);
-
-            if (!CONFIG_GRAY || !(s->avctx->flags & AV_CODEC_FLAG_GRAY)) {
-                if (s->chroma_y_shift) {
-                    add_dequant_dct(s, block[4], 4, dest_cb, uvlinesize, s->chroma_qscale);
-                    add_dequant_dct(s, block[5], 5, dest_cr, uvlinesize, s->chroma_qscale);
+            add_dequant_dct(s, block[0], 0, dest_y                          , dct_linesize, s->c.qscale);
+            add_dequant_dct(s, block[1], 1, dest_y              + block_size, dct_linesize, s->c.qscale);
+            add_dequant_dct(s, block[2], 2, dest_y + dct_offset             , dct_linesize, s->c.qscale);
+            add_dequant_dct(s, block[3], 3, dest_y + dct_offset + block_size, dct_linesize, s->c.qscale);
+
+            if (!CONFIG_GRAY || !(s->c.avctx->flags & AV_CODEC_FLAG_GRAY)) {
+                if (s->c.chroma_y_shift) {
+                    add_dequant_dct(s, block[4], 4, dest_cb, uvlinesize, s->c.chroma_qscale);
+                    add_dequant_dct(s, block[5], 5, dest_cr, uvlinesize, s->c.chroma_qscale);
                 } else {
                     dct_linesize >>= 1;
                     dct_offset   >>= 1;
-                    add_dequant_dct(s, block[4], 4, dest_cb,              dct_linesize, s->chroma_qscale);
-                    add_dequant_dct(s, block[5], 5, dest_cr,              dct_linesize, s->chroma_qscale);
-                    add_dequant_dct(s, block[6], 6, dest_cb + dct_offset, dct_linesize, s->chroma_qscale);
-                    add_dequant_dct(s, block[7], 7, dest_cr + dct_offset, dct_linesize, s->chroma_qscale);
+                    add_dequant_dct(s, block[4], 4, dest_cb,              dct_linesize, s->c.chroma_qscale);
+                    add_dequant_dct(s, block[5], 5, dest_cr,              dct_linesize, s->c.chroma_qscale);
+                    add_dequant_dct(s, block[6], 6, dest_cb + dct_offset, dct_linesize, s->c.chroma_qscale);
+                    add_dequant_dct(s, block[7], 7, dest_cr + dct_offset, dct_linesize, s->c.chroma_qscale);
                 }
             }
         } else {
             /* dct only in intra block */
-            put_dct(s, block[0], 0, dest_y                          , dct_linesize, s->qscale);
-            put_dct(s, block[1], 1, dest_y              + block_size, dct_linesize, s->qscale);
-            put_dct(s, block[2], 2, dest_y + dct_offset             , dct_linesize, s->qscale);
-            put_dct(s, block[3], 3, dest_y + dct_offset + block_size, dct_linesize, s->qscale);
-
-            if (!CONFIG_GRAY || !(s->avctx->flags & AV_CODEC_FLAG_GRAY)) {
-                if (s->chroma_y_shift) {
-                    put_dct(s, block[4], 4, dest_cb, uvlinesize, s->chroma_qscale);
-                    put_dct(s, block[5], 5, dest_cr, uvlinesize, s->chroma_qscale);
+            put_dct(s, block[0], 0, dest_y                          , dct_linesize, s->c.qscale);
+            put_dct(s, block[1], 1, dest_y              + block_size, dct_linesize, s->c.qscale);
+            put_dct(s, block[2], 2, dest_y + dct_offset             , dct_linesize, s->c.qscale);
+            put_dct(s, block[3], 3, dest_y + dct_offset + block_size, dct_linesize, s->c.qscale);
+
+            if (!CONFIG_GRAY || !(s->c.avctx->flags & AV_CODEC_FLAG_GRAY)) {
+                if (s->c.chroma_y_shift) {
+                    put_dct(s, block[4], 4, dest_cb, uvlinesize, s->c.chroma_qscale);
+                    put_dct(s, block[5], 5, dest_cr, uvlinesize, s->c.chroma_qscale);
                 } else {
                     dct_offset   >>= 1;
                     dct_linesize >>= 1;
-                    put_dct(s, block[4], 4, dest_cb,              dct_linesize, s->chroma_qscale);
-                    put_dct(s, block[5], 5, dest_cr,              dct_linesize, s->chroma_qscale);
-                    put_dct(s, block[6], 6, dest_cb + dct_offset, dct_linesize, s->chroma_qscale);
-                    put_dct(s, block[7], 7, dest_cr + dct_offset, dct_linesize, s->chroma_qscale);
+                    put_dct(s, block[4], 4, dest_cb,              dct_linesize, s->c.chroma_qscale);
+                    put_dct(s, block[5], 5, dest_cr,              dct_linesize, s->c.chroma_qscale);
+                    put_dct(s, block[6], 6, dest_cb + dct_offset, dct_linesize, s->c.chroma_qscale);
+                    put_dct(s, block[7], 7, dest_cr + dct_offset, dct_linesize, s->c.chroma_qscale);
                 }
             }
         }
@@ -1235,14 +1238,14 @@ static int get_sae(const uint8_t *src, int ref, int stride)
     return acc;
 }
 
-static int get_intra_count(MpegEncContext *s, const uint8_t *src,
+static int get_intra_count(MPVEncContext *const s, const uint8_t *src,
                            const uint8_t *ref, int stride)
 {
     int x, y, w, h;
     int acc = 0;
 
-    w = s->width  & ~15;
-    h = s->height & ~15;
+    w = s->c.width  & ~15;
+    h = s->c.height & ~15;
 
     for (y = 0; y < h; y += 16) {
         for (x = 0; x < w; x += 16) {
@@ -1262,9 +1265,9 @@ static int get_intra_count(MpegEncContext *s, const uint8_t *src,
  * Allocates new buffers for an AVFrame and copies the properties
  * from another AVFrame.
  */
-static int prepare_picture(MpegEncContext *s, AVFrame *f, const AVFrame *props_frame)
+static int prepare_picture(MPVEncContext *const s, AVFrame *f, const AVFrame *props_frame)
 {
-    AVCodecContext *avctx = s->avctx;
+    AVCodecContext *avctx = s->c.avctx;
     int ret;
 
     f->width  = avctx->width  + 2 * EDGE_WIDTH;
@@ -1274,14 +1277,14 @@ static int prepare_picture(MpegEncContext *s, AVFrame *f, const AVFrame *props_f
     if (ret < 0)
         return ret;
 
-    ret = ff_mpv_pic_check_linesize(avctx, f, &s->linesize, &s->uvlinesize);
+    ret = ff_mpv_pic_check_linesize(avctx, f, &s->c.linesize, &s->c.uvlinesize);
     if (ret < 0)
         return ret;
 
     for (int i = 0; f->data[i]; i++) {
-        int offset = (EDGE_WIDTH >> (i ? s->chroma_y_shift : 0)) *
+        int offset = (EDGE_WIDTH >> (i ? s->c.chroma_y_shift : 0)) *
                      f->linesize[i] +
-                     (EDGE_WIDTH >> (i ? s->chroma_x_shift : 0));
+                     (EDGE_WIDTH >> (i ? s->c.chroma_x_shift : 0));
         f->data[i] += offset;
     }
     f->width  = avctx->width;
@@ -1296,12 +1299,12 @@ static int prepare_picture(MpegEncContext *s, AVFrame *f, const AVFrame *props_f
 
 static int load_input_picture(MPVMainEncContext *const m, const AVFrame *pic_arg)
 {
-    MpegEncContext *const s = &m->s;
+    MPVEncContext *const s = &m->s;
     MPVPicture *pic = NULL;
     int64_t pts;
     int display_picture_number = 0, ret;
     int encoding_delay = m->max_b_frames ? m->max_b_frames
-                                         : (s->low_delay ? 0 : 1);
+                                         : (s->c.low_delay ? 0 : 1);
     int flush_offset = 1;
     int direct = 1;
 
@@ -1316,13 +1319,13 @@ static int load_input_picture(MPVMainEncContext *const m, const AVFrame *pic_arg
                 int64_t last = m->user_specified_pts;
 
                 if (pts <= last) {
-                    av_log(s->avctx, AV_LOG_ERROR,
+                    av_log(s->c.avctx, AV_LOG_ERROR,
                            "Invalid pts (%"PRId64") <= last (%"PRId64")\n",
                            pts, last);
                     return AVERROR(EINVAL);
                 }
 
-                if (!s->low_delay && display_picture_number == 1)
+                if (!s->c.low_delay && display_picture_number == 1)
                     m->dts_delta = pts - last;
             }
             m->user_specified_pts = pts;
@@ -1330,7 +1333,7 @@ static int load_input_picture(MPVMainEncContext *const m, const AVFrame *pic_arg
             if (m->user_specified_pts != AV_NOPTS_VALUE) {
                 m->user_specified_pts =
                 pts = m->user_specified_pts + 1;
-                av_log(s->avctx, AV_LOG_INFO,
+                av_log(s->c.avctx, AV_LOG_INFO,
                        "Warning: AVFrame.pts=? trying to guess (%"PRId64")\n",
                        pts);
             } else {
@@ -1338,21 +1341,21 @@ static int load_input_picture(MPVMainEncContext *const m, const AVFrame *pic_arg
             }
         }
 
-        if (pic_arg->linesize[0] != s->linesize ||
-            pic_arg->linesize[1] != s->uvlinesize ||
-            pic_arg->linesize[2] != s->uvlinesize)
+        if (pic_arg->linesize[0] != s->c.linesize ||
+            pic_arg->linesize[1] != s->c.uvlinesize ||
+            pic_arg->linesize[2] != s->c.uvlinesize)
             direct = 0;
-        if ((s->width & 15) || (s->height & 15))
+        if ((s->c.width & 15) || (s->c.height & 15))
             direct = 0;
         if (((intptr_t)(pic_arg->data[0])) & (STRIDE_ALIGN-1))
             direct = 0;
-        if (s->linesize & (STRIDE_ALIGN-1))
+        if (s->c.linesize & (STRIDE_ALIGN-1))
             direct = 0;
 
-        ff_dlog(s->avctx, "%d %d %"PTRDIFF_SPECIFIER" %"PTRDIFF_SPECIFIER"\n", pic_arg->linesize[0],
-                pic_arg->linesize[1], s->linesize, s->uvlinesize);
+        ff_dlog(s->c.avctx, "%d %d %"PTRDIFF_SPECIFIER" %"PTRDIFF_SPECIFIER"\n", pic_arg->linesize[0],
+                pic_arg->linesize[1], s->c.linesize, s->c.uvlinesize);
 
-        pic = av_refstruct_pool_get(s->picture_pool);
+        pic = av_refstruct_pool_get(s->c.picture_pool);
         if (!pic)
             return AVERROR(ENOMEM);
 
@@ -1367,21 +1370,21 @@ static int load_input_picture(MPVMainEncContext *const m, const AVFrame *pic_arg
 
             for (int i = 0; i < 3; i++) {
                 ptrdiff_t src_stride = pic_arg->linesize[i];
-                ptrdiff_t dst_stride = i ? s->uvlinesize : s->linesize;
-                int h_shift = i ? s->chroma_x_shift : 0;
-                int v_shift = i ? s->chroma_y_shift : 0;
-                int w = AV_CEIL_RSHIFT(s->width , h_shift);
-                int h = AV_CEIL_RSHIFT(s->height, v_shift);
+                ptrdiff_t dst_stride = i ? s->c.uvlinesize : s->c.linesize;
+                int h_shift = i ? s->c.chroma_x_shift : 0;
+                int v_shift = i ? s->c.chroma_y_shift : 0;
+                int w = AV_CEIL_RSHIFT(s->c.width , h_shift);
+                int h = AV_CEIL_RSHIFT(s->c.height, v_shift);
                 const uint8_t *src = pic_arg->data[i];
                 uint8_t *dst = pic->f->data[i];
                 int vpad = 16;
 
-                if (   s->codec_id == AV_CODEC_ID_MPEG2VIDEO
-                    && !s->progressive_sequence
-                    && FFALIGN(s->height, 32) - s->height > 16)
+                if (   s->c.codec_id == AV_CODEC_ID_MPEG2VIDEO
+                    && !s->c.progressive_sequence
+                    && FFALIGN(s->c.height, 32) - s->c.height > 16)
                     vpad = 32;
 
-                if (!s->avctx->rc_buffer_size)
+                if (!s->c.avctx->rc_buffer_size)
                     dst += INPLACE_OFFSET;
 
                 if (src_stride == dst_stride)
@@ -1395,7 +1398,7 @@ static int load_input_picture(MPVMainEncContext *const m, const AVFrame *pic_arg
                         src += src_stride;
                     }
                 }
-                if ((s->width & 15) || (s->height & (vpad-1))) {
+                if ((s->c.width & 15) || (s->c.height & (vpad-1))) {
                     s->mpvencdsp.draw_edges(dst, dst_stride,
                                             w, h,
                                             16 >> h_shift,
@@ -1438,7 +1441,7 @@ fail:
 static int skip_check(MPVMainEncContext *const m,
                       const MPVPicture *p, const MPVPicture *ref)
 {
-    MpegEncContext *const s = &m->s;
+    MPVEncContext *const s = &m->s;
     int x, y, plane;
     int score = 0;
     int64_t score64 = 0;
@@ -1446,8 +1449,8 @@ static int skip_check(MPVMainEncContext *const m,
     for (plane = 0; plane < 3; plane++) {
         const int stride = p->f->linesize[plane];
         const int bw = plane ? 1 : 2;
-        for (y = 0; y < s->mb_height * bw; y++) {
-            for (x = 0; x < s->mb_width * bw; x++) {
+        for (y = 0; y < s->c.mb_height * bw; y++) {
+            for (x = 0; x < s->c.mb_width * bw; x++) {
                 int off = p->shared ? 0 : 16;
                 const uint8_t *dptr = p->f->data[plane] + 8 * (x + y * stride) + off;
                 const uint8_t *rptr = ref->f->data[plane] + 8 * (x + y * stride);
@@ -1468,12 +1471,12 @@ static int skip_check(MPVMainEncContext *const m,
     if (score)
         score64 = score;
     if (m->frame_skip_exp < 0)
-        score64 = pow(score64 / (double)(s->mb_width * s->mb_height),
+        score64 = pow(score64 / (double)(s->c.mb_width * s->c.mb_height),
                       -1.0/m->frame_skip_exp);
 
     if (score64 < m->frame_skip_threshold)
         return 1;
-    if (score64 < ((m->frame_skip_factor * (int64_t) s->lambda) >> 8))
+    if (score64 < ((m->frame_skip_factor * (int64_t) s->c.lambda) >> 8))
         return 1;
     return 0;
 }
@@ -1501,11 +1504,11 @@ static int encode_frame(AVCodecContext *c, const AVFrame *frame, AVPacket *pkt)
 
 static int estimate_best_b_count(MPVMainEncContext *const m)
 {
-    MpegEncContext *const s = &m->s;
+    MPVEncContext *const s = &m->s;
     AVPacket *pkt;
     const int scale = m->brd_scale;
-    int width  = s->width  >> scale;
-    int height = s->height >> scale;
+    int width  = s->c.width  >> scale;
+    int height = s->c.height >> scale;
     int out_size, p_lambda, b_lambda, lambda2;
     int64_t best_rd  = INT64_MAX;
     int best_b_count = -1;
@@ -1519,7 +1522,7 @@ static int estimate_best_b_count(MPVMainEncContext *const m)
 
     //emms_c();
     p_lambda = m->last_lambda_for[AV_PICTURE_TYPE_P];
-    //p_lambda * FFABS(s->avctx->b_quant_factor) + s->avctx->b_quant_offset;
+    //p_lambda * FFABS(s->c.avctx->b_quant_factor) + s->c.avctx->b_quant_offset;
     b_lambda = m->last_lambda_for[AV_PICTURE_TYPE_B];
     if (!b_lambda) // FIXME we should do this somewhere else
         b_lambda = p_lambda;
@@ -1528,7 +1531,7 @@ static int estimate_best_b_count(MPVMainEncContext *const m)
 
     for (int i = 0; i < m->max_b_frames + 2; i++) {
         const MPVPicture *pre_input_ptr = i ? m->input_picture[i - 1] :
-                                           s->next_pic.ptr;
+                                           s->c.next_pic.ptr;
 
         if (pre_input_ptr) {
             const uint8_t *data[4];
@@ -1574,16 +1577,16 @@ static int estimate_best_b_count(MPVMainEncContext *const m)
         c->width        = width;
         c->height       = height;
         c->flags        = AV_CODEC_FLAG_QSCALE | AV_CODEC_FLAG_PSNR;
-        c->flags       |= s->avctx->flags & AV_CODEC_FLAG_QPEL;
-        c->mb_decision  = s->avctx->mb_decision;
-        c->me_cmp       = s->avctx->me_cmp;
-        c->mb_cmp       = s->avctx->mb_cmp;
-        c->me_sub_cmp   = s->avctx->me_sub_cmp;
+        c->flags       |= s->c.avctx->flags & AV_CODEC_FLAG_QPEL;
+        c->mb_decision  = s->c.avctx->mb_decision;
+        c->me_cmp       = s->c.avctx->me_cmp;
+        c->mb_cmp       = s->c.avctx->mb_cmp;
+        c->me_sub_cmp   = s->c.avctx->me_sub_cmp;
         c->pix_fmt      = AV_PIX_FMT_YUV420P;
-        c->time_base    = s->avctx->time_base;
+        c->time_base    = s->c.avctx->time_base;
         c->max_b_frames = m->max_b_frames;
 
-        ret = avcodec_open2(c, s->avctx->codec, NULL);
+        ret = avcodec_open2(c, s->c.avctx->codec, NULL);
         if (ret < 0)
             goto fail;
 
@@ -1654,7 +1657,7 @@ fail:
  */
 static int set_bframe_chain_length(MPVMainEncContext *const m)
 {
-    MpegEncContext *const s = &m->s;
+    MPVEncContext *const s = &m->s;
 
     /* Either nothing to do or can't do anything */
     if (m->reordered_input_picture[0] || !m->input_picture[0])
@@ -1663,8 +1666,8 @@ static int set_bframe_chain_length(MPVMainEncContext *const m)
     /* set next picture type & ordering */
     if (m->frame_skip_threshold || m->frame_skip_factor) {
         if (m->picture_in_gop_number < m->gop_size &&
-            s->next_pic.ptr &&
-            skip_check(m, m->input_picture[0], s->next_pic.ptr)) {
+            s->c.next_pic.ptr &&
+            skip_check(m, m->input_picture[0], s->c.next_pic.ptr)) {
             // FIXME check that the gop check above is +-1 correct
             av_refstruct_unref(&m->input_picture[0]);
 
@@ -1675,7 +1678,7 @@ static int set_bframe_chain_length(MPVMainEncContext *const m)
     }
 
     if (/* m->picture_in_gop_number >= m->gop_size || */
-        !s->next_pic.ptr || m->intra_only) {
+        !s->c.next_pic.ptr || m->intra_only) {
         m->reordered_input_picture[0] = m->input_picture[0];
         m->input_picture[0] = NULL;
         m->reordered_input_picture[0]->f->pict_type = AV_PICTURE_TYPE_I;
@@ -1684,7 +1687,7 @@ static int set_bframe_chain_length(MPVMainEncContext *const m)
     } else {
         int b_frames = 0;
 
-        if (s->avctx->flags & AV_CODEC_FLAG_PASS2) {
+        if (s->c.avctx->flags & AV_CODEC_FLAG_PASS2) {
             for (int i = 0; i < m->max_b_frames + 1; i++) {
                 int pict_num = m->input_picture[0]->display_picture_number + i;
 
@@ -1712,13 +1715,13 @@ static int set_bframe_chain_length(MPVMainEncContext *const m)
                         get_intra_count(s,
                                         m->input_picture[i    ]->f->data[0],
                                         m->input_picture[i - 1]->f->data[0],
-                                        s->linesize) + 1;
+                                        s->c.linesize) + 1;
                 }
             }
             for (int i = 0; i < m->max_b_frames + 1; i++) {
                 if (!m->input_picture[i] ||
                     m->input_picture[i]->b_frame_score - 1 >
-                        s->mb_num / m->b_sensitivity) {
+                        s->c.mb_num / m->b_sensitivity) {
                     b_frames = FFMAX(0, i - 1);
                     break;
                 }
@@ -1744,7 +1747,7 @@ static int set_bframe_chain_length(MPVMainEncContext *const m)
         }
         if (m->input_picture[b_frames]->f->pict_type == AV_PICTURE_TYPE_B &&
             b_frames == m->max_b_frames) {
-            av_log(s->avctx, AV_LOG_ERROR,
+            av_log(s->c.avctx, AV_LOG_ERROR,
                     "warning, too many B-frames in a row\n");
         }
 
@@ -1753,13 +1756,13 @@ static int set_bframe_chain_length(MPVMainEncContext *const m)
                 m->gop_size > m->picture_in_gop_number) {
                 b_frames = m->gop_size - m->picture_in_gop_number - 1;
             } else {
-                if (s->avctx->flags & AV_CODEC_FLAG_CLOSED_GOP)
+                if (s->c.avctx->flags & AV_CODEC_FLAG_CLOSED_GOP)
                     b_frames = 0;
                 m->input_picture[b_frames]->f->pict_type = AV_PICTURE_TYPE_I;
             }
         }
 
-        if ((s->avctx->flags & AV_CODEC_FLAG_CLOSED_GOP) && b_frames &&
+        if ((s->c.avctx->flags & AV_CODEC_FLAG_CLOSED_GOP) && b_frames &&
             m->input_picture[b_frames]->f->pict_type == AV_PICTURE_TYPE_I)
             b_frames--;
 
@@ -1784,7 +1787,7 @@ static int set_bframe_chain_length(MPVMainEncContext *const m)
 
 static int select_input_picture(MPVMainEncContext *const m)
 {
-    MpegEncContext *const s = &m->s;
+    MPVEncContext *const s = &m->s;
     int ret;
 
     av_assert1(!m->reordered_input_picture[0]);
@@ -1804,7 +1807,7 @@ static int select_input_picture(MPVMainEncContext *const m)
         m->reordered_input_picture[0]->reference =
            m->reordered_input_picture[0]->f->pict_type != AV_PICTURE_TYPE_B;
 
-        if (m->reordered_input_picture[0]->shared || s->avctx->rc_buffer_size) {
+        if (m->reordered_input_picture[0]->shared || s->c.avctx->rc_buffer_size) {
             // input is a shared pix, so we can't modify it -> allocate a new
             // one & ensure that the shared one is reuseable
             av_frame_move_ref(s->new_pic, m->reordered_input_picture[0]->f);
@@ -1822,18 +1825,18 @@ static int select_input_picture(MPVMainEncContext *const m)
                     s->new_pic->data[i] += INPLACE_OFFSET;
             }
         }
-        s->cur_pic.ptr = m->reordered_input_picture[0];
+        s->c.cur_pic.ptr = m->reordered_input_picture[0];
         m->reordered_input_picture[0] = NULL;
-        av_assert1(s->mb_width  == s->buffer_pools.alloc_mb_width);
-        av_assert1(s->mb_height == s->buffer_pools.alloc_mb_height);
-        av_assert1(s->mb_stride == s->buffer_pools.alloc_mb_stride);
-        ret = ff_mpv_alloc_pic_accessories(s->avctx, &s->cur_pic,
-                                           &s->sc, &s->buffer_pools, s->mb_height);
+        av_assert1(s->c.mb_width  == s->c.buffer_pools.alloc_mb_width);
+        av_assert1(s->c.mb_height == s->c.buffer_pools.alloc_mb_height);
+        av_assert1(s->c.mb_stride == s->c.buffer_pools.alloc_mb_stride);
+        ret = ff_mpv_alloc_pic_accessories(s->c.avctx, &s->c.cur_pic,
+                                           &s->c.sc, &s->c.buffer_pools, s->c.mb_height);
         if (ret < 0) {
-            ff_mpv_unref_picture(&s->cur_pic);
+            ff_mpv_unref_picture(&s->c.cur_pic);
             return ret;
         }
-        s->picture_number = s->cur_pic.ptr->display_picture_number;
+        s->c.picture_number = s->c.cur_pic.ptr->display_picture_number;
 
     }
     return 0;
@@ -1844,29 +1847,29 @@ fail:
 
 static void frame_end(MPVMainEncContext *const m)
 {
-    MpegEncContext *const s = &m->s;
+    MPVEncContext *const s = &m->s;
 
-    if (s->unrestricted_mv &&
-        s->cur_pic.reference &&
+    if (s->c.unrestricted_mv &&
+        s->c.cur_pic.reference &&
         !m->intra_only) {
-        int hshift = s->chroma_x_shift;
-        int vshift = s->chroma_y_shift;
-        s->mpvencdsp.draw_edges(s->cur_pic.data[0],
-                                s->cur_pic.linesize[0],
-                                s->h_edge_pos, s->v_edge_pos,
+        int hshift = s->c.chroma_x_shift;
+        int vshift = s->c.chroma_y_shift;
+        s->mpvencdsp.draw_edges(s->c.cur_pic.data[0],
+                                s->c.cur_pic.linesize[0],
+                                s->c.h_edge_pos, s->c.v_edge_pos,
                                 EDGE_WIDTH, EDGE_WIDTH,
                                 EDGE_TOP | EDGE_BOTTOM);
-        s->mpvencdsp.draw_edges(s->cur_pic.data[1],
-                                s->cur_pic.linesize[1],
-                                s->h_edge_pos >> hshift,
-                                s->v_edge_pos >> vshift,
+        s->mpvencdsp.draw_edges(s->c.cur_pic.data[1],
+                                s->c.cur_pic.linesize[1],
+                                s->c.h_edge_pos >> hshift,
+                                s->c.v_edge_pos >> vshift,
                                 EDGE_WIDTH >> hshift,
                                 EDGE_WIDTH >> vshift,
                                 EDGE_TOP | EDGE_BOTTOM);
-        s->mpvencdsp.draw_edges(s->cur_pic.data[2],
-                                s->cur_pic.linesize[2],
-                                s->h_edge_pos >> hshift,
-                                s->v_edge_pos >> vshift,
+        s->mpvencdsp.draw_edges(s->c.cur_pic.data[2],
+                                s->c.cur_pic.linesize[2],
+                                s->c.h_edge_pos >> hshift,
+                                s->c.v_edge_pos >> vshift,
                                 EDGE_WIDTH >> hshift,
                                 EDGE_WIDTH >> vshift,
                                 EDGE_TOP | EDGE_BOTTOM);
@@ -1874,15 +1877,15 @@ static void frame_end(MPVMainEncContext *const m)
 
     emms_c();
 
-    m->last_pict_type                = s->pict_type;
-    m->last_lambda_for[s->pict_type] = s->cur_pic.ptr->f->quality;
-    if (s->pict_type!= AV_PICTURE_TYPE_B)
-        m->last_non_b_pict_type = s->pict_type;
+    m->last_pict_type                = s->c.pict_type;
+    m->last_lambda_for[s->c.pict_type] = s->c.cur_pic.ptr->f->quality;
+    if (s->c.pict_type!= AV_PICTURE_TYPE_B)
+        m->last_non_b_pict_type = s->c.pict_type;
 }
 
 static void update_noise_reduction(MPVMainEncContext *const m)
 {
-    MpegEncContext *const s = &m->s;
+    MPVEncContext *const s = &m->s;
     int intra, i;
 
     for (intra = 0; intra < 2; intra++) {
@@ -1904,13 +1907,13 @@ static void update_noise_reduction(MPVMainEncContext *const m)
 
 static void frame_start(MPVMainEncContext *const m)
 {
-    MpegEncContext *const s = &m->s;
+    MPVEncContext *const s = &m->s;
 
-    s->cur_pic.ptr->f->pict_type = s->pict_type;
+    s->c.cur_pic.ptr->f->pict_type = s->c.pict_type;
 
-    if (s->pict_type != AV_PICTURE_TYPE_B) {
-        ff_mpv_replace_picture(&s->last_pic, &s->next_pic);
-        ff_mpv_replace_picture(&s->next_pic, &s->cur_pic);
+    if (s->c.pict_type != AV_PICTURE_TYPE_B) {
+        ff_mpv_replace_picture(&s->c.last_pic, &s->c.next_pic);
+        ff_mpv_replace_picture(&s->c.next_pic, &s->c.cur_pic);
     }
 
     av_assert2(!!m->noise_reduction == !!s->dct_error_sum);
@@ -1923,11 +1926,11 @@ int ff_mpv_encode_picture(AVCodecContext *avctx, AVPacket *pkt,
                           const AVFrame *pic_arg, int *got_packet)
 {
     MPVMainEncContext *const m = avctx->priv_data;
-    MpegEncContext    *const s = &m->s;
+    MPVEncContext    *const s = &m->s;
     int stuffing_count, ret;
-    int context_count = s->slice_context_count;
+    int context_count = s->c.slice_context_count;
 
-    ff_mpv_unref_picture(&s->cur_pic);
+    ff_mpv_unref_picture(&s->c.cur_pic);
 
     m->vbv_ignore_qmax = 0;
 
@@ -1943,8 +1946,8 @@ int ff_mpv_encode_picture(AVCodecContext *avctx, AVPacket *pkt,
 
     /* output? */
     if (s->new_pic->data[0]) {
-        int growing_buffer = context_count == 1 && !s->data_partitioning;
-        size_t pkt_size = 10000 + s->mb_width * s->mb_height *
+        int growing_buffer = context_count == 1 && !s->c.data_partitioning;
+        size_t pkt_size = 10000 + s->c.mb_width * s->c.mb_height *
                                   (growing_buffer ? 64 : (MAX_MB_BYTES + 100));
         if (CONFIG_MJPEG_ENCODER && avctx->codec_id == AV_CODEC_ID_MJPEG) {
             ret = ff_mjpeg_add_icc_profile_size(avctx, s->new_pic, &pkt_size);
@@ -1957,13 +1960,13 @@ int ff_mpv_encode_picture(AVCodecContext *avctx, AVPacket *pkt,
         if (s->mb_info) {
             s->mb_info_ptr = av_packet_new_side_data(pkt,
                                  AV_PKT_DATA_H263_MB_INFO,
-                                 s->mb_width*s->mb_height*12);
+                                 s->c.mb_width*s->c.mb_height*12);
             if (!s->mb_info_ptr)
                 return AVERROR(ENOMEM);
             s->prev_mb_info = s->last_mb_info = s->mb_info_size = 0;
         }
 
-        s->pict_type = s->new_pic->pict_type;
+        s->c.pict_type = s->new_pic->pict_type;
         //emms_c();
         frame_start(m);
 vbv_retry:
@@ -1978,7 +1981,7 @@ vbv_retry:
 
         frame_end(m);
 
-       if ((CONFIG_MJPEG_ENCODER || CONFIG_AMV_ENCODER) && s->out_format == FMT_MJPEG)
+       if ((CONFIG_MJPEG_ENCODER || CONFIG_AMV_ENCODER) && s->c.out_format == FMT_MJPEG)
             ff_mjpeg_encode_picture_trailer(&s->pb, m->header_bits);
 
         if (avctx->rc_buffer_size) {
@@ -1988,25 +1991,24 @@ vbv_retry:
             int min_step = hq ? 1 : (1<<(FF_LAMBDA_SHIFT + 7))/139;
 
             if (put_bits_count(&s->pb) > max_size &&
-                s->lambda < m->lmax) {
-                m->next_lambda = FFMAX(s->lambda + min_step, s->lambda *
-                                       (s->qscale + 1) / s->qscale);
+                s->c.lambda < m->lmax) {
+                m->next_lambda = FFMAX(s->c.lambda + min_step, s->c.lambda *
+                                       (s->c.qscale + 1) / s->c.qscale);
                 if (s->adaptive_quant) {
-                    int i;
-                    for (i = 0; i < s->mb_height * s->mb_stride; i++)
+                    for (int i = 0; i < s->c.mb_height * s->c.mb_stride; i++)
                         s->lambda_table[i] =
                             FFMAX(s->lambda_table[i] + min_step,
-                                  s->lambda_table[i] * (s->qscale + 1) /
-                                  s->qscale);
+                                  s->lambda_table[i] * (s->c.qscale + 1) /
+                                  s->c.qscale);
                 }
-                s->mb_skipped = 0;        // done in frame_start()
+                s->c.mb_skipped = 0;        // done in frame_start()
                 // done in encode_picture() so we must undo it
-                if (s->pict_type == AV_PICTURE_TYPE_P) {
-                    s->no_rounding ^= s->flipflop_rounding;
+                if (s->c.pict_type == AV_PICTURE_TYPE_P) {
+                    s->c.no_rounding ^= s->c.flipflop_rounding;
                 }
-                if (s->pict_type != AV_PICTURE_TYPE_B) {
-                    s->time_base       = s->last_time_base;
-                    s->last_non_b_time = s->time - s->pp_time;
+                if (s->c.pict_type != AV_PICTURE_TYPE_B) {
+                    s->c.time_base       = s->c.last_time_base;
+                    s->c.last_non_b_time = s->c.time - s->c.pp_time;
                 }
                 m->vbv_ignore_qmax = 1;
                 av_log(avctx, AV_LOG_VERBOSE, "reencoding frame due to VBV\n");
@@ -2021,10 +2023,10 @@ vbv_retry:
 
         for (int i = 0; i < MPV_MAX_PLANES; i++)
             avctx->error[i] += s->encoding_error[i];
-        ff_side_data_set_encoder_stats(pkt, s->cur_pic.ptr->f->quality,
+        ff_side_data_set_encoder_stats(pkt, s->c.cur_pic.ptr->f->quality,
                                        s->encoding_error,
                                        (avctx->flags&AV_CODEC_FLAG_PSNR) ? MPV_MAX_PLANES : 0,
-                                       s->pict_type);
+                                       s->c.pict_type);
 
         if (avctx->flags & AV_CODEC_FLAG_PASS1)
             assert(put_bits_count(&s->pb) == m->header_bits + s->mv_bits +
@@ -2041,7 +2043,7 @@ vbv_retry:
                 return -1;
             }
 
-            switch (s->codec_id) {
+            switch (s->c.codec_id) {
             case AV_CODEC_ID_MPEG1VIDEO:
             case AV_CODEC_ID_MPEG2VIDEO:
                 while (stuffing_count--) {
@@ -2067,7 +2069,7 @@ vbv_retry:
         /* update MPEG-1/2 vbv_delay for CBR */
         if (avctx->rc_max_rate                          &&
             avctx->rc_min_rate == avctx->rc_max_rate &&
-            s->out_format == FMT_MPEG1                     &&
+            s->c.out_format == FMT_MPEG1                     &&
             90000LL * (avctx->rc_buffer_size - 1) <=
                 avctx->rc_max_rate * 0xFFFFLL) {
             AVCPBProperties *props;
@@ -2085,7 +2087,7 @@ vbv_retry:
                 av_log(avctx, AV_LOG_ERROR,
                        "Internal error, negative bits\n");
 
-            av_assert1(s->repeat_first_field == 0);
+            av_assert1(s->c.repeat_first_field == 0);
 
             vbv_delay = bits * 90000 / avctx->rc_max_rate;
             min_delay = (minbits * 90000LL + avctx->rc_max_rate - 1) /
@@ -2115,10 +2117,10 @@ vbv_retry:
         }
         m->total_bits += m->frame_bits;
 
-        pkt->pts = s->cur_pic.ptr->f->pts;
-        pkt->duration = s->cur_pic.ptr->f->duration;
-        if (!s->low_delay && s->pict_type != AV_PICTURE_TYPE_B) {
-            if (!s->cur_pic.ptr->coded_picture_number)
+        pkt->pts = s->c.cur_pic.ptr->f->pts;
+        pkt->duration = s->c.cur_pic.ptr->f->duration;
+        if (!s->c.low_delay && s->c.pict_type != AV_PICTURE_TYPE_B) {
+            if (!s->c.cur_pic.ptr->coded_picture_number)
                 pkt->dts = pkt->pts - m->dts_delta;
             else
                 pkt->dts = m->reordered_pts;
@@ -2128,12 +2130,12 @@ vbv_retry:
 
         // the no-delay case is handled in generic code
         if (avctx->codec->capabilities & AV_CODEC_CAP_DELAY) {
-            ret = ff_encode_reordered_opaque(avctx, pkt, s->cur_pic.ptr->f);
+            ret = ff_encode_reordered_opaque(avctx, pkt, s->c.cur_pic.ptr->f);
             if (ret < 0)
                 return ret;
         }
 
-        if (s->cur_pic.ptr->f->flags & AV_FRAME_FLAG_KEY)
+        if (s->c.cur_pic.ptr->f->flags & AV_FRAME_FLAG_KEY)
             pkt->flags |= AV_PKT_FLAG_KEY;
         if (s->mb_info)
             av_packet_shrink_side_data(pkt, AV_PKT_DATA_H263_MB_INFO, s->mb_info_size);
@@ -2141,7 +2143,7 @@ vbv_retry:
         m->frame_bits = 0;
     }
 
-    ff_mpv_unref_picture(&s->cur_pic);
+    ff_mpv_unref_picture(&s->c.cur_pic);
 
     av_assert1((m->frame_bits & 7) == 0);
 
@@ -2150,7 +2152,7 @@ vbv_retry:
     return 0;
 }
 
-static inline void dct_single_coeff_elimination(MpegEncContext *s,
+static inline void dct_single_coeff_elimination(MPVEncContext *const s,
                                                 int n, int threshold)
 {
     static const char tab[64] = {
@@ -2166,8 +2168,8 @@ static inline void dct_single_coeff_elimination(MpegEncContext *s,
     int score = 0;
     int run = 0;
     int i;
-    int16_t *block = s->block[n];
-    const int last_index = s->block_last_index[n];
+    int16_t *block = s->c.block[n];
+    const int last_index = s->c.block_last_index[n];
     int skip_dc;
 
     if (threshold < 0) {
@@ -2181,7 +2183,7 @@ static inline void dct_single_coeff_elimination(MpegEncContext *s,
         return;
 
     for (i = 0; i <= last_index; i++) {
-        const int j = s->intra_scantable.permutated[i];
+        const int j = s->c.intra_scantable.permutated[i];
         const int level = FFABS(block[j]);
         if (level == 1) {
             if (skip_dc && i == 0)
@@ -2197,16 +2199,16 @@ static inline void dct_single_coeff_elimination(MpegEncContext *s,
     if (score >= threshold)
         return;
     for (i = skip_dc; i <= last_index; i++) {
-        const int j = s->intra_scantable.permutated[i];
+        const int j = s->c.intra_scantable.permutated[i];
         block[j] = 0;
     }
     if (block[0])
-        s->block_last_index[n] = 0;
+        s->c.block_last_index[n] = 0;
     else
-        s->block_last_index[n] = -1;
+        s->c.block_last_index[n] = -1;
 }
 
-static inline void clip_coeffs(MpegEncContext *s, int16_t *block,
+static inline void clip_coeffs(const MPVEncContext *const s, int16_t block[],
                                int last_index)
 {
     int i;
@@ -2214,13 +2216,13 @@ static inline void clip_coeffs(MpegEncContext *s, int16_t *block,
     const int minlevel = s->min_qcoeff;
     int overflow = 0;
 
-    if (s->mb_intra) {
+    if (s->c.mb_intra) {
         i = 1; // skip clipping of intra dc
     } else
         i = 0;
 
     for (; i <= last_index; i++) {
-        const int j = s->intra_scantable.permutated[i];
+        const int j = s->c.intra_scantable.permutated[i];
         int level = block[j];
 
         if (level > maxlevel) {
@@ -2234,8 +2236,8 @@ static inline void clip_coeffs(MpegEncContext *s, int16_t *block,
         block[j] = level;
     }
 
-    if (overflow && s->avctx->mb_decision == FF_MB_DECISION_SIMPLE)
-        av_log(s->avctx, AV_LOG_INFO,
+    if (overflow && s->c.avctx->mb_decision == FF_MB_DECISION_SIMPLE)
+        av_log(s->c.avctx, AV_LOG_INFO,
                "warning, clipping %d dct coefficients to %d..%d\n",
                overflow, minlevel, maxlevel);
 }
@@ -2264,7 +2266,7 @@ static void get_visual_weight(int16_t *weight, const uint8_t *ptr, int stride)
     }
 }
 
-static av_always_inline void encode_mb_internal(MpegEncContext *s,
+static av_always_inline void encode_mb_internal(MPVEncContext *const s,
                                                 int motion_x, int motion_y,
                                                 int mb_block_height,
                                                 int mb_block_width,
@@ -2276,15 +2278,15 @@ static av_always_inline void encode_mb_internal(MpegEncContext *s,
 /* Interlaced DCT is only possible with MPEG-2 and MPEG-4
  * and neither of these encoders currently supports 444. */
 #define INTERLACED_DCT(s) ((chroma_format == CHROMA_420 || chroma_format == CHROMA_422) && \
-                           (s)->avctx->flags & AV_CODEC_FLAG_INTERLACED_DCT)
+                           (s)->c.avctx->flags & AV_CODEC_FLAG_INTERLACED_DCT)
     int16_t weight[12][64];
     int16_t orig[12][64];
-    const int mb_x = s->mb_x;
-    const int mb_y = s->mb_y;
+    const int mb_x = s->c.mb_x;
+    const int mb_y = s->c.mb_y;
     int i;
     int skip_dct[12];
-    int dct_offset = s->linesize * 8; // default for progressive frames
-    int uv_dct_offset = s->uvlinesize * 8;
+    int dct_offset = s->c.linesize * 8; // default for progressive frames
+    int uv_dct_offset = s->c.uvlinesize * 8;
     const uint8_t *ptr_y, *ptr_cb, *ptr_cr;
     ptrdiff_t wrap_y, wrap_c;
 
@@ -2292,37 +2294,37 @@ static av_always_inline void encode_mb_internal(MpegEncContext *s,
         skip_dct[i] = s->skipdct;
 
     if (s->adaptive_quant) {
-        const int last_qp = s->qscale;
-        const int mb_xy = mb_x + mb_y * s->mb_stride;
+        const int last_qp = s->c.qscale;
+        const int mb_xy = mb_x + mb_y * s->c.mb_stride;
 
-        s->lambda = s->lambda_table[mb_xy];
-        s->lambda2 = (s->lambda * s->lambda + FF_LAMBDA_SCALE / 2) >>
-                     FF_LAMBDA_SHIFT;
+        s->c.lambda = s->lambda_table[mb_xy];
+        s->c.lambda2 = (s->c.lambda * s->c.lambda + FF_LAMBDA_SCALE / 2) >>
+                       FF_LAMBDA_SHIFT;
 
         if (!(s->mpv_flags & FF_MPV_FLAG_QP_RD)) {
-            s->dquant = s->cur_pic.qscale_table[mb_xy] - last_qp;
+            s->dquant = s->c.cur_pic.qscale_table[mb_xy] - last_qp;
 
-            if (s->out_format == FMT_H263) {
+            if (s->c.out_format == FMT_H263) {
                 s->dquant = av_clip(s->dquant, -2, 2);
 
-                if (s->codec_id == AV_CODEC_ID_MPEG4) {
-                    if (!s->mb_intra) {
-                        if (s->pict_type == AV_PICTURE_TYPE_B) {
-                            if (s->dquant & 1 || s->mv_dir & MV_DIRECT)
+                if (s->c.codec_id == AV_CODEC_ID_MPEG4) {
+                    if (!s->c.mb_intra) {
+                        if (s->c.pict_type == AV_PICTURE_TYPE_B) {
+                            if (s->dquant & 1 || s->c.mv_dir & MV_DIRECT)
                                 s->dquant = 0;
                         }
-                        if (s->mv_type == MV_TYPE_8X8)
+                        if (s->c.mv_type == MV_TYPE_8X8)
                             s->dquant = 0;
                     }
                 }
             }
         }
-        ff_set_qscale(s, last_qp + s->dquant);
+        ff_set_qscale(&s->c, last_qp + s->dquant);
     } else if (s->mpv_flags & FF_MPV_FLAG_QP_RD)
-        ff_set_qscale(s, s->qscale + s->dquant);
+        ff_set_qscale(&s->c, s->c.qscale + s->dquant);
 
-    wrap_y = s->linesize;
-    wrap_c = s->uvlinesize;
+    wrap_y = s->c.linesize;
+    wrap_c = s->c.uvlinesize;
     ptr_y  = s->new_pic->data[0] +
              (mb_y * 16 * wrap_y)              + mb_x * 16;
     ptr_cb = s->new_pic->data[1] +
@@ -2330,22 +2332,23 @@ static av_always_inline void encode_mb_internal(MpegEncContext *s,
     ptr_cr = s->new_pic->data[2] +
              (mb_y * mb_block_height * wrap_c) + mb_x * mb_block_width;
 
-    if((mb_x * 16 + 16 > s->width || mb_y * 16 + 16 > s->height) && s->codec_id != AV_CODEC_ID_AMV){
-        uint8_t *ebuf = s->sc.edge_emu_buffer + 38 * wrap_y;
-        int cw = (s->width  + chroma_x_shift) >> chroma_x_shift;
-        int ch = (s->height + chroma_y_shift) >> chroma_y_shift;
-        s->vdsp.emulated_edge_mc(ebuf, ptr_y,
+    if ((mb_x * 16 + 16 > s->c.width || mb_y * 16 + 16 > s->c.height) &&
+        s->c.codec_id != AV_CODEC_ID_AMV) {
+        uint8_t *ebuf = s->c.sc.edge_emu_buffer + 38 * wrap_y;
+        int cw = (s->c.width  + chroma_x_shift) >> chroma_x_shift;
+        int ch = (s->c.height + chroma_y_shift) >> chroma_y_shift;
+        s->c.vdsp.emulated_edge_mc(ebuf, ptr_y,
                                  wrap_y, wrap_y,
                                  16, 16, mb_x * 16, mb_y * 16,
-                                 s->width, s->height);
+                                 s->c.width, s->c.height);
         ptr_y = ebuf;
-        s->vdsp.emulated_edge_mc(ebuf + 16 * wrap_y, ptr_cb,
+        s->c.vdsp.emulated_edge_mc(ebuf + 16 * wrap_y, ptr_cb,
                                  wrap_c, wrap_c,
                                  mb_block_width, mb_block_height,
                                  mb_x * mb_block_width, mb_y * mb_block_height,
                                  cw, ch);
         ptr_cb = ebuf + 16 * wrap_y;
-        s->vdsp.emulated_edge_mc(ebuf + 16 * wrap_y + 16, ptr_cr,
+        s->c.vdsp.emulated_edge_mc(ebuf + 16 * wrap_y + 16, ptr_cr,
                                  wrap_c, wrap_c,
                                  mb_block_width, mb_block_height,
                                  mb_x * mb_block_width, mb_y * mb_block_height,
@@ -2353,11 +2356,11 @@ static av_always_inline void encode_mb_internal(MpegEncContext *s,
         ptr_cr = ebuf + 16 * wrap_y + 16;
     }
 
-    if (s->mb_intra) {
+    if (s->c.mb_intra) {
         if (INTERLACED_DCT(s)) {
             int progressive_score, interlaced_score;
 
-            s->interlaced_dct = 0;
+            s->c.interlaced_dct = 0;
             progressive_score = s->ildct_cmp[1](s, ptr_y, NULL, wrap_y, 8) +
                                 s->ildct_cmp[1](s, ptr_y + wrap_y * 8,
                                                 NULL, wrap_y, 8) - 400;
@@ -2368,7 +2371,7 @@ static av_always_inline void encode_mb_internal(MpegEncContext *s,
                                    s->ildct_cmp[1](s, ptr_y + wrap_y,
                                                    NULL, wrap_y * 2, 8);
                 if (progressive_score > interlaced_score) {
-                    s->interlaced_dct = 1;
+                    s->c.interlaced_dct = 1;
 
                     dct_offset = wrap_y;
                     uv_dct_offset = wrap_c;
@@ -2380,27 +2383,27 @@ static av_always_inline void encode_mb_internal(MpegEncContext *s,
             }
         }
 
-        s->pdsp.get_pixels(s->block[0], ptr_y,                  wrap_y);
-        s->pdsp.get_pixels(s->block[1], ptr_y + 8,              wrap_y);
-        s->pdsp.get_pixels(s->block[2], ptr_y + dct_offset,     wrap_y);
-        s->pdsp.get_pixels(s->block[3], ptr_y + dct_offset + 8, wrap_y);
+        s->pdsp.get_pixels(s->c.block[0], ptr_y,                  wrap_y);
+        s->pdsp.get_pixels(s->c.block[1], ptr_y + 8,              wrap_y);
+        s->pdsp.get_pixels(s->c.block[2], ptr_y + dct_offset,     wrap_y);
+        s->pdsp.get_pixels(s->c.block[3], ptr_y + dct_offset + 8, wrap_y);
 
-        if (s->avctx->flags & AV_CODEC_FLAG_GRAY) {
+        if (s->c.avctx->flags & AV_CODEC_FLAG_GRAY) {
             skip_dct[4] = 1;
             skip_dct[5] = 1;
         } else {
-            s->pdsp.get_pixels(s->block[4], ptr_cb, wrap_c);
-            s->pdsp.get_pixels(s->block[5], ptr_cr, wrap_c);
+            s->pdsp.get_pixels(s->c.block[4], ptr_cb, wrap_c);
+            s->pdsp.get_pixels(s->c.block[5], ptr_cr, wrap_c);
             if (chroma_format == CHROMA_422) {
-                s->pdsp.get_pixels(s->block[6], ptr_cb + uv_dct_offset, wrap_c);
-                s->pdsp.get_pixels(s->block[7], ptr_cr + uv_dct_offset, wrap_c);
+                s->pdsp.get_pixels(s->c.block[6], ptr_cb + uv_dct_offset, wrap_c);
+                s->pdsp.get_pixels(s->c.block[7], ptr_cr + uv_dct_offset, wrap_c);
             } else if (chroma_format == CHROMA_444) {
-                s->pdsp.get_pixels(s->block[ 6], ptr_cb + 8, wrap_c);
-                s->pdsp.get_pixels(s->block[ 7], ptr_cr + 8, wrap_c);
-                s->pdsp.get_pixels(s->block[ 8], ptr_cb + uv_dct_offset, wrap_c);
-                s->pdsp.get_pixels(s->block[ 9], ptr_cr + uv_dct_offset, wrap_c);
-                s->pdsp.get_pixels(s->block[10], ptr_cb + uv_dct_offset + 8, wrap_c);
-                s->pdsp.get_pixels(s->block[11], ptr_cr + uv_dct_offset + 8, wrap_c);
+                s->pdsp.get_pixels(s->c.block[ 6], ptr_cb + 8, wrap_c);
+                s->pdsp.get_pixels(s->c.block[ 7], ptr_cr + 8, wrap_c);
+                s->pdsp.get_pixels(s->c.block[ 8], ptr_cb + uv_dct_offset, wrap_c);
+                s->pdsp.get_pixels(s->c.block[ 9], ptr_cr + uv_dct_offset, wrap_c);
+                s->pdsp.get_pixels(s->c.block[10], ptr_cb + uv_dct_offset + 8, wrap_c);
+                s->pdsp.get_pixels(s->c.block[11], ptr_cr + uv_dct_offset + 8, wrap_c);
             }
         }
     } else {
@@ -2408,41 +2411,41 @@ static av_always_inline void encode_mb_internal(MpegEncContext *s,
         qpel_mc_func (*op_qpix)[16];
         uint8_t *dest_y, *dest_cb, *dest_cr;
 
-        dest_y  = s->dest[0];
-        dest_cb = s->dest[1];
-        dest_cr = s->dest[2];
+        dest_y  = s->c.dest[0];
+        dest_cb = s->c.dest[1];
+        dest_cr = s->c.dest[2];
 
-        if ((!s->no_rounding) || s->pict_type == AV_PICTURE_TYPE_B) {
-            op_pix  = s->hdsp.put_pixels_tab;
-            op_qpix = s->qdsp.put_qpel_pixels_tab;
+        if ((!s->c.no_rounding) || s->c.pict_type == AV_PICTURE_TYPE_B) {
+            op_pix  = s->c.hdsp.put_pixels_tab;
+            op_qpix = s->c.qdsp.put_qpel_pixels_tab;
         } else {
-            op_pix  = s->hdsp.put_no_rnd_pixels_tab;
-            op_qpix = s->qdsp.put_no_rnd_qpel_pixels_tab;
+            op_pix  = s->c.hdsp.put_no_rnd_pixels_tab;
+            op_qpix = s->c.qdsp.put_no_rnd_qpel_pixels_tab;
         }
 
-        if (s->mv_dir & MV_DIR_FORWARD) {
-            ff_mpv_motion(s, dest_y, dest_cb, dest_cr, 0,
-                          s->last_pic.data,
+        if (s->c.mv_dir & MV_DIR_FORWARD) {
+            ff_mpv_motion(&s->c, dest_y, dest_cb, dest_cr, 0,
+                          s->c.last_pic.data,
                           op_pix, op_qpix);
-            op_pix  = s->hdsp.avg_pixels_tab;
-            op_qpix = s->qdsp.avg_qpel_pixels_tab;
+            op_pix  = s->c.hdsp.avg_pixels_tab;
+            op_qpix = s->c.qdsp.avg_qpel_pixels_tab;
         }
-        if (s->mv_dir & MV_DIR_BACKWARD) {
-            ff_mpv_motion(s, dest_y, dest_cb, dest_cr, 1,
-                          s->next_pic.data,
+        if (s->c.mv_dir & MV_DIR_BACKWARD) {
+            ff_mpv_motion(&s->c, dest_y, dest_cb, dest_cr, 1,
+                          s->c.next_pic.data,
                           op_pix, op_qpix);
         }
 
         if (INTERLACED_DCT(s)) {
             int progressive_score, interlaced_score;
 
-            s->interlaced_dct = 0;
+            s->c.interlaced_dct = 0;
             progressive_score = s->ildct_cmp[0](s, dest_y, ptr_y, wrap_y, 8) +
                                 s->ildct_cmp[0](s, dest_y + wrap_y * 8,
                                                 ptr_y + wrap_y * 8,
                                                 wrap_y, 8) - 400;
 
-            if (s->avctx->ildct_cmp == FF_CMP_VSSE)
+            if (s->c.avctx->ildct_cmp == FF_CMP_VSSE)
                 progressive_score -= 400;
 
             if (progressive_score > 0) {
@@ -2453,7 +2456,7 @@ static av_always_inline void encode_mb_internal(MpegEncContext *s,
                                                    wrap_y * 2, 8);
 
                 if (progressive_score > interlaced_score) {
-                    s->interlaced_dct = 1;
+                    s->c.interlaced_dct = 1;
 
                     dct_offset = wrap_y;
                     uv_dct_offset = wrap_c;
@@ -2464,51 +2467,51 @@ static av_always_inline void encode_mb_internal(MpegEncContext *s,
             }
         }
 
-        s->pdsp.diff_pixels(s->block[0], ptr_y, dest_y, wrap_y);
-        s->pdsp.diff_pixels(s->block[1], ptr_y + 8, dest_y + 8, wrap_y);
-        s->pdsp.diff_pixels(s->block[2], ptr_y + dct_offset,
+        s->pdsp.diff_pixels(s->c.block[0], ptr_y, dest_y, wrap_y);
+        s->pdsp.diff_pixels(s->c.block[1], ptr_y + 8, dest_y + 8, wrap_y);
+        s->pdsp.diff_pixels(s->c.block[2], ptr_y + dct_offset,
                             dest_y + dct_offset, wrap_y);
-        s->pdsp.diff_pixels(s->block[3], ptr_y + dct_offset + 8,
+        s->pdsp.diff_pixels(s->c.block[3], ptr_y + dct_offset + 8,
                             dest_y + dct_offset + 8, wrap_y);
 
-        if (s->avctx->flags & AV_CODEC_FLAG_GRAY) {
+        if (s->c.avctx->flags & AV_CODEC_FLAG_GRAY) {
             skip_dct[4] = 1;
             skip_dct[5] = 1;
         } else {
-            s->pdsp.diff_pixels(s->block[4], ptr_cb, dest_cb, wrap_c);
-            s->pdsp.diff_pixels(s->block[5], ptr_cr, dest_cr, wrap_c);
+            s->pdsp.diff_pixels(s->c.block[4], ptr_cb, dest_cb, wrap_c);
+            s->pdsp.diff_pixels(s->c.block[5], ptr_cr, dest_cr, wrap_c);
             if (!chroma_y_shift) { /* 422 */
-                s->pdsp.diff_pixels(s->block[6], ptr_cb + uv_dct_offset,
+                s->pdsp.diff_pixels(s->c.block[6], ptr_cb + uv_dct_offset,
                                     dest_cb + uv_dct_offset, wrap_c);
-                s->pdsp.diff_pixels(s->block[7], ptr_cr + uv_dct_offset,
+                s->pdsp.diff_pixels(s->c.block[7], ptr_cr + uv_dct_offset,
                                     dest_cr + uv_dct_offset, wrap_c);
             }
         }
         /* pre quantization */
-        if (s->mc_mb_var[s->mb_stride * mb_y + mb_x] < 2 * s->qscale * s->qscale) {
+        if (s->mc_mb_var[s->c.mb_stride * mb_y + mb_x] < 2 * s->c.qscale * s->c.qscale) {
             // FIXME optimize
-            if (s->sad_cmp[1](NULL, ptr_y, dest_y, wrap_y, 8) < 20 * s->qscale)
+            if (s->sad_cmp[1](NULL, ptr_y, dest_y, wrap_y, 8) < 20 * s->c.qscale)
                 skip_dct[0] = 1;
-            if (s->sad_cmp[1](NULL, ptr_y + 8, dest_y + 8, wrap_y, 8) < 20 * s->qscale)
+            if (s->sad_cmp[1](NULL, ptr_y + 8, dest_y + 8, wrap_y, 8) < 20 * s->c.qscale)
                 skip_dct[1] = 1;
             if (s->sad_cmp[1](NULL, ptr_y + dct_offset, dest_y + dct_offset,
-                              wrap_y, 8) < 20 * s->qscale)
+                              wrap_y, 8) < 20 * s->c.qscale)
                 skip_dct[2] = 1;
             if (s->sad_cmp[1](NULL, ptr_y + dct_offset + 8, dest_y + dct_offset + 8,
-                              wrap_y, 8) < 20 * s->qscale)
+                              wrap_y, 8) < 20 * s->c.qscale)
                 skip_dct[3] = 1;
-            if (s->sad_cmp[1](NULL, ptr_cb, dest_cb, wrap_c, 8) < 20 * s->qscale)
+            if (s->sad_cmp[1](NULL, ptr_cb, dest_cb, wrap_c, 8) < 20 * s->c.qscale)
                 skip_dct[4] = 1;
-            if (s->sad_cmp[1](NULL, ptr_cr, dest_cr, wrap_c, 8) < 20 * s->qscale)
+            if (s->sad_cmp[1](NULL, ptr_cr, dest_cr, wrap_c, 8) < 20 * s->c.qscale)
                 skip_dct[5] = 1;
             if (!chroma_y_shift) { /* 422 */
                 if (s->sad_cmp[1](NULL, ptr_cb + uv_dct_offset,
                                   dest_cb + uv_dct_offset,
-                                  wrap_c, 8) < 20 * s->qscale)
+                                  wrap_c, 8) < 20 * s->c.qscale)
                     skip_dct[6] = 1;
                 if (s->sad_cmp[1](NULL, ptr_cr + uv_dct_offset,
                                   dest_cr + uv_dct_offset,
-                                  wrap_c, 8) < 20 * s->qscale)
+                                  wrap_c, 8) < 20 * s->c.qscale)
                     skip_dct[7] = 1;
             }
         }
@@ -2535,102 +2538,102 @@ static av_always_inline void encode_mb_internal(MpegEncContext *s,
                 get_visual_weight(weight[7], ptr_cr + uv_dct_offset,
                                   wrap_c);
         }
-        memcpy(orig[0], s->block[0], sizeof(int16_t) * 64 * mb_block_count);
+        memcpy(orig[0], s->c.block[0], sizeof(int16_t) * 64 * mb_block_count);
     }
 
     /* DCT & quantize */
-    av_assert2(s->out_format != FMT_MJPEG || s->qscale == 8);
+    av_assert2(s->c.out_format != FMT_MJPEG || s->c.qscale == 8);
     {
         for (i = 0; i < mb_block_count; i++) {
             if (!skip_dct[i]) {
                 int overflow;
-                s->block_last_index[i] = s->dct_quantize(s, s->block[i], i, s->qscale, &overflow);
+                s->c.block_last_index[i] = s->dct_quantize(s, s->c.block[i], i, s->c.qscale, &overflow);
                 // FIXME we could decide to change to quantizer instead of
                 // clipping
                 // JS: I don't think that would be a good idea it could lower
                 //     quality instead of improve it. Just INTRADC clipping
                 //     deserves changes in quantizer
                 if (overflow)
-                    clip_coeffs(s, s->block[i], s->block_last_index[i]);
+                    clip_coeffs(s, s->c.block[i], s->c.block_last_index[i]);
             } else
-                s->block_last_index[i] = -1;
+                s->c.block_last_index[i] = -1;
         }
         if (s->quantizer_noise_shaping) {
             for (i = 0; i < mb_block_count; i++) {
                 if (!skip_dct[i]) {
-                    s->block_last_index[i] =
-                        dct_quantize_refine(s, s->block[i], weight[i],
-                                            orig[i], i, s->qscale);
+                    s->c.block_last_index[i] =
+                        dct_quantize_refine(s, s->c.block[i], weight[i],
+                                            orig[i], i, s->c.qscale);
                 }
             }
         }
 
-        if (s->luma_elim_threshold && !s->mb_intra)
+        if (s->luma_elim_threshold && !s->c.mb_intra)
             for (i = 0; i < 4; i++)
                 dct_single_coeff_elimination(s, i, s->luma_elim_threshold);
-        if (s->chroma_elim_threshold && !s->mb_intra)
+        if (s->chroma_elim_threshold && !s->c.mb_intra)
             for (i = 4; i < mb_block_count; i++)
                 dct_single_coeff_elimination(s, i, s->chroma_elim_threshold);
 
         if (s->mpv_flags & FF_MPV_FLAG_CBP_RD) {
             for (i = 0; i < mb_block_count; i++) {
-                if (s->block_last_index[i] == -1)
+                if (s->c.block_last_index[i] == -1)
                     s->coded_score[i] = INT_MAX / 256;
             }
         }
     }
 
-    if ((s->avctx->flags & AV_CODEC_FLAG_GRAY) && s->mb_intra) {
-        s->block_last_index[4] =
-        s->block_last_index[5] = 0;
-        s->block[4][0] =
-        s->block[5][0] = (1024 + s->c_dc_scale / 2) / s->c_dc_scale;
+    if ((s->c.avctx->flags & AV_CODEC_FLAG_GRAY) && s->c.mb_intra) {
+        s->c.block_last_index[4] =
+        s->c.block_last_index[5] = 0;
+        s->c.block[4][0] =
+        s->c.block[5][0] = (1024 + s->c.c_dc_scale / 2) / s->c.c_dc_scale;
         if (!chroma_y_shift) { /* 422 / 444 */
             for (i=6; i<12; i++) {
-                s->block_last_index[i] = 0;
-                s->block[i][0] = s->block[4][0];
+                s->c.block_last_index[i] = 0;
+                s->c.block[i][0] = s->c.block[4][0];
             }
         }
     }
 
     // non c quantize code returns incorrect block_last_index FIXME
-    if (s->alternate_scan && s->dct_quantize != dct_quantize_c) {
+    if (s->c.alternate_scan && s->dct_quantize != dct_quantize_c) {
         for (i = 0; i < mb_block_count; i++) {
             int j;
-            if (s->block_last_index[i] > 0) {
+            if (s->c.block_last_index[i] > 0) {
                 for (j = 63; j > 0; j--) {
-                    if (s->block[i][s->intra_scantable.permutated[j]])
+                    if (s->c.block[i][s->c.intra_scantable.permutated[j]])
                         break;
                 }
-                s->block_last_index[i] = j;
+                s->c.block_last_index[i] = j;
             }
         }
     }
 
-    s->encode_mb(s, s->block, motion_x, motion_y);
+    s->encode_mb(s, s->c.block, motion_x, motion_y);
 }
 
-static void encode_mb(MpegEncContext *s, int motion_x, int motion_y)
+static void encode_mb(MPVEncContext *const s, int motion_x, int motion_y)
 {
-    if (s->chroma_format == CHROMA_420)
+    if (s->c.chroma_format == CHROMA_420)
         encode_mb_internal(s, motion_x, motion_y,  8, 8, 6, 1, 1, CHROMA_420);
-    else if (s->chroma_format == CHROMA_422)
+    else if (s->c.chroma_format == CHROMA_422)
         encode_mb_internal(s, motion_x, motion_y, 16, 8, 8, 1, 0, CHROMA_422);
     else
         encode_mb_internal(s, motion_x, motion_y, 16, 16, 12, 0, 0, CHROMA_444);
 }
 
-static inline void copy_context_before_encode(MpegEncContext *d,
-                                              const MpegEncContext *s)
+static inline void copy_context_before_encode(MPVEncContext *const d,
+                                              const MPVEncContext *const s)
 {
     int i;
 
-    memcpy(d->last_mv, s->last_mv, 2*2*2*sizeof(int)); //FIXME is memcpy faster than a loop?
+    memcpy(d->c.last_mv, s->c.last_mv, 2*2*2*sizeof(int)); //FIXME is memcpy faster than a loop?
 
     /* MPEG-1 */
-    d->mb_skip_run= s->mb_skip_run;
+    d->c.mb_skip_run = s->c.mb_skip_run;
     for(i=0; i<3; i++)
-        d->last_dc[i] = s->last_dc[i];
+        d->c.last_dc[i] = s->c.last_dc[i];
 
     /* statistics */
     d->mv_bits= s->mv_bits;
@@ -2640,25 +2643,25 @@ static inline void copy_context_before_encode(MpegEncContext *d,
     d->misc_bits= s->misc_bits;
     d->last_bits= 0;
 
-    d->mb_skipped= 0;
-    d->qscale= s->qscale;
+    d->c.mb_skipped = 0;
+    d->c.qscale = s->c.qscale;
     d->dquant= s->dquant;
 
     d->esc3_level_length= s->esc3_level_length;
 }
 
-static inline void copy_context_after_encode(MpegEncContext *d,
-                                             const MpegEncContext *s)
+static inline void copy_context_after_encode(MPVEncContext *const d,
+                                             const MPVEncContext *const s)
 {
     int i;
 
-    memcpy(d->mv, s->mv, 2*4*2*sizeof(int));
-    memcpy(d->last_mv, s->last_mv, 2*2*2*sizeof(int)); //FIXME is memcpy faster than a loop?
+    memcpy(d->c.mv, s->c.mv, 2*4*2*sizeof(int));
+    memcpy(d->c.last_mv, s->c.last_mv, 2*2*2*sizeof(int)); //FIXME is memcpy faster than a loop?
 
     /* MPEG-1 */
-    d->mb_skip_run= s->mb_skip_run;
+    d->c.mb_skip_run = s->c.mb_skip_run;
     for(i=0; i<3; i++)
-        d->last_dc[i] = s->last_dc[i];
+        d->c.last_dc[i] = s->c.last_dc[i];
 
     /* statistics */
     d->mv_bits= s->mv_bits;
@@ -2667,25 +2670,25 @@ static inline void copy_context_after_encode(MpegEncContext *d,
     d->i_count= s->i_count;
     d->misc_bits= s->misc_bits;
 
-    d->mb_intra= s->mb_intra;
-    d->mb_skipped= s->mb_skipped;
-    d->mv_type= s->mv_type;
-    d->mv_dir= s->mv_dir;
+    d->c.mb_intra   = s->c.mb_intra;
+    d->c.mb_skipped = s->c.mb_skipped;
+    d->c.mv_type    = s->c.mv_type;
+    d->c.mv_dir     = s->c.mv_dir;
     d->pb= s->pb;
-    if(s->data_partitioning){
+    if (s->c.data_partitioning) {
         d->pb2= s->pb2;
         d->tex_pb= s->tex_pb;
     }
-    d->block= s->block;
+    d->c.block = s->c.block;
     for(i=0; i<8; i++)
-        d->block_last_index[i]= s->block_last_index[i];
-    d->interlaced_dct= s->interlaced_dct;
-    d->qscale= s->qscale;
+        d->c.block_last_index[i]= s->c.block_last_index[i];
+    d->c.interlaced_dct = s->c.interlaced_dct;
+    d->c.qscale = s->c.qscale;
 
     d->esc3_level_length= s->esc3_level_length;
 }
 
-static void encode_mb_hq(MpegEncContext *s, MpegEncContext *backup, MpegEncContext *best,
+static void encode_mb_hq(MPVEncContext *const s, MPVEncContext *const backup, MPVEncContext *const best,
                          PutBitContext pb[2], PutBitContext pb2[2], PutBitContext tex_pb[2],
                          int *dmin, int *next_block, int motion_x, int motion_y)
 {
@@ -2694,38 +2697,38 @@ static void encode_mb_hq(MpegEncContext *s, MpegEncContext *backup, MpegEncConte
 
     copy_context_before_encode(s, backup);
 
-    s->block= s->blocks[*next_block];
-    s->pb= pb[*next_block];
-    if(s->data_partitioning){
+    s->c.block = s->c.blocks[*next_block];
+    s->pb      = pb[*next_block];
+    if (s->c.data_partitioning) {
         s->pb2   = pb2   [*next_block];
         s->tex_pb= tex_pb[*next_block];
     }
 
     if(*next_block){
-        memcpy(dest_backup, s->dest, sizeof(s->dest));
-        s->dest[0] = s->sc.rd_scratchpad;
-        s->dest[1] = s->sc.rd_scratchpad + 16*s->linesize;
-        s->dest[2] = s->sc.rd_scratchpad + 16*s->linesize + 8;
-        av_assert0(s->linesize >= 32); //FIXME
+        memcpy(dest_backup, s->c.dest, sizeof(s->c.dest));
+        s->c.dest[0] = s->c.sc.rd_scratchpad;
+        s->c.dest[1] = s->c.sc.rd_scratchpad + 16*s->c.linesize;
+        s->c.dest[2] = s->c.sc.rd_scratchpad + 16*s->c.linesize + 8;
+        av_assert0(s->c.linesize >= 32); //FIXME
     }
 
     encode_mb(s, motion_x, motion_y);
 
     score= put_bits_count(&s->pb);
-    if(s->data_partitioning){
+    if (s->c.data_partitioning) {
         score+= put_bits_count(&s->pb2);
         score+= put_bits_count(&s->tex_pb);
     }
 
-    if(s->avctx->mb_decision == FF_MB_DECISION_RD){
-        mpv_reconstruct_mb(s, s->block);
+    if(s->c.avctx->mb_decision == FF_MB_DECISION_RD){
+        mpv_reconstruct_mb(s, s->c.block);
 
-        score *= s->lambda2;
+        score *= s->c.lambda2;
         score += sse_mb(s) << FF_LAMBDA_SHIFT;
     }
 
     if(*next_block){
-        memcpy(s->dest, dest_backup, sizeof(s->dest));
+        memcpy(s->c.dest, dest_backup, sizeof(s->c.dest));
     }
 
     if(score<*dmin){
@@ -2736,7 +2739,8 @@ static void encode_mb_hq(MpegEncContext *s, MpegEncContext *backup, MpegEncConte
     }
 }
 
-static int sse(MpegEncContext *s, const uint8_t *src1, const uint8_t *src2, int w, int h, int stride){
+static int sse(const MPVEncContext *const s, const uint8_t *src1, const uint8_t *src2, int w, int h, int stride)
+{
     const uint32_t *sq = ff_square_tab + 256;
     int acc=0;
     int x,y;
@@ -2757,129 +2761,128 @@ static int sse(MpegEncContext *s, const uint8_t *src1, const uint8_t *src2, int
     return acc;
 }
 
-static int sse_mb(MpegEncContext *s){
+static int sse_mb(MPVEncContext *const s)
+{
     int w= 16;
     int h= 16;
-    int chroma_mb_w = w >> s->chroma_x_shift;
-    int chroma_mb_h = h >> s->chroma_y_shift;
+    int chroma_mb_w = w >> s->c.chroma_x_shift;
+    int chroma_mb_h = h >> s->c.chroma_y_shift;
 
-    if(s->mb_x*16 + 16 > s->width ) w= s->width - s->mb_x*16;
-    if(s->mb_y*16 + 16 > s->height) h= s->height- s->mb_y*16;
+    if (s->c.mb_x*16 + 16 > s->c.width ) w = s->c.width - s->c.mb_x*16;
+    if (s->c.mb_y*16 + 16 > s->c.height) h = s->c.height- s->c.mb_y*16;
 
     if(w==16 && h==16)
-        return s->n_sse_cmp[0](s, s->new_pic->data[0] + s->mb_x * 16 + s->mb_y * s->linesize * 16,
-                               s->dest[0], s->linesize, 16) +
-               s->n_sse_cmp[1](s, s->new_pic->data[1] + s->mb_x * chroma_mb_w + s->mb_y * s->uvlinesize * chroma_mb_h,
-                               s->dest[1], s->uvlinesize, chroma_mb_h) +
-               s->n_sse_cmp[1](s, s->new_pic->data[2] + s->mb_x * chroma_mb_w + s->mb_y * s->uvlinesize * chroma_mb_h,
-                               s->dest[2], s->uvlinesize, chroma_mb_h);
+        return s->n_sse_cmp[0](s, s->new_pic->data[0] + s->c.mb_x * 16 + s->c.mb_y * s->c.linesize * 16,
+                               s->c.dest[0], s->c.linesize, 16) +
+               s->n_sse_cmp[1](s, s->new_pic->data[1] + s->c.mb_x * chroma_mb_w + s->c.mb_y * s->c.uvlinesize * chroma_mb_h,
+                               s->c.dest[1], s->c.uvlinesize, chroma_mb_h) +
+               s->n_sse_cmp[1](s, s->new_pic->data[2] + s->c.mb_x * chroma_mb_w + s->c.mb_y * s->c.uvlinesize * chroma_mb_h,
+                               s->c.dest[2], s->c.uvlinesize, chroma_mb_h);
     else
-        return  sse(s, s->new_pic->data[0] + s->mb_x * 16 + s->mb_y * s->linesize * 16,
-                    s->dest[0], w, h, s->linesize) +
-                sse(s, s->new_pic->data[1] + s->mb_x * chroma_mb_w + s->mb_y * s->uvlinesize * chroma_mb_h,
-                    s->dest[1], w >> s->chroma_x_shift, h >> s->chroma_y_shift, s->uvlinesize) +
-                sse(s, s->new_pic->data[2] + s->mb_x * chroma_mb_w + s->mb_y * s->uvlinesize * chroma_mb_h,
-                    s->dest[2], w >> s->chroma_x_shift, h >> s->chroma_y_shift, s->uvlinesize);
+        return  sse(s, s->new_pic->data[0] + s->c.mb_x * 16 + s->c.mb_y * s->c.linesize * 16,
+                    s->c.dest[0], w, h, s->c.linesize) +
+                sse(s, s->new_pic->data[1] + s->c.mb_x * chroma_mb_w + s->c.mb_y * s->c.uvlinesize * chroma_mb_h,
+                    s->c.dest[1], w >> s->c.chroma_x_shift, h >> s->c.chroma_y_shift, s->c.uvlinesize) +
+                sse(s, s->new_pic->data[2] + s->c.mb_x * chroma_mb_w + s->c.mb_y * s->c.uvlinesize * chroma_mb_h,
+                    s->c.dest[2], w >> s->c.chroma_x_shift, h >> s->c.chroma_y_shift, s->c.uvlinesize);
 }
 
 static int pre_estimate_motion_thread(AVCodecContext *c, void *arg){
-    MpegEncContext *s= *(void**)arg;
+    MPVEncContext *const s = *(void**)arg;
 
 
-    s->me.pre_pass=1;
-    s->me.dia_size= s->avctx->pre_dia_size;
-    s->first_slice_line=1;
-    for(s->mb_y= s->end_mb_y-1; s->mb_y >= s->start_mb_y; s->mb_y--) {
-        for(s->mb_x=s->mb_width-1; s->mb_x >=0 ;s->mb_x--) {
-            ff_pre_estimate_p_frame_motion(s, s->mb_x, s->mb_y);
-        }
-        s->first_slice_line=0;
+    s->c.me.pre_pass = 1;
+    s->c.me.dia_size = s->c.avctx->pre_dia_size;
+    s->c.first_slice_line = 1;
+    for (s->c.mb_y = s->c.end_mb_y - 1; s->c.mb_y >= s->c.start_mb_y; s->c.mb_y--) {
+        for (s->c.mb_x = s->c.mb_width - 1; s->c.mb_x >=0 ; s->c.mb_x--)
+            ff_pre_estimate_p_frame_motion(s, s->c.mb_x, s->c.mb_y);
+        s->c.first_slice_line = 0;
     }
 
-    s->me.pre_pass=0;
+    s->c.me.pre_pass = 0;
 
     return 0;
 }
 
 static int estimate_motion_thread(AVCodecContext *c, void *arg){
-    MpegEncContext *s= *(void**)arg;
-
-    s->me.dia_size= s->avctx->dia_size;
-    s->first_slice_line=1;
-    for(s->mb_y= s->start_mb_y; s->mb_y < s->end_mb_y; s->mb_y++) {
-        s->mb_x=0; //for block init below
-        ff_init_block_index(s);
-        for(s->mb_x=0; s->mb_x < s->mb_width; s->mb_x++) {
-            s->block_index[0]+=2;
-            s->block_index[1]+=2;
-            s->block_index[2]+=2;
-            s->block_index[3]+=2;
+    MPVEncContext *const s = *(void**)arg;
+
+    s->c.me.dia_size= s->c.avctx->dia_size;
+    s->c.first_slice_line=1;
+    for (s->c.mb_y = s->c.start_mb_y; s->c.mb_y < s->c.end_mb_y; s->c.mb_y++) {
+        s->c.mb_x=0; //for block init below
+        ff_init_block_index(&s->c);
+        for (s->c.mb_x = 0; s->c.mb_x < s->c.mb_width; s->c.mb_x++) {
+            s->c.block_index[0] += 2;
+            s->c.block_index[1] += 2;
+            s->c.block_index[2] += 2;
+            s->c.block_index[3] += 2;
 
             /* compute motion vector & mb_type and store in context */
-            if(s->pict_type==AV_PICTURE_TYPE_B)
-                ff_estimate_b_frame_motion(s, s->mb_x, s->mb_y);
+            if(s->c.pict_type==AV_PICTURE_TYPE_B)
+                ff_estimate_b_frame_motion(s, s->c.mb_x, s->c.mb_y);
             else
-                ff_estimate_p_frame_motion(s, s->mb_x, s->mb_y);
+                ff_estimate_p_frame_motion(s, s->c.mb_x, s->c.mb_y);
         }
-        s->first_slice_line=0;
+        s->c.first_slice_line = 0;
     }
     return 0;
 }
 
 static int mb_var_thread(AVCodecContext *c, void *arg){
-    MpegEncContext *s= *(void**)arg;
-    int mb_x, mb_y;
+    MPVEncContext *const s = *(void**)arg;
 
-    for(mb_y=s->start_mb_y; mb_y < s->end_mb_y; mb_y++) {
-        for(mb_x=0; mb_x < s->mb_width; mb_x++) {
+    for (int mb_y = s->c.start_mb_y; mb_y < s->c.end_mb_y; mb_y++) {
+        for (int mb_x = 0; mb_x < s->c.mb_width; mb_x++) {
             int xx = mb_x * 16;
             int yy = mb_y * 16;
-            const uint8_t *pix = s->new_pic->data[0] + (yy * s->linesize) + xx;
+            const uint8_t *pix = s->new_pic->data[0] + (yy * s->c.linesize) + xx;
             int varc;
-            int sum = s->mpvencdsp.pix_sum(pix, s->linesize);
+            int sum = s->mpvencdsp.pix_sum(pix, s->c.linesize);
 
-            varc = (s->mpvencdsp.pix_norm1(pix, s->linesize) -
+            varc = (s->mpvencdsp.pix_norm1(pix, s->c.linesize) -
                     (((unsigned) sum * sum) >> 8) + 500 + 128) >> 8;
 
-            s->mb_var [s->mb_stride * mb_y + mb_x] = varc;
-            s->mb_mean[s->mb_stride * mb_y + mb_x] = (sum+128)>>8;
-            s->me.mb_var_sum_temp    += varc;
+            s->mb_var [s->c.mb_stride * mb_y + mb_x] = varc;
+            s->mb_mean[s->c.mb_stride * mb_y + mb_x] = (sum+128)>>8;
+            s->c.me.mb_var_sum_temp    += varc;
         }
     }
     return 0;
 }
 
-static void write_slice_end(MpegEncContext *s){
-    if(CONFIG_MPEG4_ENCODER && s->codec_id==AV_CODEC_ID_MPEG4){
-        if(s->partitioned_frame){
+static void write_slice_end(MPVEncContext *const s)
+{
+    if(CONFIG_MPEG4_ENCODER && s->c.codec_id == AV_CODEC_ID_MPEG4) {
+        if (s->c.partitioned_frame)
             ff_mpeg4_merge_partitions(s);
-        }
 
         ff_mpeg4_stuffing(&s->pb);
     } else if ((CONFIG_MJPEG_ENCODER || CONFIG_AMV_ENCODER) &&
-               s->out_format == FMT_MJPEG) {
+               s->c.out_format == FMT_MJPEG) {
         ff_mjpeg_encode_stuffing(s);
-    } else if (CONFIG_SPEEDHQ_ENCODER && s->out_format == FMT_SPEEDHQ) {
+    } else if (CONFIG_SPEEDHQ_ENCODER && s->c.out_format == FMT_SPEEDHQ) {
         ff_speedhq_end_slice(s);
     }
 
     flush_put_bits(&s->pb);
 
-    if ((s->avctx->flags & AV_CODEC_FLAG_PASS1) && !s->partitioned_frame)
+    if ((s->c.avctx->flags & AV_CODEC_FLAG_PASS1) && !s->c.partitioned_frame)
         s->misc_bits+= get_bits_diff(s);
 }
 
-static void write_mb_info(MpegEncContext *s)
+static void write_mb_info(MPVEncContext *const s)
 {
     uint8_t *ptr = s->mb_info_ptr + s->mb_info_size - 12;
     int offset = put_bits_count(&s->pb);
-    int mba  = s->mb_x + s->mb_width * (s->mb_y % s->gob_index);
-    int gobn = s->mb_y / s->gob_index;
+    int mba  = s->c.mb_x + s->c.mb_width * (s->c.mb_y % s->c.gob_index);
+    int gobn = s->c.mb_y / s->c.gob_index;
     int pred_x, pred_y;
     if (CONFIG_H263_ENCODER)
-        ff_h263_pred_motion(s, 0, 0, &pred_x, &pred_y);
+        ff_h263_pred_motion(&s->c, 0, 0, &pred_x, &pred_y);
     bytestream_put_le32(&ptr, offset);
-    bytestream_put_byte(&ptr, s->qscale);
+    bytestream_put_byte(&ptr, s->c.qscale);
     bytestream_put_byte(&ptr, gobn);
     bytestream_put_le16(&ptr, mba);
     bytestream_put_byte(&ptr, pred_x); /* hmv1 */
@@ -2889,7 +2892,7 @@ static void write_mb_info(MpegEncContext *s)
     bytestream_put_byte(&ptr, 0); /* vmv2 */
 }
 
-static void update_mb_info(MpegEncContext *s, int startcode)
+static void update_mb_info(MPVEncContext *const s, int startcode)
 {
     if (!s->mb_info)
         return;
@@ -2912,32 +2915,32 @@ static void update_mb_info(MpegEncContext *s, int startcode)
     write_mb_info(s);
 }
 
-int ff_mpv_reallocate_putbitbuffer(MpegEncContext *s, size_t threshold, size_t size_increase)
+int ff_mpv_reallocate_putbitbuffer(MPVEncContext *const s, size_t threshold, size_t size_increase)
 {
     if (put_bytes_left(&s->pb, 0) < threshold
-        && s->slice_context_count == 1
-        && s->pb.buf == s->avctx->internal->byte_buffer) {
+        && s->c.slice_context_count == 1
+        && s->pb.buf == s->c.avctx->internal->byte_buffer) {
         int lastgob_pos = s->ptr_lastgob - s->pb.buf;
 
         uint8_t *new_buffer = NULL;
         int new_buffer_size = 0;
 
-        if ((s->avctx->internal->byte_buffer_size + size_increase) >= INT_MAX/8) {
-            av_log(s->avctx, AV_LOG_ERROR, "Cannot reallocate putbit buffer\n");
+        if ((s->c.avctx->internal->byte_buffer_size + size_increase) >= INT_MAX/8) {
+            av_log(s->c.avctx, AV_LOG_ERROR, "Cannot reallocate putbit buffer\n");
             return AVERROR(ENOMEM);
         }
 
         emms_c();
 
         av_fast_padded_malloc(&new_buffer, &new_buffer_size,
-                              s->avctx->internal->byte_buffer_size + size_increase);
+                              s->c.avctx->internal->byte_buffer_size + size_increase);
         if (!new_buffer)
             return AVERROR(ENOMEM);
 
-        memcpy(new_buffer, s->avctx->internal->byte_buffer, s->avctx->internal->byte_buffer_size);
-        av_free(s->avctx->internal->byte_buffer);
-        s->avctx->internal->byte_buffer      = new_buffer;
-        s->avctx->internal->byte_buffer_size = new_buffer_size;
+        memcpy(new_buffer, s->c.avctx->internal->byte_buffer, s->c.avctx->internal->byte_buffer_size);
+        av_free(s->c.avctx->internal->byte_buffer);
+        s->c.avctx->internal->byte_buffer      = new_buffer;
+        s->c.avctx->internal->byte_buffer_size = new_buffer_size;
         rebase_put_bits(&s->pb, new_buffer, new_buffer_size);
         s->ptr_lastgob   = s->pb.buf + lastgob_pos;
     }
@@ -2947,11 +2950,10 @@ int ff_mpv_reallocate_putbitbuffer(MpegEncContext *s, size_t threshold, size_t s
 }
 
 static int encode_thread(AVCodecContext *c, void *arg){
-    MpegEncContext *s= *(void**)arg;
-    int mb_x, mb_y, mb_y_order;
-    int chr_h= 16>>s->chroma_y_shift;
+    MPVEncContext *const s = *(void**)arg;
+    int chr_h = 16 >> s->c.chroma_y_shift;
     int i, j;
-    MpegEncContext best_s = { 0 }, backup_s;
+    MPVEncContext best_s = { 0 }, backup_s;
     uint8_t bit_buf[2][MAX_MB_BYTES];
     uint8_t bit_buf2[2][MAX_MB_BYTES];
     uint8_t bit_buf_tex[2][MAX_MB_BYTES];
@@ -2973,82 +2975,83 @@ static int encode_thread(AVCodecContext *c, void *arg){
     for(i=0; i<3; i++){
         /* init last dc values */
         /* note: quant matrix value (8) is implied here */
-        s->last_dc[i] = 128 << s->intra_dc_precision;
+        s->c.last_dc[i] = 128 << s->c.intra_dc_precision;
 
         s->encoding_error[i] = 0;
     }
-    if(s->codec_id==AV_CODEC_ID_AMV){
-        s->last_dc[0] = 128*8/13;
-        s->last_dc[1] = 128*8/14;
-        s->last_dc[2] = 128*8/14;
+    if (s->c.codec_id == AV_CODEC_ID_AMV) {
+        s->c.last_dc[0] = 128 * 8 / 13;
+        s->c.last_dc[1] = 128 * 8 / 14;
+        s->c.last_dc[2] = 128 * 8 / 14;
     }
-    s->mb_skip_run = 0;
-    memset(s->last_mv, 0, sizeof(s->last_mv));
+    s->c.mb_skip_run = 0;
+    memset(s->c.last_mv, 0, sizeof(s->c.last_mv));
 
     s->last_mv_dir = 0;
 
-    switch(s->codec_id){
+    switch(s->c.codec_id){
     case AV_CODEC_ID_H263:
     case AV_CODEC_ID_H263P:
     case AV_CODEC_ID_FLV1:
         if (CONFIG_H263_ENCODER)
-            s->gob_index = H263_GOB_HEIGHT(s->height);
+            s->c.gob_index = H263_GOB_HEIGHT(s->c.height);
         break;
     case AV_CODEC_ID_MPEG4:
-        if(CONFIG_MPEG4_ENCODER && s->partitioned_frame)
+        if(CONFIG_MPEG4_ENCODER && s->c.partitioned_frame)
             ff_mpeg4_init_partitions(s);
         break;
     }
 
-    s->resync_mb_x=0;
-    s->resync_mb_y=0;
-    s->first_slice_line = 1;
+    s->c.resync_mb_x = 0;
+    s->c.resync_mb_y = 0;
+    s->c.first_slice_line = 1;
     s->ptr_lastgob = s->pb.buf;
-    for (mb_y_order = s->start_mb_y; mb_y_order < s->end_mb_y; mb_y_order++) {
-        if (CONFIG_SPEEDHQ_ENCODER && s->codec_id == AV_CODEC_ID_SPEEDHQ) {
+    for (int mb_y_order = s->c.start_mb_y; mb_y_order < s->c.end_mb_y; mb_y_order++) {
+        int mb_y;
+        if (CONFIG_SPEEDHQ_ENCODER && s->c.codec_id == AV_CODEC_ID_SPEEDHQ) {
             int first_in_slice;
-            mb_y = ff_speedhq_mb_y_order_to_mb(mb_y_order, s->mb_height, &first_in_slice);
-            if (first_in_slice && mb_y_order != s->start_mb_y)
+            mb_y = ff_speedhq_mb_y_order_to_mb(mb_y_order, s->c.mb_height, &first_in_slice);
+            if (first_in_slice && mb_y_order != s->c.start_mb_y)
                 ff_speedhq_end_slice(s);
-            s->last_dc[0] = s->last_dc[1] = s->last_dc[2] = 1024 << s->intra_dc_precision;
+            s->c.last_dc[0] = s->c.last_dc[1] = s->c.last_dc[2] = 1024 << s->c.intra_dc_precision;
         } else {
             mb_y = mb_y_order;
         }
-        s->mb_x=0;
-        s->mb_y= mb_y;
+        s->c.mb_x = 0;
+        s->c.mb_y = mb_y;
 
-        ff_set_qscale(s, s->qscale);
-        ff_init_block_index(s);
+        ff_set_qscale(&s->c, s->c.qscale);
+        ff_init_block_index(&s->c);
 
-        for(mb_x=0; mb_x < s->mb_width; mb_x++) {
-            int xy= mb_y*s->mb_stride + mb_x; // removed const, H261 needs to adjust this
+        for (int mb_x = 0; mb_x < s->c.mb_width; mb_x++) {
+            int xy = mb_y*s->c.mb_stride + mb_x; // removed const, H261 needs to adjust this
             int mb_type= s->mb_type[xy];
 //            int d;
             int dmin= INT_MAX;
             int dir;
-            int size_increase =  s->avctx->internal->byte_buffer_size/4
-                               + s->mb_width*MAX_MB_BYTES;
+            int size_increase =  s->c.avctx->internal->byte_buffer_size/4
+                               + s->c.mb_width*MAX_MB_BYTES;
 
             ff_mpv_reallocate_putbitbuffer(s, MAX_MB_BYTES, size_increase);
             if (put_bytes_left(&s->pb, 0) < MAX_MB_BYTES){
-                av_log(s->avctx, AV_LOG_ERROR, "encoded frame too large\n");
+                av_log(s->c.avctx, AV_LOG_ERROR, "encoded frame too large\n");
                 return -1;
             }
-            if(s->data_partitioning){
+            if (s->c.data_partitioning) {
                 if (put_bytes_left(&s->pb2,    0) < MAX_MB_BYTES ||
                     put_bytes_left(&s->tex_pb, 0) < MAX_MB_BYTES) {
-                    av_log(s->avctx, AV_LOG_ERROR, "encoded partitioned frame too large\n");
+                    av_log(s->c.avctx, AV_LOG_ERROR, "encoded partitioned frame too large\n");
                     return -1;
                 }
             }
 
-            s->mb_x = mb_x;
-            s->mb_y = mb_y;  // moved into loop, can get changed by H.261
-            ff_update_block_index(s, 8, 0, s->chroma_x_shift);
+            s->c.mb_x = mb_x;
+            s->c.mb_y = mb_y;  // moved into loop, can get changed by H.261
+            ff_update_block_index(&s->c, 8, 0, s->c.chroma_x_shift);
 
-            if(CONFIG_H261_ENCODER && s->codec_id == AV_CODEC_ID_H261){
+            if(CONFIG_H261_ENCODER && s->c.codec_id == AV_CODEC_ID_H261){
                 ff_h261_reorder_mb_index(s);
-                xy= s->mb_y*s->mb_stride + s->mb_x;
+                xy = s->c.mb_y*s->c.mb_stride + s->c.mb_x;
                 mb_type= s->mb_type[xy];
             }
 
@@ -3063,40 +3066,39 @@ static int encode_thread(AVCodecContext *c, void *arg){
                                current_packet_size >= s->rtp_payload_size &&
                                mb_y + mb_x > 0;
 
-                if(s->start_mb_y == mb_y && mb_y > 0 && mb_x==0) is_gob_start=1;
+                if (s->c.start_mb_y == mb_y && mb_y > 0 && mb_x == 0) is_gob_start = 1;
 
-                switch(s->codec_id){
+                switch(s->c.codec_id){
                 case AV_CODEC_ID_H263:
                 case AV_CODEC_ID_H263P:
-                    if(!s->h263_slice_structured)
-                        if(s->mb_x || s->mb_y%s->gob_index) is_gob_start=0;
+                    if (!s->c.h263_slice_structured)
+                        if (s->c.mb_x || s->c.mb_y % s->c.gob_index) is_gob_start = 0;
                     break;
                 case AV_CODEC_ID_MPEG2VIDEO:
-                    if(s->mb_x==0 && s->mb_y!=0) is_gob_start=1;
+                    if (s->c.mb_x == 0 && s->c.mb_y != 0) is_gob_start = 1;
                 case AV_CODEC_ID_MPEG1VIDEO:
-                    if (s->codec_id == AV_CODEC_ID_MPEG1VIDEO && s->mb_y >= 175 ||
-                        s->mb_skip_run)
+                    if (s->c.codec_id == AV_CODEC_ID_MPEG1VIDEO && s->c.mb_y >= 175 ||
+                        s->c.mb_skip_run)
                         is_gob_start=0;
                     break;
                 case AV_CODEC_ID_MJPEG:
-                    if(s->mb_x==0 && s->mb_y!=0) is_gob_start=1;
+                    if (s->c.mb_x == 0 && s->c.mb_y != 0) is_gob_start = 1;
                     break;
                 }
 
                 if(is_gob_start){
-                    if(s->start_mb_y != mb_y || mb_x!=0){
+                    if (s->c.start_mb_y != mb_y || mb_x != 0) {
                         write_slice_end(s);
 
-                        if(CONFIG_MPEG4_ENCODER && s->codec_id==AV_CODEC_ID_MPEG4 && s->partitioned_frame){
+                        if (CONFIG_MPEG4_ENCODER && s->c.codec_id==AV_CODEC_ID_MPEG4 && s->c.partitioned_frame)
                             ff_mpeg4_init_partitions(s);
-                        }
                     }
 
                     av_assert2((put_bits_count(&s->pb)&7) == 0);
                     current_packet_size= put_bits_ptr(&s->pb) - s->ptr_lastgob;
 
-                    if (s->error_rate && s->resync_mb_x + s->resync_mb_y > 0) {
-                        int r = put_bytes_count(&s->pb, 0) + s->picture_number + 16 + s->mb_x + s->mb_y;
+                    if (s->error_rate && s->c.resync_mb_x + s->c.resync_mb_y > 0) {
+                        int r = put_bytes_count(&s->pb, 0) + s->c.picture_number + 16 + s->c.mb_x + s->c.mb_y;
                         int d = 100 / s->error_rate;
                         if(r % d == 0){
                             current_packet_size=0;
@@ -3105,18 +3107,18 @@ static int encode_thread(AVCodecContext *c, void *arg){
                         }
                     }
 
-                    switch(s->codec_id){
+                    switch (s->c.codec_id) {
                     case AV_CODEC_ID_MPEG4:
                         if (CONFIG_MPEG4_ENCODER) {
                             ff_mpeg4_encode_video_packet_header(s);
-                            ff_mpeg4_clean_buffers(s);
+                            ff_mpeg4_clean_buffers(&s->c);
                         }
                     break;
                     case AV_CODEC_ID_MPEG1VIDEO:
                     case AV_CODEC_ID_MPEG2VIDEO:
                         if (CONFIG_MPEG1VIDEO_ENCODER || CONFIG_MPEG2VIDEO_ENCODER) {
                             ff_mpeg1_encode_slice_header(s);
-                            ff_mpeg1_clean_buffers(s);
+                            ff_mpeg1_clean_buffers(&s->c);
                         }
                     break;
                     case AV_CODEC_ID_H263:
@@ -3128,25 +3130,25 @@ static int encode_thread(AVCodecContext *c, void *arg){
                     break;
                     }
 
-                    if (s->avctx->flags & AV_CODEC_FLAG_PASS1) {
+                    if (s->c.avctx->flags & AV_CODEC_FLAG_PASS1) {
                         int bits= put_bits_count(&s->pb);
                         s->misc_bits+= bits - s->last_bits;
                         s->last_bits= bits;
                     }
 
-                    s->ptr_lastgob += current_packet_size;
-                    s->first_slice_line=1;
-                    s->resync_mb_x=mb_x;
-                    s->resync_mb_y=mb_y;
+                    s->ptr_lastgob       += current_packet_size;
+                    s->c.first_slice_line = 1;
+                    s->c.resync_mb_x      = mb_x;
+                    s->c.resync_mb_y      = mb_y;
                 }
             }
 
-            if(  (s->resync_mb_x   == s->mb_x)
-               && s->resync_mb_y+1 == s->mb_y){
-                s->first_slice_line=0;
+            if(  (s->c.resync_mb_x   == s->c.mb_x)
+               && s->c.resync_mb_y+1 == s->c.mb_y){
+                s->c.first_slice_line = 0;
             }
 
-            s->mb_skipped=0;
+            s->c.mb_skipped = 0;
             s->dquant=0; //only for QP_RD
 
             update_mb_info(s, 0);
@@ -3157,173 +3159,173 @@ static int encode_thread(AVCodecContext *c, void *arg){
 
                 copy_context_before_encode(&backup_s, s);
                 backup_s.pb= s->pb;
-                best_s.data_partitioning= s->data_partitioning;
-                best_s.partitioned_frame= s->partitioned_frame;
-                if(s->data_partitioning){
+                best_s.c.data_partitioning = s->c.data_partitioning;
+                best_s.c.partitioned_frame = s->c.partitioned_frame;
+                if (s->c.data_partitioning) {
                     backup_s.pb2= s->pb2;
                     backup_s.tex_pb= s->tex_pb;
                 }
 
                 if(mb_type&CANDIDATE_MB_TYPE_INTER){
-                    s->mv_dir = MV_DIR_FORWARD;
-                    s->mv_type = MV_TYPE_16X16;
-                    s->mb_intra= 0;
-                    s->mv[0][0][0] = s->p_mv_table[xy][0];
-                    s->mv[0][0][1] = s->p_mv_table[xy][1];
+                    s->c.mv_dir      = MV_DIR_FORWARD;
+                    s->c.mv_type     = MV_TYPE_16X16;
+                    s->c.mb_intra    = 0;
+                    s->c.mv[0][0][0] = s->p_mv_table[xy][0];
+                    s->c.mv[0][0][1] = s->p_mv_table[xy][1];
                     encode_mb_hq(s, &backup_s, &best_s, pb, pb2, tex_pb,
-                                 &dmin, &next_block, s->mv[0][0][0], s->mv[0][0][1]);
+                                 &dmin, &next_block, s->c.mv[0][0][0], s->c.mv[0][0][1]);
                 }
                 if(mb_type&CANDIDATE_MB_TYPE_INTER_I){
-                    s->mv_dir = MV_DIR_FORWARD;
-                    s->mv_type = MV_TYPE_FIELD;
-                    s->mb_intra= 0;
+                    s->c.mv_dir   = MV_DIR_FORWARD;
+                    s->c.mv_type  = MV_TYPE_FIELD;
+                    s->c.mb_intra = 0;
                     for(i=0; i<2; i++){
-                        j= s->field_select[0][i] = s->p_field_select_table[i][xy];
-                        s->mv[0][i][0] = s->p_field_mv_table[i][j][xy][0];
-                        s->mv[0][i][1] = s->p_field_mv_table[i][j][xy][1];
+                        j = s->c.field_select[0][i] = s->p_field_select_table[i][xy];
+                        s->c.mv[0][i][0] = s->c.p_field_mv_table[i][j][xy][0];
+                        s->c.mv[0][i][1] = s->c.p_field_mv_table[i][j][xy][1];
                     }
                     encode_mb_hq(s, &backup_s, &best_s, pb, pb2, tex_pb,
                                  &dmin, &next_block, 0, 0);
                 }
                 if(mb_type&CANDIDATE_MB_TYPE_SKIPPED){
-                    s->mv_dir = MV_DIR_FORWARD;
-                    s->mv_type = MV_TYPE_16X16;
-                    s->mb_intra= 0;
-                    s->mv[0][0][0] = 0;
-                    s->mv[0][0][1] = 0;
+                    s->c.mv_dir      = MV_DIR_FORWARD;
+                    s->c.mv_type     = MV_TYPE_16X16;
+                    s->c.mb_intra    = 0;
+                    s->c.mv[0][0][0] = 0;
+                    s->c.mv[0][0][1] = 0;
                     encode_mb_hq(s, &backup_s, &best_s, pb, pb2, tex_pb,
-                                 &dmin, &next_block, s->mv[0][0][0], s->mv[0][0][1]);
+                                 &dmin, &next_block, s->c.mv[0][0][0], s->c.mv[0][0][1]);
                 }
                 if(mb_type&CANDIDATE_MB_TYPE_INTER4V){
-                    s->mv_dir = MV_DIR_FORWARD;
-                    s->mv_type = MV_TYPE_8X8;
-                    s->mb_intra= 0;
+                    s->c.mv_dir   = MV_DIR_FORWARD;
+                    s->c.mv_type  = MV_TYPE_8X8;
+                    s->c.mb_intra = 0;
                     for(i=0; i<4; i++){
-                        s->mv[0][i][0] = s->cur_pic.motion_val[0][s->block_index[i]][0];
-                        s->mv[0][i][1] = s->cur_pic.motion_val[0][s->block_index[i]][1];
+                        s->c.mv[0][i][0] = s->c.cur_pic.motion_val[0][s->c.block_index[i]][0];
+                        s->c.mv[0][i][1] = s->c.cur_pic.motion_val[0][s->c.block_index[i]][1];
                     }
                     encode_mb_hq(s, &backup_s, &best_s, pb, pb2, tex_pb,
                                  &dmin, &next_block, 0, 0);
                 }
                 if(mb_type&CANDIDATE_MB_TYPE_FORWARD){
-                    s->mv_dir = MV_DIR_FORWARD;
-                    s->mv_type = MV_TYPE_16X16;
-                    s->mb_intra= 0;
-                    s->mv[0][0][0] = s->b_forw_mv_table[xy][0];
-                    s->mv[0][0][1] = s->b_forw_mv_table[xy][1];
+                    s->c.mv_dir      = MV_DIR_FORWARD;
+                    s->c.mv_type     = MV_TYPE_16X16;
+                    s->c.mb_intra    = 0;
+                    s->c.mv[0][0][0] = s->b_forw_mv_table[xy][0];
+                    s->c.mv[0][0][1] = s->b_forw_mv_table[xy][1];
                     encode_mb_hq(s, &backup_s, &best_s, pb, pb2, tex_pb,
-                                 &dmin, &next_block, s->mv[0][0][0], s->mv[0][0][1]);
+                                 &dmin, &next_block, s->c.mv[0][0][0], s->c.mv[0][0][1]);
                 }
                 if(mb_type&CANDIDATE_MB_TYPE_BACKWARD){
-                    s->mv_dir = MV_DIR_BACKWARD;
-                    s->mv_type = MV_TYPE_16X16;
-                    s->mb_intra= 0;
-                    s->mv[1][0][0] = s->b_back_mv_table[xy][0];
-                    s->mv[1][0][1] = s->b_back_mv_table[xy][1];
+                    s->c.mv_dir      = MV_DIR_BACKWARD;
+                    s->c.mv_type     = MV_TYPE_16X16;
+                    s->c.mb_intra    = 0;
+                    s->c.mv[1][0][0] = s->b_back_mv_table[xy][0];
+                    s->c.mv[1][0][1] = s->b_back_mv_table[xy][1];
                     encode_mb_hq(s, &backup_s, &best_s, pb, pb2, tex_pb,
-                                 &dmin, &next_block, s->mv[1][0][0], s->mv[1][0][1]);
+                                 &dmin, &next_block, s->c.mv[1][0][0], s->c.mv[1][0][1]);
                 }
                 if(mb_type&CANDIDATE_MB_TYPE_BIDIR){
-                    s->mv_dir = MV_DIR_FORWARD | MV_DIR_BACKWARD;
-                    s->mv_type = MV_TYPE_16X16;
-                    s->mb_intra= 0;
-                    s->mv[0][0][0] = s->b_bidir_forw_mv_table[xy][0];
-                    s->mv[0][0][1] = s->b_bidir_forw_mv_table[xy][1];
-                    s->mv[1][0][0] = s->b_bidir_back_mv_table[xy][0];
-                    s->mv[1][0][1] = s->b_bidir_back_mv_table[xy][1];
+                    s->c.mv_dir = MV_DIR_FORWARD | MV_DIR_BACKWARD;
+                    s->c.mv_type = MV_TYPE_16X16;
+                    s->c.mb_intra= 0;
+                    s->c.mv[0][0][0] = s->b_bidir_forw_mv_table[xy][0];
+                    s->c.mv[0][0][1] = s->b_bidir_forw_mv_table[xy][1];
+                    s->c.mv[1][0][0] = s->b_bidir_back_mv_table[xy][0];
+                    s->c.mv[1][0][1] = s->b_bidir_back_mv_table[xy][1];
                     encode_mb_hq(s, &backup_s, &best_s, pb, pb2, tex_pb,
                                  &dmin, &next_block, 0, 0);
                 }
                 if(mb_type&CANDIDATE_MB_TYPE_FORWARD_I){
-                    s->mv_dir = MV_DIR_FORWARD;
-                    s->mv_type = MV_TYPE_FIELD;
-                    s->mb_intra= 0;
+                    s->c.mv_dir = MV_DIR_FORWARD;
+                    s->c.mv_type = MV_TYPE_FIELD;
+                    s->c.mb_intra= 0;
                     for(i=0; i<2; i++){
-                        j= s->field_select[0][i] = s->b_field_select_table[0][i][xy];
-                        s->mv[0][i][0] = s->b_field_mv_table[0][i][j][xy][0];
-                        s->mv[0][i][1] = s->b_field_mv_table[0][i][j][xy][1];
+                        j = s->c.field_select[0][i] = s->b_field_select_table[0][i][xy];
+                        s->c.mv[0][i][0] = s->b_field_mv_table[0][i][j][xy][0];
+                        s->c.mv[0][i][1] = s->b_field_mv_table[0][i][j][xy][1];
                     }
                     encode_mb_hq(s, &backup_s, &best_s, pb, pb2, tex_pb,
                                  &dmin, &next_block, 0, 0);
                 }
                 if(mb_type&CANDIDATE_MB_TYPE_BACKWARD_I){
-                    s->mv_dir = MV_DIR_BACKWARD;
-                    s->mv_type = MV_TYPE_FIELD;
-                    s->mb_intra= 0;
+                    s->c.mv_dir = MV_DIR_BACKWARD;
+                    s->c.mv_type = MV_TYPE_FIELD;
+                    s->c.mb_intra= 0;
                     for(i=0; i<2; i++){
-                        j= s->field_select[1][i] = s->b_field_select_table[1][i][xy];
-                        s->mv[1][i][0] = s->b_field_mv_table[1][i][j][xy][0];
-                        s->mv[1][i][1] = s->b_field_mv_table[1][i][j][xy][1];
+                        j = s->c.field_select[1][i] = s->b_field_select_table[1][i][xy];
+                        s->c.mv[1][i][0] = s->b_field_mv_table[1][i][j][xy][0];
+                        s->c.mv[1][i][1] = s->b_field_mv_table[1][i][j][xy][1];
                     }
                     encode_mb_hq(s, &backup_s, &best_s, pb, pb2, tex_pb,
                                  &dmin, &next_block, 0, 0);
                 }
                 if(mb_type&CANDIDATE_MB_TYPE_BIDIR_I){
-                    s->mv_dir = MV_DIR_FORWARD | MV_DIR_BACKWARD;
-                    s->mv_type = MV_TYPE_FIELD;
-                    s->mb_intra= 0;
+                    s->c.mv_dir   = MV_DIR_FORWARD | MV_DIR_BACKWARD;
+                    s->c.mv_type  = MV_TYPE_FIELD;
+                    s->c.mb_intra = 0;
                     for(dir=0; dir<2; dir++){
                         for(i=0; i<2; i++){
-                            j= s->field_select[dir][i] = s->b_field_select_table[dir][i][xy];
-                            s->mv[dir][i][0] = s->b_field_mv_table[dir][i][j][xy][0];
-                            s->mv[dir][i][1] = s->b_field_mv_table[dir][i][j][xy][1];
+                            j = s->c.field_select[dir][i] = s->b_field_select_table[dir][i][xy];
+                            s->c.mv[dir][i][0] = s->b_field_mv_table[dir][i][j][xy][0];
+                            s->c.mv[dir][i][1] = s->b_field_mv_table[dir][i][j][xy][1];
                         }
                     }
                     encode_mb_hq(s, &backup_s, &best_s, pb, pb2, tex_pb,
                                  &dmin, &next_block, 0, 0);
                 }
                 if(mb_type&CANDIDATE_MB_TYPE_INTRA){
-                    s->mv_dir = 0;
-                    s->mv_type = MV_TYPE_16X16;
-                    s->mb_intra= 1;
-                    s->mv[0][0][0] = 0;
-                    s->mv[0][0][1] = 0;
+                    s->c.mv_dir      = 0;
+                    s->c.mv_type     = MV_TYPE_16X16;
+                    s->c.mb_intra    = 1;
+                    s->c.mv[0][0][0] = 0;
+                    s->c.mv[0][0][1] = 0;
                     encode_mb_hq(s, &backup_s, &best_s, pb, pb2, tex_pb,
                                  &dmin, &next_block, 0, 0);
-                    s->mbintra_table[xy] = 1;
+                    s->c.mbintra_table[xy] = 1;
                 }
 
                 if ((s->mpv_flags & FF_MPV_FLAG_QP_RD) && dmin < INT_MAX) {
-                    if(best_s.mv_type==MV_TYPE_16X16){ //FIXME move 4mv after QPRD
-                        const int last_qp= backup_s.qscale;
+                    if (best_s.c.mv_type == MV_TYPE_16X16) { //FIXME move 4mv after QPRD
+                        const int last_qp = backup_s.c.qscale;
                         int qpi, qp, dc[6];
                         int16_t ac[6][16];
-                        const int mvdir= (best_s.mv_dir&MV_DIR_BACKWARD) ? 1 : 0;
+                        const int mvdir = (best_s.c.mv_dir & MV_DIR_BACKWARD) ? 1 : 0;
                         static const int dquant_tab[4]={-1,1,-2,2};
-                        int storecoefs = s->mb_intra && s->dc_val[0];
+                        int storecoefs = s->c.mb_intra && s->c.dc_val[0];
 
                         av_assert2(backup_s.dquant == 0);
 
                         //FIXME intra
-                        s->mv_dir= best_s.mv_dir;
-                        s->mv_type = MV_TYPE_16X16;
-                        s->mb_intra= best_s.mb_intra;
-                        s->mv[0][0][0] = best_s.mv[0][0][0];
-                        s->mv[0][0][1] = best_s.mv[0][0][1];
-                        s->mv[1][0][0] = best_s.mv[1][0][0];
-                        s->mv[1][0][1] = best_s.mv[1][0][1];
-
-                        qpi = s->pict_type == AV_PICTURE_TYPE_B ? 2 : 0;
+                        s->c.mv_dir   = best_s.c.mv_dir;
+                        s->c.mv_type  = MV_TYPE_16X16;
+                        s->c.mb_intra = best_s.c.mb_intra;
+                        s->c.mv[0][0][0] = best_s.c.mv[0][0][0];
+                        s->c.mv[0][0][1] = best_s.c.mv[0][0][1];
+                        s->c.mv[1][0][0] = best_s.c.mv[1][0][0];
+                        s->c.mv[1][0][1] = best_s.c.mv[1][0][1];
+
+                        qpi = s->c.pict_type == AV_PICTURE_TYPE_B ? 2 : 0;
                         for(; qpi<4; qpi++){
                             int dquant= dquant_tab[qpi];
                             qp= last_qp + dquant;
-                            if(qp < s->avctx->qmin || qp > s->avctx->qmax)
+                            if (qp < s->c.avctx->qmin || qp > s->c.avctx->qmax)
                                 continue;
                             backup_s.dquant= dquant;
                             if(storecoefs){
                                 for(i=0; i<6; i++){
-                                    dc[i]= s->dc_val[0][ s->block_index[i] ];
-                                    memcpy(ac[i], s->ac_val[0][s->block_index[i]], sizeof(int16_t)*16);
+                                    dc[i] = s->c.dc_val[0][s->c.block_index[i]];
+                                    memcpy(ac[i], s->c.ac_val[0][s->c.block_index[i]], sizeof(int16_t)*16);
                                 }
                             }
 
                             encode_mb_hq(s, &backup_s, &best_s, pb, pb2, tex_pb,
-                                         &dmin, &next_block, s->mv[mvdir][0][0], s->mv[mvdir][0][1]);
-                            if(best_s.qscale != qp){
+                                         &dmin, &next_block, s->c.mv[mvdir][0][0], s->c.mv[mvdir][0][1]);
+                            if (best_s.c.qscale != qp) {
                                 if(storecoefs){
                                     for(i=0; i<6; i++){
-                                        s->dc_val[0][ s->block_index[i] ]= dc[i];
-                                        memcpy(s->ac_val[0][s->block_index[i]], ac[i], sizeof(int16_t)*16);
+                                        s->c.dc_val[0][s->c.block_index[i]] = dc[i];
+                                        memcpy(s->c.ac_val[0][s->c.block_index[i]], ac[i], sizeof(int16_t)*16);
                                     }
                                 }
                             }
@@ -3335,45 +3337,45 @@ static int encode_thread(AVCodecContext *c, void *arg){
                     int my= s->b_direct_mv_table[xy][1];
 
                     backup_s.dquant = 0;
-                    s->mv_dir = MV_DIR_FORWARD | MV_DIR_BACKWARD | MV_DIRECT;
-                    s->mb_intra= 0;
-                    ff_mpeg4_set_direct_mv(s, mx, my);
+                    s->c.mv_dir     = MV_DIR_FORWARD | MV_DIR_BACKWARD | MV_DIRECT;
+                    s->c.mb_intra   = 0;
+                    ff_mpeg4_set_direct_mv(&s->c, mx, my);
                     encode_mb_hq(s, &backup_s, &best_s, pb, pb2, tex_pb,
                                  &dmin, &next_block, mx, my);
                 }
                 if(CONFIG_MPEG4_ENCODER && mb_type&CANDIDATE_MB_TYPE_DIRECT0){
                     backup_s.dquant = 0;
-                    s->mv_dir = MV_DIR_FORWARD | MV_DIR_BACKWARD | MV_DIRECT;
-                    s->mb_intra= 0;
-                    ff_mpeg4_set_direct_mv(s, 0, 0);
+                    s->c.mv_dir   = MV_DIR_FORWARD | MV_DIR_BACKWARD | MV_DIRECT;
+                    s->c.mb_intra = 0;
+                    ff_mpeg4_set_direct_mv(&s->c, 0, 0);
                     encode_mb_hq(s, &backup_s, &best_s, pb, pb2, tex_pb,
                                  &dmin, &next_block, 0, 0);
                 }
-                if (!best_s.mb_intra && s->mpv_flags & FF_MPV_FLAG_SKIP_RD) {
+                if (!best_s.c.mb_intra && s->mpv_flags & FF_MPV_FLAG_SKIP_RD) {
                     int coded=0;
                     for(i=0; i<6; i++)
-                        coded |= s->block_last_index[i];
+                        coded |= s->c.block_last_index[i];
                     if(coded){
                         int mx,my;
-                        memcpy(s->mv, best_s.mv, sizeof(s->mv));
-                        if(CONFIG_MPEG4_ENCODER && best_s.mv_dir & MV_DIRECT){
+                        memcpy(s->c.mv, best_s.c.mv, sizeof(s->c.mv));
+                        if (CONFIG_MPEG4_ENCODER && best_s.c.mv_dir & MV_DIRECT) {
                             mx=my=0; //FIXME find the one we actually used
-                            ff_mpeg4_set_direct_mv(s, mx, my);
-                        }else if(best_s.mv_dir&MV_DIR_BACKWARD){
-                            mx= s->mv[1][0][0];
-                            my= s->mv[1][0][1];
+                            ff_mpeg4_set_direct_mv(&s->c, mx, my);
+                        } else if (best_s.c.mv_dir & MV_DIR_BACKWARD) {
+                            mx = s->c.mv[1][0][0];
+                            my = s->c.mv[1][0][1];
                         }else{
-                            mx= s->mv[0][0][0];
-                            my= s->mv[0][0][1];
+                            mx = s->c.mv[0][0][0];
+                            my = s->c.mv[0][0][1];
                         }
 
-                        s->mv_dir= best_s.mv_dir;
-                        s->mv_type = best_s.mv_type;
-                        s->mb_intra= 0;
-/*                        s->mv[0][0][0] = best_s.mv[0][0][0];
-                        s->mv[0][0][1] = best_s.mv[0][0][1];
-                        s->mv[1][0][0] = best_s.mv[1][0][0];
-                        s->mv[1][0][1] = best_s.mv[1][0][1];*/
+                        s->c.mv_dir   = best_s.c.mv_dir;
+                        s->c.mv_type  = best_s.c.mv_type;
+                        s->c.mb_intra = 0;
+/*                        s->c.mv[0][0][0] = best_s.c.mv[0][0][0];
+                        s->c.mv[0][0][1] = best_s.c.mv[0][0][1];
+                        s->c.mv[1][0][0] = best_s.c.mv[1][0][0];
+                        s->c.mv[1][0][1] = best_s.c.mv[1][0][1];*/
                         backup_s.dquant= 0;
                         s->skipdct=1;
                         encode_mb_hq(s, &backup_s, &best_s, pb, pb2, tex_pb,
@@ -3389,7 +3391,7 @@ static int encode_thread(AVCodecContext *c, void *arg){
                 ff_copy_bits(&backup_s.pb, bit_buf[next_block^1], pb_bits_count);
                 s->pb= backup_s.pb;
 
-                if(s->data_partitioning){
+                if (s->c.data_partitioning) {
                     pb2_bits_count= put_bits_count(&s->pb2);
                     flush_put_bits(&s->pb2);
                     ff_copy_bits(&backup_s.pb2, bit_buf2[next_block^1], pb2_bits_count);
@@ -3403,178 +3405,178 @@ static int encode_thread(AVCodecContext *c, void *arg){
                 s->last_bits= put_bits_count(&s->pb);
 
                 if (CONFIG_H263_ENCODER &&
-                    s->out_format == FMT_H263 && s->pict_type!=AV_PICTURE_TYPE_B)
+                    s->c.out_format == FMT_H263 && s->c.pict_type!=AV_PICTURE_TYPE_B)
                     ff_h263_update_mb(s);
 
                 if(next_block==0){ //FIXME 16 vs linesize16
-                    s->hdsp.put_pixels_tab[0][0](s->dest[0], s->sc.rd_scratchpad                     , s->linesize  ,16);
-                    s->hdsp.put_pixels_tab[1][0](s->dest[1], s->sc.rd_scratchpad + 16*s->linesize    , s->uvlinesize, 8);
-                    s->hdsp.put_pixels_tab[1][0](s->dest[2], s->sc.rd_scratchpad + 16*s->linesize + 8, s->uvlinesize, 8);
+                    s->c.hdsp.put_pixels_tab[0][0](s->c.dest[0], s->c.sc.rd_scratchpad                     , s->c.linesize  ,16);
+                    s->c.hdsp.put_pixels_tab[1][0](s->c.dest[1], s->c.sc.rd_scratchpad + 16*s->c.linesize    , s->c.uvlinesize, 8);
+                    s->c.hdsp.put_pixels_tab[1][0](s->c.dest[2], s->c.sc.rd_scratchpad + 16*s->c.linesize + 8, s->c.uvlinesize, 8);
                 }
 
-                if(s->avctx->mb_decision == FF_MB_DECISION_BITS)
-                    mpv_reconstruct_mb(s, s->block);
+                if(s->c.avctx->mb_decision == FF_MB_DECISION_BITS)
+                    mpv_reconstruct_mb(s, s->c.block);
             } else {
                 int motion_x = 0, motion_y = 0;
-                s->mv_type=MV_TYPE_16X16;
+                s->c.mv_type=MV_TYPE_16X16;
                 // only one MB-Type possible
 
                 switch(mb_type){
                 case CANDIDATE_MB_TYPE_INTRA:
-                    s->mv_dir = 0;
-                    s->mb_intra= 1;
-                    motion_x= s->mv[0][0][0] = 0;
-                    motion_y= s->mv[0][0][1] = 0;
-                    s->mbintra_table[xy] = 1;
+                    s->c.mv_dir = 0;
+                    s->c.mb_intra= 1;
+                    motion_x= s->c.mv[0][0][0] = 0;
+                    motion_y= s->c.mv[0][0][1] = 0;
+                    s->c.mbintra_table[xy] = 1;
                     break;
                 case CANDIDATE_MB_TYPE_INTER:
-                    s->mv_dir = MV_DIR_FORWARD;
-                    s->mb_intra= 0;
-                    motion_x= s->mv[0][0][0] = s->p_mv_table[xy][0];
-                    motion_y= s->mv[0][0][1] = s->p_mv_table[xy][1];
+                    s->c.mv_dir = MV_DIR_FORWARD;
+                    s->c.mb_intra= 0;
+                    motion_x= s->c.mv[0][0][0] = s->p_mv_table[xy][0];
+                    motion_y= s->c.mv[0][0][1] = s->p_mv_table[xy][1];
                     break;
                 case CANDIDATE_MB_TYPE_INTER_I:
-                    s->mv_dir = MV_DIR_FORWARD;
-                    s->mv_type = MV_TYPE_FIELD;
-                    s->mb_intra= 0;
+                    s->c.mv_dir = MV_DIR_FORWARD;
+                    s->c.mv_type = MV_TYPE_FIELD;
+                    s->c.mb_intra= 0;
                     for(i=0; i<2; i++){
-                        j= s->field_select[0][i] = s->p_field_select_table[i][xy];
-                        s->mv[0][i][0] = s->p_field_mv_table[i][j][xy][0];
-                        s->mv[0][i][1] = s->p_field_mv_table[i][j][xy][1];
+                        j= s->c.field_select[0][i] = s->p_field_select_table[i][xy];
+                        s->c.mv[0][i][0] = s->c.p_field_mv_table[i][j][xy][0];
+                        s->c.mv[0][i][1] = s->c.p_field_mv_table[i][j][xy][1];
                     }
                     break;
                 case CANDIDATE_MB_TYPE_INTER4V:
-                    s->mv_dir = MV_DIR_FORWARD;
-                    s->mv_type = MV_TYPE_8X8;
-                    s->mb_intra= 0;
+                    s->c.mv_dir = MV_DIR_FORWARD;
+                    s->c.mv_type = MV_TYPE_8X8;
+                    s->c.mb_intra= 0;
                     for(i=0; i<4; i++){
-                        s->mv[0][i][0] = s->cur_pic.motion_val[0][s->block_index[i]][0];
-                        s->mv[0][i][1] = s->cur_pic.motion_val[0][s->block_index[i]][1];
+                        s->c.mv[0][i][0] = s->c.cur_pic.motion_val[0][s->c.block_index[i]][0];
+                        s->c.mv[0][i][1] = s->c.cur_pic.motion_val[0][s->c.block_index[i]][1];
                     }
                     break;
                 case CANDIDATE_MB_TYPE_DIRECT:
                     if (CONFIG_MPEG4_ENCODER) {
-                        s->mv_dir = MV_DIR_FORWARD|MV_DIR_BACKWARD|MV_DIRECT;
-                        s->mb_intra= 0;
+                        s->c.mv_dir = MV_DIR_FORWARD|MV_DIR_BACKWARD|MV_DIRECT;
+                        s->c.mb_intra= 0;
                         motion_x=s->b_direct_mv_table[xy][0];
                         motion_y=s->b_direct_mv_table[xy][1];
-                        ff_mpeg4_set_direct_mv(s, motion_x, motion_y);
+                        ff_mpeg4_set_direct_mv(&s->c, motion_x, motion_y);
                     }
                     break;
                 case CANDIDATE_MB_TYPE_DIRECT0:
                     if (CONFIG_MPEG4_ENCODER) {
-                        s->mv_dir = MV_DIR_FORWARD|MV_DIR_BACKWARD|MV_DIRECT;
-                        s->mb_intra= 0;
-                        ff_mpeg4_set_direct_mv(s, 0, 0);
+                        s->c.mv_dir = MV_DIR_FORWARD|MV_DIR_BACKWARD|MV_DIRECT;
+                        s->c.mb_intra= 0;
+                        ff_mpeg4_set_direct_mv(&s->c, 0, 0);
                     }
                     break;
                 case CANDIDATE_MB_TYPE_BIDIR:
-                    s->mv_dir = MV_DIR_FORWARD | MV_DIR_BACKWARD;
-                    s->mb_intra= 0;
-                    s->mv[0][0][0] = s->b_bidir_forw_mv_table[xy][0];
-                    s->mv[0][0][1] = s->b_bidir_forw_mv_table[xy][1];
-                    s->mv[1][0][0] = s->b_bidir_back_mv_table[xy][0];
-                    s->mv[1][0][1] = s->b_bidir_back_mv_table[xy][1];
+                    s->c.mv_dir = MV_DIR_FORWARD | MV_DIR_BACKWARD;
+                    s->c.mb_intra= 0;
+                    s->c.mv[0][0][0] = s->b_bidir_forw_mv_table[xy][0];
+                    s->c.mv[0][0][1] = s->b_bidir_forw_mv_table[xy][1];
+                    s->c.mv[1][0][0] = s->b_bidir_back_mv_table[xy][0];
+                    s->c.mv[1][0][1] = s->b_bidir_back_mv_table[xy][1];
                     break;
                 case CANDIDATE_MB_TYPE_BACKWARD:
-                    s->mv_dir = MV_DIR_BACKWARD;
-                    s->mb_intra= 0;
-                    motion_x= s->mv[1][0][0] = s->b_back_mv_table[xy][0];
-                    motion_y= s->mv[1][0][1] = s->b_back_mv_table[xy][1];
+                    s->c.mv_dir = MV_DIR_BACKWARD;
+                    s->c.mb_intra= 0;
+                    motion_x= s->c.mv[1][0][0] = s->b_back_mv_table[xy][0];
+                    motion_y= s->c.mv[1][0][1] = s->b_back_mv_table[xy][1];
                     break;
                 case CANDIDATE_MB_TYPE_FORWARD:
-                    s->mv_dir = MV_DIR_FORWARD;
-                    s->mb_intra= 0;
-                    motion_x= s->mv[0][0][0] = s->b_forw_mv_table[xy][0];
-                    motion_y= s->mv[0][0][1] = s->b_forw_mv_table[xy][1];
+                    s->c.mv_dir = MV_DIR_FORWARD;
+                    s->c.mb_intra= 0;
+                    motion_x= s->c.mv[0][0][0] = s->b_forw_mv_table[xy][0];
+                    motion_y= s->c.mv[0][0][1] = s->b_forw_mv_table[xy][1];
                     break;
                 case CANDIDATE_MB_TYPE_FORWARD_I:
-                    s->mv_dir = MV_DIR_FORWARD;
-                    s->mv_type = MV_TYPE_FIELD;
-                    s->mb_intra= 0;
+                    s->c.mv_dir = MV_DIR_FORWARD;
+                    s->c.mv_type = MV_TYPE_FIELD;
+                    s->c.mb_intra= 0;
                     for(i=0; i<2; i++){
-                        j= s->field_select[0][i] = s->b_field_select_table[0][i][xy];
-                        s->mv[0][i][0] = s->b_field_mv_table[0][i][j][xy][0];
-                        s->mv[0][i][1] = s->b_field_mv_table[0][i][j][xy][1];
+                        j= s->c.field_select[0][i] = s->b_field_select_table[0][i][xy];
+                        s->c.mv[0][i][0] = s->b_field_mv_table[0][i][j][xy][0];
+                        s->c.mv[0][i][1] = s->b_field_mv_table[0][i][j][xy][1];
                     }
                     break;
                 case CANDIDATE_MB_TYPE_BACKWARD_I:
-                    s->mv_dir = MV_DIR_BACKWARD;
-                    s->mv_type = MV_TYPE_FIELD;
-                    s->mb_intra= 0;
+                    s->c.mv_dir = MV_DIR_BACKWARD;
+                    s->c.mv_type = MV_TYPE_FIELD;
+                    s->c.mb_intra= 0;
                     for(i=0; i<2; i++){
-                        j= s->field_select[1][i] = s->b_field_select_table[1][i][xy];
-                        s->mv[1][i][0] = s->b_field_mv_table[1][i][j][xy][0];
-                        s->mv[1][i][1] = s->b_field_mv_table[1][i][j][xy][1];
+                        j= s->c.field_select[1][i] = s->b_field_select_table[1][i][xy];
+                        s->c.mv[1][i][0] = s->b_field_mv_table[1][i][j][xy][0];
+                        s->c.mv[1][i][1] = s->b_field_mv_table[1][i][j][xy][1];
                     }
                     break;
                 case CANDIDATE_MB_TYPE_BIDIR_I:
-                    s->mv_dir = MV_DIR_FORWARD | MV_DIR_BACKWARD;
-                    s->mv_type = MV_TYPE_FIELD;
-                    s->mb_intra= 0;
+                    s->c.mv_dir = MV_DIR_FORWARD | MV_DIR_BACKWARD;
+                    s->c.mv_type = MV_TYPE_FIELD;
+                    s->c.mb_intra= 0;
                     for(dir=0; dir<2; dir++){
                         for(i=0; i<2; i++){
-                            j= s->field_select[dir][i] = s->b_field_select_table[dir][i][xy];
-                            s->mv[dir][i][0] = s->b_field_mv_table[dir][i][j][xy][0];
-                            s->mv[dir][i][1] = s->b_field_mv_table[dir][i][j][xy][1];
+                            j= s->c.field_select[dir][i] = s->b_field_select_table[dir][i][xy];
+                            s->c.mv[dir][i][0] = s->b_field_mv_table[dir][i][j][xy][0];
+                            s->c.mv[dir][i][1] = s->b_field_mv_table[dir][i][j][xy][1];
                         }
                     }
                     break;
                 default:
-                    av_log(s->avctx, AV_LOG_ERROR, "illegal MB type\n");
+                    av_log(s->c.avctx, AV_LOG_ERROR, "illegal MB type\n");
                 }
 
                 encode_mb(s, motion_x, motion_y);
 
                 // RAL: Update last macroblock type
-                s->last_mv_dir = s->mv_dir;
+                s->last_mv_dir = s->c.mv_dir;
 
                 if (CONFIG_H263_ENCODER &&
-                    s->out_format == FMT_H263 && s->pict_type!=AV_PICTURE_TYPE_B)
+                    s->c.out_format == FMT_H263 && s->c.pict_type != AV_PICTURE_TYPE_B)
                     ff_h263_update_mb(s);
 
-                mpv_reconstruct_mb(s, s->block);
+                mpv_reconstruct_mb(s, s->c.block);
             }
 
-            s->cur_pic.qscale_table[xy] = s->qscale;
+            s->c.cur_pic.qscale_table[xy] = s->c.qscale;
 
             /* clean the MV table in IPS frames for direct mode in B-frames */
-            if(s->mb_intra /* && I,P,S_TYPE */){
+            if (s->c.mb_intra /* && I,P,S_TYPE */) {
                 s->p_mv_table[xy][0]=0;
                 s->p_mv_table[xy][1]=0;
-            } else if ((s->h263_pred || s->h263_aic) && s->mbintra_table[xy])
-                ff_clean_intra_table_entries(s);
+            } else if ((s->c.h263_pred || s->c.h263_aic) && s->c.mbintra_table[xy])
+                ff_clean_intra_table_entries(&s->c);
 
-            if (s->avctx->flags & AV_CODEC_FLAG_PSNR) {
+            if (s->c.avctx->flags & AV_CODEC_FLAG_PSNR) {
                 int w= 16;
                 int h= 16;
 
-                if(s->mb_x*16 + 16 > s->width ) w= s->width - s->mb_x*16;
-                if(s->mb_y*16 + 16 > s->height) h= s->height- s->mb_y*16;
+                if(s->c.mb_x*16 + 16 > s->c.width ) w= s->c.width - s->c.mb_x*16;
+                if(s->c.mb_y*16 + 16 > s->c.height) h= s->c.height- s->c.mb_y*16;
 
                 s->encoding_error[0] += sse(
-                    s, s->new_pic->data[0] + s->mb_x*16 + s->mb_y*s->linesize*16,
-                    s->dest[0], w, h, s->linesize);
+                    s, s->new_pic->data[0] + s->c.mb_x*16 + s->c.mb_y*s->c.linesize*16,
+                    s->c.dest[0], w, h, s->c.linesize);
                 s->encoding_error[1] += sse(
-                    s, s->new_pic->data[1] + s->mb_x*8  + s->mb_y*s->uvlinesize*chr_h,
-                    s->dest[1], w>>1, h>>s->chroma_y_shift, s->uvlinesize);
+                    s, s->new_pic->data[1] + s->c.mb_x*8  + s->c.mb_y*s->c.uvlinesize*chr_h,
+                    s->c.dest[1], w>>1, h>>s->c.chroma_y_shift, s->c.uvlinesize);
                 s->encoding_error[2] += sse(
-                    s, s->new_pic->data[2] + s->mb_x*8  + s->mb_y*s->uvlinesize*chr_h,
-                    s->dest[2], w>>1, h>>s->chroma_y_shift, s->uvlinesize);
+                    s, s->new_pic->data[2] + s->c.mb_x*8  + s->c.mb_y*s->c.uvlinesize*chr_h,
+                    s->c.dest[2], w>>1, h>>s->c.chroma_y_shift, s->c.uvlinesize);
             }
-            if(s->loop_filter){
-                if(CONFIG_H263_ENCODER && s->out_format == FMT_H263)
-                    ff_h263_loop_filter(s);
+            if (s->c.loop_filter) {
+                if (CONFIG_H263_ENCODER && s->c.out_format == FMT_H263)
+                    ff_h263_loop_filter(&s->c);
             }
-            ff_dlog(s->avctx, "MB %d %d bits\n",
-                    s->mb_x + s->mb_y * s->mb_stride, put_bits_count(&s->pb));
+            ff_dlog(s->c.avctx, "MB %d %d bits\n",
+                    s->c.mb_x + s->c.mb_y * s->c.mb_stride, put_bits_count(&s->pb));
         }
     }
 
 #if CONFIG_MSMPEG4ENC
     //not beautiful here but we must write it before flushing so it has to be here
-    if (s->msmpeg4_version != MSMP4_UNUSED && s->msmpeg4_version < MSMP4_WMV1 &&
-        s->pict_type == AV_PICTURE_TYPE_I)
+    if (s->c.msmpeg4_version != MSMP4_UNUSED && s->c.msmpeg4_version < MSMP4_WMV1 &&
+        s->c.pict_type == AV_PICTURE_TYPE_I)
         ff_msmpeg4_encode_ext_header(s);
 #endif
 
@@ -3584,13 +3586,15 @@ static int encode_thread(AVCodecContext *c, void *arg){
 }
 
 #define MERGE(field) dst->field += src->field; src->field=0
-static void merge_context_after_me(MpegEncContext *dst, MpegEncContext *src){
-    MERGE(me.scene_change_score);
-    MERGE(me.mc_mb_var_sum_temp);
-    MERGE(me.mb_var_sum_temp);
+static void merge_context_after_me(MPVEncContext *const dst, MPVEncContext *const src)
+{
+    MERGE(c.me.scene_change_score);
+    MERGE(c.me.mc_mb_var_sum_temp);
+    MERGE(c.me.mb_var_sum_temp);
 }
 
-static void merge_context_after_encode(MpegEncContext *dst, MpegEncContext *src){
+static void merge_context_after_encode(MPVEncContext *const dst, MPVEncContext *const src)
+{
     int i;
 
     MERGE(dct_count[0]); //note, the other dct vars are not part of the context
@@ -3619,22 +3623,22 @@ static void merge_context_after_encode(MpegEncContext *dst, MpegEncContext *src)
 
 static int estimate_qp(MPVMainEncContext *const m, int dry_run)
 {
-    MpegEncContext *const s = &m->s;
+    MPVEncContext *const s = &m->s;
 
     if (m->next_lambda){
-        s->cur_pic.ptr->f->quality = m->next_lambda;
+        s->c.cur_pic.ptr->f->quality = m->next_lambda;
         if(!dry_run) m->next_lambda= 0;
     } else if (!m->fixed_qscale) {
         int quality = ff_rate_estimate_qscale(m, dry_run);
-        s->cur_pic.ptr->f->quality = quality;
-        if (s->cur_pic.ptr->f->quality < 0)
+        s->c.cur_pic.ptr->f->quality = quality;
+        if (s->c.cur_pic.ptr->f->quality < 0)
             return -1;
     }
 
     if(s->adaptive_quant){
         init_qscale_tab(s);
 
-        switch(s->codec_id){
+        switch (s->c.codec_id) {
         case AV_CODEC_ID_MPEG4:
             if (CONFIG_MPEG4_ENCODER)
                 ff_clean_mpeg4_qscales(s);
@@ -3647,169 +3651,174 @@ static int estimate_qp(MPVMainEncContext *const m, int dry_run)
             break;
         }
 
-        s->lambda= s->lambda_table[0];
+        s->c.lambda= s->lambda_table[0];
         //FIXME broken
     }else
-        s->lambda = s->cur_pic.ptr->f->quality;
+        s->c.lambda = s->c.cur_pic.ptr->f->quality;
     update_qscale(m);
     return 0;
 }
 
 /* must be called before writing the header */
-static void set_frame_distances(MpegEncContext * s){
-    av_assert1(s->cur_pic.ptr->f->pts != AV_NOPTS_VALUE);
-    s->time = s->cur_pic.ptr->f->pts * s->avctx->time_base.num;
+static void set_frame_distances(MPVEncContext *const s)
+{
+    av_assert1(s->c.cur_pic.ptr->f->pts != AV_NOPTS_VALUE);
+    s->c.time = s->c.cur_pic.ptr->f->pts * s->c.avctx->time_base.num;
 
-    if(s->pict_type==AV_PICTURE_TYPE_B){
-        s->pb_time= s->pp_time - (s->last_non_b_time - s->time);
-        av_assert1(s->pb_time > 0 && s->pb_time < s->pp_time);
+    if (s->c.pict_type == AV_PICTURE_TYPE_B) {
+        s->c.pb_time = s->c.pp_time - (s->c.last_non_b_time - s->c.time);
+        av_assert1(s->c.pb_time > 0 && s->c.pb_time < s->c.pp_time);
     }else{
-        s->pp_time= s->time - s->last_non_b_time;
-        s->last_non_b_time= s->time;
-        av_assert1(s->picture_number==0 || s->pp_time > 0);
+        s->c.pp_time = s->c.time - s->c.last_non_b_time;
+        s->c.last_non_b_time = s->c.time;
+        av_assert1(s->c.picture_number==0 || s->c.pp_time > 0);
     }
 }
 
 static int encode_picture(MPVMainEncContext *const m, const AVPacket *pkt)
 {
-    MpegEncContext *const s = &m->s;
+    MPVEncContext *const s = &m->s;
     int i, ret;
     int bits;
-    int context_count = s->slice_context_count;
+    int context_count = s->c.slice_context_count;
 
     /* Reset the average MB variance */
-    s->me.mb_var_sum_temp    =
-    s->me.mc_mb_var_sum_temp = 0;
+    s->c.me.mb_var_sum_temp    =
+    s->c.me.mc_mb_var_sum_temp = 0;
 
     /* we need to initialize some time vars before we can encode B-frames */
     // RAL: Condition added for MPEG1VIDEO
-    if (s->out_format == FMT_MPEG1 || (s->h263_pred && s->msmpeg4_version == MSMP4_UNUSED))
+    if (s->c.out_format == FMT_MPEG1 || (s->c.h263_pred && s->c.msmpeg4_version == MSMP4_UNUSED))
         set_frame_distances(s);
-    if(CONFIG_MPEG4_ENCODER && s->codec_id == AV_CODEC_ID_MPEG4)
+    if (CONFIG_MPEG4_ENCODER && s->c.codec_id == AV_CODEC_ID_MPEG4)
         ff_set_mpeg4_time(s);
 
-    s->me.scene_change_score=0;
+    s->c.me.scene_change_score=0;
 
-//    s->lambda= s->cur_pic.ptr->quality; //FIXME qscale / ... stuff for ME rate distortion
+//    s->c.lambda= s->c.cur_pic.ptr->quality; //FIXME qscale / ... stuff for ME rate distortion
 
-    if(s->pict_type==AV_PICTURE_TYPE_I){
-        s->no_rounding = s->msmpeg4_version >= MSMP4_V3;
-    }else if(s->pict_type!=AV_PICTURE_TYPE_B){
-        s->no_rounding ^= s->flipflop_rounding;
+    if (s->c.pict_type == AV_PICTURE_TYPE_I) {
+        s->c.no_rounding = s->c.msmpeg4_version >= MSMP4_V3;
+    } else if (s->c.pict_type != AV_PICTURE_TYPE_B) {
+        s->c.no_rounding ^= s->c.flipflop_rounding;
     }
 
-    if (s->avctx->flags & AV_CODEC_FLAG_PASS2) {
+    if (s->c.avctx->flags & AV_CODEC_FLAG_PASS2) {
         ret = estimate_qp(m, 1);
         if (ret < 0)
             return ret;
         ff_get_2pass_fcode(m);
-    } else if (!(s->avctx->flags & AV_CODEC_FLAG_QSCALE)) {
-        if(s->pict_type==AV_PICTURE_TYPE_B)
-            s->lambda = m->last_lambda_for[s->pict_type];
+    } else if (!(s->c.avctx->flags & AV_CODEC_FLAG_QSCALE)) {
+        if(s->c.pict_type==AV_PICTURE_TYPE_B)
+            s->c.lambda = m->last_lambda_for[s->c.pict_type];
         else
-            s->lambda = m->last_lambda_for[m->last_non_b_pict_type];
+            s->c.lambda = m->last_lambda_for[m->last_non_b_pict_type];
         update_qscale(m);
     }
 
     ff_me_init_pic(s);
 
-    s->mb_intra=0; //for the rate distortion & bit compare functions
+    s->c.mb_intra = 0; //for the rate distortion & bit compare functions
     for (int i = 0; i < context_count; i++) {
-        MpegEncContext *const slice = s->thread_context[i];
+        MPVEncContext *const slice = s->c.enc_contexts[i];
         uint8_t *start, *end;
         int h;
 
         if (i) {
-            ret = ff_update_duplicate_context(slice, s);
+            ret = ff_update_duplicate_context(&slice->c, &s->c);
             if (ret < 0)
                 return ret;
         }
-        slice->me.temp = slice->me.scratchpad = slice->sc.scratchpad_buf;
+        slice->c.me.temp = slice->c.me.scratchpad = slice->c.sc.scratchpad_buf;
 
-        h     = s->mb_height;
-        start = pkt->data + (size_t)(((int64_t) pkt->size) * slice->start_mb_y / h);
-        end   = pkt->data + (size_t)(((int64_t) pkt->size) * slice->  end_mb_y / h);
+        h     = s->c.mb_height;
+        start = pkt->data + (size_t)(((int64_t) pkt->size) * slice->c.start_mb_y / h);
+        end   = pkt->data + (size_t)(((int64_t) pkt->size) * slice->c.  end_mb_y / h);
 
-        init_put_bits(&s->thread_context[i]->pb, start, end - start);
+        init_put_bits(&s->c.enc_contexts[i]->pb, start, end - start);
     }
 
     /* Estimate motion for every MB */
-    if(s->pict_type != AV_PICTURE_TYPE_I){
-        s->lambda  = (s->lambda  * m->me_penalty_compensation + 128) >> 8;
-        s->lambda2 = (s->lambda2 * (int64_t) m->me_penalty_compensation + 128) >> 8;
-        if (s->pict_type != AV_PICTURE_TYPE_B) {
+    if (s->c.pict_type != AV_PICTURE_TYPE_I) {
+        s->c.lambda  = (s->c.lambda  * m->me_penalty_compensation + 128) >> 8;
+        s->c.lambda2 = (s->c.lambda2 * (int64_t) m->me_penalty_compensation + 128) >> 8;
+        if (s->c.pict_type != AV_PICTURE_TYPE_B) {
             if ((m->me_pre && m->last_non_b_pict_type == AV_PICTURE_TYPE_I) ||
                 m->me_pre == 2) {
-                s->avctx->execute(s->avctx, pre_estimate_motion_thread, &s->thread_context[0], NULL, context_count, sizeof(void*));
+                s->c.avctx->execute(s->c.avctx, pre_estimate_motion_thread,
+                                    &s->c.enc_contexts[0], NULL,
+                                    context_count, sizeof(void*));
             }
         }
 
-        s->avctx->execute(s->avctx, estimate_motion_thread, &s->thread_context[0], NULL, context_count, sizeof(void*));
-    }else /* if(s->pict_type == AV_PICTURE_TYPE_I) */{
+        s->c.avctx->execute(s->c.avctx, estimate_motion_thread, &s->c.enc_contexts[0],
+                            NULL, context_count, sizeof(void*));
+    }else /* if(s->c.pict_type == AV_PICTURE_TYPE_I) */{
         /* I-Frame */
-        for(i=0; i<s->mb_stride*s->mb_height; i++)
+        for (int i = 0; i < s->c.mb_stride * s->c.mb_height; i++)
             s->mb_type[i]= CANDIDATE_MB_TYPE_INTRA;
 
         if (!m->fixed_qscale) {
             /* finding spatial complexity for I-frame rate control */
-            s->avctx->execute(s->avctx, mb_var_thread, &s->thread_context[0], NULL, context_count, sizeof(void*));
+            s->c.avctx->execute(s->c.avctx, mb_var_thread, &s->c.enc_contexts[0],
+                                NULL, context_count, sizeof(void*));
         }
     }
     for(i=1; i<context_count; i++){
-        merge_context_after_me(s, s->thread_context[i]);
+        merge_context_after_me(s, s->c.enc_contexts[i]);
     }
-    m->mc_mb_var_sum = s->me.mc_mb_var_sum_temp;
-    m->mb_var_sum    = s->me.   mb_var_sum_temp;
+    m->mc_mb_var_sum = s->c.me.mc_mb_var_sum_temp;
+    m->mb_var_sum    = s->c.me.   mb_var_sum_temp;
     emms_c();
 
-    if (s->me.scene_change_score > m->scenechange_threshold &&
-        s->pict_type == AV_PICTURE_TYPE_P) {
-        s->pict_type= AV_PICTURE_TYPE_I;
-        for(i=0; i<s->mb_stride*s->mb_height; i++)
-            s->mb_type[i]= CANDIDATE_MB_TYPE_INTRA;
-        if (s->msmpeg4_version >= MSMP4_V3)
-            s->no_rounding=1;
-        ff_dlog(s->avctx, "Scene change detected, encoding as I Frame %"PRId64" %"PRId64"\n",
+    if (s->c.me.scene_change_score > m->scenechange_threshold &&
+        s->c.pict_type == AV_PICTURE_TYPE_P) {
+        s->c.pict_type = AV_PICTURE_TYPE_I;
+        for (int i = 0; i < s->c.mb_stride * s->c.mb_height; i++)
+            s->mb_type[i] = CANDIDATE_MB_TYPE_INTRA;
+        if (s->c.msmpeg4_version >= MSMP4_V3)
+            s->c.no_rounding = 1;
+        ff_dlog(s->c.avctx, "Scene change detected, encoding as I Frame %"PRId64" %"PRId64"\n",
                 m->mb_var_sum, m->mc_mb_var_sum);
     }
 
-    if(!s->umvplus){
-        if(s->pict_type==AV_PICTURE_TYPE_P || s->pict_type==AV_PICTURE_TYPE_S) {
-            s->f_code = ff_get_best_fcode(m, s->p_mv_table, CANDIDATE_MB_TYPE_INTER);
+    if (!s->c.umvplus) {
+        if (s->c.pict_type == AV_PICTURE_TYPE_P || s->c.pict_type == AV_PICTURE_TYPE_S) {
+            s->c.f_code = ff_get_best_fcode(m, s->p_mv_table, CANDIDATE_MB_TYPE_INTER);
 
-            if (s->avctx->flags & AV_CODEC_FLAG_INTERLACED_ME) {
+            if (s->c.avctx->flags & AV_CODEC_FLAG_INTERLACED_ME) {
                 int a,b;
-                a = ff_get_best_fcode(m, s->p_field_mv_table[0][0], CANDIDATE_MB_TYPE_INTER_I); //FIXME field_select
-                b = ff_get_best_fcode(m, s->p_field_mv_table[1][1], CANDIDATE_MB_TYPE_INTER_I);
-                s->f_code= FFMAX3(s->f_code, a, b);
+                a = ff_get_best_fcode(m, s->c.p_field_mv_table[0][0], CANDIDATE_MB_TYPE_INTER_I); //FIXME field_select
+                b = ff_get_best_fcode(m, s->c.p_field_mv_table[1][1], CANDIDATE_MB_TYPE_INTER_I);
+                s->c.f_code= FFMAX3(s->c.f_code, a, b);
             }
 
             ff_fix_long_p_mvs(s, s->intra_penalty ? CANDIDATE_MB_TYPE_INTER : CANDIDATE_MB_TYPE_INTRA);
-            ff_fix_long_mvs(s, NULL, 0, s->p_mv_table, s->f_code, CANDIDATE_MB_TYPE_INTER, !!s->intra_penalty);
-            if (s->avctx->flags & AV_CODEC_FLAG_INTERLACED_ME) {
+            ff_fix_long_mvs(s, NULL, 0, s->p_mv_table, s->c.f_code, CANDIDATE_MB_TYPE_INTER, !!s->intra_penalty);
+            if (s->c.avctx->flags & AV_CODEC_FLAG_INTERLACED_ME) {
                 int j;
                 for(i=0; i<2; i++){
                     for(j=0; j<2; j++)
                         ff_fix_long_mvs(s, s->p_field_select_table[i], j,
-                                        s->p_field_mv_table[i][j], s->f_code, CANDIDATE_MB_TYPE_INTER_I, !!s->intra_penalty);
+                                        s->c.p_field_mv_table[i][j], s->c.f_code, CANDIDATE_MB_TYPE_INTER_I, !!s->intra_penalty);
                 }
             }
-        } else if (s->pict_type == AV_PICTURE_TYPE_B) {
+        } else if (s->c.pict_type == AV_PICTURE_TYPE_B) {
             int a, b;
 
             a = ff_get_best_fcode(m, s->b_forw_mv_table, CANDIDATE_MB_TYPE_FORWARD);
             b = ff_get_best_fcode(m, s->b_bidir_forw_mv_table, CANDIDATE_MB_TYPE_BIDIR);
-            s->f_code = FFMAX(a, b);
+            s->c.f_code = FFMAX(a, b);
 
             a = ff_get_best_fcode(m, s->b_back_mv_table, CANDIDATE_MB_TYPE_BACKWARD);
             b = ff_get_best_fcode(m, s->b_bidir_back_mv_table, CANDIDATE_MB_TYPE_BIDIR);
-            s->b_code = FFMAX(a, b);
+            s->c.b_code = FFMAX(a, b);
 
-            ff_fix_long_mvs(s, NULL, 0, s->b_forw_mv_table, s->f_code, CANDIDATE_MB_TYPE_FORWARD, 1);
-            ff_fix_long_mvs(s, NULL, 0, s->b_back_mv_table, s->b_code, CANDIDATE_MB_TYPE_BACKWARD, 1);
-            ff_fix_long_mvs(s, NULL, 0, s->b_bidir_forw_mv_table, s->f_code, CANDIDATE_MB_TYPE_BIDIR, 1);
-            ff_fix_long_mvs(s, NULL, 0, s->b_bidir_back_mv_table, s->b_code, CANDIDATE_MB_TYPE_BIDIR, 1);
-            if (s->avctx->flags & AV_CODEC_FLAG_INTERLACED_ME) {
+            ff_fix_long_mvs(s, NULL, 0, s->b_forw_mv_table, s->c.f_code, CANDIDATE_MB_TYPE_FORWARD, 1);
+            ff_fix_long_mvs(s, NULL, 0, s->b_back_mv_table, s->c.b_code, CANDIDATE_MB_TYPE_BACKWARD, 1);
+            ff_fix_long_mvs(s, NULL, 0, s->b_bidir_forw_mv_table, s->c.f_code, CANDIDATE_MB_TYPE_BIDIR, 1);
+            ff_fix_long_mvs(s, NULL, 0, s->b_bidir_back_mv_table, s->c.b_code, CANDIDATE_MB_TYPE_BIDIR, 1);
+            if (s->c.avctx->flags & AV_CODEC_FLAG_INTERLACED_ME) {
                 int dir, j;
                 for(dir=0; dir<2; dir++){
                     for(i=0; i<2; i++){
@@ -3817,7 +3826,7 @@ static int encode_picture(MPVMainEncContext *const m, const AVPacket *pkt)
                             int type= dir ? (CANDIDATE_MB_TYPE_BACKWARD_I|CANDIDATE_MB_TYPE_BIDIR_I)
                                           : (CANDIDATE_MB_TYPE_FORWARD_I |CANDIDATE_MB_TYPE_BIDIR_I);
                             ff_fix_long_mvs(s, s->b_field_select_table[dir][i], j,
-                                            s->b_field_mv_table[dir][i][j], dir ? s->b_code : s->f_code, type, 1);
+                                            s->b_field_mv_table[dir][i][j], dir ? s->c.b_code : s->c.f_code, type, 1);
                         }
                     }
                 }
@@ -3829,70 +3838,70 @@ static int encode_picture(MPVMainEncContext *const m, const AVPacket *pkt)
     if (ret < 0)
         return ret;
 
-    if (s->qscale < 3 && s->max_qcoeff <= 128 &&
-        s->pict_type == AV_PICTURE_TYPE_I &&
-        !(s->avctx->flags & AV_CODEC_FLAG_QSCALE))
-        s->qscale= 3; //reduce clipping problems
+    if (s->c.qscale < 3 && s->max_qcoeff <= 128 &&
+        s->c.pict_type == AV_PICTURE_TYPE_I &&
+        !(s->c.avctx->flags & AV_CODEC_FLAG_QSCALE))
+        s->c.qscale= 3; //reduce clipping problems
 
-    if (s->out_format == FMT_MJPEG) {
-        ret = ff_check_codec_matrices(s->avctx, FF_MATRIX_TYPE_INTRA | FF_MATRIX_TYPE_CHROMA_INTRA, (7 + s->qscale) / s->qscale, 65535);
+    if (s->c.out_format == FMT_MJPEG) {
+        ret = ff_check_codec_matrices(s->c.avctx, FF_MATRIX_TYPE_INTRA | FF_MATRIX_TYPE_CHROMA_INTRA, (7 + s->c.qscale) / s->c.qscale, 65535);
         if (ret < 0)
             return ret;
 
-        if (s->codec_id != AV_CODEC_ID_AMV) {
+        if (s->c.codec_id != AV_CODEC_ID_AMV) {
             const uint16_t *  luma_matrix = ff_mpeg1_default_intra_matrix;
             const uint16_t *chroma_matrix = ff_mpeg1_default_intra_matrix;
 
-            if (s->avctx->intra_matrix) {
+            if (s->c.avctx->intra_matrix) {
                 chroma_matrix =
-                luma_matrix = s->avctx->intra_matrix;
+                luma_matrix = s->c.avctx->intra_matrix;
             }
-            if (s->avctx->chroma_intra_matrix)
-                chroma_matrix = s->avctx->chroma_intra_matrix;
+            if (s->c.avctx->chroma_intra_matrix)
+                chroma_matrix = s->c.avctx->chroma_intra_matrix;
 
             /* for mjpeg, we do include qscale in the matrix */
             for (int i = 1; i < 64; i++) {
-                int j = s->idsp.idct_permutation[i];
+                int j = s->c.idsp.idct_permutation[i];
 
-                s->chroma_intra_matrix[j] = av_clip_uint8((chroma_matrix[i] * s->qscale) >> 3);
-                s->       intra_matrix[j] = av_clip_uint8((  luma_matrix[i] * s->qscale) >> 3);
+                s->c.chroma_intra_matrix[j] = av_clip_uint8((chroma_matrix[i] * s->c.qscale) >> 3);
+                s->c.       intra_matrix[j] = av_clip_uint8((  luma_matrix[i] * s->c.qscale) >> 3);
             }
-            s->y_dc_scale_table =
-            s->c_dc_scale_table = ff_mpeg12_dc_scale_table[s->intra_dc_precision];
-            s->chroma_intra_matrix[0] =
-            s->intra_matrix[0]  = ff_mpeg12_dc_scale_table[s->intra_dc_precision][8];
+            s->c.y_dc_scale_table =
+            s->c.c_dc_scale_table = ff_mpeg12_dc_scale_table[s->c.intra_dc_precision];
+            s->c.chroma_intra_matrix[0] =
+            s->c.intra_matrix[0]  = ff_mpeg12_dc_scale_table[s->c.intra_dc_precision][8];
         } else {
             static const uint8_t y[32] = {13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13,13};
             static const uint8_t c[32] = {14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14,14};
             for (int i = 1; i < 64; i++) {
-                int j = s->idsp.idct_permutation[ff_zigzag_direct[i]];
+                int j = s->c.idsp.idct_permutation[ff_zigzag_direct[i]];
 
-                s->intra_matrix[j]        = sp5x_qscale_five_quant_table[0][i];
-                s->chroma_intra_matrix[j] = sp5x_qscale_five_quant_table[1][i];
+                s->c.intra_matrix[j]        = sp5x_qscale_five_quant_table[0][i];
+                s->c.chroma_intra_matrix[j] = sp5x_qscale_five_quant_table[1][i];
             }
-            s->y_dc_scale_table = y;
-            s->c_dc_scale_table = c;
-            s->intra_matrix[0] = 13;
-            s->chroma_intra_matrix[0] = 14;
+            s->c.y_dc_scale_table = y;
+            s->c.c_dc_scale_table = c;
+            s->c.intra_matrix[0] = 13;
+            s->c.chroma_intra_matrix[0] = 14;
         }
         ff_convert_matrix(s, s->q_intra_matrix, s->q_intra_matrix16,
-                          s->intra_matrix, s->intra_quant_bias, 8, 8, 1);
+                          s->c.intra_matrix, s->intra_quant_bias, 8, 8, 1);
         ff_convert_matrix(s, s->q_chroma_intra_matrix, s->q_chroma_intra_matrix16,
-                          s->chroma_intra_matrix, s->intra_quant_bias, 8, 8, 1);
-        s->qscale = 8;
+                          s->c.chroma_intra_matrix, s->intra_quant_bias, 8, 8, 1);
+        s->c.qscale = 8;
     }
 
-    if (s->pict_type == AV_PICTURE_TYPE_I) {
-        s->cur_pic.ptr->f->flags |= AV_FRAME_FLAG_KEY;
+    if (s->c.pict_type == AV_PICTURE_TYPE_I) {
+        s->c.cur_pic.ptr->f->flags |= AV_FRAME_FLAG_KEY;
     } else {
-        s->cur_pic.ptr->f->flags &= ~AV_FRAME_FLAG_KEY;
+        s->c.cur_pic.ptr->f->flags &= ~AV_FRAME_FLAG_KEY;
     }
-    s->cur_pic.ptr->f->pict_type = s->pict_type;
+    s->c.cur_pic.ptr->f->pict_type = s->c.pict_type;
 
-    if (s->cur_pic.ptr->f->flags & AV_FRAME_FLAG_KEY)
+    if (s->c.cur_pic.ptr->f->flags & AV_FRAME_FLAG_KEY)
         m->picture_in_gop_number = 0;
 
-    s->mb_x = s->mb_y = 0;
+    s->c.mb_x = s->c.mb_y = 0;
     s->last_bits= put_bits_count(&s->pb);
     ret = m->encode_picture_header(m);
     if (ret < 0)
@@ -3901,20 +3910,22 @@ static int encode_picture(MPVMainEncContext *const m, const AVPacket *pkt)
     m->header_bits = bits - s->last_bits;
 
     for(i=1; i<context_count; i++){
-        update_duplicate_context_after_me(s->thread_context[i], s);
+        update_duplicate_context_after_me(s->c.enc_contexts[i], s);
     }
-    s->avctx->execute(s->avctx, encode_thread, &s->thread_context[0], NULL, context_count, sizeof(void*));
+    s->c.avctx->execute(s->c.avctx, encode_thread, &s->c.enc_contexts[0],
+                        NULL, context_count, sizeof(void*));
     for(i=1; i<context_count; i++){
-        if (s->pb.buf_end == s->thread_context[i]->pb.buf)
-            set_put_bits_buffer_size(&s->pb, FFMIN(s->thread_context[i]->pb.buf_end - s->pb.buf, INT_MAX/8-BUF_BITS));
-        merge_context_after_encode(s, s->thread_context[i]);
+        if (s->pb.buf_end == s->c.enc_contexts[i]->pb.buf)
+            set_put_bits_buffer_size(&s->pb, FFMIN(s->c.enc_contexts[i]->pb.buf_end - s->pb.buf, INT_MAX/8-BUF_BITS));
+        merge_context_after_encode(s, s->c.enc_contexts[i]);
     }
     emms_c();
     return 0;
 }
 
-static void denoise_dct_c(MpegEncContext *s, int16_t *block){
-    const int intra= s->mb_intra;
+static void denoise_dct_c(MPVEncContext *const s, int16_t *block)
+{
+    const int intra= s->c.mb_intra;
     int i;
 
     s->dct_count[intra]++;
@@ -3937,7 +3948,7 @@ static void denoise_dct_c(MpegEncContext *s, int16_t *block){
     }
 }
 
-static int dct_quantize_trellis_c(MpegEncContext *s,
+static int dct_quantize_trellis_c(MPVEncContext *const s,
                                   int16_t *block, int n,
                                   int qscale, int *overflow){
     const int *qmat;
@@ -3961,7 +3972,7 @@ static int dct_quantize_trellis_c(MpegEncContext *s,
     int qmul, qadd, start_i, last_non_zero, i, dc;
     const int esc_length= s->ac_esc_length;
     const uint8_t *length, *last_length;
-    const int lambda= s->lambda2 >> (FF_LAMBDA_SHIFT - 6);
+    const int lambda= s->c.lambda2 >> (FF_LAMBDA_SHIFT - 6);
     int mpeg2_qscale;
 
     s->fdsp.fdct(block);
@@ -3971,18 +3982,18 @@ static int dct_quantize_trellis_c(MpegEncContext *s,
     qmul= qscale*16;
     qadd= ((qscale-1)|1)*8;
 
-    if (s->q_scale_type) mpeg2_qscale = ff_mpeg2_non_linear_qscale[qscale];
+    if (s->c.q_scale_type) mpeg2_qscale = ff_mpeg2_non_linear_qscale[qscale];
     else                 mpeg2_qscale = qscale << 1;
 
-    if (s->mb_intra) {
+    if (s->c.mb_intra) {
         int q;
-        scantable= s->intra_scantable.scantable;
-        perm_scantable= s->intra_scantable.permutated;
-        if (!s->h263_aic) {
+        scantable= s->c.intra_scantable.scantable;
+        perm_scantable= s->c.intra_scantable.permutated;
+        if (!s->c.h263_aic) {
             if (n < 4)
-                q = s->y_dc_scale;
+                q = s->c.y_dc_scale;
             else
-                q = s->c_dc_scale;
+                q = s->c.c_dc_scale;
             q = q << 3;
         } else{
             /* For AIC we skip quant/dequant of INTRADC */
@@ -3995,8 +4006,8 @@ static int dct_quantize_trellis_c(MpegEncContext *s,
         start_i = 1;
         last_non_zero = 0;
         qmat = n < 4 ? s->q_intra_matrix[qscale] : s->q_chroma_intra_matrix[qscale];
-        matrix = n < 4 ? s->intra_matrix : s->chroma_intra_matrix;
-        if(s->mpeg_quant || s->out_format == FMT_MPEG1 || s->out_format == FMT_MJPEG)
+        matrix = n < 4 ? s->c.intra_matrix : s->c.chroma_intra_matrix;
+        if (s->c.mpeg_quant || s->c.out_format == FMT_MPEG1 || s->c.out_format == FMT_MJPEG)
             bias= 1<<(QMAT_SHIFT-1);
 
         if (n > 3 && s->intra_chroma_ac_vlc_length) {
@@ -4007,12 +4018,12 @@ static int dct_quantize_trellis_c(MpegEncContext *s,
             last_length= s->intra_ac_vlc_last_length;
         }
     } else {
-        scantable= s->inter_scantable.scantable;
-        perm_scantable= s->inter_scantable.permutated;
+        scantable      = s->c.inter_scantable.scantable;
+        perm_scantable = s->c.inter_scantable.permutated;
         start_i = 0;
         last_non_zero = -1;
         qmat = s->q_inter_matrix[qscale];
-        matrix = s->inter_matrix;
+        matrix = s->c.inter_matrix;
         length     = s->inter_ac_vlc_length;
         last_length= s->inter_ac_vlc_last_length;
     }
@@ -4086,14 +4097,14 @@ static int dct_quantize_trellis_c(MpegEncContext *s,
 
             av_assert2(level);
 
-            if(s->out_format == FMT_H263 || s->out_format == FMT_H261){
+            if (s->c.out_format == FMT_H263 || s->c.out_format == FMT_H261) {
                 unquant_coeff= alevel*qmul + qadd;
-            } else if(s->out_format == FMT_MJPEG) {
-                j = s->idsp.idct_permutation[scantable[i]];
+            } else if(s->c.out_format == FMT_MJPEG) {
+                j = s->c.idsp.idct_permutation[scantable[i]];
                 unquant_coeff = alevel * matrix[j] * 8;
             }else{ // MPEG-1
-                j = s->idsp.idct_permutation[scantable[i]]; // FIXME: optimize
-                if(s->mb_intra){
+                j = s->c.idsp.idct_permutation[scantable[i]]; // FIXME: optimize
+                if(s->c.mb_intra){
                         unquant_coeff = (int)(  alevel  * mpeg2_qscale * matrix[j]) >> 4;
                         unquant_coeff =   (unquant_coeff - 1) | 1;
                 }else{
@@ -4118,7 +4129,7 @@ static int dct_quantize_trellis_c(MpegEncContext *s,
                     }
                 }
 
-                if(s->out_format == FMT_H263 || s->out_format == FMT_H261){
+                if (s->c.out_format == FMT_H263 || s->c.out_format == FMT_H261) {
                     for(j=survivor_count-1; j>=0; j--){
                         int run= i - survivor[j];
                         int score= distortion + last_length[UNI_AC_ENC_INDEX(run, level)]*lambda;
@@ -4144,7 +4155,7 @@ static int dct_quantize_trellis_c(MpegEncContext *s,
                     }
                 }
 
-                if(s->out_format == FMT_H263 || s->out_format == FMT_H261){
+                if (s->c.out_format == FMT_H263 || s->c.out_format == FMT_H261) {
                   for(j=survivor_count-1; j>=0; j--){
                         int run= i - survivor[j];
                         int score= distortion + score_tab[i-run];
@@ -4177,7 +4188,7 @@ static int dct_quantize_trellis_c(MpegEncContext *s,
         survivor[ survivor_count++ ]= i+1;
     }
 
-    if(s->out_format != FMT_H263 && s->out_format != FMT_H261){
+    if(s->c.out_format != FMT_H263 && s->c.out_format != FMT_H261){
         last_score= 256*256*256*120;
         for(i= survivor[0]; i<=last_non_zero + 1; i++){
             int score= score_tab[i];
@@ -4211,7 +4222,7 @@ static int dct_quantize_trellis_c(MpegEncContext *s,
             int alevel= FFABS(level);
             int unquant_coeff, score, distortion;
 
-            if(s->out_format == FMT_H263 || s->out_format == FMT_H261){
+            if(s->c.out_format == FMT_H263 || s->c.out_format == FMT_H261){
                     unquant_coeff= (alevel*qmul + qadd)>>3;
             } else{ // MPEG-1
                     unquant_coeff = (((  alevel  << 1) + 1) * mpeg2_qscale * ((int) matrix[0])) >> 5;
@@ -4270,7 +4281,7 @@ static void build_basis(uint8_t *perm){
     }
 }
 
-static int dct_quantize_refine(MpegEncContext *s, //FIXME breaks denoise?
+static int dct_quantize_refine(MPVEncContext *const s, //FIXME breaks denoise?
                         int16_t *block, int16_t *weight, int16_t *orig,
                         int n, int qscale){
     int16_t rem[64];
@@ -4286,21 +4297,21 @@ static int dct_quantize_refine(MpegEncContext *s, //FIXME breaks denoise?
     const uint8_t *length;
     const uint8_t *last_length;
     int lambda;
-    int rle_index, run, q = 1, sum; //q is only used when s->mb_intra is true
+    int rle_index, run, q = 1, sum; //q is only used when s->c.mb_intra is true
 
     if(basis[0][0] == 0)
-        build_basis(s->idsp.idct_permutation);
+        build_basis(s->c.idsp.idct_permutation);
 
     qmul= qscale*2;
     qadd= (qscale-1)|1;
-    if (s->mb_intra) {
-        scantable= s->intra_scantable.scantable;
-        perm_scantable= s->intra_scantable.permutated;
-        if (!s->h263_aic) {
+    if (s->c.mb_intra) {
+        scantable= s->c.intra_scantable.scantable;
+        perm_scantable= s->c.intra_scantable.permutated;
+        if (!s->c.h263_aic) {
             if (n < 4)
-                q = s->y_dc_scale;
+                q = s->c.y_dc_scale;
             else
-                q = s->c_dc_scale;
+                q = s->c.c_dc_scale;
         } else{
             /* For AIC we skip quant/dequant of INTRADC */
             q = 1;
@@ -4311,7 +4322,7 @@ static int dct_quantize_refine(MpegEncContext *s, //FIXME breaks denoise?
         dc= block[0]*q;
 //        block[0] = (block[0] + (q >> 1)) / q;
         start_i = 1;
-//        if(s->mpeg_quant || s->out_format == FMT_MPEG1)
+//        if(s->c.mpeg_quant || s->c.out_format == FMT_MPEG1)
 //            bias= 1<<(QMAT_SHIFT-1);
         if (n > 3 && s->intra_chroma_ac_vlc_length) {
             length     = s->intra_chroma_ac_vlc_length;
@@ -4321,14 +4332,14 @@ static int dct_quantize_refine(MpegEncContext *s, //FIXME breaks denoise?
             last_length= s->intra_ac_vlc_last_length;
         }
     } else {
-        scantable= s->inter_scantable.scantable;
-        perm_scantable= s->inter_scantable.permutated;
+        scantable= s->c.inter_scantable.scantable;
+        perm_scantable= s->c.inter_scantable.permutated;
         dc= 0;
         start_i = 0;
         length     = s->inter_ac_vlc_length;
         last_length= s->inter_ac_vlc_last_length;
     }
-    last_non_zero = s->block_last_index[n];
+    last_non_zero = s->c.block_last_index[n];
 
     dc += (1<<(RECON_SHIFT-1));
     for(i=0; i<64; i++){
@@ -4351,7 +4362,7 @@ static int dct_quantize_refine(MpegEncContext *s, //FIXME breaks denoise?
         av_assert2(w<(1<<6));
         sum += w*w;
     }
-    lambda= sum*(uint64_t)s->lambda2 >> (FF_LAMBDA_SHIFT - 6 + 6 + 6 + 6);
+    lambda= sum*(uint64_t)s->c.lambda2 >> (FF_LAMBDA_SHIFT - 6 + 6 + 6 + 6);
 
     run=0;
     rle_index=0;
@@ -4392,7 +4403,7 @@ static int dct_quantize_refine(MpegEncContext *s, //FIXME breaks denoise?
             const int level= block[0];
             int change, old_coeff;
 
-            av_assert2(s->mb_intra);
+            av_assert2(s->c.mb_intra);
 
             old_coeff= q*level;
 
@@ -4622,7 +4633,7 @@ void ff_block_permute(int16_t *block, const uint8_t *permutation,
     }
 }
 
-static int dct_quantize_c(MpegEncContext *s,
+static int dct_quantize_c(MPVEncContext *const s,
                           int16_t *block, int n,
                           int qscale, int *overflow)
 {
@@ -4638,13 +4649,13 @@ static int dct_quantize_c(MpegEncContext *s,
     if(s->dct_error_sum)
         s->denoise_dct(s, block);
 
-    if (s->mb_intra) {
-        scantable= s->intra_scantable.scantable;
-        if (!s->h263_aic) {
+    if (s->c.mb_intra) {
+        scantable= s->c.intra_scantable.scantable;
+        if (!s->c.h263_aic) {
             if (n < 4)
-                q = s->y_dc_scale;
+                q = s->c.y_dc_scale;
             else
-                q = s->c_dc_scale;
+                q = s->c.c_dc_scale;
             q = q << 3;
         } else
             /* For AIC we skip quant/dequant of INTRADC */
@@ -4657,7 +4668,7 @@ static int dct_quantize_c(MpegEncContext *s,
         qmat = n < 4 ? s->q_intra_matrix[qscale] : s->q_chroma_intra_matrix[qscale];
         bias= s->intra_quant_bias*(1<<(QMAT_SHIFT - QUANT_BIAS_SHIFT));
     } else {
-        scantable= s->inter_scantable.scantable;
+        scantable= s->c.inter_scantable.scantable;
         start_i = 0;
         last_non_zero = -1;
         qmat = s->q_inter_matrix[qscale];
@@ -4698,8 +4709,8 @@ static int dct_quantize_c(MpegEncContext *s,
     *overflow= s->max_qcoeff < max; //overflow might have happened
 
     /* we need this permutation so that we correct the IDCT, we only permute the !=0 elements */
-    if (s->idsp.perm_type != FF_IDCT_PERM_NONE)
-        ff_block_permute(block, s->idsp.idct_permutation,
+    if (s->c.idsp.perm_type != FF_IDCT_PERM_NONE)
+        ff_block_permute(block, s->c.idsp.idct_permutation,
                       scantable, last_non_zero);
 
     return last_non_zero;
diff --git a/libavcodec/mpegvideoenc.h b/libavcodec/mpegvideoenc.h
index bc8a3e80d5..1d124b1bd1 100644
--- a/libavcodec/mpegvideoenc.h
+++ b/libavcodec/mpegvideoenc.h
@@ -32,13 +32,142 @@
 
 #include "libavutil/avassert.h"
 #include "libavutil/opt.h"
+#include "fdctdsp.h"
 #include "mpegvideo.h"
+#include "mpegvideoencdsp.h"
+#include "pixblockdsp.h"
+#include "put_bits.h"
 #include "ratecontrol.h"
 
 #define MPVENC_MAX_B_FRAMES 16
 
+typedef struct MPVEncContext {
+    MpegEncContext c;           ///< the common base context
+
+    /** bit output */
+    PutBitContext pb;
+
+    int *lambda_table;
+    int adaptive_quant;         ///< use adaptive quantization
+    int dquant;                 ///< qscale difference to prev qscale
+    int skipdct;                ///< skip dct and code zero residual
+
+    int luma_elim_threshold;
+    int chroma_elim_threshold;
+
+    /**
+     * Reference to the source picture.
+     */
+    AVFrame *new_pic;
+
+    FDCTDSPContext fdsp;
+    MpegvideoEncDSPContext mpvencdsp;
+    PixblockDSPContext pdsp;
+
+    int16_t (*p_mv_table)[2];            ///< MV table (1MV per MB) P-frame
+    int16_t (*b_forw_mv_table)[2];       ///< MV table (1MV per MB) forward mode B-frame
+    int16_t (*b_back_mv_table)[2];       ///< MV table (1MV per MB) backward mode B-frame
+    int16_t (*b_bidir_forw_mv_table)[2]; ///< MV table (1MV per MB) bidir mode B-frame
+    int16_t (*b_bidir_back_mv_table)[2]; ///< MV table (1MV per MB) bidir mode B-frame
+    int16_t (*b_direct_mv_table)[2];     ///< MV table (1MV per MB) direct mode B-frame
+    int16_t (*b_field_mv_table[2][2][2])[2];///< MV table (4MV per MB) interlaced B-frame
+    uint8_t (*p_field_select_table[2]);  ///< Only the first element is allocated
+    uint8_t (*b_field_select_table[2][2]); ///< allocated jointly with p_field_select_table
+
+    uint16_t *mb_type;          ///< Table for candidate MB types
+    uint16_t *mb_var;           ///< Table for MB variances
+    uint16_t *mc_mb_var;        ///< Table for motion compensated MB variances
+    uint8_t  *mb_mean;          ///< Table for MB luminance
+    uint64_t encoding_error[MPV_MAX_PLANES];
+
+    int intra_quant_bias;    ///< bias for the quantizer
+    int inter_quant_bias;    ///< bias for the quantizer
+    int min_qcoeff;          ///< minimum encodable coefficient
+    int max_qcoeff;          ///< maximum encodable coefficient
+    int ac_esc_length;       ///< num of bits needed to encode the longest esc
+    uint8_t *intra_ac_vlc_length;
+    uint8_t *intra_ac_vlc_last_length;
+    uint8_t *intra_chroma_ac_vlc_length;
+    uint8_t *intra_chroma_ac_vlc_last_length;
+    uint8_t *inter_ac_vlc_length;
+    uint8_t *inter_ac_vlc_last_length;
+    uint8_t *luma_dc_vlc_length;
+
+    int coded_score[12];
+
+    /** precomputed matrix (combine qscale and DCT renorm) */
+    int (*q_intra_matrix)[64];
+    int (*q_chroma_intra_matrix)[64];
+    int (*q_inter_matrix)[64];
+    /** identical to the above but for MMX & these are not permutated, second 64 entries are bias*/
+    uint16_t (*q_intra_matrix16)[2][64];
+    uint16_t (*q_chroma_intra_matrix16)[2][64];
+    uint16_t (*q_inter_matrix16)[2][64];
+
+    /* noise reduction */
+    int (*dct_error_sum)[64];
+    int dct_count[2];
+    uint16_t (*dct_offset)[64];
+
+    /* statistics, used for 2-pass encoding */
+    int mv_bits;
+    int i_tex_bits;
+    int p_tex_bits;
+    int i_count;
+    int misc_bits; ///< cbp, mb_type
+    int last_bits; ///< temp var used for calculating the above vars
+
+    /* H.263 specific */
+    int mb_info;                   ///< interval for outputting info about mb offsets as side data
+    int prev_mb_info, last_mb_info;
+    int mb_info_size;
+    uint8_t *mb_info_ptr;
+
+    /* MPEG-4 specific */
+    PutBitContext tex_pb;          ///< used for data partitioned VOPs
+    PutBitContext pb2;             ///< used for data partitioned VOPs
+
+    /* MSMPEG4 specific */
+    int esc3_level_length;
+
+    /* MJPEG specific */
+    struct MJpegContext *mjpeg_ctx;
+    int esc_pos;
+
+    /* MPEG-1 specific */
+    int last_mv_dir;         ///< last mv_dir, used for B-frame encoding
+
+    /* RTP specific */
+    int rtp_mode;
+    int rtp_payload_size;
+
+    uint8_t *ptr_lastgob;
+
+    void (*encode_mb)(struct MPVEncContext *s, int16_t block[][64],
+                      int motion_x, int motion_y);
+
+    int (*dct_quantize)(struct MPVEncContext *s, int16_t *block/*align 16*/, int n, int qscale, int *overflow);
+    void (*denoise_dct)(struct MPVEncContext *s, int16_t *block);
+
+    int mpv_flags;      ///< flags set by private options
+    int quantizer_noise_shaping;
+
+    me_cmp_func ildct_cmp[2]; ///< 0 = intra, 1 = non-intra
+    me_cmp_func n_sse_cmp[2]; ///< either SSE or NSSE cmp func
+    me_cmp_func sad_cmp[2];
+    me_cmp_func sse_cmp[2];
+    int (*sum_abs_dctelem)(const int16_t *block);
+
+    /// Bitfield containing information which frames to reconstruct.
+    int frame_reconstruction_bitfield;
+
+    int error_rate;
+
+    int intra_penalty;
+} MPVEncContext;
+
 typedef struct MPVMainEncContext {
-    MpegEncContext s;  ///< The main slicecontext
+    MPVEncContext s;               ///< The main slicecontext
 
     int scenechange_threshold;
 
@@ -112,14 +241,14 @@ typedef struct MPVMainEncContext {
     int16_t (*mv_table_base)[2];
 } MPVMainEncContext;
 
-static inline const MPVMainEncContext *slice_to_mainenc(const MpegEncContext *s)
+static inline const MPVMainEncContext *slice_to_mainenc(const MPVEncContext *s)
 {
 #ifdef NO_SLICE_THREADING_HERE
-    av_assert2(s->slice_context_count <= 1 &&
-               !(s->avctx->codec->capabilities & AV_CODEC_CAP_SLICE_THREADS));
+    av_assert2(s->c.slice_context_count <= 1 &&
+               !(s->c.avctx->codec->capabilities & AV_CODEC_CAP_SLICE_THREADS));
     return (const MPVMainEncContext*)s;
 #else
-    return s->encparent;
+    return s->c.encparent;
 #endif
 }
 
@@ -170,7 +299,7 @@ static inline const MPVMainEncContext *slice_to_mainenc(const MpegEncContext *s)
 { "chroma", NULL, 0, AV_OPT_TYPE_CONST, {.i64 = FF_CMP_CHROMA }, INT_MIN, INT_MAX, FF_MPV_OPT_FLAGS, .unit = "cmp_func" }, \
 { "msad",   "Sum of absolute differences, median predicted", 0, AV_OPT_TYPE_CONST, {.i64 = FF_CMP_MEDIAN_SAD }, INT_MIN, INT_MAX, FF_MPV_OPT_FLAGS, .unit = "cmp_func" }
 
-#define FF_MPV_OFFSET(x) offsetof(MpegEncContext, x)
+#define FF_MPV_OFFSET(x) offsetof(MPVEncContext, x)
 #define FF_MPV_MAIN_OFFSET(x) offsetof(MPVMainEncContext, x)
 #define FF_RC_OFFSET(x)  offsetof(MPVMainEncContext, rc_context.x)
 #define FF_MPV_OPT_FLAGS (AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM)
@@ -217,7 +346,7 @@ FF_MPV_OPT_CMP_FUNC, \
 
 #define FF_MPV_COMMON_MOTION_EST_OPTS \
 { "mv0",            "always try a mb with mv=<0,0>",                     0, AV_OPT_TYPE_CONST, { .i64 = FF_MPV_FLAG_MV0 },    0, 0, FF_MPV_OPT_FLAGS, .unit = "mpv_flags" },\
-{"motion_est", "motion estimation algorithm",                       FF_MPV_OFFSET(me.motion_est), AV_OPT_TYPE_INT, {.i64 = FF_ME_EPZS }, FF_ME_ZERO, FF_ME_XONE, FF_MPV_OPT_FLAGS, .unit = "motion_est" },   \
+{"motion_est", "motion estimation algorithm",                       FF_MPV_OFFSET(c.me.motion_est), AV_OPT_TYPE_INT, {.i64 = FF_ME_EPZS }, FF_ME_ZERO, FF_ME_XONE, FF_MPV_OPT_FLAGS, .unit = "motion_est" },   \
 { "zero", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = FF_ME_ZERO }, 0, 0, FF_MPV_OPT_FLAGS, .unit = "motion_est" }, \
 { "epzs", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = FF_ME_EPZS }, 0, 0, FF_MPV_OPT_FLAGS, .unit = "motion_est" }, \
 { "xone", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = FF_ME_XONE }, 0, 0, FF_MPV_OPT_FLAGS, .unit = "motion_est" }, \
@@ -233,21 +362,21 @@ int ff_mpv_encode_init(AVCodecContext *avctx);
 int ff_mpv_encode_end(AVCodecContext *avctx);
 int ff_mpv_encode_picture(AVCodecContext *avctx, AVPacket *pkt,
                           const AVFrame *frame, int *got_packet);
-int ff_mpv_reallocate_putbitbuffer(MpegEncContext *s, size_t threshold, size_t size_increase);
+int ff_mpv_reallocate_putbitbuffer(MPVEncContext *s, size_t threshold, size_t size_increase);
 
 void ff_write_quant_matrix(PutBitContext *pb, uint16_t *matrix);
 
-void ff_dct_encode_init(MpegEncContext *s);
-void ff_mpvenc_dct_init_mips(MpegEncContext *s);
-void ff_dct_encode_init_x86(MpegEncContext *s);
+void ff_dct_encode_init(MPVEncContext *s);
+void ff_mpvenc_dct_init_mips(MPVEncContext *s);
+void ff_dct_encode_init_x86(MPVEncContext *s);
 
-void ff_convert_matrix(MpegEncContext *s, int (*qmat)[64], uint16_t (*qmat16)[2][64],
+void ff_convert_matrix(MPVEncContext *s, int (*qmat)[64], uint16_t (*qmat16)[2][64],
                        const uint16_t *quant_matrix, int bias, int qmin, int qmax, int intra);
 
 void ff_block_permute(int16_t *block, const uint8_t *permutation,
                       const uint8_t *scantable, int last);
 
-static inline int get_bits_diff(MpegEncContext *s)
+static inline int get_bits_diff(MPVEncContext *s)
 {
     const int bits = put_bits_count(&s->pb);
     const int last = s->last_bits;
diff --git a/libavcodec/msmpeg4enc.c b/libavcodec/msmpeg4enc.c
index bba0493f01..99e415303a 100644
--- a/libavcodec/msmpeg4enc.c
+++ b/libavcodec/msmpeg4enc.c
@@ -150,7 +150,7 @@ static av_cold void msmpeg4_encode_init_static(void)
 
 static void find_best_tables(MSMPEG4EncContext *ms)
 {
-    MpegEncContext *const s = &ms->m.s;
+    MPVEncContext *const s = &ms->m.s;
     int i;
     int best        = 0, best_size        = INT_MAX;
     int chroma_best = 0, best_chroma_size = INT_MAX;
@@ -174,7 +174,7 @@ static void find_best_tables(MSMPEG4EncContext *ms)
                     int intra_luma_count  = ms->ac_stats[1][0][level][run][last];
                     int intra_chroma_count= ms->ac_stats[1][1][level][run][last];
 
-                    if(s->pict_type==AV_PICTURE_TYPE_I){
+                    if(s->c.pict_type==AV_PICTURE_TYPE_I){
                         size       += intra_luma_count  *rl_length[i  ][level][run][last];
                         chroma_size+= intra_chroma_count*rl_length[i+3][level][run][last];
                     }else{
@@ -196,16 +196,16 @@ static void find_best_tables(MSMPEG4EncContext *ms)
         }
     }
 
-    if(s->pict_type==AV_PICTURE_TYPE_P) chroma_best= best;
+    if(s->c.pict_type==AV_PICTURE_TYPE_P) chroma_best= best;
 
     memset(ms->ac_stats, 0, sizeof(ms->ac_stats));
 
     ms->rl_table_index        =        best;
     ms->rl_chroma_table_index = chroma_best;
 
-    if (s->pict_type != ms->m.last_non_b_pict_type) {
+    if (s->c.pict_type != ms->m.last_non_b_pict_type) {
         ms->rl_table_index= 2;
-        if(s->pict_type==AV_PICTURE_TYPE_I)
+        if(s->c.pict_type==AV_PICTURE_TYPE_I)
             ms->rl_chroma_table_index = 1;
         else
             ms->rl_chroma_table_index = 2;
@@ -217,15 +217,15 @@ static void find_best_tables(MSMPEG4EncContext *ms)
 static int msmpeg4_encode_picture_header(MPVMainEncContext *const m)
 {
     MSMPEG4EncContext *const ms = (MSMPEG4EncContext*)m;
-    MpegEncContext *const s = &m->s;
+    MPVEncContext *const s = &m->s;
 
     find_best_tables(ms);
 
     align_put_bits(&s->pb);
-    put_bits(&s->pb, 2, s->pict_type - 1);
+    put_bits(&s->pb, 2, s->c.pict_type - 1);
 
-    put_bits(&s->pb, 5, s->qscale);
-    if (s->msmpeg4_version <= MSMP4_V2) {
+    put_bits(&s->pb, 5, s->c.qscale);
+    if (s->c.msmpeg4_version <= MSMP4_V2) {
         ms->rl_table_index = 2;
         ms->rl_chroma_table_index = 2;
     }
@@ -234,24 +234,24 @@ static int msmpeg4_encode_picture_header(MPVMainEncContext *const m)
     ms->mv_table_index   = 1; /* only if P-frame */
     ms->use_skip_mb_code = 1; /* only if P-frame */
     ms->per_mb_rl_table  = 0;
-    if (s->msmpeg4_version == MSMP4_WMV1)
-        s->inter_intra_pred = s->width * s->height < 320*240 &&
+    if (s->c.msmpeg4_version == MSMP4_WMV1)
+        s->c.inter_intra_pred = s->c.width * s->c.height < 320*240 &&
                               m->bit_rate  <= II_BITRATE     &&
-                              s->pict_type == AV_PICTURE_TYPE_P;
-    ff_dlog(s->avctx, "%d %"PRId64" %d %d %d\n", s->pict_type, m->bit_rate,
-            s->inter_intra_pred, s->width, s->height);
+                              s->c.pict_type == AV_PICTURE_TYPE_P;
+    ff_dlog(s->c.avctx, "%d %"PRId64" %d %d %d\n", s->c.pict_type, m->bit_rate,
+            s->c.inter_intra_pred, s->c.width, s->c.height);
 
-    if (s->pict_type == AV_PICTURE_TYPE_I) {
-        s->slice_height= s->mb_height/1;
-        put_bits(&s->pb, 5, 0x16 + s->mb_height/s->slice_height);
+    if (s->c.pict_type == AV_PICTURE_TYPE_I) {
+        s->c.slice_height = s->c.mb_height/1;
+        put_bits(&s->pb, 5, 0x16 + s->c.mb_height/s->c.slice_height);
 
-        if (s->msmpeg4_version == MSMP4_WMV1) {
+        if (s->c.msmpeg4_version == MSMP4_WMV1) {
             ff_msmpeg4_encode_ext_header(s);
             if (m->bit_rate > MBAC_BITRATE)
                 put_bits(&s->pb, 1, ms->per_mb_rl_table);
         }
 
-        if (s->msmpeg4_version > MSMP4_V2) {
+        if (s->c.msmpeg4_version > MSMP4_V2) {
             if (!ms->per_mb_rl_table){
                 ff_msmpeg4_code012(&s->pb, ms->rl_chroma_table_index);
                 ff_msmpeg4_code012(&s->pb, ms->rl_table_index);
@@ -262,10 +262,10 @@ static int msmpeg4_encode_picture_header(MPVMainEncContext *const m)
     } else {
         put_bits(&s->pb, 1, ms->use_skip_mb_code);
 
-        if (s->msmpeg4_version == MSMP4_WMV1 && m->bit_rate > MBAC_BITRATE)
+        if (s->c.msmpeg4_version == MSMP4_WMV1 && m->bit_rate > MBAC_BITRATE)
             put_bits(&s->pb, 1, ms->per_mb_rl_table);
 
-        if (s->msmpeg4_version > MSMP4_V2) {
+        if (s->c.msmpeg4_version > MSMP4_V2) {
             if (!ms->per_mb_rl_table)
                 ff_msmpeg4_code012(&s->pb, ms->rl_table_index);
 
@@ -281,18 +281,18 @@ static int msmpeg4_encode_picture_header(MPVMainEncContext *const m)
     return 0;
 }
 
-void ff_msmpeg4_encode_ext_header(MpegEncContext * s)
+void ff_msmpeg4_encode_ext_header(MPVEncContext *const s)
 {
     const MPVMainEncContext *const m = slice_to_mainenc(s);
     unsigned fps;
 
-    if (s->avctx->framerate.num > 0 && s->avctx->framerate.den > 0)
-        fps = s->avctx->framerate.num / s->avctx->framerate.den;
+    if (s->c.avctx->framerate.num > 0 && s->c.avctx->framerate.den > 0)
+        fps = s->c.avctx->framerate.num / s->c.avctx->framerate.den;
     else {
 FF_DISABLE_DEPRECATION_WARNINGS
-        fps = s->avctx->time_base.den / s->avctx->time_base.num
+        fps = s->c.avctx->time_base.den / s->c.avctx->time_base.num
 #if FF_API_TICKS_PER_FRAME
-            / FFMAX(s->avctx->ticks_per_frame, 1)
+            / FFMAX(s->c.avctx->ticks_per_frame, 1)
 #endif
             ;
 FF_ENABLE_DEPRECATION_WARNINGS
@@ -302,16 +302,16 @@ FF_ENABLE_DEPRECATION_WARNINGS
 
     put_bits(&s->pb, 11, FFMIN(m->bit_rate / 1024, 2047));
 
-    if (s->msmpeg4_version >= MSMP4_V3)
-        put_bits(&s->pb, 1, s->flipflop_rounding);
+    if (s->c.msmpeg4_version >= MSMP4_V3)
+        put_bits(&s->pb, 1, s->c.flipflop_rounding);
     else
-        av_assert0(!s->flipflop_rounding);
+        av_assert0(!s->c.flipflop_rounding);
 }
 
 void ff_msmpeg4_encode_motion(MSMPEG4EncContext *const ms,
                                   int mx, int my)
 {
-    MpegEncContext *const s = &ms->m.s;
+    MPVEncContext *const s = &ms->m.s;
     const uint32_t *const mv_vector_table = mv_vector_tables[ms->mv_table_index];
     uint32_t code;
 
@@ -334,20 +334,21 @@ void ff_msmpeg4_encode_motion(MSMPEG4EncContext *const ms,
     put_bits(&s->pb, code & 0xff, code >> 8);
 }
 
-void ff_msmpeg4_handle_slices(MpegEncContext *s){
-    if (s->mb_x == 0) {
-        if (s->slice_height && (s->mb_y % s->slice_height) == 0) {
-            if (s->msmpeg4_version < MSMP4_WMV1) {
-                ff_mpeg4_clean_buffers(s);
+void ff_msmpeg4_handle_slices(MPVEncContext *const s)
+{
+    if (s->c.mb_x == 0) {
+        if (s->c.slice_height && (s->c.mb_y % s->c.slice_height) == 0) {
+            if (s->c.msmpeg4_version < MSMP4_WMV1) {
+                ff_mpeg4_clean_buffers(&s->c);
             }
-            s->first_slice_line = 1;
+            s->c.first_slice_line = 1;
         } else {
-            s->first_slice_line = 0;
+            s->c.first_slice_line = 0;
         }
     }
 }
 
-static void msmpeg4v2_encode_motion(MpegEncContext * s, int val)
+static void msmpeg4v2_encode_motion(MPVEncContext *const s, int val)
 {
     int range, bit_size, sign, code, bits;
 
@@ -355,7 +356,7 @@ static void msmpeg4v2_encode_motion(MpegEncContext * s, int val)
         /* zero vector; corresponds to ff_mvtab[0] */
         put_bits(&s->pb, 1, 0x1);
     } else {
-        bit_size = s->f_code - 1;
+        bit_size = s->c.f_code - 1;
         range = 1 << bit_size;
         if (val <= -64)
             val += 64;
@@ -379,7 +380,7 @@ static void msmpeg4v2_encode_motion(MpegEncContext * s, int val)
     }
 }
 
-static void msmpeg4_encode_mb(MpegEncContext *const s,
+static void msmpeg4_encode_mb(MPVEncContext *const s,
                               int16_t block[][64],
                               int motion_x, int motion_y)
 {
@@ -389,11 +390,11 @@ static void msmpeg4_encode_mb(MpegEncContext *const s,
 
     ff_msmpeg4_handle_slices(s);
 
-    if (!s->mb_intra) {
+    if (!s->c.mb_intra) {
         /* compute cbp */
         cbp = 0;
         for (i = 0; i < 6; i++) {
-            if (s->block_last_index[i] >= 0)
+            if (s->c.block_last_index[i] >= 0)
                 cbp |= 1 << (5 - i);
         }
         if (ms->use_skip_mb_code && (cbp | motion_x | motion_y) == 0) {
@@ -407,7 +408,7 @@ static void msmpeg4_encode_mb(MpegEncContext *const s,
         if (ms->use_skip_mb_code)
             put_bits(&s->pb, 1, 0);     /* mb coded */
 
-        if (s->msmpeg4_version <= MSMP4_V2) {
+        if (s->c.msmpeg4_version <= MSMP4_V2) {
             put_bits(&s->pb,
                      ff_v2_mb_type[cbp&3][1],
                      ff_v2_mb_type[cbp&3][0]);
@@ -420,7 +421,7 @@ static void msmpeg4_encode_mb(MpegEncContext *const s,
 
             s->misc_bits += get_bits_diff(s);
 
-            ff_h263_pred_motion(s, 0, 0, &pred_x, &pred_y);
+            ff_h263_pred_motion(&s->c, 0, 0, &pred_x, &pred_y);
             msmpeg4v2_encode_motion(s, motion_x - pred_x);
             msmpeg4v2_encode_motion(s, motion_y - pred_y);
         }else{
@@ -431,7 +432,7 @@ static void msmpeg4_encode_mb(MpegEncContext *const s,
             s->misc_bits += get_bits_diff(s);
 
             /* motion vector */
-            ff_h263_pred_motion(s, 0, 0, &pred_x, &pred_y);
+            ff_h263_pred_motion(&s->c, 0, 0, &pred_x, &pred_y);
             ff_msmpeg4_encode_motion(ms, motion_x - pred_x,
                                      motion_y - pred_y);
         }
@@ -446,11 +447,11 @@ static void msmpeg4_encode_mb(MpegEncContext *const s,
         /* compute cbp */
         cbp = 0;
         for (int i = 0; i < 6; i++) {
-            int val = (s->block_last_index[i] >= 1);
+            int val = (s->c.block_last_index[i] >= 1);
             cbp |= val << (5 - i);
         }
-        if (s->msmpeg4_version <= MSMP4_V2) {
-            if (s->pict_type == AV_PICTURE_TYPE_I) {
+        if (s->c.msmpeg4_version <= MSMP4_V2) {
+            if (s->c.pict_type == AV_PICTURE_TYPE_I) {
                 put_bits(&s->pb,
                          ff_v2_intra_cbpc[cbp&3][1], ff_v2_intra_cbpc[cbp&3][0]);
             } else {
@@ -465,14 +466,14 @@ static void msmpeg4_encode_mb(MpegEncContext *const s,
                      ff_h263_cbpy_tab[cbp>>2][1],
                      ff_h263_cbpy_tab[cbp>>2][0]);
         }else{
-            if (s->pict_type == AV_PICTURE_TYPE_I) {
+            if (s->c.pict_type == AV_PICTURE_TYPE_I) {
                 /* compute coded_cbp; the 0x3 corresponds to chroma cbp;
                  * luma coded_cbp are set in the loop below */
                 coded_cbp = cbp & 0x3;
                 for (int i = 0; i < 4; i++) {
                     uint8_t *coded_block;
-                    int pred = ff_msmpeg4_coded_block_pred(s, i, &coded_block);
-                    int val = (s->block_last_index[i] >= 1);
+                    int pred = ff_msmpeg4_coded_block_pred(&s->c, i, &coded_block);
+                    int val = (s->c.block_last_index[i] >= 1);
                     *coded_block = val;
                     val ^= pred;
                     coded_cbp |= val << (5 - i);
@@ -488,9 +489,10 @@ static void msmpeg4_encode_mb(MpegEncContext *const s,
                          ff_table_mb_non_intra[cbp][0]);
             }
             put_bits(&s->pb, 1, 0);             /* no AC prediction yet */
-            if(s->inter_intra_pred){
-                s->h263_aic_dir=0;
-                put_bits(&s->pb, ff_table_inter_intra[s->h263_aic_dir][1], ff_table_inter_intra[s->h263_aic_dir][0]);
+            if (s->c.inter_intra_pred) {
+                s->c.h263_aic_dir = 0;
+                put_bits(&s->pb, ff_table_inter_intra[s->c.h263_aic_dir][1],
+                                 ff_table_inter_intra[s->c.h263_aic_dir][0]);
             }
         }
         s->misc_bits += get_bits_diff(s);
@@ -505,24 +507,24 @@ static void msmpeg4_encode_mb(MpegEncContext *const s,
 
 static void msmpeg4_encode_dc(MSMPEG4EncContext *const ms, int level, int n, int *dir_ptr)
 {
-    MpegEncContext *const s = &ms->m.s;
+    MPVEncContext *const s = &ms->m.s;
     int sign, code;
     int pred;
 
     int16_t *dc_val;
-    pred = ff_msmpeg4_pred_dc(s, n, &dc_val, dir_ptr);
+    pred = ff_msmpeg4_pred_dc(&s->c, n, &dc_val, dir_ptr);
 
     /* update predictor */
     if (n < 4) {
-        *dc_val = level * s->y_dc_scale;
+        *dc_val = level * s->c.y_dc_scale;
     } else {
-        *dc_val = level * s->c_dc_scale;
+        *dc_val = level * s->c.c_dc_scale;
     }
 
     /* do the prediction */
     level -= pred;
 
-    if (s->msmpeg4_version <= MSMP4_V2) {
+    if (s->c.msmpeg4_version <= MSMP4_V2) {
         if (n < 4) {
             put_bits(&s->pb,
                      ff_v2_dc_lum_table[level + 256][1],
@@ -556,7 +558,7 @@ static void msmpeg4_encode_dc(MSMPEG4EncContext *const ms, int level, int n, int
 
 /* Encoding of a block; very similar to MPEG-4 except for a different
  * escape coding (same as H.263) and more VLC tables. */
-void ff_msmpeg4_encode_block(MpegEncContext * s, int16_t * block, int n)
+void ff_msmpeg4_encode_block(MPVEncContext *const s, int16_t * block, int n)
 {
     MSMPEG4EncContext *const ms = (MSMPEG4EncContext*)s;
     int level, run, last, i, j, last_index;
@@ -565,7 +567,7 @@ void ff_msmpeg4_encode_block(MpegEncContext * s, int16_t * block, int n)
     const RLTable *rl;
     const uint8_t *scantable;
 
-    if (s->mb_intra) {
+    if (s->c.mb_intra) {
         msmpeg4_encode_dc(ms, block[0], n, &dc_pred_dir);
         i = 1;
         if (n < 4) {
@@ -573,23 +575,23 @@ void ff_msmpeg4_encode_block(MpegEncContext * s, int16_t * block, int n)
         } else {
             rl = &ff_rl_table[3 + ms->rl_chroma_table_index];
         }
-        run_diff = s->msmpeg4_version >= MSMP4_WMV1;
-        scantable= s->intra_scantable.permutated;
+        run_diff = s->c.msmpeg4_version >= MSMP4_WMV1;
+        scantable= s->c.intra_scantable.permutated;
     } else {
         i = 0;
         rl = &ff_rl_table[3 + ms->rl_table_index];
-        run_diff = s->msmpeg4_version > MSMP4_V2;
-        scantable= s->inter_scantable.permutated;
+        run_diff = s->c.msmpeg4_version > MSMP4_V2;
+        scantable= s->c.inter_scantable.permutated;
     }
 
     /* recalculate block_last_index for M$ wmv1 */
-    if (s->msmpeg4_version >= MSMP4_WMV1 && s->block_last_index[n] > 0) {
+    if (s->c.msmpeg4_version >= MSMP4_WMV1 && s->c.block_last_index[n] > 0) {
         for(last_index=63; last_index>=0; last_index--){
             if(block[scantable[last_index]]) break;
         }
-        s->block_last_index[n]= last_index;
+        s->c.block_last_index[n]= last_index;
     }else
-        last_index = s->block_last_index[n];
+        last_index = s->c.block_last_index[n];
     /* AC coefs */
     last_non_zero = i - 1;
     for (; i <= last_index; i++) {
@@ -606,10 +608,10 @@ void ff_msmpeg4_encode_block(MpegEncContext * s, int16_t * block, int n)
             }
 
             if(level<=MAX_LEVEL && run<=MAX_RUN){
-                ms->ac_stats[s->mb_intra][n>3][level][run][last]++;
+                ms->ac_stats[s->c.mb_intra][n>3][level][run][last]++;
             }
 
-            ms->ac_stats[s->mb_intra][n > 3][40][63][0]++; //esc3 like
+            ms->ac_stats[s->c.mb_intra][n > 3][40][63][0]++; //esc3 like
 
             code = get_rl_index(rl, last, run, level);
             put_bits(&s->pb, rl->table_vlc[code][1], rl->table_vlc[code][0]);
@@ -629,7 +631,7 @@ void ff_msmpeg4_encode_block(MpegEncContext * s, int16_t * block, int n)
                     if (run1 < 0)
                         goto esc3;
                     code = get_rl_index(rl, last, run1+1, level);
-                    if (s->msmpeg4_version == MSMP4_WMV1 && code == rl->n)
+                    if (s->c.msmpeg4_version == MSMP4_WMV1 && code == rl->n)
                         goto esc3;
                     code = get_rl_index(rl, last, run1, level);
                     if (code == rl->n) {
@@ -637,12 +639,12 @@ void ff_msmpeg4_encode_block(MpegEncContext * s, int16_t * block, int n)
                         /* third escape */
                         put_bits(&s->pb, 1, 0);
                         put_bits(&s->pb, 1, last);
-                        if (s->msmpeg4_version >= MSMP4_WMV1) {
+                        if (s->c.msmpeg4_version >= MSMP4_WMV1) {
                             if (s->esc3_level_length == 0) {
                                 s->esc3_level_length = 8;
                                 ms->esc3_run_length  = 6;
                                 //ESCLVLSZ + ESCRUNSZ
-                                if(s->qscale<8)
+                                if(s->c.qscale<8)
                                     put_bits(&s->pb, 6, 3);
                                 else
                                     put_bits(&s->pb, 8, 3);
@@ -676,17 +678,17 @@ void ff_msmpeg4_encode_block(MpegEncContext * s, int16_t * block, int n)
 
 av_cold void ff_msmpeg4_encode_init(MPVMainEncContext *const m)
 {
-    MpegEncContext *const s = &m->s;
+    MPVEncContext *const s = &m->s;
     static AVOnce init_static_once = AV_ONCE_INIT;
 
-    ff_msmpeg4_common_init(s);
+    ff_msmpeg4_common_init(&s->c);
 
-    if (s->msmpeg4_version <= MSMP4_WMV1) {
+    if (s->c.msmpeg4_version <= MSMP4_WMV1) {
         m->encode_picture_header = msmpeg4_encode_picture_header;
         s->encode_mb             = msmpeg4_encode_mb;
     }
 
-    if (s->msmpeg4_version >= MSMP4_WMV1) {
+    if (s->c.msmpeg4_version >= MSMP4_WMV1) {
         s->min_qcoeff = -255;
         s->max_qcoeff =  255;
     }
diff --git a/libavcodec/msmpeg4enc.h b/libavcodec/msmpeg4enc.h
index bce1265bb5..167600f01f 100644
--- a/libavcodec/msmpeg4enc.h
+++ b/libavcodec/msmpeg4enc.h
@@ -41,16 +41,16 @@ typedef struct MSMPEG4EncContext {
     unsigned ac_stats[2][2][MAX_LEVEL + 1][MAX_RUN + 1][2];
 } MSMPEG4EncContext;
 
-static inline MSMPEG4EncContext *mpv_to_msmpeg4(MpegEncContext *s)
+static inline MSMPEG4EncContext *mpv_to_msmpeg4(MPVEncContext *s)
 {
     // Only legal because no MSMPEG-4 decoder uses slice-threading.
     return (MSMPEG4EncContext*)s;
 }
 
 void ff_msmpeg4_encode_init(MPVMainEncContext *m);
-void ff_msmpeg4_encode_ext_header(MpegEncContext *s);
-void ff_msmpeg4_encode_block(MpegEncContext * s, int16_t * block, int n);
-void ff_msmpeg4_handle_slices(MpegEncContext *s);
+void ff_msmpeg4_encode_ext_header(MPVEncContext *s);
+void ff_msmpeg4_encode_block(MPVEncContext * s, int16_t * block, int n);
+void ff_msmpeg4_handle_slices(MPVEncContext *s);
 void ff_msmpeg4_encode_motion(MSMPEG4EncContext *ms, int mx, int my);
 
 void ff_msmpeg4_code012(PutBitContext *pb, int n);
diff --git a/libavcodec/ppc/me_cmp.c b/libavcodec/ppc/me_cmp.c
index 90f21525d7..764e30da2a 100644
--- a/libavcodec/ppc/me_cmp.c
+++ b/libavcodec/ppc/me_cmp.c
@@ -51,7 +51,7 @@
     iv = vec_vsx_ld(1,  pix);\
 }
 #endif
-static int sad16_x2_altivec(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+static int sad16_x2_altivec(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                             ptrdiff_t stride, int h)
 {
     int i;
@@ -91,7 +91,7 @@ static int sad16_x2_altivec(MpegEncContext *v, const uint8_t *pix1, const uint8_
     return s;
 }
 
-static int sad16_y2_altivec(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+static int sad16_y2_altivec(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                             ptrdiff_t stride, int h)
 {
     int i;
@@ -141,7 +141,7 @@ static int sad16_y2_altivec(MpegEncContext *v, const uint8_t *pix1, const uint8_
     return s;
 }
 
-static int sad16_xy2_altivec(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+static int sad16_xy2_altivec(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                              ptrdiff_t stride, int h)
 {
     int i;
@@ -230,7 +230,7 @@ static int sad16_xy2_altivec(MpegEncContext *v, const uint8_t *pix1, const uint8
     return s;
 }
 
-static int sad16_altivec(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+static int sad16_altivec(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                          ptrdiff_t stride, int h)
 {
     int i;
@@ -265,7 +265,7 @@ static int sad16_altivec(MpegEncContext *v, const uint8_t *pix1, const uint8_t *
     return s;
 }
 
-static int sad8_altivec(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+static int sad8_altivec(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                         ptrdiff_t stride, int h)
 {
     int i;
@@ -309,7 +309,7 @@ static int sad8_altivec(MpegEncContext *v, const uint8_t *pix1, const uint8_t *p
 
 /* Sum of Squared Errors for an 8x8 block, AltiVec-enhanced.
  * It's the sad8_altivec code above w/ squaring added. */
-static int sse8_altivec(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+static int sse8_altivec(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                         ptrdiff_t stride, int h)
 {
     int i;
@@ -354,7 +354,7 @@ static int sse8_altivec(MpegEncContext *v, const uint8_t *pix1, const uint8_t *p
 
 /* Sum of Squared Errors for a 16x16 block, AltiVec-enhanced.
  * It's the sad16_altivec code above w/ squaring added. */
-static int sse16_altivec(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+static int sse16_altivec(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                          ptrdiff_t stride, int h)
 {
     int i;
@@ -392,7 +392,7 @@ static int sse16_altivec(MpegEncContext *v, const uint8_t *pix1, const uint8_t *
     return s;
 }
 
-static int hadamard8_diff8x8_altivec(MpegEncContext *s, const uint8_t *dst,
+static int hadamard8_diff8x8_altivec(MPVEncContext *s, const uint8_t *dst,
                                      const uint8_t *src, ptrdiff_t stride, int h)
 {
     int __attribute__((aligned(16))) sum;
@@ -518,7 +518,7 @@ static int hadamard8_diff8x8_altivec(MpegEncContext *s, const uint8_t *dst,
  * On the 970, the hand-made RA is still a win (around 690 vs. around 780),
  * but xlc goes to around 660 on the regular C code...
  */
-static int hadamard8_diff16x8_altivec(MpegEncContext *s, const uint8_t *dst,
+static int hadamard8_diff16x8_altivec(MPVEncContext *s, const uint8_t *dst,
                                       const uint8_t *src, ptrdiff_t stride, int h)
 {
     int __attribute__((aligned(16))) sum;
@@ -709,7 +709,7 @@ static int hadamard8_diff16x8_altivec(MpegEncContext *s, const uint8_t *dst,
     return sum;
 }
 
-static int hadamard8_diff16_altivec(MpegEncContext *s, const uint8_t *dst,
+static int hadamard8_diff16_altivec(MPVEncContext *s, const uint8_t *dst,
                                     const uint8_t *src, ptrdiff_t stride, int h)
 {
     int score = hadamard8_diff16x8_altivec(s, dst, src, stride, 8);
diff --git a/libavcodec/ratecontrol.c b/libavcodec/ratecontrol.c
index b131f61b70..d60e14252e 100644
--- a/libavcodec/ratecontrol.c
+++ b/libavcodec/ratecontrol.c
@@ -37,20 +37,20 @@
 
 void ff_write_pass1_stats(MPVMainEncContext *const m)
 {
-    const MpegEncContext *const s = &m->s;
-    snprintf(s->avctx->stats_out, 256,
+    const MPVEncContext *const s = &m->s;
+    snprintf(s->c.avctx->stats_out, 256,
              "in:%d out:%d type:%d q:%d itex:%d ptex:%d mv:%d misc:%d "
              "fcode:%d bcode:%d mc-var:%"PRId64" var:%"PRId64" icount:%d hbits:%d;\n",
-             s->cur_pic.ptr->display_picture_number,
-             s->cur_pic.ptr->coded_picture_number,
-             s->pict_type,
-             s->cur_pic.ptr->f->quality,
+             s->c.cur_pic.ptr->display_picture_number,
+             s->c.cur_pic.ptr->coded_picture_number,
+             s->c.pict_type,
+             s->c.cur_pic.ptr->f->quality,
              s->i_tex_bits,
              s->p_tex_bits,
              s->mv_bits,
              s->misc_bits,
-             s->f_code,
-             s->b_code,
+             s->c.f_code,
+             s->c.b_code,
              m->mc_mb_var_sum,
              m->mb_var_sum,
              s->i_count,
@@ -104,9 +104,9 @@ static double bits2qp_cb(void *rce, double qp)
 
 static double get_diff_limited_q(MPVMainEncContext *m, const RateControlEntry *rce, double q)
 {
-    MpegEncContext     *const   s = &m->s;
+    MPVEncContext      *const   s = &m->s;
     RateControlContext *const rcc = &m->rc_context;
-    AVCodecContext *a         = s->avctx;
+    AVCodecContext     *const   a = s->c.avctx;
     const int pict_type       = rce->new_pict_type;
     const double last_p_q     = rcc->last_qscale_for[AV_PICTURE_TYPE_P];
     const double last_non_b_q = rcc->last_qscale_for[rcc->last_non_b_pict_type];
@@ -144,7 +144,7 @@ static double get_diff_limited_q(MPVMainEncContext *m, const RateControlEntry *r
  */
 static void get_qminmax(int *qmin_ret, int *qmax_ret, MPVMainEncContext *const m, int pict_type)
 {
-    MpegEncContext *const s = &m->s;
+    MPVEncContext *const s = &m->s;
     int qmin = m->lmin;
     int qmax = m->lmax;
 
@@ -152,12 +152,12 @@ static void get_qminmax(int *qmin_ret, int *qmax_ret, MPVMainEncContext *const m
 
     switch (pict_type) {
     case AV_PICTURE_TYPE_B:
-        qmin = (int)(qmin * FFABS(s->avctx->b_quant_factor) + s->avctx->b_quant_offset + 0.5);
-        qmax = (int)(qmax * FFABS(s->avctx->b_quant_factor) + s->avctx->b_quant_offset + 0.5);
+        qmin = (int)(qmin * FFABS(s->c.avctx->b_quant_factor) + s->c.avctx->b_quant_offset + 0.5);
+        qmax = (int)(qmax * FFABS(s->c.avctx->b_quant_factor) + s->c.avctx->b_quant_offset + 0.5);
         break;
     case AV_PICTURE_TYPE_I:
-        qmin = (int)(qmin * FFABS(s->avctx->i_quant_factor) + s->avctx->i_quant_offset + 0.5);
-        qmax = (int)(qmax * FFABS(s->avctx->i_quant_factor) + s->avctx->i_quant_offset + 0.5);
+        qmin = (int)(qmin * FFABS(s->c.avctx->i_quant_factor) + s->c.avctx->i_quant_offset + 0.5);
+        qmax = (int)(qmax * FFABS(s->c.avctx->i_quant_factor) + s->c.avctx->i_quant_offset + 0.5);
         break;
     }
 
@@ -174,12 +174,12 @@ static void get_qminmax(int *qmin_ret, int *qmax_ret, MPVMainEncContext *const m
 static double modify_qscale(MPVMainEncContext *const m, const RateControlEntry *rce,
                             double q, int frame_num)
 {
-    MpegEncContext     *const   s = &m->s;
+    MPVEncContext      *const   s = &m->s;
     RateControlContext *const rcc = &m->rc_context;
-    const double buffer_size = s->avctx->rc_buffer_size;
-    const double fps         = get_fps(s->avctx);
-    const double min_rate    = s->avctx->rc_min_rate / fps;
-    const double max_rate    = s->avctx->rc_max_rate / fps;
+    const double buffer_size = s->c.avctx->rc_buffer_size;
+    const double fps         = get_fps(s->c.avctx);
+    const double min_rate    = s->c.avctx->rc_min_rate / fps;
+    const double max_rate    = s->c.avctx->rc_max_rate / fps;
     const int pict_type      = rce->new_pict_type;
     int qmin, qmax;
 
@@ -206,11 +206,11 @@ static double modify_qscale(MPVMainEncContext *const m, const RateControlEntry *
 
             q_limit = bits2qp(rce,
                               FFMAX((min_rate - buffer_size + rcc->buffer_index) *
-                                    s->avctx->rc_min_vbv_overflow_use, 1));
+                                    s->c.avctx->rc_min_vbv_overflow_use, 1));
 
             if (q > q_limit) {
-                if (s->avctx->debug & FF_DEBUG_RC)
-                    av_log(s->avctx, AV_LOG_DEBUG,
+                if (s->c.avctx->debug & FF_DEBUG_RC)
+                    av_log(s->c.avctx, AV_LOG_DEBUG,
                            "limiting QP %f -> %f\n", q, q_limit);
                 q = q_limit;
             }
@@ -226,17 +226,17 @@ static double modify_qscale(MPVMainEncContext *const m, const RateControlEntry *
 
             q_limit = bits2qp(rce,
                               FFMAX(rcc->buffer_index *
-                                    s->avctx->rc_max_available_vbv_use,
+                                    s->c.avctx->rc_max_available_vbv_use,
                                     1));
             if (q < q_limit) {
-                if (s->avctx->debug & FF_DEBUG_RC)
-                    av_log(s->avctx, AV_LOG_DEBUG,
+                if (s->c.avctx->debug & FF_DEBUG_RC)
+                    av_log(s->c.avctx, AV_LOG_DEBUG,
                            "limiting QP %f -> %f\n", q, q_limit);
                 q = q_limit;
             }
         }
     }
-    ff_dlog(s->avctx, "q:%f max:%f min:%f size:%f index:%f agr:%f\n",
+    ff_dlog(s->c.avctx, "q:%f max:%f min:%f size:%f index:%f agr:%f\n",
             q, max_rate, min_rate, buffer_size, rcc->buffer_index,
             rcc->buffer_aggressivity);
     if (rcc->qsquish == 0.0 || qmin == qmax) {
@@ -266,11 +266,11 @@ static double modify_qscale(MPVMainEncContext *const m, const RateControlEntry *
 static double get_qscale(MPVMainEncContext *const m, RateControlEntry *rce,
                          double rate_factor, int frame_num)
 {
-    MpegEncContext *const s = &m->s;
+    MPVEncContext  *const s = &m->s;
     RateControlContext *rcc = &m->rc_context;
-    AVCodecContext *a       = s->avctx;
+    AVCodecContext *const a = s->c.avctx;
     const int pict_type     = rce->new_pict_type;
-    const double mb_num     = s->mb_num;
+    const double mb_num     = s->c.mb_num;
     double q, bits;
     int i;
 
@@ -300,7 +300,7 @@ static double get_qscale(MPVMainEncContext *const m, RateControlEntry *rce,
 
     bits = av_expr_eval(rcc->rc_eq_eval, const_values, rce);
     if (isnan(bits)) {
-        av_log(s->avctx, AV_LOG_ERROR, "Error evaluating rc_eq \"%s\"\n", rcc->rc_eq);
+        av_log(s->c.avctx, AV_LOG_ERROR, "Error evaluating rc_eq \"%s\"\n", rcc->rc_eq);
         return -1;
     }
 
@@ -311,8 +311,8 @@ static double get_qscale(MPVMainEncContext *const m, RateControlEntry *rce,
     bits += 1.0; // avoid 1/0 issues
 
     /* user override */
-    for (i = 0; i < s->avctx->rc_override_count; i++) {
-        RcOverride *rco = s->avctx->rc_override;
+    for (i = 0; i < s->c.avctx->rc_override_count; i++) {
+        RcOverride *rco = s->c.avctx->rc_override;
         if (rco[i].start_frame > frame_num)
             continue;
         if (rco[i].end_frame < frame_num)
@@ -327,10 +327,10 @@ static double get_qscale(MPVMainEncContext *const m, RateControlEntry *rce,
     q = bits2qp(rce, bits);
 
     /* I/B difference */
-    if (pict_type == AV_PICTURE_TYPE_I && s->avctx->i_quant_factor < 0.0)
-        q = -q * s->avctx->i_quant_factor + s->avctx->i_quant_offset;
-    else if (pict_type == AV_PICTURE_TYPE_B && s->avctx->b_quant_factor < 0.0)
-        q = -q * s->avctx->b_quant_factor + s->avctx->b_quant_offset;
+    if (pict_type == AV_PICTURE_TYPE_I && s->c.avctx->i_quant_factor < 0.0)
+        q = -q * s->c.avctx->i_quant_factor + s->c.avctx->i_quant_offset;
+    else if (pict_type == AV_PICTURE_TYPE_B && s->c.avctx->b_quant_factor < 0.0)
+        q = -q * s->c.avctx->b_quant_factor + s->c.avctx->b_quant_offset;
     if (q < 1)
         q = 1;
 
@@ -340,10 +340,10 @@ static double get_qscale(MPVMainEncContext *const m, RateControlEntry *rce,
 static int init_pass2(MPVMainEncContext *const m)
 {
     RateControlContext *const rcc = &m->rc_context;
-    MpegEncContext     *const   s = &m->s;
-    AVCodecContext *a       = s->avctx;
+    MPVEncContext      *const   s = &m->s;
+    AVCodecContext     *const   a = s->c.avctx;
     int i, toobig;
-    AVRational fps         = get_fpsQ(s->avctx);
+    AVRational fps         = get_fpsQ(s->c.avctx);
     double complexity[5]   = { 0 }; // approximate bits at quant=1
     uint64_t const_bits[5] = { 0 }; // quantizer independent bits
     uint64_t all_const_bits;
@@ -376,7 +376,7 @@ static int init_pass2(MPVMainEncContext *const m)
                      const_bits[AV_PICTURE_TYPE_B];
 
     if (all_available_bits < all_const_bits) {
-        av_log(s->avctx, AV_LOG_ERROR, "requested bitrate is too low\n");
+        av_log(s->c.avctx, AV_LOG_ERROR, "requested bitrate is too low\n");
         return -1;
     }
 
@@ -393,7 +393,7 @@ static int init_pass2(MPVMainEncContext *const m)
         expected_bits = 0;
         rate_factor  += step;
 
-        rcc->buffer_index = s->avctx->rc_buffer_size / 2;
+        rcc->buffer_index = s->c.avctx->rc_buffer_size / 2;
 
         /* find qscale */
         for (i = 0; i < rcc->num_entries; i++) {
@@ -453,7 +453,7 @@ static int init_pass2(MPVMainEncContext *const m)
             expected_bits     += bits;
         }
 
-        ff_dlog(s->avctx,
+        ff_dlog(s->c.avctx,
                 "expected_bits: %f all_available_bits: %d rate_factor: %f\n",
                 expected_bits, (int)all_available_bits, rate_factor);
         if (expected_bits > all_available_bits) {
@@ -467,32 +467,32 @@ static int init_pass2(MPVMainEncContext *const m)
     /* check bitrate calculations and print info */
     qscale_sum = 0.0;
     for (i = 0; i < rcc->num_entries; i++) {
-        ff_dlog(s->avctx, "[lavc rc] entry[%d].new_qscale = %.3f  qp = %.3f\n",
+        ff_dlog(s->c.avctx, "[lavc rc] entry[%d].new_qscale = %.3f  qp = %.3f\n",
                 i,
                 rcc->entry[i].new_qscale,
                 rcc->entry[i].new_qscale / FF_QP2LAMBDA);
         qscale_sum += av_clip(rcc->entry[i].new_qscale / FF_QP2LAMBDA,
-                              s->avctx->qmin, s->avctx->qmax);
+                              s->c.avctx->qmin, s->c.avctx->qmax);
     }
     av_assert0(toobig <= 40);
-    av_log(s->avctx, AV_LOG_DEBUG,
+    av_log(s->c.avctx, AV_LOG_DEBUG,
            "[lavc rc] requested bitrate: %"PRId64" bps  expected bitrate: %"PRId64" bps\n",
            m->bit_rate,
            (int64_t)(expected_bits / ((double)all_available_bits / m->bit_rate)));
-    av_log(s->avctx, AV_LOG_DEBUG,
+    av_log(s->c.avctx, AV_LOG_DEBUG,
            "[lavc rc] estimated target average qp: %.3f\n",
            (float)qscale_sum / rcc->num_entries);
     if (toobig == 0) {
-        av_log(s->avctx, AV_LOG_INFO,
+        av_log(s->c.avctx, AV_LOG_INFO,
                "[lavc rc] Using all of requested bitrate is not "
                "necessary for this video with these parameters.\n");
     } else if (toobig == 40) {
-        av_log(s->avctx, AV_LOG_ERROR,
+        av_log(s->c.avctx, AV_LOG_ERROR,
                "[lavc rc] Error: bitrate too low for this video "
                "with these parameters.\n");
         return -1;
     } else if (fabs(expected_bits / all_available_bits - 1.0) > 0.01) {
-        av_log(s->avctx, AV_LOG_ERROR,
+        av_log(s->c.avctx, AV_LOG_ERROR,
                "[lavc rc] Error: 2pass curve failed to converge\n");
         return -1;
     }
@@ -502,7 +502,7 @@ static int init_pass2(MPVMainEncContext *const m)
 
 av_cold int ff_rate_control_init(MPVMainEncContext *const m)
 {
-    MpegEncContext *const s = &m->s;
+    MPVEncContext  *const s = &m->s;
     RateControlContext *rcc = &m->rc_context;
     int i, res;
     static const char * const const_names[] = {
@@ -540,19 +540,19 @@ av_cold int ff_rate_control_init(MPVMainEncContext *const m)
     };
     emms_c();
 
-    if (!s->avctx->rc_max_available_vbv_use && s->avctx->rc_buffer_size) {
-        if (s->avctx->rc_max_rate) {
-            s->avctx->rc_max_available_vbv_use = av_clipf(s->avctx->rc_max_rate/(s->avctx->rc_buffer_size*get_fps(s->avctx)), 1.0/3, 1.0);
+    if (!s->c.avctx->rc_max_available_vbv_use && s->c.avctx->rc_buffer_size) {
+        if (s->c.avctx->rc_max_rate) {
+            s->c.avctx->rc_max_available_vbv_use = av_clipf(s->c.avctx->rc_max_rate/(s->c.avctx->rc_buffer_size*get_fps(s->c.avctx)), 1.0/3, 1.0);
         } else
-            s->avctx->rc_max_available_vbv_use = 1.0;
+            s->c.avctx->rc_max_available_vbv_use = 1.0;
     }
 
     res = av_expr_parse(&rcc->rc_eq_eval,
                         rcc->rc_eq ? rcc->rc_eq : "tex^qComp",
                         const_names, func1_names, func1,
-                        NULL, NULL, 0, s->avctx);
+                        NULL, NULL, 0, s->c.avctx);
     if (res < 0) {
-        av_log(s->avctx, AV_LOG_ERROR, "Error parsing rc_eq \"%s\"\n", rcc->rc_eq);
+        av_log(s->c.avctx, AV_LOG_ERROR, "Error parsing rc_eq \"%s\"\n", rcc->rc_eq);
         return res;
     }
 
@@ -569,16 +569,16 @@ av_cold int ff_rate_control_init(MPVMainEncContext *const m)
 
         rcc->last_qscale_for[i] = FF_QP2LAMBDA * 5;
     }
-    rcc->buffer_index = s->avctx->rc_initial_buffer_occupancy;
+    rcc->buffer_index = s->c.avctx->rc_initial_buffer_occupancy;
     if (!rcc->buffer_index)
-        rcc->buffer_index = s->avctx->rc_buffer_size * 3 / 4;
+        rcc->buffer_index = s->c.avctx->rc_buffer_size * 3 / 4;
 
-    if (s->avctx->flags & AV_CODEC_FLAG_PASS2) {
+    if (s->c.avctx->flags & AV_CODEC_FLAG_PASS2) {
         int i;
         char *p;
 
         /* find number of pics */
-        p = s->avctx->stats_in;
+        p = s->c.avctx->stats_in;
         for (i = -1; p; i++)
             p = strchr(p + 1, ';');
         i += m->max_b_frames;
@@ -596,12 +596,12 @@ av_cold int ff_rate_control_init(MPVMainEncContext *const m)
 
             rce->pict_type  = rce->new_pict_type = AV_PICTURE_TYPE_P;
             rce->qscale     = rce->new_qscale    = FF_QP2LAMBDA * 2;
-            rce->misc_bits  = s->mb_num + 10;
-            rce->mb_var_sum = s->mb_num * 100;
+            rce->misc_bits  = s->c.mb_num + 10;
+            rce->mb_var_sum = s->c.mb_num * 100;
         }
 
         /* read stats */
-        p = s->avctx->stats_in;
+        p = s->c.avctx->stats_in;
         for (i = 0; i < rcc->num_entries - m->max_b_frames; i++) {
             RateControlEntry *rce;
             int picture_number;
@@ -630,7 +630,7 @@ av_cold int ff_rate_control_init(MPVMainEncContext *const m)
                         &rce->mc_mb_var_sum, &rce->mb_var_sum,
                         &rce->i_count, &rce->header_bits);
             if (e != 13) {
-                av_log(s->avctx, AV_LOG_ERROR,
+                av_log(s->c.avctx, AV_LOG_ERROR,
                        "statistics are damaged at line %d, parser out=%d\n",
                        i, e);
                 return -1;
@@ -644,21 +644,21 @@ av_cold int ff_rate_control_init(MPVMainEncContext *const m)
             return res;
     }
 
-    if (!(s->avctx->flags & AV_CODEC_FLAG_PASS2)) {
+    if (!(s->c.avctx->flags & AV_CODEC_FLAG_PASS2)) {
         rcc->short_term_qsum   = 0.001;
         rcc->short_term_qcount = 0.001;
 
         rcc->pass1_rc_eq_output_sum = 0.001;
         rcc->pass1_wanted_bits      = 0.001;
 
-        if (s->avctx->qblur > 1.0) {
-            av_log(s->avctx, AV_LOG_ERROR, "qblur too large\n");
+        if (s->c.avctx->qblur > 1.0) {
+            av_log(s->c.avctx, AV_LOG_ERROR, "qblur too large\n");
             return -1;
         }
         /* init stuff with the user specified complexity */
         if (rcc->initial_cplx) {
             for (i = 0; i < 60 * 30; i++) {
-                double bits = rcc->initial_cplx * (i / 10000.0 + 1.0) * s->mb_num;
+                double bits = rcc->initial_cplx * (i / 10000.0 + 1.0) * s->c.mb_num;
                 RateControlEntry rce;
 
                 if (i % ((m->gop_size + 3) / 4) == 0)
@@ -669,16 +669,16 @@ av_cold int ff_rate_control_init(MPVMainEncContext *const m)
                     rce.pict_type = AV_PICTURE_TYPE_P;
 
                 rce.new_pict_type = rce.pict_type;
-                rce.mc_mb_var_sum = bits * s->mb_num / 100000;
-                rce.mb_var_sum    = s->mb_num;
+                rce.mc_mb_var_sum = bits * s->c.mb_num / 100000;
+                rce.mb_var_sum    = s->c.mb_num;
 
                 rce.qscale    = FF_QP2LAMBDA * 2;
                 rce.f_code    = 2;
                 rce.b_code    = 1;
                 rce.misc_bits = 1;
 
-                if (s->pict_type == AV_PICTURE_TYPE_I) {
-                    rce.i_count    = s->mb_num;
+                if (s->c.pict_type == AV_PICTURE_TYPE_I) {
+                    rce.i_count    = s->c.mb_num;
                     rce.i_tex_bits = bits;
                     rce.p_tex_bits = 0;
                     rce.mv_bits    = 0;
@@ -696,13 +696,13 @@ av_cold int ff_rate_control_init(MPVMainEncContext *const m)
                 get_qscale(m, &rce, rcc->pass1_wanted_bits / rcc->pass1_rc_eq_output_sum, i);
 
                 // FIXME misbehaves a little for variable fps
-                rcc->pass1_wanted_bits += m->bit_rate / get_fps(s->avctx);
+                rcc->pass1_wanted_bits += m->bit_rate / get_fps(s->c.avctx);
             }
         }
     }
 
     if (s->adaptive_quant) {
-        unsigned mb_array_size = s->mb_stride * s->mb_height;
+        unsigned mb_array_size = s->c.mb_stride * s->c.mb_height;
 
         rcc->cplx_tab = av_malloc_array(mb_array_size, 2 * sizeof(rcc->cplx_tab));
         if (!rcc->cplx_tab)
@@ -726,14 +726,14 @@ av_cold void ff_rate_control_uninit(RateControlContext *rcc)
 
 int ff_vbv_update(MPVMainEncContext *m, int frame_size)
 {
-    MpegEncContext     *const   s = &m->s;
+    MPVEncContext      *const   s = &m->s;
     RateControlContext *const rcc = &m->rc_context;
-    const double fps        = get_fps(s->avctx);
-    const int buffer_size   = s->avctx->rc_buffer_size;
-    const double min_rate   = s->avctx->rc_min_rate / fps;
-    const double max_rate   = s->avctx->rc_max_rate / fps;
+    const double fps        = get_fps(s->c.avctx);
+    const int buffer_size   = s->c.avctx->rc_buffer_size;
+    const double min_rate   = s->c.avctx->rc_min_rate / fps;
+    const double max_rate   = s->c.avctx->rc_max_rate / fps;
 
-    ff_dlog(s->avctx, "%d %f %d %f %f\n",
+    ff_dlog(s->c.avctx, "%d %f %d %f %f\n",
             buffer_size, rcc->buffer_index, frame_size, min_rate, max_rate);
 
     if (buffer_size) {
@@ -741,9 +741,9 @@ int ff_vbv_update(MPVMainEncContext *m, int frame_size)
 
         rcc->buffer_index -= frame_size;
         if (rcc->buffer_index < 0) {
-            av_log(s->avctx, AV_LOG_ERROR, "rc buffer underflow\n");
-            if (frame_size > max_rate && s->qscale == s->avctx->qmax) {
-                av_log(s->avctx, AV_LOG_ERROR, "max bitrate possibly too small or try trellis with large lmax or increase qmax\n");
+            av_log(s->c.avctx, AV_LOG_ERROR, "rc buffer underflow\n");
+            if (frame_size > max_rate && s->c.qscale == s->c.avctx->qmax) {
+                av_log(s->c.avctx, AV_LOG_ERROR, "max bitrate possibly too small or try trellis with large lmax or increase qmax\n");
             }
             rcc->buffer_index = 0;
         }
@@ -754,12 +754,12 @@ int ff_vbv_update(MPVMainEncContext *m, int frame_size)
         if (rcc->buffer_index > buffer_size) {
             int stuffing = ceil((rcc->buffer_index - buffer_size) / 8);
 
-            if (stuffing < 4 && s->codec_id == AV_CODEC_ID_MPEG4)
+            if (stuffing < 4 && s->c.codec_id == AV_CODEC_ID_MPEG4)
                 stuffing = 4;
             rcc->buffer_index -= 8 * stuffing;
 
-            if (s->avctx->debug & FF_DEBUG_RC)
-                av_log(s->avctx, AV_LOG_DEBUG, "stuffing %d bytes\n", stuffing);
+            if (s->c.avctx->debug & FF_DEBUG_RC)
+                av_log(s->c.avctx, AV_LOG_DEBUG, "stuffing %d bytes\n", stuffing);
 
             return stuffing;
         }
@@ -787,31 +787,30 @@ static void update_predictor(Predictor *p, double q, double var, double size)
 static void adaptive_quantization(RateControlContext *const rcc,
                                   MPVMainEncContext *const m, double q)
 {
-    MpegEncContext *const s = &m->s;
-    int i;
-    const float lumi_masking         = s->avctx->lumi_masking / (128.0 * 128.0);
-    const float dark_masking         = s->avctx->dark_masking / (128.0 * 128.0);
-    const float temp_cplx_masking    = s->avctx->temporal_cplx_masking;
-    const float spatial_cplx_masking = s->avctx->spatial_cplx_masking;
-    const float p_masking            = s->avctx->p_masking;
+    MPVEncContext *const s = &m->s;
+    const float lumi_masking         = s->c.avctx->lumi_masking / (128.0 * 128.0);
+    const float dark_masking         = s->c.avctx->dark_masking / (128.0 * 128.0);
+    const float temp_cplx_masking    = s->c.avctx->temporal_cplx_masking;
+    const float spatial_cplx_masking = s->c.avctx->spatial_cplx_masking;
+    const float p_masking            = s->c.avctx->p_masking;
     const float border_masking       = m->border_masking;
     float bits_sum                   = 0.0;
     float cplx_sum                   = 0.0;
     float *cplx_tab                  = rcc->cplx_tab;
     float *bits_tab                  = rcc->bits_tab;
-    const int qmin                   = s->avctx->mb_lmin;
-    const int qmax                   = s->avctx->mb_lmax;
-    const int mb_width               = s->mb_width;
-    const int mb_height              = s->mb_height;
+    const int qmin                   = s->c.avctx->mb_lmin;
+    const int qmax                   = s->c.avctx->mb_lmax;
+    const int mb_width               = s->c.mb_width;
+    const int mb_height              = s->c.mb_height;
 
-    for (i = 0; i < s->mb_num; i++) {
-        const int mb_xy = s->mb_index2xy[i];
+    for (int i = 0; i < s->c.mb_num; i++) {
+        const int mb_xy = s->c.mb_index2xy[i];
         float temp_cplx = sqrt(s->mc_mb_var[mb_xy]); // FIXME merge in pow()
         float spat_cplx = sqrt(s->mb_var[mb_xy]);
         const int lumi  = s->mb_mean[mb_xy];
         float bits, cplx, factor;
-        int mb_x = mb_xy % s->mb_stride;
-        int mb_y = mb_xy / s->mb_stride;
+        int mb_x = mb_xy % s->c.mb_stride;
+        int mb_y = mb_xy / s->c.mb_stride;
         int mb_distance;
         float mb_factor = 0.0;
         if (spat_cplx < 4)
@@ -865,7 +864,7 @@ static void adaptive_quantization(RateControlContext *const rcc,
     /* handle qmin/qmax clipping */
     if (s->mpv_flags & FF_MPV_FLAG_NAQ) {
         float factor = bits_sum / cplx_sum;
-        for (i = 0; i < s->mb_num; i++) {
+        for (int i = 0; i < s->c.mb_num; i++) {
             float newq = q * cplx_tab[i] / bits_tab[i];
             newq *= factor;
 
@@ -883,8 +882,8 @@ static void adaptive_quantization(RateControlContext *const rcc,
             cplx_sum = 0.001;
     }
 
-    for (i = 0; i < s->mb_num; i++) {
-        const int mb_xy = s->mb_index2xy[i];
+    for (int i = 0; i < s->c.mb_num; i++) {
+        const int mb_xy = s->c.mb_index2xy[i];
         float newq      = q * cplx_tab[i] / bits_tab[i];
         int intq;
 
@@ -904,39 +903,39 @@ static void adaptive_quantization(RateControlContext *const rcc,
 
 void ff_get_2pass_fcode(MPVMainEncContext *const m)
 {
-    MpegEncContext *const s = &m->s;
+    MPVEncContext *const s = &m->s;
     const RateControlContext *rcc = &m->rc_context;
-    const RateControlEntry   *rce = &rcc->entry[s->picture_number];
+    const RateControlEntry   *rce = &rcc->entry[s->c.picture_number];
 
-    s->f_code = rce->f_code;
-    s->b_code = rce->b_code;
+    s->c.f_code = rce->f_code;
+    s->c.b_code = rce->b_code;
 }
 
 // FIXME rd or at least approx for dquant
 
 float ff_rate_estimate_qscale(MPVMainEncContext *const m, int dry_run)
 {
-    MpegEncContext *const s = &m->s;
+    MPVEncContext  *const s = &m->s;
     RateControlContext *rcc = &m->rc_context;
+    AVCodecContext *const a = s->c.avctx;
     float q;
     int qmin, qmax;
     float br_compensation;
     double diff;
     double short_term_q;
     double fps;
-    int picture_number = s->picture_number;
+    int picture_number = s->c.picture_number;
     int64_t wanted_bits;
-    AVCodecContext *a       = s->avctx;
     RateControlEntry local_rce, *rce;
     double bits;
     double rate_factor;
     int64_t var;
-    const int pict_type = s->pict_type;
+    const int pict_type = s->c.pict_type;
     emms_c();
 
     get_qminmax(&qmin, &qmax, m, pict_type);
 
-    fps = get_fps(s->avctx);
+    fps = get_fps(s->c.avctx);
     /* update predictors */
     if (picture_number > 2 && !dry_run) {
         const int64_t last_var =
@@ -949,10 +948,10 @@ float ff_rate_estimate_qscale(MPVMainEncContext *const m, int dry_run)
                          m->frame_bits - m->stuffing_bits);
     }
 
-    if (s->avctx->flags & AV_CODEC_FLAG_PASS2) {
+    if (s->c.avctx->flags & AV_CODEC_FLAG_PASS2) {
         av_assert0(picture_number >= 0);
         if (picture_number >= rcc->num_entries) {
-            av_log(s->avctx, AV_LOG_ERROR, "Input is longer than 2-pass log file\n");
+            av_log(s->c.avctx, AV_LOG_ERROR, "Input is longer than 2-pass log file\n");
             return -1;
         }
         rce         = &rcc->entry[picture_number];
@@ -965,17 +964,17 @@ float ff_rate_estimate_qscale(MPVMainEncContext *const m, int dry_run)
         /* FIXME add a dts field to AVFrame and ensure it is set and use it
          * here instead of reordering but the reordering is simpler for now
          * until H.264 B-pyramid must be handled. */
-        if (s->pict_type == AV_PICTURE_TYPE_B || s->low_delay)
-            dts_pic = s->cur_pic.ptr;
+        if (s->c.pict_type == AV_PICTURE_TYPE_B || s->c.low_delay)
+            dts_pic = s->c.cur_pic.ptr;
         else
-            dts_pic = s->last_pic.ptr;
+            dts_pic = s->c.last_pic.ptr;
 
         if (!dts_pic || dts_pic->f->pts == AV_NOPTS_VALUE)
             wanted_bits_double = m->bit_rate * (double)picture_number / fps;
         else
             wanted_bits_double = m->bit_rate * (double)dts_pic->f->pts / fps;
         if (wanted_bits_double > INT64_MAX) {
-            av_log(s->avctx, AV_LOG_WARNING, "Bits exceed 64bit range\n");
+            av_log(s->c.avctx, AV_LOG_WARNING, "Bits exceed 64bit range\n");
             wanted_bits = INT64_MAX;
         } else
             wanted_bits = (int64_t)wanted_bits_double;
@@ -989,12 +988,12 @@ float ff_rate_estimate_qscale(MPVMainEncContext *const m, int dry_run)
     var = pict_type == AV_PICTURE_TYPE_I ? m->mb_var_sum : m->mc_mb_var_sum;
 
     short_term_q = 0; /* avoid warning */
-    if (s->avctx->flags & AV_CODEC_FLAG_PASS2) {
+    if (s->c.avctx->flags & AV_CODEC_FLAG_PASS2) {
         if (pict_type != AV_PICTURE_TYPE_I)
             av_assert0(pict_type == rce->new_pict_type);
 
         q = rce->new_qscale / br_compensation;
-        ff_dlog(s->avctx, "%f %f %f last:%d var:%"PRId64" type:%d//\n", q, rce->new_qscale,
+        ff_dlog(s->c.avctx, "%f %f %f last:%d var:%"PRId64" type:%d//\n", q, rce->new_qscale,
                 br_compensation, m->frame_bits, var, pict_type);
     } else {
         rce->pict_type     =
@@ -1002,13 +1001,13 @@ float ff_rate_estimate_qscale(MPVMainEncContext *const m, int dry_run)
         rce->mc_mb_var_sum = m->mc_mb_var_sum;
         rce->mb_var_sum    = m->mb_var_sum;
         rce->qscale        = FF_QP2LAMBDA * 2;
-        rce->f_code        = s->f_code;
-        rce->b_code        = s->b_code;
+        rce->f_code        = s->c.f_code;
+        rce->b_code        = s->c.b_code;
         rce->misc_bits     = 1;
 
         bits = predict_size(&rcc->pred[pict_type], rce->qscale, sqrt(var));
         if (pict_type == AV_PICTURE_TYPE_I) {
-            rce->i_count    = s->mb_num;
+            rce->i_count    = s->c.mb_num;
             rce->i_tex_bits = bits;
             rce->p_tex_bits = 0;
             rce->mv_bits    = 0;
@@ -1052,8 +1051,8 @@ float ff_rate_estimate_qscale(MPVMainEncContext *const m, int dry_run)
         av_assert0(q > 0.0);
     }
 
-    if (s->avctx->debug & FF_DEBUG_RC) {
-        av_log(s->avctx, AV_LOG_DEBUG,
+    if (s->c.avctx->debug & FF_DEBUG_RC) {
+        av_log(s->c.avctx, AV_LOG_DEBUG,
                "%c qp:%d<%2.1f<%d %d want:%"PRId64" total:%"PRId64" comp:%f st_q:%2.2f "
                "size:%d var:%"PRId64"/%"PRId64" br:%"PRId64" fps:%d\n",
                av_get_picture_type_char(pict_type),
diff --git a/libavcodec/riscv/me_cmp_init.c b/libavcodec/riscv/me_cmp_init.c
index f246e55cb1..f22dbff7f4 100644
--- a/libavcodec/riscv/me_cmp_init.c
+++ b/libavcodec/riscv/me_cmp_init.c
@@ -24,55 +24,55 @@
 #include "libavutil/cpu.h"
 #include "libavutil/riscv/cpu.h"
 #include "libavcodec/me_cmp.h"
-#include "libavcodec/mpegvideo.h"
+#include "libavcodec/mpegvideoenc.h"
 
-int ff_pix_abs16_rvv(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_pix_abs16_rvv(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                               ptrdiff_t stride, int h);
-int ff_pix_abs8_rvv(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_pix_abs8_rvv(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                              ptrdiff_t stride, int h);
-int ff_pix_abs16_x2_rvv(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_pix_abs16_x2_rvv(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                          ptrdiff_t stride, int h);
-int ff_pix_abs8_x2_rvv(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_pix_abs8_x2_rvv(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                          ptrdiff_t stride, int h);
-int ff_pix_abs16_y2_rvv(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_pix_abs16_y2_rvv(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                           ptrdiff_t stride, int h);
-int ff_pix_abs8_y2_rvv(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_pix_abs8_y2_rvv(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                           ptrdiff_t stride, int h);
 
-int ff_sse16_rvv(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_sse16_rvv(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                    ptrdiff_t stride, int h);
-int ff_sse8_rvv(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_sse8_rvv(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                    ptrdiff_t stride, int h);
-int ff_sse4_rvv(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_sse4_rvv(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                    ptrdiff_t stride, int h);
 
-int ff_vsse16_rvv(MpegEncContext *c, const uint8_t *s1, const uint8_t *s2, ptrdiff_t stride, int h);
-int ff_vsse8_rvv(MpegEncContext *c, const uint8_t *s1, const uint8_t *s2, ptrdiff_t stride, int h);
-int ff_vsse_intra16_rvv(MpegEncContext *c, const uint8_t *s, const uint8_t *dummy, ptrdiff_t stride, int h);
-int ff_vsse_intra8_rvv(MpegEncContext *c, const uint8_t *s, const uint8_t *dummy, ptrdiff_t stride, int h);
-int ff_vsad16_rvv(MpegEncContext *c, const uint8_t *s1, const uint8_t *s2, ptrdiff_t stride, int h);
-int ff_vsad8_rvv(MpegEncContext *c, const uint8_t *s1, const uint8_t *s2, ptrdiff_t stride, int h);
-int ff_vsad_intra16_rvv(MpegEncContext *c, const uint8_t *s, const uint8_t *dummy, ptrdiff_t stride, int h);
-int ff_vsad_intra8_rvv(MpegEncContext *c, const uint8_t *s, const uint8_t *dummy, ptrdiff_t stride, int h);
+int ff_vsse16_rvv(MPVEncContext *c, const uint8_t *s1, const uint8_t *s2, ptrdiff_t stride, int h);
+int ff_vsse8_rvv(MPVEncContext *c, const uint8_t *s1, const uint8_t *s2, ptrdiff_t stride, int h);
+int ff_vsse_intra16_rvv(MPVEncContext *c, const uint8_t *s, const uint8_t *dummy, ptrdiff_t stride, int h);
+int ff_vsse_intra8_rvv(MPVEncContext *c, const uint8_t *s, const uint8_t *dummy, ptrdiff_t stride, int h);
+int ff_vsad16_rvv(MPVEncContext *c, const uint8_t *s1, const uint8_t *s2, ptrdiff_t stride, int h);
+int ff_vsad8_rvv(MPVEncContext *c, const uint8_t *s1, const uint8_t *s2, ptrdiff_t stride, int h);
+int ff_vsad_intra16_rvv(MPVEncContext *c, const uint8_t *s, const uint8_t *dummy, ptrdiff_t stride, int h);
+int ff_vsad_intra8_rvv(MPVEncContext *c, const uint8_t *s, const uint8_t *dummy, ptrdiff_t stride, int h);
 int ff_nsse16_rvv(int multiplier, const uint8_t *s1, const uint8_t *s2,
                     ptrdiff_t stride, int h);
 int ff_nsse8_rvv(int multiplier, const uint8_t *s1, const uint8_t *s2,
                     ptrdiff_t stride, int h);
 
-static int nsse16_rvv_wrapper(MpegEncContext *c, const uint8_t *s1, const uint8_t *s2,
+static int nsse16_rvv_wrapper(MPVEncContext *c, const uint8_t *s1, const uint8_t *s2,
                         ptrdiff_t stride, int h)
 {
     if (c)
-        return ff_nsse16_rvv(c->avctx->nsse_weight, s1, s2, stride, h);
+        return ff_nsse16_rvv(c->c.avctx->nsse_weight, s1, s2, stride, h);
     else
         return ff_nsse16_rvv(8, s1, s2, stride, h);
 }
 
-static int nsse8_rvv_wrapper(MpegEncContext *c, const uint8_t *s1, const uint8_t *s2,
+static int nsse8_rvv_wrapper(MPVEncContext *c, const uint8_t *s1, const uint8_t *s2,
                         ptrdiff_t stride, int h)
 {
     if (c)
-        return ff_nsse8_rvv(c->avctx->nsse_weight, s1, s2, stride, h);
+        return ff_nsse8_rvv(c->c.avctx->nsse_weight, s1, s2, stride, h);
     else
         return ff_nsse8_rvv(8, s1, s2, stride, h);
 }
diff --git a/libavcodec/rv10enc.c b/libavcodec/rv10enc.c
index 0b5065212d..984fe3379d 100644
--- a/libavcodec/rv10enc.c
+++ b/libavcodec/rv10enc.c
@@ -33,33 +33,33 @@
 
 int ff_rv10_encode_picture_header(MPVMainEncContext *const m)
 {
-    MpegEncContext *const s = &m->s;
+    MPVEncContext *const s = &m->s;
     int full_frame= 0;
 
     align_put_bits(&s->pb);
 
     put_bits(&s->pb, 1, 1);     /* marker */
 
-    put_bits(&s->pb, 1, (s->pict_type == AV_PICTURE_TYPE_P));
+    put_bits(&s->pb, 1, (s->c.pict_type == AV_PICTURE_TYPE_P));
 
     put_bits(&s->pb, 1, 0);     /* not PB-mframe */
 
-    put_bits(&s->pb, 5, s->qscale);
+    put_bits(&s->pb, 5, s->c.qscale);
 
-    if (s->pict_type == AV_PICTURE_TYPE_I) {
+    if (s->c.pict_type == AV_PICTURE_TYPE_I) {
         /* specific MPEG like DC coding not used */
     }
     /* if multiple packets per frame are sent, the position at which
        to display the macroblocks is coded here */
     if(!full_frame){
-        if (s->mb_width * s->mb_height >= (1U << 12)) {
-            avpriv_report_missing_feature(s->avctx, "Encoding frames with %d (>= 4096) macroblocks",
-                                          s->mb_width * s->mb_height);
+        if (s->c.mb_width * s->c.mb_height >= (1U << 12)) {
+            avpriv_report_missing_feature(s->c.avctx, "Encoding frames with %d (>= 4096) macroblocks",
+                                          s->c.mb_width * s->c.mb_height);
             return AVERROR(ENOSYS);
         }
         put_bits(&s->pb, 6, 0); /* mb_x */
         put_bits(&s->pb, 6, 0); /* mb_y */
-        put_bits(&s->pb, 12, s->mb_width * s->mb_height);
+        put_bits(&s->pb, 12, s->c.mb_width * s->c.mb_height);
     }
 
     put_bits(&s->pb, 3, 0);     /* ignored */
diff --git a/libavcodec/rv20enc.c b/libavcodec/rv20enc.c
index 1a59fd4c70..dea89c8be4 100644
--- a/libavcodec/rv20enc.c
+++ b/libavcodec/rv20enc.c
@@ -36,32 +36,32 @@
 
 int ff_rv20_encode_picture_header(MPVMainEncContext *const m)
 {
-    MpegEncContext *const s = &m->s;
+    MPVEncContext *const s = &m->s;
 
-    put_bits(&s->pb, 2, s->pict_type); //I 0 vs. 1 ?
+    put_bits(&s->pb, 2, s->c.pict_type); //I 0 vs. 1 ?
     put_bits(&s->pb, 1, 0);     /* unknown bit */
-    put_bits(&s->pb, 5, s->qscale);
+    put_bits(&s->pb, 5, s->c.qscale);
 
-    put_sbits(&s->pb, 8, s->picture_number); //FIXME wrong, but correct is not known
-    s->mb_x= s->mb_y= 0;
+    put_sbits(&s->pb, 8, s->c.picture_number); //FIXME wrong, but correct is not known
+    s->c.mb_x = s->c.mb_y = 0;
     ff_h263_encode_mba(s);
 
-    put_bits(&s->pb, 1, s->no_rounding);
+    put_bits(&s->pb, 1, s->c.no_rounding);
 
-    av_assert0(s->f_code == 1);
-    av_assert0(s->unrestricted_mv == 0);
-    av_assert0(s->alt_inter_vlc == 0);
-    av_assert0(s->umvplus == 0);
-    av_assert0(s->modified_quant==1);
-    av_assert0(s->loop_filter==1);
+    av_assert0(s->c.f_code == 1);
+    av_assert0(s->c.unrestricted_mv == 0);
+    av_assert0(s->c.alt_inter_vlc == 0);
+    av_assert0(s->c.umvplus == 0);
+    av_assert0(s->c.modified_quant==1);
+    av_assert0(s->c.loop_filter==1);
 
-    s->h263_aic= s->pict_type == AV_PICTURE_TYPE_I;
-    if(s->h263_aic){
-        s->y_dc_scale_table=
-        s->c_dc_scale_table= ff_aic_dc_scale_table;
+    s->c.h263_aic= s->c.pict_type == AV_PICTURE_TYPE_I;
+    if(s->c.h263_aic){
+        s->c.y_dc_scale_table =
+        s->c.c_dc_scale_table = ff_aic_dc_scale_table;
     }else{
-        s->y_dc_scale_table=
-        s->c_dc_scale_table= ff_mpeg1_dc_scale_table;
+        s->c.y_dc_scale_table =
+        s->c.c_dc_scale_table = ff_mpeg1_dc_scale_table;
     }
     return 0;
 }
diff --git a/libavcodec/snow_dwt.c b/libavcodec/snow_dwt.c
index 1250597ee0..0f2b86e2cc 100644
--- a/libavcodec/snow_dwt.c
+++ b/libavcodec/snow_dwt.c
@@ -741,7 +741,7 @@ void ff_spatial_idwt(IDWTELEM *buffer, IDWTELEM *temp, int width, int height,
                               decomposition_count, y);
 }
 
-static inline int w_c(struct MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2, ptrdiff_t line_size,
+static inline int w_c(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2, ptrdiff_t line_size,
                       int w, int h, int type)
 {
     int s, i, j;
@@ -810,32 +810,32 @@ static inline int w_c(struct MpegEncContext *v, const uint8_t *pix1, const uint8
     return s >> 9;
 }
 
-static int w53_8_c(struct MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2, ptrdiff_t line_size, int h)
+static int w53_8_c(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2, ptrdiff_t line_size, int h)
 {
     return w_c(v, pix1, pix2, line_size, 8, h, 1);
 }
 
-static int w97_8_c(struct MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2, ptrdiff_t line_size, int h)
+static int w97_8_c(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2, ptrdiff_t line_size, int h)
 {
     return w_c(v, pix1, pix2, line_size, 8, h, 0);
 }
 
-static int w53_16_c(struct MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2, ptrdiff_t line_size, int h)
+static int w53_16_c(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2, ptrdiff_t line_size, int h)
 {
     return w_c(v, pix1, pix2, line_size, 16, h, 1);
 }
 
-static int w97_16_c(struct MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2, ptrdiff_t line_size, int h)
+static int w97_16_c(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2, ptrdiff_t line_size, int h)
 {
     return w_c(v, pix1, pix2, line_size, 16, h, 0);
 }
 
-int ff_w53_32_c(struct MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2, ptrdiff_t line_size, int h)
+int ff_w53_32_c(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2, ptrdiff_t line_size, int h)
 {
     return w_c(v, pix1, pix2, line_size, 32, h, 1);
 }
 
-int ff_w97_32_c(struct MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2, ptrdiff_t line_size, int h)
+int ff_w97_32_c(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2, ptrdiff_t line_size, int h)
 {
     return w_c(v, pix1, pix2, line_size, 32, h, 0);
 }
diff --git a/libavcodec/snow_dwt.h b/libavcodec/snow_dwt.h
index 6e7d22c71a..b5803bc99a 100644
--- a/libavcodec/snow_dwt.h
+++ b/libavcodec/snow_dwt.h
@@ -26,7 +26,7 @@
 
 #include "libavutil/attributes.h"
 
-struct MpegEncContext;
+typedef struct MPVEncContext MPVEncContext;
 
 typedef int DWTELEM;
 typedef short IDWTELEM;
@@ -144,8 +144,8 @@ void ff_snow_inner_add_yblock(const uint8_t *obmc, const int obmc_stride,
                               int src_y, int src_stride, slice_buffer *sb,
                               int add, uint8_t *dst8);
 
-int ff_w53_32_c(struct MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2, ptrdiff_t line_size, int h);
-int ff_w97_32_c(struct MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2, ptrdiff_t line_size, int h);
+int ff_w53_32_c(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2, ptrdiff_t line_size, int h);
+int ff_w97_32_c(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2, ptrdiff_t line_size, int h);
 
 void ff_spatial_dwt(int *buffer, int *temp, int width, int height, int stride,
                     int type, int decomposition_count);
diff --git a/libavcodec/snowenc.c b/libavcodec/snowenc.c
index fc2a56a808..f9b10f7dab 100644
--- a/libavcodec/snowenc.c
+++ b/libavcodec/snowenc.c
@@ -61,7 +61,7 @@ typedef struct SnowEncContext {
     int scenechange_threshold;
 
     MECmpContext mecc;
-    MPVMainEncContext m; // needed for motion estimation, should not be used for anything else, the idea is to eventually make the motion estimation independent of MpegEncContext, so this will be removed then (FIXME/XXX)
+    MPVMainEncContext m; // needed for motion estimation, should not be used for anything else, the idea is to eventually make the motion estimation independent of MPVEncContext, so this will be removed then (FIXME/XXX)
     MPVPicture cur_pic, last_pic;
 #define ME_CACHE_SIZE 1024
     unsigned me_cache[ME_CACHE_SIZE];
@@ -160,7 +160,7 @@ static av_cold int encode_init(AVCodecContext *avctx)
 {
     SnowEncContext *const enc = avctx->priv_data;
     SnowContext *const s = &enc->com;
-    MpegEncContext *const mpv = &enc->m.s;
+    MPVEncContext *const mpv = &enc->m.s;
     int plane_index, ret;
     int i;
 
@@ -217,7 +217,7 @@ static av_cold int encode_init(AVCodecContext *avctx)
     mcf(12,12)
 
     ff_me_cmp_init(&enc->mecc, avctx);
-    ret = ff_me_init(&mpv->me, avctx, &enc->mecc, 0);
+    ret = ff_me_init(&mpv->c.me, avctx, &enc->mecc, 0);
     if (ret < 0)
         return ret;
     ff_mpegvideoencdsp_init(&enc->mpvencdsp, avctx);
@@ -226,21 +226,21 @@ static av_cold int encode_init(AVCodecContext *avctx)
 
     s->version=0;
 
-    mpv->avctx   = avctx;
+    mpv->c.avctx   = avctx;
     enc->m.bit_rate = avctx->bit_rate;
     enc->m.lmin  = avctx->mb_lmin;
     enc->m.lmax  = avctx->mb_lmax;
-    mpv->mb_num  = (avctx->width * avctx->height + 255) / 256; // For ratecontrol
+    mpv->c.mb_num  = (avctx->width * avctx->height + 255) / 256; // For ratecontrol
 
-    mpv->me.temp      =
-    mpv->me.scratchpad = av_calloc(avctx->width + 64, 2*16*2*sizeof(uint8_t));
-    mpv->sc.obmc_scratchpad= av_mallocz(MB_SIZE*MB_SIZE*12*sizeof(uint32_t));
-    mpv->me.map       = av_mallocz(2 * ME_MAP_SIZE * sizeof(*mpv->me.map));
-    if (!mpv->me.scratchpad || !mpv->me.map || !mpv->sc.obmc_scratchpad)
+    mpv->c.me.temp      =
+    mpv->c.me.scratchpad = av_calloc(avctx->width + 64, 2*16*2*sizeof(uint8_t));
+    mpv->c.sc.obmc_scratchpad= av_mallocz(MB_SIZE*MB_SIZE*12*sizeof(uint32_t));
+    mpv->c.me.map       = av_mallocz(2 * ME_MAP_SIZE * sizeof(*mpv->c.me.map));
+    if (!mpv->c.me.scratchpad || !mpv->c.me.map || !mpv->c.sc.obmc_scratchpad)
         return AVERROR(ENOMEM);
-    mpv->me.score_map = mpv->me.map + ME_MAP_SIZE;
+    mpv->c.me.score_map = mpv->c.me.map + ME_MAP_SIZE;
 
-    mpv->me.mv_penalty = ff_h263_get_mv_penalty();
+    mpv->c.me.mv_penalty = ff_h263_get_mv_penalty();
 
     s->max_ref_frames = av_clip(avctx->refs, 1, MAX_REF_FRAMES);
 
@@ -369,7 +369,7 @@ static inline int get_penalty_factor(int lambda, int lambda2, int type){
 static int encode_q_branch(SnowEncContext *enc, int level, int x, int y)
 {
     SnowContext      *const s = &enc->com;
-    MotionEstContext *const c = &enc->m.s.me;
+    MotionEstContext *const c = &enc->m.s.c.me;
     uint8_t p_buffer[1024];
     uint8_t i_buffer[1024];
     uint8_t p_state[sizeof(s->block_state)];
@@ -435,9 +435,9 @@ static int encode_q_branch(SnowEncContext *enc, int level, int x, int y)
     last_mv[2][0]= bottom->mx;
     last_mv[2][1]= bottom->my;
 
-    enc->m.s.mb_stride = 2;
-    enc->m.s.mb_x =
-    enc->m.s.mb_y = 0;
+    enc->m.s.c.mb_stride = 2;
+    enc->m.s.c.mb_x =
+    enc->m.s.c.mb_y = 0;
     c->skip= 0;
 
     av_assert1(c->  stride ==   stride);
@@ -446,7 +446,7 @@ static int encode_q_branch(SnowEncContext *enc, int level, int x, int y)
     c->penalty_factor    = get_penalty_factor(enc->lambda, enc->lambda2, c->avctx->me_cmp);
     c->sub_penalty_factor= get_penalty_factor(enc->lambda, enc->lambda2, c->avctx->me_sub_cmp);
     c->mb_penalty_factor = get_penalty_factor(enc->lambda, enc->lambda2, c->avctx->mb_cmp);
-    c->current_mv_penalty = c->mv_penalty[enc->m.s.f_code=1] + MAX_DMV;
+    c->current_mv_penalty = c->mv_penalty[enc->m.s.c.f_code=1] + MAX_DMV;
 
     c->xmin = - x*block_w - 16+3;
     c->ymin = - y*block_w - 16+3;
@@ -569,7 +569,7 @@ static int encode_q_branch(SnowEncContext *enc, int level, int x, int y)
         if (vard <= 64 || vard < varc)
             c->scene_change_score+= ff_sqrt(vard) - ff_sqrt(varc);
         else
-            c->scene_change_score += enc->m.s.qscale;
+            c->scene_change_score += enc->m.s.c.qscale;
     }
 
     if(level!=s->block_max_depth){
@@ -672,7 +672,7 @@ static int get_dc(SnowEncContext *enc, int mb_x, int mb_y, int plane_index)
     const int obmc_stride= plane_index ? (2*block_size)>>s->chroma_h_shift : 2*block_size;
     const int ref_stride= s->current_picture->linesize[plane_index];
     const uint8_t *src = s->input_picture->data[plane_index];
-    IDWTELEM *dst= (IDWTELEM*)enc->m.s.sc.obmc_scratchpad + plane_index*block_size*block_size*4; //FIXME change to unsigned
+    IDWTELEM *dst= (IDWTELEM*)enc->m.s.c.sc.obmc_scratchpad + plane_index*block_size*block_size*4; //FIXME change to unsigned
     const int b_stride = s->b_width << s->block_max_depth;
     const int w= p->width;
     const int h= p->height;
@@ -770,7 +770,7 @@ static int get_block_rd(SnowEncContext *enc, int mb_x, int mb_y,
     const int ref_stride= s->current_picture->linesize[plane_index];
     uint8_t *dst= s->current_picture->data[plane_index];
     const uint8_t *src = s->input_picture->data[plane_index];
-    IDWTELEM *pred= (IDWTELEM*)enc->m.s.sc.obmc_scratchpad + plane_index*block_size*block_size*4;
+    IDWTELEM *pred= (IDWTELEM*)enc->m.s.c.sc.obmc_scratchpad + plane_index*block_size*block_size*4;
     uint8_t *cur = s->scratchbuf;
     uint8_t *tmp = s->emu_edge_buffer;
     const int b_stride = s->b_width << s->block_max_depth;
@@ -840,12 +840,12 @@ static int get_block_rd(SnowEncContext *enc, int mb_x, int mb_y,
             distortion = 0;
             for(i=0; i<4; i++){
                 int off = sx+16*(i&1) + (sy+16*(i>>1))*ref_stride;
-                distortion += enc->m.s.me.me_cmp[0](&enc->m.s, src + off, dst + off, ref_stride, 16);
+                distortion += enc->m.s.c.me.me_cmp[0](&enc->m.s, src + off, dst + off, ref_stride, 16);
             }
         }
     }else{
         av_assert2(block_w==8);
-        distortion = enc->m.s.me.me_cmp[0](&enc->m.s, src + sx + sy*ref_stride, dst + sx + sy*ref_stride, ref_stride, block_w*2);
+        distortion = enc->m.s.c.me.me_cmp[0](&enc->m.s, src + sx + sy*ref_stride, dst + sx + sy*ref_stride, ref_stride, block_w*2);
     }
 
     if(plane_index==0){
@@ -911,7 +911,7 @@ static int get_4block_rd(SnowEncContext *enc, int mb_x, int mb_y, int plane_inde
         }
 
         av_assert1(block_w== 8 || block_w==16);
-        distortion += enc->m.s.me.me_cmp[block_w==8](&enc->m.s, src + x + y*ref_stride, dst + x + y*ref_stride, ref_stride, block_h);
+        distortion += enc->m.s.c.me.me_cmp[block_w==8](&enc->m.s, src + x + y*ref_stride, dst + x + y*ref_stride, ref_stride, block_h);
     }
 
     if(plane_index==0){
@@ -1759,7 +1759,7 @@ static int encode_frame(AVCodecContext *avctx, AVPacket *pkt,
 {
     SnowEncContext *const enc = avctx->priv_data;
     SnowContext *const s = &enc->com;
-    MpegEncContext *const mpv = &enc->m.s;
+    MPVEncContext *const mpv = &enc->m.s;
     RangeCoder * const c= &s->c;
     AVCodecInternal *avci = avctx->internal;
     AVFrame *pic;
@@ -1793,9 +1793,9 @@ static int encode_frame(AVCodecContext *avctx, AVPacket *pkt,
     pic->pict_type = pict->pict_type;
     pic->quality = pict->quality;
 
-    mpv->picture_number = avctx->frame_num;
+    mpv->c.picture_number = avctx->frame_num;
     if(avctx->flags&AV_CODEC_FLAG_PASS2){
-        mpv->pict_type = pic->pict_type = enc->m.rc_context.entry[avctx->frame_num].new_pict_type;
+        mpv->c.pict_type = pic->pict_type = enc->m.rc_context.entry[avctx->frame_num].new_pict_type;
         s->keyframe = pic->pict_type == AV_PICTURE_TYPE_I;
         if(!(avctx->flags&AV_CODEC_FLAG_QSCALE)) {
             pic->quality = ff_rate_estimate_qscale(&enc->m, 0);
@@ -1804,7 +1804,7 @@ static int encode_frame(AVCodecContext *avctx, AVPacket *pkt,
         }
     }else{
         s->keyframe= avctx->gop_size==0 || avctx->frame_num % avctx->gop_size == 0;
-        mpv->pict_type = pic->pict_type = s->keyframe ? AV_PICTURE_TYPE_I : AV_PICTURE_TYPE_P;
+        mpv->c.pict_type = pic->pict_type = s->keyframe ? AV_PICTURE_TYPE_I : AV_PICTURE_TYPE_P;
     }
 
     if (enc->pass1_rc && avctx->frame_num == 0)
@@ -1841,9 +1841,9 @@ static int encode_frame(AVCodecContext *avctx, AVPacket *pkt,
     if (ret < 0)
         return ret;
 
-    mpv->cur_pic.ptr         = &enc->cur_pic;
-    mpv->cur_pic.ptr->f      = s->current_picture;
-    mpv->cur_pic.ptr->f->pts = pict->pts;
+    mpv->c.cur_pic.ptr         = &enc->cur_pic;
+    mpv->c.cur_pic.ptr->f      = s->current_picture;
+    mpv->c.cur_pic.ptr->f->pts = pict->pts;
     if(pic->pict_type == AV_PICTURE_TYPE_P){
         int block_width = (width +15)>>4;
         int block_height= (height+15)>>4;
@@ -1852,35 +1852,35 @@ static int encode_frame(AVCodecContext *avctx, AVPacket *pkt,
         av_assert0(s->current_picture->data[0]);
         av_assert0(s->last_picture[0]->data[0]);
 
-        mpv->avctx = s->avctx;
-        mpv->last_pic.ptr    = &enc->last_pic;
-        mpv->last_pic.ptr->f = s->last_picture[0];
+        mpv->c.avctx = s->avctx;
+        mpv->c.last_pic.ptr    = &enc->last_pic;
+        mpv->c.last_pic.ptr->f = s->last_picture[0];
         mpv-> new_pic     = s->input_picture;
-        mpv->linesize   = stride;
-        mpv->uvlinesize = s->current_picture->linesize[1];
-        mpv->width      = width;
-        mpv->height     = height;
-        mpv->mb_width   = block_width;
-        mpv->mb_height  = block_height;
-        mpv->mb_stride  =     mpv->mb_width + 1;
-        mpv->b8_stride  = 2 * mpv->mb_width + 1;
-        mpv->f_code     = 1;
-        mpv->pict_type  = pic->pict_type;
-        mpv->me.motion_est = enc->motion_est;
-        mpv->me.scene_change_score = 0;
-        mpv->me.dia_size = avctx->dia_size;
-        mpv->quarter_sample  = (s->avctx->flags & AV_CODEC_FLAG_QPEL)!=0;
-        mpv->out_format      = FMT_H263;
-        mpv->unrestricted_mv = 1;
-
-        mpv->lambda = enc->lambda;
-        mpv->qscale = (mpv->lambda*139 + FF_LAMBDA_SCALE*64) >> (FF_LAMBDA_SHIFT + 7);
-        enc->lambda2  = mpv->lambda2 = (mpv->lambda*mpv->lambda + FF_LAMBDA_SCALE/2) >> FF_LAMBDA_SHIFT;
-
-        mpv->qdsp = enc->qdsp; //move
-        mpv->hdsp = s->hdsp;
+        mpv->c.linesize   = stride;
+        mpv->c.uvlinesize = s->current_picture->linesize[1];
+        mpv->c.width      = width;
+        mpv->c.height     = height;
+        mpv->c.mb_width   = block_width;
+        mpv->c.mb_height  = block_height;
+        mpv->c.mb_stride  =     mpv->c.mb_width + 1;
+        mpv->c.b8_stride  = 2 * mpv->c.mb_width + 1;
+        mpv->c.f_code     = 1;
+        mpv->c.pict_type  = pic->pict_type;
+        mpv->c.me.motion_est = enc->motion_est;
+        mpv->c.me.scene_change_score = 0;
+        mpv->c.me.dia_size = avctx->dia_size;
+        mpv->c.quarter_sample  = (s->avctx->flags & AV_CODEC_FLAG_QPEL)!=0;
+        mpv->c.out_format      = FMT_H263;
+        mpv->c.unrestricted_mv = 1;
+
+        mpv->c.lambda = enc->lambda;
+        mpv->c.qscale = (mpv->c.lambda*139 + FF_LAMBDA_SCALE*64) >> (FF_LAMBDA_SHIFT + 7);
+        enc->lambda2  = mpv->c.lambda2 = (mpv->c.lambda*mpv->c.lambda + FF_LAMBDA_SCALE/2) >> FF_LAMBDA_SHIFT;
+
+        mpv->c.qdsp = enc->qdsp; //move
+        mpv->c.hdsp = s->hdsp;
         ff_me_init_pic(mpv);
-        s->hdsp = mpv->hdsp;
+        s->hdsp = mpv->c.hdsp;
     }
 
     if (enc->pass1_rc) {
@@ -1901,7 +1901,7 @@ redo_frame:
         return AVERROR(EINVAL);
     }
 
-    mpv->pict_type = pic->pict_type;
+    mpv->c.pict_type = pic->pict_type;
     s->qbias = pic->pict_type == AV_PICTURE_TYPE_P ? 2 : 0;
 
     ff_snow_common_init_after_header(avctx);
@@ -1937,7 +1937,7 @@ redo_frame:
             if(   plane_index==0
                && pic->pict_type == AV_PICTURE_TYPE_P
                && !(avctx->flags&AV_CODEC_FLAG_PASS2)
-               && mpv->me.scene_change_score > enc->scenechange_threshold) {
+               && mpv->c.me.scene_change_score > enc->scenechange_threshold) {
                 ff_init_range_encoder(c, pkt->data, pkt->size);
                 ff_build_rac_states(c, (1LL<<32)/20, 256-8);
                 pic->pict_type= AV_PICTURE_TYPE_I;
@@ -2058,7 +2058,7 @@ redo_frame:
     }
     if(avctx->flags&AV_CODEC_FLAG_PASS1)
         ff_write_pass1_stats(&enc->m);
-    enc->m.last_pict_type = mpv->pict_type;
+    enc->m.last_pict_type = mpv->c.pict_type;
 
     emms_c();
 
@@ -2092,10 +2092,10 @@ static av_cold int encode_end(AVCodecContext *avctx)
         av_freep(&s->ref_scores[i]);
     }
 
-    enc->m.s.me.temp = NULL;
-    av_freep(&enc->m.s.me.scratchpad);
-    av_freep(&enc->m.s.me.map);
-    av_freep(&enc->m.s.sc.obmc_scratchpad);
+    enc->m.s.c.me.temp = NULL;
+    av_freep(&enc->m.s.c.me.scratchpad);
+    av_freep(&enc->m.s.c.me.map);
+    av_freep(&enc->m.s.c.sc.obmc_scratchpad);
 
     av_freep(&avctx->stats_out);
 
diff --git a/libavcodec/speedhqenc.c b/libavcodec/speedhqenc.c
index 1e28702f18..7ddfb92076 100644
--- a/libavcodec/speedhqenc.c
+++ b/libavcodec/speedhqenc.c
@@ -98,9 +98,9 @@ static av_cold void speedhq_init_static_data(void)
 static int speedhq_encode_picture_header(MPVMainEncContext *const m)
 {
     SpeedHQEncContext *const ctx = (SpeedHQEncContext*)m;
-    MpegEncContext *const s = &m->s;
+    MPVEncContext *const s = &m->s;
 
-    put_bits_le(&s->pb, 8, 100 - s->qscale * 2);  /* FIXME why doubled */
+    put_bits_le(&s->pb, 8, 100 - s->c.qscale * 2);  /* FIXME why doubled */
     put_bits_le(&s->pb, 24, 4);  /* no second field */
 
     ctx->slice_start = 4;
@@ -110,7 +110,7 @@ static int speedhq_encode_picture_header(MPVMainEncContext *const m)
     return 0;
 }
 
-void ff_speedhq_end_slice(MpegEncContext *s)
+void ff_speedhq_end_slice(MPVEncContext *const s)
 {
     SpeedHQEncContext *ctx = (SpeedHQEncContext*)s;
     int slice_len;
@@ -158,7 +158,7 @@ static inline void encode_dc(PutBitContext *pb, int diff, int component)
     }
 }
 
-static void encode_block(MpegEncContext *s, int16_t *block, int n)
+static void encode_block(MPVEncContext *const s, const int16_t block[], int n)
 {
     int alevel, level, last_non_zero, dc, i, j, run, last_index, sign;
     int code;
@@ -167,16 +167,16 @@ static void encode_block(MpegEncContext *s, int16_t *block, int n)
     /* DC coef */
     component = (n <= 3 ? 0 : (n&1) + 1);
     dc = block[0]; /* overflow is impossible */
-    val = s->last_dc[component] - dc;  /* opposite of most codecs */
+    val = s->c.last_dc[component] - dc;  /* opposite of most codecs */
     encode_dc(&s->pb, val, component);
-    s->last_dc[component] = dc;
+    s->c.last_dc[component] = dc;
 
     /* now quantify & encode AC coefs */
     last_non_zero = 0;
-    last_index = s->block_last_index[n];
+    last_index = s->c.block_last_index[n];
 
     for (i = 1; i <= last_index; i++) {
-        j     = s->intra_scantable.permutated[i];
+        j     = s->c.intra_scantable.permutated[i];
         level = block[j];
 
         /* encode using VLC */
@@ -207,14 +207,14 @@ static void encode_block(MpegEncContext *s, int16_t *block, int n)
     put_bits_le(&s->pb, 4, 6);
 }
 
-static void speedhq_encode_mb(MpegEncContext *const s, int16_t block[12][64],
+static void speedhq_encode_mb(MPVEncContext *const s, int16_t block[12][64],
                               int unused_x, int unused_y)
 {
     int i;
     for(i=0;i<6;i++) {
         encode_block(s, block[i], i);
     }
-    if (s->chroma_format == CHROMA_444) {
+    if (s->c.chroma_format == CHROMA_444) {
         encode_block(s, block[8], 8);
         encode_block(s, block[9], 9);
 
@@ -223,7 +223,7 @@ static void speedhq_encode_mb(MpegEncContext *const s, int16_t block[12][64],
 
         encode_block(s, block[10], 10);
         encode_block(s, block[11], 11);
-    } else if (s->chroma_format == CHROMA_422) {
+    } else if (s->c.chroma_format == CHROMA_422) {
         encode_block(s, block[6], 6);
         encode_block(s, block[7], 7);
     }
@@ -235,7 +235,7 @@ static av_cold int speedhq_encode_init(AVCodecContext *avctx)
 {
     static AVOnce init_static_once = AV_ONCE_INIT;
     MPVMainEncContext *const m = avctx->priv_data;
-    MpegEncContext *const s = &m->s;
+    MPVEncContext *const s = &m->s;
     int ret;
 
     if (avctx->width > 65500 || avctx->height > 65500) {
@@ -274,8 +274,8 @@ static av_cold int speedhq_encode_init(AVCodecContext *avctx)
     s->intra_chroma_ac_vlc_length      =
     s->intra_chroma_ac_vlc_last_length = uni_speedhq_ac_vlc_len;
 
-    s->y_dc_scale_table =
-    s->c_dc_scale_table = ff_mpeg12_dc_scale_table[3];
+    s->c.y_dc_scale_table =
+    s->c.c_dc_scale_table = ff_mpeg12_dc_scale_table[3];
 
     ret = ff_mpv_encode_init(avctx);
     if (ret < 0)
diff --git a/libavcodec/speedhqenc.h b/libavcodec/speedhqenc.h
index e804ce714a..568f82c76e 100644
--- a/libavcodec/speedhqenc.h
+++ b/libavcodec/speedhqenc.h
@@ -29,11 +29,9 @@
 #ifndef AVCODEC_SPEEDHQENC_H
 #define AVCODEC_SPEEDHQENC_H
 
-#include <stdint.h>
+typedef struct MPVEncContext MPVEncContext;
 
-#include "mpegvideo.h"
-
-void ff_speedhq_end_slice(MpegEncContext *s);
+void ff_speedhq_end_slice(MPVEncContext *s);
 
 static inline int ff_speedhq_mb_rows_in_slice(int slice_num, int mb_height)
 {
diff --git a/libavcodec/svq1enc.c b/libavcodec/svq1enc.c
index 652dc11b03..ddc94dffe5 100644
--- a/libavcodec/svq1enc.c
+++ b/libavcodec/svq1enc.c
@@ -58,8 +58,8 @@
 typedef struct SVQ1EncContext {
     /* FIXME: Needed for motion estimation, should not be used for anything
      * else, the idea is to make the motion estimation eventually independent
-     * of MpegEncContext, so this will be removed then. */
-    MpegEncContext m;
+     * of MPVEncContext, so this will be removed then. */
+    MPVEncContext m;
     AVCodecContext *avctx;
     MECmpContext mecc;
     HpelDSPContext hdsp;
@@ -289,7 +289,8 @@ static int encode_block(SVQ1EncContext *s, uint8_t *src, uint8_t *ref,
     return best_score;
 }
 
-static void init_block_index(MpegEncContext *s){
+static void init_block_index(MpegEncContext *const s)
+{
     s->block_index[0]= s->b8_stride*(s->mb_y*2    )     + s->mb_x*2;
     s->block_index[1]= s->b8_stride*(s->mb_y*2    ) + 1 + s->mb_x*2;
     s->block_index[2]= s->b8_stride*(s->mb_y*2 + 1)     + s->mb_x*2;
@@ -305,6 +306,7 @@ static int svq1_encode_plane(SVQ1EncContext *s, int plane,
                              unsigned char *decoded_plane,
                              int width, int height, int src_stride, int stride)
 {
+    MpegEncContext *const s2 = &s->m.c;
     int x, y;
     int i;
     int block_width, block_height;
@@ -323,36 +325,36 @@ static int svq1_encode_plane(SVQ1EncContext *s, int plane,
     block_height = (height + 15) / 16;
 
     if (s->pict_type == AV_PICTURE_TYPE_P) {
-        s->m.avctx                         = s->avctx;
-        s->m.last_pic.data[0]        = ref_plane;
-        s->m.linesize                      =
-        s->m.last_pic.linesize[0]    =
+        s2->avctx                         = s->avctx;
+        s2->last_pic.data[0]        = ref_plane;
+        s2->linesize                      =
+        s2->last_pic.linesize[0]    =
         s->m.new_pic->linesize[0]      =
-        s->m.cur_pic.linesize[0] = stride;
-        s->m.width                         = width;
-        s->m.height                        = height;
-        s->m.mb_width                      = block_width;
-        s->m.mb_height                     = block_height;
-        s->m.mb_stride                     = s->m.mb_width + 1;
-        s->m.b8_stride                     = 2 * s->m.mb_width + 1;
-        s->m.f_code                        = 1;
-        s->m.pict_type                     = s->pict_type;
-        s->m.me.scene_change_score         = 0;
-        // s->m.out_format                    = FMT_H263;
-        // s->m.unrestricted_mv               = 1;
-        s->m.lambda                        = s->quality;
-        s->m.qscale                        = s->m.lambda * 139 +
+        s2->cur_pic.linesize[0] = stride;
+        s2->width                         = width;
+        s2->height                        = height;
+        s2->mb_width                      = block_width;
+        s2->mb_height                     = block_height;
+        s2->mb_stride                     = s2->mb_width + 1;
+        s2->b8_stride                     = 2 * s2->mb_width + 1;
+        s2->f_code                        = 1;
+        s2->pict_type                     = s->pict_type;
+        s2->me.scene_change_score         = 0;
+        // s2->out_format                    = FMT_H263;
+        // s2->unrestricted_mv               = 1;
+        s2->lambda                        = s->quality;
+        s2->qscale                        = s2->lambda * 139 +
                                              FF_LAMBDA_SCALE * 64 >>
                                              FF_LAMBDA_SHIFT + 7;
-        s->m.lambda2                       = s->m.lambda * s->m.lambda +
+        s2->lambda2                       = s2->lambda * s2->lambda +
                                              FF_LAMBDA_SCALE / 2 >>
                                              FF_LAMBDA_SHIFT;
 
         if (!s->motion_val8[plane]) {
-            s->motion_val8[plane]  = av_mallocz((s->m.b8_stride *
+            s->motion_val8[plane]  = av_mallocz((s2->b8_stride *
                                                  block_height * 2 + 2) *
                                                 2 * sizeof(int16_t));
-            s->motion_val16[plane] = av_mallocz((s->m.mb_stride *
+            s->motion_val16[plane] = av_mallocz((s2->mb_stride *
                                                  (block_height + 2) + 1) *
                                                 2 * sizeof(int16_t));
             if (!s->motion_val8[plane] || !s->motion_val16[plane])
@@ -365,18 +367,18 @@ static int svq1_encode_plane(SVQ1EncContext *s, int plane,
         s->m.mb_mean   = (uint8_t *)s->dummy;
         s->m.mb_var    = (uint16_t *)s->dummy;
         s->m.mc_mb_var = (uint16_t *)s->dummy;
-        s->m.cur_pic.mb_type = s->dummy;
+        s2->cur_pic.mb_type = s->dummy;
 
-        s->m.cur_pic.motion_val[0]   = s->motion_val8[plane] + 2;
-        s->m.p_mv_table                      = s->motion_val16[plane] +
-                                               s->m.mb_stride + 1;
+        s2->cur_pic.motion_val[0]   = s->motion_val8[plane] + 2;
+        s->m.p_mv_table             = s->motion_val16[plane] +
+                                             s2->mb_stride + 1;
         ff_me_init_pic(&s->m);
 
-        s->m.me.dia_size      = s->avctx->dia_size;
-        s->m.first_slice_line = 1;
+        s2->me.dia_size      = s->avctx->dia_size;
+        s2->first_slice_line = 1;
         for (y = 0; y < block_height; y++) {
             s->m.new_pic->data[0]  = src - y * 16 * stride; // ugly
-            s->m.mb_y                  = y;
+            s2->mb_y                  = y;
 
             for (i = 0; i < 16 && i + 16 * y < height; i++) {
                 memcpy(&src[i * stride], &src_plane[(i + 16 * y) * src_stride],
@@ -389,20 +391,20 @@ static int svq1_encode_plane(SVQ1EncContext *s, int plane,
                        16 * block_width);
 
             for (x = 0; x < block_width; x++) {
-                s->m.mb_x = x;
-                init_block_index(&s->m);
+                s2->mb_x = x;
+                init_block_index(s2);
 
                 ff_estimate_p_frame_motion(&s->m, x, y);
             }
-            s->m.first_slice_line = 0;
+            s2->first_slice_line = 0;
         }
 
         ff_fix_long_p_mvs(&s->m, CANDIDATE_MB_TYPE_INTRA);
-        ff_fix_long_mvs(&s->m, NULL, 0, s->m.p_mv_table, s->m.f_code,
+        ff_fix_long_mvs(&s->m, NULL, 0, s->m.p_mv_table, s2->f_code,
                         CANDIDATE_MB_TYPE_INTER, 0);
     }
 
-    s->m.first_slice_line = 1;
+    s2->first_slice_line = 1;
     for (y = 0; y < block_height; y++) {
         for (i = 0; i < 16 && i + 16 * y < height; i++) {
             memcpy(&src[i * stride], &src_plane[(i + 16 * y) * src_stride],
@@ -413,7 +415,7 @@ static int svq1_encode_plane(SVQ1EncContext *s, int plane,
         for (; i < 16 && i + 16 * y < 16 * block_height; i++)
             memcpy(&src[i * stride], &src[(i - 1) * stride], 16 * block_width);
 
-        s->m.mb_y = y;
+        s2->mb_y = y;
         for (x = 0; x < block_width; x++) {
             uint8_t reorder_buffer[2][6][7 * 32];
             int count[2][6];
@@ -428,11 +430,11 @@ static int svq1_encode_plane(SVQ1EncContext *s, int plane,
                 return -1;
             }
 
-            s->m.mb_x = x;
-            init_block_index(&s->m);
+            s2->mb_x = x;
+            init_block_index(s2);
 
             if (s->pict_type == AV_PICTURE_TYPE_I ||
-                (s->m.mb_type[x + y * s->m.mb_stride] &
+                (s->m.mb_type[x + y * s2->mb_stride] &
                  CANDIDATE_MB_TYPE_INTRA)) {
                 for (i = 0; i < 6; i++)
                     init_put_bits(&s->reorder_pb[i], reorder_buffer[0][i],
@@ -456,8 +458,8 @@ static int svq1_encode_plane(SVQ1EncContext *s, int plane,
                 int mx, my, pred_x, pred_y, dxy;
                 int16_t *motion_ptr;
 
-                motion_ptr = ff_h263_pred_motion(&s->m, 0, 0, &pred_x, &pred_y);
-                if (s->m.mb_type[x + y * s->m.mb_stride] &
+                motion_ptr = ff_h263_pred_motion(s2, 0, 0, &pred_x, &pred_y);
+                if (s->m.mb_type[x + y * s2->mb_stride] &
                     CANDIDATE_MB_TYPE_INTER) {
                     for (i = 0; i < 6; i++)
                         init_put_bits(&s->reorder_pb[i], reorder_buffer[1][i],
@@ -506,10 +508,10 @@ static int svq1_encode_plane(SVQ1EncContext *s, int plane,
                     motion_ptr[1]                      =
                     motion_ptr[2]                      =
                     motion_ptr[3]                      =
-                    motion_ptr[0 + 2 * s->m.b8_stride] =
-                    motion_ptr[1 + 2 * s->m.b8_stride] =
-                    motion_ptr[2 + 2 * s->m.b8_stride] =
-                    motion_ptr[3 + 2 * s->m.b8_stride] = 0;
+                    motion_ptr[0 + 2 * s2->b8_stride] =
+                    motion_ptr[1 + 2 * s2->b8_stride] =
+                    motion_ptr[2 + 2 * s2->b8_stride] =
+                    motion_ptr[3 + 2 * s2->b8_stride] = 0;
                 }
             }
 
@@ -522,7 +524,7 @@ static int svq1_encode_plane(SVQ1EncContext *s, int plane,
             if (best == 0)
                 s->hdsp.put_pixels_tab[0][0](decoded, temp, stride, 16);
         }
-        s->m.first_slice_line = 0;
+        s2->first_slice_line = 0;
     }
     return 0;
 }
@@ -537,14 +539,14 @@ static av_cold int svq1_encode_end(AVCodecContext *avctx)
                s->rd_total / (double)(avctx->width * avctx->height *
                                       avctx->frame_num));
 
-    av_freep(&s->m.me.scratchpad);
-    av_freep(&s->m.me.map);
+    av_freep(&s->m.c.me.scratchpad);
+    av_freep(&s->m.c.me.map);
     av_freep(&s->mb_type);
     av_freep(&s->dummy);
     av_freep(&s->scratchbuf);
 
     s->m.mb_type = NULL;
-    ff_mpv_common_end(&s->m);
+    ff_mpv_common_end(&s->m.c);
 
     for (i = 0; i < 3; i++) {
         av_freep(&s->motion_val8[i]);
@@ -583,7 +585,7 @@ static av_cold int svq1_encode_init(AVCodecContext *avctx)
 
     ff_hpeldsp_init(&s->hdsp, avctx->flags);
     ff_me_cmp_init(&s->mecc, avctx);
-    ret = ff_me_init(&s->m.me, avctx, &s->mecc, 0);
+    ret = ff_me_init(&s->m.c.me, avctx, &s->mecc, 0);
     if (ret < 0)
         return ret;
     ff_mpegvideoencdsp_init(&s->m.mpvencdsp, avctx);
@@ -604,31 +606,31 @@ static av_cold int svq1_encode_init(AVCodecContext *avctx)
     s->c_block_height = (s->frame_height / 4 + 15) / 16;
 
     s->avctx               = avctx;
-    s->m.avctx             = avctx;
+    s->m.c.avctx           = avctx;
 
-    if ((ret = ff_mpv_common_init(&s->m)) < 0) {
+    ret = ff_mpv_common_init(&s->m.c);
+    if (ret < 0)
         return ret;
-    }
 
-    s->m.picture_structure = PICT_FRAME;
-    s->m.me.temp           =
-    s->m.me.scratchpad     = av_mallocz((avctx->width + 64) *
+    s->m.c.picture_structure = PICT_FRAME;
+    s->m.c.me.temp           =
+    s->m.c.me.scratchpad     = av_mallocz((avctx->width + 64) *
                                         2 * 16 * 2 * sizeof(uint8_t));
     s->mb_type             = av_mallocz((s->y_block_width + 1) *
                                         s->y_block_height * sizeof(int16_t));
     s->dummy               = av_mallocz((s->y_block_width + 1) *
                                         s->y_block_height * sizeof(int32_t));
-    s->m.me.map            = av_mallocz(2 * ME_MAP_SIZE * sizeof(*s->m.me.map));
+    s->m.c.me.map            = av_mallocz(2 * ME_MAP_SIZE * sizeof(*s->m.c.me.map));
     s->m.new_pic       = av_frame_alloc();
 
-    if (!s->m.me.scratchpad || !s->m.me.map ||
+    if (!s->m.c.me.scratchpad || !s->m.c.me.map ||
         !s->mb_type || !s->dummy || !s->m.new_pic)
         return AVERROR(ENOMEM);
-    s->m.me.score_map = s->m.me.map + ME_MAP_SIZE;
+    s->m.c.me.score_map = s->m.c.me.map + ME_MAP_SIZE;
 
     ff_svq1enc_init(&s->svq1encdsp);
 
-    s->m.me.mv_penalty = ff_h263_get_mv_penalty();
+    s->m.c.me.mv_penalty = ff_h263_get_mv_penalty();
 
     return write_ident(avctx, s->avctx->flags & AV_CODEC_FLAG_BITEXACT ? "Lavc" : LIBAVCODEC_IDENT);
 }
@@ -716,7 +718,7 @@ static int svq1_encode_frame(AVCodecContext *avctx, AVPacket *pkt,
 #define OFFSET(x) offsetof(struct SVQ1EncContext, x)
 #define VE AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM
 static const AVOption options[] = {
-    { "motion-est", "Motion estimation algorithm", OFFSET(m.me.motion_est), AV_OPT_TYPE_INT, { .i64 = FF_ME_EPZS }, FF_ME_ZERO, FF_ME_XONE, VE, .unit = "motion-est"},
+    { "motion-est", "Motion estimation algorithm", OFFSET(m.c.me.motion_est), AV_OPT_TYPE_INT, { .i64 = FF_ME_EPZS }, FF_ME_ZERO, FF_ME_XONE, VE, .unit = "motion-est"},
         { "zero", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = FF_ME_ZERO }, 0, 0, FF_MPV_OPT_FLAGS, .unit = "motion-est" },
         { "epzs", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = FF_ME_EPZS }, 0, 0, FF_MPV_OPT_FLAGS, .unit = "motion-est" },
         { "xone", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = FF_ME_XONE }, 0, 0, FF_MPV_OPT_FLAGS, .unit = "motion-est" },
diff --git a/libavcodec/wmv2enc.c b/libavcodec/wmv2enc.c
index 81b0ace053..f9fd918dbf 100644
--- a/libavcodec/wmv2enc.c
+++ b/libavcodec/wmv2enc.c
@@ -48,17 +48,17 @@ typedef struct WMV2EncContext {
 
 static int encode_ext_header(WMV2EncContext *w)
 {
-    MpegEncContext *const s = &w->msmpeg4.m.s;
+    MPVEncContext *const s = &w->msmpeg4.m.s;
     PutBitContext pb;
     int code;
 
-    init_put_bits(&pb, s->avctx->extradata, WMV2_EXTRADATA_SIZE);
+    init_put_bits(&pb, s->c.avctx->extradata, WMV2_EXTRADATA_SIZE);
 
-    put_bits(&pb, 5, s->avctx->time_base.den / s->avctx->time_base.num); // yes 29.97 -> 29
+    put_bits(&pb, 5, s->c.avctx->time_base.den / s->c.avctx->time_base.num); // yes 29.97 -> 29
     put_bits(&pb, 11, FFMIN(w->msmpeg4.m.bit_rate / 1024, 2047));
 
     put_bits(&pb, 1, w->mspel_bit        = 1);
-    put_bits(&pb, 1, s->loop_filter);
+    put_bits(&pb, 1, s->c.loop_filter);
     put_bits(&pb, 1, w->abt_flag         = 1);
     put_bits(&pb, 1, w->j_type_bit       = 1);
     put_bits(&pb, 1, w->top_left_mv_flag = 0);
@@ -67,7 +67,7 @@ static int encode_ext_header(WMV2EncContext *w)
 
     flush_put_bits(&pb);
 
-    s->slice_height = s->mb_height / code;
+    s->c.slice_height = s->c.mb_height / code;
 
     return 0;
 }
@@ -76,25 +76,25 @@ static int wmv2_encode_picture_header(MPVMainEncContext *const m)
 {
     WMV2EncContext *const w = (WMV2EncContext *) m;
     MSMPEG4EncContext *const ms = &w->msmpeg4;
-    MpegEncContext *const s = &m->s;
+    MPVEncContext *const s = &m->s;
 
-    put_bits(&s->pb, 1, s->pict_type - 1);
-    if (s->pict_type == AV_PICTURE_TYPE_I)
+    put_bits(&s->pb, 1, s->c.pict_type - 1);
+    if (s->c.pict_type == AV_PICTURE_TYPE_I)
         put_bits(&s->pb, 7, 0);
-    put_bits(&s->pb, 5, s->qscale);
+    put_bits(&s->pb, 5, s->c.qscale);
 
     ms->dc_table_index  = 1;
     ms->mv_table_index  = 1; /* only if P-frame */
     ms->per_mb_rl_table = 0;
-    s->mspel           = 0;
+    s->c.mspel          = 0;
     w->per_mb_abt      = 0;
     w->abt_type        = 0;
     w->j_type          = 0;
 
-    av_assert0(s->flipflop_rounding);
+    av_assert0(s->c.flipflop_rounding);
 
-    if (s->pict_type == AV_PICTURE_TYPE_I) {
-        av_assert0(s->no_rounding == 1);
+    if (s->c.pict_type == AV_PICTURE_TYPE_I) {
+        av_assert0(s->c.no_rounding == 1);
         if (w->j_type_bit)
             put_bits(&s->pb, 1, w->j_type);
 
@@ -108,17 +108,17 @@ static int wmv2_encode_picture_header(MPVMainEncContext *const m)
 
         put_bits(&s->pb, 1, ms->dc_table_index);
 
-        s->inter_intra_pred = 0;
+        s->c.inter_intra_pred = 0;
     } else {
         int cbp_index;
 
         put_bits(&s->pb, 2, SKIP_TYPE_NONE);
 
         ff_msmpeg4_code012(&s->pb, cbp_index = 0);
-        w->cbp_table_index = wmv2_get_cbp_table_index(s, cbp_index);
+        w->cbp_table_index = wmv2_get_cbp_table_index(&s->c, cbp_index);
 
         if (w->mspel_bit)
-            put_bits(&s->pb, 1, s->mspel);
+            put_bits(&s->pb, 1, s->c.mspel);
 
         if (w->abt_flag) {
             put_bits(&s->pb, 1, w->per_mb_abt ^ 1);
@@ -136,7 +136,7 @@ static int wmv2_encode_picture_header(MPVMainEncContext *const m)
         put_bits(&s->pb, 1, ms->dc_table_index);
         put_bits(&s->pb, 1, ms->mv_table_index);
 
-        s->inter_intra_pred = 0; // (s->width * s->height < 320 * 240 && m->bit_rate <= II_BITRATE);
+        s->c.inter_intra_pred = 0; // (s->c.width * s->c.height < 320 * 240 && m->bit_rate <= II_BITRATE);
     }
     s->esc3_level_length = 0;
     ms->esc3_run_length  = 0;
@@ -147,7 +147,7 @@ static int wmv2_encode_picture_header(MPVMainEncContext *const m)
 /* Nearly identical to wmv1 but that is just because we do not use the
  * useless M$ crap features. It is duplicated here in case someone wants
  * to add support for these crap features. */
-static void wmv2_encode_mb(MpegEncContext *const s, int16_t block[][64],
+static void wmv2_encode_mb(MPVEncContext *const s, int16_t block[][64],
                            int motion_x, int motion_y)
 {
     WMV2EncContext *const w = (WMV2EncContext *) s;
@@ -157,11 +157,11 @@ static void wmv2_encode_mb(MpegEncContext *const s, int16_t block[][64],
 
     ff_msmpeg4_handle_slices(s);
 
-    if (!s->mb_intra) {
+    if (!s->c.mb_intra) {
         /* compute cbp */
         cbp = 0;
         for (i = 0; i < 6; i++)
-            if (s->block_last_index[i] >= 0)
+            if (s->c.block_last_index[i] >= 0)
                 cbp |= 1 << (5 - i);
 
         put_bits(&s->pb,
@@ -170,7 +170,7 @@ static void wmv2_encode_mb(MpegEncContext *const s, int16_t block[][64],
 
         s->misc_bits += get_bits_diff(s);
         /* motion vector */
-        ff_h263_pred_motion(s, 0, 0, &pred_x, &pred_y);
+        ff_h263_pred_motion(&s->c, 0, 0, &pred_x, &pred_y);
         ff_msmpeg4_encode_motion(&w->msmpeg4, motion_x - pred_x,
                                  motion_y - pred_y);
         s->mv_bits += get_bits_diff(s);
@@ -179,19 +179,19 @@ static void wmv2_encode_mb(MpegEncContext *const s, int16_t block[][64],
         cbp       = 0;
         coded_cbp = 0;
         for (i = 0; i < 6; i++) {
-            int val, pred;
-            val  = (s->block_last_index[i] >= 1);
+            int val = (s->c.block_last_index[i] >= 1);
+
             cbp |= val << (5 - i);
             if (i < 4) {
                 /* predict value for close blocks only for luma */
-                pred         = ff_msmpeg4_coded_block_pred(s, i, &coded_block);
+                int pred     = ff_msmpeg4_coded_block_pred(&s->c, i, &coded_block);
                 *coded_block = val;
                 val          = val ^ pred;
             }
             coded_cbp |= val << (5 - i);
         }
 
-        if (s->pict_type == AV_PICTURE_TYPE_I)
+        if (s->c.pict_type == AV_PICTURE_TYPE_I)
             put_bits(&s->pb,
                      ff_msmp4_mb_i_table[coded_cbp][1],
                      ff_msmp4_mb_i_table[coded_cbp][0]);
@@ -200,18 +200,18 @@ static void wmv2_encode_mb(MpegEncContext *const s, int16_t block[][64],
                      ff_wmv2_inter_table[w->cbp_table_index][cbp][1],
                      ff_wmv2_inter_table[w->cbp_table_index][cbp][0]);
         put_bits(&s->pb, 1, 0);         /* no AC prediction yet */
-        if (s->inter_intra_pred) {
-            s->h263_aic_dir = 0;
+        if (s->c.inter_intra_pred) {
+            s->c.h263_aic_dir = 0;
             put_bits(&s->pb,
-                     ff_table_inter_intra[s->h263_aic_dir][1],
-                     ff_table_inter_intra[s->h263_aic_dir][0]);
+                     ff_table_inter_intra[s->c.h263_aic_dir][1],
+                     ff_table_inter_intra[s->c.h263_aic_dir][0]);
         }
         s->misc_bits += get_bits_diff(s);
     }
 
     for (i = 0; i < 6; i++)
         ff_msmpeg4_encode_block(s, block[i], i);
-    if (s->mb_intra)
+    if (s->c.mb_intra)
         s->i_tex_bits += get_bits_diff(s);
     else
         s->p_tex_bits += get_bits_diff(s);
@@ -220,17 +220,17 @@ static void wmv2_encode_mb(MpegEncContext *const s, int16_t block[][64],
 static av_cold int wmv2_encode_init(AVCodecContext *avctx)
 {
     WMV2EncContext *const w = avctx->priv_data;
-    MpegEncContext *const s = &w->msmpeg4.m.s;
+    MPVEncContext *const s = &w->msmpeg4.m.s;
     int ret;
 
     w->msmpeg4.m.encode_picture_header = wmv2_encode_picture_header;
     s->encode_mb                       = wmv2_encode_mb;
-    s->private_ctx = &w->common;
+    s->c.private_ctx = &w->common;
     ret = ff_mpv_encode_init(avctx);
     if (ret < 0)
         return ret;
 
-    ff_wmv2_common_init(s);
+    ff_wmv2_common_init(&s->c);
 
     avctx->extradata_size = WMV2_EXTRADATA_SIZE;
     avctx->extradata      = av_mallocz(avctx->extradata_size + AV_INPUT_BUFFER_PADDING_SIZE);
diff --git a/libavcodec/x86/me_cmp.asm b/libavcodec/x86/me_cmp.asm
index 923eb8078b..a494cdeb64 100644
--- a/libavcodec/x86/me_cmp.asm
+++ b/libavcodec/x86/me_cmp.asm
@@ -214,7 +214,7 @@ hadamard8x8_diff %+ SUFFIX:
 hadamard8_16_wrapper %1, 3
 %elif cpuflag(mmx)
 ALIGN 16
-; int ff_hadamard8_diff_ ## cpu(MpegEncContext *s, const uint8_t *src1,
+; int ff_hadamard8_diff_ ## cpu(MPVEncContext *s, const uint8_t *src1,
 ;                               const uint8_t *src2, ptrdiff_t stride, int h)
 ; r0 = void *s = unused, int h = unused (always 8)
 ; note how r1, r2 and r3 are not clobbered in this function, so 16x16
@@ -278,7 +278,7 @@ INIT_XMM ssse3
 %define ABS_SUM_8x8 ABS_SUM_8x8_64
 HADAMARD8_DIFF 9
 
-; int ff_sse*_*(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+; int ff_sse*_*(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
 ;               ptrdiff_t line_size, int h)
 
 %macro SUM_SQUARED_ERRORS 1
@@ -466,7 +466,7 @@ HF_NOISE 8
 HF_NOISE 16
 
 ;---------------------------------------------------------------------------------------
-;int ff_sad_<opt>(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2, ptrdiff_t stride, int h);
+;int ff_sad_<opt>(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2, ptrdiff_t stride, int h);
 ;---------------------------------------------------------------------------------------
 ;%1 = 8/16
 %macro SAD 1
@@ -521,7 +521,7 @@ INIT_XMM sse2
 SAD 16
 
 ;------------------------------------------------------------------------------------------
-;int ff_sad_x2_<opt>(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2, ptrdiff_t stride, int h);
+;int ff_sad_x2_<opt>(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2, ptrdiff_t stride, int h);
 ;------------------------------------------------------------------------------------------
 ;%1 = 8/16
 %macro SAD_X2 1
@@ -598,7 +598,7 @@ INIT_XMM sse2
 SAD_X2 16
 
 ;------------------------------------------------------------------------------------------
-;int ff_sad_y2_<opt>(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2, ptrdiff_t stride, int h);
+;int ff_sad_y2_<opt>(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2, ptrdiff_t stride, int h);
 ;------------------------------------------------------------------------------------------
 ;%1 = 8/16
 %macro SAD_Y2 1
@@ -668,7 +668,7 @@ INIT_XMM sse2
 SAD_Y2 16
 
 ;-------------------------------------------------------------------------------------------
-;int ff_sad_approx_xy2_<opt>(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2, ptrdiff_t stride, int h);
+;int ff_sad_approx_xy2_<opt>(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2, ptrdiff_t stride, int h);
 ;-------------------------------------------------------------------------------------------
 ;%1 = 8/16
 %macro SAD_APPROX_XY2 1
@@ -769,7 +769,7 @@ INIT_XMM sse2
 SAD_APPROX_XY2 16
 
 ;--------------------------------------------------------------------
-;int ff_vsad_intra(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+;int ff_vsad_intra(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
 ;                  ptrdiff_t line_size, int h);
 ;--------------------------------------------------------------------
 ; %1 = 8/16
@@ -830,7 +830,7 @@ INIT_XMM sse2
 VSAD_INTRA 16
 
 ;---------------------------------------------------------------------
-;int ff_vsad_approx(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+;int ff_vsad_approx(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
 ;                   ptrdiff_t line_size, int h);
 ;---------------------------------------------------------------------
 ; %1 = 8/16
diff --git a/libavcodec/x86/me_cmp_init.c b/libavcodec/x86/me_cmp_init.c
index 98b71b1894..45425f7109 100644
--- a/libavcodec/x86/me_cmp_init.c
+++ b/libavcodec/x86/me_cmp_init.c
@@ -28,59 +28,59 @@
 #include "libavutil/x86/asm.h"
 #include "libavutil/x86/cpu.h"
 #include "libavcodec/me_cmp.h"
-#include "libavcodec/mpegvideo.h"
+#include "libavcodec/mpegvideoenc.h"
 
 int ff_sum_abs_dctelem_sse2(const int16_t *block);
 int ff_sum_abs_dctelem_ssse3(const int16_t *block);
-int ff_sse8_mmx(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_sse8_mmx(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                 ptrdiff_t stride, int h);
-int ff_sse16_mmx(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_sse16_mmx(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                  ptrdiff_t stride, int h);
-int ff_sse16_sse2(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_sse16_sse2(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                   ptrdiff_t stride, int h);
 int ff_hf_noise8_mmx(const uint8_t *pix1, ptrdiff_t stride, int h);
 int ff_hf_noise16_mmx(const uint8_t *pix1, ptrdiff_t stride, int h);
-int ff_sad8_mmxext(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_sad8_mmxext(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                    ptrdiff_t stride, int h);
-int ff_sad16_mmxext(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_sad16_mmxext(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                     ptrdiff_t stride, int h);
-int ff_sad16_sse2(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_sad16_sse2(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                   ptrdiff_t stride, int h);
-int ff_sad8_x2_mmxext(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_sad8_x2_mmxext(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                       ptrdiff_t stride, int h);
-int ff_sad16_x2_mmxext(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_sad16_x2_mmxext(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                        ptrdiff_t stride, int h);
-int ff_sad16_x2_sse2(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_sad16_x2_sse2(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                      ptrdiff_t stride, int h);
-int ff_sad8_y2_mmxext(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_sad8_y2_mmxext(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                       ptrdiff_t stride, int h);
-int ff_sad16_y2_mmxext(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_sad16_y2_mmxext(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                        ptrdiff_t stride, int h);
-int ff_sad16_y2_sse2(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_sad16_y2_sse2(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                      ptrdiff_t stride, int h);
-int ff_sad8_approx_xy2_mmxext(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_sad8_approx_xy2_mmxext(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                               ptrdiff_t stride, int h);
-int ff_sad16_approx_xy2_mmxext(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_sad16_approx_xy2_mmxext(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                                ptrdiff_t stride, int h);
-int ff_sad16_approx_xy2_sse2(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_sad16_approx_xy2_sse2(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                              ptrdiff_t stride, int h);
-int ff_vsad_intra8_mmxext(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_vsad_intra8_mmxext(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                           ptrdiff_t stride, int h);
-int ff_vsad_intra16_mmxext(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_vsad_intra16_mmxext(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                            ptrdiff_t stride, int h);
-int ff_vsad_intra16_sse2(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_vsad_intra16_sse2(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                          ptrdiff_t stride, int h);
-int ff_vsad8_approx_mmxext(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_vsad8_approx_mmxext(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                     ptrdiff_t stride, int h);
-int ff_vsad16_approx_mmxext(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_vsad16_approx_mmxext(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                      ptrdiff_t stride, int h);
-int ff_vsad16_approx_sse2(MpegEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
+int ff_vsad16_approx_sse2(MPVEncContext *v, const uint8_t *pix1, const uint8_t *pix2,
                    ptrdiff_t stride, int h);
 
-#define hadamard_func(cpu)                                                    \
-    int ff_hadamard8_diff_ ## cpu(MpegEncContext *s, const uint8_t *src1,     \
+#define hadamard_func(cpu)                                                       \
+    int ff_hadamard8_diff_ ## cpu(MPVEncContext *s, const uint8_t *src1,         \
                                   const uint8_t *src2, ptrdiff_t stride, int h); \
-    int ff_hadamard8_diff16_ ## cpu(MpegEncContext *s, const uint8_t *src1,   \
+    int ff_hadamard8_diff16_ ## cpu(MPVEncContext *s, const uint8_t *src1,       \
                                     const uint8_t *src2, ptrdiff_t stride, int h);
 
 hadamard_func(mmxext)
@@ -88,7 +88,7 @@ hadamard_func(sse2)
 hadamard_func(ssse3)
 
 #if HAVE_X86ASM
-static int nsse16_mmx(MpegEncContext *c, const uint8_t *pix1, const uint8_t *pix2,
+static int nsse16_mmx(MPVEncContext *c, const uint8_t *pix1, const uint8_t *pix2,
                       ptrdiff_t stride, int h)
 {
     int score1, score2;
@@ -101,12 +101,12 @@ static int nsse16_mmx(MpegEncContext *c, const uint8_t *pix1, const uint8_t *pix
            - ff_hf_noise16_mmx(pix2, stride, h) - ff_hf_noise8_mmx(pix2+8, stride, h);
 
     if (c)
-        return score1 + FFABS(score2) * c->avctx->nsse_weight;
+        return score1 + FFABS(score2) * c->c.avctx->nsse_weight;
     else
         return score1 + FFABS(score2) * 8;
 }
 
-static int nsse8_mmx(MpegEncContext *c, const uint8_t *pix1, const uint8_t *pix2,
+static int nsse8_mmx(MPVEncContext *c, const uint8_t *pix1, const uint8_t *pix2,
                      ptrdiff_t stride, int h)
 {
     int score1 = ff_sse8_mmx(c, pix1, pix2, stride, h);
@@ -114,7 +114,7 @@ static int nsse8_mmx(MpegEncContext *c, const uint8_t *pix1, const uint8_t *pix2
                  ff_hf_noise8_mmx(pix2, stride, h);
 
     if (c)
-        return score1 + FFABS(score2) * c->avctx->nsse_weight;
+        return score1 + FFABS(score2) * c->c.avctx->nsse_weight;
     else
         return score1 + FFABS(score2) * 8;
 }
@@ -199,7 +199,7 @@ static inline int sum_mmx(void)
 }
 
 #define PIX_SADXY(suf)                                                  \
-static int sad8_xy2_ ## suf(MpegEncContext *v, const uint8_t *blk2,     \
+static int sad8_xy2_ ## suf(MPVEncContext *v, const uint8_t *blk2,      \
                             const uint8_t *blk1, ptrdiff_t stride, int h) \
 {                                                                       \
     __asm__ volatile (                                                  \
@@ -212,7 +212,7 @@ static int sad8_xy2_ ## suf(MpegEncContext *v, const uint8_t *blk2,     \
     return sum_ ## suf();                                               \
 }                                                                       \
                                                                         \
-static int sad16_xy2_ ## suf(MpegEncContext *v, const uint8_t *blk2,    \
+static int sad16_xy2_ ## suf(MPVEncContext *v, const uint8_t *blk2,     \
                              const uint8_t *blk1, ptrdiff_t stride, int h) \
 {                                                                       \
     __asm__ volatile (                                                  \
diff --git a/libavcodec/x86/mpegvideoenc.c b/libavcodec/x86/mpegvideoenc.c
index 612e7ff758..99daccef3b 100644
--- a/libavcodec/x86/mpegvideoenc.c
+++ b/libavcodec/x86/mpegvideoenc.c
@@ -70,8 +70,9 @@ DECLARE_ALIGNED(16, static const uint16_t, inv_zigzag_direct16)[64] = {
 
 #if HAVE_INLINE_ASM
 #if HAVE_SSE2_INLINE
-static void  denoise_dct_sse2(MpegEncContext *s, int16_t *block){
-    const int intra= s->mb_intra;
+static void denoise_dct_sse2(MPVEncContext *const s, int16_t block[])
+{
+    const int intra= s->c.mb_intra;
     int *sum= s->dct_error_sum[intra];
     uint16_t *offset= s->dct_offset[intra];
 
@@ -128,9 +129,9 @@ static void  denoise_dct_sse2(MpegEncContext *s, int16_t *block){
 #endif /* HAVE_SSE2_INLINE */
 #endif /* HAVE_INLINE_ASM */
 
-av_cold void ff_dct_encode_init_x86(MpegEncContext *s)
+av_cold void ff_dct_encode_init_x86(MPVEncContext *const s)
 {
-    const int dct_algo = s->avctx->dct_algo;
+    const int dct_algo = s->c.avctx->dct_algo;
 
     if (dct_algo == FF_DCT_AUTO || dct_algo == FF_DCT_MMX) {
 #if HAVE_MMX_INLINE
diff --git a/libavcodec/x86/mpegvideoenc_template.c b/libavcodec/x86/mpegvideoenc_template.c
index f937c7166b..08619ac570 100644
--- a/libavcodec/x86/mpegvideoenc_template.c
+++ b/libavcodec/x86/mpegvideoenc_template.c
@@ -26,7 +26,7 @@
 #include "libavutil/mem_internal.h"
 #include "libavutil/x86/asm.h"
 #include "libavcodec/mpegutils.h"
-#include "libavcodec/mpegvideo.h"
+#include "libavcodec/mpegvideoenc.h"
 #include "fdct.h"
 
 #undef MMREG_WIDTH
@@ -90,7 +90,7 @@
             "psubw "a", "b"             \n\t" // out=((ABS(block[i])*qmat[0] - bias[0]*qmat[0])>>16)*sign(block[i])
 #endif
 
-static int RENAME(dct_quantize)(MpegEncContext *s,
+static int RENAME(dct_quantize)(MPVEncContext *const s,
                             int16_t *block, int n,
                             int qscale, int *overflow)
 {
@@ -105,19 +105,19 @@ static int RENAME(dct_quantize)(MpegEncContext *s,
     if(s->dct_error_sum)
         s->denoise_dct(s, block);
 
-    if (s->mb_intra) {
+    if (s->c.mb_intra) {
         int dummy;
         if (n < 4){
-            q = s->y_dc_scale;
+            q = s->c.y_dc_scale;
             bias = s->q_intra_matrix16[qscale][1];
             qmat = s->q_intra_matrix16[qscale][0];
         }else{
-            q = s->c_dc_scale;
+            q = s->c.c_dc_scale;
             bias = s->q_chroma_intra_matrix16[qscale][1];
             qmat = s->q_chroma_intra_matrix16[qscale][0];
         }
         /* note: block[0] is assumed to be positive */
-        if (!s->h263_aic) {
+        if (!s->c.h263_aic) {
         __asm__ volatile (
                 "mul %%ecx                \n\t"
                 : "=d" (level), "=a"(dummy)
@@ -136,7 +136,7 @@ static int RENAME(dct_quantize)(MpegEncContext *s,
         qmat = s->q_inter_matrix16[qscale][0];
     }
 
-    if((s->out_format == FMT_H263 || s->out_format == FMT_H261) && s->mpeg_quant==0){
+    if((s->c.out_format == FMT_H263 || s->c.out_format == FMT_H261) && s->c.mpeg_quant==0){
 
         __asm__ volatile(
             "movd %%"FF_REG_a", "MM"3           \n\t" // last_non_zero_p1
@@ -220,11 +220,11 @@ static int RENAME(dct_quantize)(MpegEncContext *s,
         : "g" (s->max_qcoeff)
     );
 
-    if(s->mb_intra) block[0]= level;
+    if(s->c.mb_intra) block[0]= level;
     else            block[0]= temp_block[0];
 
-    av_assert2(ARCH_X86_32 || s->idsp.perm_type != FF_IDCT_PERM_SIMPLE);
-    if (ARCH_X86_32 && s->idsp.perm_type == FF_IDCT_PERM_SIMPLE) {
+    av_assert2(ARCH_X86_32 || s->c.idsp.perm_type != FF_IDCT_PERM_SIMPLE);
+    if (ARCH_X86_32 && s->c.idsp.perm_type == FF_IDCT_PERM_SIMPLE) {
         if(last_non_zero_p1 <= 1) goto end;
         block[0x08] = temp_block[0x01]; block[0x10] = temp_block[0x08];
         block[0x20] = temp_block[0x10];
@@ -268,7 +268,7 @@ static int RENAME(dct_quantize)(MpegEncContext *s,
         block[0x3E] = temp_block[0x3D]; block[0x27] = temp_block[0x36];
         block[0x3D] = temp_block[0x2F]; block[0x2F] = temp_block[0x37];
         block[0x37] = temp_block[0x3E]; block[0x3F] = temp_block[0x3F];
-    }else if(s->idsp.perm_type == FF_IDCT_PERM_LIBMPEG2){
+    }else if(s->c.idsp.perm_type == FF_IDCT_PERM_LIBMPEG2){
         if(last_non_zero_p1 <= 1) goto end;
         block[0x04] = temp_block[0x01];
         block[0x08] = temp_block[0x08]; block[0x10] = temp_block[0x10];
@@ -312,7 +312,7 @@ static int RENAME(dct_quantize)(MpegEncContext *s,
         block[0x3E] = temp_block[0x3D]; block[0x33] = temp_block[0x36];
         block[0x2F] = temp_block[0x2F]; block[0x37] = temp_block[0x37];
         block[0x3B] = temp_block[0x3E]; block[0x3F] = temp_block[0x3F];
-    } else if (s->idsp.perm_type == FF_IDCT_PERM_NONE) {
+    } else if (s->c.idsp.perm_type == FF_IDCT_PERM_NONE) {
         if(last_non_zero_p1 <= 1) goto end;
         block[0x01] = temp_block[0x01];
         block[0x08] = temp_block[0x08]; block[0x10] = temp_block[0x10];
@@ -356,7 +356,7 @@ static int RENAME(dct_quantize)(MpegEncContext *s,
         block[0x3D] = temp_block[0x3D]; block[0x36] = temp_block[0x36];
         block[0x2F] = temp_block[0x2F]; block[0x37] = temp_block[0x37];
         block[0x3E] = temp_block[0x3E]; block[0x3F] = temp_block[0x3F];
-    } else if (s->idsp.perm_type == FF_IDCT_PERM_TRANSPOSE) {
+    } else if (s->c.idsp.perm_type == FF_IDCT_PERM_TRANSPOSE) {
         if(last_non_zero_p1 <= 1) goto end;
         block[0x08] = temp_block[0x01];
         block[0x01] = temp_block[0x08]; block[0x02] = temp_block[0x10];
@@ -401,12 +401,12 @@ static int RENAME(dct_quantize)(MpegEncContext *s,
         block[0x3D] = temp_block[0x2F]; block[0x3E] = temp_block[0x37];
         block[0x37] = temp_block[0x3E]; block[0x3F] = temp_block[0x3F];
     } else {
-        av_log(s->avctx, AV_LOG_DEBUG, "s->idsp.perm_type: %d\n",
-                (int)s->idsp.perm_type);
-        av_assert0(s->idsp.perm_type == FF_IDCT_PERM_NONE ||
-                s->idsp.perm_type == FF_IDCT_PERM_LIBMPEG2 ||
-                s->idsp.perm_type == FF_IDCT_PERM_SIMPLE ||
-                s->idsp.perm_type == FF_IDCT_PERM_TRANSPOSE);
+        av_log(s->c.avctx, AV_LOG_DEBUG, "s->c.idsp.perm_type: %d\n",
+                (int)s->c.idsp.perm_type);
+        av_assert0(s->c.idsp.perm_type == FF_IDCT_PERM_NONE ||
+                s->c.idsp.perm_type == FF_IDCT_PERM_LIBMPEG2 ||
+                s->c.idsp.perm_type == FF_IDCT_PERM_SIMPLE ||
+                s->c.idsp.perm_type == FF_IDCT_PERM_TRANSPOSE);
     }
     end:
     return last_non_zero_p1 - 1;
diff --git a/tests/checkasm/motion.c b/tests/checkasm/motion.c
index 7e322da0d5..960a41a8ed 100644
--- a/tests/checkasm/motion.c
+++ b/tests/checkasm/motion.c
@@ -51,7 +51,7 @@ static void test_motion(const char *name, me_cmp_func test_func)
     LOCAL_ALIGNED_16(uint8_t, img1, [WIDTH * HEIGHT]);
     LOCAL_ALIGNED_16(uint8_t, img2, [WIDTH * HEIGHT]);
 
-    declare_func_emms(AV_CPU_FLAG_MMX, int, struct MpegEncContext *c,
+    declare_func_emms(AV_CPU_FLAG_MMX, int, MPVEncContext *c,
                       const uint8_t *blk1 /* align width (8 or 16) */,
                       const uint8_t *blk2 /* align 1 */, ptrdiff_t stride,
                       int h);
-- 
2.45.2


[-- Attachment #3: 0064-avcodec-mpeg12enc-speedhqenc-Optimize-writing-escape.patch --]
[-- Type: text/x-patch, Size: 2639 bytes --]

From c595a3b0a336d6b06b8a18e3008a6b7e8f989c2d Mon Sep 17 00:00:00 2001
From: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Date: Wed, 19 Mar 2025 12:26:47 +0100
Subject: [PATCH 64/77] avcodec/mpeg12enc, speedhqenc: Optimize writing escape
 codes

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
 libavcodec/mpeg12enc.c  | 9 ++++-----
 libavcodec/speedhqenc.c | 9 ++++-----
 2 files changed, 8 insertions(+), 10 deletions(-)

diff --git a/libavcodec/mpeg12enc.c b/libavcodec/mpeg12enc.c
index 5a91f9fff1..584573b466 100644
--- a/libavcodec/mpeg12enc.c
+++ b/libavcodec/mpeg12enc.c
@@ -628,12 +628,11 @@ next_coef:
                 put_bits(&s->pb, table_vlc[code][1] + 1,
                          (table_vlc[code][0] << 1) + sign);
             } else {
-                /* Escape seems to be pretty rare <5% so I do not optimize it;
-                 * the following value is the common escape value for both
-                 * possible tables (i.e. table_vlc[111]). */
-                put_bits(&s->pb, 6, 0x01);
+                /* Escape seems to be pretty rare <5% so I do not optimize it.
+                 * The following encodes the run together with the common escape
+                 * value of both tables, 000001b. */
+                put_bits(&s->pb, 6 + 6, 0x01 << 6 | run);
                 /* escape: only clip in this case */
-                put_bits(&s->pb, 6, run);
                 if (s->c.codec_id == AV_CODEC_ID_MPEG1VIDEO) {
                     if (alevel < 128) {
                         put_sbits(&s->pb, 8, level);
diff --git a/libavcodec/speedhqenc.c b/libavcodec/speedhqenc.c
index 7ddfb92076..ecba2cd840 100644
--- a/libavcodec/speedhqenc.c
+++ b/libavcodec/speedhqenc.c
@@ -194,11 +194,10 @@ static void encode_block(MPVEncContext *const s, const int16_t block[], int n)
                             ff_speedhq_vlc_table[code][0] | (sign << ff_speedhq_vlc_table[code][1]));
             } else {
                 /* escape seems to be pretty rare <5% so I do not optimize it;
-                 * the values correspond to ff_speedhq_vlc_table[121] */
-                put_bits_le(&s->pb, 6, 32);
-                /* escape: only clip in this case */
-                put_bits_le(&s->pb, 6, run);
-                put_bits_le(&s->pb, 12, level + 2048);
+                 * the following encodes the run and level together with the
+                 * escape value 100000b. */
+                put_bits_le(&s->pb, 6 + 6 + 12, 0x20 | run << 6 |
+                                                (level + 2048) << 12);
             }
             last_non_zero = i;
         }
-- 
2.45.2


[-- Attachment #4: 0065-avcodec-mpegvideo_enc-Move-lambda-lambda2-to-MPVEncC.patch --]
[-- Type: text/x-patch, Size: 15602 bytes --]

From 100d33b5ec28f4535cbcc00061807f1095433c5e Mon Sep 17 00:00:00 2001
From: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Date: Wed, 19 Mar 2025 12:50:06 +0100
Subject: [PATCH 65/77] avcodec/mpegvideo_enc: Move lambda, lambda2 to
 MPVEncContext

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
 libavcodec/h263enc.h       |  2 +-
 libavcodec/motion_est.c    | 24 +++++++++++-----------
 libavcodec/mpeg4videoenc.c |  2 +-
 libavcodec/mpegvideo.h     |  2 --
 libavcodec/mpegvideo_enc.c | 42 ++++++++++++++++++++------------------
 libavcodec/mpegvideoenc.h  |  2 ++
 libavcodec/snowenc.c       |  6 +++---
 libavcodec/svq1enc.c       |  6 +++---
 8 files changed, 44 insertions(+), 42 deletions(-)

diff --git a/libavcodec/h263enc.h b/libavcodec/h263enc.h
index 1f459a332c..63af3a576f 100644
--- a/libavcodec/h263enc.h
+++ b/libavcodec/h263enc.h
@@ -52,7 +52,7 @@ static inline int get_p_cbp(MPVEncContext *const s,
         int best_cbpc_score = INT_MAX;
         int cbpc = (-1), cbpy = (-1);
         const int offset = (s->c.mv_type == MV_TYPE_16X16 ? 0 : 16) + (s->dquant ? 8 : 0);
-        const int lambda = s->c.lambda2 >> (FF_LAMBDA_SHIFT - 6);
+        const int lambda = s->lambda2 >> (FF_LAMBDA_SHIFT - 6);
 
         for (int i = 0; i < 4; i++) {
             int score = ff_h263_inter_MCBPC_bits[i + offset] * lambda;
diff --git a/libavcodec/motion_est.c b/libavcodec/motion_est.c
index 923bf5687b..b2b888237b 100644
--- a/libavcodec/motion_est.c
+++ b/libavcodec/motion_est.c
@@ -902,9 +902,9 @@ void ff_estimate_p_frame_motion(MPVEncContext *const s,
     av_assert0(s->c.linesize == c->stride);
     av_assert0(s->c.uvlinesize == c->uvstride);
 
-    c->penalty_factor     = get_penalty_factor(s->c.lambda, s->c.lambda2, c->avctx->me_cmp);
-    c->sub_penalty_factor = get_penalty_factor(s->c.lambda, s->c.lambda2, c->avctx->me_sub_cmp);
-    c->mb_penalty_factor  = get_penalty_factor(s->c.lambda, s->c.lambda2, c->avctx->mb_cmp);
+    c->penalty_factor     = get_penalty_factor(s->lambda, s->lambda2, c->avctx->me_cmp);
+    c->sub_penalty_factor = get_penalty_factor(s->lambda, s->lambda2, c->avctx->me_sub_cmp);
+    c->mb_penalty_factor  = get_penalty_factor(s->lambda, s->lambda2, c->avctx->mb_cmp);
     c->current_mv_penalty = c->mv_penalty[s->c.f_code] + MAX_DMV;
 
     get_limits(s, 16*mb_x, 16*mb_y, 0);
@@ -968,14 +968,14 @@ void ff_estimate_p_frame_motion(MPVEncContext *const s,
     c->mc_mb_var_sum_temp += (vard+128)>>8;
 
     if (c->avctx->mb_decision > FF_MB_DECISION_SIMPLE) {
-        int p_score = FFMIN(vard, varc - 500 + (s->c.lambda2 >> FF_LAMBDA_SHIFT)*100);
-        int i_score = varc - 500 + (s->c.lambda2 >> FF_LAMBDA_SHIFT)*20;
+        int p_score = FFMIN(vard, varc - 500 + (s->lambda2 >> FF_LAMBDA_SHIFT)*100);
+        int i_score = varc - 500 + (s->lambda2 >> FF_LAMBDA_SHIFT)*20;
         c->scene_change_score+= ff_sqrt(p_score) - ff_sqrt(i_score);
 
         if (vard*2 + 200*256 > varc && !s->intra_penalty)
             mb_type|= CANDIDATE_MB_TYPE_INTRA;
         if (varc*2 + 200*256 > vard || s->c.qscale > 24){
-//        if (varc*2 + 200*256 + 50*(s->c.lambda2>>FF_LAMBDA_SHIFT) > vard){
+//        if (varc*2 + 200*256 + 50*(s->lambda2>>FF_LAMBDA_SHIFT) > vard){
             mb_type|= CANDIDATE_MB_TYPE_INTER;
             c->sub_motion_search(s, &mx, &my, dmin, 0, 0, 0, 16);
             if (s->mpv_flags & FF_MPV_FLAG_MV0)
@@ -1050,8 +1050,8 @@ void ff_estimate_p_frame_motion(MPVEncContext *const s,
             s->c.cur_pic.mb_type[mb_y*s->c.mb_stride + mb_x] = 0;
 
         {
-            int p_score = FFMIN(vard, varc-500+(s->c.lambda2>>FF_LAMBDA_SHIFT)*100);
-            int i_score = varc-500+(s->c.lambda2>>FF_LAMBDA_SHIFT)*20;
+            int p_score = FFMIN(vard, varc-500+(s->lambda2>>FF_LAMBDA_SHIFT)*100);
+            int i_score = varc-500+(s->lambda2>>FF_LAMBDA_SHIFT)*20;
             c->scene_change_score+= ff_sqrt(p_score) - ff_sqrt(i_score);
         }
     }
@@ -1071,7 +1071,7 @@ int ff_pre_estimate_p_frame_motion(MPVEncContext *const s,
 
     av_assert0(s->c.quarter_sample==0 || s->c.quarter_sample==1);
 
-    c->pre_penalty_factor = get_penalty_factor(s->c.lambda, s->c.lambda2, c->avctx->me_pre_cmp);
+    c->pre_penalty_factor = get_penalty_factor(s->lambda, s->lambda2, c->avctx->me_pre_cmp);
     c->current_mv_penalty = c->mv_penalty[s->c.f_code] + MAX_DMV;
 
     get_limits(s, 16*mb_x, 16*mb_y, 0);
@@ -1510,9 +1510,9 @@ void ff_estimate_b_frame_motion(MPVEncContext *const s,
         return;
     }
 
-    c->penalty_factor    = get_penalty_factor(s->c.lambda, s->c.lambda2, c->avctx->me_cmp);
-    c->sub_penalty_factor= get_penalty_factor(s->c.lambda, s->c.lambda2, c->avctx->me_sub_cmp);
-    c->mb_penalty_factor = get_penalty_factor(s->c.lambda, s->c.lambda2, c->avctx->mb_cmp);
+    c->penalty_factor    = get_penalty_factor(s->lambda, s->lambda2, c->avctx->me_cmp);
+    c->sub_penalty_factor= get_penalty_factor(s->lambda, s->lambda2, c->avctx->me_sub_cmp);
+    c->mb_penalty_factor = get_penalty_factor(s->lambda, s->lambda2, c->avctx->mb_cmp);
 
     if (s->c.codec_id == AV_CODEC_ID_MPEG4)
         dmin= direct_search(s, mb_x, mb_y);
diff --git a/libavcodec/mpeg4videoenc.c b/libavcodec/mpeg4videoenc.c
index 9f933b517e..fefd94ee99 100644
--- a/libavcodec/mpeg4videoenc.c
+++ b/libavcodec/mpeg4videoenc.c
@@ -461,7 +461,7 @@ static inline int get_b_cbp(MPVEncContext *const s, int16_t block[6][64],
 
     if (s->mpv_flags & FF_MPV_FLAG_CBP_RD) {
         int score        = 0;
-        const int lambda = s->c.lambda2 >> (FF_LAMBDA_SHIFT - 6);
+        const int lambda = s->lambda2 >> (FF_LAMBDA_SHIFT - 6);
 
         for (i = 0; i < 6; i++) {
             if (s->coded_score[i] < 0) {
diff --git a/libavcodec/mpegvideo.h b/libavcodec/mpegvideo.h
index adaa0cf2d0..5301338188 100644
--- a/libavcodec/mpegvideo.h
+++ b/libavcodec/mpegvideo.h
@@ -166,8 +166,6 @@ typedef struct MpegEncContext {
 
     int qscale;                 ///< QP
     int chroma_qscale;          ///< chroma QP
-    unsigned int lambda;        ///< Lagrange multiplier used in rate distortion
-    unsigned int lambda2;       ///< (lambda*lambda) >> FF_LAMBDA_SHIFT
     int pict_type;              ///< AV_PICTURE_TYPE_I, AV_PICTURE_TYPE_P, AV_PICTURE_TYPE_B, ...
     int droppable;
 
diff --git a/libavcodec/mpegvideo_enc.c b/libavcodec/mpegvideo_enc.c
index 7061ad0719..6c1d157b64 100644
--- a/libavcodec/mpegvideo_enc.c
+++ b/libavcodec/mpegvideo_enc.c
@@ -204,7 +204,7 @@ static inline void update_qscale(MPVMainEncContext *const m)
         int best = 1;
 
         for (i = 0 ; i<FF_ARRAY_ELEMS(ff_mpeg2_non_linear_qscale); i++) {
-            int diff = FFABS((ff_mpeg2_non_linear_qscale[i]<<(FF_LAMBDA_SHIFT + 6)) - (int)s->c.lambda * 139);
+            int diff = FFABS((ff_mpeg2_non_linear_qscale[i]<<(FF_LAMBDA_SHIFT + 6)) - (int)s->lambda * 139);
             if (ff_mpeg2_non_linear_qscale[i] < s->c.avctx->qmin ||
                 (ff_mpeg2_non_linear_qscale[i] > s->c.avctx->qmax && !m->vbv_ignore_qmax))
                 continue;
@@ -215,12 +215,12 @@ static inline void update_qscale(MPVMainEncContext *const m)
         }
         s->c.qscale = best;
     } else {
-        s->c.qscale = (s->c.lambda * 139 + FF_LAMBDA_SCALE * 64) >>
+        s->c.qscale = (s->lambda * 139 + FF_LAMBDA_SCALE * 64) >>
                     (FF_LAMBDA_SHIFT + 7);
         s->c.qscale = av_clip(s->c.qscale, s->c.avctx->qmin, m->vbv_ignore_qmax ? 31 : s->c.avctx->qmax);
     }
 
-    s->c.lambda2 = (s->c.lambda * s->c.lambda + FF_LAMBDA_SCALE / 2) >>
+    s->lambda2 = (s->lambda * s->lambda + FF_LAMBDA_SCALE / 2) >>
                  FF_LAMBDA_SHIFT;
 }
 
@@ -261,8 +261,8 @@ static void update_duplicate_context_after_me(MPVEncContext *const dst,
     COPY(c.f_code);
     COPY(c.b_code);
     COPY(c.qscale);
-    COPY(c.lambda);
-    COPY(c.lambda2);
+    COPY(lambda);
+    COPY(lambda2);
     COPY(c.frame_pred_frame_dct); // FIXME don't set in encode_header
     COPY(c.progressive_frame);    // FIXME don't set in encode_header
     COPY(c.partitioned_frame);    // FIXME don't set in encode_header
@@ -1476,7 +1476,7 @@ static int skip_check(MPVMainEncContext *const m,
 
     if (score64 < m->frame_skip_threshold)
         return 1;
-    if (score64 < ((m->frame_skip_factor * (int64_t) s->c.lambda) >> 8))
+    if (score64 < ((m->frame_skip_factor * (int64_t) s->lambda) >> 8))
         return 1;
     return 0;
 }
@@ -1991,8 +1991,8 @@ vbv_retry:
             int min_step = hq ? 1 : (1<<(FF_LAMBDA_SHIFT + 7))/139;
 
             if (put_bits_count(&s->pb) > max_size &&
-                s->c.lambda < m->lmax) {
-                m->next_lambda = FFMAX(s->c.lambda + min_step, s->c.lambda *
+                s->lambda < m->lmax) {
+                m->next_lambda = FFMAX(s->lambda + min_step, s->lambda *
                                        (s->c.qscale + 1) / s->c.qscale);
                 if (s->adaptive_quant) {
                     for (int i = 0; i < s->c.mb_height * s->c.mb_stride; i++)
@@ -2297,8 +2297,8 @@ static av_always_inline void encode_mb_internal(MPVEncContext *const s,
         const int last_qp = s->c.qscale;
         const int mb_xy = mb_x + mb_y * s->c.mb_stride;
 
-        s->c.lambda = s->lambda_table[mb_xy];
-        s->c.lambda2 = (s->c.lambda * s->c.lambda + FF_LAMBDA_SCALE / 2) >>
+        s->lambda  = s->lambda_table[mb_xy];
+        s->lambda2 = (s->lambda * s->lambda + FF_LAMBDA_SCALE / 2) >>
                        FF_LAMBDA_SHIFT;
 
         if (!(s->mpv_flags & FF_MPV_FLAG_QP_RD)) {
@@ -2723,7 +2723,7 @@ static void encode_mb_hq(MPVEncContext *const s, MPVEncContext *const backup, MP
     if(s->c.avctx->mb_decision == FF_MB_DECISION_RD){
         mpv_reconstruct_mb(s, s->c.block);
 
-        score *= s->c.lambda2;
+        score *= s->lambda2;
         score += sse_mb(s) << FF_LAMBDA_SHIFT;
     }
 
@@ -3651,10 +3651,10 @@ static int estimate_qp(MPVMainEncContext *const m, int dry_run)
             break;
         }
 
-        s->c.lambda= s->lambda_table[0];
+        s->lambda = s->lambda_table[0];
         //FIXME broken
     }else
-        s->c.lambda = s->c.cur_pic.ptr->f->quality;
+        s->lambda = s->c.cur_pic.ptr->f->quality;
     update_qscale(m);
     return 0;
 }
@@ -3695,7 +3695,7 @@ static int encode_picture(MPVMainEncContext *const m, const AVPacket *pkt)
 
     s->c.me.scene_change_score=0;
 
-//    s->c.lambda= s->c.cur_pic.ptr->quality; //FIXME qscale / ... stuff for ME rate distortion
+//    s->lambda = s->c.cur_pic.ptr->quality; //FIXME qscale / ... stuff for ME rate distortion
 
     if (s->c.pict_type == AV_PICTURE_TYPE_I) {
         s->c.no_rounding = s->c.msmpeg4_version >= MSMP4_V3;
@@ -3710,9 +3710,9 @@ static int encode_picture(MPVMainEncContext *const m, const AVPacket *pkt)
         ff_get_2pass_fcode(m);
     } else if (!(s->c.avctx->flags & AV_CODEC_FLAG_QSCALE)) {
         if(s->c.pict_type==AV_PICTURE_TYPE_B)
-            s->c.lambda = m->last_lambda_for[s->c.pict_type];
+            s->lambda = m->last_lambda_for[s->c.pict_type];
         else
-            s->c.lambda = m->last_lambda_for[m->last_non_b_pict_type];
+            s->lambda = m->last_lambda_for[m->last_non_b_pict_type];
         update_qscale(m);
     }
 
@@ -3728,6 +3728,8 @@ static int encode_picture(MPVMainEncContext *const m, const AVPacket *pkt)
             ret = ff_update_duplicate_context(&slice->c, &s->c);
             if (ret < 0)
                 return ret;
+            slice->lambda  = s->lambda;
+            slice->lambda2 = s->lambda2;
         }
         slice->c.me.temp = slice->c.me.scratchpad = slice->c.sc.scratchpad_buf;
 
@@ -3740,8 +3742,8 @@ static int encode_picture(MPVMainEncContext *const m, const AVPacket *pkt)
 
     /* Estimate motion for every MB */
     if (s->c.pict_type != AV_PICTURE_TYPE_I) {
-        s->c.lambda  = (s->c.lambda  * m->me_penalty_compensation + 128) >> 8;
-        s->c.lambda2 = (s->c.lambda2 * (int64_t) m->me_penalty_compensation + 128) >> 8;
+        s->lambda  = (s->lambda  * m->me_penalty_compensation + 128) >> 8;
+        s->lambda2 = (s->lambda2 * (int64_t) m->me_penalty_compensation + 128) >> 8;
         if (s->c.pict_type != AV_PICTURE_TYPE_B) {
             if ((m->me_pre && m->last_non_b_pict_type == AV_PICTURE_TYPE_I) ||
                 m->me_pre == 2) {
@@ -3972,7 +3974,7 @@ static int dct_quantize_trellis_c(MPVEncContext *const s,
     int qmul, qadd, start_i, last_non_zero, i, dc;
     const int esc_length= s->ac_esc_length;
     const uint8_t *length, *last_length;
-    const int lambda= s->c.lambda2 >> (FF_LAMBDA_SHIFT - 6);
+    const int lambda = s->lambda2 >> (FF_LAMBDA_SHIFT - 6);
     int mpeg2_qscale;
 
     s->fdsp.fdct(block);
@@ -4362,7 +4364,7 @@ static int dct_quantize_refine(MPVEncContext *const s, //FIXME breaks denoise?
         av_assert2(w<(1<<6));
         sum += w*w;
     }
-    lambda= sum*(uint64_t)s->c.lambda2 >> (FF_LAMBDA_SHIFT - 6 + 6 + 6 + 6);
+    lambda = sum*(uint64_t)s->lambda2 >> (FF_LAMBDA_SHIFT - 6 + 6 + 6 + 6);
 
     run=0;
     rle_index=0;
diff --git a/libavcodec/mpegvideoenc.h b/libavcodec/mpegvideoenc.h
index 1d124b1bd1..a2b544d70e 100644
--- a/libavcodec/mpegvideoenc.h
+++ b/libavcodec/mpegvideoenc.h
@@ -47,6 +47,8 @@ typedef struct MPVEncContext {
     /** bit output */
     PutBitContext pb;
 
+    unsigned int lambda;        ///< Lagrange multiplier used in rate distortion
+    unsigned int lambda2;       ///< (lambda*lambda) >> FF_LAMBDA_SHIFT
     int *lambda_table;
     int adaptive_quant;         ///< use adaptive quantization
     int dquant;                 ///< qscale difference to prev qscale
diff --git a/libavcodec/snowenc.c b/libavcodec/snowenc.c
index f9b10f7dab..269fc8a599 100644
--- a/libavcodec/snowenc.c
+++ b/libavcodec/snowenc.c
@@ -1873,9 +1873,9 @@ static int encode_frame(AVCodecContext *avctx, AVPacket *pkt,
         mpv->c.out_format      = FMT_H263;
         mpv->c.unrestricted_mv = 1;
 
-        mpv->c.lambda = enc->lambda;
-        mpv->c.qscale = (mpv->c.lambda*139 + FF_LAMBDA_SCALE*64) >> (FF_LAMBDA_SHIFT + 7);
-        enc->lambda2  = mpv->c.lambda2 = (mpv->c.lambda*mpv->c.lambda + FF_LAMBDA_SCALE/2) >> FF_LAMBDA_SHIFT;
+        mpv->lambda   = enc->lambda;
+        mpv->c.qscale = (mpv->lambda*139 + FF_LAMBDA_SCALE*64) >> (FF_LAMBDA_SHIFT + 7);
+        enc->lambda2  = mpv->lambda2 = (mpv->lambda*mpv->lambda + FF_LAMBDA_SCALE/2) >> FF_LAMBDA_SHIFT;
 
         mpv->c.qdsp = enc->qdsp; //move
         mpv->c.hdsp = s->hdsp;
diff --git a/libavcodec/svq1enc.c b/libavcodec/svq1enc.c
index ddc94dffe5..6e5146d6a5 100644
--- a/libavcodec/svq1enc.c
+++ b/libavcodec/svq1enc.c
@@ -342,11 +342,11 @@ static int svq1_encode_plane(SVQ1EncContext *s, int plane,
         s2->me.scene_change_score         = 0;
         // s2->out_format                    = FMT_H263;
         // s2->unrestricted_mv               = 1;
-        s2->lambda                        = s->quality;
-        s2->qscale                        = s2->lambda * 139 +
+        s->m.lambda                       = s->quality;
+        s2->qscale                        = s->m.lambda * 139 +
                                              FF_LAMBDA_SCALE * 64 >>
                                              FF_LAMBDA_SHIFT + 7;
-        s2->lambda2                       = s2->lambda * s2->lambda +
+        s->m.lambda2                      = s->m.lambda * s->m.lambda +
                                              FF_LAMBDA_SCALE / 2 >>
                                              FF_LAMBDA_SHIFT;
 
-- 
2.45.2
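Patch 65 moves lambda and lambda2 into MPVEncContext but preserves the invariant that update_qscale() maintains and that the new per-slice copy (`slice->lambda2 = s->lambda2;`) relies on: lambda2 is the rounded fixed-point square of lambda. In FFmpeg, FF_LAMBDA_SHIFT is 7, so lambda is a 7-bit fixed-point Lagrange multiplier. A sketch of the rounding:

```c
#include <assert.h>

#define FF_LAMBDA_SHIFT 7                     /* as in libavutil/avutil.h */
#define FF_LAMBDA_SCALE (1 << FF_LAMBDA_SHIFT)

/* lambda2 = (lambda * lambda) >> FF_LAMBDA_SHIFT, with round-to-nearest —
 * mirrors the expression in update_qscale() above. */
static unsigned lambda_to_lambda2(unsigned lambda)
{
    return (lambda * lambda + FF_LAMBDA_SCALE / 2) >> FF_LAMBDA_SHIFT;
}
```

With this scaling, lambda == FF_LAMBDA_SCALE (i.e. a real-valued multiplier of 1.0) squares to itself, which is why both values can be copied to slice contexts without re-deriving one from the other.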


[-- Attachment #5: 0066-avcodec-mpegvideo_enc-Don-t-reset-statistics-twice.patch --]
[-- Type: text/x-patch, Size: 1735 bytes --]

From ac6283a7d26bad8814372ca70bb8e980a51209dd Mon Sep 17 00:00:00 2001
From: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Date: Wed, 19 Mar 2025 12:53:44 +0100
Subject: [PATCH 66/77] avcodec/mpegvideo_enc: Don't reset statistics twice

The statistics are currently zeroed for the non-main slice
contexts when merging. But these variables get reset at the start of
encode_thread() for all slices anyway, so zeroing them here is unnecessary.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
 libavcodec/mpegvideo_enc.c | 17 +++++++++--------
 1 file changed, 9 insertions(+), 8 deletions(-)

diff --git a/libavcodec/mpegvideo_enc.c b/libavcodec/mpegvideo_enc.c
index 6c1d157b64..7492a9fdbd 100644
--- a/libavcodec/mpegvideo_enc.c
+++ b/libavcodec/mpegvideo_enc.c
@@ -3585,6 +3585,7 @@ static int encode_thread(AVCodecContext *c, void *arg){
     return 0;
 }
 
+#define ADD(field)   dst->field += src->field;
 #define MERGE(field) dst->field += src->field; src->field=0
 static void merge_context_after_me(MPVEncContext *const dst, MPVEncContext *const src)
 {
@@ -3599,14 +3600,14 @@ static void merge_context_after_encode(MPVEncContext *const dst, MPVEncContext *
 
     MERGE(dct_count[0]); //note, the other dct vars are not part of the context
     MERGE(dct_count[1]);
-    MERGE(mv_bits);
-    MERGE(i_tex_bits);
-    MERGE(p_tex_bits);
-    MERGE(i_count);
-    MERGE(misc_bits);
-    MERGE(encoding_error[0]);
-    MERGE(encoding_error[1]);
-    MERGE(encoding_error[2]);
+    ADD(mv_bits);
+    ADD(i_tex_bits);
+    ADD(p_tex_bits);
+    ADD(i_count);
+    ADD(misc_bits);
+    ADD(encoding_error[0]);
+    ADD(encoding_error[1]);
+    ADD(encoding_error[2]);
 
     if (dst->dct_error_sum) {
         for(i=0; i<64; i++){
-- 
2.45.2
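The distinction patch 66 introduces: MERGE() accumulates a slice's value into the main context and zeroes the source, while the new ADD() only accumulates. Dropping the zeroing is safe exactly because encode_thread() re-initializes those statistics per slice. A sketch of the two macros on a hypothetical Stats struct (the do/while wrappers are added here for macro hygiene):

```c
#include <assert.h>

#define ADD(field)   do { dst->field += src->field; } while (0)
#define MERGE(field) do { dst->field += src->field; src->field = 0; } while (0)

/* Hypothetical stand-in for the statistics fields of MPVEncContext. */
typedef struct { int mv_bits, dct_count; } Stats;

static int demo(void)
{
    Stats d = {10, 5}, s = {3, 2};
    Stats *dst = &d, *src = &s;
    MERGE(dct_count); /* must be zeroed: merged again later */
    ADD(mv_bits);     /* reset elsewhere, so no need to zero */
    return d.mv_bits == 13 && d.dct_count == 7 &&
           s.mv_bits ==  3 && s.dct_count == 0;
}
```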


[-- Attachment #6: 0067-avcodec-mpegvideoenc-Constify-vlc-length-pointees.patch --]
[-- Type: text/x-patch, Size: 2584 bytes --]

From 658033593eb885ca4fe37804581aae52d886ed89 Mon Sep 17 00:00:00 2001
From: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Date: Wed, 19 Mar 2025 13:03:22 +0100
Subject: [PATCH 67/77] avcodec/mpegvideoenc: Constify vlc length pointees

These pointers point to static tables that must not be modified
by anyone after they have been initialized, so constify the pointees.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
 libavcodec/me_cmp.c       |  4 ++--
 libavcodec/mpegvideoenc.h | 14 +++++++-------
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/libavcodec/me_cmp.c b/libavcodec/me_cmp.c
index 09a830d15e..2a8ede5955 100644
--- a/libavcodec/me_cmp.c
+++ b/libavcodec/me_cmp.c
@@ -758,7 +758,7 @@ static int rd8x8_c(MPVEncContext *const s, const uint8_t *src1, const uint8_t *s
     LOCAL_ALIGNED_16(uint8_t, lsrc2, [64]);
     int i, last, run, bits, level, distortion, start_i;
     const int esc_length = s->ac_esc_length;
-    uint8_t *length, *last_length;
+    const uint8_t *length, *last_length;
 
     copy_block8(lsrc1, src1, 8, stride, 8);
     copy_block8(lsrc2, src2, 8, stride, 8);
@@ -831,7 +831,7 @@ static int bit8x8_c(MPVEncContext *const s, const uint8_t *src1, const uint8_t *
     LOCAL_ALIGNED_16(int16_t, temp, [64]);
     int i, last, run, bits, level, start_i;
     const int esc_length = s->ac_esc_length;
-    uint8_t *length, *last_length;
+    const uint8_t *length, *last_length;
 
     s->pdsp.diff_pixels_unaligned(temp, src1, src2, stride);
 
diff --git a/libavcodec/mpegvideoenc.h b/libavcodec/mpegvideoenc.h
index a2b544d70e..6985c41955 100644
--- a/libavcodec/mpegvideoenc.h
+++ b/libavcodec/mpegvideoenc.h
@@ -87,13 +87,13 @@ typedef struct MPVEncContext {
     int min_qcoeff;          ///< minimum encodable coefficient
     int max_qcoeff;          ///< maximum encodable coefficient
     int ac_esc_length;       ///< num of bits needed to encode the longest esc
-    uint8_t *intra_ac_vlc_length;
-    uint8_t *intra_ac_vlc_last_length;
-    uint8_t *intra_chroma_ac_vlc_length;
-    uint8_t *intra_chroma_ac_vlc_last_length;
-    uint8_t *inter_ac_vlc_length;
-    uint8_t *inter_ac_vlc_last_length;
-    uint8_t *luma_dc_vlc_length;
+    const uint8_t *intra_ac_vlc_length;
+    const uint8_t *intra_ac_vlc_last_length;
+    const uint8_t *intra_chroma_ac_vlc_length;
+    const uint8_t *intra_chroma_ac_vlc_last_length;
+    const uint8_t *inter_ac_vlc_length;
+    const uint8_t *inter_ac_vlc_last_length;
+    const uint8_t *luma_dc_vlc_length;
 
     int coded_score[12];
 
-- 
2.45.2
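Patch 67 is a pure const-correctness change: the VLC length pointers only ever reference static, initialize-once tables, so declaring the pointees const lets the compiler reject any accidental write through them while changing nothing at runtime. A minimal sketch of the pattern (the table contents here are made up for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Shared, initialize-once table — analogous to the VLC length tables. */
static const uint8_t vlc_len[4] = {2, 3, 5, 7};

typedef struct {
    const uint8_t *intra_ac_vlc_length; /* pointer-to-const: read-only handle */
} EncCtx;

static int total_bits(const EncCtx *ctx, int n)
{
    int bits = 0;
    for (int i = 0; i < n; i++)
        bits += ctx->intra_ac_vlc_length[i];
    /* ctx->intra_ac_vlc_length[0] = 0;  <- would now fail to compile */
    return bits;
}

static int demo(void)
{
    EncCtx ctx = { vlc_len };
    return total_bits(&ctx, 4);
}
```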


[-- Attachment #7: 0068-avcodec-motion_est-Move-ff_h263_round_chroma-to-h263.patch --]
[-- Type: text/x-patch, Size: 2953 bytes --]

From 8b1191cb43b627840d57568b36fdbfdc214f6070 Mon Sep 17 00:00:00 2001
From: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Date: Wed, 19 Mar 2025 13:26:37 +0100
Subject: [PATCH 68/77] avcodec/motion_est: Move ff_h263_round_chroma() to
 h263.h

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
 libavcodec/h263.h             | 10 ++++++++++
 libavcodec/motion_est.c       |  1 +
 libavcodec/motion_est.h       | 10 ----------
 libavcodec/mpegvideo_dec.c    |  1 +
 libavcodec/mpegvideo_motion.c |  1 +
 5 files changed, 13 insertions(+), 10 deletions(-)

diff --git a/libavcodec/h263.h b/libavcodec/h263.h
index 27a5f31c59..59f937070e 100644
--- a/libavcodec/h263.h
+++ b/libavcodec/h263.h
@@ -27,6 +27,16 @@
 
 #define H263_GOB_HEIGHT(h) ((h) <= 400 ? 1 : (h) <= 800 ? 2 : 4)
 
+static inline int ff_h263_round_chroma(int x)
+{
+    //FIXME static or not?
+    static const uint8_t h263_chroma_roundtab[16] = {
+    //  0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15
+        0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1,
+    };
+    return h263_chroma_roundtab[x & 0xf] + (x >> 3);
+}
+
 av_const int ff_h263_aspect_to_info(AVRational aspect);
 int16_t *ff_h263_pred_motion(MpegEncContext * s, int block, int dir,
                              int *px, int *py);
diff --git a/libavcodec/motion_est.c b/libavcodec/motion_est.c
index b2b888237b..db610c3245 100644
--- a/libavcodec/motion_est.c
+++ b/libavcodec/motion_est.c
@@ -32,6 +32,7 @@
 #include <limits.h>
 
 #include "avcodec.h"
+#include "h263.h"
 #include "mathops.h"
 #include "motion_est.h"
 #include "mpegutils.h"
diff --git a/libavcodec/motion_est.h b/libavcodec/motion_est.h
index 16975abfe1..fe3d61fafb 100644
--- a/libavcodec/motion_est.h
+++ b/libavcodec/motion_est.h
@@ -106,16 +106,6 @@ typedef struct MotionEstContext {
                              int size, int h);
 } MotionEstContext;
 
-static inline int ff_h263_round_chroma(int x)
-{
-    //FIXME static or not?
-    static const uint8_t h263_chroma_roundtab[16] = {
-    //  0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15
-        0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1,
-    };
-    return h263_chroma_roundtab[x & 0xf] + (x >> 3);
-}
-
 /**
  * Performs one-time initialization of the MotionEstContext.
  */
diff --git a/libavcodec/mpegvideo_dec.c b/libavcodec/mpegvideo_dec.c
index 8c84b59c5e..4019b4f0da 100644
--- a/libavcodec/mpegvideo_dec.c
+++ b/libavcodec/mpegvideo_dec.c
@@ -32,6 +32,7 @@
 
 #include "avcodec.h"
 #include "decode.h"
+#include "h263.h"
 #include "h264chroma.h"
 #include "internal.h"
 #include "mpegutils.h"
diff --git a/libavcodec/mpegvideo_motion.c b/libavcodec/mpegvideo_motion.c
index 6e9368dd9c..edc4931092 100644
--- a/libavcodec/mpegvideo_motion.c
+++ b/libavcodec/mpegvideo_motion.c
@@ -29,6 +29,7 @@
 
 #include "avcodec.h"
 #include "h261.h"
+#include "h263.h"
 #include "mpegutils.h"
 #include "mpegvideo.h"
 #include "mpeg4videodec.h"
-- 
2.45.2
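Patch 68 only relocates ff_h263_round_chroma() from motion_est.h to h263.h; the function itself derives a chroma motion vector from a sum of luma vectors by dividing by 8 (`x >> 3`, e.g. four 4MV luma vectors at twice the chroma resolution) with an H.263-specific rounding bias taken from the low four bits. Restated verbatim for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Same logic as the ff_h263_round_chroma() moved by the patch. */
static int round_chroma(int x)
{
    static const uint8_t roundtab[16] = {
    //  0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15
        0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1,
    };
    return roundtab[x & 0xf] + (x >> 3);
}
```

Note that for negative x the arithmetic shift rounds toward minus infinity and the table (indexed by the two's-complement low bits) supplies the correction, so the function stays well defined over the whole MV range.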


[-- Attachment #8: 0069-avcodec-mpegvideo-Move-MotionEstContext-to-MPVEncCon.patch --]
[-- Type: text/x-patch, Size: 36306 bytes --]

From fd5e6878a9956b123c1d73d6472bc4acf420a7f5 Mon Sep 17 00:00:00 2001
From: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Date: Wed, 19 Mar 2025 14:24:56 +0100
Subject: [PATCH 69/77] avcodec/mpegvideo: Move MotionEstContext to
 MPVEncContext

All that is necessary to do so is to call ff_me_init_pic()
on every slice context.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
 libavcodec/h261enc.c             |  2 +-
 libavcodec/ituh263enc.c          |  2 +-
 libavcodec/motion_est.c          | 36 ++++++++++++++--------------
 libavcodec/motion_est_template.c | 32 ++++++++++++-------------
 libavcodec/mpeg12enc.c           |  2 +-
 libavcodec/mpegvideo.c           |  6 -----
 libavcodec/mpegvideo.h           |  3 ---
 libavcodec/mpegvideo_enc.c       | 41 ++++++++++++++++----------------
 libavcodec/mpegvideoenc.h        |  4 +++-
 libavcodec/snowenc.c             | 36 ++++++++++++++--------------
 libavcodec/svq1enc.c             | 24 +++++++++----------
 11 files changed, 90 insertions(+), 98 deletions(-)

diff --git a/libavcodec/h261enc.c b/libavcodec/h261enc.c
index 7c3c8752df..ae5d7b1205 100644
--- a/libavcodec/h261enc.c
+++ b/libavcodec/h261enc.c
@@ -377,7 +377,7 @@ static av_cold int h261_encode_init(AVCodecContext *avctx)
     s->max_qcoeff       = 127;
     s->ac_esc_length    = H261_ESC_LEN;
 
-    s->c.me.mv_penalty = mv_penalty;
+    s->me.mv_penalty = mv_penalty;
 
     s->intra_ac_vlc_length      = s->inter_ac_vlc_length      = uni_h261_rl_len;
     s->intra_ac_vlc_last_length = s->inter_ac_vlc_last_length = uni_h261_rl_len_last;
diff --git a/libavcodec/ituh263enc.c b/libavcodec/ituh263enc.c
index 6bd7b6a6cd..2e087c518d 100644
--- a/libavcodec/ituh263enc.c
+++ b/libavcodec/ituh263enc.c
@@ -820,7 +820,7 @@ av_cold void ff_h263_encode_init(MPVMainEncContext *const m)
 {
     MPVEncContext *const s = &m->s;
 
-    s->c.me.mv_penalty = ff_h263_get_mv_penalty(); // FIXME exact table for MSMPEG4 & H.263+
+    s->me.mv_penalty = ff_h263_get_mv_penalty(); // FIXME exact table for MSMPEG4 & H.263+
 
     ff_h263dsp_init(&s->c.h263dsp);
 
diff --git a/libavcodec/motion_est.c b/libavcodec/motion_est.c
index db610c3245..af06acd9b2 100644
--- a/libavcodec/motion_est.c
+++ b/libavcodec/motion_est.c
@@ -110,7 +110,7 @@ static int get_flags(MotionEstContext *c, int direct, int chroma){
 static av_always_inline int cmp_direct_inline(MPVEncContext *const s, const int x, const int y, const int subx, const int suby,
                       const int size, const int h, int ref_index, int src_index,
                       me_cmp_func cmp_func, me_cmp_func chroma_cmp_func, int qpel){
-    MotionEstContext *const c = &s->c.me;
+    MotionEstContext *const c = &s->me;
     const int stride= c->stride;
     const int hx = subx + x * (1 << (1 + qpel));
     const int hy = suby + y * (1 << (1 + qpel));
@@ -182,7 +182,7 @@ static av_always_inline int cmp_direct_inline(MPVEncContext *const s, const int
 static av_always_inline int cmp_inline(MPVEncContext *const s, const int x, const int y, const int subx, const int suby,
                       const int size, const int h, int ref_index, int src_index,
                       me_cmp_func cmp_func, me_cmp_func chroma_cmp_func, int qpel, int chroma){
-    MotionEstContext *const c = &s->c.me;
+    MotionEstContext *const c = &s->me;
     const int stride= c->stride;
     const int uvstride= c->uvstride;
     const int dxy= subx + (suby<<(1+qpel)); //FIXME log2_subpel?
@@ -370,7 +370,7 @@ av_cold int ff_me_init(MotionEstContext *c, AVCodecContext *avctx,
 
 void ff_me_init_pic(MPVEncContext *const s)
 {
-    MotionEstContext * const c= &s->c.me;
+    MotionEstContext *const c = &s->me;
 
 /*FIXME s->c.no_rounding b_type*/
     if (c->avctx->flags & AV_CODEC_FLAG_QPEL) {
@@ -411,7 +411,7 @@ static int sad_hpel_motion_search(MPVEncContext *const s,
                                   int src_index, int ref_index,
                                   int size, int h)
 {
-    MotionEstContext *const c = &s->c.me;
+    MotionEstContext *const c = &s->me;
     const int penalty_factor= c->sub_penalty_factor;
     int mx, my, dminh;
     const uint8_t *pix, *ptr;
@@ -540,7 +540,7 @@ static inline void set_p_mv_tables(MPVEncContext *const s, int mx, int my, int m
  */
 static inline void get_limits(MPVEncContext *const s, int x, int y, int bframe)
 {
-    MotionEstContext *const c = &s->c.me;
+    MotionEstContext *const c = &s->me;
     int range= c->avctx->me_range >> (1 + !!(c->flags&FLAG_QPEL));
     int max_range = MAX_MV >> (1 + !!(c->flags&FLAG_QPEL));
 /*
@@ -587,7 +587,7 @@ static inline void init_mv4_ref(MotionEstContext *c){
 
 static inline int h263_mv4_search(MPVEncContext *const s, int mx, int my, int shift)
 {
-    MotionEstContext *const c = &s->c.me;
+    MotionEstContext *const c = &s->me;
     const int size= 1;
     const int h=8;
     int block;
@@ -730,7 +730,7 @@ static inline int h263_mv4_search(MPVEncContext *const s, int mx, int my, int sh
 
 static inline void init_interlaced_ref(MPVEncContext *const s, int ref_index)
 {
-    MotionEstContext *const c = &s->c.me;
+    MotionEstContext *const c = &s->me;
 
     c->ref[1+ref_index][0] = c->ref[0+ref_index][0] + s->c.linesize;
     c->src[1][0] = c->src[0][0] + s->c.linesize;
@@ -745,7 +745,7 @@ static inline void init_interlaced_ref(MPVEncContext *const s, int ref_index)
 static int interlaced_search(MPVEncContext *const s, int ref_index,
                              int16_t (*mv_tables[2][2])[2], uint8_t *field_select_tables[2], int mx, int my, int user_field_select)
 {
-    MotionEstContext *const c = &s->c.me;
+    MotionEstContext *const c = &s->me;
     const int size=0;
     const int h=8;
     int block;
@@ -888,7 +888,7 @@ static inline int get_penalty_factor(int lambda, int lambda2, int type){
 void ff_estimate_p_frame_motion(MPVEncContext *const s,
                                 int mb_x, int mb_y)
 {
-    MotionEstContext *const c = &s->c.me;
+    MotionEstContext *const c = &s->me;
     const uint8_t *pix, *ppix;
     int sum, mx = 0, my = 0, dmin = 0;
     int varc;            ///< the variance of the block (sum of squared (p[y][x]-average))
@@ -1063,7 +1063,7 @@ void ff_estimate_p_frame_motion(MPVEncContext *const s,
 int ff_pre_estimate_p_frame_motion(MPVEncContext *const s,
                                     int mb_x, int mb_y)
 {
-    MotionEstContext *const c = &s->c.me;
+    MotionEstContext *const c = &s->me;
     int mx, my, dmin;
     int P[10][2];
     const int shift = 1 + s->c.quarter_sample;
@@ -1116,7 +1116,7 @@ int ff_pre_estimate_p_frame_motion(MPVEncContext *const s,
 static int estimate_motion_b(MPVEncContext *const s, int mb_x, int mb_y,
                              int16_t (*mv_table)[2], int ref_index, int f_code)
 {
-    MotionEstContext * const c= &s->c.me;
+    MotionEstContext *const c = &s->me;
     int mx = 0, my = 0, dmin = 0;
     int P[10][2];
     const int shift= 1+s->c.quarter_sample;
@@ -1182,7 +1182,7 @@ static inline int check_bidir_mv(MPVEncContext *const s,
     //FIXME optimize?
     //FIXME better f_code prediction (max mv & distance)
     //FIXME pointers
-    MotionEstContext * const c= &s->c.me;
+    MotionEstContext *const c = &s->me;
     const uint8_t * const mv_penalty_f = c->mv_penalty[s->c.f_code] + MAX_DMV; // f_code of the prev frame
     const uint8_t * const mv_penalty_b = c->mv_penalty[s->c.b_code] + MAX_DMV; // f_code of the prev frame
     int stride= c->stride;
@@ -1239,7 +1239,7 @@ static inline int check_bidir_mv(MPVEncContext *const s,
 /* refine the bidir vectors in hq mode and return the score in both lq & hq mode*/
 static inline int bidir_refine(MPVEncContext *const s, int mb_x, int mb_y)
 {
-    MotionEstContext * const c= &s->c.me;
+    MotionEstContext *const c = &s->me;
     const int mot_stride = s->c.mb_stride;
     const int xy = mb_y *mot_stride + mb_x;
     int fbmin;
@@ -1386,7 +1386,7 @@ CHECK_BIDIR(-(a),-(b),-(c),-(d))
 
 static inline int direct_search(MPVEncContext *const s, int mb_x, int mb_y)
 {
-    MotionEstContext * const c= &s->c.me;
+    MotionEstContext *const c = &s->me;
     int P[10][2];
     const int mot_stride = s->c.mb_stride;
     const int mot_xy = mb_y*mot_stride + mb_x;
@@ -1489,7 +1489,7 @@ static inline int direct_search(MPVEncContext *const s, int mb_x, int mb_y)
 void ff_estimate_b_frame_motion(MPVEncContext *const s,
                              int mb_x, int mb_y)
 {
-    MotionEstContext * const c= &s->c.me;
+    MotionEstContext *const c = &s->me;
     int fmin, bmin, dmin, fbmin, bimin, fimin;
     int type=0;
     const int xy = mb_y*s->c.mb_stride + mb_x;
@@ -1601,7 +1601,7 @@ void ff_estimate_b_frame_motion(MPVEncContext *const s,
 int ff_get_best_fcode(MPVMainEncContext *const m, const int16_t (*mv_table)[2], int type)
 {
     MPVEncContext *const s = &m->s;
-    MotionEstContext *const c = &s->c.me;
+    MotionEstContext *const c = &s->me;
 
     if (c->motion_est != FF_ME_ZERO) {
         int score[8];
@@ -1656,7 +1656,7 @@ int ff_get_best_fcode(MPVMainEncContext *const m, const int16_t (*mv_table)[2],
 
 void ff_fix_long_p_mvs(MPVEncContext *const s, int type)
 {
-    MotionEstContext * const c= &s->c.me;
+    MotionEstContext *const c = &s->me;
     const int f_code= s->c.f_code;
     int y, range;
     av_assert0(s->c.pict_type==AV_PICTURE_TYPE_P);
@@ -1706,7 +1706,7 @@ void ff_fix_long_p_mvs(MPVEncContext *const s, int type)
 void ff_fix_long_mvs(MPVEncContext *const s, uint8_t *field_select_table, int field_select,
                      int16_t (*mv_table)[2], int f_code, int type, int truncate)
 {
-    MotionEstContext * const c= &s->c.me;
+    MotionEstContext *const c = &s->me;
     int y, h_range, v_range;
 
     // RAL: 8 in MPEG-1, 16 in MPEG-4
diff --git a/libavcodec/motion_est_template.c b/libavcodec/motion_est_template.c
index 7c7e645625..aa669e0ee7 100644
--- a/libavcodec/motion_est_template.c
+++ b/libavcodec/motion_est_template.c
@@ -52,7 +52,7 @@ static int hpel_motion_search(MPVEncContext *const s,
                                   int src_index, int ref_index,
                                   int size, int h)
 {
-    MotionEstContext * const c= &s->c.me;
+    MotionEstContext *const c = &s->me;
     const int mx = *mx_ptr;
     const int my = *my_ptr;
     const int penalty_factor= c->sub_penalty_factor;
@@ -166,7 +166,7 @@ static inline int get_mb_score(MPVEncContext *const s, int mx, int my,
                                int src_index, int ref_index, int size,
                                int h, int add_rate)
 {
-    MotionEstContext * const c= &s->c.me;
+    MotionEstContext *const c = &s->me;
     const int penalty_factor= c->mb_penalty_factor;
     const int flags= c->mb_flags;
     const int qpel= flags & FLAG_QPEL;
@@ -209,7 +209,7 @@ static int qpel_motion_search(MPVEncContext *const s,
                                   int src_index, int ref_index,
                                   int size, int h)
 {
-    MotionEstContext * const c= &s->c.me;
+    MotionEstContext *const c = &s->me;
     const int mx = *mx_ptr;
     const int my = *my_ptr;
     const int penalty_factor= c->sub_penalty_factor;
@@ -256,7 +256,7 @@ static int qpel_motion_search(MPVEncContext *const s,
         int best_pos[8][2];
 
         memset(best, 64, sizeof(int)*8);
-        if(s->c.me.dia_size>=2){
+        if(s->me.dia_size>=2){
             const int tl= score_map[(index-(1<<ME_MAP_SHIFT)-1)&(ME_MAP_SIZE-1)];
             const int bl= score_map[(index+(1<<ME_MAP_SHIFT)-1)&(ME_MAP_SIZE-1)];
             const int tr= score_map[(index-(1<<ME_MAP_SHIFT)+1)&(ME_MAP_SIZE-1)];
@@ -417,7 +417,7 @@ static av_always_inline int small_diamond_search(MPVEncContext *const s, int *be
                                        int src_index, int ref_index, const int penalty_factor,
                                        int size, int h, int flags)
 {
-    MotionEstContext * const c= &s->c.me;
+    MotionEstContext *const c = &s->me;
     me_cmp_func cmpf, chroma_cmpf;
     int next_dir=-1;
     LOAD_COMMON
@@ -458,7 +458,7 @@ static int funny_diamond_search(MPVEncContext *const s, int *best, int dmin,
                                        int src_index, int ref_index, const int penalty_factor,
                                        int size, int h, int flags)
 {
-    MotionEstContext * const c= &s->c.me;
+    MotionEstContext *const c = &s->me;
     me_cmp_func cmpf, chroma_cmpf;
     int dia_size;
     LOAD_COMMON
@@ -500,7 +500,7 @@ static int hex_search(MPVEncContext *const s, int *best, int dmin,
                                        int src_index, int ref_index, const int penalty_factor,
                                        int size, int h, int flags, int dia_size)
 {
-    MotionEstContext * const c= &s->c.me;
+    MotionEstContext *const c = &s->me;
     me_cmp_func cmpf, chroma_cmpf;
     LOAD_COMMON
     LOAD_COMMON2
@@ -534,7 +534,7 @@ static int l2s_dia_search(MPVEncContext *const s, int *best, int dmin,
                                        int src_index, int ref_index, const int penalty_factor,
                                        int size, int h, int flags)
 {
-    MotionEstContext * const c= &s->c.me;
+    MotionEstContext *const c = &s->me;
     me_cmp_func cmpf, chroma_cmpf;
     LOAD_COMMON
     LOAD_COMMON2
@@ -572,7 +572,7 @@ static int umh_search(MPVEncContext *const s, int *best, int dmin,
                                        int src_index, int ref_index, const int penalty_factor,
                                        int size, int h, int flags)
 {
-    MotionEstContext * const c= &s->c.me;
+    MotionEstContext *const c = &s->me;
     me_cmp_func cmpf, chroma_cmpf;
     LOAD_COMMON
     LOAD_COMMON2
@@ -619,7 +619,7 @@ static int full_search(MPVEncContext *const s, int *best, int dmin,
                                        int src_index, int ref_index, const int penalty_factor,
                                        int size, int h, int flags)
 {
-    MotionEstContext * const c= &s->c.me;
+    MotionEstContext *const c = &s->me;
     me_cmp_func cmpf, chroma_cmpf;
     LOAD_COMMON
     LOAD_COMMON2
@@ -682,7 +682,7 @@ static int sab_diamond_search(MPVEncContext *const s, int *best, int dmin,
                                        int src_index, int ref_index, const int penalty_factor,
                                        int size, int h, int flags)
 {
-    MotionEstContext * const c= &s->c.me;
+    MotionEstContext *const c = &s->me;
     me_cmp_func cmpf, chroma_cmpf;
     Minima minima[MAX_SAB_SIZE];
     const int minima_count= FFABS(c->dia_size);
@@ -772,7 +772,7 @@ static int var_diamond_search(MPVEncContext *const s, int *best, int dmin,
                                        int src_index, int ref_index, const int penalty_factor,
                                        int size, int h, int flags)
 {
-    MotionEstContext * const c= &s->c.me;
+    MotionEstContext *const c = &s->me;
     me_cmp_func cmpf, chroma_cmpf;
     int dia_size;
     LOAD_COMMON
@@ -832,7 +832,7 @@ static int var_diamond_search(MPVEncContext *const s, int *best, int dmin,
 static av_always_inline int diamond_search(MPVEncContext *const s, int *best, int dmin,
                                        int src_index, int ref_index, const int penalty_factor,
                                        int size, int h, int flags){
-    MotionEstContext * const c= &s->c.me;
+    MotionEstContext *const c = &s->me;
     if(c->dia_size==-1)
         return funny_diamond_search(s, best, dmin, src_index, ref_index, penalty_factor, size, h, flags);
     else if(c->dia_size<-1)
@@ -861,7 +861,7 @@ static av_always_inline int epzs_motion_search_internal(MPVEncContext *const s,
                              int P[10][2], int src_index, int ref_index, const int16_t (*last_mv)[2],
                              int ref_mv_scale, int flags, int size, int h)
 {
-    MotionEstContext * const c= &s->c.me;
+    MotionEstContext *const c = &s->me;
     int best[2]={0, 0};      /**< x and y coordinates of the best motion vector.
                                i.e. the difference between the position of the
                                block currently being encoded and the position of
@@ -979,7 +979,7 @@ int ff_epzs_motion_search(MPVEncContext *const s, int *mx_ptr, int *my_ptr,
                           const int16_t (*last_mv)[2], int ref_mv_scale,
                           int size, int h)
 {
-    MotionEstContext * const c= &s->c.me;
+    MotionEstContext *const c = &s->me;
 //FIXME convert other functions in the same way if faster
     if(c->flags==0 && h==16 && size==0){
         return epzs_motion_search_internal(s, mx_ptr, my_ptr, P, src_index, ref_index, last_mv, ref_mv_scale, 0, 0, 16);
@@ -995,7 +995,7 @@ static int epzs_motion_search2(MPVEncContext *const s,
                              int src_index, int ref_index, const int16_t (*last_mv)[2],
                              int ref_mv_scale, const int size)
 {
-    MotionEstContext * const c= &s->c.me;
+    MotionEstContext *const c = &s->me;
     int best[2]={0, 0};
     int d, dmin;
     unsigned map_generation;
diff --git a/libavcodec/mpeg12enc.c b/libavcodec/mpeg12enc.c
index 584573b466..e4ac256538 100644
--- a/libavcodec/mpeg12enc.c
+++ b/libavcodec/mpeg12enc.c
@@ -1112,7 +1112,7 @@ static av_cold int encode_init(AVCodecContext *avctx)
     m->encode_picture_header = mpeg1_encode_picture_header;
     s->encode_mb             = mpeg12_encode_mb;
 
-    s->c.me.mv_penalty = mv_penalty;
+    s->me.mv_penalty = mv_penalty;
     m->fcode_tab     = fcode_tab + MAX_MV;
     if (avctx->codec_id == AV_CODEC_ID_MPEG1VIDEO) {
         s->min_qcoeff = -255;
diff --git a/libavcodec/mpegvideo.c b/libavcodec/mpegvideo.c
index a65125cc13..efc9ee24d6 100644
--- a/libavcodec/mpegvideo.c
+++ b/libavcodec/mpegvideo.c
@@ -406,7 +406,6 @@ static av_cold void free_duplicate_context(MpegEncContext *s)
 
     av_freep(&s->sc.edge_emu_buffer);
     av_freep(&s->sc.scratchpad_buf);
-    s->me.temp = s->me.scratchpad =
     s->sc.obmc_scratchpad = NULL;
     s->sc.linesize = 0;
 
@@ -428,13 +427,10 @@ static void backup_duplicate_context(MpegEncContext *bak, MpegEncContext *src)
 {
 #define COPY(a) bak->a = src->a
     COPY(sc);
-    COPY(me.map);
-    COPY(me.score_map);
     COPY(blocks);
     COPY(block);
     COPY(start_mb_y);
     COPY(end_mb_y);
-    COPY(me.map_generation);
     COPY(ac_val_base);
     COPY(ac_val[0]);
     COPY(ac_val[1]);
@@ -636,8 +632,6 @@ static void clear_context(MpegEncContext *s)
     s->ac_val[0] =
     s->ac_val[1] =
     s->ac_val[2] =NULL;
-    s->me.scratchpad = NULL;
-    s->me.temp = NULL;
     memset(&s->sc, 0, sizeof(s->sc));
 
 
diff --git a/libavcodec/mpegvideo.h b/libavcodec/mpegvideo.h
index 5301338188..1dcfca6b03 100644
--- a/libavcodec/mpegvideo.h
+++ b/libavcodec/mpegvideo.h
@@ -35,7 +35,6 @@
 #include "h263dsp.h"
 #include "hpeldsp.h"
 #include "idctdsp.h"
-#include "motion_est.h"
 #include "mpegpicture.h"
 #include "qpeldsp.h"
 #include "videodsp.h"
@@ -205,8 +204,6 @@ typedef struct MpegEncContext {
     int last_mv[2][2][2];             ///< last MV, used for MV prediction in MPEG-1 & B-frame MPEG-4
     int16_t direct_scale_mv[2][64];   ///< precomputed to avoid divisions in ff_mpeg4_set_direct_mv
 
-    MotionEstContext me;
-
     int no_rounding;  /**< apply no rounding to motion compensation (MPEG-4, msmpeg4, ...)
                         for B-frames rounding mode is always 0 */
 
diff --git a/libavcodec/mpegvideo_enc.c b/libavcodec/mpegvideo_enc.c
index 7492a9fdbd..116ca007ba 100644
--- a/libavcodec/mpegvideo_enc.c
+++ b/libavcodec/mpegvideo_enc.c
@@ -318,7 +318,7 @@ static av_cold int me_cmp_init(MPVMainEncContext *const m, AVCodecContext *avctx
     int ret;
 
     ff_me_cmp_init(&mecc, avctx);
-    ret = ff_me_init(&s->c.me, avctx, &mecc, 1);
+    ret = ff_me_init(&s->me, avctx, &mecc, 1);
     if (ret < 0)
         return ret;
     ret = ff_set_cmp(&mecc, me_cmp, m->frame_skip_cmp, 1);
@@ -432,7 +432,7 @@ static av_cold int init_buffers(MPVMainEncContext *const m, AVCodecContext *avct
 #else
         ALIGN = 128,
 #endif
-        ME_MAP_ALLOC_SIZE = FFALIGN(2 * ME_MAP_SIZE * sizeof(*s->c.me.map), ALIGN),
+        ME_MAP_ALLOC_SIZE = FFALIGN(2 * ME_MAP_SIZE * sizeof(*s->me.map), ALIGN),
         DCT_ERROR_SIZE    = FFALIGN(2 * sizeof(*s->dct_error_sum), ALIGN),
     };
     static_assert(FFMAX(ME_MAP_ALLOC_SIZE, DCT_ERROR_SIZE) * MAX_THREADS + ALIGN - 1 <= SIZE_MAX,
@@ -495,8 +495,8 @@ static av_cold int init_buffers(MPVMainEncContext *const m, AVCodecContext *avct
         s2->mb_mean      = (uint8_t*)(s2->mb_var + mb_array_size);
         s2->lambda_table = s->lambda_table;
 
-        s2->c.me.map     = (uint32_t*)me_map;
-        s2->c.me.score_map = s2->c.me.map + ME_MAP_SIZE;
+        s2->me.map       = (uint32_t*)me_map;
+        s2->me.score_map = s2->me.map + ME_MAP_SIZE;
         me_map          += ME_MAP_ALLOC_SIZE;
 
         s2->p_mv_table            = tmp_mv_table;
@@ -2791,8 +2791,8 @@ static int pre_estimate_motion_thread(AVCodecContext *c, void *arg){
     MPVEncContext *const s = *(void**)arg;
 
 
-    s->c.me.pre_pass = 1;
-    s->c.me.dia_size = s->c.avctx->pre_dia_size;
+    s->me.pre_pass = 1;
+    s->me.dia_size = s->c.avctx->pre_dia_size;
     s->c.first_slice_line = 1;
     for (s->c.mb_y = s->c.end_mb_y - 1; s->c.mb_y >= s->c.start_mb_y; s->c.mb_y--) {
         for (s->c.mb_x = s->c.mb_width - 1; s->c.mb_x >=0 ; s->c.mb_x--)
@@ -2800,7 +2800,7 @@ static int pre_estimate_motion_thread(AVCodecContext *c, void *arg){
         s->c.first_slice_line = 0;
     }
 
-    s->c.me.pre_pass = 0;
+    s->me.pre_pass = 0;
 
     return 0;
 }
@@ -2808,7 +2808,7 @@ static int pre_estimate_motion_thread(AVCodecContext *c, void *arg){
 static int estimate_motion_thread(AVCodecContext *c, void *arg){
     MPVEncContext *const s = *(void**)arg;
 
-    s->c.me.dia_size= s->c.avctx->dia_size;
+    s->me.dia_size = s->c.avctx->dia_size;
     s->c.first_slice_line=1;
     for (s->c.mb_y = s->c.start_mb_y; s->c.mb_y < s->c.end_mb_y; s->c.mb_y++) {
         s->c.mb_x=0; //for block init below
@@ -2846,7 +2846,7 @@ static int mb_var_thread(AVCodecContext *c, void *arg){
 
             s->mb_var [s->c.mb_stride * mb_y + mb_x] = varc;
             s->mb_mean[s->c.mb_stride * mb_y + mb_x] = (sum+128)>>8;
-            s->c.me.mb_var_sum_temp    += varc;
+            s->me.mb_var_sum_temp    += varc;
         }
     }
     return 0;
@@ -3589,9 +3589,9 @@ static int encode_thread(AVCodecContext *c, void *arg){
 #define MERGE(field) dst->field += src->field; src->field=0
 static void merge_context_after_me(MPVEncContext *const dst, MPVEncContext *const src)
 {
-    MERGE(c.me.scene_change_score);
-    MERGE(c.me.mc_mb_var_sum_temp);
-    MERGE(c.me.mb_var_sum_temp);
+    MERGE(me.scene_change_score);
+    MERGE(me.mc_mb_var_sum_temp);
+    MERGE(me.mb_var_sum_temp);
 }
 
 static void merge_context_after_encode(MPVEncContext *const dst, MPVEncContext *const src)
@@ -3684,8 +3684,8 @@ static int encode_picture(MPVMainEncContext *const m, const AVPacket *pkt)
     int context_count = s->c.slice_context_count;
 
     /* Reset the average MB variance */
-    s->c.me.mb_var_sum_temp    =
-    s->c.me.mc_mb_var_sum_temp = 0;
+    s->me.mb_var_sum_temp    =
+    s->me.mc_mb_var_sum_temp = 0;
 
     /* we need to initialize some time vars before we can encode B-frames */
     // RAL: Condition added for MPEG1VIDEO
@@ -3694,7 +3694,7 @@ static int encode_picture(MPVMainEncContext *const m, const AVPacket *pkt)
     if (CONFIG_MPEG4_ENCODER && s->c.codec_id == AV_CODEC_ID_MPEG4)
         ff_set_mpeg4_time(s);
 
-    s->c.me.scene_change_score=0;
+    s->me.scene_change_score=0;
 
 //    s->lambda = s->c.cur_pic.ptr->quality; //FIXME qscale / ... stuff for ME rate distortion
 
@@ -3717,8 +3717,6 @@ static int encode_picture(MPVMainEncContext *const m, const AVPacket *pkt)
         update_qscale(m);
     }
 
-    ff_me_init_pic(s);
-
     s->c.mb_intra = 0; //for the rate distortion & bit compare functions
     for (int i = 0; i < context_count; i++) {
         MPVEncContext *const slice = s->c.enc_contexts[i];
@@ -3732,7 +3730,8 @@ static int encode_picture(MPVMainEncContext *const m, const AVPacket *pkt)
             slice->lambda  = s->lambda;
             slice->lambda2 = s->lambda2;
         }
-        slice->c.me.temp = slice->c.me.scratchpad = slice->c.sc.scratchpad_buf;
+        slice->me.temp = slice->me.scratchpad = slice->c.sc.scratchpad_buf;
+        ff_me_init_pic(slice);
 
         h     = s->c.mb_height;
         start = pkt->data + (size_t)(((int64_t) pkt->size) * slice->c.start_mb_y / h);
@@ -3770,11 +3769,11 @@ static int encode_picture(MPVMainEncContext *const m, const AVPacket *pkt)
     for(i=1; i<context_count; i++){
         merge_context_after_me(s, s->c.enc_contexts[i]);
     }
-    m->mc_mb_var_sum = s->c.me.mc_mb_var_sum_temp;
-    m->mb_var_sum    = s->c.me.   mb_var_sum_temp;
+    m->mc_mb_var_sum = s->me.mc_mb_var_sum_temp;
+    m->mb_var_sum    = s->me.   mb_var_sum_temp;
     emms_c();
 
-    if (s->c.me.scene_change_score > m->scenechange_threshold &&
+    if (s->me.scene_change_score > m->scenechange_threshold &&
         s->c.pict_type == AV_PICTURE_TYPE_P) {
         s->c.pict_type = AV_PICTURE_TYPE_I;
         for (int i = 0; i < s->c.mb_stride * s->c.mb_height; i++)
diff --git a/libavcodec/mpegvideoenc.h b/libavcodec/mpegvideoenc.h
index 6985c41955..34438714e0 100644
--- a/libavcodec/mpegvideoenc.h
+++ b/libavcodec/mpegvideoenc.h
@@ -33,6 +33,7 @@
 #include "libavutil/avassert.h"
 #include "libavutil/opt.h"
 #include "fdctdsp.h"
+#include "motion_est.h"
 #include "mpegvideo.h"
 #include "mpegvideoencdsp.h"
 #include "pixblockdsp.h"
@@ -65,6 +66,7 @@ typedef struct MPVEncContext {
     FDCTDSPContext fdsp;
     MpegvideoEncDSPContext mpvencdsp;
     PixblockDSPContext pdsp;
+    MotionEstContext me;
 
     int16_t (*p_mv_table)[2];            ///< MV table (1MV per MB) P-frame
     int16_t (*b_forw_mv_table)[2];       ///< MV table (1MV per MB) forward mode B-frame
@@ -348,7 +350,7 @@ FF_MPV_OPT_CMP_FUNC, \
 
 #define FF_MPV_COMMON_MOTION_EST_OPTS \
 { "mv0",            "always try a mb with mv=<0,0>",                     0, AV_OPT_TYPE_CONST, { .i64 = FF_MPV_FLAG_MV0 },    0, 0, FF_MPV_OPT_FLAGS, .unit = "mpv_flags" },\
-{"motion_est", "motion estimation algorithm",                       FF_MPV_OFFSET(c.me.motion_est), AV_OPT_TYPE_INT, {.i64 = FF_ME_EPZS }, FF_ME_ZERO, FF_ME_XONE, FF_MPV_OPT_FLAGS, .unit = "motion_est" },   \
+{"motion_est", "motion estimation algorithm",                       FF_MPV_OFFSET(me.motion_est), AV_OPT_TYPE_INT, {.i64 = FF_ME_EPZS }, FF_ME_ZERO, FF_ME_XONE, FF_MPV_OPT_FLAGS, .unit = "motion_est" },   \
 { "zero", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = FF_ME_ZERO }, 0, 0, FF_MPV_OPT_FLAGS, .unit = "motion_est" }, \
 { "epzs", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = FF_ME_EPZS }, 0, 0, FF_MPV_OPT_FLAGS, .unit = "motion_est" }, \
 { "xone", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = FF_ME_XONE }, 0, 0, FF_MPV_OPT_FLAGS, .unit = "motion_est" }, \
diff --git a/libavcodec/snowenc.c b/libavcodec/snowenc.c
index 269fc8a599..0c99b20379 100644
--- a/libavcodec/snowenc.c
+++ b/libavcodec/snowenc.c
@@ -217,7 +217,7 @@ static av_cold int encode_init(AVCodecContext *avctx)
     mcf(12,12)
 
     ff_me_cmp_init(&enc->mecc, avctx);
-    ret = ff_me_init(&mpv->c.me, avctx, &enc->mecc, 0);
+    ret = ff_me_init(&mpv->me, avctx, &enc->mecc, 0);
     if (ret < 0)
         return ret;
     ff_mpegvideoencdsp_init(&enc->mpvencdsp, avctx);
@@ -232,15 +232,15 @@ static av_cold int encode_init(AVCodecContext *avctx)
     enc->m.lmax  = avctx->mb_lmax;
     mpv->c.mb_num  = (avctx->width * avctx->height + 255) / 256; // For ratecontrol
 
-    mpv->c.me.temp      =
-    mpv->c.me.scratchpad = av_calloc(avctx->width + 64, 2*16*2*sizeof(uint8_t));
+    mpv->me.temp      =
+    mpv->me.scratchpad = av_calloc(avctx->width + 64, 2*16*2*sizeof(uint8_t));
     mpv->c.sc.obmc_scratchpad= av_mallocz(MB_SIZE*MB_SIZE*12*sizeof(uint32_t));
-    mpv->c.me.map       = av_mallocz(2 * ME_MAP_SIZE * sizeof(*mpv->c.me.map));
-    if (!mpv->c.me.scratchpad || !mpv->c.me.map || !mpv->c.sc.obmc_scratchpad)
+    mpv->me.map       = av_mallocz(2 * ME_MAP_SIZE * sizeof(*mpv->me.map));
+    if (!mpv->me.scratchpad || !mpv->me.map || !mpv->c.sc.obmc_scratchpad)
         return AVERROR(ENOMEM);
-    mpv->c.me.score_map = mpv->c.me.map + ME_MAP_SIZE;
+    mpv->me.score_map = mpv->me.map + ME_MAP_SIZE;
 
-    mpv->c.me.mv_penalty = ff_h263_get_mv_penalty();
+    mpv->me.mv_penalty = ff_h263_get_mv_penalty();
 
     s->max_ref_frames = av_clip(avctx->refs, 1, MAX_REF_FRAMES);
 
@@ -369,7 +369,7 @@ static inline int get_penalty_factor(int lambda, int lambda2, int type){
 static int encode_q_branch(SnowEncContext *enc, int level, int x, int y)
 {
     SnowContext      *const s = &enc->com;
-    MotionEstContext *const c = &enc->m.s.c.me;
+    MotionEstContext *const c = &enc->m.s.me;
     uint8_t p_buffer[1024];
     uint8_t i_buffer[1024];
     uint8_t p_state[sizeof(s->block_state)];
@@ -840,12 +840,12 @@ static int get_block_rd(SnowEncContext *enc, int mb_x, int mb_y,
             distortion = 0;
             for(i=0; i<4; i++){
                 int off = sx+16*(i&1) + (sy+16*(i>>1))*ref_stride;
-                distortion += enc->m.s.c.me.me_cmp[0](&enc->m.s, src + off, dst + off, ref_stride, 16);
+                distortion += enc->m.s.me.me_cmp[0](&enc->m.s, src + off, dst + off, ref_stride, 16);
             }
         }
     }else{
         av_assert2(block_w==8);
-        distortion = enc->m.s.c.me.me_cmp[0](&enc->m.s, src + sx + sy*ref_stride, dst + sx + sy*ref_stride, ref_stride, block_w*2);
+        distortion = enc->m.s.me.me_cmp[0](&enc->m.s, src + sx + sy*ref_stride, dst + sx + sy*ref_stride, ref_stride, block_w*2);
     }
 
     if(plane_index==0){
@@ -911,7 +911,7 @@ static int get_4block_rd(SnowEncContext *enc, int mb_x, int mb_y, int plane_inde
         }
 
         av_assert1(block_w== 8 || block_w==16);
-        distortion += enc->m.s.c.me.me_cmp[block_w==8](&enc->m.s, src + x + y*ref_stride, dst + x + y*ref_stride, ref_stride, block_h);
+        distortion += enc->m.s.me.me_cmp[block_w==8](&enc->m.s, src + x + y*ref_stride, dst + x + y*ref_stride, ref_stride, block_h);
     }
 
     if(plane_index==0){
@@ -1866,9 +1866,9 @@ static int encode_frame(AVCodecContext *avctx, AVPacket *pkt,
         mpv->c.b8_stride  = 2 * mpv->c.mb_width + 1;
         mpv->c.f_code     = 1;
         mpv->c.pict_type  = pic->pict_type;
-        mpv->c.me.motion_est = enc->motion_est;
-        mpv->c.me.scene_change_score = 0;
-        mpv->c.me.dia_size = avctx->dia_size;
+        mpv->me.motion_est = enc->motion_est;
+        mpv->me.scene_change_score = 0;
+        mpv->me.dia_size = avctx->dia_size;
         mpv->c.quarter_sample  = (s->avctx->flags & AV_CODEC_FLAG_QPEL)!=0;
         mpv->c.out_format      = FMT_H263;
         mpv->c.unrestricted_mv = 1;
@@ -1937,7 +1937,7 @@ redo_frame:
             if(   plane_index==0
                && pic->pict_type == AV_PICTURE_TYPE_P
                && !(avctx->flags&AV_CODEC_FLAG_PASS2)
-               && mpv->c.me.scene_change_score > enc->scenechange_threshold) {
+               && mpv->me.scene_change_score > enc->scenechange_threshold) {
                 ff_init_range_encoder(c, pkt->data, pkt->size);
                 ff_build_rac_states(c, (1LL<<32)/20, 256-8);
                 pic->pict_type= AV_PICTURE_TYPE_I;
@@ -2092,9 +2092,9 @@ static av_cold int encode_end(AVCodecContext *avctx)
         av_freep(&s->ref_scores[i]);
     }
 
-    enc->m.s.c.me.temp = NULL;
-    av_freep(&enc->m.s.c.me.scratchpad);
-    av_freep(&enc->m.s.c.me.map);
+    enc->m.s.me.temp = NULL;
+    av_freep(&enc->m.s.me.scratchpad);
+    av_freep(&enc->m.s.me.map);
     av_freep(&enc->m.s.c.sc.obmc_scratchpad);
 
     av_freep(&avctx->stats_out);
diff --git a/libavcodec/svq1enc.c b/libavcodec/svq1enc.c
index 6e5146d6a5..13f86a76db 100644
--- a/libavcodec/svq1enc.c
+++ b/libavcodec/svq1enc.c
@@ -339,7 +339,7 @@ static int svq1_encode_plane(SVQ1EncContext *s, int plane,
         s2->b8_stride                     = 2 * s2->mb_width + 1;
         s2->f_code                        = 1;
         s2->pict_type                     = s->pict_type;
-        s2->me.scene_change_score         = 0;
+        s->m.me.scene_change_score        = 0;
         // s2->out_format                    = FMT_H263;
         // s2->unrestricted_mv               = 1;
         s->m.lambda                       = s->quality;
@@ -374,7 +374,7 @@ static int svq1_encode_plane(SVQ1EncContext *s, int plane,
                                              s2->mb_stride + 1;
         ff_me_init_pic(&s->m);
 
-        s2->me.dia_size      = s->avctx->dia_size;
+        s->m.me.dia_size     = s->avctx->dia_size;
         s2->first_slice_line = 1;
         for (y = 0; y < block_height; y++) {
             s->m.new_pic->data[0]  = src - y * 16 * stride; // ugly
@@ -539,8 +539,8 @@ static av_cold int svq1_encode_end(AVCodecContext *avctx)
                s->rd_total / (double)(avctx->width * avctx->height *
                                       avctx->frame_num));
 
-    av_freep(&s->m.c.me.scratchpad);
-    av_freep(&s->m.c.me.map);
+    av_freep(&s->m.me.scratchpad);
+    av_freep(&s->m.me.map);
     av_freep(&s->mb_type);
     av_freep(&s->dummy);
     av_freep(&s->scratchbuf);
@@ -585,7 +585,7 @@ static av_cold int svq1_encode_init(AVCodecContext *avctx)
 
     ff_hpeldsp_init(&s->hdsp, avctx->flags);
     ff_me_cmp_init(&s->mecc, avctx);
-    ret = ff_me_init(&s->m.c.me, avctx, &s->mecc, 0);
+    ret = ff_me_init(&s->m.me, avctx, &s->mecc, 0);
     if (ret < 0)
         return ret;
     ff_mpegvideoencdsp_init(&s->m.mpvencdsp, avctx);
@@ -613,24 +613,24 @@ static av_cold int svq1_encode_init(AVCodecContext *avctx)
         return ret;
 
     s->m.c.picture_structure = PICT_FRAME;
-    s->m.c.me.temp           =
-    s->m.c.me.scratchpad     = av_mallocz((avctx->width + 64) *
+    s->m.me.temp           =
+    s->m.me.scratchpad     = av_mallocz((avctx->width + 64) *
                                         2 * 16 * 2 * sizeof(uint8_t));
     s->mb_type             = av_mallocz((s->y_block_width + 1) *
                                         s->y_block_height * sizeof(int16_t));
     s->dummy               = av_mallocz((s->y_block_width + 1) *
                                         s->y_block_height * sizeof(int32_t));
-    s->m.c.me.map            = av_mallocz(2 * ME_MAP_SIZE * sizeof(*s->m.c.me.map));
+    s->m.me.map            = av_mallocz(2 * ME_MAP_SIZE * sizeof(*s->m.me.map));
     s->m.new_pic       = av_frame_alloc();
 
-    if (!s->m.c.me.scratchpad || !s->m.c.me.map ||
+    if (!s->m.me.scratchpad || !s->m.me.map ||
         !s->mb_type || !s->dummy || !s->m.new_pic)
         return AVERROR(ENOMEM);
-    s->m.c.me.score_map = s->m.c.me.map + ME_MAP_SIZE;
+    s->m.me.score_map = s->m.me.map + ME_MAP_SIZE;
 
     ff_svq1enc_init(&s->svq1encdsp);
 
-    s->m.c.me.mv_penalty = ff_h263_get_mv_penalty();
+    s->m.me.mv_penalty = ff_h263_get_mv_penalty();
 
     return write_ident(avctx, s->avctx->flags & AV_CODEC_FLAG_BITEXACT ? "Lavc" : LIBAVCODEC_IDENT);
 }
@@ -718,7 +718,7 @@ static int svq1_encode_frame(AVCodecContext *avctx, AVPacket *pkt,
 #define OFFSET(x) offsetof(struct SVQ1EncContext, x)
 #define VE AV_OPT_FLAG_VIDEO_PARAM | AV_OPT_FLAG_ENCODING_PARAM
 static const AVOption options[] = {
-    { "motion-est", "Motion estimation algorithm", OFFSET(m.c.me.motion_est), AV_OPT_TYPE_INT, { .i64 = FF_ME_EPZS }, FF_ME_ZERO, FF_ME_XONE, VE, .unit = "motion-est"},
+    { "motion-est", "Motion estimation algorithm", OFFSET(m.me.motion_est), AV_OPT_TYPE_INT, { .i64 = FF_ME_EPZS }, FF_ME_ZERO, FF_ME_XONE, VE, .unit = "motion-est"},
         { "zero", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = FF_ME_ZERO }, 0, 0, FF_MPV_OPT_FLAGS, .unit = "motion-est" },
         { "epzs", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = FF_ME_EPZS }, 0, 0, FF_MPV_OPT_FLAGS, .unit = "motion-est" },
         { "xone", NULL, 0, AV_OPT_TYPE_CONST, { .i64 = FF_ME_XONE }, 0, 0, FF_MPV_OPT_FLAGS, .unit = "motion-est" },
-- 
2.45.2

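The init_buffers() hunk in the patch above carves each slice's me.map and me.score_map out of one aligned allocation, with score_map aliasing the second half of the chunk. A standalone sketch of that layout — the demo_* names are hypothetical, and the ME_MAP_SIZE/ALIGN values are assumptions for the demo (the real constants live in motion_est.h and init_buffers()):

```c
#include <stdint.h>
#include <stddef.h>

/* Assumed demo values; not the authoritative FFmpeg constants. */
enum { DEMO_ME_MAP_SIZE = 64, DEMO_ALIGN = 64 };

/* Mirrors FFALIGN(): round x up to the next multiple of a (a a power of two). */
#define DEMO_FFALIGN(x, a) (((x) + (a) - 1) & ~(size_t)((a) - 1))

/* Size of one slice's chunk: map and score_map back to back, aligned. */
static size_t demo_me_map_alloc_size(void)
{
    return DEMO_FFALIGN(2 * DEMO_ME_MAP_SIZE * sizeof(uint32_t), DEMO_ALIGN);
}

/* score_map = map + ME_MAP_SIZE: the second half of the same chunk. */
static ptrdiff_t demo_score_map_offset(void)
{
    uint32_t buf[2 * DEMO_ME_MAP_SIZE];
    uint32_t *map = buf, *score_map = map + DEMO_ME_MAP_SIZE;
    return score_map - map;
}
```

Since score_map only aliases into the same allocation, freeing map releases both, which is why the patch never frees score_map separately.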

[-- Attachment #9: 0070-avcodec-mpegvideo_enc-Move-code-to-initialize-variab.patch --]
[-- Type: text/x-patch, Size: 1972 bytes --]

From df3ea2c7516e6cdbb3c5c4befb80744d7a291099 Mon Sep 17 00:00:00 2001
From: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Date: Wed, 19 Mar 2025 14:47:00 +0100
Subject: [PATCH 70/77] avcodec/mpegvideo_enc: Move code to initialize
 variables immediately

Also avoid a redundant cast and extra parentheses.
(This is only possible now because ff_update_duplicate_context()
no longer touches the PutBitContext.)

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
 libavcodec/mpegvideo_enc.c | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/libavcodec/mpegvideo_enc.c b/libavcodec/mpegvideo_enc.c
index 116ca007ba..62e3e5a22f 100644
--- a/libavcodec/mpegvideo_enc.c
+++ b/libavcodec/mpegvideo_enc.c
@@ -3720,8 +3720,11 @@ static int encode_picture(MPVMainEncContext *const m, const AVPacket *pkt)
     s->c.mb_intra = 0; //for the rate distortion & bit compare functions
     for (int i = 0; i < context_count; i++) {
         MPVEncContext *const slice = s->c.enc_contexts[i];
-        uint8_t *start, *end;
-        int h;
+        int h = s->c.mb_height;
+        uint8_t *start = pkt->data + (int64_t)pkt->size * slice->c.start_mb_y / h;
+        uint8_t *end   = pkt->data + (int64_t)pkt->size * slice->c.  end_mb_y / h;
+
+        init_put_bits(&slice->pb, start, end - start);
 
         if (i) {
             ret = ff_update_duplicate_context(&slice->c, &s->c);
@@ -3732,12 +3735,6 @@ static int encode_picture(MPVMainEncContext *const m, const AVPacket *pkt)
         }
         slice->me.temp = slice->me.scratchpad = slice->c.sc.scratchpad_buf;
         ff_me_init_pic(slice);
-
-        h     = s->c.mb_height;
-        start = pkt->data + (size_t)(((int64_t) pkt->size) * slice->c.start_mb_y / h);
-        end   = pkt->data + (size_t)(((int64_t) pkt->size) * slice->c.  end_mb_y / h);
-
-        init_put_bits(&s->c.enc_contexts[i]->pb, start, end - start);
     }
 
     /* Estimate motion for every MB */
-- 
2.45.2

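The rewritten loop head in the patch above assigns each slice the byte range of the packet proportional to its macroblock rows. The arithmetic can be checked in isolation; slice_buf_offset is a hypothetical helper name for this sketch, not an FFmpeg API:

```c
#include <stdint.h>
#include <stddef.h>

/* A slice covering macroblock rows [start_mb_y, end_mb_y) of mb_height rows
 * gets the byte range [off(start_mb_y), off(end_mb_y)) of the packet, where
 * off() divides the packet proportionally.  The int64_t cast guards against
 * overflow of pkt_size * mb_y for large packets. */
static size_t slice_buf_offset(size_t pkt_size, int mb_y, int mb_height)
{
    return (size_t)((int64_t)pkt_size * mb_y / mb_height);
}
```

Because off(0) == 0, off(mb_height) == pkt_size, and off() is monotonic, consecutive slices get contiguous, non-overlapping ranges that together cover the whole packet.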

[-- Attachment #10: 0071-avcodec-motion_est-Reset-scene_change-score-MB-varia.patch --]
[-- Type: text/x-patch, Size: 3428 bytes --]

From 3ab911f2164b4c98d27a2f5945720e1ee511c933 Mon Sep 17 00:00:00 2001
From: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Date: Wed, 19 Mar 2025 15:18:17 +0100
Subject: [PATCH 71/77] avcodec/motion_est: Reset scene_change score, MB
 variance stats

Reset them in ff_me_init_pic(). It is the appropriate place for this
and allows removing the resetting code from multiple places
(e.g. mpegvideo_enc.c reset them in two different places,
one for the main slice context and one for all the others).

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
 libavcodec/motion_est.c    |  4 ++++
 libavcodec/mpegvideo_enc.c | 12 +++---------
 libavcodec/snowenc.c       |  1 -
 3 files changed, 7 insertions(+), 10 deletions(-)

diff --git a/libavcodec/motion_est.c b/libavcodec/motion_est.c
index af06acd9b2..35f853bc61 100644
--- a/libavcodec/motion_est.c
+++ b/libavcodec/motion_est.c
@@ -397,6 +397,10 @@ void ff_me_init_pic(MPVEncContext *const s)
         c->hpel_put[2][0]= c->hpel_put[2][1]=
         c->hpel_put[2][2]= c->hpel_put[2][3]= zero_hpel;
     }
+    /* Reset the average MB variance and scene change stats */
+    c->scene_change_score = 0;
+    c->mb_var_sum_temp    =
+    c->mc_mb_var_sum_temp = 0;
 }
 
 #define CHECK_SAD_HALF_MV(suffix, x, y) \
diff --git a/libavcodec/mpegvideo_enc.c b/libavcodec/mpegvideo_enc.c
index 62e3e5a22f..e84f1dd467 100644
--- a/libavcodec/mpegvideo_enc.c
+++ b/libavcodec/mpegvideo_enc.c
@@ -3589,9 +3589,9 @@ static int encode_thread(AVCodecContext *c, void *arg){
 #define MERGE(field) dst->field += src->field; src->field=0
 static void merge_context_after_me(MPVEncContext *const dst, MPVEncContext *const src)
 {
-    MERGE(me.scene_change_score);
-    MERGE(me.mc_mb_var_sum_temp);
-    MERGE(me.mb_var_sum_temp);
+    ADD(me.scene_change_score);
+    ADD(me.mc_mb_var_sum_temp);
+    ADD(me.mb_var_sum_temp);
 }
 
 static void merge_context_after_encode(MPVEncContext *const dst, MPVEncContext *const src)
@@ -3683,10 +3683,6 @@ static int encode_picture(MPVMainEncContext *const m, const AVPacket *pkt)
     int bits;
     int context_count = s->c.slice_context_count;
 
-    /* Reset the average MB variance */
-    s->me.mb_var_sum_temp    =
-    s->me.mc_mb_var_sum_temp = 0;
-
     /* we need to initialize some time vars before we can encode B-frames */
     // RAL: Condition added for MPEG1VIDEO
     if (s->c.out_format == FMT_MPEG1 || (s->c.h263_pred && s->c.msmpeg4_version == MSMP4_UNUSED))
@@ -3694,8 +3690,6 @@ static int encode_picture(MPVMainEncContext *const m, const AVPacket *pkt)
     if (CONFIG_MPEG4_ENCODER && s->c.codec_id == AV_CODEC_ID_MPEG4)
         ff_set_mpeg4_time(s);
 
-    s->me.scene_change_score=0;
-
 //    s->lambda = s->c.cur_pic.ptr->quality; //FIXME qscale / ... stuff for ME rate distortion
 
     if (s->c.pict_type == AV_PICTURE_TYPE_I) {
diff --git a/libavcodec/snowenc.c b/libavcodec/snowenc.c
index 0c99b20379..fe71048a45 100644
--- a/libavcodec/snowenc.c
+++ b/libavcodec/snowenc.c
@@ -1867,7 +1867,6 @@ static int encode_frame(AVCodecContext *avctx, AVPacket *pkt,
         mpv->c.f_code     = 1;
         mpv->c.pict_type  = pic->pict_type;
         mpv->me.motion_est = enc->motion_est;
-        mpv->me.scene_change_score = 0;
         mpv->me.dia_size = avctx->dia_size;
         mpv->c.quarter_sample  = (s->avctx->flags & AV_CODEC_FLAG_QPEL)!=0;
         mpv->c.out_format      = FMT_H263;
-- 
2.45.2
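For context on the MERGE→ADD switch in the hunk above: once every slice context resets its ME statistics in ff_me_init_pic() at the start of each picture, the merge step no longer needs to zero the source fields after accumulating them. The following is an illustrative sketch of that accumulate-only merge, not the FFmpeg code itself (the struct and function names here are made up):

```c
#include <assert.h>

/* Illustrative stand-in for the per-slice motion-estimation stats. */
typedef struct MEStats {
    int scene_change_score;
    int mb_var_sum_temp;
    int mc_mb_var_sum_temp;
} MEStats;

/* ADD-style merge: accumulate into dst, leave src untouched.
 * Safe because each slice context re-zeroes its stats per picture. */
static void merge_after_me(MEStats *dst, const MEStats *src)
{
    dst->scene_change_score += src->scene_change_score;
    dst->mb_var_sum_temp    += src->mb_var_sum_temp;
    dst->mc_mb_var_sum_temp += src->mc_mb_var_sum_temp;
}
```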


[-- Attachment #11: 0072-avcodec-mpegvideo_enc-Defer-initialization-of-mb-pos.patch --]
[-- Type: text/x-patch, Size: 1866 bytes --]

From 6d2d34ec3b27c315938826233946d2105e3037f3 Mon Sep 17 00:00:00 2001
From: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Date: Wed, 19 Mar 2025 21:28:00 +0100
Subject: [PATCH 72/77] avcodec/mpegvideo_enc: Defer initialization of mb-pos
 dependent vars

Only set them after mb_x and mb_y are known, which happens
only after the call to ff_h261_reorder_mb_index().

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
 libavcodec/mpegvideo_enc.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/libavcodec/mpegvideo_enc.c b/libavcodec/mpegvideo_enc.c
index e84f1dd467..dc165d92d2 100644
--- a/libavcodec/mpegvideo_enc.c
+++ b/libavcodec/mpegvideo_enc.c
@@ -3024,8 +3024,7 @@ static int encode_thread(AVCodecContext *c, void *arg){
         ff_init_block_index(&s->c);
 
         for (int mb_x = 0; mb_x < s->c.mb_width; mb_x++) {
-            int xy = mb_y*s->c.mb_stride + mb_x; // removed const, H261 needs to adjust this
-            int mb_type= s->mb_type[xy];
+            int mb_type, xy;
 //            int d;
             int dmin= INT_MAX;
             int dir;
@@ -3049,11 +3048,10 @@ static int encode_thread(AVCodecContext *c, void *arg){
             s->c.mb_y = mb_y;  // moved into loop, can get changed by H.261
             ff_update_block_index(&s->c, 8, 0, s->c.chroma_x_shift);
 
-            if(CONFIG_H261_ENCODER && s->c.codec_id == AV_CODEC_ID_H261){
+            if (CONFIG_H261_ENCODER && s->c.codec_id == AV_CODEC_ID_H261)
                 ff_h261_reorder_mb_index(s);
-                xy = s->c.mb_y*s->c.mb_stride + s->c.mb_x;
-                mb_type= s->mb_type[xy];
-            }
+            xy      = s->c.mb_y * s->c.mb_stride + s->c.mb_x;
+            mb_type = s->mb_type[xy];
 
             /* write gob / video packet header  */
             if(s->rtp_mode){
-- 
2.45.2
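The value being deferred in the patch above is the macroblock index into the stride-padded MB array; computing it only once the coordinates are final (after any H.261 reordering) avoids recomputing it in the H.261 branch. A minimal sketch of that index calculation:

```c
#include <assert.h>

/* Macroblock index within the (stride-padded) MB array.
 * mb_stride is typically mb_width plus padding. */
static int mb_xy(int mb_x, int mb_y, int mb_stride)
{
    return mb_y * mb_stride + mb_x;
}
```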


[-- Attachment #12: 0073-avcodec-mpegvideo_enc-Use-better-variable-name.patch --]
[-- Type: text/x-patch, Size: 1419 bytes --]

From 1ab3662fea849f911618fe9d7276bf49ebbbffd9 Mon Sep 17 00:00:00 2001
From: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Date: Wed, 19 Mar 2025 21:55:34 +0100
Subject: [PATCH 73/77] avcodec/mpegvideo_enc: Use better variable name

Also fixes shadowing.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
 libavcodec/mpegvideo_enc.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/libavcodec/mpegvideo_enc.c b/libavcodec/mpegvideo_enc.c
index dc165d92d2..95d774155a 100644
--- a/libavcodec/mpegvideo_enc.c
+++ b/libavcodec/mpegvideo_enc.c
@@ -534,7 +534,7 @@ av_cold int ff_mpv_encode_init(AVCodecContext *avctx)
     MPVMainEncContext *const m = avctx->priv_data;
     MPVEncContext    *const s = &m->s;
     AVCPBProperties *cpb_props;
-    int i, ret;
+    int gcd, ret;
 
     mpv_encode_defaults(m);
 
@@ -818,11 +818,11 @@ av_cold int ff_mpv_encode_init(AVCodecContext *avctx)
         m->b_frame_strategy = 0;
     }
 
-    i = av_gcd(avctx->time_base.den, avctx->time_base.num);
-    if (i > 1) {
+    gcd = av_gcd(avctx->time_base.den, avctx->time_base.num);
+    if (gcd > 1) {
         av_log(avctx, AV_LOG_INFO, "removing common factors from framerate\n");
-        avctx->time_base.den /= i;
-        avctx->time_base.num /= i;
+        avctx->time_base.den /= gcd;
+        avctx->time_base.num /= gcd;
         //return -1;
     }
 
-- 
2.45.2
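The renamed variable above holds the greatest common divisor used to reduce the encoder's time base. As a sketch of what that reduction does (with a local Euclidean gcd standing in for libavutil's av_gcd(), and a hypothetical helper name):

```c
#include <assert.h>
#include <stdint.h>

/* Euclidean gcd, standing in for av_gcd() from libavutil. */
static int64_t gcd64(int64_t a, int64_t b)
{
    while (b) {
        int64_t t = a % b;
        a = b;
        b = t;
    }
    return a;
}

/* Remove common factors from a time base, as ff_mpv_encode_init() does
 * with avctx->time_base.num/den. */
static void reduce_time_base(int *num, int *den)
{
    int64_t g = gcd64(*num, *den);
    if (g > 1) {
        *num /= (int)g;
        *den /= (int)g;
    }
}
```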


[-- Attachment #13: 0074-avcodec-h261dec-Set-FF_CODEC_CAP_SKIP_FRAME_FILL_PAR.patch --]
[-- Type: text/x-patch, Size: 841 bytes --]

From 8bf165cbab2064129a3d38ca70b7dd70997978a2 Mon Sep 17 00:00:00 2001
From: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Date: Wed, 19 Mar 2025 16:55:22 +0100
Subject: [PATCH 74/77] avcodec/h261dec: Set FF_CODEC_CAP_SKIP_FRAME_FILL_PARAM

This decoder sets the AVCodecContext fields even when a frame
is skipped.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
 libavcodec/h261dec.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/libavcodec/h261dec.c b/libavcodec/h261dec.c
index c32ddd2ddf..1f57c9f8fe 100644
--- a/libavcodec/h261dec.c
+++ b/libavcodec/h261dec.c
@@ -617,4 +617,5 @@ const FFCodec ff_h261_decoder = {
     .close          = ff_mpv_decode_close,
     .p.capabilities = AV_CODEC_CAP_DR1,
     .p.max_lowres   = 3,
+    .caps_internal  = FF_CODEC_CAP_SKIP_FRAME_FILL_PARAM,
 };
-- 
2.45.2


[-- Attachment #14: 0075-avcodec-error_resilience-Avoid-me_cmp.h-inclusion.patch --]
[-- Type: text/x-patch, Size: 1441 bytes --]

From 1f32ad151711b371cb1a7a50c138868f3d94e73c Mon Sep 17 00:00:00 2001
From: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Date: Wed, 19 Mar 2025 17:03:31 +0100
Subject: [PATCH 75/77] avcodec/error_resilience: Avoid me_cmp.h inclusion

Spell out what me_cmp_func means.
This avoids me_cmp.h inclusions in the H.264 decoder as well as
in all mpegvideo decoders.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
 libavcodec/error_resilience.h | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/libavcodec/error_resilience.h b/libavcodec/error_resilience.h
index 4005018676..8e43219b09 100644
--- a/libavcodec/error_resilience.h
+++ b/libavcodec/error_resilience.h
@@ -23,7 +23,6 @@
 #include <stdatomic.h>
 
 #include "avcodec.h"
-#include "me_cmp.h"
 
 /// current MB is the first after a resync marker
 #define VP_START               1
@@ -37,6 +36,8 @@
 #define ER_MB_ERROR (ER_AC_ERROR|ER_DC_ERROR|ER_MV_ERROR)
 #define ER_MB_END   (ER_AC_END|ER_DC_END|ER_MV_END)
 
+typedef struct MPVEncContext MPVEncContext;
+
 typedef struct ERPicture {
     AVFrame *f;
     const struct ThreadFrame *tf;
@@ -53,7 +54,8 @@ typedef struct ERPicture {
 typedef struct ERContext {
     AVCodecContext *avctx;
 
-    me_cmp_func sad;
+    int (*sad)(MPVEncContext *unused, const uint8_t *blk1,
+               const uint8_t *blk2, ptrdiff_t stride, int h);
     int mecc_inited;
 
     int *mb_index2xy;
-- 
2.45.2
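The pattern used above, forward-declaring the struct and writing out the function-pointer type, is what lets error_resilience.h drop the me_cmp.h include. A self-contained sketch of the same pattern (the struct tag matches the patch; `ERContextSketch` and `toy_sad` are made-up names for illustration):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Forward declaration: the header only needs the name, not the layout. */
typedef struct MPVEncContext MPVEncContext;

/* The function-pointer type spelled out in place of me_cmp_func. */
typedef struct ERContextSketch {
    int (*sad)(MPVEncContext *unused, const uint8_t *blk1,
               const uint8_t *blk2, ptrdiff_t stride, int h);
} ERContextSketch;

/* Toy 8-wide SAD to show the pointer in use; the context argument
 * is unused, as in the error-resilience use case. */
static int toy_sad(MPVEncContext *unused, const uint8_t *blk1,
                   const uint8_t *blk2, ptrdiff_t stride, int h)
{
    int sum = 0;
    (void)unused;
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < 8; x++) {
            int d = blk1[x] - blk2[x];
            sum += d < 0 ? -d : d;
        }
        blk1 += stride;
        blk2 += stride;
    }
    return sum;
}
```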


[-- Attachment #15: 0076-avcodec-mpegvideo-Move-unquantize-functions-into-a-f.patch --]
[-- Type: text/x-patch, Size: 22498 bytes --]

From 3df77991b9c503b5c5607f641ca8f7534397e1ca Mon Sep 17 00:00:00 2001
From: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Date: Sun, 2 Mar 2025 05:44:09 +0100
Subject: [PATCH 76/77] avcodec/mpegvideo: Move unquantize functions into a
 file of their own

This is in preparation for only keeping the actually used
unquantize functions in MpegEncContext.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
 libavcodec/Makefile                   |   1 +
 libavcodec/arm/mpegvideo_arm.c        |   1 +
 libavcodec/mips/mpegvideo_init_mips.c |   1 +
 libavcodec/mpegvideo.c                | 237 +---------------------
 libavcodec/mpegvideo.h                |   5 -
 libavcodec/mpegvideo_unquantize.c     | 273 ++++++++++++++++++++++++++
 libavcodec/mpegvideo_unquantize.h     |  37 ++++
 libavcodec/neon/mpegvideo.c           |   1 +
 libavcodec/ppc/mpegvideo_altivec.c    |   1 +
 libavcodec/x86/mpegvideo.c            |   1 +
 10 files changed, 318 insertions(+), 240 deletions(-)
 create mode 100644 libavcodec/mpegvideo_unquantize.c
 create mode 100644 libavcodec/mpegvideo_unquantize.h

diff --git a/libavcodec/Makefile b/libavcodec/Makefile
index 37b201ec4a..b2abb3b863 100644
--- a/libavcodec/Makefile
+++ b/libavcodec/Makefile
@@ -147,6 +147,7 @@ OBJS-$(CONFIG_MPEGAUDIOHEADER)         += mpegaudiodecheader.o mpegaudiotabs.o
 OBJS-$(CONFIG_MPEG4AUDIO)              += mpeg4audio.o mpeg4audio_sample_rates.o
 OBJS-$(CONFIG_MPEGVIDEO)               += mpegvideo.o rl.o \
                                           mpegvideo_motion.o \
+                                          mpegvideo_unquantize.o \
                                           mpegvideodata.o mpegpicture.o  \
                                           to_upper4.o
 OBJS-$(CONFIG_MPEGVIDEODEC)            += mpegvideo_dec.o mpegutils.o
diff --git a/libavcodec/arm/mpegvideo_arm.c b/libavcodec/arm/mpegvideo_arm.c
index 28a3f2cdd9..e32451b554 100644
--- a/libavcodec/arm/mpegvideo_arm.c
+++ b/libavcodec/arm/mpegvideo_arm.c
@@ -24,6 +24,7 @@
 #include "libavutil/arm/cpu.h"
 #include "libavcodec/avcodec.h"
 #include "libavcodec/mpegvideo.h"
+#include "libavcodec/mpegvideo_unquantize.h"
 #include "mpegvideo_arm.h"
 #include "asm-offsets.h"
 
diff --git a/libavcodec/mips/mpegvideo_init_mips.c b/libavcodec/mips/mpegvideo_init_mips.c
index 1d02b0c937..a9acae94ce 100644
--- a/libavcodec/mips/mpegvideo_init_mips.c
+++ b/libavcodec/mips/mpegvideo_init_mips.c
@@ -20,6 +20,7 @@
 
 #include "libavutil/attributes.h"
 #include "libavutil/mips/cpu.h"
+#include "libavcodec/mpegvideo_unquantize.h"
 #include "h263dsp_mips.h"
 #include "mpegvideo_mips.h"
 
diff --git a/libavcodec/mpegvideo.c b/libavcodec/mpegvideo.c
index efc9ee24d6..9a40937bbd 100644
--- a/libavcodec/mpegvideo.c
+++ b/libavcodec/mpegvideo.c
@@ -41,221 +41,9 @@
 #include "mpegutils.h"
 #include "mpegvideo.h"
 #include "mpegvideodata.h"
+#include "mpegvideo_unquantize.h"
 #include "libavutil/refstruct.h"
 
-static void dct_unquantize_mpeg1_intra_c(MpegEncContext *s,
-                                   int16_t *block, int n, int qscale)
-{
-    int i, level, nCoeffs;
-    const uint16_t *quant_matrix;
-
-    nCoeffs= s->block_last_index[n];
-
-    block[0] *= n < 4 ? s->y_dc_scale : s->c_dc_scale;
-    /* XXX: only MPEG-1 */
-    quant_matrix = s->intra_matrix;
-    for(i=1;i<=nCoeffs;i++) {
-        int j= s->intra_scantable.permutated[i];
-        level = block[j];
-        if (level) {
-            if (level < 0) {
-                level = -level;
-                level = (int)(level * qscale * quant_matrix[j]) >> 3;
-                level = (level - 1) | 1;
-                level = -level;
-            } else {
-                level = (int)(level * qscale * quant_matrix[j]) >> 3;
-                level = (level - 1) | 1;
-            }
-            block[j] = level;
-        }
-    }
-}
-
-static void dct_unquantize_mpeg1_inter_c(MpegEncContext *s,
-                                   int16_t *block, int n, int qscale)
-{
-    int i, level, nCoeffs;
-    const uint16_t *quant_matrix;
-
-    nCoeffs= s->block_last_index[n];
-
-    quant_matrix = s->inter_matrix;
-    for(i=0; i<=nCoeffs; i++) {
-        int j= s->intra_scantable.permutated[i];
-        level = block[j];
-        if (level) {
-            if (level < 0) {
-                level = -level;
-                level = (((level << 1) + 1) * qscale *
-                         ((int) (quant_matrix[j]))) >> 4;
-                level = (level - 1) | 1;
-                level = -level;
-            } else {
-                level = (((level << 1) + 1) * qscale *
-                         ((int) (quant_matrix[j]))) >> 4;
-                level = (level - 1) | 1;
-            }
-            block[j] = level;
-        }
-    }
-}
-
-static void dct_unquantize_mpeg2_intra_c(MpegEncContext *s,
-                                   int16_t *block, int n, int qscale)
-{
-    int i, level, nCoeffs;
-    const uint16_t *quant_matrix;
-
-    if (s->q_scale_type) qscale = ff_mpeg2_non_linear_qscale[qscale];
-    else                 qscale <<= 1;
-
-    nCoeffs= s->block_last_index[n];
-
-    block[0] *= n < 4 ? s->y_dc_scale : s->c_dc_scale;
-    quant_matrix = s->intra_matrix;
-    for(i=1;i<=nCoeffs;i++) {
-        int j= s->intra_scantable.permutated[i];
-        level = block[j];
-        if (level) {
-            if (level < 0) {
-                level = -level;
-                level = (int)(level * qscale * quant_matrix[j]) >> 4;
-                level = -level;
-            } else {
-                level = (int)(level * qscale * quant_matrix[j]) >> 4;
-            }
-            block[j] = level;
-        }
-    }
-}
-
-static void dct_unquantize_mpeg2_intra_bitexact(MpegEncContext *s,
-                                   int16_t *block, int n, int qscale)
-{
-    int i, level, nCoeffs;
-    const uint16_t *quant_matrix;
-    int sum=-1;
-
-    if (s->q_scale_type) qscale = ff_mpeg2_non_linear_qscale[qscale];
-    else                 qscale <<= 1;
-
-    nCoeffs= s->block_last_index[n];
-
-    block[0] *= n < 4 ? s->y_dc_scale : s->c_dc_scale;
-    sum += block[0];
-    quant_matrix = s->intra_matrix;
-    for(i=1;i<=nCoeffs;i++) {
-        int j= s->intra_scantable.permutated[i];
-        level = block[j];
-        if (level) {
-            if (level < 0) {
-                level = -level;
-                level = (int)(level * qscale * quant_matrix[j]) >> 4;
-                level = -level;
-            } else {
-                level = (int)(level * qscale * quant_matrix[j]) >> 4;
-            }
-            block[j] = level;
-            sum+=level;
-        }
-    }
-    block[63]^=sum&1;
-}
-
-static void dct_unquantize_mpeg2_inter_c(MpegEncContext *s,
-                                   int16_t *block, int n, int qscale)
-{
-    int i, level, nCoeffs;
-    const uint16_t *quant_matrix;
-    int sum=-1;
-
-    if (s->q_scale_type) qscale = ff_mpeg2_non_linear_qscale[qscale];
-    else                 qscale <<= 1;
-
-    nCoeffs= s->block_last_index[n];
-
-    quant_matrix = s->inter_matrix;
-    for(i=0; i<=nCoeffs; i++) {
-        int j= s->intra_scantable.permutated[i];
-        level = block[j];
-        if (level) {
-            if (level < 0) {
-                level = -level;
-                level = (((level << 1) + 1) * qscale *
-                         ((int) (quant_matrix[j]))) >> 5;
-                level = -level;
-            } else {
-                level = (((level << 1) + 1) * qscale *
-                         ((int) (quant_matrix[j]))) >> 5;
-            }
-            block[j] = level;
-            sum+=level;
-        }
-    }
-    block[63]^=sum&1;
-}
-
-static void dct_unquantize_h263_intra_c(MpegEncContext *s,
-                                  int16_t *block, int n, int qscale)
-{
-    int i, level, qmul, qadd;
-    int nCoeffs;
-
-    av_assert2(s->block_last_index[n]>=0 || s->h263_aic);
-
-    qmul = qscale << 1;
-
-    if (!s->h263_aic) {
-        block[0] *= n < 4 ? s->y_dc_scale : s->c_dc_scale;
-        qadd = (qscale - 1) | 1;
-    }else{
-        qadd = 0;
-    }
-    if(s->ac_pred)
-        nCoeffs=63;
-    else
-        nCoeffs= s->intra_scantable.raster_end[ s->block_last_index[n] ];
-
-    for(i=1; i<=nCoeffs; i++) {
-        level = block[i];
-        if (level) {
-            if (level < 0) {
-                level = level * qmul - qadd;
-            } else {
-                level = level * qmul + qadd;
-            }
-            block[i] = level;
-        }
-    }
-}
-
-static void dct_unquantize_h263_inter_c(MpegEncContext *s,
-                                  int16_t *block, int n, int qscale)
-{
-    int i, level, qmul, qadd;
-    int nCoeffs;
-
-    av_assert2(s->block_last_index[n]>=0);
-
-    qadd = (qscale - 1) | 1;
-    qmul = qscale << 1;
-
-    nCoeffs= s->inter_scantable.raster_end[ s->block_last_index[n] ];
-
-    for(i=0; i<=nCoeffs; i++) {
-        level = block[i];
-        if (level) {
-            if (level < 0) {
-                level = level * qmul - qadd;
-            } else {
-                level = level * qmul + qadd;
-            }
-            block[i] = level;
-        }
-    }
-}
-
 
 static void gray16(uint8_t *dst, const uint8_t *src, ptrdiff_t linesize, int h)
 {
@@ -325,28 +113,7 @@ av_cold void ff_mpv_idct_init(MpegEncContext *s)
     ff_permute_scantable(s->permutated_intra_v_scantable, ff_alternate_vertical_scan,
                          s->idsp.idct_permutation);
 
-    s->dct_unquantize_h263_intra  = dct_unquantize_h263_intra_c;
-    s->dct_unquantize_h263_inter  = dct_unquantize_h263_inter_c;
-    s->dct_unquantize_mpeg1_intra = dct_unquantize_mpeg1_intra_c;
-    s->dct_unquantize_mpeg1_inter = dct_unquantize_mpeg1_inter_c;
-    s->dct_unquantize_mpeg2_intra = dct_unquantize_mpeg2_intra_c;
-    if (s->avctx->flags & AV_CODEC_FLAG_BITEXACT)
-        s->dct_unquantize_mpeg2_intra = dct_unquantize_mpeg2_intra_bitexact;
-    s->dct_unquantize_mpeg2_inter = dct_unquantize_mpeg2_inter_c;
-
-#if HAVE_INTRINSICS_NEON
-    ff_mpv_common_init_neon(s);
-#endif
-
-#if ARCH_ARM
-    ff_mpv_common_init_arm(s);
-#elif ARCH_PPC
-    ff_mpv_common_init_ppc(s);
-#elif ARCH_X86
-    ff_mpv_common_init_x86(s);
-#elif ARCH_MIPS
-    ff_mpv_common_init_mips(s);
-#endif
+    ff_mpv_unquantize_init(s);
 }
 
 static av_cold int init_duplicate_context(MpegEncContext *s)
diff --git a/libavcodec/mpegvideo.h b/libavcodec/mpegvideo.h
index 1dcfca6b03..7379160159 100644
--- a/libavcodec/mpegvideo.h
+++ b/libavcodec/mpegvideo.h
@@ -362,11 +362,6 @@ typedef struct MpegEncContext {
 void ff_mpv_common_defaults(MpegEncContext *s);
 
 int ff_mpv_common_init(MpegEncContext *s);
-void ff_mpv_common_init_arm(MpegEncContext *s);
-void ff_mpv_common_init_neon(MpegEncContext *s);
-void ff_mpv_common_init_ppc(MpegEncContext *s);
-void ff_mpv_common_init_x86(MpegEncContext *s);
-void ff_mpv_common_init_mips(MpegEncContext *s);
 /**
  * Initialize an MpegEncContext's thread contexts. Presumes that
  * slice_context_count is already set and that all the fields
diff --git a/libavcodec/mpegvideo_unquantize.c b/libavcodec/mpegvideo_unquantize.c
new file mode 100644
index 0000000000..12bacdf424
--- /dev/null
+++ b/libavcodec/mpegvideo_unquantize.c
@@ -0,0 +1,273 @@
+/*
+ * Unquantize functions for mpegvideo
+ * Copyright (c) 2000,2001 Fabrice Bellard
+ * Copyright (c) 2002-2004 Michael Niedermayer <michaelni@gmx.at>
+ *
+ * 4MV & hq & B-frame encoding stuff by Michael Niedermayer <michaelni@gmx.at>
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include <stdint.h>
+
+#include "config.h"
+
+#include "libavutil/attributes.h"
+#include "libavutil/avassert.h"
+#include "avcodec.h"
+#include "mpegvideo.h"
+#include "mpegvideodata.h"
+#include "mpegvideo_unquantize.h"
+
+static void dct_unquantize_mpeg1_intra_c(MpegEncContext *s,
+                                   int16_t *block, int n, int qscale)
+{
+    int i, level, nCoeffs;
+    const uint16_t *quant_matrix;
+
+    nCoeffs= s->block_last_index[n];
+
+    block[0] *= n < 4 ? s->y_dc_scale : s->c_dc_scale;
+    /* XXX: only MPEG-1 */
+    quant_matrix = s->intra_matrix;
+    for(i=1;i<=nCoeffs;i++) {
+        int j= s->intra_scantable.permutated[i];
+        level = block[j];
+        if (level) {
+            if (level < 0) {
+                level = -level;
+                level = (int)(level * qscale * quant_matrix[j]) >> 3;
+                level = (level - 1) | 1;
+                level = -level;
+            } else {
+                level = (int)(level * qscale * quant_matrix[j]) >> 3;
+                level = (level - 1) | 1;
+            }
+            block[j] = level;
+        }
+    }
+}
+
+static void dct_unquantize_mpeg1_inter_c(MpegEncContext *s,
+                                   int16_t *block, int n, int qscale)
+{
+    int i, level, nCoeffs;
+    const uint16_t *quant_matrix;
+
+    nCoeffs= s->block_last_index[n];
+
+    quant_matrix = s->inter_matrix;
+    for(i=0; i<=nCoeffs; i++) {
+        int j= s->intra_scantable.permutated[i];
+        level = block[j];
+        if (level) {
+            if (level < 0) {
+                level = -level;
+                level = (((level << 1) + 1) * qscale *
+                         ((int) (quant_matrix[j]))) >> 4;
+                level = (level - 1) | 1;
+                level = -level;
+            } else {
+                level = (((level << 1) + 1) * qscale *
+                         ((int) (quant_matrix[j]))) >> 4;
+                level = (level - 1) | 1;
+            }
+            block[j] = level;
+        }
+    }
+}
+
+static void dct_unquantize_mpeg2_intra_c(MpegEncContext *s,
+                                   int16_t *block, int n, int qscale)
+{
+    int i, level, nCoeffs;
+    const uint16_t *quant_matrix;
+
+    if (s->q_scale_type) qscale = ff_mpeg2_non_linear_qscale[qscale];
+    else                 qscale <<= 1;
+
+    nCoeffs= s->block_last_index[n];
+
+    block[0] *= n < 4 ? s->y_dc_scale : s->c_dc_scale;
+    quant_matrix = s->intra_matrix;
+    for(i=1;i<=nCoeffs;i++) {
+        int j= s->intra_scantable.permutated[i];
+        level = block[j];
+        if (level) {
+            if (level < 0) {
+                level = -level;
+                level = (int)(level * qscale * quant_matrix[j]) >> 4;
+                level = -level;
+            } else {
+                level = (int)(level * qscale * quant_matrix[j]) >> 4;
+            }
+            block[j] = level;
+        }
+    }
+}
+
+static void dct_unquantize_mpeg2_intra_bitexact(MpegEncContext *s,
+                                   int16_t *block, int n, int qscale)
+{
+    int i, level, nCoeffs;
+    const uint16_t *quant_matrix;
+    int sum=-1;
+
+    if (s->q_scale_type) qscale = ff_mpeg2_non_linear_qscale[qscale];
+    else                 qscale <<= 1;
+
+    nCoeffs= s->block_last_index[n];
+
+    block[0] *= n < 4 ? s->y_dc_scale : s->c_dc_scale;
+    sum += block[0];
+    quant_matrix = s->intra_matrix;
+    for(i=1;i<=nCoeffs;i++) {
+        int j= s->intra_scantable.permutated[i];
+        level = block[j];
+        if (level) {
+            if (level < 0) {
+                level = -level;
+                level = (int)(level * qscale * quant_matrix[j]) >> 4;
+                level = -level;
+            } else {
+                level = (int)(level * qscale * quant_matrix[j]) >> 4;
+            }
+            block[j] = level;
+            sum+=level;
+        }
+    }
+    block[63]^=sum&1;
+}
+
+static void dct_unquantize_mpeg2_inter_c(MpegEncContext *s,
+                                   int16_t *block, int n, int qscale)
+{
+    int i, level, nCoeffs;
+    const uint16_t *quant_matrix;
+    int sum=-1;
+
+    if (s->q_scale_type) qscale = ff_mpeg2_non_linear_qscale[qscale];
+    else                 qscale <<= 1;
+
+    nCoeffs= s->block_last_index[n];
+
+    quant_matrix = s->inter_matrix;
+    for(i=0; i<=nCoeffs; i++) {
+        int j= s->intra_scantable.permutated[i];
+        level = block[j];
+        if (level) {
+            if (level < 0) {
+                level = -level;
+                level = (((level << 1) + 1) * qscale *
+                         ((int) (quant_matrix[j]))) >> 5;
+                level = -level;
+            } else {
+                level = (((level << 1) + 1) * qscale *
+                         ((int) (quant_matrix[j]))) >> 5;
+            }
+            block[j] = level;
+            sum+=level;
+        }
+    }
+    block[63]^=sum&1;
+}
+
+static void dct_unquantize_h263_intra_c(MpegEncContext *s,
+                                  int16_t *block, int n, int qscale)
+{
+    int i, level, qmul, qadd;
+    int nCoeffs;
+
+    av_assert2(s->block_last_index[n]>=0 || s->h263_aic);
+
+    qmul = qscale << 1;
+
+    if (!s->h263_aic) {
+        block[0] *= n < 4 ? s->y_dc_scale : s->c_dc_scale;
+        qadd = (qscale - 1) | 1;
+    }else{
+        qadd = 0;
+    }
+    if(s->ac_pred)
+        nCoeffs=63;
+    else
+        nCoeffs= s->intra_scantable.raster_end[ s->block_last_index[n] ];
+
+    for(i=1; i<=nCoeffs; i++) {
+        level = block[i];
+        if (level) {
+            if (level < 0) {
+                level = level * qmul - qadd;
+            } else {
+                level = level * qmul + qadd;
+            }
+            block[i] = level;
+        }
+    }
+}
+
+static void dct_unquantize_h263_inter_c(MpegEncContext *s,
+                                  int16_t *block, int n, int qscale)
+{
+    int i, level, qmul, qadd;
+    int nCoeffs;
+
+    av_assert2(s->block_last_index[n]>=0);
+
+    qadd = (qscale - 1) | 1;
+    qmul = qscale << 1;
+
+    nCoeffs= s->inter_scantable.raster_end[ s->block_last_index[n] ];
+
+    for(i=0; i<=nCoeffs; i++) {
+        level = block[i];
+        if (level) {
+            if (level < 0) {
+                level = level * qmul - qadd;
+            } else {
+                level = level * qmul + qadd;
+            }
+            block[i] = level;
+        }
+    }
+}
+
+av_cold void ff_mpv_unquantize_init(MpegEncContext *s)
+{
+    s->dct_unquantize_h263_intra  = dct_unquantize_h263_intra_c;
+    s->dct_unquantize_h263_inter  = dct_unquantize_h263_inter_c;
+    s->dct_unquantize_mpeg1_intra = dct_unquantize_mpeg1_intra_c;
+    s->dct_unquantize_mpeg1_inter = dct_unquantize_mpeg1_inter_c;
+    s->dct_unquantize_mpeg2_intra = dct_unquantize_mpeg2_intra_c;
+    if (s->avctx->flags & AV_CODEC_FLAG_BITEXACT)
+        s->dct_unquantize_mpeg2_intra = dct_unquantize_mpeg2_intra_bitexact;
+    s->dct_unquantize_mpeg2_inter = dct_unquantize_mpeg2_inter_c;
+
+#if HAVE_INTRINSICS_NEON
+    ff_mpv_common_init_neon(s);
+#endif
+
+#if ARCH_ARM
+    ff_mpv_common_init_arm(s);
+#elif ARCH_PPC
+    ff_mpv_common_init_ppc(s);
+#elif ARCH_X86
+    ff_mpv_common_init_x86(s);
+#elif ARCH_MIPS
+    ff_mpv_common_init_mips(s);
+#endif
+}
diff --git a/libavcodec/mpegvideo_unquantize.h b/libavcodec/mpegvideo_unquantize.h
new file mode 100644
index 0000000000..1e7590561c
--- /dev/null
+++ b/libavcodec/mpegvideo_unquantize.h
@@ -0,0 +1,37 @@
+/*
+ * Unquantize functions for mpegvideo
+ * Copyright (c) 2000,2001 Fabrice Bellard
+ * Copyright (c) 2002-2004 Michael Niedermayer <michaelni@gmx.at>
+ *
+ * 4MV & hq & B-frame encoding stuff by Michael Niedermayer <michaelni@gmx.at>
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef AVCODEC_MPEGVIDEO_UNQUANTIZE_H
+#define AVCODEC_MPEGVIDEO_UNQUANTIZE_H
+
+typedef struct MpegEncContext MpegEncContext;
+
+void ff_mpv_unquantize_init(MpegEncContext *s);
+void ff_mpv_common_init_arm(MpegEncContext *s);
+void ff_mpv_common_init_neon(MpegEncContext *s);
+void ff_mpv_common_init_ppc(MpegEncContext *s);
+void ff_mpv_common_init_x86(MpegEncContext *s);
+void ff_mpv_common_init_mips(MpegEncContext *s);
+
+#endif /* AVCODEC_MPEGVIDEO_UNQUANTIZE_H */
diff --git a/libavcodec/neon/mpegvideo.c b/libavcodec/neon/mpegvideo.c
index 8f05d77a65..a8b2a0606d 100644
--- a/libavcodec/neon/mpegvideo.c
+++ b/libavcodec/neon/mpegvideo.c
@@ -32,6 +32,7 @@
 #endif
 
 #include "libavcodec/mpegvideo.h"
+#include "libavcodec/mpegvideo_unquantize.h"
 
 static void inline ff_dct_unquantize_h263_neon(int qscale, int qadd, int nCoeffs,
                                                int16_t *block)
diff --git a/libavcodec/ppc/mpegvideo_altivec.c b/libavcodec/ppc/mpegvideo_altivec.c
index bcb59ba845..c361ca7857 100644
--- a/libavcodec/ppc/mpegvideo_altivec.c
+++ b/libavcodec/ppc/mpegvideo_altivec.c
@@ -33,6 +33,7 @@
 #include "libavutil/ppc/util_altivec.h"
 
 #include "libavcodec/mpegvideo.h"
+#include "libavcodec/mpegvideo_unquantize.h"
 
 #if HAVE_ALTIVEC
 
diff --git a/libavcodec/x86/mpegvideo.c b/libavcodec/x86/mpegvideo.c
index 9878607a81..11a5ee474b 100644
--- a/libavcodec/x86/mpegvideo.c
+++ b/libavcodec/x86/mpegvideo.c
@@ -26,6 +26,7 @@
 #include "libavcodec/avcodec.h"
 #include "libavcodec/mpegvideo.h"
 #include "libavcodec/mpegvideodata.h"
+#include "libavcodec/mpegvideo_unquantize.h"
 
 #if HAVE_MMX_INLINE
 
-- 
2.45.2
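Of the functions moved above, the H.263 inter case has the simplest arithmetic: qmul = 2*qscale, qadd = (qscale-1)|1, and each nonzero level is scaled by qmul with qadd added toward the sign. A minimal standalone version of that per-coefficient step (the helper name is made up; the real code loops over the whole block):

```c
#include <assert.h>

/* Per-coefficient H.263 inter unquantization, as in
 * dct_unquantize_h263_inter_c(): the reconstruction is
 * level * qmul + sign(level) * qadd, with zeros left untouched. */
static int h263_unquantize_one(int level, int qscale)
{
    int qmul = qscale << 1;         /* 2 * qscale */
    int qadd = (qscale - 1) | 1;    /* forced odd */
    if (level == 0)
        return 0;
    return level < 0 ? level * qmul - qadd
                     : level * qmul + qadd;
}
```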


[-- Attachment #16: 0077-avcodec-mpegvideo-Only-keep-the-actually-used-unquan.patch --]
[-- Type: text/x-patch, Size: 21057 bytes --]

From 3d1656ba8bae3fb1820bca9b929aeb41b2d28b8f Mon Sep 17 00:00:00 2001
From: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
Date: Wed, 19 Mar 2025 15:24:57 +0100
Subject: [PATCH 77/77] avcodec/mpegvideo: Only keep the actually used
 unquantize funcs

For all encoders and all decoders except MPEG-4, the unquantize
functions to use don't change at all and therefore needn't be
kept in the context. So discard them after setting them;
for MPEG-4, the functions get assigned on a per-frame basis.

Decoders not using any unquantize functions (H.261, MPEG-1/2)
as well as decoders that only call ff_mpv_reconstruct_mb()
through error resilience (RV30/40, the VC-1 family) don't have
the remaining pointers set at all.

Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com>
---
 libavcodec/arm/mpegvideo_arm.c        |  5 ++--
 libavcodec/arm/mpegvideo_arm.h        |  4 +--
 libavcodec/arm/mpegvideo_armv5te.c    |  2 +-
 libavcodec/h263dec.c                  | 15 ++++-------
 libavcodec/mips/mpegvideo_init_mips.c |  9 ++++---
 libavcodec/mpeg4videodec.c            | 13 +++++++++
 libavcodec/mpeg4videodec.h            |  5 ++++
 libavcodec/mpegvideo.c                |  2 --
 libavcodec/mpegvideo.h                | 12 ---------
 libavcodec/mpegvideo_enc.c            | 34 +++++++++++++++++-------
 libavcodec/mpegvideo_unquantize.c     | 15 ++++++-----
 libavcodec/mpegvideo_unquantize.h     | 38 ++++++++++++++++++++++-----
 libavcodec/neon/mpegvideo.c           |  3 ++-
 libavcodec/ppc/mpegvideo_altivec.c    |  7 ++---
 libavcodec/x86/mpegvideo.c            |  5 ++--
 15 files changed, 108 insertions(+), 61 deletions(-)

diff --git a/libavcodec/arm/mpegvideo_arm.c b/libavcodec/arm/mpegvideo_arm.c
index e32451b554..e5e418b6c4 100644
--- a/libavcodec/arm/mpegvideo_arm.c
+++ b/libavcodec/arm/mpegvideo_arm.c
@@ -46,12 +46,13 @@ void ff_dct_unquantize_h263_inter_neon(MpegEncContext *s, int16_t *block,
 void ff_dct_unquantize_h263_intra_neon(MpegEncContext *s, int16_t *block,
                                        int n, int qscale);
 
-av_cold void ff_mpv_common_init_arm(MpegEncContext *s)
+av_cold void ff_mpv_unquantize_init_arm(MPVUnquantDSPContext *s,
+                                        AVCodecContext *avctx)
 {
     int cpu_flags = av_get_cpu_flags();
 
     if (have_armv5te(cpu_flags))
-        ff_mpv_common_init_armv5te(s);
+        ff_mpv_unquantize_init_armv5te(s);
 
     if (have_neon(cpu_flags)) {
         s->dct_unquantize_h263_intra = ff_dct_unquantize_h263_intra_neon;
diff --git a/libavcodec/arm/mpegvideo_arm.h b/libavcodec/arm/mpegvideo_arm.h
index 709ae6b247..93da7a5664 100644
--- a/libavcodec/arm/mpegvideo_arm.h
+++ b/libavcodec/arm/mpegvideo_arm.h
@@ -19,8 +19,8 @@
 #ifndef AVCODEC_ARM_MPEGVIDEO_ARM_H
 #define AVCODEC_ARM_MPEGVIDEO_ARM_H
 
-#include "libavcodec/mpegvideo.h"
+#include "libavcodec/mpegvideo_unquantize.h"
 
-void ff_mpv_common_init_armv5te(MpegEncContext *s);
+void ff_mpv_unquantize_init_armv5te(MPVUnquantDSPContext *s);
 
 #endif /* AVCODEC_ARM_MPEGVIDEO_ARM_H */
diff --git a/libavcodec/arm/mpegvideo_armv5te.c b/libavcodec/arm/mpegvideo_armv5te.c
index e20bb4c645..2737f68643 100644
--- a/libavcodec/arm/mpegvideo_armv5te.c
+++ b/libavcodec/arm/mpegvideo_armv5te.c
@@ -95,7 +95,7 @@ static void dct_unquantize_h263_inter_armv5te(MpegEncContext *s,
     ff_dct_unquantize_h263_armv5te(block, qmul, qadd, nCoeffs + 1);
 }
 
-av_cold void ff_mpv_common_init_armv5te(MpegEncContext *s)
+av_cold void ff_mpv_unquantize_init_armv5te(MPVUnquantDSPContext *s)
 {
     s->dct_unquantize_h263_intra = dct_unquantize_h263_intra_armv5te;
     s->dct_unquantize_h263_inter = dct_unquantize_h263_inter_armv5te;
diff --git a/libavcodec/h263dec.c b/libavcodec/h263dec.c
index 8434f6e7cf..8d00e5bf3d 100644
--- a/libavcodec/h263dec.c
+++ b/libavcodec/h263dec.c
@@ -44,6 +44,7 @@
 #include "mpegvideo.h"
 #include "mpegvideodata.h"
 #include "mpegvideodec.h"
+#include "mpegvideo_unquantize.h"
 #include "msmpeg4dec.h"
 #include "thread.h"
 #include "wmv2dec.h"
@@ -90,6 +91,7 @@ static enum AVPixelFormat h263_get_format(AVCodecContext *avctx)
 av_cold int ff_h263_decode_init(AVCodecContext *avctx)
 {
     MpegEncContext *s = avctx->priv_data;
+    MPVUnquantDSPContext unquant_dsp_ctx;
     int ret;
 
     s->out_format      = FMT_H263;
@@ -105,10 +107,11 @@ av_cold int ff_h263_decode_init(AVCodecContext *avctx)
     s->y_dc_scale_table =
     s->c_dc_scale_table = ff_mpeg1_dc_scale_table;
 
+    ff_mpv_unquantize_init(&unquant_dsp_ctx, avctx, 0);
     // dct_unquantize defaults for H.263;
     // they might change on a per-frame basis for MPEG-4.
-    s->dct_unquantize_intra = s->dct_unquantize_h263_intra;
-    s->dct_unquantize_inter = s->dct_unquantize_h263_inter;
+    s->dct_unquantize_intra = unquant_dsp_ctx.dct_unquantize_h263_intra;
+    s->dct_unquantize_inter = unquant_dsp_ctx.dct_unquantize_h263_inter;
 
     /* select sub codec */
     switch (avctx->codec->id) {
@@ -117,9 +120,6 @@ av_cold int ff_h263_decode_init(AVCodecContext *avctx)
         avctx->chroma_sample_location = AVCHROMA_LOC_CENTER;
         break;
     case AV_CODEC_ID_MPEG4:
-        // dct_unquantize_inter is only used with MPEG-2 quantizers,
-        // so we can already set dct_unquantize_inter here once and for all.
-        s->dct_unquantize_inter = s->dct_unquantize_mpeg2_inter;
         break;
     case AV_CODEC_ID_MSMPEG4V1:
         s->h263_pred       = 1;
@@ -508,11 +508,6 @@ retry:
             goto retry;
         if (s->studio_profile != (s->idsp.idct == NULL))
             ff_mpv_idct_init(s);
-        if (s->mpeg_quant) {
-            s->dct_unquantize_intra = s->dct_unquantize_mpeg2_intra;
-        } else {
-            s->dct_unquantize_intra = s->dct_unquantize_h263_intra;
-        }
     }
 
     /* After H.263 & MPEG-4 header decode we have the height, width,
diff --git a/libavcodec/mips/mpegvideo_init_mips.c b/libavcodec/mips/mpegvideo_init_mips.c
index a9acae94ce..0de8245460 100644
--- a/libavcodec/mips/mpegvideo_init_mips.c
+++ b/libavcodec/mips/mpegvideo_init_mips.c
@@ -24,7 +24,8 @@
 #include "h263dsp_mips.h"
 #include "mpegvideo_mips.h"
 
-av_cold void ff_mpv_common_init_mips(MpegEncContext *s)
+av_cold void ff_mpv_unquantize_init_mips(MPVUnquantDSPContext *s,
+                                         AVCodecContext *avctx, int q_scale_type)
 {
     int cpu_flags = av_get_cpu_flags();
 
@@ -34,15 +35,15 @@ av_cold void ff_mpv_common_init_mips(MpegEncContext *s)
         s->dct_unquantize_mpeg1_intra = ff_dct_unquantize_mpeg1_intra_mmi;
         s->dct_unquantize_mpeg1_inter = ff_dct_unquantize_mpeg1_inter_mmi;
 
-        if (!(s->avctx->flags & AV_CODEC_FLAG_BITEXACT))
-            if (!s->q_scale_type)
+        if (!(avctx->flags & AV_CODEC_FLAG_BITEXACT))
+            if (!q_scale_type)
                 s->dct_unquantize_mpeg2_intra = ff_dct_unquantize_mpeg2_intra_mmi;
     }
 
     if (have_msa(cpu_flags)) {
         s->dct_unquantize_h263_intra = ff_dct_unquantize_h263_intra_msa;
         s->dct_unquantize_h263_inter = ff_dct_unquantize_h263_inter_msa;
-        if (!s->q_scale_type)
+        if (!q_scale_type)
             s->dct_unquantize_mpeg2_inter = ff_dct_unquantize_mpeg2_inter_msa;
     }
 }
diff --git a/libavcodec/mpeg4videodec.c b/libavcodec/mpeg4videodec.c
index 139b6d4b08..8803801b63 100644
--- a/libavcodec/mpeg4videodec.c
+++ b/libavcodec/mpeg4videodec.c
@@ -35,6 +35,7 @@
 #include "mpegvideo.h"
 #include "mpegvideodata.h"
 #include "mpegvideodec.h"
+#include "mpegvideo_unquantize.h"
 #include "mpeg4video.h"
 #include "mpeg4videodata.h"
 #include "mpeg4videodec.h"
@@ -3390,6 +3391,9 @@ static int decode_vop_header(Mpeg4DecContext *ctx, GetBitContext *gb,
         }
     }
 
+    s->dct_unquantize_intra = s->mpeg_quant ? ctx->dct_unquantize_mpeg2_intra
+                                            : ctx->dct_unquantize_h263_intra;
+
 end:
     /* detect buggy encoders which don't set the low_delay flag
      * (divx4/xvid/opendivx). Note we cannot detect divx5 without B-frames
@@ -3862,6 +3866,7 @@ static av_cold int decode_init(AVCodecContext *avctx)
     static AVOnce init_static_once = AV_ONCE_INIT;
     Mpeg4DecContext *ctx = avctx->priv_data;
     MpegEncContext *s = &ctx->m;
+    MPVUnquantDSPContext unquant_dsp_ctx;
     int ret;
 
     ctx->divx_version =
@@ -3872,6 +3877,14 @@ static av_cold int decode_init(AVCodecContext *avctx)
     if ((ret = ff_h263_decode_init(avctx)) < 0)
         return ret;
 
+    ff_mpv_unquantize_init(&unquant_dsp_ctx, avctx, 0);
+
+    ctx->dct_unquantize_h263_intra  = unquant_dsp_ctx.dct_unquantize_h263_intra;
+    ctx->dct_unquantize_mpeg2_intra = unquant_dsp_ctx.dct_unquantize_mpeg2_intra;
+    // dct_unquantize_inter is only used with MPEG-2 quantizers,
+    // so we can already set dct_unquantize_inter here once and for all.
+    s->dct_unquantize_inter = unquant_dsp_ctx.dct_unquantize_mpeg2_inter;
+
     s->h263_pred = 1;
     s->low_delay = 0; /* default, might be overridden in the vol header during header parsing */
     s->decode_mb = mpeg4_decode_mb;
diff --git a/libavcodec/mpeg4videodec.h b/libavcodec/mpeg4videodec.h
index bb14d24c88..3a254f2838 100644
--- a/libavcodec/mpeg4videodec.h
+++ b/libavcodec/mpeg4videodec.h
@@ -88,6 +88,11 @@ typedef struct Mpeg4DecContext {
 
     Mpeg4VideoDSPContext mdsp;
 
+    void (*dct_unquantize_mpeg2_intra)(MpegEncContext *s,
+                                       int16_t *block/*align 16*/, int n, int qscale);
+    void (*dct_unquantize_h263_intra)(MpegEncContext *s,
+                                      int16_t *block/*align 16*/, int n, int qscale);
+
     DECLARE_ALIGNED(8, int32_t, block32)[12][64];
     // 0 = DCT, 1 = DPCM top to bottom scan, -1 = DPCM bottom to top scan
     int dpcm_direction;
diff --git a/libavcodec/mpegvideo.c b/libavcodec/mpegvideo.c
index 9a40937bbd..794b2d0f66 100644
--- a/libavcodec/mpegvideo.c
+++ b/libavcodec/mpegvideo.c
@@ -112,8 +112,6 @@ av_cold void ff_mpv_idct_init(MpegEncContext *s)
                          s->idsp.idct_permutation);
     ff_permute_scantable(s->permutated_intra_v_scantable, ff_alternate_vertical_scan,
                          s->idsp.idct_permutation);
-
-    ff_mpv_unquantize_init(s);
 }
 
 static av_cold int init_duplicate_context(MpegEncContext *s)
diff --git a/libavcodec/mpegvideo.h b/libavcodec/mpegvideo.h
index 7379160159..2d60c9ddf0 100644
--- a/libavcodec/mpegvideo.h
+++ b/libavcodec/mpegvideo.h
@@ -326,18 +326,6 @@ typedef struct MpegEncContext {
 #define SLICE_END       -2 ///<end marker found
 #define SLICE_NOEND     -3 ///<no end marker or error found but mb count exceeded
 
-    void (*dct_unquantize_mpeg1_intra)(struct MpegEncContext *s,
-                           int16_t *block/*align 16*/, int n, int qscale);
-    void (*dct_unquantize_mpeg1_inter)(struct MpegEncContext *s,
-                           int16_t *block/*align 16*/, int n, int qscale);
-    void (*dct_unquantize_mpeg2_intra)(struct MpegEncContext *s,
-                           int16_t *block/*align 16*/, int n, int qscale);
-    void (*dct_unquantize_mpeg2_inter)(struct MpegEncContext *s,
-                           int16_t *block/*align 16*/, int n, int qscale);
-    void (*dct_unquantize_h263_intra)(struct MpegEncContext *s,
-                           int16_t *block/*align 16*/, int n, int qscale);
-    void (*dct_unquantize_h263_inter)(struct MpegEncContext *s,
-                           int16_t *block/*align 16*/, int n, int qscale);
     void (*dct_unquantize_intra)(struct MpegEncContext *s, // unquantizer to use (MPEG-4 can use both)
                            int16_t *block/*align 16*/, int n, int qscale);
     void (*dct_unquantize_inter)(struct MpegEncContext *s, // unquantizer to use (MPEG-4 can use both)
diff --git a/libavcodec/mpegvideo_enc.c b/libavcodec/mpegvideo_enc.c
index 95d774155a..7be485bcad 100644
--- a/libavcodec/mpegvideo_enc.c
+++ b/libavcodec/mpegvideo_enc.c
@@ -60,6 +60,7 @@
 #include "mjpegenc_common.h"
 #include "mathops.h"
 #include "mpegutils.h"
+#include "mpegvideo_unquantize.h"
 #include "mjpegenc.h"
 #include "speedhqenc.h"
 #include "msmpeg4enc.h"
@@ -310,6 +311,24 @@ av_cold void ff_dct_encode_init(MPVEncContext *const s)
         s->dct_quantize  = dct_quantize_trellis_c;
 }
 
+static av_cold void init_unquantize(MpegEncContext *const s, AVCodecContext *avctx)
+{
+    MPVUnquantDSPContext unquant_dsp_ctx;
+
+    ff_mpv_unquantize_init(&unquant_dsp_ctx, avctx, s->q_scale_type);
+
+    if (s->mpeg_quant || s->codec_id == AV_CODEC_ID_MPEG2VIDEO) {
+        s->dct_unquantize_intra = unquant_dsp_ctx.dct_unquantize_mpeg2_intra;
+        s->dct_unquantize_inter = unquant_dsp_ctx.dct_unquantize_mpeg2_inter;
+    } else if (s->out_format == FMT_H263 || s->out_format == FMT_H261) {
+        s->dct_unquantize_intra = unquant_dsp_ctx.dct_unquantize_h263_intra;
+        s->dct_unquantize_inter = unquant_dsp_ctx.dct_unquantize_h263_inter;
+    } else {
+        s->dct_unquantize_intra = unquant_dsp_ctx.dct_unquantize_mpeg1_intra;
+        s->dct_unquantize_inter = unquant_dsp_ctx.dct_unquantize_mpeg1_inter;
+    }
+}
+
 static av_cold int me_cmp_init(MPVMainEncContext *const m, AVCodecContext *avctx)
 {
     MPVEncContext *const s = &m->s;
@@ -1013,6 +1032,7 @@ av_cold int ff_mpv_encode_init(AVCodecContext *avctx)
      * to the slice contexts, so we initialize various fields of it
      * before calling ff_mpv_common_init(). */
     ff_mpv_idct_init(&s->c);
+    init_unquantize(&s->c, avctx);
     ff_fdctdsp_init(&s->fdsp, avctx);
     ff_mpegvideoencdsp_init(&s->mpvencdsp, avctx);
     ff_pixblockdsp_init(&s->pdsp, avctx);
@@ -1031,15 +1051,11 @@ av_cold int ff_mpv_encode_init(AVCodecContext *avctx)
 
     ff_dct_encode_init(s);
 
-    if (s->c.mpeg_quant || s->c.codec_id == AV_CODEC_ID_MPEG2VIDEO) {
-        s->c.dct_unquantize_intra = s->c.dct_unquantize_mpeg2_intra;
-        s->c.dct_unquantize_inter = s->c.dct_unquantize_mpeg2_inter;
-    } else if (s->c.out_format == FMT_H263 || s->c.out_format == FMT_H261) {
-        s->c.dct_unquantize_intra = s->c.dct_unquantize_h263_intra;
-        s->c.dct_unquantize_inter = s->c.dct_unquantize_h263_inter;
-    } else {
-        s->c.dct_unquantize_intra = s->c.dct_unquantize_mpeg1_intra;
-        s->c.dct_unquantize_inter = s->c.dct_unquantize_mpeg1_inter;
+    if (s->c.slice_context_count > 1) {
+        s->rtp_mode = 1;
+
+        if (avctx->codec_id == AV_CODEC_ID_H263P)
+            s->c.h263_slice_structured = 1;
     }
 
     if (CONFIG_H263_ENCODER && s->c.out_format == FMT_H263) {
diff --git a/libavcodec/mpegvideo_unquantize.c b/libavcodec/mpegvideo_unquantize.c
index 12bacdf424..fe45f350d7 100644
--- a/libavcodec/mpegvideo_unquantize.c
+++ b/libavcodec/mpegvideo_unquantize.c
@@ -246,28 +246,29 @@ static void dct_unquantize_h263_inter_c(MpegEncContext *s,
     }
 }
 
-av_cold void ff_mpv_unquantize_init(MpegEncContext *s)
+av_cold void ff_mpv_unquantize_init(MPVUnquantDSPContext *s,
+                                    AVCodecContext *avctx, int q_scale_type)
 {
     s->dct_unquantize_h263_intra  = dct_unquantize_h263_intra_c;
     s->dct_unquantize_h263_inter  = dct_unquantize_h263_inter_c;
     s->dct_unquantize_mpeg1_intra = dct_unquantize_mpeg1_intra_c;
     s->dct_unquantize_mpeg1_inter = dct_unquantize_mpeg1_inter_c;
     s->dct_unquantize_mpeg2_intra = dct_unquantize_mpeg2_intra_c;
-    if (s->avctx->flags & AV_CODEC_FLAG_BITEXACT)
+    if (avctx->flags & AV_CODEC_FLAG_BITEXACT)
         s->dct_unquantize_mpeg2_intra = dct_unquantize_mpeg2_intra_bitexact;
     s->dct_unquantize_mpeg2_inter = dct_unquantize_mpeg2_inter_c;
 
 #if HAVE_INTRINSICS_NEON
-    ff_mpv_common_init_neon(s);
+    ff_mpv_unquantize_init_neon(s, avctx);
 #endif
 
 #if ARCH_ARM
-    ff_mpv_common_init_arm(s);
+    ff_mpv_unquantize_init_arm(s, avctx);
 #elif ARCH_PPC
-    ff_mpv_common_init_ppc(s);
+    ff_mpv_unquantize_init_ppc(s, avctx);
 #elif ARCH_X86
-    ff_mpv_common_init_x86(s);
+    ff_mpv_unquantize_init_x86(s, avctx);
 #elif ARCH_MIPS
-    ff_mpv_common_init_mips(s);
+        ff_mpv_unquantize_init_mips(s, avctx, q_scale_type);
 #endif
 }
diff --git a/libavcodec/mpegvideo_unquantize.h b/libavcodec/mpegvideo_unquantize.h
index 1e7590561c..cfa349cccd 100644
--- a/libavcodec/mpegvideo_unquantize.h
+++ b/libavcodec/mpegvideo_unquantize.h
@@ -25,13 +25,39 @@
 #ifndef AVCODEC_MPEGVIDEO_UNQUANTIZE_H
 #define AVCODEC_MPEGVIDEO_UNQUANTIZE_H
 
+#include <stdint.h>
+
+#include "config.h"
+
+typedef struct AVCodecContext AVCodecContext;
 typedef struct MpegEncContext MpegEncContext;
 
-void ff_mpv_unquantize_init(MpegEncContext *s);
-void ff_mpv_common_init_arm(MpegEncContext *s);
-void ff_mpv_common_init_neon(MpegEncContext *s);
-void ff_mpv_common_init_ppc(MpegEncContext *s);
-void ff_mpv_common_init_x86(MpegEncContext *s);
-void ff_mpv_common_init_mips(MpegEncContext *s);
+typedef struct MPVUnquantDSPContext {
+    void (*dct_unquantize_mpeg1_intra)(struct MpegEncContext *s,
+                           int16_t *block/*align 16*/, int n, int qscale);
+    void (*dct_unquantize_mpeg1_inter)(struct MpegEncContext *s,
+                           int16_t *block/*align 16*/, int n, int qscale);
+    void (*dct_unquantize_mpeg2_intra)(struct MpegEncContext *s,
+                           int16_t *block/*align 16*/, int n, int qscale);
+    void (*dct_unquantize_mpeg2_inter)(struct MpegEncContext *s,
+                           int16_t *block/*align 16*/, int n, int qscale);
+    void (*dct_unquantize_h263_intra)(struct MpegEncContext *s,
+                           int16_t *block/*align 16*/, int n, int qscale);
+    void (*dct_unquantize_h263_inter)(struct MpegEncContext *s,
+                           int16_t *block/*align 16*/, int n, int qscale);
+} MPVUnquantDSPContext;
+
+#if !ARCH_MIPS
+#define ff_mpv_unquantize_init(s, avctx, q_scale_type) ff_mpv_unquantize_init(s, avctx)
+#endif
+
+void ff_mpv_unquantize_init(MPVUnquantDSPContext *s,
+                            AVCodecContext *avctx, int q_scale_type);
+void ff_mpv_unquantize_init_arm(MPVUnquantDSPContext *s, AVCodecContext *avctx);
+void ff_mpv_unquantize_init_neon(MPVUnquantDSPContext *s, AVCodecContext *avctx);
+void ff_mpv_unquantize_init_ppc(MPVUnquantDSPContext *s, AVCodecContext *avctx);
+void ff_mpv_unquantize_init_x86(MPVUnquantDSPContext *s, AVCodecContext *avctx);
+void ff_mpv_unquantize_init_mips(MPVUnquantDSPContext *s, AVCodecContext *avctx,
+                                 int q_scale_type);
 
 #endif /* AVCODEC_MPEGVIDEO_UNQUANTIZE_H */
diff --git a/libavcodec/neon/mpegvideo.c b/libavcodec/neon/mpegvideo.c
index a8b2a0606d..24cf8fe9b2 100644
--- a/libavcodec/neon/mpegvideo.c
+++ b/libavcodec/neon/mpegvideo.c
@@ -125,7 +125,8 @@ static void dct_unquantize_h263_intra_neon(MpegEncContext *s, int16_t *block,
 }
 
 
-av_cold void ff_mpv_common_init_neon(MpegEncContext *s)
+av_cold void ff_mpv_unquantize_init_neon(MPVUnquantDSPContext *s,
+                                         AVCodecContext *avctx)
 {
     int cpu_flags = av_get_cpu_flags();
 
diff --git a/libavcodec/ppc/mpegvideo_altivec.c b/libavcodec/ppc/mpegvideo_altivec.c
index c361ca7857..f2801fdd50 100644
--- a/libavcodec/ppc/mpegvideo_altivec.c
+++ b/libavcodec/ppc/mpegvideo_altivec.c
@@ -117,14 +117,15 @@ static void dct_unquantize_h263_altivec(MpegEncContext *s,
 
 #endif /* HAVE_ALTIVEC */
 
-av_cold void ff_mpv_common_init_ppc(MpegEncContext *s)
+av_cold void ff_mpv_unquantize_init_ppc(MPVUnquantDSPContext *s,
+                                        AVCodecContext *avctx)
 {
 #if HAVE_ALTIVEC
     if (!PPC_ALTIVEC(av_get_cpu_flags()))
         return;
 
-    if ((s->avctx->dct_algo == FF_DCT_AUTO) ||
-        (s->avctx->dct_algo == FF_DCT_ALTIVEC)) {
+    if ((avctx->dct_algo == FF_DCT_AUTO) ||
+        (avctx->dct_algo == FF_DCT_ALTIVEC)) {
         s->dct_unquantize_h263_intra = dct_unquantize_h263_altivec;
         s->dct_unquantize_h263_inter = dct_unquantize_h263_altivec;
     }
diff --git a/libavcodec/x86/mpegvideo.c b/libavcodec/x86/mpegvideo.c
index 11a5ee474b..2d026afc41 100644
--- a/libavcodec/x86/mpegvideo.c
+++ b/libavcodec/x86/mpegvideo.c
@@ -450,7 +450,8 @@ __asm__ volatile(
 
 #endif /* HAVE_MMX_INLINE */
 
-av_cold void ff_mpv_common_init_x86(MpegEncContext *s)
+av_cold void ff_mpv_unquantize_init_x86(MPVUnquantDSPContext *s,
+                                        AVCodecContext *avctx)
 {
 #if HAVE_MMX_INLINE
     int cpu_flags = av_get_cpu_flags();
@@ -460,7 +461,7 @@ av_cold void ff_mpv_common_init_x86(MpegEncContext *s)
         s->dct_unquantize_h263_inter = dct_unquantize_h263_inter_mmx;
         s->dct_unquantize_mpeg1_intra = dct_unquantize_mpeg1_intra_mmx;
         s->dct_unquantize_mpeg1_inter = dct_unquantize_mpeg1_inter_mmx;
-        if (!(s->avctx->flags & AV_CODEC_FLAG_BITEXACT))
+        if (!(avctx->flags & AV_CODEC_FLAG_BITEXACT))
             s->dct_unquantize_mpeg2_intra = dct_unquantize_mpeg2_intra_mmx;
         s->dct_unquantize_mpeg2_inter = dct_unquantize_mpeg2_inter_mmx;
     }
-- 
2.45.2


[-- Attachment #17: Type: text/plain, Size: 251 bytes --]

_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".

Thread overview: 4+ messages
2025-03-19 21:18 Andreas Rheinhardt
2025-03-19 21:20 ` Andreas Rheinhardt [this message]
2025-03-22  0:34 ` Michael Niedermayer
2025-03-22 13:16   ` Andreas Rheinhardt
