From: "Rémi Denis-Courmont" <remi@remlab.net> To: FFmpeg development discussions and patches <ffmpeg-devel@ffmpeg.org> Subject: Re: [FFmpeg-devel] [PATCH 06/13] avcodec/mpv_reconstruct_mb_template: Merge template into its users Date: Mon, 01 Jul 2024 16:04:00 +0300 Message-ID: <8EADD83B-FB16-4449-BB76-6F50353E14E1@remlab.net> (raw) In-Reply-To: <GV1P250MB07372EFFCA9685932DCBCE668FD32@GV1P250MB0737.EURP250.PROD.OUTLOOK.COM> Le 1 juillet 2024 15:16:03 GMT+03:00, Andreas Rheinhardt <andreas.rheinhardt@outlook.com> a écrit : >A large part of this template is decoder-only. This makes >the complexity of the IS_ENCODER-checks not worth it. >So simply merge the template into both its users. > >Signed-off-by: Andreas Rheinhardt <andreas.rheinhardt@outlook.com> >--- > libavcodec/mpegvideo_dec.c | 218 +++++++++++++++++- > libavcodec/mpegvideo_enc.c | 81 ++++++- > libavcodec/mpv_reconstruct_mb_template.c | 272 ----------------------- > 3 files changed, 294 insertions(+), 277 deletions(-) > delete mode 100644 libavcodec/mpv_reconstruct_mb_template.c > >diff --git a/libavcodec/mpegvideo_dec.c b/libavcodec/mpegvideo_dec.c >index 1cab108935..2222d50283 100644 >--- a/libavcodec/mpegvideo_dec.c >+++ b/libavcodec/mpegvideo_dec.c >@@ -904,11 +904,225 @@ static inline void add_dct(MpegEncContext *s, > } > } > >-#define IS_ENCODER 0 >-#include "mpv_reconstruct_mb_template.c" >+/* put block[] to dest[] */ >+static inline void put_dct(MpegEncContext *s, >+ int16_t *block, int i, uint8_t *dest, int line_size, int qscale) >+{ >+ s->dct_unquantize_intra(s, block, i, qscale); >+ s->idsp.idct_put(dest, line_size, block); >+} BTW, shouldn't the put/add be folded into the more specific callback, as most other decoder DSP functions seems to work? (No objections either way.) >+ >+static inline void add_dequant_dct(MpegEncContext *s, >+ int16_t *block, int i, uint8_t *dest, int line_size, int qscale) >+{ >+ if (s->block_last_index[i] >= 0) { >+ s->dct_unquantize_inter(s, block, i, qscale); >+ >+ s->idsp.idct_add(dest, line_size, block); >+ } >+} >+ >+#define NOT_MPEG12_H261 0 >+#define MAY_BE_MPEG12_H261 1 >+#define DEFINITELY_MPEG12_H261 2 >+ >+/* generic function called after a macroblock has been parsed by the decoder. >+ >+ Important variables used: >+ s->mb_intra : true if intra macroblock >+ s->mv_dir : motion vector direction >+ s->mv_type : motion vector type >+ s->mv : motion vector >+ s->interlaced_dct : true if interlaced dct used (mpeg2) >+ */ >+static av_always_inline >+void mpv_reconstruct_mb_internal(MpegEncContext *s, int16_t block[12][64], >+ int lowres_flag, int is_mpeg12) >+{ >+#define IS_MPEG12_H261(s) (is_mpeg12 == MAY_BE_MPEG12_H261 ? ((s)->out_format <= FMT_H261) : is_mpeg12) >+ { >+ uint8_t *dest_y = s->dest[0], *dest_cb = s->dest[1], *dest_cr = s->dest[2]; >+ int dct_linesize, dct_offset; >+ const int linesize = s->cur_pic.linesize[0]; //not s->linesize as this would be wrong for field pics >+ const int uvlinesize = s->cur_pic.linesize[1]; >+ const int block_size = lowres_flag ? 8 >> s->avctx->lowres : 8; >+ >+ dct_linesize = linesize << s->interlaced_dct; >+ dct_offset = s->interlaced_dct ? 
linesize : linesize * block_size; >+ >+ if (!s->mb_intra) { >+ /* motion handling */ >+ if (HAVE_THREADS && is_mpeg12 != DEFINITELY_MPEG12_H261 && >+ s->avctx->active_thread_type & FF_THREAD_FRAME) { >+ if (s->mv_dir & MV_DIR_FORWARD) { >+ ff_thread_progress_await(&s->last_pic.ptr->progress, >+ lowest_referenced_row(s, 0)); >+ } >+ if (s->mv_dir & MV_DIR_BACKWARD) { >+ ff_thread_progress_await(&s->next_pic.ptr->progress, >+ lowest_referenced_row(s, 1)); >+ } >+ } >+ >+ if (lowres_flag) { >+ const h264_chroma_mc_func *op_pix = s->h264chroma.put_h264_chroma_pixels_tab; >+ >+ if (s->mv_dir & MV_DIR_FORWARD) { >+ MPV_motion_lowres(s, dest_y, dest_cb, dest_cr, 0, s->last_pic.data, op_pix); >+ op_pix = s->h264chroma.avg_h264_chroma_pixels_tab; >+ } >+ if (s->mv_dir & MV_DIR_BACKWARD) { >+ MPV_motion_lowres(s, dest_y, dest_cb, dest_cr, 1, s->next_pic.data, op_pix); >+ } >+ } else { >+ const op_pixels_func (*op_pix)[4]; >+ const qpel_mc_func (*op_qpix)[16]; >+ >+ if ((is_mpeg12 == DEFINITELY_MPEG12_H261 || !s->no_rounding) || s->pict_type == AV_PICTURE_TYPE_B) { >+ op_pix = s->hdsp.put_pixels_tab; >+ op_qpix = s->qdsp.put_qpel_pixels_tab; >+ } else { >+ op_pix = s->hdsp.put_no_rnd_pixels_tab; >+ op_qpix = s->qdsp.put_no_rnd_qpel_pixels_tab; >+ } >+ if (s->mv_dir & MV_DIR_FORWARD) { >+ ff_mpv_motion(s, dest_y, dest_cb, dest_cr, 0, s->last_pic.data, op_pix, op_qpix); >+ op_pix = s->hdsp.avg_pixels_tab; >+ op_qpix = s->qdsp.avg_qpel_pixels_tab; >+ } >+ if (s->mv_dir & MV_DIR_BACKWARD) { >+ ff_mpv_motion(s, dest_y, dest_cb, dest_cr, 1, s->next_pic.data, op_pix, op_qpix); >+ } >+ } >+ >+ /* skip dequant / idct if we are really late ;) */ >+ if (s->avctx->skip_idct) { >+ if( (s->avctx->skip_idct >= AVDISCARD_NONREF && s->pict_type == AV_PICTURE_TYPE_B) >+ ||(s->avctx->skip_idct >= AVDISCARD_NONKEY && s->pict_type != AV_PICTURE_TYPE_I) >+ || s->avctx->skip_idct >= AVDISCARD_ALL) >+ return; >+ } >+ >+ /* add dct residue */ >+ if (!(IS_MPEG12_H261(s) || s->msmpeg4_version != MSMP4_UNUSED || >+ (s->codec_id == AV_CODEC_ID_MPEG4 && !s->mpeg_quant))) { >+ add_dequant_dct(s, block[0], 0, dest_y , dct_linesize, s->qscale); >+ add_dequant_dct(s, block[1], 1, dest_y + block_size, dct_linesize, s->qscale); >+ add_dequant_dct(s, block[2], 2, dest_y + dct_offset , dct_linesize, s->qscale); >+ add_dequant_dct(s, block[3], 3, dest_y + dct_offset + block_size, dct_linesize, s->qscale); >+ >+ if (!CONFIG_GRAY || !(s->avctx->flags & AV_CODEC_FLAG_GRAY)) { >+ av_assert2(s->chroma_y_shift); >+ add_dequant_dct(s, block[4], 4, dest_cb, uvlinesize, s->chroma_qscale); >+ add_dequant_dct(s, block[5], 5, dest_cr, uvlinesize, s->chroma_qscale); >+ } >+ } else if (is_mpeg12 == DEFINITELY_MPEG12_H261 || lowres_flag || (s->codec_id != AV_CODEC_ID_WMV2)) { >+ add_dct(s, block[0], 0, dest_y , dct_linesize); >+ add_dct(s, block[1], 1, dest_y + block_size, dct_linesize); >+ add_dct(s, block[2], 2, dest_y + dct_offset , dct_linesize); >+ add_dct(s, block[3], 3, dest_y + dct_offset + block_size, dct_linesize); >+ >+ if (!CONFIG_GRAY || !(s->avctx->flags & AV_CODEC_FLAG_GRAY)) { >+ if (s->chroma_y_shift) {//Chroma420 >+ add_dct(s, block[4], 4, dest_cb, uvlinesize); >+ add_dct(s, block[5], 5, dest_cr, uvlinesize); >+ } else { >+ //chroma422 >+ dct_linesize = uvlinesize << s->interlaced_dct; >+ dct_offset = s->interlaced_dct ? 
uvlinesize : uvlinesize*block_size; >+ >+ add_dct(s, block[4], 4, dest_cb, dct_linesize); >+ add_dct(s, block[5], 5, dest_cr, dct_linesize); >+ add_dct(s, block[6], 6, dest_cb+dct_offset, dct_linesize); >+ add_dct(s, block[7], 7, dest_cr+dct_offset, dct_linesize); >+ if (!s->chroma_x_shift) {//Chroma444 >+ add_dct(s, block[8], 8, dest_cb+block_size, dct_linesize); >+ add_dct(s, block[9], 9, dest_cr+block_size, dct_linesize); >+ add_dct(s, block[10], 10, dest_cb+block_size+dct_offset, dct_linesize); >+ add_dct(s, block[11], 11, dest_cr+block_size+dct_offset, dct_linesize); >+ } >+ } >+ } //fi gray >+ } else if (CONFIG_WMV2_DECODER) { >+ ff_wmv2_add_mb(s, block, dest_y, dest_cb, dest_cr); >+ } >+ } else { >+ /* Only MPEG-4 Simple Studio Profile is supported in > 8-bit mode. >+ TODO: Integrate 10-bit properly into mpegvideo.c so that ER works properly */ >+ if (is_mpeg12 != DEFINITELY_MPEG12_H261 && CONFIG_MPEG4_DECODER && >+ /* s->codec_id == AV_CODEC_ID_MPEG4 && */ >+ s->avctx->bits_per_raw_sample > 8) { >+ ff_mpeg4_decode_studio(s, dest_y, dest_cb, dest_cr, block_size, >+ uvlinesize, dct_linesize, dct_offset); >+ } else if (!IS_MPEG12_H261(s)) { >+ /* dct only in intra block */ >+ put_dct(s, block[0], 0, dest_y , dct_linesize, s->qscale); >+ put_dct(s, block[1], 1, dest_y + block_size, dct_linesize, s->qscale); >+ put_dct(s, block[2], 2, dest_y + dct_offset , dct_linesize, s->qscale); >+ put_dct(s, block[3], 3, dest_y + dct_offset + block_size, dct_linesize, s->qscale); >+ >+ if (!CONFIG_GRAY || !(s->avctx->flags & AV_CODEC_FLAG_GRAY)) { >+ if (s->chroma_y_shift) { >+ put_dct(s, block[4], 4, dest_cb, uvlinesize, s->chroma_qscale); >+ put_dct(s, block[5], 5, dest_cr, uvlinesize, s->chroma_qscale); >+ } else { >+ dct_offset >>=1; >+ dct_linesize >>=1; >+ put_dct(s, block[4], 4, dest_cb, dct_linesize, s->chroma_qscale); >+ put_dct(s, block[5], 5, dest_cr, dct_linesize, s->chroma_qscale); >+ put_dct(s, block[6], 6, dest_cb + dct_offset, dct_linesize, s->chroma_qscale); >+ put_dct(s, block[7], 7, dest_cr + dct_offset, dct_linesize, s->chroma_qscale); >+ } >+ } >+ } else { >+ s->idsp.idct_put(dest_y, dct_linesize, block[0]); >+ s->idsp.idct_put(dest_y + block_size, dct_linesize, block[1]); >+ s->idsp.idct_put(dest_y + dct_offset, dct_linesize, block[2]); >+ s->idsp.idct_put(dest_y + dct_offset + block_size, dct_linesize, block[3]); >+ >+ if (!CONFIG_GRAY || !(s->avctx->flags & AV_CODEC_FLAG_GRAY)) { >+ if (s->chroma_y_shift) { >+ s->idsp.idct_put(dest_cb, uvlinesize, block[4]); >+ s->idsp.idct_put(dest_cr, uvlinesize, block[5]); >+ } else { >+ dct_linesize = uvlinesize << s->interlaced_dct; >+ dct_offset = s->interlaced_dct ? 
uvlinesize : uvlinesize*block_size; >+ >+ s->idsp.idct_put(dest_cb, dct_linesize, block[4]); >+ s->idsp.idct_put(dest_cr, dct_linesize, block[5]); >+ s->idsp.idct_put(dest_cb + dct_offset, dct_linesize, block[6]); >+ s->idsp.idct_put(dest_cr + dct_offset, dct_linesize, block[7]); >+ if (!s->chroma_x_shift) { //Chroma444 >+ s->idsp.idct_put(dest_cb + block_size, dct_linesize, block[8]); >+ s->idsp.idct_put(dest_cr + block_size, dct_linesize, block[9]); >+ s->idsp.idct_put(dest_cb + block_size + dct_offset, dct_linesize, block[10]); >+ s->idsp.idct_put(dest_cr + block_size + dct_offset, dct_linesize, block[11]); >+ } >+ } >+ } //gray >+ } >+ } >+ } >+} > > void ff_mpv_reconstruct_mb(MpegEncContext *s, int16_t block[12][64]) > { >+ const int mb_xy = s->mb_y * s->mb_stride + s->mb_x; >+ uint8_t *mbskip_ptr = &s->mbskip_table[mb_xy]; >+ >+ s->cur_pic.qscale_table[mb_xy] = s->qscale; >+ >+ /* avoid copy if macroblock skipped in last frame too */ >+ if (s->mb_skipped) { >+ s->mb_skipped = 0; >+ av_assert2(s->pict_type!=AV_PICTURE_TYPE_I); >+ *mbskip_ptr = 1; >+ } else if (!s->cur_pic.reference) { >+ *mbskip_ptr = 1; >+ } else{ >+ *mbskip_ptr = 0; /* not skipped */ >+ } >+ > if (s->avctx->debug & FF_DEBUG_DCT_COEFF) { > /* print DCT coefficients */ > av_log(s->avctx, AV_LOG_DEBUG, "DCT coeffs of MB at %dx%d:\n", s->mb_x, s->mb_y); >diff --git a/libavcodec/mpegvideo_enc.c b/libavcodec/mpegvideo_enc.c >index 31c5dd4736..774d16edad 100644 >--- a/libavcodec/mpegvideo_enc.c >+++ b/libavcodec/mpegvideo_enc.c >@@ -1082,11 +1082,33 @@ av_cold int ff_mpv_encode_end(AVCodecContext *avctx) > return 0; > } > >-#define IS_ENCODER 1 >-#include "mpv_reconstruct_mb_template.c" >+/* put block[] to dest[] */ >+static inline void put_dct(MpegEncContext *s, >+ int16_t *block, int i, uint8_t *dest, int line_size, int qscale) >+{ >+ s->dct_unquantize_intra(s, block, i, qscale); >+ s->idsp.idct_put(dest, line_size, block); >+} > >+static inline void add_dequant_dct(MpegEncContext *s, >+ int16_t *block, int i, uint8_t *dest, int line_size, int qscale) >+{ >+ if (s->block_last_index[i] >= 0) { >+ s->dct_unquantize_inter(s, block, i, qscale); >+ >+ s->idsp.idct_add(dest, line_size, block); >+ } >+} >+ >+/** >+ * Performs dequantization and IDCT (if necessary) >+ */ > static void mpv_reconstruct_mb(MpegEncContext *s, int16_t block[12][64]) > { >+ const int mb_xy = s->mb_y * s->mb_stride + s->mb_x; >+ >+ s->cur_pic.qscale_table[mb_xy] = s->qscale; >+ > if (s->avctx->debug & FF_DEBUG_DCT_COEFF) { > /* print DCT coefficients */ > av_log(s->avctx, AV_LOG_DEBUG, "DCT coeffs of MB at %dx%d:\n", s->mb_x, s->mb_y); >@@ -1099,7 +1121,60 @@ static void mpv_reconstruct_mb(MpegEncContext *s, int16_t block[12][64]) > } > } > >- mpv_reconstruct_mb_internal(s, block, 0, MAY_BE_MPEG12_H261); >+ if ((s->avctx->flags & AV_CODEC_FLAG_PSNR) || s->frame_skip_threshold || s->frame_skip_factor || >+ !((s->intra_only || s->pict_type == AV_PICTURE_TYPE_B) && >+ s->avctx->mb_decision != FF_MB_DECISION_RD)) { // FIXME precalc >+ uint8_t *dest_y = s->dest[0], *dest_cb = s->dest[1], *dest_cr = s->dest[2]; >+ int dct_linesize, dct_offset; >+ const int linesize = s->cur_pic.linesize[0]; >+ const int uvlinesize = s->cur_pic.linesize[1]; >+ const int block_size = 8; >+ >+ dct_linesize = linesize << s->interlaced_dct; >+ dct_offset = s->interlaced_dct ? 
linesize : linesize * block_size; >+ >+ if (!s->mb_intra) { >+ /* No MC, as that was already done otherwise */ >+ add_dequant_dct(s, block[0], 0, dest_y , dct_linesize, s->qscale); >+ add_dequant_dct(s, block[1], 1, dest_y + block_size, dct_linesize, s->qscale); >+ add_dequant_dct(s, block[2], 2, dest_y + dct_offset , dct_linesize, s->qscale); >+ add_dequant_dct(s, block[3], 3, dest_y + dct_offset + block_size, dct_linesize, s->qscale); >+ >+ if (!CONFIG_GRAY || !(s->avctx->flags & AV_CODEC_FLAG_GRAY)) { >+ if (s->chroma_y_shift) { >+ add_dequant_dct(s, block[4], 4, dest_cb, uvlinesize, s->chroma_qscale); >+ add_dequant_dct(s, block[5], 5, dest_cr, uvlinesize, s->chroma_qscale); >+ } else { >+ dct_linesize >>= 1; >+ dct_offset >>= 1; >+ add_dequant_dct(s, block[4], 4, dest_cb, dct_linesize, s->chroma_qscale); >+ add_dequant_dct(s, block[5], 5, dest_cr, dct_linesize, s->chroma_qscale); >+ add_dequant_dct(s, block[6], 6, dest_cb + dct_offset, dct_linesize, s->chroma_qscale); >+ add_dequant_dct(s, block[7], 7, dest_cr + dct_offset, dct_linesize, s->chroma_qscale); >+ } >+ } >+ } else { >+ /* dct only in intra block */ >+ put_dct(s, block[0], 0, dest_y , dct_linesize, s->qscale); >+ put_dct(s, block[1], 1, dest_y + block_size, dct_linesize, s->qscale); >+ put_dct(s, block[2], 2, dest_y + dct_offset , dct_linesize, s->qscale); >+ put_dct(s, block[3], 3, dest_y + dct_offset + block_size, dct_linesize, s->qscale); >+ >+ if (!CONFIG_GRAY || !(s->avctx->flags & AV_CODEC_FLAG_GRAY)) { >+ if (s->chroma_y_shift) { >+ put_dct(s, block[4], 4, dest_cb, uvlinesize, s->chroma_qscale); >+ put_dct(s, block[5], 5, dest_cr, uvlinesize, s->chroma_qscale); >+ } else { >+ dct_offset >>=1; >+ dct_linesize >>=1; >+ put_dct(s, block[4], 4, dest_cb, dct_linesize, s->chroma_qscale); >+ put_dct(s, block[5], 5, dest_cr, dct_linesize, s->chroma_qscale); >+ put_dct(s, block[6], 6, dest_cb + dct_offset, dct_linesize, s->chroma_qscale); >+ put_dct(s, block[7], 7, dest_cr + dct_offset, dct_linesize, s->chroma_qscale); >+ } >+ } >+ } >+ } > } > > static int get_sae(const uint8_t *src, int ref, int stride) >diff --git a/libavcodec/mpv_reconstruct_mb_template.c b/libavcodec/mpv_reconstruct_mb_template.c >deleted file mode 100644 >index ae7a9e34ce..0000000000 >--- a/libavcodec/mpv_reconstruct_mb_template.c >+++ /dev/null >@@ -1,272 +0,0 @@ >-/* >- * MPEG macroblock reconstruction >- * Copyright (c) 2000,2001 Fabrice Bellard >- * Copyright (c) 2002-2004 Michael Niedermayer <michaelni@gmx.at> >- * >- * This file is part of FFmpeg. >- * >- * FFmpeg is free software; you can redistribute it and/or >- * modify it under the terms of the GNU Lesser General Public >- * License as published by the Free Software Foundation; either >- * version 2.1 of the License, or (at your option) any later version. >- * >- * FFmpeg is distributed in the hope that it will be useful, >- * but WITHOUT ANY WARRANTY; without even the implied warranty of >- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU >- * Lesser General Public License for more details. 
>- * >- * You should have received a copy of the GNU Lesser General Public >- * License along with FFmpeg; if not, write to the Free Software >- * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA >- */ >- >-#define NOT_MPEG12_H261 0 >-#define MAY_BE_MPEG12_H261 1 >-#define DEFINITELY_MPEG12_H261 2 >- >-/* put block[] to dest[] */ >-static inline void put_dct(MpegEncContext *s, >- int16_t *block, int i, uint8_t *dest, int line_size, int qscale) >-{ >- s->dct_unquantize_intra(s, block, i, qscale); >- s->idsp.idct_put(dest, line_size, block); >-} >- >-static inline void add_dequant_dct(MpegEncContext *s, >- int16_t *block, int i, uint8_t *dest, int line_size, int qscale) >-{ >- if (s->block_last_index[i] >= 0) { >- s->dct_unquantize_inter(s, block, i, qscale); >- >- s->idsp.idct_add(dest, line_size, block); >- } >-} >- >-/* generic function called after a macroblock has been parsed by the >- decoder or after it has been encoded by the encoder. >- >- Important variables used: >- s->mb_intra : true if intra macroblock >- s->mv_dir : motion vector direction >- s->mv_type : motion vector type >- s->mv : motion vector >- s->interlaced_dct : true if interlaced dct used (mpeg2) >- */ >-static av_always_inline >-void mpv_reconstruct_mb_internal(MpegEncContext *s, int16_t block[12][64], >- int lowres_flag, int is_mpeg12) >-{ >-#define IS_MPEG12_H261(s) (is_mpeg12 == MAY_BE_MPEG12_H261 ? ((s)->out_format <= FMT_H261) : is_mpeg12) >- const int mb_xy = s->mb_y * s->mb_stride + s->mb_x; >- >- s->cur_pic.qscale_table[mb_xy] = s->qscale; >- >-#if IS_ENCODER >- if ((s->avctx->flags & AV_CODEC_FLAG_PSNR) || s->frame_skip_threshold || s->frame_skip_factor || >- !((s->intra_only || s->pict_type == AV_PICTURE_TYPE_B) && >- s->avctx->mb_decision != FF_MB_DECISION_RD)) // FIXME precalc >-#endif /* IS_ENCODER */ >- { >- uint8_t *dest_y = s->dest[0], *dest_cb = s->dest[1], *dest_cr = s->dest[2]; >- int dct_linesize, dct_offset; >- const int linesize = s->cur_pic.linesize[0]; //not s->linesize as this would be wrong for field pics >- const int uvlinesize = s->cur_pic.linesize[1]; >- const int block_size = lowres_flag ? 8 >> s->avctx->lowres : 8; >- >- /* avoid copy if macroblock skipped in last frame too */ >- /* skip only during decoding as we might trash the buffers during encoding a bit */ >- if (!IS_ENCODER) { >- uint8_t *mbskip_ptr = &s->mbskip_table[mb_xy]; >- >- if (s->mb_skipped) { >- s->mb_skipped = 0; >- av_assert2(s->pict_type!=AV_PICTURE_TYPE_I); >- *mbskip_ptr = 1; >- } else if (!s->cur_pic.reference) { >- *mbskip_ptr = 1; >- } else{ >- *mbskip_ptr = 0; /* not skipped */ >- } >- } >- >- dct_linesize = linesize << s->interlaced_dct; >- dct_offset = s->interlaced_dct ? 
linesize : linesize * block_size; >- >- if (!s->mb_intra) { >- /* motion handling */ >- /* decoding or more than one mb_type (MC was already done otherwise) */ >- >-#if !IS_ENCODER >- if (HAVE_THREADS && is_mpeg12 != DEFINITELY_MPEG12_H261 && >- s->avctx->active_thread_type & FF_THREAD_FRAME) { >- if (s->mv_dir & MV_DIR_FORWARD) { >- ff_thread_progress_await(&s->last_pic.ptr->progress, >- lowest_referenced_row(s, 0)); >- } >- if (s->mv_dir & MV_DIR_BACKWARD) { >- ff_thread_progress_await(&s->next_pic.ptr->progress, >- lowest_referenced_row(s, 1)); >- } >- } >- >- if (lowres_flag) { >- const h264_chroma_mc_func *op_pix = s->h264chroma.put_h264_chroma_pixels_tab; >- >- if (s->mv_dir & MV_DIR_FORWARD) { >- MPV_motion_lowres(s, dest_y, dest_cb, dest_cr, 0, s->last_pic.data, op_pix); >- op_pix = s->h264chroma.avg_h264_chroma_pixels_tab; >- } >- if (s->mv_dir & MV_DIR_BACKWARD) { >- MPV_motion_lowres(s, dest_y, dest_cb, dest_cr, 1, s->next_pic.data, op_pix); >- } >- } else { >- const op_pixels_func (*op_pix)[4]; >- const qpel_mc_func (*op_qpix)[16]; >- >- if ((is_mpeg12 == DEFINITELY_MPEG12_H261 || !s->no_rounding) || s->pict_type == AV_PICTURE_TYPE_B) { >- op_pix = s->hdsp.put_pixels_tab; >- op_qpix = s->qdsp.put_qpel_pixels_tab; >- } else { >- op_pix = s->hdsp.put_no_rnd_pixels_tab; >- op_qpix = s->qdsp.put_no_rnd_qpel_pixels_tab; >- } >- if (s->mv_dir & MV_DIR_FORWARD) { >- ff_mpv_motion(s, dest_y, dest_cb, dest_cr, 0, s->last_pic.data, op_pix, op_qpix); >- op_pix = s->hdsp.avg_pixels_tab; >- op_qpix = s->qdsp.avg_qpel_pixels_tab; >- } >- if (s->mv_dir & MV_DIR_BACKWARD) { >- ff_mpv_motion(s, dest_y, dest_cb, dest_cr, 1, s->next_pic.data, op_pix, op_qpix); >- } >- } >- >- /* skip dequant / idct if we are really late ;) */ >- if (s->avctx->skip_idct) { >- if( (s->avctx->skip_idct >= AVDISCARD_NONREF && s->pict_type == AV_PICTURE_TYPE_B) >- ||(s->avctx->skip_idct >= AVDISCARD_NONKEY && s->pict_type != AV_PICTURE_TYPE_I) >- || s->avctx->skip_idct >= AVDISCARD_ALL) >- return; >- } >- >- /* add dct residue */ >- if (!(IS_MPEG12_H261(s) || s->msmpeg4_version != MSMP4_UNUSED || >- (s->codec_id == AV_CODEC_ID_MPEG4 && !s->mpeg_quant))) >-#endif /* !IS_ENCODER */ >- { >- add_dequant_dct(s, block[0], 0, dest_y , dct_linesize, s->qscale); >- add_dequant_dct(s, block[1], 1, dest_y + block_size, dct_linesize, s->qscale); >- add_dequant_dct(s, block[2], 2, dest_y + dct_offset , dct_linesize, s->qscale); >- add_dequant_dct(s, block[3], 3, dest_y + dct_offset + block_size, dct_linesize, s->qscale); >- >- if (!CONFIG_GRAY || !(s->avctx->flags & AV_CODEC_FLAG_GRAY)) { >- av_assert2(IS_ENCODER || s->chroma_y_shift); >- if (!IS_ENCODER || s->chroma_y_shift) { >- add_dequant_dct(s, block[4], 4, dest_cb, uvlinesize, s->chroma_qscale); >- add_dequant_dct(s, block[5], 5, dest_cr, uvlinesize, s->chroma_qscale); >- } else { >- dct_linesize >>= 1; >- dct_offset >>= 1; >- add_dequant_dct(s, block[4], 4, dest_cb, dct_linesize, s->chroma_qscale); >- add_dequant_dct(s, block[5], 5, dest_cr, dct_linesize, s->chroma_qscale); >- add_dequant_dct(s, block[6], 6, dest_cb + dct_offset, dct_linesize, s->chroma_qscale); >- add_dequant_dct(s, block[7], 7, dest_cr + dct_offset, dct_linesize, s->chroma_qscale); >- } >- } >- } >-#if !IS_ENCODER >- else if (is_mpeg12 == DEFINITELY_MPEG12_H261 || lowres_flag || (s->codec_id != AV_CODEC_ID_WMV2)) { >- add_dct(s, block[0], 0, dest_y , dct_linesize); >- add_dct(s, block[1], 1, dest_y + block_size, dct_linesize); >- add_dct(s, block[2], 2, dest_y + dct_offset , dct_linesize); >- add_dct(s, 
block[3], 3, dest_y + dct_offset + block_size, dct_linesize); >- >- if (!CONFIG_GRAY || !(s->avctx->flags & AV_CODEC_FLAG_GRAY)) { >- if (s->chroma_y_shift) {//Chroma420 >- add_dct(s, block[4], 4, dest_cb, uvlinesize); >- add_dct(s, block[5], 5, dest_cr, uvlinesize); >- } else { >- //chroma422 >- dct_linesize = uvlinesize << s->interlaced_dct; >- dct_offset = s->interlaced_dct ? uvlinesize : uvlinesize*block_size; >- >- add_dct(s, block[4], 4, dest_cb, dct_linesize); >- add_dct(s, block[5], 5, dest_cr, dct_linesize); >- add_dct(s, block[6], 6, dest_cb+dct_offset, dct_linesize); >- add_dct(s, block[7], 7, dest_cr+dct_offset, dct_linesize); >- if (!s->chroma_x_shift) {//Chroma444 >- add_dct(s, block[8], 8, dest_cb+block_size, dct_linesize); >- add_dct(s, block[9], 9, dest_cr+block_size, dct_linesize); >- add_dct(s, block[10], 10, dest_cb+block_size+dct_offset, dct_linesize); >- add_dct(s, block[11], 11, dest_cr+block_size+dct_offset, dct_linesize); >- } >- } >- } //fi gray >- } else if (CONFIG_WMV2_DECODER) { >- ff_wmv2_add_mb(s, block, dest_y, dest_cb, dest_cr); >- } >-#endif /* !IS_ENCODER */ >- } else { >-#if !IS_ENCODER >- /* Only MPEG-4 Simple Studio Profile is supported in > 8-bit mode. >- TODO: Integrate 10-bit properly into mpegvideo.c so that ER works properly */ >- if (is_mpeg12 != DEFINITELY_MPEG12_H261 && CONFIG_MPEG4_DECODER && >- /* s->codec_id == AV_CODEC_ID_MPEG4 && */ >- s->avctx->bits_per_raw_sample > 8) { >- ff_mpeg4_decode_studio(s, dest_y, dest_cb, dest_cr, block_size, >- uvlinesize, dct_linesize, dct_offset); >- } else if (!IS_MPEG12_H261(s)) >-#endif /* !IS_ENCODER */ >- { >- /* dct only in intra block */ >- put_dct(s, block[0], 0, dest_y , dct_linesize, s->qscale); >- put_dct(s, block[1], 1, dest_y + block_size, dct_linesize, s->qscale); >- put_dct(s, block[2], 2, dest_y + dct_offset , dct_linesize, s->qscale); >- put_dct(s, block[3], 3, dest_y + dct_offset + block_size, dct_linesize, s->qscale); >- >- if (!CONFIG_GRAY || !(s->avctx->flags & AV_CODEC_FLAG_GRAY)) { >- if (s->chroma_y_shift) { >- put_dct(s, block[4], 4, dest_cb, uvlinesize, s->chroma_qscale); >- put_dct(s, block[5], 5, dest_cr, uvlinesize, s->chroma_qscale); >- } else { >- dct_offset >>=1; >- dct_linesize >>=1; >- put_dct(s, block[4], 4, dest_cb, dct_linesize, s->chroma_qscale); >- put_dct(s, block[5], 5, dest_cr, dct_linesize, s->chroma_qscale); >- put_dct(s, block[6], 6, dest_cb + dct_offset, dct_linesize, s->chroma_qscale); >- put_dct(s, block[7], 7, dest_cr + dct_offset, dct_linesize, s->chroma_qscale); >- } >- } >- } >-#if !IS_ENCODER >- else { >- s->idsp.idct_put(dest_y, dct_linesize, block[0]); >- s->idsp.idct_put(dest_y + block_size, dct_linesize, block[1]); >- s->idsp.idct_put(dest_y + dct_offset, dct_linesize, block[2]); >- s->idsp.idct_put(dest_y + dct_offset + block_size, dct_linesize, block[3]); >- >- if (!CONFIG_GRAY || !(s->avctx->flags & AV_CODEC_FLAG_GRAY)) { >- if (s->chroma_y_shift) { >- s->idsp.idct_put(dest_cb, uvlinesize, block[4]); >- s->idsp.idct_put(dest_cr, uvlinesize, block[5]); >- } else { >- dct_linesize = uvlinesize << s->interlaced_dct; >- dct_offset = s->interlaced_dct ? 
uvlinesize : uvlinesize*block_size; >- >- s->idsp.idct_put(dest_cb, dct_linesize, block[4]); >- s->idsp.idct_put(dest_cr, dct_linesize, block[5]); >- s->idsp.idct_put(dest_cb + dct_offset, dct_linesize, block[6]); >- s->idsp.idct_put(dest_cr + dct_offset, dct_linesize, block[7]); >- if (!s->chroma_x_shift) { //Chroma444 >- s->idsp.idct_put(dest_cb + block_size, dct_linesize, block[8]); >- s->idsp.idct_put(dest_cr + block_size, dct_linesize, block[9]); >- s->idsp.idct_put(dest_cb + block_size + dct_offset, dct_linesize, block[10]); >- s->idsp.idct_put(dest_cr + block_size + dct_offset, dct_linesize, block[11]); >- } >- } >- } //gray >- } >-#endif /* !IS_ENCODER */ >- } >- } >-} >- _______________________________________________ ffmpeg-devel mailing list ffmpeg-devel@ffmpeg.org https://ffmpeg.org/mailman/listinfo/ffmpeg-devel To unsubscribe, visit link above, or email ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".