From: Niklas Haas <ffmpeg@haasn.xyz>
To: ffmpeg-devel@ffmpeg.org
Cc: Niklas Haas <git@haasn.dev>
Subject: [FFmpeg-devel] [PATCH v2 2/4] avcodec/aom_film_grain: implement AFGS1
Date: Thu, 29 Feb 2024 13:33:37 +0100
Message-ID: <20240229123340.49055-3-ffmpeg@haasn.xyz>
In-Reply-To: <20240229123340.49055-2-ffmpeg@haasn.xyz>

From: Niklas Haas <git@haasn.dev>

Based on the AOMedia Film Grain Synthesis 1 (AFGS1) spec:
https://aomediacodec.github.io/afgs1-spec/

The parsing has been changed substantially relative to the AV1 film
grain OBU. In particular:

1. There is the possibility of maintaining multiple independent film
   grain parameter sets, and decoders are recommended to pick the one
   most appropriate for the intended display resolution. This is to
   support scalable / multi-level codecs, although this could also be
   used to e.g. switch between different grain profiles without having
   to re-signal the appropriate coefficients.

2. Supporting this, it's possible to *predict* the grain coefficients
   from previously signalled parameter sets, transmitting only the
   residual.

3. When not predicting, the parameter sets are now stored as a series
   of increments, rather than being directly transmitted.

I placed this parser in its own file, rather than h2645_sei.c, since
nothing in the generic AFGS1 film grain payload is specific to T.35.

Note: Due to an ambiguity/mistake in the specification, the parsing of
AR coefficients is possibly incorrect. See for details:
https://github.com/AOMediaCodec/afgs1-spec/issues/115
---
 libavcodec/aom_film_grain.c | 227 ++++++++++++++++++++++++++++++++++++
 libavcodec/aom_film_grain.h |  25 ++++
 2 files changed, 252 insertions(+)

diff --git a/libavcodec/aom_film_grain.c b/libavcodec/aom_film_grain.c
index dc0bd0c205e..2afd53b058a 100644
--- a/libavcodec/aom_film_grain.c
+++ b/libavcodec/aom_film_grain.c
@@ -29,6 +29,7 @@
 #include "libavutil/imgutils.h"
 
 #include "aom_film_grain.h"
+#include "get_bits.h"
 
 // Common/shared helpers (not dependent on BITDEPTH)
 static inline int get_random_number(const int bits, unsigned *const state) {
@@ -110,6 +111,232 @@ int ff_aom_apply_film_grain(AVFrame *out, const AVFrame *in,
     return AVERROR_INVALIDDATA;
 }
 
+int ff_aom_parse_film_grain_sets(AVFilmGrainAOMParamSets *s,
+                                 const uint8_t *payload, int payload_size)
+{
+    GetBitContext gbc, *gb = &gbc;
+    AVFilmGrainAOMParams *aom;
+    AVFilmGrainAOMParamSet *fgps, *ref = NULL;
+    int ret, num_sets, n, i, uv, num_y_coeffs, update_grain, luma_only;
+
+    ret = init_get_bits8(gb, payload, payload_size);
+    if (ret < 0)
+        return ret;
+
+    s->enable = get_bits1(gb);
+    if (!s->enable)
+        return 0;
+
+    skip_bits(gb, 4); // reserved
+    num_sets = get_bits(gb, 3);
+    for (n = 0; n < num_sets; n++) {
+        int payload_4byte, payload_size, set_idx, apply_units_log2, vsc_flag;
+        int predict_scaling, predict_y_scaling, predict_uv_scaling[2];
+        int payload_bits, start_position;
+
+        start_position = get_bits_count(gb);
+        payload_4byte = get_bits1(gb);
+        payload_size = get_bits(gb, payload_4byte ? 2 : 8);
+        set_idx = get_bits(gb, 3);
+        fgps = &s->sets[set_idx];
+
+        fgps->apply_grain = get_bits1(gb);
+        if (!fgps->apply_grain)
+            continue;
+
+        fgps->grain_seed = get_bits(gb, 16);
+        update_grain = get_bits1(gb);
+        if (!update_grain)
+            continue;
+
+        apply_units_log2 = get_bits(gb, 4);
+        fgps->apply_width = get_bits(gb, 12) << apply_units_log2;
+        fgps->apply_height = get_bits(gb, 12) << apply_units_log2;
+        luma_only = get_bits1(gb);
+        if (luma_only) {
+            fgps->subx = fgps->suby = 0;
+        } else {
+            fgps->subx = get_bits1(gb);
+            fgps->suby = get_bits1(gb);
+        }
+
+        vsc_flag = get_bits1(gb); // video_signal_characteristics_flag
+        if (vsc_flag) {
+            int cicp_flag;
+            skip_bits(gb, 3); // bit_depth_minus8
+            cicp_flag = get_bits1(gb);
+            if (cicp_flag)
+                skip_bits(gb, 8 + 8 + 8 + 1); // cicp_info
+        }
+
+        aom = &fgps->params;
+        predict_scaling = get_bits1(gb);
+        if (predict_scaling && (!ref || ref == fgps))
+            goto error; // prediction must be from valid, different set
+
+        predict_y_scaling = predict_scaling ? get_bits1(gb) : 0;
+        if (predict_y_scaling) {
+            int y_scale, y_offset, bits_res;
+            y_scale = get_bits(gb, 9) - 256;
+            y_offset = get_bits(gb, 9) - 256;
+            bits_res = get_bits(gb, 3);
+            if (bits_res) {
+                int res[14], pred, granularity;
+                aom->num_y_points = ref->params.num_y_points;
+                for (i = 0; i < aom->num_y_points; i++)
+                    res[i] = get_bits(gb, bits_res);
+                granularity = get_bits(gb, 3);
+                for (i = 0; i < aom->num_y_points; i++) {
+                    pred = ref->params.y_points[i][1];
+                    pred = ((pred * y_scale + 8) >> 4) + y_offset;
+                    pred += (res[i] - (1 << (bits_res - 1))) * granularity;
+                    aom->y_points[i][0] = ref->params.y_points[i][0];
+                    aom->y_points[i][1] = av_clip_uint8(pred);
+                }
+            }
+        } else {
+            aom->num_y_points = get_bits(gb, 4);
+            if (aom->num_y_points > 14) {
+                goto error;
+            } else if (aom->num_y_points) {
+                int bits_inc, bits_scaling;
+                int y_value = 0;
+                bits_inc = get_bits(gb, 3) + 1;
+                bits_scaling = get_bits(gb, 2) + 5;
+                for (i = 0; i < aom->num_y_points; i++) {
+                    y_value += get_bits(gb, bits_inc);
+                    if (y_value > UINT8_MAX)
+                        goto error;
+                    aom->y_points[i][0] = y_value;
+                    aom->y_points[i][1] = get_bits(gb, bits_scaling);
+                }
+            }
+        }
+
+        if (luma_only) {
+            aom->chroma_scaling_from_luma = 0;
+            aom->num_uv_points[0] = aom->num_uv_points[1] = 0;
+        } else {
+            aom->chroma_scaling_from_luma = get_bits1(gb);
+            if (aom->chroma_scaling_from_luma) {
+                aom->num_uv_points[0] = aom->num_uv_points[1] = 0;
+            } else {
+                for (uv = 0; uv < 2; uv++) {
+                    predict_uv_scaling[uv] = predict_scaling ? get_bits1(gb) : 0;
+                    if (predict_uv_scaling[uv]) {
+                        int uv_scale, uv_offset, bits_res;
+                        uv_scale = get_bits(gb, 9) - 256;
+                        uv_offset = get_bits(gb, 9) - 256;
+                        bits_res = get_bits(gb, 3);
+                        aom->uv_mult[uv] = ref->params.uv_mult[uv];
+                        aom->uv_mult_luma[uv] = ref->params.uv_mult_luma[uv];
+                        aom->uv_offset[uv] = ref->params.uv_offset[uv];
+                        if (bits_res) {
+                            int res[10], pred, granularity;
+                            aom->num_uv_points[uv] = ref->params.num_uv_points[uv];
+                            for (i = 0; i < aom->num_uv_points[uv]; i++)
+                                res[i] = get_bits(gb, bits_res);
+                            granularity = get_bits(gb, 3);
+                            for (i = 0; i < aom->num_uv_points[uv]; i++) {
+                                pred = ref->params.uv_points[uv][i][1];
+                                pred = ((pred * uv_scale + 8) >> 4) + uv_offset;
+                                pred += (res[i] - (1 << (bits_res - 1))) * granularity;
+                                aom->uv_points[uv][i][0] = ref->params.uv_points[uv][i][0];
+                                aom->uv_points[uv][i][1] = av_clip_uint8(pred);
+                            }
+                        }
+                    } else {
+                        int bits_inc, bits_scaling, uv_offset;
+                        int uv_value = 0;
+                        aom->num_uv_points[uv] = get_bits(gb, 4);
+                        if (aom->num_uv_points[uv] > 10)
+                            goto error;
+                        bits_inc = get_bits(gb, 3) + 1;
+                        bits_scaling = get_bits(gb, 2) + 5;
+                        uv_offset = get_bits(gb, 8);
+                        for (i = 0; i < aom->num_uv_points[uv]; i++) {
+                            uv_value += get_bits(gb, bits_inc);
+                            if (uv_value > UINT8_MAX)
+                                goto error;
+                            aom->uv_points[uv][i][0] = uv_value;
+                            aom->uv_points[uv][i][1] = get_bits(gb, bits_scaling) + uv_offset;
+                        }
+                    }
+                }
+            }
+        }
+
+        aom->scaling_shift = get_bits(gb, 2) + 8;
+        aom->ar_coeff_lag = get_bits(gb, 2);
+        num_y_coeffs = 2 * aom->ar_coeff_lag * (aom->ar_coeff_lag + 1);
+        if (aom->num_y_points) {
+            int ar_bits = get_bits(gb, 2) + 5;
+            for (i = 0; i < num_y_coeffs; i++)
+                aom->ar_coeffs_y[i] = get_bits(gb, ar_bits) - (1 << (ar_bits - 1));
+        }
+        for (uv = 0; uv < 2; uv++) {
+            if (aom->chroma_scaling_from_luma || aom->num_uv_points[uv]) {
+                int ar_bits = get_bits(gb, 2) + 5;
+                for (i = 0; i < num_y_coeffs + !!aom->num_y_points; i++)
+                    aom->ar_coeffs_uv[uv][i] = get_bits(gb, ar_bits) - (1 << (ar_bits - 1));
+            }
+        }
+        aom->ar_coeff_shift = get_bits(gb, 2) + 6;
+        aom->grain_scale_shift = get_bits(gb, 2);
+        for (uv = 0; uv < 2; uv++) {
+            if (aom->num_uv_points[uv] && !predict_uv_scaling[uv]) {
+                aom->uv_mult[uv] = get_bits(gb, 8) - 128;
+                aom->uv_mult_luma[uv] = get_bits(gb, 8) - 128;
+                aom->uv_offset[uv] = get_bits(gb, 9) - 256;
+            }
+        }
+        aom->overlap_flag = get_bits1(gb);
+        aom->limit_output_range = get_bits1(gb);
+
+        // use first set as reference only if it was fully transmitted
+        if (n == 0)
+            ref = fgps;
+
+        payload_bits = get_bits_count(gb) - start_position;
+        if (payload_bits > payload_size * 8)
+            goto error;
+        skip_bits(gb, payload_size * 8 - payload_bits);
+    }
+    return 0;
+
+error:
+    s->enable = 0;
+    return AVERROR_INVALIDDATA;
+}
+
+const AVFilmGrainAOMParamSet *ff_aom_select_film_grain_set(const AVFilmGrainAOMParamSets *s,
+                                                           const AVFrame *frame)
+{
+    const AVFilmGrainAOMParamSet *fgps, *best;
+    const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(frame->format);
+    if (!s->enable || !desc)
+        return NULL;
+
+    best = NULL;
+    for (int i = 0; i < 8; i++) {
+        fgps = &s->sets[i];
+        if (!fgps->apply_width ||
+            !fgps->apply_height ||
+            fgps->apply_width > frame->width ||
+            fgps->apply_height > frame->height ||
+            fgps->subx != desc->log2_chroma_w ||
+            fgps->suby != desc->log2_chroma_h)
+            continue;
+
+        if (!best ||
+            fgps->apply_width > best->apply_width ||
+            fgps->apply_height > best->apply_height)
+            best = fgps;
+    }
+
+    return best;
+}
+
 // Taken from the AV1 spec. Range is [-2048, 2047], mean is 0 and stddev is 512
 static const int16_t gaussian_sequence[2048] = {
   56,   568,  -180,   172,   124,   -84,   172,   -64,  -900,    24,   820,
diff --git a/libavcodec/aom_film_grain.h b/libavcodec/aom_film_grain.h
index 5d772bd7d17..b985451dbc3 100644
--- a/libavcodec/aom_film_grain.h
+++ b/libavcodec/aom_film_grain.h
@@ -30,9 +30,34 @@
 
 #include "libavutil/film_grain_params.h"
 
+// Stand-alone AFGS1 metadata parameter set
+typedef struct AVFilmGrainAOMParamSet {
+    int apply_grain;
+    int apply_width;
+    int apply_height;
+    int subx, suby;
+    uint16_t grain_seed;
+    AVFilmGrainAOMParams params;
+} AVFilmGrainAOMParamSet;
+
+typedef struct AVFilmGrainAOMParamSets {
+    int enable;
+    AVFilmGrainAOMParamSet sets[8];
+} AVFilmGrainAOMParamSets;
+
 // Synthesizes film grain on top of `in` and stores the result to `out`. `out`
 // must already have been allocated and set to the same size and format as `in`.
 int ff_aom_apply_film_grain(AVFrame *out, const AVFrame *in,
                             const AVFilmGrainParams *params);
 
+// Parse AFGS1 parameter sets from an ITU-T T.35 payload. Returns 0 on success,
+// or a negative error code.
+int ff_aom_parse_film_grain_sets(AVFilmGrainAOMParamSets *s,
+                                 const uint8_t *payload, int payload_size);
+
+// Select the most appropriate film grain parameter set for a given
+// frame. Returns the parameter set, or NULL if none was selected.
+const AVFilmGrainAOMParamSet *ff_aom_select_film_grain_set(const AVFilmGrainAOMParamSets *s,
+                                                           const AVFrame *frame);
+
 #endif /* AVCODEC_AOM_FILM_GRAIN_H */
-- 
2.43.2

_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".
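
For context, a minimal sketch of how a decoder could glue the two new entry
points together (the follow-up patches in this series do the actual
integration). MyDecoderContext, my_handle_afgs1_sei() and my_apply_afgs1() are
made-up names, and routing the selected set through AVFilmGrainParams is just
one plausible way to reach ff_aom_apply_film_grain():

/*
 * Illustrative sketch only, not part of this patch.
 */
#include "libavutil/film_grain_params.h"
#include "libavutil/frame.h"
#include "aom_film_grain.h"

typedef struct MyDecoderContext {
    AVFilmGrainAOMParamSets afgs; // persists across frames; sets are updated in place
} MyDecoderContext;

// Feed each AFGS1 ITU-T T.35 payload (after the country/provider codes) to the parser.
static int my_handle_afgs1_sei(MyDecoderContext *dec,
                               const uint8_t *payload, int payload_size)
{
    // On malformed input this returns an error and clears afgs.enable.
    return ff_aom_parse_film_grain_sets(&dec->afgs, payload, payload_size);
}

// After decoding a frame, pick the best-matching parameter set and apply it.
static int my_apply_afgs1(MyDecoderContext *dec, AVFrame *out, const AVFrame *in)
{
    const AVFilmGrainAOMParamSet *fgps =
        ff_aom_select_film_grain_set(&dec->afgs, in);
    AVFilmGrainParams fgp = { .type = AV_FILM_GRAIN_PARAMS_AV1 };

    if (!fgps) // no set matches this frame's resolution/subsampling
        return av_frame_copy(out, in);

    fgp.seed      = fgps->grain_seed;
    fgp.codec.aom = fgps->params;
    return ff_aom_apply_film_grain(out, in, &fgp);
}

Note that ff_aom_select_film_grain_set() returns NULL whenever no stored set
matches the frame's chroma subsampling or fits within its resolution, so a
caller always needs a pass-through path like the one above.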