From: Drew Dunne <asdunne-at-google.com@ffmpeg.org>
To: ffmpeg-devel@ffmpeg.org
Subject: Re: [FFmpeg-devel] [PATCH] sws_scale: Use av_sat_add32 in yuv2rgba64 template
Date: Tue, 1 Nov 2022 15:12:08 -0400
Message-ID: <CAHHpSXb8McyEK3zoOrReZyB92eEWuXkidb3BppWTHDeQvWxf3Q@mail.gmail.com> (raw)
In-Reply-To: <20221101190507.3229714-1-asdunne@google.com>
[-- Attachment #1: Type: text/plain, Size: 13050 bytes --]
This was meant to be a reply to a previous patch I sent, but I changed the
commit message, so the threading may not have linked it correctly. Sorry about that.
Attached is the example YUVA file that can reproduce the overflow bug.
The command to reproduce is:
./ffmpeg \
  -f rawvideo -video_size 66x64 -pixel_format yuva420p10le \
  -i overflow_input_w66h64.yuva420p10le \
  -filter_complex "scale=flags=bicubic+full_chroma_int+full_chroma_inp+bitexact+accurate_rnd:in_color_matrix=bt2020:out_color_matrix=bt2020:in_range=full:out_range=full,format=rgba64[out]" \
  -f rawvideo -codec:v:0 rawvideo -pixel_format rgba64 -map '[out]' \
  -y overflow_w66h64.rgba64
I've attached a PNG of the resulting overflowed image; there is a clear
discoloration in the bottom-left corner that is orange instead of pink.
I've also attached a PNG of the output produced with the patch applied,
which corrects the overflow.
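
To make the failure mode concrete without running the full command, here is a
minimal standalone sketch (not part of the patch; the operand values are made
up purely so the 32-bit sum overflows, and it assumes a configured FFmpeg
checkout so the libavutil headers resolve). It shows how the unsaturated
addition wraps to a negative value that av_clip_uintp2() then clamps to 0,
whereas av_sat_add32() saturates at INT32_MAX and the clip yields the expected
16-bit maximum:

    /* overflow_demo.c -- illustration only, compile from the top of a
     * configured FFmpeg checkout, e.g.: gcc -I. overflow_demo.c -o demo
     * Uses only av_clip_uintp2() and av_sat_add32() from libavutil/common.h. */
    #include <stdio.h>
    #include <stdint.h>
    #include "libavutil/common.h"

    int main(void)
    {
        /* Hypothetical intermediate values, chosen only so that the plain
         * 32-bit sum exceeds INT32_MAX, as can happen in the templates. */
        int32_t R_B = 0x40000000;   /* chroma contribution */
        int32_t Y1  = 0x40000000;   /* luma contribution   */

        /* What the old code effectively did: the sum wraps around.
         * (Done in unsigned arithmetic here to avoid signed-overflow UB.) */
        int32_t wrapped = (int32_t)((uint32_t)R_B + (uint32_t)Y1);

        /* A negative input is clamped to 0 by av_clip_uintp2(), so the
         * pixel component comes out as 0 instead of full scale. */
        printf("wrapped:   %11d -> %u\n", wrapped,
               av_clip_uintp2(wrapped, 30) >> 14);

        /* With the patch: the sum saturates at INT32_MAX, the clip reduces
         * it to the 30-bit maximum, and the shift yields 65535. */
        int32_t saturated = av_sat_add32(R_B, Y1);
        printf("saturated: %11d -> %u\n", saturated,
               av_clip_uintp2(saturated, 30) >> 14);
        return 0;
    }

In the templates the clipped value is what output_pixel() writes, so a wrapped
component drops to 0 and skews the hue, which is consistent with the
discoloration in the attached PNG.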
On Tue, Nov 1, 2022 at 3:05 PM Drew Dunne <asdunne@google.com> wrote:
> Avoid a possible integer overflow in the yuv2rgba64 templates by using
> av_sat_add32 when combining the R, G, B components with Y. On certain
> inputs this addition can overflow to a negative value, which is then
> clipped to the unsigned 30-bit range and shifted down by 14, producing a
> very different output value than if the addition had saturated. I will
> attach an example input YUV file in a follow-up, along with images that
> show the artifacts resulting from this overflow.
>
> ---
> libswscale/output.c | 96 ++++++++++++++++++++++-----------------------
> 1 file changed, 48 insertions(+), 48 deletions(-)
>
> diff --git a/libswscale/output.c b/libswscale/output.c
> index 0e1c1225a0..8c8f62682a 100644
> --- a/libswscale/output.c
> +++ b/libswscale/output.c
> @@ -1109,20 +1109,20 @@ yuv2rgba64_X_c_template(SwsContext *c, const int16_t *lumFilter,
>          B = U * c->yuv2rgb_u2b_coeff;
>
>          // 8 bits: 30 - 22 = 8 bits, 16 bits: 30 bits - 14 = 16 bits
> -        output_pixel(&dest[0], av_clip_uintp2(R_B + Y1, 30) >> 14);
> -        output_pixel(&dest[1], av_clip_uintp2( G + Y1, 30) >> 14);
> -        output_pixel(&dest[2], av_clip_uintp2(B_R + Y1, 30) >> 14);
> +        output_pixel(&dest[0], av_clip_uintp2(av_sat_add32(R_B, Y1), 30) >> 14);
> +        output_pixel(&dest[1], av_clip_uintp2(av_sat_add32( G, Y1), 30) >> 14);
> +        output_pixel(&dest[2], av_clip_uintp2(av_sat_add32(B_R, Y1), 30) >> 14);
>          if (eightbytes) {
>              output_pixel(&dest[3], av_clip_uintp2(A1 , 30) >> 14);
> -            output_pixel(&dest[4], av_clip_uintp2(R_B + Y2, 30) >> 14);
> -            output_pixel(&dest[5], av_clip_uintp2( G + Y2, 30) >> 14);
> -            output_pixel(&dest[6], av_clip_uintp2(B_R + Y2, 30) >> 14);
> +            output_pixel(&dest[4], av_clip_uintp2(av_sat_add32(R_B, Y2), 30) >> 14);
> +            output_pixel(&dest[5], av_clip_uintp2(av_sat_add32( G, Y2), 30) >> 14);
> +            output_pixel(&dest[6], av_clip_uintp2(av_sat_add32(B_R, Y2), 30) >> 14);
>              output_pixel(&dest[7], av_clip_uintp2(A2 , 30) >> 14);
>              dest += 8;
>          } else {
> -            output_pixel(&dest[3], av_clip_uintp2(R_B + Y2, 30) >> 14);
> -            output_pixel(&dest[4], av_clip_uintp2( G + Y2, 30) >> 14);
> -            output_pixel(&dest[5], av_clip_uintp2(B_R + Y2, 30) >> 14);
> +            output_pixel(&dest[3], av_clip_uintp2(av_sat_add32(R_B, Y2), 30) >> 14);
> +            output_pixel(&dest[4], av_clip_uintp2(av_sat_add32( G, Y2), 30) >> 14);
> +            output_pixel(&dest[5], av_clip_uintp2(av_sat_add32(B_R, Y2), 30) >> 14);
>              dest += 6;
>          }
>      }
> @@ -1175,20 +1175,20 @@ yuv2rgba64_2_c_template(SwsContext *c, const int32_t *buf[2],
>              A2 += 1 << 13;
>          }
>
> -        output_pixel(&dest[0], av_clip_uintp2(R_B + Y1, 30) >> 14);
> -        output_pixel(&dest[1], av_clip_uintp2( G + Y1, 30) >> 14);
> -        output_pixel(&dest[2], av_clip_uintp2(B_R + Y1, 30) >> 14);
> +        output_pixel(&dest[0], av_clip_uintp2(av_sat_add32(R_B, Y1), 30) >> 14);
> +        output_pixel(&dest[1], av_clip_uintp2(av_sat_add32( G, Y1), 30) >> 14);
> +        output_pixel(&dest[2], av_clip_uintp2(av_sat_add32(B_R, Y1), 30) >> 14);
>          if (eightbytes) {
>              output_pixel(&dest[3], av_clip_uintp2(A1 , 30) >> 14);
> -            output_pixel(&dest[4], av_clip_uintp2(R_B + Y2, 30) >> 14);
> -            output_pixel(&dest[5], av_clip_uintp2( G + Y2, 30) >> 14);
> -            output_pixel(&dest[6], av_clip_uintp2(B_R + Y2, 30) >> 14);
> +            output_pixel(&dest[4], av_clip_uintp2(av_sat_add32(R_B, Y2), 30) >> 14);
> +            output_pixel(&dest[5], av_clip_uintp2(av_sat_add32( G, Y2), 30) >> 14);
> +            output_pixel(&dest[6], av_clip_uintp2(av_sat_add32(B_R, Y2), 30) >> 14);
>              output_pixel(&dest[7], av_clip_uintp2(A2 , 30) >> 14);
>              dest += 8;
>          } else {
> -            output_pixel(&dest[3], av_clip_uintp2(R_B + Y2, 30) >> 14);
> -            output_pixel(&dest[4], av_clip_uintp2( G + Y2, 30) >> 14);
> -            output_pixel(&dest[5], av_clip_uintp2(B_R + Y2, 30) >> 14);
> +            output_pixel(&dest[3], av_clip_uintp2(av_sat_add32(R_B, Y2), 30) >> 14);
> +            output_pixel(&dest[4], av_clip_uintp2(av_sat_add32( G, Y2), 30) >> 14);
> +            output_pixel(&dest[5], av_clip_uintp2(av_sat_add32(B_R, Y2), 30) >> 14);
>              dest += 6;
>          }
>      }
> @@ -1232,20 +1232,20 @@ yuv2rgba64_1_c_template(SwsContext *c, const int32_t *buf0,
>              G = V * c->yuv2rgb_v2g_coeff + U * c->yuv2rgb_u2g_coeff;
>              B = U * c->yuv2rgb_u2b_coeff;
>
> -            output_pixel(&dest[0], av_clip_uintp2(R_B + Y1, 30) >> 14);
> -            output_pixel(&dest[1], av_clip_uintp2( G + Y1, 30) >> 14);
> -            output_pixel(&dest[2], av_clip_uintp2(B_R + Y1, 30) >> 14);
> +            output_pixel(&dest[0], av_clip_uintp2(av_sat_add32(R_B, Y1), 30) >> 14);
> +            output_pixel(&dest[1], av_clip_uintp2(av_sat_add32( G, Y1), 30) >> 14);
> +            output_pixel(&dest[2], av_clip_uintp2(av_sat_add32(B_R, Y1), 30) >> 14);
>              if (eightbytes) {
>                  output_pixel(&dest[3], av_clip_uintp2(A1 , 30) >> 14);
> -                output_pixel(&dest[4], av_clip_uintp2(R_B + Y2, 30) >> 14);
> -                output_pixel(&dest[5], av_clip_uintp2( G + Y2, 30) >> 14);
> -                output_pixel(&dest[6], av_clip_uintp2(B_R + Y2, 30) >> 14);
> +                output_pixel(&dest[4], av_clip_uintp2(av_sat_add32(R_B, Y2), 30) >> 14);
> +                output_pixel(&dest[5], av_clip_uintp2(av_sat_add32( G, Y2), 30) >> 14);
> +                output_pixel(&dest[6], av_clip_uintp2(av_sat_add32(B_R, Y2), 30) >> 14);
>                  output_pixel(&dest[7], av_clip_uintp2(A2 , 30) >> 14);
>                  dest += 8;
>              } else {
> -                output_pixel(&dest[3], av_clip_uintp2(R_B + Y2, 30) >> 14);
> -                output_pixel(&dest[4], av_clip_uintp2( G + Y2, 30) >> 14);
> -                output_pixel(&dest[5], av_clip_uintp2(B_R + Y2, 30) >> 14);
> +                output_pixel(&dest[3], av_clip_uintp2(av_sat_add32(R_B, Y2), 30) >> 14);
> +                output_pixel(&dest[4], av_clip_uintp2(av_sat_add32( G, Y2), 30) >> 14);
> +                output_pixel(&dest[5], av_clip_uintp2(av_sat_add32(B_R, Y2), 30) >> 14);
>                  dest += 6;
>              }
>          }
> @@ -1278,20 +1278,20 @@ yuv2rgba64_1_c_template(SwsContext *c, const int32_t *buf0,
>              G = V * c->yuv2rgb_v2g_coeff + U * c->yuv2rgb_u2g_coeff;
>              B = U * c->yuv2rgb_u2b_coeff;
>
> -            output_pixel(&dest[0], av_clip_uintp2(R_B + Y1, 30) >> 14);
> -            output_pixel(&dest[1], av_clip_uintp2( G + Y1, 30) >> 14);
> -            output_pixel(&dest[2], av_clip_uintp2(B_R + Y1, 30) >> 14);
> +            output_pixel(&dest[0], av_clip_uintp2(av_sat_add32(R_B, Y1), 30) >> 14);
> +            output_pixel(&dest[1], av_clip_uintp2(av_sat_add32( G, Y1), 30) >> 14);
> +            output_pixel(&dest[2], av_clip_uintp2(av_sat_add32(B_R, Y1), 30) >> 14);
>              if (eightbytes) {
>                  output_pixel(&dest[3], av_clip_uintp2(A1 , 30) >> 14);
> -                output_pixel(&dest[4], av_clip_uintp2(R_B + Y2, 30) >> 14);
> -                output_pixel(&dest[5], av_clip_uintp2( G + Y2, 30) >> 14);
> -                output_pixel(&dest[6], av_clip_uintp2(B_R + Y2, 30) >> 14);
> +                output_pixel(&dest[4], av_clip_uintp2(av_sat_add32(R_B, Y2), 30) >> 14);
> +                output_pixel(&dest[5], av_clip_uintp2(av_sat_add32( G, Y2), 30) >> 14);
> +                output_pixel(&dest[6], av_clip_uintp2(av_sat_add32(B_R, Y2), 30) >> 14);
>                  output_pixel(&dest[7], av_clip_uintp2(A2 , 30) >> 14);
>                  dest += 8;
>              } else {
> -                output_pixel(&dest[3], av_clip_uintp2(R_B + Y2, 30) >> 14);
> -                output_pixel(&dest[4], av_clip_uintp2( G + Y2, 30) >> 14);
> -                output_pixel(&dest[5], av_clip_uintp2(B_R + Y2, 30) >> 14);
> +                output_pixel(&dest[3], av_clip_uintp2(av_sat_add32(R_B, Y2), 30) >> 14);
> +                output_pixel(&dest[4], av_clip_uintp2(av_sat_add32( G, Y2), 30) >> 14);
> +                output_pixel(&dest[5], av_clip_uintp2(av_sat_add32(B_R, Y2), 30) >> 14);
>                  dest += 6;
>              }
>          }
> @@ -1351,9 +1351,9 @@ yuv2rgba64_full_X_c_template(SwsContext *c, const int16_t *lumFilter,
>          B = U * c->yuv2rgb_u2b_coeff;
>
>          // 8bit: 30 - 22 = 8bit, 16bit: 30bit - 14 = 16bit
> -        output_pixel(&dest[0], av_clip_uintp2(R_B + Y, 30) >> 14);
> -        output_pixel(&dest[1], av_clip_uintp2( G + Y, 30) >> 14);
> -        output_pixel(&dest[2], av_clip_uintp2(B_R + Y, 30) >> 14);
> +        output_pixel(&dest[0], av_clip_uintp2(av_sat_add32(R_B, Y), 30) >> 14);
> +        output_pixel(&dest[1], av_clip_uintp2(av_sat_add32( G, Y), 30) >> 14);
> +        output_pixel(&dest[2], av_clip_uintp2(av_sat_add32(B_R, Y), 30) >> 14);
>          if (eightbytes) {
>              output_pixel(&dest[3], av_clip_uintp2(A, 30) >> 14);
>              dest += 4;
> @@ -1404,9 +1404,9 @@ yuv2rgba64_full_2_c_template(SwsContext *c, const int32_t *buf[2],
>              A += 1 << 13;
>          }
>
> -        output_pixel(&dest[0], av_clip_uintp2(R_B + Y, 30) >> 14);
> -        output_pixel(&dest[1], av_clip_uintp2( G + Y, 30) >> 14);
> -        output_pixel(&dest[2], av_clip_uintp2(B_R + Y, 30) >> 14);
> +        output_pixel(&dest[0], av_clip_uintp2(av_sat_add32(R_B, Y), 30) >> 14);
> +        output_pixel(&dest[1], av_clip_uintp2(av_sat_add32( G, Y), 30) >> 14);
> +        output_pixel(&dest[2], av_clip_uintp2(av_sat_add32(B_R, Y), 30) >> 14);
>          if (eightbytes) {
>              output_pixel(&dest[3], av_clip_uintp2(A, 30) >> 14);
>              dest += 4;
> @@ -1448,9 +1448,9 @@ yuv2rgba64_full_1_c_template(SwsContext *c, const int32_t *buf0,
>              G = V * c->yuv2rgb_v2g_coeff + U * c->yuv2rgb_u2g_coeff;
>              B = U * c->yuv2rgb_u2b_coeff;
>
> -            output_pixel(&dest[0], av_clip_uintp2(R_B + Y, 30) >> 14);
> -            output_pixel(&dest[1], av_clip_uintp2( G + Y, 30) >> 14);
> -            output_pixel(&dest[2], av_clip_uintp2(B_R + Y, 30) >> 14);
> +            output_pixel(&dest[0], av_clip_uintp2(av_sat_add32(R_B, Y), 30) >> 14);
> +            output_pixel(&dest[1], av_clip_uintp2(av_sat_add32( G, Y), 30) >> 14);
> +            output_pixel(&dest[2], av_clip_uintp2(av_sat_add32(B_R, Y), 30) >> 14);
>              if (eightbytes) {
>                  output_pixel(&dest[3], av_clip_uintp2(A, 30) >> 14);
>                  dest += 4;
> @@ -1481,9 +1481,9 @@ yuv2rgba64_full_1_c_template(SwsContext *c, const int32_t *buf0,
>              G = V * c->yuv2rgb_v2g_coeff + U * c->yuv2rgb_u2g_coeff;
>              B = U * c->yuv2rgb_u2b_coeff;
>
> -            output_pixel(&dest[0], av_clip_uintp2(R_B + Y, 30) >> 14);
> -            output_pixel(&dest[1], av_clip_uintp2( G + Y, 30) >> 14);
> -            output_pixel(&dest[2], av_clip_uintp2(B_R + Y, 30) >> 14);
> +            output_pixel(&dest[0], av_clip_uintp2(av_sat_add32(R_B, Y), 30) >> 14);
> +            output_pixel(&dest[1], av_clip_uintp2(av_sat_add32( G, Y), 30) >> 14);
> +            output_pixel(&dest[2], av_clip_uintp2(av_sat_add32(B_R, Y), 30) >> 14);
>              if (eightbytes) {
>                  output_pixel(&dest[3], av_clip_uintp2(A, 30) >> 14);
>                  dest += 4;
> --
> 2.38.1.273.g43a17bfeac-goog
>
>
--
Drew Dunne
asdunne@google.com
[-- Attachment #2: overflow_input_w66h64.yuva420p10le --]
[-- Type: application/octet-stream, Size: 21120 bytes --]
[-- Attachment #3: overflow.png --]
[-- Type: image/png, Size: 43399 bytes --]
[-- Attachment #4: patched.png --]
[-- Type: image/png, Size: 37535 bytes --]