Date: Fri, 30 May 2025 10:07:10 +0300 (EEST)
From: Martin Storsjö
To: FFmpeg development discussions and patches <ffmpeg-devel@ffmpeg.org>
Cc: Dmitriy Kovalenko
Subject: Re: [FFmpeg-devel] [PATCH 1/2] swscale: rgb_to_yuv neon optimizations

On Thu, 29 May 2025, Martin Storsjö wrote:

> In this case, they also add a direct dependence on the updated pointer
> register from the directly preceding instruction, which _is_ harmful on
> in-order cores, unless it entirely ignores the instruction.)
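To make the quoted concern concrete, here is a hypothetical sketch (not code from the patch) of the problematic placement, and one possible way around it; the exact offsets are illustrative only:

```asm
// Problematic on in-order cores: the prfm consumes the x3 value that
// the immediately preceding post-increment load writes back, so the
// core may stall on the address dependence.
        ld4     { v16.16b, v17.16b, v18.16b, v19.16b }, [x3], #64
        prfm    pldl1strm, [x3, #64]

// One possible rescheduling (illustrative only): issue the prefetch
// before the load, addressing off the old x3 with a larger offset.
// The target address is the same (old x3 + 128), but there is no
// dependence on the just-updated register.
        prfm    pldl1strm, [x3, #128]
        ld4     { v16.16b, v17.16b, v18.16b, v19.16b }, [x3], #64
```

Whether any such placement actually helps would of course need benchmarking on the cores in question.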
I did benchmark this, and indeed it causes a large slowdown on Cortex-A53:

After this patch:

bgra_to_uv_8_neon:                 209.0 ( 0.75x)
bgra_to_uv_128_neon:               541.5 ( 4.09x)
bgra_to_uv_1080_neon:             4584.8 ( 4.03x)
bgra_to_uv_1920_neon:             7865.9 ( 4.18x)
bgra_to_uv_half_8_neon:            112.0 ( 0.85x)
bgra_to_uv_half_128_neon:          348.8 ( 3.58x)
bgra_to_uv_half_1080_neon:        2873.6 ( 3.60x)
bgra_to_uv_half_1920_neon:        4973.5 ( 3.69x)

With the prfm instructions removed:

bgra_to_uv_8_neon:                 215.3 ( 0.72x)
bgra_to_uv_128_neon:               516.5 ( 4.30x)
bgra_to_uv_1080_neon:             4387.6 ( 4.21x)
bgra_to_uv_1920_neon:             7503.5 ( 4.37x)
bgra_to_uv_half_8_neon:            111.8 ( 0.86x)
bgra_to_uv_half_128_neon:          331.8 ( 3.77x)
bgra_to_uv_half_1080_neon:        2739.1 ( 3.78x)
bgra_to_uv_half_1920_neon:        4728.9 ( 3.88x)

In other words, these functions are around 5% faster with the prfm instructions removed. Since prefetch instructions are controversial within ffmpeg, I urge you to split the addition of prefetch instructions into a separate patch. Scheduled where they are now, right after a post-increment load that updates the pointer, they are problematic for in-order cores. If you can schedule them elsewhere, where they don't actively hurt in-order cores, we can maybe consider it.

>> diff --git a/libswscale/aarch64/input.S b/libswscale/aarch64/input.S
>> index c1c0adffc8..ee8eb24c14 100644
>> --- a/libswscale/aarch64/input.S
>> +++ b/libswscale/aarch64/input.S
>> @@ -1,5 +1,4 @@
>> -/*
>> - * Copyright (c) 2024 Zhao Zhili
>> +/* Copyright (c) 2024 Zhao Zhili
>
> This is an unrelated change

This comment is unaddressed.

>>   *
>>   * This file is part of FFmpeg.
>>   *
>> @@ -57,20 +56,41 @@
>>          sqshrn2         \dst\().8h, \dst2\().4s, \right_shift // dst_higher_half = dst2 >> right_shift
>>  .endm
>>
>> +// interleaved product version of the rgb to yuv gives slightly better performance on non-performant mobile
>> +.macro rgb_to_uv_interleaved_product r, g, b, u_coef0, u_coef1, u_coef2, v_coef0, v_coef1, v_coef2, u_dst1, u_dst2, v_dst1, v_dst2, u_dst, v_dst, right_shift
>> +        smlal           \u_dst1\().4s, \u_coef0\().4h, \r\().4h // U += ru * r (first 4)
>> +        smlal           \v_dst1\().4s, \v_coef0\().4h, \r\().4h // V += rv * r (first 4)
>> +        smlal2          \u_dst2\().4s, \u_coef0\().8h, \r\().8h // U += ru * r (second 4)
>> +        smlal2          \v_dst2\().4s, \v_coef0\().8h, \r\().8h // V += rv * r (second 4)
>> +
>
> The patch adds trailing whitespace here and in many other places; make sure
> you don't do that. (It is visible by doing e.g. "git show".)

This comment is unaddressed - you still have trailing whitespace.

> Also, as a general rule, indent instructions within macros in the same way
> as elsewhere.

The instructions are better indented now, but the operand column is still
misindented by one space. The branch I referred you to on github does
contain an indent checker script which would point this out.

>> +        smlal           \u_dst1\().4s, \u_coef1\().4h, \g\().4h // U += gu * g (first 4)
>> +        smlal           \v_dst1\().4s, \v_coef1\().4h, \g\().4h // V += gv * g (first 4)
>> +        smlal2          \u_dst2\().4s, \u_coef1\().8h, \g\().8h // U += gu * g (second 4)
>> +        smlal2          \v_dst2\().4s, \v_coef1\().8h, \g\().8h // V += gv * g (second 4)
>
> If you with "non-performant mobile" mean small in-order cores, most of them
> can handle repeated accumulation like these even faster, if you sequence
> these so that all accumulations to one register are sequential. E.g.
> first all "smlal \u_dst1\().4s", followed by all "smlal \u_dst2\().4s",
> followed by \v_dst1, followed by \v_dst2. It's worth benchmarking if you do
> have access to such cores (e.g. Cortex-A53/A55; perhaps that's also the case
> on the Cortex-R you mentioned in the commit message).
>
> That kind of code sequence is very counterintuitive, when considering
> instruction scheduling for an in-order core, but there is an explicit
> mention of it in the code optimization guides for them; it's usually
> worthwhile to test out such a form of these accumulations.

This comment is unaddressed.

>> +
>> +        smlal           \u_dst1\().4s, \u_coef2\().4h, \b\().4h // U += bu * b (first 4)
>> +        smlal           \v_dst1\().4s, \v_coef2\().4h, \b\().4h // V += bv * b (first 4)
>> +        smlal2          \u_dst2\().4s, \u_coef2\().8h, \b\().8h // U += bu * b (second 4)
>> +        smlal2          \v_dst2\().4s, \v_coef2\().8h, \b\().8h // V += bv * b (second 4)
>> +
>> +        sqshrn          \u_dst\().4h, \u_dst1\().4s, \right_shift // U first 4 pixels
>> +        sqshrn2         \u_dst\().8h, \u_dst2\().4s, \right_shift // U all 8 pixels
>> +        sqshrn          \v_dst\().4h, \v_dst1\().4s, \right_shift // V first 4 pixels
>> +        sqshrn2         \v_dst\().8h, \v_dst2\().4s, \right_shift // V all 8 pixels
>> +.endm
>> +
>>  .macro rgbToY_neon fmt_bgr, fmt_rgb, element, alpha_first=0
>>  function ff_\fmt_bgr\()ToY_neon, export=1
>> -        cmp             w4, #0                  // check width > 0
>> +        cbz             w4, 3f                  // check width > 0
>>          ldp             w12, w11, [x5]          // w12: ry, w11: gy
>>          ldr             w10, [x5, #8]           // w10: by
>> -        b.gt            4f
>> -        ret
>> +        b               4f
>>  endfunc
>>
>>  function ff_\fmt_rgb\()ToY_neon, export=1
>> -        cmp             w4, #0                  // check width > 0
>> +        cbz             w4, 3f                  // check width > 0
>>          ldp             w10, w11, [x5]          // w10: ry, w11: gy
>>          ldr             w12, [x5, #8]           // w12: by
>> -        b.le            3f
>>  4:
>>          mov             w9, #256                // w9 = 1 << (RGB2YUV_SHIFT - 7)
>>          movk            w9, #8, lsl #16         // w9 += 32 << (RGB2YUV_SHIFT - 1)
>> @@ -158,8 +178,7 @@ rgbToY_neon abgr32, argb32, element=4, alpha_first=1
>>
>>  .macro rgbToUV_half_neon fmt_bgr, fmt_rgb, element, alpha_first=0
>>  function ff_\fmt_bgr\()ToUV_half_neon, export=1
>> -        cmp             w5, #0                  // check width > 0
>> -        b.le            3f
>> +        cbz             w5, 3f                  // check width > 0
>>
>>          ldp             w12, w11, [x6, #12]
>>          ldp             w10, w15, [x6, #20]
>> @@ -168,7 +187,7 @@ function ff_\fmt_bgr\()ToUV_half_neon, export=1
>>  endfunc
>>
>>  function ff_\fmt_rgb\()ToUV_half_neon, export=1
>> -        cmp             w5, #0                  // check width > 0
>> +        cmp             w5, #0                  // check width > 0
>>          b.le            3f
>>
>>          ldp             w10, w11, [x6, #12]     // w10: ru, w11: gu
>> @@ -178,32 +197,41 @@ function ff_\fmt_rgb\()ToUV_half_neon, export=1
>>          cmp             w5, #8
>>          rgb_set_uv_coeff half=1
>>          b.lt            2f
>> -1:
>> +1:      // load 16 pixels and prefetch memory for the next block
>>  .if \element == 3
>> -        ld3             { v16.16b, v17.16b, v18.16b }, [x3]
>> +        ld3             { v16.16b, v17.16b, v18.16b }, [x3], #48
>> +        prfm            pldl1strm, [x3, #48]
>>  .else
>> -        ld4             { v16.16b, v17.16b, v18.16b, v19.16b }, [x3]
>> +        ld4             { v16.16b, v17.16b, v18.16b, v19.16b }, [x3], #64
>> +        prfm            pldl1strm, [x3, #64]
>>  .endif
>> +
>>  .if \alpha_first
>> -        uaddlp          v21.8h, v19.16b
>> -        uaddlp          v20.8h, v18.16b
>> -        uaddlp          v19.8h, v17.16b
>> +        uaddlp          v21.8h, v19.16b         // v21: summed b pairs
>> +        uaddlp          v20.8h, v18.16b         // v20: summed g pairs
>> +        uaddlp          v19.8h, v17.16b         // v19: summed r pairs
>>  .else
>> -        uaddlp          v19.8h, v16.16b         // v19: r
>> -        uaddlp          v20.8h, v17.16b         // v20: g
>> -        uaddlp          v21.8h, v18.16b         // v21: b
>> +        uaddlp          v19.8h, v16.16b         // v19: summed r pairs
>> +        uaddlp          v20.8h, v17.16b         // v20: summed g pairs
>> +        uaddlp          v21.8h, v18.16b         // v21: summed b pairs
>>  .endif
>>
>> -        rgb_to_yuv_product v19, v20, v21, v22, v23, v16, v0, v1, v2, #10
>> -        rgb_to_yuv_product v19, v20, v21, v24, v25, v17, v3, v4, v5, #10
>> -        sub             w5, w5, #8              // width -= 8
>> -        add             x3, x3, #(16*\element)
>> -        cmp             w5, #8                  // width >= 8 ?
>> +        mov             v22.16b, v6.16b         // U first half
>> +        mov             v23.16b, v6.16b         // U second half
>> +        mov             v24.16b, v6.16b         // V first half
>> +        mov             v25.16b, v6.16b         // V second half
>> +
>> +        rgb_to_uv_interleaved_product v19, v20, v21, v0, v1, v2, v3, v4, v5, v22, v23, v24, v25, v16, v17, #10
>> +
>>          str             q16, [x0], #16          // store dst_u
>>          str             q17, [x1], #16          // store dst_v
>> +
>> +        sub             w5, w5, #8              // width -= 8
>> +        cmp             w5, #8                  // width >= 8 ?
>>          b.ge            1b
>> -        cbz             w5, 3f
>> -2:
>> +        cbz             w5, 3f                  // No pixels left? Exit
>> +
>> +2:      // Scalar fallback for remaining pixels
>>  .if \alpha_first
>>          rgb_load_add_half 1, 5, 2, 6, 3, 7
>>  .else
>> @@ -213,21 +241,24 @@ function ff_\fmt_rgb\()ToUV_half_neon, export=1
>>          rgb_load_add_half 0, 4, 1, 5, 2, 6
>>  .endif
>>  .endif
>> -
>>          smaddl          x8, w2, w10, x9         // dst_u = ru * r + const_offset
>> +        smaddl          x16, w2, w13, x9        // dst_v = rv * r + const_offset (parallel)
>> +
>>          smaddl          x8, w4, w11, x8         // dst_u += gu * g
>> +        smaddl          x16, w4, w14, x16       // dst_v += gv * g (parallel)
>> +
>>          smaddl          x8, w7, w12, x8         // dst_u += bu * b
>> -        asr             x8, x8, #10             // dst_u >>= 10
>> +        smaddl          x16, w7, w15, x16       // dst_v += bv * b (parallel)
>> +
>> +        asr             w8, w8, #10             // dst_u >>= 10
>> +        asr             w16, w16, #10           // dst_v >>= 10
>> +
>>          strh            w8, [x0], #2            // store dst_u
>> -
>> -        smaddl          x8, w2, w13, x9         // dst_v = rv * r + const_offset
>> -        smaddl          x8, w4, w14, x8         // dst_v += gv * g
>> -        smaddl          x8, w7, w15, x8         // dst_v += bv * b
>> -        asr             x8, x8, #10             // dst_v >>= 10
>> -        sub             w5, w5, #1
>> -        add             x3, x3, #(2*\element)
>> -        strh            w8, [x1], #2            // store dst_v
>> -        cbnz            w5, 2b
>> +        strh            w16, [x1], #2           // store dst_v
>> +
>> +        sub             w5, w5, #1              // width--
>> +        add             x3, x3, #(2*\element)   // Advance source pointer
>> +        cbnz            w5, 2b                  // Process next pixel if any left
>>  3:
>>          ret
>>  endfunc
>> @@ -244,9 +275,9 @@ function ff_\fmt_bgr\()ToUV_neon, export=1
>>          cmp             w5, #0                  // check width > 0
>>          b.le            3f
>>
>> -        ldp             w12, w11, [x6, #12]
>> -        ldp             w10, w15, [x6, #20]
>> -        ldp             w14, w13, [x6, #28]
>> +        ldp             w12, w11, [x6, #12]     // bu, gu
>> +        ldp             w10, w15, [x6, #20]     // ru, bv
>> +        ldp             w14, w13, [x6, #28]     // gv, rv
>>          b               4f
>>  endfunc
>>
>> +        .else // element == 4
>> +        ld4             { v16.16b, v17.16b, v18.16b, v19.16b }, [x3], #64
>> +        prfm            pldl1strm, [x3, #64]
>> +        .endif
>> +        uxtl            v19.8h, v16.8b          // v19: r
>> +        uxtl            v20.8h, v17.8b          // v20: g
>> +        uxtl            v21.8h, v18.8b          // v21: b
>> +        uxtl2           v22.8h, v16.16b         // v22: r
>> +        uxtl2           v23.8h, v17.16b         // v23: g
>> +        uxtl2           v24.8h, v18.16b         // v24: b
>>  .endif
>> -        rgb_to_yuv_product v19, v20, v21, v25, v26, v16, v0, v1, v2, #9
>> -        rgb_to_yuv_product v22, v23, v24, v27, v28, v17, v0, v1, v2, #9
>> -        rgb_to_yuv_product v19, v20, v21, v25, v26, v18, v3, v4, v5, #9
>> -        rgb_to_yuv_product v22, v23, v24, v27, v28, v19, v3, v4, v5, #9
>> -        sub             w5, w5, #16
>> -        add             x3, x3, #(16*\element)
>> -        cmp             w5, #16
>> -        stp             q16, q17, [x0], #32     // store to dst_u
>> -        stp             q18, q19, [x1], #32     // store to dst_v
>> +        // process 2 groups of 8 pixels
>> +        mov             v25.16b, v6.16b         // U_dst1 = const_offset (32-bit accumulators)
>> +        mov             v26.16b, v6.16b         // U_dst2 = const_offset
>> +        mov             v27.16b, v6.16b         // V_dst1 = const_offset
>> +        mov             v28.16b, v6.16b         // V_dst2 = const_offset
>> +        rgb_to_uv_interleaved_product v19, v20, v21, v0, v1, v2, v3, v4, v5, v25, v26, v27, v28, v16, v18, #9
>> +
>> +        mov             v25.16b, v6.16b
>> +        mov             v26.16b, v6.16b
>> +        mov             v27.16b, v6.16b
>> +        mov             v28.16b, v6.16b
>> +        rgb_to_uv_interleaved_product v22, v23, v24, v0, v1, v2, v3, v4, v5, v25, v26, v27, v28, v17, v19, #9
>> +
>> +        sub             w5, w5, #16             // width -= 16
>> +        cmp             w5, #16                 // width >= 16 ?
>> +        stp             q16, q17, [x0], #32     // store to dst_u (post-increment)
>> +        stp             q18, q19, [x1], #32     // store to dst_v (post-increment)
>>          b.ge            1b
>> -        cbz             w5, 3f
>> +        cbz             w5, 3f                  // No pixels left? Exit
>> +
>>  2:
>>  .if \alpha_first
>>          ldrb            w16, [x3, #1]           // w16: r
>> @@ -292,13 +350,13 @@ function ff_\fmt_rgb\()ToUV_neon, export=1
>>          smaddl          x8, w16, w10, x9        // x8 = ru * r + const_offset
>>          smaddl          x8, w17, w11, x8        // x8 += gu * g
>>          smaddl          x8, w4, w12, x8         // x8 += bu * b
>> -        asr             w8, w8, #9              // x8 >>= 9
>> +        asr             x8, x8, #9              // x8 >>= 9
>>          strh            w8, [x0], #2            // store to dst_u
>
> Does this make any practical difference, as we're just storing the lower 32
> bits anyway?

This comment is unaddressed.

// Martin
_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".