Date: Mon, 5 Sep 2022 00:23:41 +0300 (EEST)
From: Martin Storsjö
To: Hubert Mazur
Subject: Re: [FFmpeg-devel] [PATCH 5/5] lavc/aarch64: Provide neon implementation of nsse16
In-Reply-To: <20220822152627.1992008-6-hum@semihalf.com>
References: <20220822152627.1992008-1-hum@semihalf.com> <20220822152627.1992008-6-hum@semihalf.com>
List-Id: FFmpeg development discussions and patches
Cc: gjb@semihalf.com, upstream@semihalf.com, jswinney@amazon.com, ffmpeg-devel@ffmpeg.org, mw@semihalf.com, spop@amazon.com

On Mon, 22 Aug 2022, Hubert Mazur wrote:

> Add vectorized implementation of nsse16 function.
>
> Performance comparison tests are shown below.
> - nsse_0_c: 707.0
> - nsse_0_neon: 120.0
>
> Benchmarks and tests run with checkasm tool on AWS Graviton 3.
>
> Signed-off-by: Hubert Mazur
> ---
>  libavcodec/aarch64/me_cmp_init_aarch64.c |  15 +++
>  libavcodec/aarch64/me_cmp_neon.S         | 126 +++++++++++++++++++++++
>  2 files changed, 141 insertions(+)
>
> diff --git a/libavcodec/aarch64/me_cmp_init_aarch64.c b/libavcodec/aarch64/me_cmp_init_aarch64.c
> index 8c295d5457..146ef04345 100644
> --- a/libavcodec/aarch64/me_cmp_init_aarch64.c
> +++ b/libavcodec/aarch64/me_cmp_init_aarch64.c
> @@ -49,6 +49,10 @@ int vsse16_neon(MpegEncContext *c, const uint8_t *s1, const uint8_t *s2,
>                        ptrdiff_t stride, int h);
>  int vsse_intra16_neon(MpegEncContext *c, const uint8_t *s, const uint8_t *dummy,
>                        ptrdiff_t stride, int h);
> +int nsse16_neon(int multiplier, const uint8_t *s, const uint8_t *s2,
> +                ptrdiff_t stride, int h);
> +int nsse16_neon_wrapper(MpegEncContext *c, const uint8_t *s1, const uint8_t *s2,
> +                        ptrdiff_t stride, int h);
>
>  av_cold void ff_me_cmp_init_aarch64(MECmpContext *c, AVCodecContext *avctx)
>  {
> @@ -72,5 +76,16 @@ av_cold void ff_me_cmp_init_aarch64(MECmpContext *c, AVCodecContext *avctx)
>
>          c->vsse[0] = vsse16_neon;
>          c->vsse[4] = vsse_intra16_neon;
> +
> +        c->nsse[0] = nsse16_neon_wrapper;
>      }
>  }
> +
> +int nsse16_neon_wrapper(MpegEncContext *c, const uint8_t *s1, const uint8_t *s2,
> +                        ptrdiff_t stride, int h)
> +{
> +    if (c)
> +        return nsse16_neon(c->avctx->nsse_weight, s1, s2, stride, h);
> +    else
> +        return nsse16_neon(8, s1, s2, stride, h);
> +}
> \ No newline at end of file

The indentation is off for this file, and it's missing the final newline.
> diff --git a/libavcodec/aarch64/me_cmp_neon.S b/libavcodec/aarch64/me_cmp_neon.S
> index 46d4dade5d..9fe96e111c 100644
> --- a/libavcodec/aarch64/me_cmp_neon.S
> +++ b/libavcodec/aarch64/me_cmp_neon.S
> @@ -889,3 +889,129 @@ function vsse_intra16_neon, export=1
>
>          ret
> endfunc
> +
> +function nsse16_neon, export=1
> +        // x0           multiplier
> +        // x1           uint8_t *pix1
> +        // x2           uint8_t *pix2
> +        // x3           ptrdiff_t stride
> +        // w4           int h
> +
> +        str             x0, [sp, #-0x40]!
> +        stp             x1, x2, [sp, #0x10]
> +        stp             x3, x4, [sp, #0x20]
> +        str             lr, [sp, #0x30]
> +        bl              sse16_neon
> +        ldr             lr, [sp, #0x30]
> +        mov             w9, w0          // here we store score1
> +        ldr             x5, [sp]
> +        ldp             x1, x2, [sp, #0x10]
> +        ldp             x3, x4, [sp, #0x20]
> +        add             sp, sp, #0x40
> +
> +        movi            v16.8h, #0
> +        movi            v17.8h, #0
> +        movi            v18.8h, #0
> +        movi            v19.8h, #0
> +
> +        mov             x10, x1         // x1
> +        mov             x14, x2         // x2

I don't see why you need to make a copy of x1/x2 here, as you don't use
x1/x2 after this at all.

> +        add             x11, x1, x3     // x1 + stride
> +        add             x15, x2, x3     // x2 + stride
> +        add             x12, x1, #1     // x1 + 1
> +        add             x16, x2, #1     // x2 + 1

FWIW, instead of making two loads, for [x1] and [x1+1], as we don't need
the final value at [x1+16], I would normally just do one load of [x1] and
then make a shifted version with the 'ext' instruction; ext is generally
cheaper than doing redundant loads. On the other hand, by doing two loads,
you don't have a serial dependency on the first load.
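(A rough, untested sketch of what I mean with the 'ext' variant, reusing
the register names from the patch; the last lane of the shifted vector is
bogus, but that column is discarded after the loop anyway:)

```asm
        // One load per row instead of two; derive "pix1 + 1" from the
        // row that was already loaded.
        ld1             {v0.16b}, [x10], x3         // pix1, current row
        ext             v2.16b, v0.16b, v0.16b, #1  // pix1 + 1; lane 15 is
                                                    // garbage, zeroed out
                                                    // after the loop
```

This would also let you drop the x12/x16 pointers entirely.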
> +// iterate by one
> +2:
> +        ld1             {v0.16b}, [x10], x3
> +        ld1             {v1.16b}, [x11], x3
> +        ld1             {v2.16b}, [x12], x3
> +        usubl           v31.8h, v0.8b, v1.8b
> +        ld1             {v3.16b}, [x13], x3
> +        usubl2          v30.8h, v0.16b, v1.16b
> +        usubl           v29.8h, v2.8b, v3.8b
> +        usubl2          v28.8h, v2.16b, v3.16b
> +        saba            v16.8h, v31.8h, v29.8h
> +        ld1             {v4.16b}, [x14], x3
> +        ld1             {v5.16b}, [x15], x3
> +        saba            v17.8h, v30.8h, v28.8h
> +        ld1             {v6.16b}, [x16], x3
> +        usubl           v27.8h, v4.8b, v5.8b
> +        ld1             {v7.16b}, [x17], x3

So, looking at the main implementation structure here, by looking at the
non-unrolled version:

You're doing 8 loads per iteration here - and I would say you can do this
with 2 loads per iteration.

By reusing the loaded data from the previous iteration instead of
duplicated loading, you can get this down from 8 to 4 loads. And by
shifting with 'ext' instead of a separate load, you can get it down to 2
loads.

(Then again, with 4 loads instead of 2, you can have the overlapping loads
running in parallel, instead of having to wait for the first load to
complete if using ext. I'd suggest trying both and seeing which one works
better - although the tradeoff might be different between different cores.
But storing data from the previous line instead of such duplicated loading
is certainly better in any case.)

> +        usubl2          v26.8h, v4.16b, v5.16b
> +        usubl           v25.8h, v6.8b, v7.8b
> +        usubl2          v24.8h, v6.16b, v7.16b
> +        saba            v18.8h, v27.8h, v25.8h
> +        subs            w4, w4, #1
> +        saba            v19.8h, v26.8h, v24.8h
> +
> +        cbnz            w4, 2b
> +
> +3:
> +        sqsub           v16.8h, v16.8h, v18.8h
> +        sqsub           v17.8h, v17.8h, v19.8h
> +        ins             v17.h[7], wzr

This is very good, that you figured out how to handle the odd element here
outside of the loops!

// Martin

_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".