Git Inbox Mirror of the ffmpeg-devel mailing list - see https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
* [FFmpeg-devel] [PATCH 0/6] avcodec/vc1: Arm optimisations
@ 2022-03-17 18:58 Ben Avison
  2022-03-17 18:58 ` [FFmpeg-devel] [PATCH 1/6] avcodec/vc1: Arm 64-bit NEON deblocking filter fast paths Ben Avison
                   ` (7 more replies)
  0 siblings, 8 replies; 55+ messages in thread
From: Ben Avison @ 2022-03-17 18:58 UTC (permalink / raw)
  To: ffmpeg-devel; +Cc: Ben Avison

The VC1 decoder was missing many important fast paths for Arm, especially
for 64-bit Arm. This submission fills in implementations for every function
for which a fast-path hook already existed but whose fallback C version was
taking 1% or more of the runtime, and adds a new hook to permit
vc1_unescape_buffer() to be overridden.
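
For context, the new hook follows the usual DSP-context pattern used
elsewhere in libavcodec. A minimal sketch of the wiring (field and symbol
names here are illustrative; see patch 6 for the actual change):

    /* vc1dsp.h: a new overridable entry point, mirroring the signature of
     * the static unescape helper currently in vc1dec.c */
    int (*vc1_unescape_buffer)(const uint8_t *src, int size, uint8_t *dst);

    /* vc1dsp.c: the generic init installs the C implementation... */
    dsp->vc1_unescape_buffer = vc1_unescape_buffer;

    /* ...and the per-architecture init replaces it when NEON is available */
    if (have_neon(av_get_cpu_flags()))
        dsp->vc1_unescape_buffer = ff_vc1_unescape_buffer_neon;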

I've measured the playback speed on a 1.5 GHz Cortex-A72 (Raspberry Pi 4)
using `ffmpeg -i <bitstream> -f null -` for a couple of example streams:

Architecture:  AArch32    AArch32    AArch64    AArch64
Stream:        1          2          1          2
Before speed:  1.22x      0.82x      1.00x      0.67x
After speed:   1.31x      0.98x      1.39x      1.06x
Improvement:   7.4%       20%        39%        58%
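
The speed figures above are the "speed=" multiplier that ffmpeg reports on
its progress line while decoding; for example (file name illustrative):

    ffmpeg -i stream1.vc1 -f null -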

`make fate` passes on both AArch32 and AArch64.
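
For reviewers cross-checking the assembly, here is a scalar sketch of the
per-pixel-pair decision logic that the new deblocking filters implement,
reconstructed from the comments in the NEON sources (illustrative only,
not the exact C fallback in vc1dsp.c):

    #include <stdlib.h>  /* abs() */

    /* p[0..7] holds P1..P8 perpendicular to the edge; the block boundary
     * lies between p[3] (P4) and p[4] (P5); pq is the PQUANT parameter. */
    static void vc1_filter_pair_sketch(int p[8], int pq)
    {
        int a0_raw = (2*p[2] - 5*p[3] + 5*p[4] - 2*p[5] + 4) >> 3; /* srshr #3 */
        int a0 = abs(a0_raw);
        int a1 = abs((2*p[0] - 5*p[1] + 5*p[2] - 2*p[3] + 4) >> 3);
        int a2 = abs((2*p[4] - 5*p[5] + 5*p[6] - 2*p[7] + 4) >> 3);
        int a3 = a1 < a2 ? a1 : a2;
        int clip = abs(p[3] - p[4]) >> 1;

        if (clip == 0 || a0 >= pq || a3 >= a0)
            return;                          /* pair is left unfiltered */

        int d = (5 * (a0 - a3)) >> 3;
        if (d > clip)
            d = clip;                        /* FFMIN(d, clip) */
        /* clip_sign and a0_sign are 0 or -1, so their difference is -1, 0
         * or +1; matching signs zero the update entirely */
        int clip_sign = (p[3] - p[4]) < 0 ? -1 : 0;
        int a0_sign   = a0_raw        < 0 ? -1 : 0;
        d *= clip_sign - a0_sign;
        p[3] -= d;                           /* P4; saturated to u8 on store */
        p[4] += d;                           /* P5; saturated to u8 on store */
    }

The vector versions apply this to 4, 8 or 16 pairs at once and, matching the
C fallback's behaviour, skip storing a whole group of 4 pairs when that
group's key pair is left unfiltered (hence the tbnz early-outs on the packed
test masks).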

Ben Avison (6):
  avcodec/vc1: Arm 64-bit NEON deblocking filter fast paths
  avcodec/vc1: Arm 32-bit NEON deblocking filter fast paths
  avcodec/vc1: Arm 64-bit NEON inverse transform fast paths
  avcodec/idctdsp: Arm 64-bit NEON block add and clamp fast paths
  avcodec/blockdsp: Arm 64-bit NEON block clear fast paths
  avcodec/vc1: Introduce fast path for unescaping bitstream buffer

 libavcodec/aarch64/Makefile                |    6 +-
 libavcodec/aarch64/blockdsp_init_aarch64.c |   42 +
 libavcodec/aarch64/blockdsp_neon.S         |   43 +
 libavcodec/aarch64/idctdsp_init_aarch64.c  |   26 +-
 libavcodec/aarch64/idctdsp_neon.S          |  130 ++
 libavcodec/aarch64/vc1dsp_init_aarch64.c   |   93 ++
 libavcodec/aarch64/vc1dsp_neon.S           | 1552 ++++++++++++++++++++
 libavcodec/arm/vc1dsp_init_neon.c          |   74 +
 libavcodec/arm/vc1dsp_neon.S               |  761 ++++++++++
 libavcodec/blockdsp.c                      |    2 +
 libavcodec/blockdsp.h                      |    1 +
 libavcodec/vc1dec.c                        |   20 +-
 libavcodec/vc1dsp.c                        |    2 +
 libavcodec/vc1dsp.h                        |    3 +
 14 files changed, 2736 insertions(+), 19 deletions(-)
 create mode 100644 libavcodec/aarch64/blockdsp_init_aarch64.c
 create mode 100644 libavcodec/aarch64/blockdsp_neon.S
 create mode 100644 libavcodec/aarch64/idctdsp_neon.S
 create mode 100644 libavcodec/aarch64/vc1dsp_neon.S

-- 
2.25.1


* [FFmpeg-devel] [PATCH 1/6] avcodec/vc1: Arm 64-bit NEON deblocking filter fast paths
  2022-03-17 18:58 [FFmpeg-devel] [PATCH 0/6] avcodec/vc1: Arm optimisations Ben Avison
@ 2022-03-17 18:58 ` Ben Avison
  2022-03-17 18:58 ` [FFmpeg-devel] [PATCH 2/6] avcodec/vc1: Arm 32-bit " Ben Avison
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 55+ messages in thread
From: Ben Avison @ 2022-03-17 18:58 UTC (permalink / raw)
  To: ffmpeg-devel; +Cc: Ben Avison

Signed-off-by: Ben Avison <bavison@riscosopen.org>
---
 libavcodec/aarch64/Makefile              |   1 +
 libavcodec/aarch64/vc1dsp_init_aarch64.c |  14 +
 libavcodec/aarch64/vc1dsp_neon.S         | 698 +++++++++++++++++++++++
 3 files changed, 713 insertions(+)
 create mode 100644 libavcodec/aarch64/vc1dsp_neon.S

diff --git a/libavcodec/aarch64/Makefile b/libavcodec/aarch64/Makefile
index 954461f81d..5b25e4dfb9 100644
--- a/libavcodec/aarch64/Makefile
+++ b/libavcodec/aarch64/Makefile
@@ -48,6 +48,7 @@ NEON-OBJS-$(CONFIG_IDCTDSP)             += aarch64/simple_idct_neon.o
 NEON-OBJS-$(CONFIG_MDCT)                += aarch64/mdct_neon.o
 NEON-OBJS-$(CONFIG_MPEGAUDIODSP)        += aarch64/mpegaudiodsp_neon.o
 NEON-OBJS-$(CONFIG_PIXBLOCKDSP)         += aarch64/pixblockdsp_neon.o
+NEON-OBJS-$(CONFIG_VC1DSP)              += aarch64/vc1dsp_neon.o
 NEON-OBJS-$(CONFIG_VP8DSP)              += aarch64/vp8dsp_neon.o
 
 # decoders/encoders
diff --git a/libavcodec/aarch64/vc1dsp_init_aarch64.c b/libavcodec/aarch64/vc1dsp_init_aarch64.c
index 13dfd74940..edfb296b75 100644
--- a/libavcodec/aarch64/vc1dsp_init_aarch64.c
+++ b/libavcodec/aarch64/vc1dsp_init_aarch64.c
@@ -25,6 +25,13 @@
 
 #include "config.h"
 
+void ff_vc1_v_loop_filter4_neon(uint8_t *src, int stride, int pq);
+void ff_vc1_h_loop_filter4_neon(uint8_t *src, int stride, int pq);
+void ff_vc1_v_loop_filter8_neon(uint8_t *src, int stride, int pq);
+void ff_vc1_h_loop_filter8_neon(uint8_t *src, int stride, int pq);
+void ff_vc1_v_loop_filter16_neon(uint8_t *src, int stride, int pq);
+void ff_vc1_h_loop_filter16_neon(uint8_t *src, int stride, int pq);
+
 void ff_put_vc1_chroma_mc8_neon(uint8_t *dst, uint8_t *src, ptrdiff_t stride,
                                 int h, int x, int y);
 void ff_avg_vc1_chroma_mc8_neon(uint8_t *dst, uint8_t *src, ptrdiff_t stride,
@@ -39,6 +46,13 @@ av_cold void ff_vc1dsp_init_aarch64(VC1DSPContext *dsp)
     int cpu_flags = av_get_cpu_flags();
 
     if (have_neon(cpu_flags)) {
+        dsp->vc1_v_loop_filter4  = ff_vc1_v_loop_filter4_neon;
+        dsp->vc1_h_loop_filter4  = ff_vc1_h_loop_filter4_neon;
+        dsp->vc1_v_loop_filter8  = ff_vc1_v_loop_filter8_neon;
+        dsp->vc1_h_loop_filter8  = ff_vc1_h_loop_filter8_neon;
+        dsp->vc1_v_loop_filter16 = ff_vc1_v_loop_filter16_neon;
+        dsp->vc1_h_loop_filter16 = ff_vc1_h_loop_filter16_neon;
+
         dsp->put_no_rnd_vc1_chroma_pixels_tab[0] = ff_put_vc1_chroma_mc8_neon;
         dsp->avg_no_rnd_vc1_chroma_pixels_tab[0] = ff_avg_vc1_chroma_mc8_neon;
         dsp->put_no_rnd_vc1_chroma_pixels_tab[1] = ff_put_vc1_chroma_mc4_neon;
diff --git a/libavcodec/aarch64/vc1dsp_neon.S b/libavcodec/aarch64/vc1dsp_neon.S
new file mode 100644
index 0000000000..fe8963545a
--- /dev/null
+++ b/libavcodec/aarch64/vc1dsp_neon.S
@@ -0,0 +1,698 @@
+/*
+ * VC1 AArch64 NEON optimisations
+ *
+ * Copyright (c) 2022 Ben Avison <bavison@riscosopen.org>
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include "libavutil/aarch64/asm.S"
+
+.align  5
+.Lcoeffs:
+.quad   0x00050002
+
+// VC-1 in-loop deblocking filter for 4 pixel pairs at boundary of vertically-neighbouring blocks
+// On entry:
+//   x0 -> top-left pel of lower block
+//   w1 = row stride, bytes
+//   w2 = PQUANT bitstream parameter
+function ff_vc1_v_loop_filter4_neon, export=1
+        sub     x3, x0, w1, sxtw #2
+        sxtw    x1, w1                  // technically, stride is signed int
+        ldr     d0, .Lcoeffs
+        ld1     {v1.s}[0], [x0], x1     // P5
+        ld1     {v2.s}[0], [x3], x1     // P1
+        ld1     {v3.s}[0], [x3], x1     // P2
+        ld1     {v4.s}[0], [x0], x1     // P6
+        ld1     {v5.s}[0], [x3], x1     // P3
+        ld1     {v6.s}[0], [x0], x1     // P7
+        ld1     {v7.s}[0], [x3]         // P4
+        ld1     {v16.s}[0], [x0]        // P8
+        ushll   v17.8h, v1.8b, #1       // 2*P5
+        dup     v18.8h, w2              // pq
+        ushll   v2.8h, v2.8b, #1        // 2*P1
+        uxtl    v3.8h, v3.8b            // P2
+        uxtl    v4.8h, v4.8b            // P6
+        uxtl    v19.8h, v5.8b           // P3
+        mls     v2.4h, v3.4h, v0.h[1]   // 2*P1-5*P2
+        uxtl    v3.8h, v6.8b            // P7
+        mls     v17.4h, v4.4h, v0.h[1]  // 2*P5-5*P6
+        ushll   v5.8h, v5.8b, #1        // 2*P3
+        uxtl    v6.8h, v7.8b            // P4
+        mla     v17.4h, v3.4h, v0.h[1]  // 2*P5-5*P6+5*P7
+        uxtl    v3.8h, v16.8b           // P8
+        mla     v2.4h, v19.4h, v0.h[1]  // 2*P1-5*P2+5*P3
+        uxtl    v1.8h, v1.8b            // P5
+        mls     v5.4h, v6.4h, v0.h[1]   // 2*P3-5*P4
+        mls     v17.4h, v3.4h, v0.h[0]  // 2*P5-5*P6+5*P7-2*P8
+        sub     v3.4h, v6.4h, v1.4h     // P4-P5
+        mls     v2.4h, v6.4h, v0.h[0]   // 2*P1-5*P2+5*P3-2*P4
+        mla     v5.4h, v1.4h, v0.h[1]   // 2*P3-5*P4+5*P5
+        mls     v5.4h, v4.4h, v0.h[0]   // 2*P3-5*P4+5*P5-2*P6
+        abs     v4.4h, v3.4h
+        srshr   v7.4h, v17.4h, #3
+        srshr   v2.4h, v2.4h, #3
+        sshr    v4.4h, v4.4h, #1        // clip
+        srshr   v5.4h, v5.4h, #3
+        abs     v7.4h, v7.4h            // a2
+        sshr    v3.4h, v3.4h, #8        // clip_sign
+        abs     v2.4h, v2.4h            // a1
+        cmeq    v16.4h, v4.4h, #0       // test clip == 0
+        abs     v17.4h, v5.4h           // a0
+        sshr    v5.4h, v5.4h, #8        // a0_sign
+        cmhs    v19.4h, v2.4h, v7.4h    // test a1 >= a2
+        cmhs    v18.4h, v17.4h, v18.4h  // test a0 >= pq
+        sub     v3.4h, v3.4h, v5.4h     // clip_sign - a0_sign
+        bsl     v19.8b, v7.8b, v2.8b    // a3
+        orr     v2.8b, v16.8b, v18.8b   // test clip == 0 || a0 >= pq
+        uqsub   v5.4h, v17.4h, v19.4h   // a0 >= a3 ? a0-a3 : 0  (a0 > a3 in all cases where filtering is enabled, so it makes more sense to subtract this way round than the other way and then take the abs)
+        cmhs    v7.4h, v19.4h, v17.4h   // test a3 >= a0
+        mul     v0.4h, v5.4h, v0.h[1]   // a0 >= a3 ? 5*(a0-a3) : 0
+        orr     v5.8b, v2.8b, v7.8b     // test clip == 0 || a0 >= pq || a3 >= a0
+        mov     w0, v5.s[1]             // move to gp reg
+        ushr    v0.4h, v0.4h, #3        // a0 >= a3 ? (5*(a0-a3))>>3 : 0
+        cmhs    v5.4h, v0.4h, v4.4h
+        tbnz    w0, #0, 1f              // none of the 4 pixel pairs should be updated if this one is not filtered
+        bsl     v5.8b, v4.8b, v0.8b     // FFMIN(d, clip)
+        bic     v0.8b, v5.8b, v2.8b     // set each d to zero if it should not be filtered because clip == 0 || a0 >= pq (a3 > a0 case already zeroed by saturating sub)
+        mls     v6.4h, v0.4h, v3.4h     // invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P4
+        mla     v1.4h, v0.4h, v3.4h     // invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P5
+        sqxtun  v0.8b, v6.8h
+        sqxtun  v1.8b, v1.8h
+        st1     {v0.s}[0], [x3], x1
+        st1     {v1.s}[0], [x3]
+1:      ret
+endfunc
+
+// VC-1 in-loop deblocking filter for 4 pixel pairs at boundary of horizontally-neighbouring blocks
+// On entry:
+//   x0 -> top-left pel of right block
+//   w1 = row stride, bytes
+//   w2 = PQUANT bitstream parameter
+function ff_vc1_h_loop_filter4_neon, export=1
+        sub     x3, x0, #4              // where to start reading
+        sxtw    x1, w1                  // technically, stride is signed int
+        ldr     d0, .Lcoeffs
+        ld1     {v1.8b}, [x3], x1
+        sub     x0, x0, #1              // where to start writing
+        ld1     {v2.8b}, [x3], x1
+        ld1     {v3.8b}, [x3], x1
+        ld1     {v4.8b}, [x3]
+        dup     v5.8h, w2               // pq
+        trn1    v6.8b, v1.8b, v2.8b
+        trn2    v1.8b, v1.8b, v2.8b
+        trn1    v2.8b, v3.8b, v4.8b
+        trn2    v3.8b, v3.8b, v4.8b
+        trn1    v4.4h, v6.4h, v2.4h     // P1, P5
+        trn1    v7.4h, v1.4h, v3.4h     // P2, P6
+        trn2    v2.4h, v6.4h, v2.4h     // P3, P7
+        trn2    v1.4h, v1.4h, v3.4h     // P4, P8
+        ushll   v3.8h, v4.8b, #1        // 2*P1, 2*P5
+        uxtl    v6.8h, v7.8b            // P2, P6
+        uxtl    v7.8h, v2.8b            // P3, P7
+        uxtl    v1.8h, v1.8b            // P4, P8
+        mls     v3.8h, v6.8h, v0.h[1]   // 2*P1-5*P2, 2*P5-5*P6
+        ushll   v2.8h, v2.8b, #1        // 2*P3, 2*P7
+        uxtl    v4.8h, v4.8b            // P1, P5
+        mla     v3.8h, v7.8h, v0.h[1]   // 2*P1-5*P2+5*P3, 2*P5-5*P6+5*P7
+        mov     d6, v6.d[1]             // P6
+        mls     v3.8h, v1.8h, v0.h[0]   // 2*P1-5*P2+5*P3-2*P4, 2*P5-5*P6+5*P7-2*P8
+        mov     d4, v4.d[1]             // P5
+        mls     v2.4h, v1.4h, v0.h[1]   // 2*P3-5*P4
+        mla     v2.4h, v4.4h, v0.h[1]   // 2*P3-5*P4+5*P5
+        sub     v7.4h, v1.4h, v4.4h     // P4-P5
+        mls     v2.4h, v6.4h, v0.h[0]   // 2*P3-5*P4+5*P5-2*P6
+        srshr   v3.8h, v3.8h, #3
+        abs     v6.4h, v7.4h
+        sshr    v7.4h, v7.4h, #8        // clip_sign
+        srshr   v2.4h, v2.4h, #3
+        abs     v3.8h, v3.8h            // a1, a2
+        sshr    v6.4h, v6.4h, #1        // clip
+        mov     d16, v3.d[1]            // a2
+        abs     v17.4h, v2.4h           // a0
+        cmeq    v18.4h, v6.4h, #0       // test clip == 0
+        sshr    v2.4h, v2.4h, #8        // a0_sign
+        cmhs    v19.4h, v3.4h, v16.4h   // test a1 >= a2
+        cmhs    v5.4h, v17.4h, v5.4h    // test a0 >= pq
+        sub     v2.4h, v7.4h, v2.4h     // clip_sign - a0_sign
+        bsl     v19.8b, v16.8b, v3.8b   // a3
+        orr     v3.8b, v18.8b, v5.8b    // test clip == 0 || a0 >= pq
+        uqsub   v5.4h, v17.4h, v19.4h   // a0 >= a3 ? a0-a3 : 0  (a0 > a3 in all cases where filtering is enabled, so it makes more sense to subtract this way round than the other way and then take the abs)
+        cmhs    v7.4h, v19.4h, v17.4h   // test a3 >= a0
+        mul     v0.4h, v5.4h, v0.h[1]   // a0 >= a3 ? 5*(a0-a3) : 0
+        orr     v5.8b, v3.8b, v7.8b     // test clip == 0 || a0 >= pq || a3 >= a0
+        mov     w2, v5.s[1]             // move to gp reg
+        ushr    v0.4h, v0.4h, #3        // a0 >= a3 ? (5*(a0-a3))>>3 : 0
+        cmhs    v5.4h, v0.4h, v6.4h
+        tbnz    w2, #0, 1f              // none of the 4 pixel pairs should be updated if this one is not filtered
+        bsl     v5.8b, v6.8b, v0.8b     // FFMIN(d, clip)
+        bic     v0.8b, v5.8b, v3.8b     // set each d to zero if it should not be filtered because clip == 0 || a0 >= pq (a3 > a0 case already zeroed by saturating sub)
+        mla     v4.4h, v0.4h, v2.4h     // invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P5
+        mls     v1.4h, v0.4h, v2.4h     // invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P4
+        sqxtun  v3.8b, v4.8h
+        sqxtun  v2.8b, v1.8h
+        st2     {v2.b, v3.b}[0], [x0], x1
+        st2     {v2.b, v3.b}[1], [x0], x1
+        st2     {v2.b, v3.b}[2], [x0], x1
+        st2     {v2.b, v3.b}[3], [x0]
+1:      ret
+endfunc
+
+// VC-1 in-loop deblocking filter for 8 pixel pairs at boundary of vertically-neighbouring blocks
+// On entry:
+//   x0 -> top-left pel of lower block
+//   w1 = row stride, bytes
+//   w2 = PQUANT bitstream parameter
+function ff_vc1_v_loop_filter8_neon, export=1
+        sub     x3, x0, w1, sxtw #2
+        sxtw    x1, w1                  // technically, stride is signed int
+        ldr     d0, .Lcoeffs
+        ld1     {v1.8b}, [x0], x1       // P5
+        movi    v2.2d, #0x0000ffff00000000
+        ld1     {v3.8b}, [x3], x1       // P1
+        ld1     {v4.8b}, [x3], x1       // P2
+        ld1     {v5.8b}, [x0], x1       // P6
+        ld1     {v6.8b}, [x3], x1       // P3
+        ld1     {v7.8b}, [x0], x1       // P7
+        ushll   v16.8h, v1.8b, #1       // 2*P5
+        ushll   v3.8h, v3.8b, #1        // 2*P1
+        ld1     {v17.8b}, [x3]          // P4
+        uxtl    v4.8h, v4.8b            // P2
+        ld1     {v18.8b}, [x0]          // P8
+        uxtl    v5.8h, v5.8b            // P6
+        dup     v19.8h, w2              // pq
+        uxtl    v20.8h, v6.8b           // P3
+        mls     v3.8h, v4.8h, v0.h[1]   // 2*P1-5*P2
+        uxtl    v4.8h, v7.8b            // P7
+        ushll   v6.8h, v6.8b, #1        // 2*P3
+        mls     v16.8h, v5.8h, v0.h[1]  // 2*P5-5*P6
+        uxtl    v7.8h, v17.8b           // P4
+        uxtl    v17.8h, v18.8b          // P8
+        mla     v16.8h, v4.8h, v0.h[1]  // 2*P5-5*P6+5*P7
+        uxtl    v1.8h, v1.8b            // P5
+        mla     v3.8h, v20.8h, v0.h[1]  // 2*P1-5*P2+5*P3
+        sub     v4.8h, v7.8h, v1.8h     // P4-P5
+        mls     v6.8h, v7.8h, v0.h[1]   // 2*P3-5*P4
+        mls     v16.8h, v17.8h, v0.h[0] // 2*P5-5*P6+5*P7-2*P8
+        abs     v17.8h, v4.8h
+        sshr    v4.8h, v4.8h, #8        // clip_sign
+        mls     v3.8h, v7.8h, v0.h[0]   // 2*P1-5*P2+5*P3-2*P4
+        sshr    v17.8h, v17.8h, #1      // clip
+        mla     v6.8h, v1.8h, v0.h[1]   // 2*P3-5*P4+5*P5
+        srshr   v16.8h, v16.8h, #3
+        mls     v6.8h, v5.8h, v0.h[0]   // 2*P3-5*P4+5*P5-2*P6
+        cmeq    v5.8h, v17.8h, #0       // test clip == 0
+        srshr   v3.8h, v3.8h, #3
+        abs     v16.8h, v16.8h          // a2
+        abs     v3.8h, v3.8h            // a1
+        srshr   v6.8h, v6.8h, #3
+        cmhs    v18.8h, v3.8h, v16.8h   // test a1 >= a2
+        abs     v20.8h, v6.8h           // a0
+        sshr    v6.8h, v6.8h, #8        // a0_sign
+        bsl     v18.16b, v16.16b, v3.16b // a3
+        cmhs    v3.8h, v20.8h, v19.8h   // test a0 >= pq
+        sub     v4.8h, v4.8h, v6.8h     // clip_sign - a0_sign
+        uqsub   v6.8h, v20.8h, v18.8h   // a0 >= a3 ? a0-a3 : 0  (a0 > a3 in all cases where filtering is enabled, so it makes more sense to subtract this way round than the other way and then take the abs)
+        cmhs    v16.8h, v18.8h, v20.8h  // test a3 >= a0
+        orr     v3.16b, v5.16b, v3.16b  // test clip == 0 || a0 >= pq
+        mul     v0.8h, v6.8h, v0.h[1]   // a0 >= a3 ? 5*(a0-a3) : 0
+        orr     v5.16b, v3.16b, v16.16b // test clip == 0 || a0 >= pq || a3 >= a0
+        cmtst   v2.2d, v5.2d, v2.2d     // if the 2nd of each group of 4 is not filtered, then none of the others in the group should be either
+        mov     w0, v5.s[1]             // move to gp reg
+        ushr    v0.8h, v0.8h, #3        // a0 >= a3 ? (5*(a0-a3))>>3 : 0
+        mov     w2, v5.s[3]
+        orr     v2.16b, v3.16b, v2.16b
+        cmhs    v3.8h, v0.8h, v17.8h
+        and     w0, w0, w2
+        bsl     v3.16b, v17.16b, v0.16b // FFMIN(d, clip)
+        tbnz    w0, #0, 1f              // none of the 8 pixel pairs should be updated in this case
+        bic     v0.16b, v3.16b, v2.16b  // set each d to zero if it should not be filtered
+        mls     v7.8h, v0.8h, v4.8h     // invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P4
+        mla     v1.8h, v0.8h, v4.8h     // invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P5
+        sqxtun  v0.8b, v7.8h
+        sqxtun  v1.8b, v1.8h
+        st1     {v0.8b}, [x3], x1
+        st1     {v1.8b}, [x3]
+1:      ret
+endfunc
+
+// VC-1 in-loop deblocking filter for 8 pixel pairs at boundary of horizontally-neighbouring blocks
+// On entry:
+//   x0 -> top-left pel of right block
+//   w1 = row stride, bytes
+//   w2 = PQUANT bitstream parameter
+function ff_vc1_h_loop_filter8_neon, export=1
+        sub     x3, x0, #4              // where to start reading
+        sxtw    x1, w1                  // technically, stride is signed int
+        ldr     d0, .Lcoeffs
+        ld1     {v1.8b}, [x3], x1       // P1[0], P2[0]...
+        sub     x0, x0, #1              // where to start writing
+        ld1     {v2.8b}, [x3], x1
+        add     x4, x0, x1, lsl #2
+        ld1     {v3.8b}, [x3], x1
+        ld1     {v4.8b}, [x3], x1
+        ld1     {v5.8b}, [x3], x1
+        ld1     {v6.8b}, [x3], x1
+        ld1     {v7.8b}, [x3], x1
+        trn1    v16.8b, v1.8b, v2.8b    // P1[0], P1[1], P3[0]...
+        ld1     {v17.8b}, [x3]
+        trn2    v1.8b, v1.8b, v2.8b     // P2[0], P2[1], P4[0]...
+        trn1    v2.8b, v3.8b, v4.8b     // P1[2], P1[3], P3[2]...
+        trn2    v3.8b, v3.8b, v4.8b     // P2[2], P2[3], P4[2]...
+        dup     v4.8h, w2               // pq
+        trn1    v18.8b, v5.8b, v6.8b    // P1[4], P1[5], P3[4]...
+        trn2    v5.8b, v5.8b, v6.8b     // P2[4], P2[5], P4[4]...
+        trn1    v6.4h, v16.4h, v2.4h    // P1[0], P1[1], P1[2], P1[3], P5[0]...
+        trn1    v19.4h, v1.4h, v3.4h    // P2[0], P2[1], P2[2], P2[3], P6[0]...
+        trn1    v20.8b, v7.8b, v17.8b   // P1[6], P1[7], P3[6]...
+        trn2    v7.8b, v7.8b, v17.8b    // P2[6], P2[7], P4[6]...
+        trn2    v2.4h, v16.4h, v2.4h    // P3[0], P3[1], P3[2], P3[3], P7[0]...
+        trn2    v1.4h, v1.4h, v3.4h     // P4[0], P4[1], P4[2], P4[3], P8[0]...
+        trn1    v3.4h, v18.4h, v20.4h   // P1[4], P1[5], P1[6], P1[7], P5[4]...
+        trn1    v16.4h, v5.4h, v7.4h    // P2[4], P2[5], P2[6], P2[7], P6[4]...
+        trn2    v17.4h, v18.4h, v20.4h  // P3[4], P3[5], P3[6], P3[7], P7[4]...
+        trn2    v5.4h, v5.4h, v7.4h     // P4[4], P4[5], P4[6], P4[7], P8[4]...
+        trn1    v7.2s, v6.2s, v3.2s     // P1
+        trn1    v18.2s, v19.2s, v16.2s  // P2
+        trn2    v3.2s, v6.2s, v3.2s     // P5
+        trn2    v6.2s, v19.2s, v16.2s   // P6
+        trn1    v16.2s, v2.2s, v17.2s   // P3
+        trn2    v2.2s, v2.2s, v17.2s    // P7
+        ushll   v7.8h, v7.8b, #1        // 2*P1
+        trn1    v17.2s, v1.2s, v5.2s    // P4
+        ushll   v19.8h, v3.8b, #1       // 2*P5
+        trn2    v1.2s, v1.2s, v5.2s     // P8
+        uxtl    v5.8h, v18.8b           // P2
+        uxtl    v6.8h, v6.8b            // P6
+        uxtl    v18.8h, v16.8b          // P3
+        mls     v7.8h, v5.8h, v0.h[1]   // 2*P1-5*P2
+        uxtl    v2.8h, v2.8b            // P7
+        ushll   v5.8h, v16.8b, #1       // 2*P3
+        mls     v19.8h, v6.8h, v0.h[1]  // 2*P5-5*P6
+        uxtl    v16.8h, v17.8b          // P4
+        uxtl    v1.8h, v1.8b            // P8
+        mla     v19.8h, v2.8h, v0.h[1]  // 2*P5-5*P6+5*P7
+        uxtl    v2.8h, v3.8b            // P5
+        mla     v7.8h, v18.8h, v0.h[1]  // 2*P1-5*P2+5*P3
+        sub     v3.8h, v16.8h, v2.8h    // P4-P5
+        mls     v5.8h, v16.8h, v0.h[1]  // 2*P3-5*P4
+        mls     v19.8h, v1.8h, v0.h[0]  // 2*P5-5*P6+5*P7-2*P8
+        abs     v1.8h, v3.8h
+        sshr    v3.8h, v3.8h, #8        // clip_sign
+        mls     v7.8h, v16.8h, v0.h[0]  // 2*P1-5*P2+5*P3-2*P4
+        sshr    v1.8h, v1.8h, #1        // clip
+        mla     v5.8h, v2.8h, v0.h[1]   // 2*P3-5*P4+5*P5
+        srshr   v17.8h, v19.8h, #3
+        mls     v5.8h, v6.8h, v0.h[0]   // 2*P3-5*P4+5*P5-2*P6
+        cmeq    v6.8h, v1.8h, #0        // test clip == 0
+        srshr   v7.8h, v7.8h, #3
+        abs     v17.8h, v17.8h          // a2
+        abs     v7.8h, v7.8h            // a1
+        srshr   v5.8h, v5.8h, #3
+        cmhs    v18.8h, v7.8h, v17.8h   // test a1 >= a2
+        abs     v19.8h, v5.8h           // a0
+        sshr    v5.8h, v5.8h, #8        // a0_sign
+        bsl     v18.16b, v17.16b, v7.16b // a3
+        cmhs    v4.8h, v19.8h, v4.8h    // test a0 >= pq
+        sub     v3.8h, v3.8h, v5.8h     // clip_sign - a0_sign
+        uqsub   v5.8h, v19.8h, v18.8h   // a0 >= a3 ? a0-a3 : 0  (a0 > a3 in all cases where filtering is enabled, so it makes more sense to subtract this way round than the other way and then take the abs)
+        cmhs    v7.8h, v18.8h, v19.8h   // test a3 >= a0
+        orr     v4.16b, v6.16b, v4.16b  // test clip == 0 || a0 >= pq
+        mul     v0.8h, v5.8h, v0.h[1]   // a0 >= a3 ? 5*(a0-a3) : 0
+        orr     v5.16b, v4.16b, v7.16b  // test clip == 0 || a0 >= pq || a3 >= a0
+        mov     w2, v5.s[1]             // move to gp reg
+        ushr    v0.8h, v0.8h, #3        // a0 >= a3 ? (5*(a0-a3))>>3 : 0
+        mov     w3, v5.s[3]
+        cmhs    v5.8h, v0.8h, v1.8h
+        and     w5, w2, w3
+        bsl     v5.16b, v1.16b, v0.16b  // FFMIN(d, clip)
+        tbnz    w5, #0, 2f              // none of the 8 pixel pairs should be updated in this case
+        bic     v0.16b, v5.16b, v4.16b  // set each d to zero if it should not be filtered because clip == 0 || a0 >= pq (a3 > a0 case already zeroed by saturating sub)
+        mla     v2.8h, v0.8h, v3.8h     // invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P5
+        mls     v16.8h, v0.8h, v3.8h    // invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P4
+        sqxtun  v1.8b, v2.8h
+        sqxtun  v0.8b, v16.8h
+        tbnz    w2, #0, 1f              // none of the first 4 pixel pairs should be updated if so
+        st2     {v0.b, v1.b}[0], [x0], x1
+        st2     {v0.b, v1.b}[1], [x0], x1
+        st2     {v0.b, v1.b}[2], [x0], x1
+        st2     {v0.b, v1.b}[3], [x0]
+1:      tbnz    w3, #0, 2f              // none of the second 4 pixel pairs should be updated if so
+        st2     {v0.b, v1.b}[4], [x4], x1
+        st2     {v0.b, v1.b}[5], [x4], x1
+        st2     {v0.b, v1.b}[6], [x4], x1
+        st2     {v0.b, v1.b}[7], [x4]
+2:      ret
+endfunc
+
+// VC-1 in-loop deblocking filter for 16 pixel pairs at boundary of vertically-neighbouring blocks
+// On entry:
+//   x0 -> top-left pel of lower block
+//   w1 = row stride, bytes
+//   w2 = PQUANT bitstream parameter
+function ff_vc1_v_loop_filter16_neon, export=1
+        sub     x3, x0, w1, sxtw #2
+        sxtw    x1, w1                  // technically, stride is signed int
+        ldr     d0, .Lcoeffs
+        ld1     {v1.16b}, [x0], x1      // P5
+        movi    v2.2d, #0x0000ffff00000000
+        ld1     {v3.16b}, [x3], x1      // P1
+        ld1     {v4.16b}, [x3], x1      // P2
+        ld1     {v5.16b}, [x0], x1      // P6
+        ld1     {v6.16b}, [x3], x1      // P3
+        ld1     {v7.16b}, [x0], x1      // P7
+        ushll   v16.8h, v1.8b, #1       // 2*P5[0..7]
+        ushll   v17.8h, v3.8b, #1       // 2*P1[0..7]
+        ld1     {v18.16b}, [x3]         // P4
+        uxtl    v19.8h, v4.8b           // P2[0..7]
+        ld1     {v20.16b}, [x0]         // P8
+        uxtl    v21.8h, v5.8b           // P6[0..7]
+        dup     v22.8h, w2              // pq
+        ushll2  v3.8h, v3.16b, #1       // 2*P1[8..15]
+        mls     v17.8h, v19.8h, v0.h[1] // 2*P1[0..7]-5*P2[0..7]
+        ushll2  v19.8h, v1.16b, #1      // 2*P5[8..15]
+        uxtl2   v4.8h, v4.16b           // P2[8..15]
+        mls     v16.8h, v21.8h, v0.h[1] // 2*P5[0..7]-5*P6[0..7]
+        uxtl2   v5.8h, v5.16b           // P6[8..15]
+        uxtl    v23.8h, v6.8b           // P3[0..7]
+        uxtl    v24.8h, v7.8b           // P7[0..7]
+        mls     v3.8h, v4.8h, v0.h[1]   // 2*P1[8..15]-5*P2[8..15]
+        ushll   v4.8h, v6.8b, #1        // 2*P3[0..7]
+        uxtl    v25.8h, v18.8b          // P4[0..7]
+        mls     v19.8h, v5.8h, v0.h[1]  // 2*P5[8..15]-5*P6[8..15]
+        uxtl2   v26.8h, v6.16b          // P3[8..15]
+        mla     v17.8h, v23.8h, v0.h[1] // 2*P1[0..7]-5*P2[0..7]+5*P3[0..7]
+        uxtl2   v7.8h, v7.16b           // P7[8..15]
+        ushll2  v6.8h, v6.16b, #1       // 2*P3[8..15]
+        mla     v16.8h, v24.8h, v0.h[1] // 2*P5[0..7]-5*P6[0..7]+5*P7[0..7]
+        uxtl2   v18.8h, v18.16b         // P4[8..15]
+        uxtl    v23.8h, v20.8b          // P8[0..7]
+        mls     v4.8h, v25.8h, v0.h[1]  // 2*P3[0..7]-5*P4[0..7]
+        uxtl    v24.8h, v1.8b           // P5[0..7]
+        uxtl2   v20.8h, v20.16b         // P8[8..15]
+        mla     v3.8h, v26.8h, v0.h[1]  // 2*P1[8..15]-5*P2[8..15]+5*P3[8..15]
+        uxtl2   v1.8h, v1.16b           // P5[8..15]
+        sub     v26.8h, v25.8h, v24.8h  // P4[0..7]-P5[0..7]
+        mla     v19.8h, v7.8h, v0.h[1]  // 2*P5[8..15]-5*P6[8..15]+5*P7[8..15]
+        sub     v7.8h, v18.8h, v1.8h    // P4[8..15]-P5[8..15]
+        mls     v6.8h, v18.8h, v0.h[1]  // 2*P3[8..15]-5*P4[8..15]
+        abs     v27.8h, v26.8h
+        sshr    v26.8h, v26.8h, #8      // clip_sign[0..7]
+        mls     v17.8h, v25.8h, v0.h[0] // 2*P1[0..7]-5*P2[0..7]+5*P3[0..7]-2*P4[0..7]
+        abs     v28.8h, v7.8h
+        sshr    v27.8h, v27.8h, #1      // clip[0..7]
+        mls     v16.8h, v23.8h, v0.h[0] // 2*P5[0..7]-5*P6[0..7]+5*P7[0..7]-2*P8[0..7]
+        sshr    v7.8h, v7.8h, #8        // clip_sign[8..15]
+        sshr    v23.8h, v28.8h, #1      // clip[8..15]
+        mla     v4.8h, v24.8h, v0.h[1]  // 2*P3[0..7]-5*P4[0..7]+5*P5[0..7]
+        cmeq    v28.8h, v27.8h, #0      // test clip[0..7] == 0
+        srshr   v17.8h, v17.8h, #3
+        mls     v3.8h, v18.8h, v0.h[0]  // 2*P1[8..15]-5*P2[8..15]+5*P3[8..15]-2*P4[8..15]
+        cmeq    v29.8h, v23.8h, #0      // test clip[8..15] == 0
+        srshr   v16.8h, v16.8h, #3
+        mls     v19.8h, v20.8h, v0.h[0] // 2*P5[8..15]-5*P6[8..15]+5*P7[8..15]-2*P8[8..15]
+        abs     v17.8h, v17.8h          // a1[0..7]
+        mla     v6.8h, v1.8h, v0.h[1]   // 2*P3[8..15]-5*P4[8..15]+5*P5[8..15]
+        srshr   v3.8h, v3.8h, #3
+        mls     v4.8h, v21.8h, v0.h[0]  // 2*P3[0..7]-5*P4[0..7]+5*P5[0..7]-2*P6[0..7]
+        abs     v16.8h, v16.8h          // a2[0..7]
+        srshr   v19.8h, v19.8h, #3
+        mls     v6.8h, v5.8h, v0.h[0]   // 2*P3[8..15]-5*P4[8..15]+5*P5[8..15]-2*P6[8..15]
+        cmhs    v5.8h, v17.8h, v16.8h   // test a1[0..7] >= a2[0..7]
+        abs     v3.8h, v3.8h            // a1[8..15]
+        srshr   v4.8h, v4.8h, #3
+        abs     v19.8h, v19.8h          // a2[8..15]
+        bsl     v5.16b, v16.16b, v17.16b // a3[0..7]
+        srshr   v6.8h, v6.8h, #3
+        cmhs    v16.8h, v3.8h, v19.8h   // test a1[8..15] >= a2[8..15]
+        abs     v17.8h, v4.8h           // a0[0..7]
+        sshr    v4.8h, v4.8h, #8        // a0_sign[0..7]
+        bsl     v16.16b, v19.16b, v3.16b // a3[8..15]
+        uqsub   v3.8h, v17.8h, v5.8h    // a0[0..7] >= a3[0..7] ? a0[0..7]-a3[0..7] : 0  (a0 > a3 in all cases where filtering is enabled, so it makes more sense to subtract this way round than the other way and then take the abs)
+        abs     v19.8h, v6.8h           // a0[8..15]
+        cmhs    v20.8h, v17.8h, v22.8h  // test a0[0..7] >= pq
+        cmhs    v5.8h, v5.8h, v17.8h    // test a3[0..7] >= a0[0..7]
+        sub     v4.8h, v26.8h, v4.8h    // clip_sign[0..7] - a0_sign[0..7]
+        sshr    v6.8h, v6.8h, #8        // a0_sign[8..15]
+        mul     v3.8h, v3.8h, v0.h[1]   // a0[0..7] >= a3[0..7] ? 5*(a0[0..7]-a3[0..7]) : 0
+        uqsub   v17.8h, v19.8h, v16.8h  // a0[8..15] >= a3[8..15] ? a0[8..15]-a3[8..15] : 0  (a0 > a3 in all cases where filtering is enabled, so it makes more sense to subtract this way round than the other way and then take the abs)
+        orr     v20.16b, v28.16b, v20.16b // test clip[0..7] == 0 || a0[0..7] >= pq
+        cmhs    v21.8h, v19.8h, v22.8h  // test a0[8..15] >= pq
+        cmhs    v16.8h, v16.8h, v19.8h  // test a3[8..15] >= a0[8..15]
+        mul     v0.8h, v17.8h, v0.h[1]  // a0[8..15] >= a3[8..15] ? 5*(a0[8..15]-a3[8..15]) : 0
+        sub     v6.8h, v7.8h, v6.8h     // clip_sign[8..15] - a0_sign[8..15]
+        orr     v5.16b, v20.16b, v5.16b // test clip[0..7] == 0 || a0[0..7] >= pq || a3[0..7] >= a0[0..7]
+        ushr    v3.8h, v3.8h, #3        // a0[0..7] >= a3[0..7] ? (5*(a0[0..7]-a3[0..7]))>>3 : 0
+        orr     v7.16b, v29.16b, v21.16b // test clip[8..15] == 0 || a0[8..15] >= pq
+        cmtst   v17.2d, v5.2d, v2.2d    // if the 2nd of each group of 4 is not filtered, then none of the others in the group should be either
+        mov     w0, v5.s[1]             // move to gp reg
+        cmhs    v19.8h, v3.8h, v27.8h
+        ushr    v0.8h, v0.8h, #3        // a0[8..15] >= a3[8..15] ? (5*(a0[8..15]-a3[8..15]))>>3 : 0
+        mov     w2, v5.s[3]
+        orr     v5.16b, v7.16b, v16.16b // test clip[8..15] == 0 || a0[8..15] >= pq || a3[8..15] >= a0[8..15]
+        orr     v16.16b, v20.16b, v17.16b
+        bsl     v19.16b, v27.16b, v3.16b // FFMIN(d[0..7], clip[0..7])
+        cmtst   v2.2d, v5.2d, v2.2d
+        cmhs    v3.8h, v0.8h, v23.8h
+        mov     w4, v5.s[1]
+        mov     w5, v5.s[3]
+        and     w0, w0, w2
+        bic     v5.16b, v19.16b, v16.16b // set each d[0..7] to zero if it should not be filtered because clip[0..7] == 0 || a0[0..7] >= pq (a3 > a0 case already zeroed by saturating sub)
+        orr     v2.16b, v7.16b, v2.16b
+        bsl     v3.16b, v23.16b, v0.16b // FFMIN(d[8..15], clip[8..15])
+        mls     v25.8h, v5.8h, v4.8h    // invert d[0..7] depending on clip_sign[0..7] & a0_sign[0..7], or zero it if they match, and accumulate into P4[0..7]
+        and     w2, w4, w5
+        bic     v0.16b, v3.16b, v2.16b  // set each d[8..15] to zero if it should not be filtered because clip[8..15] == 0 || a0[8..15] >= pq (a3 > a0 case already zeroed by saturating sub)
+        mla     v24.8h, v5.8h, v4.8h    // invert d[0..7] depending on clip_sign[0..7] & a0_sign[0..7], or zero it if they match, and accumulate into P5[0..7]
+        and     w0, w0, w2
+        mls     v18.8h, v0.8h, v6.8h    // invert d[8..15] depending on clip_sign[8..15] & a0_sign[8..15], or zero it if they match, and accumulate into P4[8..15]
+        sqxtun  v2.8b, v25.8h
+        tbnz    w0, #0, 1f              // none of the 16 pixel pairs should be updated in this case
+        mla     v1.8h, v0.8h, v6.8h     // invert d[8..15] depending on clip_sign[8..15] & a0_sign[8..15], or zero it if they match, and accumulate into P5[8..15]
+        sqxtun  v0.8b, v24.8h
+        sqxtun2 v2.16b, v18.8h
+        sqxtun2 v0.16b, v1.8h
+        st1     {v2.16b}, [x3], x1
+        st1     {v0.16b}, [x3]
+1:      ret
+endfunc
+
+// VC-1 in-loop deblocking filter for 16 pixel pairs at boundary of horizontally-neighbouring blocks
+// On entry:
+//   x0 -> top-left pel of right block
+//   w1 = row stride, bytes
+//   w2 = PQUANT bitstream parameter
+function ff_vc1_h_loop_filter16_neon, export=1
+        sub     x3, x0, #4              // where to start reading
+        sxtw    x1, w1                  // technically, stride is signed int
+        ldr     d0, .Lcoeffs
+        ld1     {v1.8b}, [x3], x1       // P1[0], P2[0]...
+        sub     x0, x0, #1              // where to start writing
+        ld1     {v2.8b}, [x3], x1
+        add     x4, x0, x1, lsl #3
+        ld1     {v3.8b}, [x3], x1
+        add     x5, x0, x1, lsl #2
+        ld1     {v4.8b}, [x3], x1
+        add     x6, x4, x1, lsl #2
+        ld1     {v5.8b}, [x3], x1
+        ld1     {v6.8b}, [x3], x1
+        ld1     {v7.8b}, [x3], x1
+        trn1    v16.8b, v1.8b, v2.8b    // P1[0], P1[1], P3[0]...
+        ld1     {v17.8b}, [x3], x1
+        trn2    v1.8b, v1.8b, v2.8b     // P2[0], P2[1], P4[0]...
+        ld1     {v2.8b}, [x3], x1
+        trn1    v18.8b, v3.8b, v4.8b    // P1[2], P1[3], P3[2]...
+        ld1     {v19.8b}, [x3], x1
+        trn2    v3.8b, v3.8b, v4.8b     // P2[2], P2[3], P4[2]...
+        ld1     {v4.8b}, [x3], x1
+        trn1    v20.8b, v5.8b, v6.8b    // P1[4], P1[5], P3[4]...
+        ld1     {v21.8b}, [x3], x1
+        trn2    v5.8b, v5.8b, v6.8b     // P2[4], P2[5], P4[4]...
+        ld1     {v6.8b}, [x3], x1
+        trn1    v22.8b, v7.8b, v17.8b   // P1[6], P1[7], P3[6]...
+        ld1     {v23.8b}, [x3], x1
+        trn2    v7.8b, v7.8b, v17.8b    // P2[6], P2[7], P4[6]...
+        ld1     {v17.8b}, [x3], x1
+        trn1    v24.8b, v2.8b, v19.8b   // P1[8], P1[9], P3[8]...
+        ld1     {v25.8b}, [x3]
+        trn2    v2.8b, v2.8b, v19.8b    // P2[8], P2[9], P4[8]...
+        trn1    v19.4h, v16.4h, v18.4h  // P1[0], P1[1], P1[2], P1[3], P5[0]...
+        trn1    v26.8b, v4.8b, v21.8b   // P1[10], P1[11], P3[10]...
+        trn2    v4.8b, v4.8b, v21.8b    // P2[10], P2[11], P4[10]...
+        trn1    v21.4h, v1.4h, v3.4h    // P2[0], P2[1], P2[2], P2[3], P6[0]...
+        trn1    v27.4h, v20.4h, v22.4h  // P1[4], P1[5], P1[6], P1[7], P5[4]...
+        trn1    v28.8b, v6.8b, v23.8b   // P1[12], P1[13], P3[12]...
+        trn2    v6.8b, v6.8b, v23.8b    // P2[12], P2[13], P4[12]...
+        trn1    v23.4h, v5.4h, v7.4h    // P2[4], P2[5], P2[6], P2[7], P6[4]...
+        trn1    v29.4h, v24.4h, v26.4h  // P1[8], P1[9], P1[10], P1[11], P5[8]...
+        trn1    v30.8b, v17.8b, v25.8b  // P1[14], P1[15], P3[14]...
+        trn2    v17.8b, v17.8b, v25.8b  // P2[14], P2[15], P4[14]...
+        trn1    v25.4h, v2.4h, v4.4h    // P2[8], P2[9], P2[10], P2[11], P6[8]...
+        trn1    v31.2s, v19.2s, v27.2s  // P1[0..7]
+        trn2    v19.2s, v19.2s, v27.2s  // P5[0..7]
+        trn1    v27.2s, v21.2s, v23.2s  // P2[0..7]
+        trn2    v21.2s, v21.2s, v23.2s  // P6[0..7]
+        trn1    v23.4h, v28.4h, v30.4h  // P1[12], P1[13], P1[14], P1[15], P5[12]...
+        trn2    v16.4h, v16.4h, v18.4h  // P3[0], P3[1], P3[2], P3[3], P7[0]...
+        trn1    v18.4h, v6.4h, v17.4h   // P2[12], P2[13], P2[14], P2[15], P6[12]...
+        trn2    v20.4h, v20.4h, v22.4h  // P3[4], P3[5], P3[6], P3[7], P7[4]...
+        trn2    v22.4h, v24.4h, v26.4h  // P3[8], P3[9], P3[10], P3[11], P7[8]...
+        trn1    v24.2s, v29.2s, v23.2s  // P1[8..15]
+        trn2    v23.2s, v29.2s, v23.2s  // P5[8..15]
+        trn1    v26.2s, v25.2s, v18.2s  // P2[8..15]
+        trn2    v18.2s, v25.2s, v18.2s  // P6[8..15]
+        trn2    v25.4h, v28.4h, v30.4h  // P3[12], P3[13], P3[14], P3[15], P7[12]...
+        trn2    v1.4h, v1.4h, v3.4h     // P4[0], P4[1], P4[2], P4[3], P8[0]...
+        trn2    v3.4h, v5.4h, v7.4h     // P4[4], P4[5], P4[6], P4[7], P8[4]...
+        trn2    v2.4h, v2.4h, v4.4h     // P4[8], P4[9], P4[10], P4[11], P8[8]...
+        trn2    v4.4h, v6.4h, v17.4h    // P4[12], P4[13], P4[14], P4[15], P8[12]...
+        ushll   v5.8h, v31.8b, #1       // 2*P1[0..7]
+        ushll   v6.8h, v19.8b, #1       // 2*P5[0..7]
+        trn1    v7.2s, v16.2s, v20.2s   // P3[0..7]
+        uxtl    v17.8h, v27.8b          // P2[0..7]
+        trn2    v16.2s, v16.2s, v20.2s  // P7[0..7]
+        uxtl    v20.8h, v21.8b          // P6[0..7]
+        trn1    v21.2s, v22.2s, v25.2s  // P3[8..15]
+        ushll   v24.8h, v24.8b, #1      // 2*P1[8..15]
+        trn2    v22.2s, v22.2s, v25.2s  // P7[8..15]
+        ushll   v25.8h, v23.8b, #1      // 2*P5[8..15]
+        trn1    v27.2s, v1.2s, v3.2s    // P4[0..7]
+        uxtl    v26.8h, v26.8b          // P2[8..15]
+        mls     v5.8h, v17.8h, v0.h[1]  // 2*P1[0..7]-5*P2[0..7]
+        uxtl    v17.8h, v18.8b          // P6[8..15]
+        mls     v6.8h, v20.8h, v0.h[1]  // 2*P5[0..7]-5*P6[0..7]
+        trn1    v18.2s, v2.2s, v4.2s    // P4[8..15]
+        uxtl    v28.8h, v7.8b           // P3[0..7]
+        mls     v24.8h, v26.8h, v0.h[1] // 2*P1[8..15]-5*P2[8..15]
+        uxtl    v16.8h, v16.8b          // P7[0..7]
+        uxtl    v26.8h, v21.8b          // P3[8..15]
+        mls     v25.8h, v17.8h, v0.h[1] // 2*P5[8..15]-5*P6[8..15]
+        uxtl    v22.8h, v22.8b          // P7[8..15]
+        ushll   v7.8h, v7.8b, #1        // 2*P3[0..7]
+        uxtl    v27.8h, v27.8b          // P4[0..7]
+        trn2    v1.2s, v1.2s, v3.2s     // P8[0..7]
+        ushll   v3.8h, v21.8b, #1       // 2*P3[8..15]
+        trn2    v2.2s, v2.2s, v4.2s     // P8[8..15]
+        uxtl    v4.8h, v18.8b           // P4[8..15]
+        mla     v5.8h, v28.8h, v0.h[1]  // 2*P1[0..7]-5*P2[0..7]+5*P3[0..7]
+        uxtl    v1.8h, v1.8b            // P8[0..7]
+        mla     v6.8h, v16.8h, v0.h[1]  // 2*P5[0..7]-5*P6[0..7]+5*P7[0..7]
+        uxtl    v2.8h, v2.8b            // P8[8..15]
+        uxtl    v16.8h, v19.8b          // P5[0..7]
+        mla     v24.8h, v26.8h, v0.h[1] // 2*P1[8..15]-5*P2[8..15]+5*P3[8..15]
+        uxtl    v18.8h, v23.8b          // P5[8..15]
+        dup     v19.8h, w2              // pq
+        mla     v25.8h, v22.8h, v0.h[1] // 2*P5[8..15]-5*P6[8..15]+5*P7[8..15]
+        sub     v21.8h, v27.8h, v16.8h  // P4[0..7]-P5[0..7]
+        sub     v22.8h, v4.8h, v18.8h   // P4[8..15]-P5[8..15]
+        mls     v7.8h, v27.8h, v0.h[1]  // 2*P3[0..7]-5*P4[0..7]
+        abs     v23.8h, v21.8h
+        mls     v3.8h, v4.8h, v0.h[1]   // 2*P3[8..15]-5*P4[8..15]
+        abs     v26.8h, v22.8h
+        sshr    v21.8h, v21.8h, #8      // clip_sign[0..7]
+        mls     v5.8h, v27.8h, v0.h[0]  // 2*P1[0..7]-5*P2[0..7]+5*P3[0..7]-2*P4[0..7]
+        sshr    v23.8h, v23.8h, #1      // clip[0..7]
+        sshr    v26.8h, v26.8h, #1      // clip[8..15]
+        mls     v6.8h, v1.8h, v0.h[0]   // 2*P5[0..7]-5*P6[0..7]+5*P7[0..7]-2*P8[0..7]
+        sshr    v1.8h, v22.8h, #8       // clip_sign[8..15]
+        cmeq    v22.8h, v23.8h, #0      // test clip[0..7] == 0
+        mls     v24.8h, v4.8h, v0.h[0]  // 2*P1[8..15]-5*P2[8..15]+5*P3[8..15]-2*P4[8..15]
+        cmeq    v28.8h, v26.8h, #0      // test clip[8..15] == 0
+        srshr   v5.8h, v5.8h, #3
+        mls     v25.8h, v2.8h, v0.h[0]  // 2*P5[8..15]-5*P6[8..15]+5*P7[8..15]-2*P8[8..15]
+        srshr   v2.8h, v6.8h, #3
+        mla     v7.8h, v16.8h, v0.h[1]  // 2*P3[0..7]-5*P4[0..7]+5*P5[0..7]
+        srshr   v6.8h, v24.8h, #3
+        mla     v3.8h, v18.8h, v0.h[1]  // 2*P3[8..15]-5*P4[8..15]+5*P5[8..15]
+        abs     v5.8h, v5.8h            // a1[0..7]
+        srshr   v24.8h, v25.8h, #3
+        mls     v3.8h, v17.8h, v0.h[0]  // 2*P3[8..15]-5*P4[8..15]+5*P5[8..15]-2*P6[8..15]
+        abs     v2.8h, v2.8h            // a2[0..7]
+        abs     v6.8h, v6.8h            // a1[8..15]
+        mls     v7.8h, v20.8h, v0.h[0]  // 2*P3[0..7]-5*P4[0..7]+5*P5[0..7]-2*P6[0..7]
+        abs     v17.8h, v24.8h          // a2[8..15]
+        cmhs    v20.8h, v5.8h, v2.8h    // test a1[0..7] >= a2[0..7]
+        srshr   v3.8h, v3.8h, #3
+        cmhs    v24.8h, v6.8h, v17.8h   // test a1[8..15] >= a2[8..15]
+        srshr   v7.8h, v7.8h, #3
+        bsl     v20.16b, v2.16b, v5.16b // a3[0..7]
+        abs     v2.8h, v3.8h            // a0[8..15]
+        sshr    v3.8h, v3.8h, #8        // a0_sign[8..15]
+        bsl     v24.16b, v17.16b, v6.16b // a3[8..15]
+        abs     v5.8h, v7.8h            // a0[0..7]
+        sshr    v6.8h, v7.8h, #8        // a0_sign[0..7]
+        cmhs    v7.8h, v2.8h, v19.8h    // test a0[8..15] >= pq
+        sub     v1.8h, v1.8h, v3.8h     // clip_sign[8..15] - a0_sign[8..15]
+        uqsub   v3.8h, v2.8h, v24.8h    // a0[8..15] >= a3[8..15] ? a0[8..15]-a3[8..15] : 0  (a0 > a3 in all cases where filtering is enabled, so it makes more sense to subtract this way round than the other way and then take the abs)
+        cmhs    v2.8h, v24.8h, v2.8h    // test a3[8..15] >= a0[8..15]
+        uqsub   v17.8h, v5.8h, v20.8h   // a0[0..7] >= a3[0..7] ? a0[0..7]-a3[0..7] : 0  (a0 > a3 in all cases where filtering is enabled, so it makes more sense to subtract this way round than the other way and then take the abs)
+        cmhs    v19.8h, v5.8h, v19.8h   // test a0[0..7] >= pq
+        orr     v7.16b, v28.16b, v7.16b // test clip[8..15] == 0 || a0[8..15] >= pq
+        sub     v6.8h, v21.8h, v6.8h    // clip_sign[0..7] - a0_sign[0..7]
+        mul     v3.8h, v3.8h, v0.h[1]   // a0[8..15] >= a3[8..15] ? 5*(a0[8..15]-a3[8..15]) : 0
+        cmhs    v5.8h, v20.8h, v5.8h    // test a3[0..7] >= a0[0..7]
+        orr     v19.16b, v22.16b, v19.16b // test clip[0..7] == 0 || a0[0..7] >= pq
+        mul     v0.8h, v17.8h, v0.h[1]  // a0[0..7] >= a3[0..7] ? 5*(a0[0..7]-a3[0..7]) : 0
+        orr     v2.16b, v7.16b, v2.16b  // test clip[8..15] == 0 || a0[8..15] >= pq || a3[8..15] >= a0[8..15]
+        orr     v5.16b, v19.16b, v5.16b // test clip[0..7] == 0 || a0[0..7] >= pq || a3[0..7] >= a0[0..7]
+        ushr    v3.8h, v3.8h, #3        // a0[8..15] >= a3[8..15] ? (5*(a0[8..15]-a3[8..15]))>>3 : 0
+        mov     w7, v2.s[1]
+        mov     w8, v2.s[3]
+        ushr    v0.8h, v0.8h, #3        // a0[0..7] >= a3[0..7] ? (5*(a0[0..7]-a3[0..7]))>>3 : 0
+        mov     w2, v5.s[1]             // move to gp reg
+        cmhs    v2.8h, v3.8h, v26.8h
+        mov     w3, v5.s[3]
+        cmhs    v5.8h, v0.8h, v23.8h
+        bsl     v2.16b, v26.16b, v3.16b // FFMIN(d[8..15], clip[8..15])
+        and     w9, w7, w8
+        bsl     v5.16b, v23.16b, v0.16b // FFMIN(d[0..7], clip[0..7])
+        and     w10, w2, w3
+        bic     v0.16b, v2.16b, v7.16b  // set each d[8..15] to zero if it should not be filtered because clip[8..15] == 0 || a0[8..15] >= pq (a3 > a0 case already zeroed by saturating sub)
+        and     w9, w10, w9
+        bic     v2.16b, v5.16b, v19.16b // set each d[0..7] to zero if it should not be filtered because clip[0..7] == 0 || a0[0..7] >= pq (a3 > a0 case already zeroed by saturating sub)
+        mls     v4.8h, v0.8h, v1.8h     // invert d[8..15] depending on clip_sign[8..15] & a0_sign[8..15], or zero it if they match, and accumulate into P4
+        tbnz    w9, #0, 4f              // none of the 16 pixel pairs should be updated in this case
+        mls     v27.8h, v2.8h, v6.8h    // invert d[0..7] depending on clip_sign[0..7] & a0_sign[0..7], or zero it if they match, and accumulate into P4
+        mla     v16.8h, v2.8h, v6.8h    // invert d[0..7] depending on clip_sign[0..7] & a0_sign[0..7], or zero it if they match, and accumulate into P5
+        sqxtun  v2.8b, v4.8h
+        mla     v18.8h, v0.8h, v1.8h    // invert d[8..15] depending on clip_sign[8..15] & a0_sign[8..15], or zero it if they match, and accumulate into P5
+        sqxtun  v0.8b, v27.8h
+        sqxtun  v1.8b, v16.8h
+        sqxtun  v3.8b, v18.8h
+        tbnz    w2, #0, 1f
+        st2     {v0.b, v1.b}[0], [x0], x1
+        st2     {v0.b, v1.b}[1], [x0], x1
+        st2     {v0.b, v1.b}[2], [x0], x1
+        st2     {v0.b, v1.b}[3], [x0]
+1:      tbnz    w3, #0, 2f
+        st2     {v0.b, v1.b}[4], [x5], x1
+        st2     {v0.b, v1.b}[5], [x5], x1
+        st2     {v0.b, v1.b}[6], [x5], x1
+        st2     {v0.b, v1.b}[7], [x5]
+2:      tbnz    w7, #0, 3f
+        st2     {v2.b, v3.b}[0], [x4], x1
+        st2     {v2.b, v3.b}[1], [x4], x1
+        st2     {v2.b, v3.b}[2], [x4], x1
+        st2     {v2.b, v3.b}[3], [x4]
+3:      tbnz    w8, #0, 4f
+        st2     {v2.b, v3.b}[4], [x6], x1
+        st2     {v2.b, v3.b}[5], [x6], x1
+        st2     {v2.b, v3.b}[6], [x6], x1
+        st2     {v2.b, v3.b}[7], [x6]
+4:      ret
+endfunc
-- 
2.25.1


* [FFmpeg-devel] [PATCH 2/6] avcodec/vc1: Arm 32-bit NEON deblocking filter fast paths
  2022-03-17 18:58 [FFmpeg-devel] [PATCH 0/6] avcodec/vc1: Arm optimisations Ben Avison
  2022-03-17 18:58 ` [FFmpeg-devel] [PATCH 1/6] avcodec/vc1: Arm 64-bit NEON deblocking filter fast paths Ben Avison
@ 2022-03-17 18:58 ` Ben Avison
  2022-03-17 18:58 ` [FFmpeg-devel] [PATCH 3/6] avcodec/vc1: Arm 64-bit NEON inverse transform " Ben Avison
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 55+ messages in thread
From: Ben Avison @ 2022-03-17 18:58 UTC (permalink / raw)
  To: ffmpeg-devel; +Cc: Ben Avison

Signed-off-by: Ben Avison <bavison@riscosopen.org>
---
 libavcodec/arm/vc1dsp_init_neon.c |  14 +
 libavcodec/arm/vc1dsp_neon.S      | 643 ++++++++++++++++++++++++++++++
 2 files changed, 657 insertions(+)

diff --git a/libavcodec/arm/vc1dsp_init_neon.c b/libavcodec/arm/vc1dsp_init_neon.c
index 2cca784f5a..f5f5c702d7 100644
--- a/libavcodec/arm/vc1dsp_init_neon.c
+++ b/libavcodec/arm/vc1dsp_init_neon.c
@@ -32,6 +32,13 @@ void ff_vc1_inv_trans_4x8_dc_neon(uint8_t *dest, ptrdiff_t stride, int16_t *bloc
 void ff_vc1_inv_trans_8x4_dc_neon(uint8_t *dest, ptrdiff_t stride, int16_t *block);
 void ff_vc1_inv_trans_4x4_dc_neon(uint8_t *dest, ptrdiff_t stride, int16_t *block);
 
+void ff_vc1_v_loop_filter4_neon(uint8_t *src, int stride, int pq);
+void ff_vc1_h_loop_filter4_neon(uint8_t *src, int stride, int pq);
+void ff_vc1_v_loop_filter8_neon(uint8_t *src, int stride, int pq);
+void ff_vc1_h_loop_filter8_neon(uint8_t *src, int stride, int pq);
+void ff_vc1_v_loop_filter16_neon(uint8_t *src, int stride, int pq);
+void ff_vc1_h_loop_filter16_neon(uint8_t *src, int stride, int pq);
+
 void ff_put_pixels8x8_neon(uint8_t *block, const uint8_t *pixels,
                            ptrdiff_t line_size, int rnd);
 
@@ -92,6 +99,13 @@ av_cold void ff_vc1dsp_init_neon(VC1DSPContext *dsp)
     dsp->vc1_inv_trans_8x4_dc = ff_vc1_inv_trans_8x4_dc_neon;
     dsp->vc1_inv_trans_4x4_dc = ff_vc1_inv_trans_4x4_dc_neon;
 
+    dsp->vc1_v_loop_filter4  = ff_vc1_v_loop_filter4_neon;
+    dsp->vc1_h_loop_filter4  = ff_vc1_h_loop_filter4_neon;
+    dsp->vc1_v_loop_filter8  = ff_vc1_v_loop_filter8_neon;
+    dsp->vc1_h_loop_filter8  = ff_vc1_h_loop_filter8_neon;
+    dsp->vc1_v_loop_filter16 = ff_vc1_v_loop_filter16_neon;
+    dsp->vc1_h_loop_filter16 = ff_vc1_h_loop_filter16_neon;
+
     dsp->put_vc1_mspel_pixels_tab[1][ 0] = ff_put_pixels8x8_neon;
     FN_ASSIGN(1, 0);
     FN_ASSIGN(2, 0);
diff --git a/libavcodec/arm/vc1dsp_neon.S b/libavcodec/arm/vc1dsp_neon.S
index 93f043bf08..4ef083102b 100644
--- a/libavcodec/arm/vc1dsp_neon.S
+++ b/libavcodec/arm/vc1dsp_neon.S
@@ -1161,3 +1161,646 @@ function ff_vc1_inv_trans_4x4_dc_neon, export=1
         vst1.32         {d1[1]},  [r0,:32]
         bx              lr
 endfunc
+
+@ VC-1 in-loop deblocking filter for 4 pixel pairs at boundary of vertically-neighbouring blocks
+@ On entry:
+@   r0 -> top-left pel of lower block
+@   r1 = row stride, bytes
+@   r2 = PQUANT bitstream parameter
+function ff_vc1_v_loop_filter4_neon, export=1
+        sub             r3, r0, r1, lsl #2
+        vldr            d0, .Lcoeffs
+        vld1.32         {d1[0]}, [r0], r1       @ P5
+        vld1.32         {d2[0]}, [r3], r1       @ P1
+        vld1.32         {d3[0]}, [r3], r1       @ P2
+        vld1.32         {d4[0]}, [r0], r1       @ P6
+        vld1.32         {d5[0]}, [r3], r1       @ P3
+        vld1.32         {d6[0]}, [r0], r1       @ P7
+        vld1.32         {d7[0]}, [r3]           @ P4
+        vld1.32         {d16[0]}, [r0]          @ P8
+        vshll.u8        q9, d1, #1              @ 2*P5
+        vdup.16         d17, r2                 @ pq
+        vshll.u8        q10, d2, #1             @ 2*P1
+        vmovl.u8        q11, d3                 @ P2
+        vmovl.u8        q1, d4                  @ P6
+        vmovl.u8        q12, d5                 @ P3
+        vmls.i16        d20, d22, d0[1]         @ 2*P1-5*P2
+        vmovl.u8        q11, d6                 @ P7
+        vmls.i16        d18, d2, d0[1]          @ 2*P5-5*P6
+        vshll.u8        q2, d5, #1              @ 2*P3
+        vmovl.u8        q3, d7                  @ P4
+        vmla.i16        d18, d22, d0[1]         @ 2*P5-5*P6+5*P7
+        vmovl.u8        q11, d16                @ P8
+        vmla.u16        d20, d24, d0[1]         @ 2*P1-5*P2+5*P3
+        vmovl.u8        q12, d1                 @ P5
+        vmls.u16        d4, d6, d0[1]           @ 2*P3-5*P4
+        vmls.u16        d18, d22, d0[0]         @ 2*P5-5*P6+5*P7-2*P8
+        vsub.i16        d1, d6, d24             @ P4-P5
+        vmls.i16        d20, d6, d0[0]          @ 2*P1-5*P2+5*P3-2*P4
+        vmla.i16        d4, d24, d0[1]          @ 2*P3-5*P4+5*P5
+        vmls.i16        d4, d2, d0[0]           @ 2*P3-5*P4+5*P5-2*P6
+        vabs.s16        d2, d1
+        vrshr.s16       d3, d18, #3
+        vrshr.s16       d5, d20, #3
+        vshr.s16        d2, d2, #1              @ clip
+        vrshr.s16       d4, d4, #3
+        vabs.s16        d3, d3                  @ a2
+        vshr.s16        d1, d1, #8              @ clip_sign
+        vabs.s16        d5, d5                  @ a1
+        vceq.i16        d7, d2, #0              @ test clip == 0
+        vabs.s16        d16, d4                 @ a0
+        vshr.s16        d4, d4, #8              @ a0_sign
+        vcge.s16        d18, d5, d3             @ test a1 >= a2
+        vcge.s16        d17, d16, d17           @ test a0 >= pq
+        vbsl            d18, d3, d5             @ a3
+        vsub.i16        d1, d1, d4              @ clip_sign - a0_sign
+        vorr            d3, d7, d17             @ test clip == 0 || a0 >= pq
+        vqsub.u16       d4, d16, d18            @ a0 >= a3 ? a0-a3 : 0  (a0 > a3 in all cases where filtering is enabled, so it makes more sense to subtract this way round than the other way and then take the abs)
+        vcge.s16        d5, d18, d16            @ test a3 >= a0
+        vmul.i16        d0, d4, d0[1]           @ a0 >= a3 ? 5*(a0-a3) : 0
+        vorr            d4, d3, d5              @ test clip == 0 || a0 >= pq || a3 >= a0
+        vmov.32         r0, d4[1]               @ move to gp reg
+        vshr.u16        d0, d0, #3              @ a0 >= a3 ? (5*(a0-a3))>>3 : 0
+        vcge.s16        d4, d0, d2
+        tst             r0, #1
+        bne             1f                      @ none of the 4 pixel pairs should be updated if this one is not filtered
+        vbsl            d4, d2, d0              @ FFMIN(d, clip)
+        vbic            d0, d4, d3              @ set each d to zero if it should not be filtered because clip == 0 || a0 >= pq (a3 > a0 case already zeroed by saturating sub)
+        vmls.i16        d6, d0, d1              @ invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P4
+        vmla.i16        d24, d0, d1             @ invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P5
+        vqmovun.s16     d0, q3
+        vqmovun.s16     d1, q12
+        vst1.32         {d0[0]}, [r3], r1
+        vst1.32         {d1[0]}, [r3]
+1:      bx              lr
+endfunc
+
+@ VC-1 in-loop deblocking filter for 4 pixel pairs at boundary of horizontally-neighbouring blocks
+@ On entry:
+@   r0 -> top-left pel of right block
+@   r1 = row stride, bytes
+@   r2 = PQUANT bitstream parameter
+function ff_vc1_h_loop_filter4_neon, export=1
+        sub             r3, r0, #4              @ where to start reading
+        vldr            d0, .Lcoeffs
+        vld1.32         {d2}, [r3], r1
+        sub             r0, r0, #1              @ where to start writing
+        vld1.32         {d4}, [r3], r1
+        vld1.32         {d3}, [r3], r1
+        vld1.32         {d5}, [r3]
+        vdup.16         d1, r2                  @ pq
+        vtrn.8          q1, q2
+        vtrn.16         d2, d3                  @ P1, P5, P3, P7
+        vtrn.16         d4, d5                  @ P2, P6, P4, P8
+        vshll.u8        q3, d2, #1              @ 2*P1, 2*P5
+        vmovl.u8        q8, d4                  @ P2, P6
+        vmovl.u8        q9, d3                  @ P3, P7
+        vmovl.u8        q2, d5                  @ P4, P8
+        vmls.i16        q3, q8, d0[1]           @ 2*P1-5*P2, 2*P5-5*P6
+        vshll.u8        q10, d3, #1             @ 2*P3, 2*P7
+        vmovl.u8        q1, d2                  @ P1, P5
+        vmla.i16        q3, q9, d0[1]           @ 2*P1-5*P2+5*P3, 2*P5-5*P6+5*P7
+        vmls.i16        q3, q2, d0[0]           @ 2*P1-5*P2+5*P3-2*P4, 2*P5-5*P6+5*P7-2*P8
+        vmov            d2, d3                  @ needs to be in an even-numbered vector for when we come to narrow it later
+        vmls.i16        d20, d4, d0[1]          @ 2*P3-5*P4
+        vmla.i16        d20, d3, d0[1]          @ 2*P3-5*P4+5*P5
+        vsub.i16        d3, d4, d2              @ P4-P5
+        vmls.i16        d20, d17, d0[0]         @ 2*P3-5*P4+5*P5-2*P6
+        vrshr.s16       q3, q3, #3
+        vabs.s16        d5, d3
+        vshr.s16        d3, d3, #8              @ clip_sign
+        vrshr.s16       d16, d20, #3
+        vabs.s16        q3, q3                  @ a1, a2
+        vshr.s16        d5, d5, #1              @ clip
+        vabs.s16        d17, d16                @ a0
+        vceq.i16        d18, d5, #0             @ test clip == 0
+        vshr.s16        d16, d16, #8            @ a0_sign
+        vcge.s16        d19, d6, d7             @ test a1 >= a2
+        vcge.s16        d1, d17, d1             @ test a0 >= pq
+        vsub.i16        d16, d3, d16            @ clip_sign - a0_sign
+        vbsl            d19, d7, d6             @ a3
+        vorr            d1, d18, d1             @ test clip == 0 || a0 >= pq
+        vqsub.u16       d3, d17, d19            @ a0 >= a3 ? a0-a3 : 0  (a0 > a3 in all cases where filtering is enabled, so it makes more sense to subtract this way round than the other way and then take the abs)
+        vcge.s16        d6, d19, d17            @ test a3 >= a0
+        vmul.i16        d0, d3, d0[1]           @ a0 >= a3 ? 5*(a0-a3) : 0
+        vorr            d3, d1, d6              @ test clip == 0 || a0 >= pq || a3 >= a0
+        vmov.32         r2, d3[1]               @ move to gp reg
+        vshr.u16        d0, d0, #3              @ a0 >= a3 ? (5*(a0-a3))>>3 : 0
+        vcge.s16        d3, d0, d5
+        tst             r2, #1
+        bne             1f                      @ none of the 4 pixel pairs should be updated if this one is not filtered
+        vbsl            d3, d5, d0              @ FFMIN(d, clip)
+        vbic            d0, d3, d1              @ set each d to zero if it should not be filtered because clip == 0 || a0 >= pq (a3 > a0 case already zeroed by saturating sub)
+        vmla.i16        d2, d0, d16             @ invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P5
+        vmls.i16        d4, d0, d16             @ invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P4
+        vqmovun.s16     d1, q1
+        vqmovun.s16     d0, q2
+        vst2.8          {d0[0], d1[0]}, [r0], r1
+        vst2.8          {d0[1], d1[1]}, [r0], r1
+        vst2.8          {d0[2], d1[2]}, [r0], r1
+        vst2.8          {d0[3], d1[3]}, [r0]
+1:      bx              lr
+endfunc
+
+@ VC-1 in-loop deblocking filter for 8 pixel pairs at boundary of vertically-neighbouring blocks
+@ On entry:
+@   r0 -> top-left pel of lower block
+@   r1 = row stride, bytes
+@   r2 = PQUANT bitstream parameter
+function ff_vc1_v_loop_filter8_neon, export=1
+        sub             r3, r0, r1, lsl #2
+        vldr            d0, .Lcoeffs
+        vld1.32         {d1}, [r0], r1          @ P5
+        vld1.32         {d2}, [r3], r1          @ P1
+        vld1.32         {d3}, [r3], r1          @ P2
+        vld1.32         {d4}, [r0], r1          @ P6
+        vld1.32         {d5}, [r3], r1          @ P3
+        vld1.32         {d6}, [r0], r1          @ P7
+        vshll.u8        q8, d1, #1              @ 2*P5
+        vshll.u8        q9, d2, #1              @ 2*P1
+        vld1.32         {d7}, [r3]              @ P4
+        vmovl.u8        q1, d3                  @ P2
+        vld1.32         {d20}, [r0]             @ P8
+        vmovl.u8        q11, d4                 @ P6
+        vdup.16         q12, r2                 @ pq
+        vmovl.u8        q13, d5                 @ P3
+        vmls.i16        q9, q1, d0[1]           @ 2*P1-5*P2
+        vmovl.u8        q1, d6                  @ P7
+        vshll.u8        q2, d5, #1              @ 2*P3
+        vmls.i16        q8, q11, d0[1]          @ 2*P5-5*P6
+        vmovl.u8        q3, d7                  @ P4
+        vmovl.u8        q10, d20                @ P8
+        vmla.i16        q8, q1, d0[1]           @ 2*P5-5*P6+5*P7
+        vmovl.u8        q1, d1                  @ P5
+        vmla.i16        q9, q13, d0[1]          @ 2*P1-5*P2+5*P3
+        vsub.i16        q13, q3, q1             @ P4-P5
+        vmls.i16        q2, q3, d0[1]           @ 2*P3-5*P4
+        vmls.i16        q8, q10, d0[0]          @ 2*P5-5*P6+5*P7-2*P8
+        vabs.s16        q10, q13
+        vshr.s16        q13, q13, #8            @ clip_sign
+        vmls.i16        q9, q3, d0[0]           @ 2*P1-5*P2+5*P3-2*P4
+        vshr.s16        q10, q10, #1            @ clip
+        vmla.i16        q2, q1, d0[1]           @ 2*P3-5*P4+5*P5
+        vrshr.s16       q8, q8, #3
+        vmls.i16        q2, q11, d0[0]          @ 2*P3-5*P4+5*P5-2*P6
+        vceq.i16        q11, q10, #0            @ test clip == 0
+        vrshr.s16       q9, q9, #3
+        vabs.s16        q8, q8                  @ a2
+        vabs.s16        q9, q9                  @ a1
+        vrshr.s16       q2, q2, #3
+        vcge.s16        q14, q9, q8             @ test a1 >= a2
+        vabs.s16        q15, q2                 @ a0
+        vshr.s16        q2, q2, #8              @ a0_sign
+        vbsl            q14, q8, q9             @ a3
+        vcge.s16        q8, q15, q12            @ test a0 >= pq
+        vsub.i16        q2, q13, q2             @ clip_sign - a0_sign
+        vqsub.u16       q9, q15, q14            @ a0 >= a3 ? a0-a3 : 0  (a0 > a3 in all cases where filtering is enabled, so it makes more sense to subtract this way round than the other way and then take the abs)
+        vcge.s16        q12, q14, q15           @ test a3 >= a0
+        vorr            q8, q11, q8             @ test clip == 0 || a0 >= pq
+        vmul.i16        q0, q9, d0[1]           @ a0 >= a3 ? 5*(a0-a3) : 0
+        vorr            q9, q8, q12             @ test clip == 0 || a0 >= pq || a3 >= a0
+        vshl.i64        q11, q9, #16            @ shift the 3rd pair's result up to the top halfword of each group of 4
+        vmov.32         r0, d18[1]              @ move to gp reg
+        vshr.u16        q0, q0, #3              @ a0 >= a3 ? (5*(a0-a3))>>3 : 0
+        vmov.32         r2, d19[1]
+        vshr.s64        q9, q11, #48            @ sign-extend it back down: 3rd pair's result now replicated across its group
+        vcge.s16        q11, q0, q10
+        vorr            q8, q8, q9              @ an unfiltered 3rd pair thus suppresses its whole group of 4
+        and             r0, r0, r2
+        vbsl            q11, q10, q0            @ FFMIN(d, clip)
+        tst             r0, #1
+        bne             1f                      @ none of the 8 pixel pairs should be updated in this case
+        vbic            q0, q11, q8             @ set each d to zero if it should not be filtered
+        vmls.i16        q3, q0, q2              @ invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P4
+        vmla.i16        q1, q0, q2              @ invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P5
+        vqmovun.s16     d0, q3
+        vqmovun.s16     d1, q1
+        vst1.32         {d0}, [r3], r1
+        vst1.32         {d1}, [r3]
+1:      bx              lr
+endfunc
+
+.align  5
+.Lcoeffs:
+.quad   0x00050002                              @ 16-bit elements: d0[0] = 2, d0[1] = 5
+
+@ VC-1 in-loop deblocking filter for 8 pixel pairs at boundary of horizontally-neighbouring blocks
+@ On entry:
+@   r0 -> top-left pel of right block
+@   r1 = row stride, bytes
+@   r2 = PQUANT bitstream parameter
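+@ (Same arithmetic as sketched above ff_vc1_h_loop_filter4_neon: the _h_
+@ variants load rows and transpose them with vtrn so the P1..P8 columns
+@ become vectors, then write the updated P4/P5 columns back with vst2
+@ element stores.)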
+function ff_vc1_h_loop_filter8_neon, export=1
+        push            {lr}
+        sub             r3, r0, #4              @ where to start reading
+        vldr            d0, .Lcoeffs
+        vld1.32         {d2}, [r3], r1          @ P1[0], P2[0]...
+        sub             r0, r0, #1              @ where to start writing
+        vld1.32         {d4}, [r3], r1
+        add             r12, r0, r1, lsl #2
+        vld1.32         {d3}, [r3], r1
+        vld1.32         {d5}, [r3], r1
+        vld1.32         {d6}, [r3], r1
+        vld1.32         {d16}, [r3], r1
+        vld1.32         {d7}, [r3], r1
+        vld1.32         {d17}, [r3]
+        vtrn.8          q1, q2                  @ P1[0], P1[1], P3[0]... P1[2], P1[3], P3[2]... P2[0], P2[1], P4[0]... P2[2], P2[3], P4[2]...
+        vdup.16         q9, r2                  @ pq
+        vtrn.16         d2, d3                  @ P1[0], P1[1], P1[2], P1[3], P5[0]... P3[0], P3[1], P3[2], P3[3], P7[0]...
+        vtrn.16         d4, d5                  @ P2[0], P2[1], P2[2], P2[3], P6[0]... P4[0], P4[1], P4[2], P4[3], P8[0]...
+        vtrn.8          q3, q8                  @ P1[4], P1[5], P3[4]... P1[6], P1[7], P3[6]... P2[4], P2[5], P4[4]... P2[6], P2[7], P4[6]...
+        vtrn.16         d6, d7                  @ P1[4], P1[5], P1[6], P1[7], P5[4]... P3[4], P3[5], P3[6], P3[7], P7[4]...
+        vtrn.16         d16, d17                @ P2[4], P2[5], P2[6], P2[7], P6[4]... P4[4], P4[5], P4[6], P4[7], P8[4]...
+        vtrn.32         d2, d6                  @ P1, P5
+        vtrn.32         d4, d16                 @ P2, P6
+        vtrn.32         d3, d7                  @ P3, P7
+        vtrn.32         d5, d17                 @ P4, P8
+        vshll.u8        q10, d2, #1             @ 2*P1
+        vshll.u8        q11, d6, #1             @ 2*P5
+        vmovl.u8        q12, d4                 @ P2
+        vmovl.u8        q13, d16                @ P6
+        vmovl.u8        q14, d3                 @ P3
+        vmls.i16        q10, q12, d0[1]         @ 2*P1-5*P2
+        vmovl.u8        q12, d7                 @ P7
+        vshll.u8        q1, d3, #1              @ 2*P3
+        vmls.i16        q11, q13, d0[1]         @ 2*P5-5*P6
+        vmovl.u8        q2, d5                  @ P4
+        vmovl.u8        q8, d17                 @ P8
+        vmla.i16        q11, q12, d0[1]         @ 2*P5-5*P6+5*P7
+        vmovl.u8        q3, d6                  @ P5
+        vmla.i16        q10, q14, d0[1]         @ 2*P1-5*P2+5*P3
+        vsub.i16        q12, q2, q3             @ P4-P5
+        vmls.i16        q1, q2, d0[1]           @ 2*P3-5*P4
+        vmls.i16        q11, q8, d0[0]          @ 2*P5-5*P6+5*P7-2*P8
+        vabs.s16        q8, q12
+        vshr.s16        q12, q12, #8            @ clip_sign
+        vmls.i16        q10, q2, d0[0]          @ 2*P1-5*P2+5*P3-2*P4
+        vshr.s16        q8, q8, #1              @ clip
+        vmla.i16        q1, q3, d0[1]           @ 2*P3-5*P4+5*P5
+        vrshr.s16       q11, q11, #3
+        vmls.i16        q1, q13, d0[0]          @ 2*P3-5*P4+5*P5-2*P6
+        vceq.i16        q13, q8, #0             @ test clip == 0
+        vrshr.s16       q10, q10, #3
+        vabs.s16        q11, q11                @ a2
+        vabs.s16        q10, q10                @ a1
+        vrshr.s16       q1, q1, #3
+        vcge.s16        q14, q10, q11           @ test a1 >= a2
+        vabs.s16        q15, q1                 @ a0
+        vshr.s16        q1, q1, #8              @ a0_sign
+        vbsl            q14, q11, q10           @ a3
+        vcge.s16        q9, q15, q9             @ test a0 >= pq
+        vsub.i16        q1, q12, q1             @ clip_sign - a0_sign
+        vqsub.u16       q10, q15, q14           @ a0 >= a3 ? a0-a3 : 0  (a0 > a3 in all cases where filtering is enabled, so it makes more sense to subtract this way round than the other way and then take the abs)
+        vcge.s16        q11, q14, q15           @ test a3 >= a0
+        vorr            q9, q13, q9             @ test clip == 0 || a0 >= pq
+        vmul.i16        q0, q10, d0[1]          @ a0 >= a3 ? 5*(a0-a3) : 0
+        vorr            q10, q9, q11            @ test clip == 0 || a0 >= pq || a3 >= a0
+        vmov.32         r2, d20[1]              @ move to gp reg
+        vshr.u16        q0, q0, #3              @ a0 >= a3 ? (5*(a0-a3))>>3 : 0
+        vmov.32         r3, d21[1]
+        vcge.s16        q10, q0, q8
+        and             r14, r2, r3
+        vbsl            q10, q8, q0             @ FFMIN(d, clip)
+        tst             r14, #1
+        bne             2f                      @ none of the 8 pixel pairs should be updated in this case
+        vbic            q0, q10, q9             @ set each d to zero if it should not be filtered because clip == 0 || a0 >= pq (a3 > a0 case already zeroed by saturating sub)
+        vmla.i16        q3, q0, q1              @ invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P5
+        vmls.i16        q2, q0, q1              @ invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P4
+        vqmovun.s16     d1, q3
+        vqmovun.s16     d0, q2
+        tst             r2, #1
+        bne             1f                      @ none of the first 4 pixel pairs should be updated if so
+        vst2.8          {d0[0], d1[0]}, [r0], r1
+        vst2.8          {d0[1], d1[1]}, [r0], r1
+        vst2.8          {d0[2], d1[2]}, [r0], r1
+        vst2.8          {d0[3], d1[3]}, [r0]
+1:      tst             r3, #1
+        bne             2f                      @ none of the second 4 pixel pairs should be updated if so
+        vst2.8          {d0[4], d1[4]}, [r12], r1
+        vst2.8          {d0[5], d1[5]}, [r12], r1
+        vst2.8          {d0[6], d1[6]}, [r12], r1
+        vst2.8          {d0[7], d1[7]}, [r12]
+2:      pop             {pc}
+endfunc
+
+@ VC-1 in-loop deblocking filter for 16 pixel pairs at boundary of vertically-neighbouring blocks
+@ On entry:
+@   r0 -> top-left pel of lower block
+@   r1 = row stride, bytes
+@   r2 = PQUANT bitstream parameter
+function ff_vc1_v_loop_filter16_neon, export=1
+        vpush           {d8-d15}
+        sub             r3, r0, r1, lsl #2
+        vldr            d0, .Lcoeffs
+        vld1.64         {q1}, [r0], r1          @ P5
+        vld1.64         {q2}, [r3], r1          @ P1
+        vld1.64         {q3}, [r3], r1          @ P2
+        vld1.64         {q4}, [r0], r1          @ P6
+        vld1.64         {q5}, [r3], r1          @ P3
+        vld1.64         {q6}, [r0], r1          @ P7
+        vshll.u8        q7, d2, #1              @ 2*P5[0..7]
+        vshll.u8        q8, d4, #1              @ 2*P1[0..7]
+        vld1.64         {q9}, [r3]              @ P4
+        vmovl.u8        q10, d6                 @ P2[0..7]
+        vld1.64         {q11}, [r0]             @ P8
+        vmovl.u8        q12, d8                 @ P6[0..7]
+        vdup.16         q13, r2                 @ pq
+        vshll.u8        q2, d5, #1              @ 2*P1[8..15]
+        vmls.i16        q8, q10, d0[1]          @ 2*P1[0..7]-5*P2[0..7]
+        vshll.u8        q10, d3, #1             @ 2*P5[8..15]
+        vmovl.u8        q3, d7                  @ P2[8..15]
+        vmls.i16        q7, q12, d0[1]          @ 2*P5[0..7]-5*P6[0..7]
+        vmovl.u8        q4, d9                  @ P6[8..15]
+        vmovl.u8        q14, d10                @ P3[0..7]
+        vmovl.u8        q15, d12                @ P7[0..7]
+        vmls.i16        q2, q3, d0[1]           @ 2*P1[8..15]-5*P2[8..15]
+        vshll.u8        q3, d10, #1             @ 2*P3[0..7]
+        vmls.i16        q10, q4, d0[1]          @ 2*P5[8..15]-5*P6[8..15]
+        vmovl.u8        q6, d13                 @ P7[8..15]
+        vmla.i16        q8, q14, d0[1]          @ 2*P1[0..7]-5*P2[0..7]+5*P3[0..7]
+        vmovl.u8        q14, d18                @ P4[0..7]
+        vmovl.u8        q9, d19                 @ P4[8..15]
+        vmla.i16        q7, q15, d0[1]          @ 2*P5[0..7]-5*P6[0..7]+5*P7[0..7]
+        vmovl.u8        q15, d11                @ P3[8..15]
+        vshll.u8        q5, d11, #1             @ 2*P3[8..15]
+        vmls.i16        q3, q14, d0[1]          @ 2*P3[0..7]-5*P4[0..7]
+        vmla.i16        q2, q15, d0[1]          @ 2*P1[8..15]-5*P2[8..15]+5*P3[8..15]
+        vmovl.u8        q15, d22                @ P8[0..7]
+        vmovl.u8        q11, d23                @ P8[8..15]
+        vmla.i16        q10, q6, d0[1]          @ 2*P5[8..15]-5*P6[8..15]+5*P7[8..15]
+        vmovl.u8        q6, d2                  @ P5[0..7]
+        vmovl.u8        q1, d3                  @ P5[8..15]
+        vmls.i16        q5, q9, d0[1]           @ 2*P3[8..15]-5*P4[8..15]
+        vmls.i16        q8, q14, d0[0]          @ 2*P1[0..7]-5*P2[0..7]+5*P3[0..7]-2*P4[0..7]
+        vmls.i16        q7, q15, d0[0]          @ 2*P5[0..7]-5*P6[0..7]+5*P7[0..7]-2*P8[0..7]
+        vsub.i16        q15, q14, q6            @ P4[0..7]-P5[0..7]
+        vmla.i16        q3, q6, d0[1]           @ 2*P3[0..7]-5*P4[0..7]+5*P5[0..7]
+        vrshr.s16       q8, q8, #3
+        vmls.i16        q2, q9, d0[0]           @ 2*P1[8..15]-5*P2[8..15]+5*P3[8..15]-2*P4[8..15]
+        vrshr.s16       q7, q7, #3
+        vmls.i16        q10, q11, d0[0]         @ 2*P5[8..15]-5*P6[8..15]+5*P7[8..15]-2*P8[8..15]
+        vabs.s16        q11, q15
+        vabs.s16        q8, q8                  @ a1[0..7]
+        vmla.i16        q5, q1, d0[1]           @ 2*P3[8..15]-5*P4[8..15]+5*P5[8..15]
+        vshr.s16        q15, q15, #8            @ clip_sign[0..7]
+        vrshr.s16       q2, q2, #3
+        vmls.i16        q3, q12, d0[0]          @ 2*P3[0..7]-5*P4[0..7]+5*P5[0..7]-2*P6[0..7]
+        vabs.s16        q7, q7                  @ a2[0..7]
+        vrshr.s16       q10, q10, #3
+        vsub.i16        q12, q9, q1             @ P4[8..15]-P5[8..15]
+        vshr.s16        q11, q11, #1            @ clip[0..7]
+        vmls.i16        q5, q4, d0[0]           @ 2*P3[8..15]-5*P4[8..15]+5*P5[8..15]-2*P6[8..15]
+        vcge.s16        q4, q8, q7              @ test a1[0..7] >= a2[0..7]
+        vabs.s16        q2, q2                  @ a1[8..15]
+        vrshr.s16       q3, q3, #3
+        vabs.s16        q10, q10                @ a2[8..15]
+        vbsl            q4, q7, q8              @ a3[0..7]
+        vabs.s16        q7, q12
+        vshr.s16        q8, q12, #8             @ clip_sign[8..15]
+        vrshr.s16       q5, q5, #3
+        vcge.s16        q12, q2, q10            @ test a1[8..15] >= a2[8..15]
+        vshr.s16        q7, q7, #1              @ clip[8..15]
+        vbsl            q12, q10, q2            @ a3[8..15]
+        vabs.s16        q2, q3                  @ a0[0..7]
+        vceq.i16        q10, q11, #0            @ test clip[0..7] == 0
+        vshr.s16        q3, q3, #8              @ a0_sign[0..7]
+        vsub.i16        q3, q15, q3             @ clip_sign[0..7] - a0_sign[0..7]
+        vcge.s16        q15, q2, q13            @ test a0[0..7] >= pq
+        vorr            q10, q10, q15           @ test clip[0..7] == 0 || a0[0..7] >= pq
+        vqsub.u16       q15, q2, q4             @ a0[0..7] >= a3[0..7] ? a0[0..7]-a3[0..7] : 0  (a0 > a3 in all cases where filtering is enabled, so it makes more sense to subtract this way round than the other way and then take the abs)
+        vcge.s16        q2, q4, q2              @ test a3[0..7] >= a0[0..7]
+        vabs.s16        q4, q5                  @ a0[8..15]
+        vshr.s16        q5, q5, #8              @ a0_sign[8..15]
+        vmul.i16        q15, q15, d0[1]         @ a0[0..7] >= a3[0..7] ? 5*(a0[0..7]-a3[0..7]) : 0
+        vcge.s16        q13, q4, q13            @ test a0[8..15] >= pq
+        vorr            q2, q10, q2             @ test clip[0..7] == 0 || a0[0..7] >= pq || a3[0..7] >= a0[0..7]
+        vsub.i16        q5, q8, q5              @ clip_sign[8..15] - a0_sign[8..15]
+        vceq.i16        q8, q7, #0              @ test clip[8..15] == 0
+        vshr.u16        q15, q15, #3            @ a0[0..7] >= a3[0..7] ? (5*(a0[0..7]-a3[0..7]))>>3 : 0
+        vmov            r0, d4[1]               @ move to gp reg
+        vorr            q8, q8, q13             @ test clip[8..15] == 0 || a0[8..15] >= pq
+        vqsub.u16       q13, q4, q12            @ a0[8..15] >= a3[8..15] ? a0[8..15]-a3[8..15] : 0  (a0 > a3 in all cases where filtering is enabled, so it makes more sense to subtract this way round than the other way and then take the abs)
+        vmov            r2, d5[1]
+        vcge.s16        q4, q12, q4             @ test a3[8..15] >= a0[8..15]
+        vshl.i64        q2, q2, #16             @ with the vshr.s64 below, replicates the 3rd pair's result across each group of 4 (as in the 8-pair filters)
+        vcge.s16        q12, q15, q11
+        vmul.i16        q0, q13, d0[1]          @ a0[8..15] >= a3[8..15] ? 5*(a0[8..15]-a3[8..15]) : 0
+        vorr            q4, q8, q4              @ test clip[8..15] == 0 || a0[8..15] >= pq || a3[8..15] >= a0[8..15]
+        vshr.s64        q2, q2, #48
+        and             r0, r0, r2
+        vbsl            q12, q11, q15           @ FFMIN(d[0..7], clip[0..7])
+        vshl.i64        q11, q4, #16
+        vmov            r2, d8[1]
+        vshr.u16        q0, q0, #3              @ a0[8..15] >= a3[8..15] ? (5*(a0[8..15]-a3[8..15]))>>3 : 0
+        vorr            q2, q10, q2
+        vmov            r12, d9[1]
+        vshr.s64        q4, q11, #48
+        vcge.s16        q10, q0, q7
+        vbic            q2, q12, q2             @ set each d[0..7] to zero if it should not be filtered because clip[0..7] == 0 || a0[0..7] >= pq (a3 > a0 case already zeroed by saturating sub)
+        vorr            q4, q8, q4
+        and             r2, r2, r12
+        vbsl            q10, q7, q0             @ FFMIN(d[8..15], clip[8..15])
+        vmls.i16        q14, q2, q3             @ invert d[0..7] depending on clip_sign[0..7] & a0_sign[0..7], or zero it if they match, and accumulate into P4[0..7]
+        and             r0, r0, r2
+        vbic            q0, q10, q4             @ set each d[8..15] to zero if it should not be filtered because clip[8..15] == 0 || a0[8..15] >= pq (a3 > a0 case already zeroed by saturating sub)
+        tst             r0, #1
+        bne             1f                      @ none of the 16 pixel pairs should be updated in this case
+        vmla.i16        q6, q2, q3              @ invert d[0..7] depending on clip_sign[0..7] & a0_sign[0..7], or zero it if they match, and accumulate into P5[0..7]
+        vmls.i16        q9, q0, q5              @ invert d[8..15] depending on clip_sign[8..15] & a0_sign[8..15], or zero it if they match, and accumulate into P4[8..15]
+        vqmovun.s16     d4, q14
+        vmla.i16        q1, q0, q5              @ invert d[8..15] depending on clip_sign[8..15] & a0_sign[8..15], or zero it if they match, and accumulate into P5[8..15]
+        vqmovun.s16     d0, q6
+        vqmovun.s16     d5, q9
+        vqmovun.s16     d1, q1
+        vst1.64         {q2}, [r3], r1
+        vst1.64         {q0}, [r3]
+1:      vpop            {d8-d15}
+        bx              lr
+endfunc
+
+@ VC-1 in-loop deblocking filter for 16 pixel pairs at boundary of horizontally-neighbouring blocks
+@ On entry:
+@   r0 -> top-left pel of right block
+@   r1 = row stride, bytes
+@   r2 = PQUANT bitstream parameter
+function ff_vc1_h_loop_filter16_neon, export=1
+        push            {r4-r6,lr}
+        vpush           {d8-d15}
+        sub             r3, r0, #4              @ where to start reading
+        vldr            d0, .Lcoeffs
+        vld1.32         {d2}, [r3], r1          @ P1[0], P2[0]...
+        sub             r0, r0, #1              @ where to start writing
+        vld1.32         {d3}, [r3], r1
+        add             r4, r0, r1, lsl #2
+        vld1.32         {d10}, [r3], r1
+        vld1.32         {d11}, [r3], r1
+        vld1.32         {d16}, [r3], r1
+        vld1.32         {d4}, [r3], r1
+        vld1.32         {d8}, [r3], r1
+        vtrn.8          d2, d3                  @ P1[0], P1[1], P3[0]... P2[0], P2[1], P4[0]...
+        vld1.32         {d14}, [r3], r1
+        vld1.32         {d5}, [r3], r1
+        vtrn.8          d10, d11                @ P1[2], P1[3], P3[2]... P2[2], P2[3], P4[2]...
+        vld1.32         {d6}, [r3], r1
+        vld1.32         {d12}, [r3], r1
+        vtrn.8          d16, d4                 @ P1[4], P1[5], P3[4]... P2[4], P2[5], P4[4]...
+        vld1.32         {d13}, [r3], r1
+        vtrn.16         d2, d10                 @ P1[0], P1[1], P1[2], P1[3], P5[0]... P3[0], P3[1], P3[2], P3[3], P7[0]...
+        vld1.32         {d1}, [r3], r1
+        vtrn.8          d8, d14                 @ P1[6], P1[7], P3[6]... P2[6], P2[7], P4[6]...
+        vld1.32         {d7}, [r3], r1
+        vtrn.16         d3, d11                 @ P2[0], P2[1], P2[2], P2[3], P6[0]... P4[0], P4[1], P4[2], P4[3], P8[0]...
+        vld1.32         {d9}, [r3], r1
+        vtrn.8          d5, d6                  @ P1[8], P1[9], P3[8]... P2[8], P2[9], P4[8]...
+        vld1.32         {d15}, [r3]
+        vtrn.16         d16, d8                 @ P1[4], P1[5], P1[6], P1[7], P5[4]... P3[4], P3[5], P3[6], P3[7], P7[4]...
+        vtrn.16         d4, d14                 @ P2[4], P2[5], P2[6], P2[7], P6[4]... P4[4], P4[5], P4[6], P4[7], P8[4]...
+        vtrn.8          d12, d13                @ P1[10], P1[11], P3[10]... P2[10], P2[11], P4[10]...
+        vdup.16         q9, r2                  @ pq
+        vtrn.8          d1, d7                  @ P1[12], P1[13], P3[12]... P2[12], P2[13], P4[12]...
+        vtrn.32         d2, d16                 @ P1[0..7], P5[0..7]
+        vtrn.16         d5, d12                 @ P1[8], P1[9], P1[10], P1[11], P5[8]... P3[8], P3[9], P3[10], P3[11], P7[8]...
+        vtrn.16         d6, d13                 @ P2[8], P2[9], P2[10], P2[11], P6[8]... P4[8], P4[9], P4[10], P4[11], P8[8]...
+        vtrn.8          d9, d15                 @ P1[14], P1[15], P3[14]... P2[14], P2[15], P4[14]...
+        vtrn.32         d3, d4                  @ P2[0..7], P6[0..7]
+        vshll.u8        q10, d2, #1             @ 2*P1[0..7]
+        vtrn.32         d10, d8                 @ P3[0..7], P7[0..7]
+        vshll.u8        q11, d16, #1            @ 2*P5[0..7]
+        vtrn.32         d11, d14                @ P4[0..7], P8[0..7]
+        vtrn.16         d1, d9                  @ P1[12], P1[13], P1[14], P1[15], P5[12]... P3[12], P3[13], P3[14], P3[15], P7[12]...
+        vtrn.16         d7, d15                 @ P2[12], P2[13], P2[14], P2[15], P6[12]... P4[12], P4[13], P4[14], P4[15], P8[12]...
+        vmovl.u8        q1, d3                  @ P2[0..7]
+        vmovl.u8        q12, d4                 @ P6[0..7]
+        vtrn.32         d5, d1                  @ P1[8..15], P5[8..15]
+        vtrn.32         d6, d7                  @ P2[8..15], P6[8..15]
+        vtrn.32         d12, d9                 @ P3[8..15], P7[8..15]
+        vtrn.32         d13, d15                @ P4[8..15], P8[8..15]
+        vmls.i16        q10, q1, d0[1]          @ 2*P1[0..7]-5*P2[0..7]
+        vmovl.u8        q1, d10                 @ P3[0..7]
+        vshll.u8        q2, d5, #1              @ 2*P1[8..15]
+        vshll.u8        q13, d1, #1             @ 2*P5[8..15]
+        vmls.i16        q11, q12, d0[1]         @ 2*P5[0..7]-5*P6[0..7]
+        vmovl.u8        q14, d6                 @ P2[8..15]
+        vmovl.u8        q3, d7                  @ P6[8..15]
+        vmovl.u8        q15, d8                 @ P7[0..7]
+        vmla.i16        q10, q1, d0[1]          @ 2*P1[0..7]-5*P2[0..7]+5*P3[0..7]
+        vmovl.u8        q1, d12                 @ P3[8..15]
+        vmls.i16        q2, q14, d0[1]          @ 2*P1[8..15]-5*P2[8..15]
+        vmovl.u8        q4, d9                  @ P7[8..15]
+        vshll.u8        q14, d10, #1            @ 2*P3[0..7]
+        vmls.i16        q13, q3, d0[1]          @ 2*P5[8..15]-5*P6[8..15]
+        vmovl.u8        q5, d11                 @ P4[0..7]
+        vmla.i16        q11, q15, d0[1]         @ 2*P5[0..7]-5*P6[0..7]+5*P7[0..7]
+        vshll.u8        q15, d12, #1            @ 2*P3[8..15]
+        vmovl.u8        q6, d13                 @ P4[8..15]
+        vmla.i16        q2, q1, d0[1]           @ 2*P1[8..15]-5*P2[8..15]+5*P3[8..15]
+        vmovl.u8        q1, d14                 @ P8[0..7]
+        vmovl.u8        q7, d15                 @ P8[8..15]
+        vmla.i16        q13, q4, d0[1]          @ 2*P5[8..15]-5*P6[8..15]+5*P7[8..15]
+        vmovl.u8        q4, d16                 @ P5[0..7]
+        vmovl.u8        q8, d1                  @ P5[8..15]
+        vmls.i16        q14, q5, d0[1]          @ 2*P3[0..7]-5*P4[0..7]
+        vmls.i16        q15, q6, d0[1]          @ 2*P3[8..15]-5*P4[8..15]
+        vmls.i16        q10, q5, d0[0]          @ 2*P1[0..7]-5*P2[0..7]+5*P3[0..7]-2*P4[0..7]
+        vmls.i16        q11, q1, d0[0]          @ 2*P5[0..7]-5*P6[0..7]+5*P7[0..7]-2*P8[0..7]
+        vsub.i16        q1, q5, q4              @ P4[0..7]-P5[0..7]
+        vmls.i16        q2, q6, d0[0]           @ 2*P1[8..15]-5*P2[8..15]+5*P3[8..15]-2*P4[8..15]
+        vrshr.s16       q10, q10, #3
+        vmls.i16        q13, q7, d0[0]          @ 2*P5[8..15]-5*P6[8..15]+5*P7[8..15]-2*P8[8..15]
+        vsub.i16        q7, q6, q8              @ P4[8..15]-P5[8..15]
+        vrshr.s16       q11, q11, #3
+        vmla.i16        q14, q4, d0[1]          @ 2*P3[0..7]-5*P4[0..7]+5*P5[0..7]
+        vrshr.s16       q2, q2, #3
+        vmla.i16        q15, q8, d0[1]          @ 2*P3[8..15]-5*P4[8..15]+5*P5[8..15]
+        vabs.s16        q10, q10                @ a1[0..7]
+        vrshr.s16       q13, q13, #3
+        vmls.i16        q15, q3, d0[0]          @ 2*P3[8..15]-5*P4[8..15]+5*P5[8..15]-2*P6[8..15]
+        vabs.s16        q3, q11                 @ a2[0..7]
+        vabs.s16        q2, q2                  @ a1[8..15]
+        vmls.i16        q14, q12, d0[0]         @ 2*P3[0..7]-5*P4[0..7]+5*P5[0..7]-2*P6[0..7]
+        vabs.s16        q11, q1
+        vabs.s16        q12, q13                @ a2[8..15]
+        vcge.s16        q13, q10, q3            @ test a1[0..7] >= a2[0..7]
+        vshr.s16        q1, q1, #8              @ clip_sign[0..7]
+        vrshr.s16       q15, q15, #3
+        vshr.s16        q11, q11, #1            @ clip[0..7]
+        vrshr.s16       q14, q14, #3
+        vbsl            q13, q3, q10            @ a3[0..7]
+        vcge.s16        q3, q2, q12             @ test a1[8..15] >= a2[8..15]
+        vabs.s16        q10, q15                @ a0[8..15]
+        vshr.s16        q15, q15, #8            @ a0_sign[8..15]
+        vbsl            q3, q12, q2             @ a3[8..15]
+        vabs.s16        q2, q14                 @ a0[0..7]
+        vabs.s16        q12, q7
+        vshr.s16        q7, q7, #8              @ clip_sign[8..15]
+        vshr.s16        q14, q14, #8            @ a0_sign[0..7]
+        vshr.s16        q12, q12, #1            @ clip[8..15]
+        vsub.i16        q7, q7, q15             @ clip_sign[8..15] - a0_sign[8..15]
+        vqsub.u16       q15, q10, q3            @ a0[8..15] >= a3[8..15] ? a0[8..15]-a3[8..15] : 0  (a0 > a3 in all cases where filtering is enabled, so it makes more sense to subtract this way round than the other way and then take the abs)
+        vcge.s16        q3, q3, q10             @ test a3[8..15] >= a0[8..15]
+        vcge.s16        q10, q10, q9            @ test a0[8..15] >= pq
+        vcge.s16        q9, q2, q9              @ test a0[0..7] >= pq
+        vsub.i16        q1, q1, q14             @ clip_sign[0..7] - a0_sign[0..7]
+        vqsub.u16       q14, q2, q13            @ a0[0..7] >= a3[0..7] ? a0[0..7]-a3[0..7] : 0  (a0 > a3 in all cases where filtering is enabled, so it makes more sense to subtract this way round than the other way and then take the abs)
+        vcge.s16        q2, q13, q2             @ test a3[0..7] >= a0[0..7]
+        vmul.i16        q13, q15, d0[1]         @ a0[8..15] >= a3[8..15] ? 5*(a0[8..15]-a3[8..15]) : 0
+        vceq.i16        q15, q11, #0            @ test clip[0..7] == 0
+        vmul.i16        q0, q14, d0[1]          @ a0[0..7] >= a3[0..7] ? 5*(a0[0..7]-a3[0..7]) : 0
+        vorr            q9, q15, q9             @ test clip[0..7] == 0 || a0[0..7] >= pq
+        vceq.i16        q14, q12, #0            @ test clip[8..15] == 0
+        vshr.u16        q13, q13, #3            @ a0[8..15] >= a3[8..15] ? (5*(a0[8..15]-a3[8..15]))>>3 : 0
+        vorr            q2, q9, q2              @ test clip[0..7] == 0 || a0[0..7] >= pq || a3[0..7] >= a0[0..7]
+        vshr.u16        q0, q0, #3              @ a0[0..7] >= a3[0..7] ? (5*(a0[0..7]-a3[0..7]))>>3 : 0
+        vorr            q10, q14, q10           @ test clip[8..15] == 0 || a0[8..15] >= pq
+        vcge.s16        q14, q13, q12
+        vmov.32         r2, d4[1]               @ move to gp reg
+        vorr            q3, q10, q3             @ test clip[8..15] == 0 || a0[8..15] >= pq || a3[8..15] >= a0[8..15]
+        vmov.32         r3, d5[1]
+        vcge.s16        q2, q0, q11
+        vbsl            q14, q12, q13           @ FFMIN(d[8..15], clip[8..15])
+        vbsl            q2, q11, q0             @ FFMIN(d[0..7], clip[0..7])
+        vmov.32         r5, d6[1]
+        vbic            q0, q14, q10            @ set each d[8..15] to zero if it should not be filtered because clip[8..15] == 0 || a0[8..15] >= pq (a3 > a0 case already zeroed by saturating sub)
+        vmov.32         r6, d7[1]
+        and             r12, r2, r3
+        vbic            q2, q2, q9              @ set each d[0..7] to zero if it should not be filtered because clip[0..7] == 0 || a0[0..7] >= pq (a3 > a0 case already zeroed by saturating sub)
+        vmls.i16        q6, q0, q7              @ invert d[8..15] depending on clip_sign[8..15] & a0_sign[8..15], or zero it if they match, and accumulate into P4
+        vmls.i16        q5, q2, q1              @ invert d[0..7] depending on clip_sign[0..7] & a0_sign[0..7], or zero it if they match, and accumulate into P4
+        and             r14, r5, r6
+        vmla.i16        q4, q2, q1              @ invert d[0..7] depending on clip_sign[0..7] & a0_sign[0..7], or zero it if they match, and accumulate into P5
+        and             r12, r12, r14
+        vqmovun.s16     d4, q6
+        vmla.i16        q8, q0, q7              @ invert d[8..15] depending on clip_sign[8..15] & a0_sign[8..15], or zero it if they match, and accumulate into P5
+        tst             r12, #1
+        bne             4f                      @ none of the 16 pixel pairs should be updated in this case
+        vqmovun.s16     d2, q5
+        vqmovun.s16     d3, q4
+        vqmovun.s16     d5, q8
+        tst             r2, #1
+        bne             1f
+        vst2.8          {d2[0], d3[0]}, [r0], r1
+        vst2.8          {d2[1], d3[1]}, [r0], r1
+        vst2.8          {d2[2], d3[2]}, [r0], r1
+        vst2.8          {d2[3], d3[3]}, [r0]
+1:      add             r0, r4, r1, lsl #2
+        tst             r3, #1
+        bne             2f
+        vst2.8          {d2[4], d3[4]}, [r4], r1
+        vst2.8          {d2[5], d3[5]}, [r4], r1
+        vst2.8          {d2[6], d3[6]}, [r4], r1
+        vst2.8          {d2[7], d3[7]}, [r4]
+2:      add             r4, r0, r1, lsl #2
+        tst             r5, #1
+        bne             3f
+        vst2.8          {d4[0], d5[0]}, [r0], r1
+        vst2.8          {d4[1], d5[1]}, [r0], r1
+        vst2.8          {d4[2], d5[2]}, [r0], r1
+        vst2.8          {d4[3], d5[3]}, [r0]
+3:      tst             r6, #1
+        bne             4f
+        vst2.8          {d4[4], d5[4]}, [r4], r1
+        vst2.8          {d4[5], d5[5]}, [r4], r1
+        vst2.8          {d4[6], d5[6]}, [r4], r1
+        vst2.8          {d4[7], d5[7]}, [r4]
+4:      vpop            {d8-d15}
+        pop             {r4-r6,pc}
+endfunc
-- 
2.25.1

^ permalink raw reply	[flat|nested] 55+ messages in thread

* [FFmpeg-devel] [PATCH 3/6] avcodec/vc1: Arm 64-bit NEON inverse transform fast paths
  2022-03-17 18:58 [FFmpeg-devel] [PATCH 0/6] avcodec/vc1: Arm optimisations Ben Avison
  2022-03-17 18:58 ` [FFmpeg-devel] [PATCH 1/6] avcodec/vc1: Arm 64-bit NEON deblocking filter fast paths Ben Avison
  2022-03-17 18:58 ` [FFmpeg-devel] [PATCH 2/6] avcodec/vc1: Arm 32-bit " Ben Avison
@ 2022-03-17 18:58 ` Ben Avison
  2022-03-17 18:58 ` [FFmpeg-devel] [PATCH 4/6] avcodec/idctdsp: Arm 64-bit NEON block add and clamp " Ben Avison
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 55+ messages in thread
From: Ben Avison @ 2022-03-17 18:58 UTC (permalink / raw)
  To: ffmpeg-devel; +Cc: Ben Avison

Signed-off-by: Ben Avison <bavison@riscosopen.org>
---
 libavcodec/aarch64/vc1dsp_init_aarch64.c |  19 +
 libavcodec/aarch64/vc1dsp_neon.S         | 678 +++++++++++++++++++++++
 2 files changed, 697 insertions(+)

diff --git a/libavcodec/aarch64/vc1dsp_init_aarch64.c b/libavcodec/aarch64/vc1dsp_init_aarch64.c
index edfb296b75..b672b2aa99 100644
--- a/libavcodec/aarch64/vc1dsp_init_aarch64.c
+++ b/libavcodec/aarch64/vc1dsp_init_aarch64.c
@@ -25,6 +25,16 @@
 
 #include "config.h"
 
+void ff_vc1_inv_trans_8x8_neon(int16_t *block);
+void ff_vc1_inv_trans_8x4_neon(uint8_t *dest, ptrdiff_t stride, int16_t *block);
+void ff_vc1_inv_trans_4x8_neon(uint8_t *dest, ptrdiff_t stride, int16_t *block);
+void ff_vc1_inv_trans_4x4_neon(uint8_t *dest, ptrdiff_t stride, int16_t *block);
+
+void ff_vc1_inv_trans_8x8_dc_neon(uint8_t *dest, ptrdiff_t stride, int16_t *block);
+void ff_vc1_inv_trans_8x4_dc_neon(uint8_t *dest, ptrdiff_t stride, int16_t *block);
+void ff_vc1_inv_trans_4x8_dc_neon(uint8_t *dest, ptrdiff_t stride, int16_t *block);
+void ff_vc1_inv_trans_4x4_dc_neon(uint8_t *dest, ptrdiff_t stride, int16_t *block);
+
 void ff_vc1_v_loop_filter4_neon(uint8_t *src, int stride, int pq);
 void ff_vc1_h_loop_filter4_neon(uint8_t *src, int stride, int pq);
 void ff_vc1_v_loop_filter8_neon(uint8_t *src, int stride, int pq);
@@ -46,6 +56,15 @@ av_cold void ff_vc1dsp_init_aarch64(VC1DSPContext *dsp)
     int cpu_flags = av_get_cpu_flags();
 
     if (have_neon(cpu_flags)) {
+        dsp->vc1_inv_trans_8x8 = ff_vc1_inv_trans_8x8_neon;
+        dsp->vc1_inv_trans_8x4 = ff_vc1_inv_trans_8x4_neon;
+        dsp->vc1_inv_trans_4x8 = ff_vc1_inv_trans_4x8_neon;
+        dsp->vc1_inv_trans_4x4 = ff_vc1_inv_trans_4x4_neon;
+        dsp->vc1_inv_trans_8x8_dc = ff_vc1_inv_trans_8x8_dc_neon;
+        dsp->vc1_inv_trans_8x4_dc = ff_vc1_inv_trans_8x4_dc_neon;
+        dsp->vc1_inv_trans_4x8_dc = ff_vc1_inv_trans_4x8_dc_neon;
+        dsp->vc1_inv_trans_4x4_dc = ff_vc1_inv_trans_4x4_dc_neon;
+
         dsp->vc1_v_loop_filter4  = ff_vc1_v_loop_filter4_neon;
         dsp->vc1_h_loop_filter4  = ff_vc1_h_loop_filter4_neon;
         dsp->vc1_v_loop_filter8  = ff_vc1_v_loop_filter8_neon;
diff --git a/libavcodec/aarch64/vc1dsp_neon.S b/libavcodec/aarch64/vc1dsp_neon.S
index fe8963545a..c3ca3eae1e 100644
--- a/libavcodec/aarch64/vc1dsp_neon.S
+++ b/libavcodec/aarch64/vc1dsp_neon.S
@@ -22,7 +22,685 @@
 
 #include "libavutil/aarch64/asm.S"
 
+// VC-1 8x8 inverse transform
+// On entry:
+//   x0 -> array of 16-bit inverse transform coefficients, in column-major order
+// On exit:
+//   array at x0 updated to hold the transformed block, now in row-major order
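+//
+// As a rough scalar sketch of one pass, pieced together from the comments
+// below (the code reuses the names t1..t4 for the odd half; they are written
+// u1..u4 here to keep them distinct). Judging by its uses, .Lcoeffs_it8
+// holds the halfword constants 3, 9, 15, 5, 11, 17 in v0.h[0,1,2,4,5,6]:
+//     t1 = 12*(src[0] + src[32])         t2 = 12*(src[0] - src[32])
+//     t3 = 16*src[16] + 6*src[48]        t4 =  6*src[16] - 16*src[48]
+//     t5 = t1 + t3    t6 = t2 + t4    t7 = t2 - t4    t8 = t1 - t3
+//     u1 = 16*src[8] + 15*src[24] +  9*src[40] +  4*src[56]
+//     u2 = 15*src[8] -  4*src[24] - 16*src[40] -  9*src[56]
+//     u3 =  9*src[8] - 16*src[24] +  4*src[40] + 15*src[56]
+//     u4 =  4*src[8] -  9*src[24] + 15*src[40] - 16*src[56]
+//     dst[0..3] = (t5 + u1 + r) >> s  ...  (t8 + u4 + r) >> s
+//     dst[4..7] = (t8 - u4 + r') >> s ...  (t5 - u1 + r') >> s
+// with r = r' = 4 and s = 3 on the first pass, and r = 64, r' = 65, s = 7 on
+// the second pass.
+//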
+function ff_vc1_inv_trans_8x8_neon, export=1
+        ld1     {v1.16b, v2.16b}, [x0], #32
+        ld1     {v3.16b, v4.16b}, [x0], #32
+        ld1     {v5.16b, v6.16b}, [x0], #32
+        shl     v1.8h, v1.8h, #2        //         8/2 * src[0]
+        sub     x1, x0, #3*32
+        ld1     {v16.16b, v17.16b}, [x0]
+        shl     v7.8h, v2.8h, #4        //          16 * src[8]
+        shl     v18.8h, v2.8h, #2       //           4 * src[8]
+        shl     v19.8h, v4.8h, #4       //                        16 * src[24]
+        ldr     d0, .Lcoeffs_it8
+        shl     v5.8h, v5.8h, #2        //                                      8/2 * src[32]
+        shl     v20.8h, v6.8h, #4       //                                       16 * src[40]
+        shl     v21.8h, v6.8h, #2       //                                        4 * src[40]
+        shl     v22.8h, v17.8h, #4      //                                                      16 * src[56]
+        ssra    v20.8h, v19.8h, #2      //                         4 * src[24] + 16 * src[40]
+        mul     v23.8h, v3.8h, v0.h[0]  //                       6/2 * src[16]
+        sub     v19.8h, v19.8h, v21.8h  //                        16 * src[24] -  4 * src[40]
+        ssra    v7.8h, v22.8h, #2       //          16 * src[8]                               +  4 * src[56]
+        sub     v18.8h, v22.8h, v18.8h  //        -  4 * src[8]                               + 16 * src[56]
+        shl     v3.8h, v3.8h, #3        //                      16/2 * src[16]
+        mls     v20.8h, v2.8h, v0.h[2]  //        - 15 * src[8] +  4 * src[24] + 16 * src[40]
+        ssra    v1.8h, v1.8h, #1        //        12/2 * src[0]
+        ssra    v5.8h, v5.8h, #1        //                                     12/2 * src[32]
+        mla     v7.8h, v4.8h, v0.h[2]   //          16 * src[8] + 15 * src[24]                +  4 * src[56]
+        shl     v21.8h, v16.8h, #3      //                                                    16/2 * src[48]
+        mls     v19.8h, v2.8h, v0.h[1]  //        -  9 * src[8] + 16 * src[24] -  4 * src[40]
+        sub     v2.8h, v23.8h, v21.8h   // t4/2 =                6/2 * src[16]              - 16/2 * src[48]
+        mla     v18.8h, v4.8h, v0.h[1]  //        -  4 * src[8] +  9 * src[24]                + 16 * src[56]
+        add     v4.8h, v1.8h, v5.8h     // t1/2 = 12/2 * src[0]              + 12/2 * src[32]
+        sub     v1.8h, v1.8h, v5.8h     // t2/2 = 12/2 * src[0]              - 12/2 * src[32]
+        mla     v3.8h, v16.8h, v0.h[0]  // t3/2 =               16/2 * src[16]              +  6/2 * src[48]
+        mla     v7.8h, v6.8h, v0.h[1]   //  t1  =   16 * src[8] + 15 * src[24] +  9 * src[40] +  4 * src[56]
+        add     v5.8h, v1.8h, v2.8h     // t6/2 = t2/2 + t4/2
+        sub     v16.8h, v1.8h, v2.8h    // t7/2 = t2/2 - t4/2
+        mla     v20.8h, v17.8h, v0.h[1] // -t2  = - 15 * src[8] +  4 * src[24] + 16 * src[40] +  9 * src[56]
+        add     v21.8h, v1.8h, v2.8h    // t6/2 = t2/2 + t4/2
+        add     v22.8h, v4.8h, v3.8h    // t5/2 = t1/2 + t3/2
+        mls     v19.8h, v17.8h, v0.h[2] // -t3  = -  9 * src[8] + 16 * src[24] -  4 * src[40] - 15 * src[56]
+        sub     v17.8h, v4.8h, v3.8h    // t8/2 = t1/2 - t3/2
+        add     v23.8h, v4.8h, v3.8h    // t5/2 = t1/2 + t3/2
+        mls     v18.8h, v6.8h, v0.h[2]  // -t4  = -  4 * src[8] +  9 * src[24] - 15 * src[40] + 16 * src[56]
+        sub     v1.8h, v1.8h, v2.8h     // t7/2 = t2/2 - t4/2
+        sub     v2.8h, v4.8h, v3.8h     // t8/2 = t1/2 - t3/2
+        neg     v3.8h, v7.8h            // -t1
+        neg     v4.8h, v20.8h           // +t2
+        neg     v6.8h, v19.8h           // +t3
+        ssra    v22.8h, v7.8h, #1       // (t5 + t1) >> 1
+        ssra    v1.8h, v19.8h, #1       // (t7 - t3) >> 1
+        neg     v7.8h, v18.8h           // +t4
+        ssra    v5.8h, v4.8h, #1        // (t6 + t2) >> 1
+        ssra    v16.8h, v6.8h, #1       // (t7 + t3) >> 1
+        ssra    v2.8h, v18.8h, #1       // (t8 - t4) >> 1
+        ssra    v17.8h, v7.8h, #1       // (t8 + t4) >> 1
+        ssra    v21.8h, v20.8h, #1      // (t6 - t2) >> 1
+        ssra    v23.8h, v3.8h, #1       // (t5 - t1) >> 1
+        srshr   v3.8h, v22.8h, #2       // (t5 + t1 + 4) >> 3
+        srshr   v4.8h, v5.8h, #2        // (t6 + t2 + 4) >> 3
+        srshr   v5.8h, v16.8h, #2       // (t7 + t3 + 4) >> 3
+        srshr   v6.8h, v17.8h, #2       // (t8 + t4 + 4) >> 3
+        srshr   v2.8h, v2.8h, #2        // (t8 - t4 + 4) >> 3
+        srshr   v1.8h, v1.8h, #2        // (t7 - t3 + 4) >> 3
+        srshr   v7.8h, v21.8h, #2       // (t6 - t2 + 4) >> 3
+        srshr   v16.8h, v23.8h, #2      // (t5 - t1 + 4) >> 3
+        trn2    v17.8h, v3.8h, v4.8h
+        trn2    v18.8h, v5.8h, v6.8h
+        trn2    v19.8h, v2.8h, v1.8h
+        trn2    v20.8h, v7.8h, v16.8h
+        trn1    v21.4s, v17.4s, v18.4s
+        trn2    v17.4s, v17.4s, v18.4s
+        trn1    v18.4s, v19.4s, v20.4s
+        trn2    v19.4s, v19.4s, v20.4s
+        trn1    v3.8h, v3.8h, v4.8h
+        trn2    v4.2d, v21.2d, v18.2d
+        trn1    v20.2d, v17.2d, v19.2d
+        trn1    v5.8h, v5.8h, v6.8h
+        trn1    v1.8h, v2.8h, v1.8h
+        trn1    v2.8h, v7.8h, v16.8h
+        trn1    v6.2d, v21.2d, v18.2d
+        trn2    v7.2d, v17.2d, v19.2d
+        shl     v16.8h, v20.8h, #4      //                        16 * src[24]
+        shl     v17.8h, v4.8h, #4       //                                       16 * src[40]
+        trn1    v18.4s, v3.4s, v5.4s
+        trn1    v19.4s, v1.4s, v2.4s
+        shl     v21.8h, v7.8h, #4       //                                                      16 * src[56]
+        shl     v22.8h, v6.8h, #2       //           4 * src[8]
+        shl     v23.8h, v4.8h, #2       //                                        4 * src[40]
+        trn2    v3.4s, v3.4s, v5.4s
+        trn2    v1.4s, v1.4s, v2.4s
+        shl     v2.8h, v6.8h, #4        //          16 * src[8]
+        sub     v5.8h, v16.8h, v23.8h   //                        16 * src[24] -  4 * src[40]
+        ssra    v17.8h, v16.8h, #2      //                         4 * src[24] + 16 * src[40]
+        sub     v16.8h, v21.8h, v22.8h  //        -  4 * src[8]                               + 16 * src[56]
+        trn1    v22.2d, v18.2d, v19.2d
+        trn2    v18.2d, v18.2d, v19.2d
+        trn1    v19.2d, v3.2d, v1.2d
+        ssra    v2.8h, v21.8h, #2       //          16 * src[8]                               +  4 * src[56]
+        mls     v17.8h, v6.8h, v0.h[2]  //        - 15 * src[8] +  4 * src[24] + 16 * src[40]
+        shl     v21.8h, v22.8h, #2      //         8/2 * src[0]
+        shl     v18.8h, v18.8h, #2      //                                      8/2 * src[32]
+        mls     v5.8h, v6.8h, v0.h[1]   //        -  9 * src[8] + 16 * src[24] -  4 * src[40]
+        shl     v6.8h, v19.8h, #3       //                      16/2 * src[16]
+        trn2    v1.2d, v3.2d, v1.2d
+        mla     v16.8h, v20.8h, v0.h[1] //        -  4 * src[8] +  9 * src[24]                + 16 * src[56]
+        ssra    v21.8h, v21.8h, #1      //        12/2 * src[0]
+        ssra    v18.8h, v18.8h, #1      //                                     12/2 * src[32]
+        mul     v3.8h, v19.8h, v0.h[0]  //                       6/2 * src[16]
+        shl     v19.8h, v1.8h, #3       //                                                    16/2 * src[48]
+        mla     v2.8h, v20.8h, v0.h[2]  //          16 * src[8] + 15 * src[24]                +  4 * src[56]
+        add     v20.8h, v21.8h, v18.8h  // t1/2 = 12/2 * src[0]              + 12/2 * src[32]
+        mla     v6.8h, v1.8h, v0.h[0]   // t3/2 =               16/2 * src[16]              +  6/2 * src[48]
+        sub     v1.8h, v21.8h, v18.8h   // t2/2 = 12/2 * src[0]              - 12/2 * src[32]
+        sub     v3.8h, v3.8h, v19.8h    // t4/2 =                6/2 * src[16]              - 16/2 * src[48]
+        mla     v17.8h, v7.8h, v0.h[1]  // -t2  = - 15 * src[8] +  4 * src[24] + 16 * src[40] +  9 * src[56]
+        mls     v5.8h, v7.8h, v0.h[2]   // -t3  = -  9 * src[8] + 16 * src[24] -  4 * src[40] - 15 * src[56]
+        add     v7.8h, v1.8h, v3.8h     // t6/2 = t2/2 + t4/2
+        add     v18.8h, v20.8h, v6.8h   // t5/2 = t1/2 + t3/2
+        mls     v16.8h, v4.8h, v0.h[2]  // -t4  = -  4 * src[8] +  9 * src[24] - 15 * src[40] + 16 * src[56]
+        sub     v19.8h, v1.8h, v3.8h    // t7/2 = t2/2 - t4/2
+        neg     v21.8h, v17.8h          // +t2
+        mla     v2.8h, v4.8h, v0.h[1]   //  t1  =   16 * src[8] + 15 * src[24] +  9 * src[40] +  4 * src[56]
+        sub     v0.8h, v20.8h, v6.8h    // t8/2 = t1/2 - t3/2
+        neg     v4.8h, v5.8h            // +t3
+        sub     v22.8h, v1.8h, v3.8h    // t7/2 = t2/2 - t4/2
+        sub     v23.8h, v20.8h, v6.8h   // t8/2 = t1/2 - t3/2
+        neg     v24.8h, v16.8h          // +t4
+        add     v6.8h, v20.8h, v6.8h    // t5/2 = t1/2 + t3/2
+        add     v1.8h, v1.8h, v3.8h     // t6/2 = t2/2 + t4/2
+        ssra    v7.8h, v21.8h, #1       // (t6 + t2) >> 1
+        neg     v3.8h, v2.8h            // -t1
+        ssra    v18.8h, v2.8h, #1       // (t5 + t1) >> 1
+        ssra    v19.8h, v4.8h, #1       // (t7 + t3) >> 1
+        ssra    v0.8h, v24.8h, #1       // (t8 + t4) >> 1
+        srsra   v23.8h, v16.8h, #1      // (t8 - t4 + 1) >> 1
+        srsra   v22.8h, v5.8h, #1       // (t7 - t3 + 1) >> 1
+        srsra   v1.8h, v17.8h, #1       // (t6 - t2 + 1) >> 1
+        srsra   v6.8h, v3.8h, #1        // (t5 - t1 + 1) >> 1
+        srshr   v2.8h, v18.8h, #6       // (t5 + t1 + 64) >> 7
+        srshr   v3.8h, v7.8h, #6        // (t6 + t2 + 64) >> 7
+        srshr   v4.8h, v19.8h, #6       // (t7 + t3 + 64) >> 7
+        srshr   v5.8h, v0.8h, #6        // (t8 + t4 + 64) >> 7
+        srshr   v16.8h, v23.8h, #6      // (t8 - t4 + 65) >> 7
+        srshr   v17.8h, v22.8h, #6      // (t7 - t3 + 65) >> 7
+        st1     {v2.16b, v3.16b}, [x1], #32
+        srshr   v0.8h, v1.8h, #6        // (t6 - t2 + 65) >> 7
+        srshr   v1.8h, v6.8h, #6        // (t5 - t1 + 65) >> 7
+        st1     {v4.16b, v5.16b}, [x1], #32
+        st1     {v16.16b, v17.16b}, [x1], #32
+        st1     {v0.16b, v1.16b}, [x1]
+        ret
+endfunc
+
+// VC-1 8x4 inverse transform
+// On entry:
+//   x0 -> array of 8-bit samples, in row-major order
+//   x1 = row stride for 8-bit sample array
+//   x2 -> array of 16-bit inverse transform coefficients, in row-major order
+// On exit:
+//   array at x0 updated by saturated addition of (narrowed) transformed block
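+//
+// The 4-point transform used on the short axis here (and by the 4x8 and 4x4
+// functions) is, as a rough scalar sketch pieced together from the comments
+// below:
+//     t1 = 17*(src[0] + src[2])          t2 = 17*(src[0] - src[2])
+//     t3 = 22*src[1] + 10*src[3]         t4 = 22*src[3] - 10*src[1]
+//     dst[0] = (t1 + t3 + r) >> s        dst[1] = (t2 - t4 + r) >> s
+//     dst[2] = (t2 + t4 + r) >> s        dst[3] = (t1 - t3 + r) >> s
+// with r = 4, s = 3 when used as a first pass, and r = 64, s = 7 as the
+// final pass, whose results are then added to the existing 8-bit samples.
+//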
+function ff_vc1_inv_trans_8x4_neon, export=1
+        ld1     {v1.8b, v2.8b, v3.8b, v4.8b}, [x2], #32
+        mov     x3, x0
+        ld1     {v16.8b, v17.8b, v18.8b, v19.8b}, [x2]
+        ldr     q0, .Lcoeffs_it8        // includes 4-point coefficients in upper half of vector
+        ld1     {v5.8b}, [x0], x1
+        trn2    v6.4h, v1.4h, v3.4h
+        trn2    v7.4h, v2.4h, v4.4h
+        trn1    v1.4h, v1.4h, v3.4h
+        trn1    v2.4h, v2.4h, v4.4h
+        trn2    v3.4h, v16.4h, v18.4h
+        trn2    v4.4h, v17.4h, v19.4h
+        trn1    v16.4h, v16.4h, v18.4h
+        trn1    v17.4h, v17.4h, v19.4h
+        ld1     {v18.8b}, [x0], x1
+        trn1    v19.2s, v6.2s, v3.2s
+        trn2    v3.2s, v6.2s, v3.2s
+        trn1    v6.2s, v7.2s, v4.2s
+        trn2    v4.2s, v7.2s, v4.2s
+        trn1    v7.2s, v1.2s, v16.2s
+        trn1    v20.2s, v2.2s, v17.2s
+        shl     v21.4h, v19.4h, #4      //          16 * src[1]
+        trn2    v1.2s, v1.2s, v16.2s
+        shl     v16.4h, v3.4h, #4       //                        16 * src[3]
+        trn2    v2.2s, v2.2s, v17.2s
+        shl     v17.4h, v6.4h, #4       //                                      16 * src[5]
+        ld1     {v22.8b}, [x0], x1
+        shl     v23.4h, v4.4h, #4       //                                                    16 * src[7]
+        mul     v24.4h, v1.4h, v0.h[0]  //                       6/2 * src[2]
+        ld1     {v25.8b}, [x0]
+        shl     v26.4h, v19.4h, #2      //           4 * src[1]
+        shl     v27.4h, v6.4h, #2       //                                       4 * src[5]
+        ssra    v21.4h, v23.4h, #2      //          16 * src[1]                             +  4 * src[7]
+        ssra    v17.4h, v16.4h, #2      //                         4 * src[3] + 16 * src[5]
+        sub     v23.4h, v23.4h, v26.4h  //        -  4 * src[1]                             + 16 * src[7]
+        sub     v16.4h, v16.4h, v27.4h  //                        16 * src[3] -  4 * src[5]
+        shl     v7.4h, v7.4h, #2        //         8/2 * src[0]
+        shl     v20.4h, v20.4h, #2      //                                     8/2 * src[4]
+        mla     v21.4h, v3.4h, v0.h[2]  //          16 * src[1] + 15 * src[3]               +  4 * src[7]
+        shl     v1.4h, v1.4h, #3        //                      16/2 * src[2]
+        mls     v17.4h, v19.4h, v0.h[2] //        - 15 * src[1] +  4 * src[3] + 16 * src[5]
+        ssra    v7.4h, v7.4h, #1        //        12/2 * src[0]
+        mls     v16.4h, v19.4h, v0.h[1] //        -  9 * src[1] + 16 * src[3] -  4 * src[5]
+        ssra    v20.4h, v20.4h, #1      //                                    12/2 * src[4]
+        mla     v23.4h, v3.4h, v0.h[1]  //        -  4 * src[1] +  9 * src[3]               + 16 * src[7]
+        shl     v3.4h, v2.4h, #3        //                                                  16/2 * src[6]
+        mla     v1.4h, v2.4h, v0.h[0]   // t3/2 =               16/2 * src[2]             +  6/2 * src[6]
+        mla     v21.4h, v6.4h, v0.h[1]  //  t1  =   16 * src[1] + 15 * src[3] +  9 * src[5] +  4 * src[7]
+        mla     v17.4h, v4.4h, v0.h[1]  // -t2  = - 15 * src[1] +  4 * src[3] + 16 * src[5] +  9 * src[7]
+        sub     v2.4h, v24.4h, v3.4h    // t4/2 =                6/2 * src[2]             - 16/2 * src[6]
+        mls     v16.4h, v4.4h, v0.h[2]  // -t3  = -  9 * src[1] + 16 * src[3] -  4 * src[5] - 15 * src[7]
+        add     v3.4h, v7.4h, v20.4h    // t1/2 = 12/2 * src[0]             + 12/2 * src[4]
+        mls     v23.4h, v6.4h, v0.h[2]  // -t4  = -  4 * src[1] +  9 * src[3] - 15 * src[5] + 16 * src[7]
+        sub     v4.4h, v7.4h, v20.4h    // t2/2 = 12/2 * src[0]             - 12/2 * src[4]
+        neg     v6.4h, v21.4h           // -t1
+        add     v7.4h, v3.4h, v1.4h     // t5/2 = t1/2 + t3/2
+        sub     v19.4h, v3.4h, v1.4h    // t8/2 = t1/2 - t3/2
+        add     v20.4h, v4.4h, v2.4h    // t6/2 = t2/2 + t4/2
+        sub     v24.4h, v4.4h, v2.4h    // t7/2 = t2/2 - t4/2
+        add     v26.4h, v3.4h, v1.4h    // t5/2 = t1/2 + t3/2
+        add     v27.4h, v4.4h, v2.4h    // t6/2 = t2/2 + t4/2
+        sub     v2.4h, v4.4h, v2.4h     // t7/2 = t2/2 - t4/2
+        sub     v1.4h, v3.4h, v1.4h     // t8/2 = t1/2 - t3/2
+        neg     v3.4h, v17.4h           // +t2
+        neg     v4.4h, v16.4h           // +t3
+        neg     v28.4h, v23.4h          // +t4
+        ssra    v7.4h, v21.4h, #1       // (t5 + t1) >> 1
+        ssra    v1.4h, v23.4h, #1       // (t8 - t4) >> 1
+        ssra    v20.4h, v3.4h, #1       // (t6 + t2) >> 1
+        ssra    v24.4h, v4.4h, #1       // (t7 + t3) >> 1
+        ssra    v19.4h, v28.4h, #1      // (t8 + t4) >> 1
+        ssra    v2.4h, v16.4h, #1       // (t7 - t3) >> 1
+        ssra    v27.4h, v17.4h, #1      // (t6 - t2) >> 1
+        ssra    v26.4h, v6.4h, #1       // (t5 - t1) >> 1
+        trn1    v1.2d, v7.2d, v1.2d
+        trn1    v2.2d, v20.2d, v2.2d
+        trn1    v3.2d, v24.2d, v27.2d
+        trn1    v4.2d, v19.2d, v26.2d
+        srshr   v1.8h, v1.8h, #2        // (t5 + t1 + 4) >> 3, (t8 - t4 + 4) >> 3
+        srshr   v2.8h, v2.8h, #2        // (t6 + t2 + 4) >> 3, (t7 - t3 + 4) >> 3
+        srshr   v3.8h, v3.8h, #2        // (t7 + t3 + 4) >> 3, (t6 - t2 + 4) >> 3
+        srshr   v4.8h, v4.8h, #2        // (t8 + t4 + 4) >> 3, (t5 - t1 + 4) >> 3
+        trn2    v6.8h, v1.8h, v2.8h
+        trn1    v1.8h, v1.8h, v2.8h
+        trn2    v2.8h, v3.8h, v4.8h
+        trn1    v3.8h, v3.8h, v4.8h
+        trn2    v4.4s, v6.4s, v2.4s
+        trn1    v7.4s, v1.4s, v3.4s
+        trn2    v1.4s, v1.4s, v3.4s
+        mul     v3.8h, v4.8h, v0.h[5]   //                                                           22/2 * src[24]
+        trn1    v2.4s, v6.4s, v2.4s
+        mul     v4.8h, v4.8h, v0.h[4]   //                                                           10/2 * src[24]
+        mul     v6.8h, v7.8h, v0.h[6]   //            17 * src[0]
+        mul     v1.8h, v1.8h, v0.h[6]   //                                            17 * src[16]
+        mls     v3.8h, v2.8h, v0.h[4]   //  t4/2 =                - 10/2 * src[8]                  + 22/2 * src[24]
+        mla     v4.8h, v2.8h, v0.h[5]   //  t3/2 =                  22/2 * src[8]                  + 10/2 * src[24]
+        add     v0.8h, v6.8h, v1.8h     //   t1  =    17 * src[0]                 +   17 * src[16]
+        sub     v1.8h, v6.8h, v1.8h     //   t2  =    17 * src[0]                 -   17 * src[16]
+        neg     v2.8h, v3.8h            // -t4/2
+        neg     v6.8h, v4.8h            // -t3/2
+        ssra    v4.8h, v0.8h, #1        // (t1 + t3) >> 1
+        ssra    v2.8h, v1.8h, #1        // (t2 - t4) >> 1
+        ssra    v3.8h, v1.8h, #1        // (t2 + t4) >> 1
+        ssra    v6.8h, v0.8h, #1        // (t1 - t3) >> 1
+        srshr   v0.8h, v4.8h, #6        // (t1 + t3 + 64) >> 7
+        srshr   v1.8h, v2.8h, #6        // (t2 - t4 + 64) >> 7
+        srshr   v2.8h, v3.8h, #6        // (t2 + t4 + 64) >> 7
+        srshr   v3.8h, v6.8h, #6        // (t1 - t3 + 64) >> 7
+        uaddw   v0.8h, v0.8h, v5.8b
+        uaddw   v1.8h, v1.8h, v18.8b
+        uaddw   v2.8h, v2.8h, v22.8b
+        uaddw   v3.8h, v3.8h, v25.8b
+        sqxtun  v0.8b, v0.8h
+        sqxtun  v1.8b, v1.8h
+        sqxtun  v2.8b, v2.8h
+        sqxtun  v3.8b, v3.8h
+        st1     {v0.8b}, [x3], x1
+        st1     {v1.8b}, [x3], x1
+        st1     {v2.8b}, [x3], x1
+        st1     {v3.8b}, [x3]
+        ret
+endfunc
+
+// VC-1 4x8 inverse transform
+// On entry:
+//   x0 -> array of 8-bit samples, in row-major order
+//   x1 = row stride for 8-bit sample array
+//   x2 -> array of 16-bit inverse transform coefficients, in row-major order (row stride is 8 coefficients)
+// On exit:
+//   array at x0 updated by saturated addition of (narrowed) transformed block
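+// (This combines the 4-point transform sketched above ff_vc1_inv_trans_8x4_neon,
+// applied to the rows, with the 8-point transform sketched above
+// ff_vc1_inv_trans_8x8_neon, applied to the columns.)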
+function ff_vc1_inv_trans_4x8_neon, export=1
+        mov     x3, #16
+        ldr     q0, .Lcoeffs_it8        // includes 4-point coefficients in upper half of vector
+        mov     x4, x0
+        ld1     {v1.d}[0], [x2], x3     // 00 01 02 03
+        ld1     {v2.d}[0], [x2], x3     // 10 11 12 13
+        ld1     {v3.d}[0], [x2], x3     // 20 21 22 23
+        ld1     {v4.d}[0], [x2], x3     // 30 31 32 33
+        ld1     {v1.d}[1], [x2], x3     // 40 41 42 43
+        ld1     {v2.d}[1], [x2], x3     // 50 51 52 53
+        ld1     {v3.d}[1], [x2], x3     // 60 61 62 63
+        ld1     {v4.d}[1], [x2]         // 70 71 72 73
+        ld1     {v5.s}[0], [x0], x1
+        ld1     {v6.s}[0], [x0], x1
+        ld1     {v7.s}[0], [x0], x1
+        trn2    v16.8h, v1.8h, v2.8h    // 01 11 03 13 41 51 43 53
+        trn1    v1.8h, v1.8h, v2.8h     // 00 10 02 12 40 50 42 52
+        trn2    v2.8h, v3.8h, v4.8h     // 21 31 23 33 61 71 63 73
+        trn1    v3.8h, v3.8h, v4.8h     // 20 30 22 32 60 70 62 72
+        ld1     {v4.s}[0], [x0], x1
+        trn2    v17.4s, v16.4s, v2.4s   // 03 13 23 33 43 53 63 73
+        trn1    v18.4s, v1.4s, v3.4s    // 00 10 20 30 40 50 60 70
+        trn1    v2.4s, v16.4s, v2.4s    // 01 11 21 31 41 51 61 71
+        mul     v16.8h, v17.8h, v0.h[4] //                                                          10/2 * src[3]
+        ld1     {v5.s}[1], [x0], x1
+        mul     v17.8h, v17.8h, v0.h[5] //                                                          22/2 * src[3]
+        ld1     {v6.s}[1], [x0], x1
+        trn2    v1.4s, v1.4s, v3.4s     // 02 12 22 32 42 52 62 72
+        mul     v3.8h, v18.8h, v0.h[6]  //            17 * src[0]
+        ld1     {v7.s}[1], [x0], x1
+        mul     v1.8h, v1.8h, v0.h[6]   //                                            17 * src[2]
+        ld1     {v4.s}[1], [x0]
+        mla     v16.8h, v2.8h, v0.h[5]  //  t3/2 =                  22/2 * src[1]                 + 10/2 * src[3]
+        mls     v17.8h, v2.8h, v0.h[4]  //  t4/2 =                - 10/2 * src[1]                 + 22/2 * src[3]
+        add     v2.8h, v3.8h, v1.8h     //   t1  =    17 * src[0]                 +   17 * src[2]
+        sub     v1.8h, v3.8h, v1.8h     //   t2  =    17 * src[0]                 -   17 * src[2]
+        neg     v3.8h, v16.8h           // -t3/2
+        ssra    v16.8h, v2.8h, #1       // (t1 + t3) >> 1
+        neg     v18.8h, v17.8h          // -t4/2
+        ssra    v17.8h, v1.8h, #1       // (t2 + t4) >> 1
+        ssra    v3.8h, v2.8h, #1        // (t1 - t3) >> 1
+        ssra    v18.8h, v1.8h, #1       // (t2 - t4) >> 1
+        srshr   v1.8h, v16.8h, #2       // (t1 + t3 + 4) >> 3
+        srshr   v2.8h, v17.8h, #2       // (t2 + t4 + 4) >> 3
+        srshr   v3.8h, v3.8h, #2        // (t1 - t3 + 4) >> 3
+        srshr   v16.8h, v18.8h, #2      // (t2 - t4 + 4) >> 3
+        trn2    v17.8h, v2.8h, v3.8h    // 12 13 32 33 52 53 72 73
+        trn2    v18.8h, v1.8h, v16.8h   // 10 11 30 31 50 51 70 71
+        trn1    v1.8h, v1.8h, v16.8h    // 00 01 20 21 40 41 60 61
+        trn1    v2.8h, v2.8h, v3.8h     // 02 03 22 23 42 43 62 63
+        trn1    v3.4s, v18.4s, v17.4s   // 10 11 12 13 50 51 52 53
+        trn2    v16.4s, v18.4s, v17.4s  // 30 31 32 33 70 71 72 73
+        trn1    v17.4s, v1.4s, v2.4s    // 00 01 02 03 40 41 42 43
+        mov     d18, v3.d[1]            // 50 51 52 53
+        shl     v19.4h, v3.4h, #4       //          16 * src[8]
+        mov     d20, v16.d[1]           // 70 71 72 73
+        shl     v21.4h, v16.4h, #4      //                        16 * src[24]
+        mov     d22, v17.d[1]           // 40 41 42 43
+        shl     v23.4h, v3.4h, #2       //           4 * src[8]
+        shl     v24.4h, v18.4h, #4      //                                       16 * src[40]
+        shl     v25.4h, v20.4h, #4      //                                                      16 * src[56]
+        shl     v26.4h, v18.4h, #2      //                                        4 * src[40]
+        trn2    v1.4s, v1.4s, v2.4s     // 20 21 22 23 60 61 62 63
+        ssra    v24.4h, v21.4h, #2      //                         4 * src[24] + 16 * src[40]
+        sub     v2.4h, v25.4h, v23.4h   //        -  4 * src[8]                               + 16 * src[56]
+        shl     v17.4h, v17.4h, #2      //         8/2 * src[0]
+        sub     v21.4h, v21.4h, v26.4h  //                        16 * src[24] -  4 * src[40]
+        shl     v22.4h, v22.4h, #2      //                                      8/2 * src[32]
+        mov     d23, v1.d[1]            // 60 61 62 63
+        ssra    v19.4h, v25.4h, #2      //          16 * src[8]                               +  4 * src[56]
+        mul     v25.4h, v1.4h, v0.h[0]  //                       6/2 * src[16]
+        shl     v1.4h, v1.4h, #3        //                      16/2 * src[16]
+        mls     v24.4h, v3.4h, v0.h[2]  //        - 15 * src[8] +  4 * src[24] + 16 * src[40]
+        ssra    v17.4h, v17.4h, #1      //        12/2 * src[0]
+        mls     v21.4h, v3.4h, v0.h[1]  //        -  9 * src[8] + 16 * src[24] -  4 * src[40]
+        ssra    v22.4h, v22.4h, #1      //                                     12/2 * src[32]
+        mla     v2.4h, v16.4h, v0.h[1]  //        -  4 * src[8] +  9 * src[24]                + 16 * src[56]
+        shl     v3.4h, v23.4h, #3       //                                                    16/2 * src[48]
+        mla     v19.4h, v16.4h, v0.h[2] //          16 * src[8] + 15 * src[24]                +  4 * src[56]
+        mla     v1.4h, v23.4h, v0.h[0]  // t3/2 =               16/2 * src[16]              +  6/2 * src[48]
+        mla     v24.4h, v20.4h, v0.h[1] // -t2  = - 15 * src[8] +  4 * src[24] + 16 * src[40] +  9 * src[56]
+        add     v16.4h, v17.4h, v22.4h  // t1/2 = 12/2 * src[0]              + 12/2 * src[32]
+        sub     v3.4h, v25.4h, v3.4h    // t4/2 =                6/2 * src[16]              - 16/2 * src[48]
+        sub     v17.4h, v17.4h, v22.4h  // t2/2 = 12/2 * src[0]              - 12/2 * src[32]
+        mls     v21.4h, v20.4h, v0.h[2] // -t3  = -  9 * src[8] + 16 * src[24] -  4 * src[40] - 15 * src[56]
+        mla     v19.4h, v18.4h, v0.h[1] //  t1  =   16 * src[8] + 15 * src[24] +  9 * src[40] +  4 * src[56]
+        add     v20.4h, v16.4h, v1.4h   // t5/2 = t1/2 + t3/2
+        mls     v2.4h, v18.4h, v0.h[2]  // -t4  = -  4 * src[8] +  9 * src[24] - 15 * src[40] + 16 * src[56]
+        sub     v0.4h, v16.4h, v1.4h    // t8/2 = t1/2 - t3/2
+        add     v18.4h, v17.4h, v3.4h   // t6/2 = t2/2 + t4/2
+        sub     v22.4h, v17.4h, v3.4h   // t7/2 = t2/2 - t4/2
+        neg     v23.4h, v24.4h          // +t2
+        sub     v25.4h, v17.4h, v3.4h   // t7/2 = t2/2 - t4/2
+        add     v3.4h, v17.4h, v3.4h    // t6/2 = t2/2 + t4/2
+        neg     v17.4h, v21.4h          // +t3
+        sub     v26.4h, v16.4h, v1.4h   // t8/2 = t1/2 - t3/2
+        add     v1.4h, v16.4h, v1.4h    // t5/2 = t1/2 + t3/2
+        neg     v16.4h, v19.4h          // -t1
+        neg     v27.4h, v2.4h           // +t4
+        ssra    v20.4h, v19.4h, #1      // (t5 + t1) >> 1
+        srsra   v0.4h, v2.4h, #1        // (t8 - t4 + 1) >> 1
+        ssra    v18.4h, v23.4h, #1      // (t6 + t2) >> 1
+        srsra   v22.4h, v21.4h, #1      // (t7 - t3 + 1) >> 1
+        ssra    v25.4h, v17.4h, #1      // (t7 + t3) >> 1
+        srsra   v3.4h, v24.4h, #1       // (t6 - t2 + 1) >> 1
+        ssra    v26.4h, v27.4h, #1      // (t8 + t4) >> 1
+        srsra   v1.4h, v16.4h, #1       // (t5 - t1 + 1) >> 1
+        trn1    v0.2d, v20.2d, v0.2d
+        trn1    v2.2d, v18.2d, v22.2d
+        trn1    v3.2d, v25.2d, v3.2d
+        trn1    v1.2d, v26.2d, v1.2d
+        srshr   v0.8h, v0.8h, #6        // (t5 + t1 + 64) >> 7, (t8 - t4 + 65) >> 7
+        srshr   v2.8h, v2.8h, #6        // (t6 + t2 + 64) >> 7, (t7 - t3 + 65) >> 7
+        srshr   v3.8h, v3.8h, #6        // (t7 + t3 + 64) >> 7, (t6 - t2 + 65) >> 7
+        srshr   v1.8h, v1.8h, #6        // (t8 + t4 + 64) >> 7, (t5 - t1 + 65) >> 7
+        uaddw   v0.8h, v0.8h, v5.8b
+        uaddw   v2.8h, v2.8h, v6.8b
+        uaddw   v3.8h, v3.8h, v7.8b
+        uaddw   v1.8h, v1.8h, v4.8b
+        sqxtun  v0.8b, v0.8h
+        sqxtun  v2.8b, v2.8h
+        sqxtun  v3.8b, v3.8h
+        sqxtun  v1.8b, v1.8h
+        st1     {v0.s}[0], [x4], x1
+        st1     {v2.s}[0], [x4], x1
+        st1     {v3.s}[0], [x4], x1
+        st1     {v1.s}[0], [x4], x1
+        st1     {v0.s}[1], [x4], x1
+        st1     {v2.s}[1], [x4], x1
+        st1     {v3.s}[1], [x4], x1
+        st1     {v1.s}[1], [x4]
+        ret
+endfunc
+
+// VC-1 4x4 inverse transform
+// On entry:
+//   x0 -> array of 8-bit samples, in row-major order
+//   x1 = row stride for 8-bit sample array
+//   x2 -> array of 16-bit inverse transform coefficients, in row-major order (row stride is 8 coefficients)
+// On exit:
+//   array at x0 updated by saturated addition of (narrowed) transformed block
+function ff_vc1_inv_trans_4x4_neon, export=1
+        mov     x3, #16
+        ldr     d0, .Lcoeffs_it4
+        mov     x4, x0
+        ld1     {v1.d}[0], [x2], x3     // 00 01 02 03
+        ld1     {v2.d}[0], [x2], x3     // 10 11 12 13
+        ld1     {v3.d}[0], [x2], x3     // 20 21 22 23
+        ld1     {v4.d}[0], [x2]         // 30 31 32 33
+        ld1     {v5.s}[0], [x0], x1
+        ld1     {v5.s}[1], [x0], x1
+        ld1     {v6.s}[0], [x0], x1
+        trn2    v7.4h, v1.4h, v2.4h     // 01 11 03 13
+        trn1    v1.4h, v1.4h, v2.4h     // 00 10 02 12
+        ld1     {v6.s}[1], [x0]
+        trn2    v2.4h, v3.4h, v4.4h     // 21 31 23 33
+        trn1    v3.4h, v3.4h, v4.4h     // 20 30 22 32
+        trn2    v4.2s, v7.2s, v2.2s     // 03 13 23 33
+        trn1    v16.2s, v1.2s, v3.2s    // 00 10 20 30
+        trn1    v2.2s, v7.2s, v2.2s     // 01 11 21 31
+        trn2    v1.2s, v1.2s, v3.2s     // 02 12 22 32
+        mul     v3.4h, v4.4h, v0.h[0]   //                                                          10/2 * src[3]
+        mul     v4.4h, v4.4h, v0.h[1]   //                                                          22/2 * src[3]
+        mul     v7.4h, v16.4h, v0.h[2]  //            17 * src[0]
+        mul     v1.4h, v1.4h, v0.h[2]   //                                            17 * src[2]
+        mla     v3.4h, v2.4h, v0.h[1]   //  t3/2 =                  22/2 * src[1]                 + 10/2 * src[3]
+        mls     v4.4h, v2.4h, v0.h[0]   //  t4/2 =                - 10/2 * src[1]                 + 22/2 * src[3]
+        add     v2.4h, v7.4h, v1.4h     //   t1  =    17 * src[0]                 +   17 * src[2]
+        sub     v1.4h, v7.4h, v1.4h     //   t2  =    17 * src[0]                 -   17 * src[2]
+        neg     v7.4h, v3.4h            // -t3/2
+        neg     v16.4h, v4.4h           // -t4/2
+        ssra    v3.4h, v2.4h, #1        // (t1 + t3) >> 1
+        ssra    v4.4h, v1.4h, #1        // (t2 + t4) >> 1
+        ssra    v16.4h, v1.4h, #1       // (t2 - t4) >> 1
+        ssra    v7.4h, v2.4h, #1        // (t1 - t3) >> 1
+        srshr   v1.4h, v3.4h, #2        // (t1 + t3 + 4) >> 3
+        srshr   v2.4h, v4.4h, #2        // (t2 + t4 + 4) >> 3
+        srshr   v3.4h, v16.4h, #2       // (t2 - t4 + 4) >> 3
+        srshr   v4.4h, v7.4h, #2        // (t1 - t3 + 4) >> 3
+        trn2    v7.4h, v1.4h, v3.4h     // 10 11 30 31
+        trn1    v1.4h, v1.4h, v3.4h     // 00 01 20 21
+        trn2    v3.4h, v2.4h, v4.4h     // 12 13 32 33
+        trn1    v2.4h, v2.4h, v4.4h     // 02 03 22 23
+        trn2    v4.2s, v7.2s, v3.2s     // 30 31 32 33
+        trn1    v16.2s, v1.2s, v2.2s    // 00 01 02 03
+        trn1    v3.2s, v7.2s, v3.2s     // 10 11 12 13
+        trn2    v1.2s, v1.2s, v2.2s     // 20 21 22 23
+        mul     v2.4h, v4.4h, v0.h[1]   //                                                           22/2 * src[24]
+        mul     v4.4h, v4.4h, v0.h[0]   //                                                           10/2 * src[24]
+        mul     v7.4h, v16.4h, v0.h[2]  //            17 * src[0]
+        mul     v1.4h, v1.4h, v0.h[2]   //                                            17 * src[16]
+        mls     v2.4h, v3.4h, v0.h[0]   //  t4/2 =                - 10/2 * src[8]                  + 22/2 * src[24]
+        mla     v4.4h, v3.4h, v0.h[1]   //  t3/2 =                  22/2 * src[8]                  + 10/2 * src[24]
+        add     v0.4h, v7.4h, v1.4h     //   t1  =    17 * src[0]                 +   17 * src[16]
+        sub     v1.4h, v7.4h, v1.4h     //   t2  =    17 * src[0]                 -   17 * src[16]
+        neg     v3.4h, v2.4h            // -t4/2
+        neg     v7.4h, v4.4h            // -t3/2
+        ssra    v4.4h, v0.4h, #1        // (t1 + t3) >> 1
+        ssra    v3.4h, v1.4h, #1        // (t2 - t4) >> 1
+        ssra    v2.4h, v1.4h, #1        // (t2 + t4) >> 1
+        ssra    v7.4h, v0.4h, #1        // (t1 - t3) >> 1
+        trn1    v0.2d, v4.2d, v3.2d
+        trn1    v1.2d, v2.2d, v7.2d
+        srshr   v0.8h, v0.8h, #6        // (t1 + t3 + 64) >> 7, (t2 - t4 + 64) >> 7
+        srshr   v1.8h, v1.8h, #6        // (t2 + t4 + 64) >> 7, (t1 - t3 + 64) >> 7
+        uaddw   v0.8h, v0.8h, v5.8b
+        uaddw   v1.8h, v1.8h, v6.8b
+        sqxtun  v0.8b, v0.8h
+        sqxtun  v1.8b, v1.8h
+        st1     {v0.s}[0], [x4], x1
+        st1     {v0.s}[1], [x4], x1
+        st1     {v1.s}[0], [x4], x1
+        st1     {v1.s}[1], [x4]
+        ret
+endfunc
+
+// VC-1 8x8 inverse transform, DC case
+// On entry:
+//   x0 -> array of 8-bit samples, in row-major order
+//   x1 = row stride for 8-bit sample array
+//   x2 -> 16-bit inverse transform DC coefficient
+// On exit:
+//   array at x0 updated by saturated addition of (narrowed) transformed block
+function ff_vc1_inv_trans_8x8_dc_neon, export=1
+        ldrsh   w2, [x2]
+        mov     x3, x0
+        ld1     {v0.8b}, [x0], x1
+        ld1     {v1.8b}, [x0], x1
+        ld1     {v2.8b}, [x0], x1
+        add     w2, w2, w2, lsl #1      // 3 * dc
+        ld1     {v3.8b}, [x0], x1
+        ld1     {v4.8b}, [x0], x1
+        add     w2, w2, #1              // 3 * dc + 1
+        ld1     {v5.8b}, [x0], x1
+        asr     w2, w2, #1              // dc = (3 * dc + 1) >> 1
+        ld1     {v6.8b}, [x0], x1
+        add     w2, w2, w2, lsl #1      // 3 * dc
+        ld1     {v7.8b}, [x0]
+        add     w0, w2, #16             // 3 * dc + 16
+        asr     w0, w0, #5              // dc = (3 * dc + 16) >> 5
+        dup     v16.8h, w0
+        uaddw   v0.8h, v16.8h, v0.8b
+        uaddw   v1.8h, v16.8h, v1.8b
+        uaddw   v2.8h, v16.8h, v2.8b
+        uaddw   v3.8h, v16.8h, v3.8b
+        uaddw   v4.8h, v16.8h, v4.8b
+        uaddw   v5.8h, v16.8h, v5.8b
+        sqxtun  v0.8b, v0.8h
+        uaddw   v6.8h, v16.8h, v6.8b
+        sqxtun  v1.8b, v1.8h
+        uaddw   v7.8h, v16.8h, v7.8b
+        sqxtun  v2.8b, v2.8h
+        sqxtun  v3.8b, v3.8h
+        sqxtun  v4.8b, v4.8h
+        st1     {v0.8b}, [x3], x1
+        sqxtun  v0.8b, v5.8h
+        st1     {v1.8b}, [x3], x1
+        sqxtun  v1.8b, v6.8h
+        st1     {v2.8b}, [x3], x1
+        sqxtun  v2.8b, v7.8h
+        st1     {v3.8b}, [x3], x1
+        st1     {v4.8b}, [x3], x1
+        st1     {v0.8b}, [x3], x1
+        st1     {v1.8b}, [x3], x1
+        st1     {v2.8b}, [x3]
+        ret
+endfunc
+
+// VC-1 8x4 inverse transform, DC case
+// On entry:
+//   x0 -> array of 8-bit samples, in row-major order
+//   x1 = row stride for 8-bit sample array
+//   x2 -> 16-bit inverse transform DC coefficient
+// On exit:
+//   array at x0 updated by saturated addition of (narrowed) transformed block
+function ff_vc1_inv_trans_8x4_dc_neon, export=1
+        ldrsh   w2, [x2]
+        mov     x3, x0
+        ld1     {v0.8b}, [x0], x1
+        ld1     {v1.8b}, [x0], x1
+        ld1     {v2.8b}, [x0], x1
+        add     w2, w2, w2, lsl #1      // 3 * dc
+        ld1     {v3.8b}, [x0]
+        add     w0, w2, #1              // 3 * dc + 1
+        asr     w0, w0, #1              // dc = (3 * dc + 1) >> 1
+        add     w0, w0, w0, lsl #4      // 17 * dc
+        add     w0, w0, #64             // 17 * dc + 64
+        asr     w0, w0, #7              // dc = (17 * dc + 64) >> 7
+        dup     v4.8h, w0
+        uaddw   v0.8h, v4.8h, v0.8b
+        uaddw   v1.8h, v4.8h, v1.8b
+        uaddw   v2.8h, v4.8h, v2.8b
+        uaddw   v3.8h, v4.8h, v3.8b
+        sqxtun  v0.8b, v0.8h
+        sqxtun  v1.8b, v1.8h
+        sqxtun  v2.8b, v2.8h
+        sqxtun  v3.8b, v3.8h
+        st1     {v0.8b}, [x3], x1
+        st1     {v1.8b}, [x3], x1
+        st1     {v2.8b}, [x3], x1
+        st1     {v3.8b}, [x3]
+        ret
+endfunc
+
+// VC-1 4x8 inverse transform, DC case
+// On entry:
+//   x0 -> array of 8-bit samples, in row-major order
+//   x1 = row stride for 8-bit sample array
+//   x2 -> 16-bit inverse transform DC coefficient
+// On exit:
+//   array at x0 updated by saturated addition of (narrowed) transformed block
+function ff_vc1_inv_trans_4x8_dc_neon, export=1
+        ldrsh   w2, [x2]
+        mov     x3, x0
+        ld1     {v0.s}[0], [x0], x1
+        ld1     {v1.s}[0], [x0], x1
+        ld1     {v2.s}[0], [x0], x1
+        add     w2, w2, w2, lsl #4      // 17 * dc
+        ld1     {v3.s}[0], [x0], x1
+        add     w2, w2, #4              // 17 * dc + 4
+        asr     w2, w2, #3              // dc = (17 * dc + 4) >> 3
+        add     w2, w2, w2, lsl #1      // 3 * dc
+        ld1     {v0.s}[1], [x0], x1
+        add     w2, w2, #16             // 3 * dc + 16
+        asr     w2, w2, #5              // dc = (3 * dc + 16) >> 5
+        dup     v4.8h, w2
+        ld1     {v1.s}[1], [x0], x1
+        ld1     {v2.s}[1], [x0], x1
+        ld1     {v3.s}[1], [x0]
+        uaddw   v0.8h, v4.8h, v0.8b
+        uaddw   v1.8h, v4.8h, v1.8b
+        uaddw   v2.8h, v4.8h, v2.8b
+        uaddw   v3.8h, v4.8h, v3.8b
+        sqxtun  v0.8b, v0.8h
+        sqxtun  v1.8b, v1.8h
+        sqxtun  v2.8b, v2.8h
+        sqxtun  v3.8b, v3.8h
+        st1     {v0.s}[0], [x3], x1
+        st1     {v1.s}[0], [x3], x1
+        st1     {v2.s}[0], [x3], x1
+        st1     {v3.s}[0], [x3], x1
+        st1     {v0.s}[1], [x3], x1
+        st1     {v1.s}[1], [x3], x1
+        st1     {v2.s}[1], [x3], x1
+        st1     {v3.s}[1], [x3]
+        ret
+endfunc
+
+// VC-1 4x4 inverse transform, DC case
+// On entry:
+//   x0 -> array of 8-bit samples, in row-major order
+//   x1 = row stride for 8-bit sample array
+//   x2 -> 16-bit inverse transform DC coefficient
+// On exit:
+//   array at x0 updated by saturated addition of (narrowed) transformed block
+function ff_vc1_inv_trans_4x4_dc_neon, export=1
+        ldrsh   w2, [x2]
+        mov     x3, x0
+        ld1     {v0.s}[0], [x0], x1
+        ld1     {v1.s}[0], [x0], x1
+        ld1     {v0.s}[1], [x0], x1
+        add     w2, w2, w2, lsl #4      // 17 * dc
+        ld1     {v1.s}[1], [x0]
+        add     w0, w2, #4              // 17 * dc + 4
+        asr     w0, w0, #3              // dc = (17 * dc + 4) >> 3
+        add     w0, w0, w0, lsl #4      // 17 * dc
+        add     w0, w0, #64             // 17 * dc + 64
+        asr     w0, w0, #7              // dc = (17 * dc + 64) >> 7
+        dup     v2.8h, w0
+        uaddw   v0.8h, v2.8h, v0.8b
+        uaddw   v1.8h, v2.8h, v1.8b
+        sqxtun  v0.8b, v0.8h
+        sqxtun  v1.8b, v1.8h
+        st1     {v0.s}[0], [x3], x1
+        st1     {v1.s}[0], [x3], x1
+        st1     {v0.s}[1], [x3], x1
+        st1     {v1.s}[1], [x3]
+        ret
+endfunc
+
 .align  5
+.Lcoeffs_it8:
+.quad   0x000F00090003
+.Lcoeffs_it4:
+.quad   0x0011000B0005
 .Lcoeffs:
 .quad   0x00050002
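
For reference, the DC-only functions above do all coefficient scaling on the
integer core and vectorise only the final add-and-clamp. A scalar sketch of
the 8x8 case (illustrative C, not part of the patch; the 8x4, 4x8 and 4x4
variants use the constants annotated on their shift/add sequences):

    int dc = block[0];
    dc = (3 * dc +  1) >> 1;    /* first-pass scaling  */
    dc = (3 * dc + 16) >> 5;    /* second-pass scaling */
    /* then every output pixel becomes av_clip_uint8(pixel + dc) */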
 
-- 
2.25.1

* [FFmpeg-devel] [PATCH 4/6] avcodec/idctdsp: Arm 64-bit NEON block add and clamp fast paths
  2022-03-17 18:58 [FFmpeg-devel] [PATCH 0/6] avcodec/vc1: Arm optimisations Ben Avison
                   ` (2 preceding siblings ...)
  2022-03-17 18:58 ` [FFmpeg-devel] [PATCH 3/6] avcodec/vc1: Arm 64-bit NEON inverse transform " Ben Avison
@ 2022-03-17 18:58 ` Ben Avison
  2022-03-17 18:58 ` [FFmpeg-devel] [PATCH 5/6] avcodec/blockdsp: Arm 64-bit NEON block clear " Ben Avison
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 55+ messages in thread
From: Ben Avison @ 2022-03-17 18:58 UTC (permalink / raw)
  To: ffmpeg-devel; +Cc: Ben Avison

Signed-off-by: Ben Avison <bavison@riscosopen.org>
---
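For reviewers' reference, a scalar model of the three entry points, matching
the existing C implementations in idctdsp.c (illustrative only, not part of
the patch; av_clip_uint8() is the clamp helper from libavutil/common.h):

    static void add_pixels_clamped(const int16_t *block, uint8_t *pixels,
                                   ptrdiff_t line_size)
    {
        /* Add an 8x8 block of 16-bit residuals to 8-bit pixels,
         * clamping each result to the range 0..255. */
        for (int i = 0; i < 8; i++) {
            for (int j = 0; j < 8; j++)
                pixels[j] = av_clip_uint8(pixels[j] + block[j]);
            block  += 8;
            pixels += line_size;
        }
    }

put_pixels_clamped is the same loop storing av_clip_uint8(block[j]) instead
of adding, and put_signed_pixels_clamped stores av_clip_uint8(block[j] + 128).
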
 libavcodec/aarch64/Makefile               |   3 +-
 libavcodec/aarch64/idctdsp_init_aarch64.c |  26 +++--
 libavcodec/aarch64/idctdsp_neon.S         | 130 ++++++++++++++++++++++
 3 files changed, 150 insertions(+), 9 deletions(-)
 create mode 100644 libavcodec/aarch64/idctdsp_neon.S

diff --git a/libavcodec/aarch64/Makefile b/libavcodec/aarch64/Makefile
index 5b25e4dfb9..c8935f205e 100644
--- a/libavcodec/aarch64/Makefile
+++ b/libavcodec/aarch64/Makefile
@@ -44,7 +44,8 @@ NEON-OBJS-$(CONFIG_H264PRED)            += aarch64/h264pred_neon.o
 NEON-OBJS-$(CONFIG_H264QPEL)            += aarch64/h264qpel_neon.o             \
                                            aarch64/hpeldsp_neon.o
 NEON-OBJS-$(CONFIG_HPELDSP)             += aarch64/hpeldsp_neon.o
-NEON-OBJS-$(CONFIG_IDCTDSP)             += aarch64/simple_idct_neon.o
+NEON-OBJS-$(CONFIG_IDCTDSP)             += aarch64/idctdsp_neon.o              \
+                                           aarch64/simple_idct_neon.o
 NEON-OBJS-$(CONFIG_MDCT)                += aarch64/mdct_neon.o
 NEON-OBJS-$(CONFIG_MPEGAUDIODSP)        += aarch64/mpegaudiodsp_neon.o
 NEON-OBJS-$(CONFIG_PIXBLOCKDSP)         += aarch64/pixblockdsp_neon.o
diff --git a/libavcodec/aarch64/idctdsp_init_aarch64.c b/libavcodec/aarch64/idctdsp_init_aarch64.c
index 742a3372e3..eec21aa5a2 100644
--- a/libavcodec/aarch64/idctdsp_init_aarch64.c
+++ b/libavcodec/aarch64/idctdsp_init_aarch64.c
@@ -27,19 +27,29 @@
 #include "libavcodec/idctdsp.h"
 #include "idct.h"
 
+void ff_put_pixels_clamped_neon(const int16_t *, uint8_t *, ptrdiff_t);
+void ff_put_signed_pixels_clamped_neon(const int16_t *, uint8_t *, ptrdiff_t);
+void ff_add_pixels_clamped_neon(const int16_t *, uint8_t *, ptrdiff_t);
+
 av_cold void ff_idctdsp_init_aarch64(IDCTDSPContext *c, AVCodecContext *avctx,
                                      unsigned high_bit_depth)
 {
     int cpu_flags = av_get_cpu_flags();
 
-    if (have_neon(cpu_flags) && !avctx->lowres && !high_bit_depth) {
-        if (avctx->idct_algo == FF_IDCT_AUTO ||
-            avctx->idct_algo == FF_IDCT_SIMPLEAUTO ||
-            avctx->idct_algo == FF_IDCT_SIMPLENEON) {
-            c->idct_put  = ff_simple_idct_put_neon;
-            c->idct_add  = ff_simple_idct_add_neon;
-            c->idct      = ff_simple_idct_neon;
-            c->perm_type = FF_IDCT_PERM_PARTTRANS;
+    if (have_neon(cpu_flags)) {
+        if (!avctx->lowres && !high_bit_depth) {
+            if (avctx->idct_algo == FF_IDCT_AUTO ||
+                avctx->idct_algo == FF_IDCT_SIMPLEAUTO ||
+                avctx->idct_algo == FF_IDCT_SIMPLENEON) {
+                c->idct_put  = ff_simple_idct_put_neon;
+                c->idct_add  = ff_simple_idct_add_neon;
+                c->idct      = ff_simple_idct_neon;
+                c->perm_type = FF_IDCT_PERM_PARTTRANS;
+            }
         }
+
+        c->add_pixels_clamped        = ff_add_pixels_clamped_neon;
+        c->put_pixels_clamped        = ff_put_pixels_clamped_neon;
+        c->put_signed_pixels_clamped = ff_put_signed_pixels_clamped_neon;
     }
 }
diff --git a/libavcodec/aarch64/idctdsp_neon.S b/libavcodec/aarch64/idctdsp_neon.S
new file mode 100644
index 0000000000..bbc9dc3f84
--- /dev/null
+++ b/libavcodec/aarch64/idctdsp_neon.S
@@ -0,0 +1,130 @@
+/*
+ * IDCT AArch64 NEON optimisations
+ *
+ * Copyright (c) 2022 Ben Avison <bavison@riscosopen.org>
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include "libavutil/aarch64/asm.S"
+
+// Clamp 16-bit signed block coefficients to unsigned 8-bit
+// On entry:
+//   x0 -> array of 64x 16-bit coefficients
+//   x1 -> 8-bit results
+//   x2 = row stride for results, bytes
+function ff_put_pixels_clamped_neon, export=1
+        ld1     {v0.16b, v1.16b, v2.16b, v3.16b}, [x0], #64
+        ld1     {v4.16b, v5.16b, v6.16b, v7.16b}, [x0]
+        sqxtun  v0.8b, v0.8h
+        sqxtun  v1.8b, v1.8h
+        sqxtun  v2.8b, v2.8h
+        sqxtun  v3.8b, v3.8h
+        sqxtun  v4.8b, v4.8h
+        st1     {v0.8b}, [x1], x2
+        sqxtun  v0.8b, v5.8h
+        st1     {v1.8b}, [x1], x2
+        sqxtun  v1.8b, v6.8h
+        st1     {v2.8b}, [x1], x2
+        sqxtun  v2.8b, v7.8h
+        st1     {v3.8b}, [x1], x2
+        st1     {v4.8b}, [x1], x2
+        st1     {v0.8b}, [x1], x2
+        st1     {v1.8b}, [x1], x2
+        st1     {v2.8b}, [x1]
+        ret
+endfunc
+
+// Clamp 16-bit signed block coefficients to signed 8-bit (biased by 128)
+// On entry:
+//   x0 -> array of 64x 16-bit coefficients
+//   x1 -> 8-bit results
+//   x2 = row stride for results, bytes
+function ff_put_signed_pixels_clamped_neon, export=1
+        ld1     {v0.16b, v1.16b, v2.16b, v3.16b}, [x0], #64
+        movi    v4.8b, #128
+        ld1     {v16.16b, v17.16b, v18.16b, v19.16b}, [x0]
+        sqxtn   v0.8b, v0.8h
+        sqxtn   v1.8b, v1.8h
+        sqxtn   v2.8b, v2.8h
+        sqxtn   v3.8b, v3.8h
+        sqxtn   v5.8b, v16.8h
+        add     v0.8b, v0.8b, v4.8b
+        sqxtn   v6.8b, v17.8h
+        add     v1.8b, v1.8b, v4.8b
+        sqxtn   v7.8b, v18.8h
+        add     v2.8b, v2.8b, v4.8b
+        sqxtn   v16.8b, v19.8h
+        add     v3.8b, v3.8b, v4.8b
+        st1     {v0.8b}, [x1], x2
+        add     v0.8b, v5.8b, v4.8b
+        st1     {v1.8b}, [x1], x2
+        add     v1.8b, v6.8b, v4.8b
+        st1     {v2.8b}, [x1], x2
+        add     v2.8b, v7.8b, v4.8b
+        st1     {v3.8b}, [x1], x2
+        add     v3.8b, v16.8b, v4.8b
+        st1     {v0.8b}, [x1], x2
+        st1     {v1.8b}, [x1], x2
+        st1     {v2.8b}, [x1], x2
+        st1     {v3.8b}, [x1]
+        ret
+endfunc
+
+// Add 16-bit signed block coefficients to unsigned 8-bit
+// On entry:
+//   x0 -> array of 64x 16-bit coefficients
+//   x1 -> 8-bit input and results
+//   x2 = row stride for 8-bit input and results, bytes
+function ff_add_pixels_clamped_neon, export=1
+        ld1     {v0.16b, v1.16b, v2.16b, v3.16b}, [x0], #64
+        mov     x3, x1
+        ld1     {v4.8b}, [x1], x2
+        ld1     {v5.8b}, [x1], x2
+        ld1     {v6.8b}, [x1], x2
+        ld1     {v7.8b}, [x1], x2
+        ld1     {v16.16b, v17.16b, v18.16b, v19.16b}, [x0]
+        uaddw   v0.8h, v0.8h, v4.8b
+        uaddw   v1.8h, v1.8h, v5.8b
+        uaddw   v2.8h, v2.8h, v6.8b
+        ld1     {v4.8b}, [x1], x2
+        uaddw   v3.8h, v3.8h, v7.8b
+        ld1     {v5.8b}, [x1], x2
+        sqxtun  v0.8b, v0.8h
+        ld1     {v6.8b}, [x1], x2
+        sqxtun  v1.8b, v1.8h
+        ld1     {v7.8b}, [x1]
+        sqxtun  v2.8b, v2.8h
+        sqxtun  v3.8b, v3.8h
+        uaddw   v4.8h, v16.8h, v4.8b
+        st1     {v0.8b}, [x3], x2
+        uaddw   v0.8h, v17.8h, v5.8b
+        st1     {v1.8b}, [x3], x2
+        uaddw   v1.8h, v18.8h, v6.8b
+        st1     {v2.8b}, [x3], x2
+        uaddw   v2.8h, v19.8h, v7.8b
+        sqxtun  v4.8b, v4.8h
+        sqxtun  v0.8b, v0.8h
+        st1     {v3.8b}, [x3], x2
+        sqxtun  v1.8b, v1.8h
+        sqxtun  v2.8b, v2.8h
+        st1     {v4.8b}, [x3], x2
+        st1     {v0.8b}, [x3], x2
+        st1     {v1.8b}, [x3], x2
+        st1     {v2.8b}, [x3]
+        ret
+endfunc
-- 
2.25.1

* [FFmpeg-devel] [PATCH 5/6] avcodec/blockdsp: Arm 64-bit NEON block clear fast paths
  2022-03-17 18:58 [FFmpeg-devel] [PATCH 0/6] avcodec/vc1: Arm optimisations Ben Avison
                   ` (3 preceding siblings ...)
  2022-03-17 18:58 ` [FFmpeg-devel] [PATCH 4/6] avcodec/idctdsp: Arm 64-bit NEON block add and clamp " Ben Avison
@ 2022-03-17 18:58 ` Ben Avison
  2022-03-17 18:58 ` [FFmpeg-devel] [PATCH 6/6] avcodec/vc1: Introduce fast path for unescaping bitstream buffer Ben Avison
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 55+ messages in thread
From: Ben Avison @ 2022-03-17 18:58 UTC (permalink / raw)
  To: ffmpeg-devel; +Cc: Ben Avison

Signed-off-by: Ben Avison <bavison@riscosopen.org>
---
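For reference (not part of the patch), the required semantics are simply:

    memset(block,  0, 64 * sizeof(int16_t));      /* clear_block:  one 8x8 block   */
    memset(blocks, 0, 6 * 64 * sizeof(int16_t));  /* clear_blocks: six such blocks */

The NEON versions stream zeroes from a pair of vector registers, 32 bytes per
store and 128 bytes per block.
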
 libavcodec/aarch64/Makefile                |  2 +
 libavcodec/aarch64/blockdsp_init_aarch64.c | 42 +++++++++++++++++++++
 libavcodec/aarch64/blockdsp_neon.S         | 43 ++++++++++++++++++++++
 libavcodec/blockdsp.c                      |  2 +
 libavcodec/blockdsp.h                      |  1 +
 5 files changed, 90 insertions(+)
 create mode 100644 libavcodec/aarch64/blockdsp_init_aarch64.c
 create mode 100644 libavcodec/aarch64/blockdsp_neon.S

diff --git a/libavcodec/aarch64/Makefile b/libavcodec/aarch64/Makefile
index c8935f205e..7078dc6089 100644
--- a/libavcodec/aarch64/Makefile
+++ b/libavcodec/aarch64/Makefile
@@ -35,6 +35,8 @@ ARMV8-OBJS-$(CONFIG_VIDEODSP)           += aarch64/videodsp.o
 
 # subsystems
 NEON-OBJS-$(CONFIG_AAC_DECODER)         += aarch64/sbrdsp_neon.o
+NEON-OBJS-$(CONFIG_BLOCKDSP)            += aarch64/blockdsp_init_aarch64.o     \
+                                           aarch64/blockdsp_neon.o
 NEON-OBJS-$(CONFIG_FFT)                 += aarch64/fft_neon.o
 NEON-OBJS-$(CONFIG_FMTCONVERT)          += aarch64/fmtconvert_neon.o
 NEON-OBJS-$(CONFIG_H264CHROMA)          += aarch64/h264cmc_neon.o
diff --git a/libavcodec/aarch64/blockdsp_init_aarch64.c b/libavcodec/aarch64/blockdsp_init_aarch64.c
new file mode 100644
index 0000000000..9f3280f007
--- /dev/null
+++ b/libavcodec/aarch64/blockdsp_init_aarch64.c
@@ -0,0 +1,42 @@
+/*
+ * AArch64 NEON optimised block operations
+ *
+ * Copyright (c) 2022 Ben Avison <bavison@riscosopen.org>
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include <stdint.h>
+
+#include "libavutil/attributes.h"
+#include "libavutil/cpu.h"
+#include "libavutil/aarch64/cpu.h"
+#include "libavcodec/avcodec.h"
+#include "libavcodec/blockdsp.h"
+
+void ff_clear_block_neon(int16_t *block);
+void ff_clear_blocks_neon(int16_t *blocks);
+
+av_cold void ff_blockdsp_init_aarch64(BlockDSPContext *c)
+{
+    int cpu_flags = av_get_cpu_flags();
+
+    if (have_neon(cpu_flags)) {
+        c->clear_block  = ff_clear_block_neon;
+        c->clear_blocks = ff_clear_blocks_neon;
+    }
+}
diff --git a/libavcodec/aarch64/blockdsp_neon.S b/libavcodec/aarch64/blockdsp_neon.S
new file mode 100644
index 0000000000..a310647a5d
--- /dev/null
+++ b/libavcodec/aarch64/blockdsp_neon.S
@@ -0,0 +1,43 @@
+/*
+ * AArch64 NEON optimised block operations
+ *
+ * Copyright (c) 2022 Ben Avison <bavison@riscosopen.org>
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include "libavutil/aarch64/asm.S"
+
+function ff_clear_block_neon, export=1
+        movi     v0.16b, #0
+        movi     v1.16b, #0
+        st1      {v0.16b, v1.16b}, [x0], #32
+        st1      {v0.16b, v1.16b}, [x0], #32
+        st1      {v0.16b, v1.16b}, [x0], #32
+        st1      {v0.16b, v1.16b}, [x0]
+        ret
+endfunc
+
+function ff_clear_blocks_neon, export=1
+        movi     v0.16b, #0
+        movi     v1.16b, #0
+        .rept    23
+        st1      {v0.16b, v1.16b}, [x0], #32
+        .endr
+        st1      {v0.16b, v1.16b}, [x0]
+        ret
+endfunc
diff --git a/libavcodec/blockdsp.c b/libavcodec/blockdsp.c
index 5fb242ea65..97cc074765 100644
--- a/libavcodec/blockdsp.c
+++ b/libavcodec/blockdsp.c
@@ -64,6 +64,8 @@ av_cold void ff_blockdsp_init(BlockDSPContext *c, AVCodecContext *avctx)
     c->fill_block_tab[0] = fill_block16_c;
     c->fill_block_tab[1] = fill_block8_c;
 
+    if (ARCH_AARCH64)
+        ff_blockdsp_init_aarch64(c);
     if (ARCH_ALPHA)
         ff_blockdsp_init_alpha(c);
     if (ARCH_ARM)
diff --git a/libavcodec/blockdsp.h b/libavcodec/blockdsp.h
index 58eecffb07..9838922d22 100644
--- a/libavcodec/blockdsp.h
+++ b/libavcodec/blockdsp.h
@@ -40,6 +40,7 @@ typedef struct BlockDSPContext {
 
 void ff_blockdsp_init(BlockDSPContext *c, AVCodecContext *avctx);
 
+void ff_blockdsp_init_aarch64(BlockDSPContext *c);
 void ff_blockdsp_init_alpha(BlockDSPContext *c);
 void ff_blockdsp_init_arm(BlockDSPContext *c);
 void ff_blockdsp_init_ppc(BlockDSPContext *c);
-- 
2.25.1

* [FFmpeg-devel] [PATCH 6/6] avcodec/vc1: Introduce fast path for unescaping bitstream buffer
  2022-03-17 18:58 [FFmpeg-devel] [PATCH 0/6] avcodec/vc1: Arm optimisations Ben Avison
                   ` (4 preceding siblings ...)
  2022-03-17 18:58 ` [FFmpeg-devel] [PATCH 5/6] avcodec/blockdsp: Arm 64-bit NEON block clear " Ben Avison
@ 2022-03-17 18:58 ` Ben Avison
  2022-03-18 19:10   ` Andreas Rheinhardt
  2022-03-19 23:06 ` [FFmpeg-devel] [PATCH 0/6] avcodec/vc1: Arm optimisations Martin Storsjö
  2022-03-25 18:52 ` [FFmpeg-devel] [PATCH v2 00/10] " Ben Avison
  7 siblings, 1 reply; 55+ messages in thread
From: Ben Avison @ 2022-03-17 18:58 UTC (permalink / raw)
  To: ffmpeg-devel; +Cc: Ben Avison

Populate with implementations suitable for 32-bit and 64-bit Arm.

Signed-off-by: Ben Avison <bavison@riscosopen.org>
---
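For context: the escapes being removed are VC-1's startcode emulation
prevention bytes, a 0x03 inserted after two zero bytes whenever the next
byte would otherwise lie in the range 0x00..0x03. The little-endian word
test used in the C wrappers below,

    (*(uint32_t *)src &~ 0x03000000) == 0x00030000

is equivalent to this byte-wise check (illustrative only):

    int is_escape = src[0] == 0x00 && src[1] == 0x00 &&
                    src[2] == 0x03 && src[3] <= 0x03;

On a match, the two zero bytes are copied through and the 0x03 is dropped.
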
 libavcodec/aarch64/vc1dsp_init_aarch64.c |  60 ++++++++
 libavcodec/aarch64/vc1dsp_neon.S         | 176 +++++++++++++++++++++++
 libavcodec/arm/vc1dsp_init_neon.c        |  60 ++++++++
 libavcodec/arm/vc1dsp_neon.S             | 118 +++++++++++++++
 libavcodec/vc1dec.c                      |  20 +--
 libavcodec/vc1dsp.c                      |   2 +
 libavcodec/vc1dsp.h                      |   3 +
 7 files changed, 429 insertions(+), 10 deletions(-)

diff --git a/libavcodec/aarch64/vc1dsp_init_aarch64.c b/libavcodec/aarch64/vc1dsp_init_aarch64.c
index b672b2aa99..2fc2d5d1d3 100644
--- a/libavcodec/aarch64/vc1dsp_init_aarch64.c
+++ b/libavcodec/aarch64/vc1dsp_init_aarch64.c
@@ -51,6 +51,64 @@ void ff_put_vc1_chroma_mc4_neon(uint8_t *dst, uint8_t *src, ptrdiff_t stride,
 void ff_avg_vc1_chroma_mc4_neon(uint8_t *dst, uint8_t *src, ptrdiff_t stride,
                                 int h, int x, int y);
 
+int ff_vc1_unescape_buffer_helper_neon(const uint8_t *src, int size, uint8_t *dst);
+
+static int vc1_unescape_buffer_neon(const uint8_t *src, int size, uint8_t *dst)
+{
+    /* Dealing with starting and stopping, and removing escape bytes, are
+     * comparatively less time-sensitive, so are more clearly expressed using
+     * a C wrapper around the assembly inner loop. Note that we assume a
+     * little-endian machine that supports unaligned loads. */
+    int dsize = 0;
+    while (size >= 4)
+    {
+        int found = 0;
+        while (!found && (((uintptr_t) dst) & 7) && size >= 4)
+        {
+            found = (*(uint32_t *)src &~ 0x03000000) == 0x00030000;
+            if (!found)
+            {
+                *dst++ = *src++;
+                --size;
+                ++dsize;
+            }
+        }
+        if (!found)
+        {
+            int skip = size - ff_vc1_unescape_buffer_helper_neon(src, size, dst);
+            dst += skip;
+            src += skip;
+            size -= skip;
+            dsize += skip;
+            while (!found && size >= 4)
+            {
+                found = (*(uint32_t *)src &~ 0x03000000) == 0x00030000;
+                if (!found)
+                {
+                    *dst++ = *src++;
+                    --size;
+                    ++dsize;
+                }
+            }
+        }
+        if (found)
+        {
+            *dst++ = *src++;
+            *dst++ = *src++;
+            ++src;
+            size -= 3;
+            dsize += 2;
+        }
+    }
+    while (size > 0)
+    {
+        *dst++ = *src++;
+        --size;
+        ++dsize;
+    }
+    return dsize;
+}
+
 av_cold void ff_vc1dsp_init_aarch64(VC1DSPContext *dsp)
 {
     int cpu_flags = av_get_cpu_flags();
@@ -76,5 +134,7 @@ av_cold void ff_vc1dsp_init_aarch64(VC1DSPContext *dsp)
         dsp->avg_no_rnd_vc1_chroma_pixels_tab[0] = ff_avg_vc1_chroma_mc8_neon;
         dsp->put_no_rnd_vc1_chroma_pixels_tab[1] = ff_put_vc1_chroma_mc4_neon;
         dsp->avg_no_rnd_vc1_chroma_pixels_tab[1] = ff_avg_vc1_chroma_mc4_neon;
+
+        dsp->vc1_unescape_buffer = vc1_unescape_buffer_neon;
     }
 }
diff --git a/libavcodec/aarch64/vc1dsp_neon.S b/libavcodec/aarch64/vc1dsp_neon.S
index c3ca3eae1e..8bdeffab44 100644
--- a/libavcodec/aarch64/vc1dsp_neon.S
+++ b/libavcodec/aarch64/vc1dsp_neon.S
@@ -1374,3 +1374,179 @@ function ff_vc1_h_loop_filter16_neon, export=1
         st2     {v2.b, v3.b}[7], [x6]
 4:      ret
 endfunc
+
+// Copy at most the specified number of bytes from source to destination buffer,
+// stopping at a multiple of 32 bytes, none of which are the start of an escape sequence
+// On entry:
+//   x0 -> source buffer
+//   w1 = max number of bytes to copy
+//   x2 -> destination buffer, optimally 8-byte aligned
+// On exit:
+//   w0 = number of bytes not copied
+function ff_vc1_unescape_buffer_helper_neon, export=1
+        // Offset by 80 to screen out cases that are too short for us to handle,
+        // and also make it easy to test for loop termination, or to determine
+        // whether we need an odd number of half-iterations of the loop.
+        subs    w1, w1, #80
+        b.mi    90f
+
+        // Set up useful constants
+        movi    v20.4s, #3, lsl #24
+        movi    v21.4s, #3, lsl #16
+
+        tst     w1, #32
+        b.ne    1f
+
+          ld1     {v0.16b, v1.16b, v2.16b}, [x0], #48
+          ext     v25.16b, v0.16b, v1.16b, #1
+          ext     v26.16b, v0.16b, v1.16b, #2
+          ext     v27.16b, v0.16b, v1.16b, #3
+          ext     v29.16b, v1.16b, v2.16b, #1
+          ext     v30.16b, v1.16b, v2.16b, #2
+          ext     v31.16b, v1.16b, v2.16b, #3
+          bic     v24.16b, v0.16b, v20.16b
+          bic     v25.16b, v25.16b, v20.16b
+          bic     v26.16b, v26.16b, v20.16b
+          bic     v27.16b, v27.16b, v20.16b
+          bic     v28.16b, v1.16b, v20.16b
+          bic     v29.16b, v29.16b, v20.16b
+          bic     v30.16b, v30.16b, v20.16b
+          bic     v31.16b, v31.16b, v20.16b
+          eor     v24.16b, v24.16b, v21.16b
+          eor     v25.16b, v25.16b, v21.16b
+          eor     v26.16b, v26.16b, v21.16b
+          eor     v27.16b, v27.16b, v21.16b
+          eor     v28.16b, v28.16b, v21.16b
+          eor     v29.16b, v29.16b, v21.16b
+          eor     v30.16b, v30.16b, v21.16b
+          eor     v31.16b, v31.16b, v21.16b
+          cmeq    v24.4s, v24.4s, #0
+          cmeq    v25.4s, v25.4s, #0
+          cmeq    v26.4s, v26.4s, #0
+          cmeq    v27.4s, v27.4s, #0
+          add     w1, w1, #32
+          b       3f
+
+1:      ld1     {v3.16b, v4.16b, v5.16b}, [x0], #48
+        ext     v25.16b, v3.16b, v4.16b, #1
+        ext     v26.16b, v3.16b, v4.16b, #2
+        ext     v27.16b, v3.16b, v4.16b, #3
+        ext     v29.16b, v4.16b, v5.16b, #1
+        ext     v30.16b, v4.16b, v5.16b, #2
+        ext     v31.16b, v4.16b, v5.16b, #3
+        bic     v24.16b, v3.16b, v20.16b
+        bic     v25.16b, v25.16b, v20.16b
+        bic     v26.16b, v26.16b, v20.16b
+        bic     v27.16b, v27.16b, v20.16b
+        bic     v28.16b, v4.16b, v20.16b
+        bic     v29.16b, v29.16b, v20.16b
+        bic     v30.16b, v30.16b, v20.16b
+        bic     v31.16b, v31.16b, v20.16b
+        eor     v24.16b, v24.16b, v21.16b
+        eor     v25.16b, v25.16b, v21.16b
+        eor     v26.16b, v26.16b, v21.16b
+        eor     v27.16b, v27.16b, v21.16b
+        eor     v28.16b, v28.16b, v21.16b
+        eor     v29.16b, v29.16b, v21.16b
+        eor     v30.16b, v30.16b, v21.16b
+        eor     v31.16b, v31.16b, v21.16b
+        cmeq    v24.4s, v24.4s, #0
+        cmeq    v25.4s, v25.4s, #0
+        cmeq    v26.4s, v26.4s, #0
+        cmeq    v27.4s, v27.4s, #0
+        // Drop through...
+2:        mov     v0.16b, v5.16b
+          ld1     {v1.16b, v2.16b}, [x0], #32
+        cmeq    v28.4s, v28.4s, #0
+        cmeq    v29.4s, v29.4s, #0
+        cmeq    v30.4s, v30.4s, #0
+        cmeq    v31.4s, v31.4s, #0
+        orr     v24.16b, v24.16b, v25.16b
+        orr     v26.16b, v26.16b, v27.16b
+        orr     v28.16b, v28.16b, v29.16b
+        orr     v30.16b, v30.16b, v31.16b
+          ext     v25.16b, v0.16b, v1.16b, #1
+        orr     v22.16b, v24.16b, v26.16b
+          ext     v26.16b, v0.16b, v1.16b, #2
+          ext     v27.16b, v0.16b, v1.16b, #3
+          ext     v29.16b, v1.16b, v2.16b, #1
+        orr     v23.16b, v28.16b, v30.16b
+          ext     v30.16b, v1.16b, v2.16b, #2
+          ext     v31.16b, v1.16b, v2.16b, #3
+          bic     v24.16b, v0.16b, v20.16b
+          bic     v25.16b, v25.16b, v20.16b
+          bic     v26.16b, v26.16b, v20.16b
+        orr     v22.16b, v22.16b, v23.16b
+          bic     v27.16b, v27.16b, v20.16b
+          bic     v28.16b, v1.16b, v20.16b
+          bic     v29.16b, v29.16b, v20.16b
+          bic     v30.16b, v30.16b, v20.16b
+          bic     v31.16b, v31.16b, v20.16b
+        addv    s22, v22.4s
+          eor     v24.16b, v24.16b, v21.16b
+          eor     v25.16b, v25.16b, v21.16b
+          eor     v26.16b, v26.16b, v21.16b
+          eor     v27.16b, v27.16b, v21.16b
+          eor     v28.16b, v28.16b, v21.16b
+        mov     w3, v22.s[0]
+          eor     v29.16b, v29.16b, v21.16b
+          eor     v30.16b, v30.16b, v21.16b
+          eor     v31.16b, v31.16b, v21.16b
+          cmeq    v24.4s, v24.4s, #0
+          cmeq    v25.4s, v25.4s, #0
+          cmeq    v26.4s, v26.4s, #0
+          cmeq    v27.4s, v27.4s, #0
+        cbnz    w3, 90f
+        st1     {v3.16b, v4.16b}, [x2], #32
+3:          mov     v3.16b, v2.16b
+            ld1     {v4.16b, v5.16b}, [x0], #32
+          cmeq    v28.4s, v28.4s, #0
+          cmeq    v29.4s, v29.4s, #0
+          cmeq    v30.4s, v30.4s, #0
+          cmeq    v31.4s, v31.4s, #0
+          orr     v24.16b, v24.16b, v25.16b
+          orr     v26.16b, v26.16b, v27.16b
+          orr     v28.16b, v28.16b, v29.16b
+          orr     v30.16b, v30.16b, v31.16b
+            ext     v25.16b, v3.16b, v4.16b, #1
+          orr     v22.16b, v24.16b, v26.16b
+            ext     v26.16b, v3.16b, v4.16b, #2
+            ext     v27.16b, v3.16b, v4.16b, #3
+            ext     v29.16b, v4.16b, v5.16b, #1
+          orr     v23.16b, v28.16b, v30.16b
+            ext     v30.16b, v4.16b, v5.16b, #2
+            ext     v31.16b, v4.16b, v5.16b, #3
+            bic     v24.16b, v3.16b, v20.16b
+            bic     v25.16b, v25.16b, v20.16b
+            bic     v26.16b, v26.16b, v20.16b
+          orr     v22.16b, v22.16b, v23.16b
+            bic     v27.16b, v27.16b, v20.16b
+            bic     v28.16b, v4.16b, v20.16b
+            bic     v29.16b, v29.16b, v20.16b
+            bic     v30.16b, v30.16b, v20.16b
+            bic     v31.16b, v31.16b, v20.16b
+          addv    s22, v22.4s
+            eor     v24.16b, v24.16b, v21.16b
+            eor     v25.16b, v25.16b, v21.16b
+            eor     v26.16b, v26.16b, v21.16b
+            eor     v27.16b, v27.16b, v21.16b
+            eor     v28.16b, v28.16b, v21.16b
+          mov     w3, v22.s[0]
+            eor     v29.16b, v29.16b, v21.16b
+            eor     v30.16b, v30.16b, v21.16b
+            eor     v31.16b, v31.16b, v21.16b
+            cmeq    v24.4s, v24.4s, #0
+            cmeq    v25.4s, v25.4s, #0
+            cmeq    v26.4s, v26.4s, #0
+            cmeq    v27.4s, v27.4s, #0
+          cbnz    w3, 91f
+          st1     {v0.16b, v1.16b}, [x2], #32
+        subs    w1, w1, #64
+        b.pl    2b
+
+90:     add     w0, w1, #80
+        ret
+
+91:     sub     w1, w1, #32
+        b       90b
+endfunc
diff --git a/libavcodec/arm/vc1dsp_init_neon.c b/libavcodec/arm/vc1dsp_init_neon.c
index f5f5c702d7..3aefbcaf6d 100644
--- a/libavcodec/arm/vc1dsp_init_neon.c
+++ b/libavcodec/arm/vc1dsp_init_neon.c
@@ -84,6 +84,64 @@ void ff_put_vc1_chroma_mc4_neon(uint8_t *dst, uint8_t *src, ptrdiff_t stride,
 void ff_avg_vc1_chroma_mc4_neon(uint8_t *dst, uint8_t *src, ptrdiff_t stride,
                                 int h, int x, int y);
 
+int ff_vc1_unescape_buffer_helper_neon(const uint8_t *src, int size, uint8_t *dst);
+
+static int vc1_unescape_buffer_neon(const uint8_t *src, int size, uint8_t *dst)
+{
+    /* Dealing with starting and stopping, and removing escape bytes, are
+     * comparatively less time-sensitive, so are more clearly expressed using
+     * a C wrapper around the assembly inner loop. Note that we assume a
+     * little-endian machine that supports unaligned loads. */
+    int dsize = 0;
+    while (size >= 4)
+    {
+        int found = 0;
+        while (!found && (((uintptr_t) dst) & 7) && size >= 4)
+        {
+            found = (*(uint32_t *)src &~ 0x03000000) == 0x00030000;
+            if (!found)
+            {
+                *dst++ = *src++;
+                --size;
+                ++dsize;
+            }
+        }
+        if (!found)
+        {
+            int skip = size - ff_vc1_unescape_buffer_helper_neon(src, size, dst);
+            dst += skip;
+            src += skip;
+            size -= skip;
+            dsize += skip;
+            while (!found && size >= 4)
+            {
+                found = (*(uint32_t *)src &~ 0x03000000) == 0x00030000;
+                if (!found)
+                {
+                    *dst++ = *src++;
+                    --size;
+                    ++dsize;
+                }
+            }
+        }
+        if (found)
+        {
+            *dst++ = *src++;
+            *dst++ = *src++;
+            ++src;
+            size -= 3;
+            dsize += 2;
+        }
+    }
+    while (size > 0)
+    {
+        *dst++ = *src++;
+        --size;
+        ++dsize;
+    }
+    return dsize;
+}
+
 #define FN_ASSIGN(X, Y) \
     dsp->put_vc1_mspel_pixels_tab[0][X+4*Y] = ff_put_vc1_mspel_mc##X##Y##_16_neon; \
     dsp->put_vc1_mspel_pixels_tab[1][X+4*Y] = ff_put_vc1_mspel_mc##X##Y##_neon
@@ -130,4 +188,6 @@ av_cold void ff_vc1dsp_init_neon(VC1DSPContext *dsp)
     dsp->avg_no_rnd_vc1_chroma_pixels_tab[0] = ff_avg_vc1_chroma_mc8_neon;
     dsp->put_no_rnd_vc1_chroma_pixels_tab[1] = ff_put_vc1_chroma_mc4_neon;
     dsp->avg_no_rnd_vc1_chroma_pixels_tab[1] = ff_avg_vc1_chroma_mc4_neon;
+
+    dsp->vc1_unescape_buffer = vc1_unescape_buffer_neon;
 }
diff --git a/libavcodec/arm/vc1dsp_neon.S b/libavcodec/arm/vc1dsp_neon.S
index 4ef083102b..9d7333cf12 100644
--- a/libavcodec/arm/vc1dsp_neon.S
+++ b/libavcodec/arm/vc1dsp_neon.S
@@ -1804,3 +1804,121 @@ function ff_vc1_h_loop_filter16_neon, export=1
 4:      vpop            {d8-d15}
         pop             {r4-r6,pc}
 endfunc
+
+@ Copy at most the specified number of bytes from source to destination buffer,
+@ stopping at a multiple of 16 bytes, none of which are the start of an escape sequence
+@ On entry:
+@   r0 -> source buffer
+@   r1 = max number of bytes to copy
+@   r2 -> destination buffer, optimally 8-byte aligned
+@ On exit:
+@   r0 = number of bytes not copied
+function ff_vc1_unescape_buffer_helper_neon, export=1
+        @ Offset by 48 to screen out cases that are too short for us to handle,
+        @ and also make it easy to test for loop termination, or to determine
+        @ whether we need an odd number of half-iterations of the loop.
+        subs    r1, r1, #48
+        bmi     90f
+
+        @ Set up useful constants
+        vmov.i32        q0, #0x3000000
+        vmov.i32        q1, #0x30000
+
+        tst             r1, #16
+        bne             1f
+
+          vld1.8          {q8, q9}, [r0]!
+          vbic            q12, q8, q0
+          vext.8          q13, q8, q9, #1
+          vext.8          q14, q8, q9, #2
+          vext.8          q15, q8, q9, #3
+          veor            q12, q12, q1
+          vbic            q13, q13, q0
+          vbic            q14, q14, q0
+          vbic            q15, q15, q0
+          vceq.i32        q12, q12, #0
+          veor            q13, q13, q1
+          veor            q14, q14, q1
+          veor            q15, q15, q1
+          vceq.i32        q13, q13, #0
+          vceq.i32        q14, q14, #0
+          vceq.i32        q15, q15, #0
+          add             r1, r1, #16
+          b               3f
+
+1:      vld1.8          {q10, q11}, [r0]!
+        vbic            q12, q10, q0
+        vext.8          q13, q10, q11, #1
+        vext.8          q14, q10, q11, #2
+        vext.8          q15, q10, q11, #3
+        veor            q12, q12, q1
+        vbic            q13, q13, q0
+        vbic            q14, q14, q0
+        vbic            q15, q15, q0
+        vceq.i32        q12, q12, #0
+        veor            q13, q13, q1
+        veor            q14, q14, q1
+        veor            q15, q15, q1
+        vceq.i32        q13, q13, #0
+        vceq.i32        q14, q14, #0
+        vceq.i32        q15, q15, #0
+        @ Drop through...
+2:        vmov            q8, q11
+          vld1.8          {q9}, [r0]!
+        vorr            q13, q12, q13
+        vorr            q15, q14, q15
+          vbic            q12, q8, q0
+        vorr            q3, q13, q15
+          vext.8          q13, q8, q9, #1
+          vext.8          q14, q8, q9, #2
+          vext.8          q15, q8, q9, #3
+          veor            q12, q12, q1
+        vorr            d6, d6, d7
+          vbic            q13, q13, q0
+          vbic            q14, q14, q0
+          vbic            q15, q15, q0
+          vceq.i32        q12, q12, #0
+        vmov            r3, r12, d6
+          veor            q13, q13, q1
+          veor            q14, q14, q1
+          veor            q15, q15, q1
+          vceq.i32        q13, q13, #0
+          vceq.i32        q14, q14, #0
+          vceq.i32        q15, q15, #0
+        orrs            r3, r3, r12
+        bne             90f
+        vst1.64         {q10}, [r2]!
+3:          vmov            q10, q9
+            vld1.8          {q11}, [r0]!
+          vorr            q13, q12, q13
+          vorr            q15, q14, q15
+            vbic            q12, q10, q0
+          vorr            q3, q13, q15
+            vext.8          q13, q10, q11, #1
+            vext.8          q14, q10, q11, #2
+            vext.8          q15, q10, q11, #3
+            veor            q12, q12, q1
+          vorr            d6, d6, d7
+            vbic            q13, q13, q0
+            vbic            q14, q14, q0
+            vbic            q15, q15, q0
+            vceq.i32        q12, q12, #0
+          vmov            r3, r12, d6
+            veor            q13, q13, q1
+            veor            q14, q14, q1
+            veor            q15, q15, q1
+            vceq.i32        q13, q13, #0
+            vceq.i32        q14, q14, #0
+            vceq.i32        q15, q15, #0
+          orrs            r3, r3, r12
+          bne             91f
+          vst1.64         {q8}, [r2]!
+        subs            r1, r1, #32
+        bpl             2b
+
+90:     add             r0, r1, #48
+        bx              lr
+
+91:     sub             r1, r1, #16
+        b               90b
+endfunc
diff --git a/libavcodec/vc1dec.c b/libavcodec/vc1dec.c
index 1c92b9d401..6a30b5b664 100644
--- a/libavcodec/vc1dec.c
+++ b/libavcodec/vc1dec.c
@@ -490,7 +490,7 @@ static av_cold int vc1_decode_init(AVCodecContext *avctx)
             size = next - start - 4;
             if (size <= 0)
                 continue;
-            buf2_size = vc1_unescape_buffer(start + 4, size, buf2);
+            buf2_size = v->vc1dsp.vc1_unescape_buffer(start + 4, size, buf2);
             init_get_bits(&gb, buf2, buf2_size * 8);
             switch (AV_RB32(start)) {
             case VC1_CODE_SEQHDR:
@@ -680,7 +680,7 @@ static int vc1_decode_frame(AVCodecContext *avctx, void *data,
                 case VC1_CODE_FRAME:
                     if (avctx->hwaccel)
                         buf_start = start;
-                    buf_size2 = vc1_unescape_buffer(start + 4, size, buf2);
+                    buf_size2 = v->vc1dsp.vc1_unescape_buffer(start + 4, size, buf2);
                     break;
                 case VC1_CODE_FIELD: {
                     int buf_size3;
@@ -697,8 +697,8 @@ static int vc1_decode_frame(AVCodecContext *avctx, void *data,
                         ret = AVERROR(ENOMEM);
                         goto err;
                     }
-                    buf_size3 = vc1_unescape_buffer(start + 4, size,
-                                                    slices[n_slices].buf);
+                    buf_size3 = v->vc1dsp.vc1_unescape_buffer(start + 4, size,
+                                                              slices[n_slices].buf);
                     init_get_bits(&slices[n_slices].gb, slices[n_slices].buf,
                                   buf_size3 << 3);
                     slices[n_slices].mby_start = avctx->coded_height + 31 >> 5;
@@ -709,7 +709,7 @@ static int vc1_decode_frame(AVCodecContext *avctx, void *data,
                     break;
                 }
                 case VC1_CODE_ENTRYPOINT: /* it should be before frame data */
-                    buf_size2 = vc1_unescape_buffer(start + 4, size, buf2);
+                    buf_size2 = v->vc1dsp.vc1_unescape_buffer(start + 4, size, buf2);
                     init_get_bits(&s->gb, buf2, buf_size2 * 8);
                     ff_vc1_decode_entry_point(avctx, v, &s->gb);
                     break;
@@ -726,8 +726,8 @@ static int vc1_decode_frame(AVCodecContext *avctx, void *data,
                         ret = AVERROR(ENOMEM);
                         goto err;
                     }
-                    buf_size3 = vc1_unescape_buffer(start + 4, size,
-                                                    slices[n_slices].buf);
+                    buf_size3 = v->vc1dsp.vc1_unescape_buffer(start + 4, size,
+                                                              slices[n_slices].buf);
                     init_get_bits(&slices[n_slices].gb, slices[n_slices].buf,
                                   buf_size3 << 3);
                     slices[n_slices].mby_start = get_bits(&slices[n_slices].gb, 9);
@@ -761,7 +761,7 @@ static int vc1_decode_frame(AVCodecContext *avctx, void *data,
                     ret = AVERROR(ENOMEM);
                     goto err;
                 }
-                buf_size3 = vc1_unescape_buffer(divider + 4, buf + buf_size - divider - 4, slices[n_slices].buf);
+                buf_size3 = v->vc1dsp.vc1_unescape_buffer(divider + 4, buf + buf_size - divider - 4, slices[n_slices].buf);
                 init_get_bits(&slices[n_slices].gb, slices[n_slices].buf,
                               buf_size3 << 3);
                 slices[n_slices].mby_start = s->mb_height + 1 >> 1;
@@ -770,9 +770,9 @@ static int vc1_decode_frame(AVCodecContext *avctx, void *data,
                 n_slices1 = n_slices - 1;
                 n_slices++;
             }
-            buf_size2 = vc1_unescape_buffer(buf, divider - buf, buf2);
+            buf_size2 = v->vc1dsp.vc1_unescape_buffer(buf, divider - buf, buf2);
         } else {
-            buf_size2 = vc1_unescape_buffer(buf, buf_size, buf2);
+            buf_size2 = v->vc1dsp.vc1_unescape_buffer(buf, buf_size, buf2);
         }
         init_get_bits(&s->gb, buf2, buf_size2*8);
     } else{
diff --git a/libavcodec/vc1dsp.c b/libavcodec/vc1dsp.c
index a29b91bf3d..11d493f002 100644
--- a/libavcodec/vc1dsp.c
+++ b/libavcodec/vc1dsp.c
@@ -34,6 +34,7 @@
 #include "rnd_avg.h"
 #include "vc1dsp.h"
 #include "startcode.h"
+#include "vc1_common.h"
 
 /* Apply overlap transform to horizontal edge */
 static void vc1_v_overlap_c(uint8_t *src, int stride)
@@ -1030,6 +1031,7 @@ av_cold void ff_vc1dsp_init(VC1DSPContext *dsp)
 #endif /* CONFIG_WMV3IMAGE_DECODER || CONFIG_VC1IMAGE_DECODER */
 
     dsp->startcode_find_candidate = ff_startcode_find_candidate_c;
+    dsp->vc1_unescape_buffer      = vc1_unescape_buffer;
 
     if (ARCH_AARCH64)
         ff_vc1dsp_init_aarch64(dsp);
diff --git a/libavcodec/vc1dsp.h b/libavcodec/vc1dsp.h
index c6443acb20..8be1198071 100644
--- a/libavcodec/vc1dsp.h
+++ b/libavcodec/vc1dsp.h
@@ -80,6 +80,9 @@ typedef struct VC1DSPContext {
      * one or more further zero bytes and a one byte.
      */
     int (*startcode_find_candidate)(const uint8_t *buf, int size);
+
+    /* Copy a buffer, removing startcode emulation escape bytes as we go */
+    int (*vc1_unescape_buffer)(const uint8_t *src, int size, uint8_t *dst);
 } VC1DSPContext;
 
 void ff_vc1dsp_init(VC1DSPContext* c);
-- 
2.25.1


* Re: [FFmpeg-devel] [PATCH 6/6] avcodec/vc1: Introduce fast path for unescaping bitstream buffer
  2022-03-17 18:58 ` [FFmpeg-devel] [PATCH 6/6] avcodec/vc1: Introduce fast path for unescaping bitstream buffer Ben Avison
@ 2022-03-18 19:10   ` Andreas Rheinhardt
  2022-03-21 15:51     ` Ben Avison
  0 siblings, 1 reply; 55+ messages in thread
From: Andreas Rheinhardt @ 2022-03-18 19:10 UTC (permalink / raw)
  To: ffmpeg-devel

Ben Avison:
> Populate with implementations suitable for 32-bit and 64-bit Arm.
> 
> Signed-off-by: Ben Avison <bavison@riscosopen.org>
> ---
>  libavcodec/aarch64/vc1dsp_init_aarch64.c |  60 ++++++++
>  libavcodec/aarch64/vc1dsp_neon.S         | 176 +++++++++++++++++++++++
>  libavcodec/arm/vc1dsp_init_neon.c        |  60 ++++++++
>  libavcodec/arm/vc1dsp_neon.S             | 118 +++++++++++++++
>  libavcodec/vc1dec.c                      |  20 +--
>  libavcodec/vc1dsp.c                      |   2 +
>  libavcodec/vc1dsp.h                      |   3 +
>  7 files changed, 429 insertions(+), 10 deletions(-)
> 
> diff --git a/libavcodec/aarch64/vc1dsp_init_aarch64.c b/libavcodec/aarch64/vc1dsp_init_aarch64.c
> index b672b2aa99..2fc2d5d1d3 100644
> --- a/libavcodec/aarch64/vc1dsp_init_aarch64.c
> +++ b/libavcodec/aarch64/vc1dsp_init_aarch64.c
> @@ -51,6 +51,64 @@ void ff_put_vc1_chroma_mc4_neon(uint8_t *dst, uint8_t *src, ptrdiff_t stride,
>  void ff_avg_vc1_chroma_mc4_neon(uint8_t *dst, uint8_t *src, ptrdiff_t stride,
>                                  int h, int x, int y);
>  
> +int ff_vc1_unescape_buffer_helper_neon(const uint8_t *src, int size, uint8_t *dst);
> +
> +static int vc1_unescape_buffer_neon(const uint8_t *src, int size, uint8_t *dst)
> +{
> +    /* Dealing with starting and stopping, and removing escape bytes, are
> +     * comparatively less time-sensitive, so are more clearly expressed using
> +     * a C wrapper around the assembly inner loop. Note that we assume a
> +     * little-endian machine that supports unaligned loads. */
> +    int dsize = 0;
> +    while (size >= 4)
> +    {
> +        int found = 0;
> +        while (!found && (((uintptr_t) dst) & 7) && size >= 4)
> +        {
> +            found = (*(uint32_t *)src &~ 0x03000000) == 0x00030000;
> +            if (!found)
> +            {
> +                *dst++ = *src++;
> +                --size;
> +                ++dsize;
> +            }
> +        }
> +        if (!found)
> +        {
> +            int skip = size - ff_vc1_unescape_buffer_helper_neon(src, size, dst);
> +            dst += skip;
> +            src += skip;
> +            size -= skip;
> +            dsize += skip;
> +            while (!found && size >= 4)
> +            {
> +                found = (*(uint32_t *)src &~ 0x03000000) == 0x00030000;
> +                if (!found)
> +                {
> +                    *dst++ = *src++;
> +                    --size;
> +                    ++dsize;
> +                }
> +            }
> +        }
> +        if (found)
> +        {
> +            *dst++ = *src++;
> +            *dst++ = *src++;
> +            ++src;
> +            size -= 3;
> +            dsize += 2;
> +        }
> +    }
> +    while (size > 0)
> +    {
> +        *dst++ = *src++;
> +        --size;
> +        ++dsize;
> +    }
> +    return dsize;
> +}
> +
>  av_cold void ff_vc1dsp_init_aarch64(VC1DSPContext *dsp)
>  {
>      int cpu_flags = av_get_cpu_flags();
> @@ -76,5 +134,7 @@ av_cold void ff_vc1dsp_init_aarch64(VC1DSPContext *dsp)
>          dsp->avg_no_rnd_vc1_chroma_pixels_tab[0] = ff_avg_vc1_chroma_mc8_neon;
>          dsp->put_no_rnd_vc1_chroma_pixels_tab[1] = ff_put_vc1_chroma_mc4_neon;
>          dsp->avg_no_rnd_vc1_chroma_pixels_tab[1] = ff_avg_vc1_chroma_mc4_neon;
> +
> +        dsp->vc1_unescape_buffer = vc1_unescape_buffer_neon;
>      }
>  }
> diff --git a/libavcodec/aarch64/vc1dsp_neon.S b/libavcodec/aarch64/vc1dsp_neon.S
> index c3ca3eae1e..8bdeffab44 100644
> --- a/libavcodec/aarch64/vc1dsp_neon.S
> +++ b/libavcodec/aarch64/vc1dsp_neon.S
> @@ -1374,3 +1374,179 @@ function ff_vc1_h_loop_filter16_neon, export=1
>          st2     {v2.b, v3.b}[7], [x6]
>  4:      ret
>  endfunc
> +
> +// Copy at most the specified number of bytes from source to destination buffer,
> +// stopping at a multiple of 32 bytes, none of which are the start of an escape sequence
> +// On entry:
> +//   x0 -> source buffer
> +//   w1 = max number of bytes to copy
> +//   x2 -> destination buffer, optimally 8-byte aligned
> +// On exit:
> +//   w0 = number of bytes not copied
> +function ff_vc1_unescape_buffer_helper_neon, export=1
> +        // Offset by 80 to screen out cases that are too short for us to handle,
> +        // and also make it easy to test for loop termination, or to determine
> +        // whether we need an odd number of half-iterations of the loop.
> +        subs    w1, w1, #80
> +        b.mi    90f
> +
> +        // Set up useful constants
> +        movi    v20.4s, #3, lsl #24
> +        movi    v21.4s, #3, lsl #16
> +
> +        tst     w1, #32
> +        b.ne    1f
> +
> +          ld1     {v0.16b, v1.16b, v2.16b}, [x0], #48
> +          ext     v25.16b, v0.16b, v1.16b, #1
> +          ext     v26.16b, v0.16b, v1.16b, #2
> +          ext     v27.16b, v0.16b, v1.16b, #3
> +          ext     v29.16b, v1.16b, v2.16b, #1
> +          ext     v30.16b, v1.16b, v2.16b, #2
> +          ext     v31.16b, v1.16b, v2.16b, #3
> +          bic     v24.16b, v0.16b, v20.16b
> +          bic     v25.16b, v25.16b, v20.16b
> +          bic     v26.16b, v26.16b, v20.16b
> +          bic     v27.16b, v27.16b, v20.16b
> +          bic     v28.16b, v1.16b, v20.16b
> +          bic     v29.16b, v29.16b, v20.16b
> +          bic     v30.16b, v30.16b, v20.16b
> +          bic     v31.16b, v31.16b, v20.16b
> +          eor     v24.16b, v24.16b, v21.16b
> +          eor     v25.16b, v25.16b, v21.16b
> +          eor     v26.16b, v26.16b, v21.16b
> +          eor     v27.16b, v27.16b, v21.16b
> +          eor     v28.16b, v28.16b, v21.16b
> +          eor     v29.16b, v29.16b, v21.16b
> +          eor     v30.16b, v30.16b, v21.16b
> +          eor     v31.16b, v31.16b, v21.16b
> +          cmeq    v24.4s, v24.4s, #0
> +          cmeq    v25.4s, v25.4s, #0
> +          cmeq    v26.4s, v26.4s, #0
> +          cmeq    v27.4s, v27.4s, #0
> +          add     w1, w1, #32
> +          b       3f
> +
> +1:      ld1     {v3.16b, v4.16b, v5.16b}, [x0], #48
> +        ext     v25.16b, v3.16b, v4.16b, #1
> +        ext     v26.16b, v3.16b, v4.16b, #2
> +        ext     v27.16b, v3.16b, v4.16b, #3
> +        ext     v29.16b, v4.16b, v5.16b, #1
> +        ext     v30.16b, v4.16b, v5.16b, #2
> +        ext     v31.16b, v4.16b, v5.16b, #3
> +        bic     v24.16b, v3.16b, v20.16b
> +        bic     v25.16b, v25.16b, v20.16b
> +        bic     v26.16b, v26.16b, v20.16b
> +        bic     v27.16b, v27.16b, v20.16b
> +        bic     v28.16b, v4.16b, v20.16b
> +        bic     v29.16b, v29.16b, v20.16b
> +        bic     v30.16b, v30.16b, v20.16b
> +        bic     v31.16b, v31.16b, v20.16b
> +        eor     v24.16b, v24.16b, v21.16b
> +        eor     v25.16b, v25.16b, v21.16b
> +        eor     v26.16b, v26.16b, v21.16b
> +        eor     v27.16b, v27.16b, v21.16b
> +        eor     v28.16b, v28.16b, v21.16b
> +        eor     v29.16b, v29.16b, v21.16b
> +        eor     v30.16b, v30.16b, v21.16b
> +        eor     v31.16b, v31.16b, v21.16b
> +        cmeq    v24.4s, v24.4s, #0
> +        cmeq    v25.4s, v25.4s, #0
> +        cmeq    v26.4s, v26.4s, #0
> +        cmeq    v27.4s, v27.4s, #0
> +        // Drop through...
> +2:        mov     v0.16b, v5.16b
> +          ld1     {v1.16b, v2.16b}, [x0], #32
> +        cmeq    v28.4s, v28.4s, #0
> +        cmeq    v29.4s, v29.4s, #0
> +        cmeq    v30.4s, v30.4s, #0
> +        cmeq    v31.4s, v31.4s, #0
> +        orr     v24.16b, v24.16b, v25.16b
> +        orr     v26.16b, v26.16b, v27.16b
> +        orr     v28.16b, v28.16b, v29.16b
> +        orr     v30.16b, v30.16b, v31.16b
> +          ext     v25.16b, v0.16b, v1.16b, #1
> +        orr     v22.16b, v24.16b, v26.16b
> +          ext     v26.16b, v0.16b, v1.16b, #2
> +          ext     v27.16b, v0.16b, v1.16b, #3
> +          ext     v29.16b, v1.16b, v2.16b, #1
> +        orr     v23.16b, v28.16b, v30.16b
> +          ext     v30.16b, v1.16b, v2.16b, #2
> +          ext     v31.16b, v1.16b, v2.16b, #3
> +          bic     v24.16b, v0.16b, v20.16b
> +          bic     v25.16b, v25.16b, v20.16b
> +          bic     v26.16b, v26.16b, v20.16b
> +        orr     v22.16b, v22.16b, v23.16b
> +          bic     v27.16b, v27.16b, v20.16b
> +          bic     v28.16b, v1.16b, v20.16b
> +          bic     v29.16b, v29.16b, v20.16b
> +          bic     v30.16b, v30.16b, v20.16b
> +          bic     v31.16b, v31.16b, v20.16b
> +        addv    s22, v22.4s
> +          eor     v24.16b, v24.16b, v21.16b
> +          eor     v25.16b, v25.16b, v21.16b
> +          eor     v26.16b, v26.16b, v21.16b
> +          eor     v27.16b, v27.16b, v21.16b
> +          eor     v28.16b, v28.16b, v21.16b
> +        mov     w3, v22.s[0]
> +          eor     v29.16b, v29.16b, v21.16b
> +          eor     v30.16b, v30.16b, v21.16b
> +          eor     v31.16b, v31.16b, v21.16b
> +          cmeq    v24.4s, v24.4s, #0
> +          cmeq    v25.4s, v25.4s, #0
> +          cmeq    v26.4s, v26.4s, #0
> +          cmeq    v27.4s, v27.4s, #0
> +        cbnz    w3, 90f
> +        st1     {v3.16b, v4.16b}, [x2], #32
> +3:          mov     v3.16b, v2.16b
> +            ld1     {v4.16b, v5.16b}, [x0], #32
> +          cmeq    v28.4s, v28.4s, #0
> +          cmeq    v29.4s, v29.4s, #0
> +          cmeq    v30.4s, v30.4s, #0
> +          cmeq    v31.4s, v31.4s, #0
> +          orr     v24.16b, v24.16b, v25.16b
> +          orr     v26.16b, v26.16b, v27.16b
> +          orr     v28.16b, v28.16b, v29.16b
> +          orr     v30.16b, v30.16b, v31.16b
> +            ext     v25.16b, v3.16b, v4.16b, #1
> +          orr     v22.16b, v24.16b, v26.16b
> +            ext     v26.16b, v3.16b, v4.16b, #2
> +            ext     v27.16b, v3.16b, v4.16b, #3
> +            ext     v29.16b, v4.16b, v5.16b, #1
> +          orr     v23.16b, v28.16b, v30.16b
> +            ext     v30.16b, v4.16b, v5.16b, #2
> +            ext     v31.16b, v4.16b, v5.16b, #3
> +            bic     v24.16b, v3.16b, v20.16b
> +            bic     v25.16b, v25.16b, v20.16b
> +            bic     v26.16b, v26.16b, v20.16b
> +          orr     v22.16b, v22.16b, v23.16b
> +            bic     v27.16b, v27.16b, v20.16b
> +            bic     v28.16b, v4.16b, v20.16b
> +            bic     v29.16b, v29.16b, v20.16b
> +            bic     v30.16b, v30.16b, v20.16b
> +            bic     v31.16b, v31.16b, v20.16b
> +          addv    s22, v22.4s
> +            eor     v24.16b, v24.16b, v21.16b
> +            eor     v25.16b, v25.16b, v21.16b
> +            eor     v26.16b, v26.16b, v21.16b
> +            eor     v27.16b, v27.16b, v21.16b
> +            eor     v28.16b, v28.16b, v21.16b
> +          mov     w3, v22.s[0]
> +            eor     v29.16b, v29.16b, v21.16b
> +            eor     v30.16b, v30.16b, v21.16b
> +            eor     v31.16b, v31.16b, v21.16b
> +            cmeq    v24.4s, v24.4s, #0
> +            cmeq    v25.4s, v25.4s, #0
> +            cmeq    v26.4s, v26.4s, #0
> +            cmeq    v27.4s, v27.4s, #0
> +          cbnz    w3, 91f
> +          st1     {v0.16b, v1.16b}, [x2], #32
> +        subs    w1, w1, #64
> +        b.pl    2b
> +
> +90:     add     w0, w1, #80
> +        ret
> +
> +91:     sub     w1, w1, #32
> +        b       90b
> +endfunc
> diff --git a/libavcodec/arm/vc1dsp_init_neon.c b/libavcodec/arm/vc1dsp_init_neon.c
> index f5f5c702d7..3aefbcaf6d 100644
> --- a/libavcodec/arm/vc1dsp_init_neon.c
> +++ b/libavcodec/arm/vc1dsp_init_neon.c
> @@ -84,6 +84,64 @@ void ff_put_vc1_chroma_mc4_neon(uint8_t *dst, uint8_t *src, ptrdiff_t stride,
>  void ff_avg_vc1_chroma_mc4_neon(uint8_t *dst, uint8_t *src, ptrdiff_t stride,
>                                  int h, int x, int y);
>  
> +int ff_vc1_unescape_buffer_helper_neon(const uint8_t *src, int size, uint8_t *dst);
> +
> +static int vc1_unescape_buffer_neon(const uint8_t *src, int size, uint8_t *dst)
> +{
> +    /* Dealing with starting and stopping, and removing escape bytes, are
> +     * comparatively less time-sensitive, so are more clearly expressed using
> +     * a C wrapper around the assembly inner loop. Note that we assume a
> +     * little-endian machine that supports unaligned loads. */

You should nevertheless use AV_RL32 for your unaligned LE loads
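
Roughly (AV_RL32 is the unaligned little-endian load macro from
libavutil/intreadwrite.h; the expression is true when src starts with
0x00 0x00 0x03 followed by a byte <= 0x03):

    found = (AV_RL32(src) & ~0x03000000u) == 0x00030000u;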

> +    int dsize = 0;
> +    while (size >= 4)
> +    {
> +        int found = 0;
> +        while (!found && (((uintptr_t) dst) & 7) && size >= 4)
> +        {
> +            found = (*(uint32_t *)src &~ 0x03000000) == 0x00030000;
> +            if (!found)
> +            {
> +                *dst++ = *src++;
> +                --size;
> +                ++dsize;
> +            }
> +        }
> +        if (!found)
> +        {
> +            int skip = size - ff_vc1_unescape_buffer_helper_neon(src, size, dst);
> +            dst += skip;
> +            src += skip;
> +            size -= skip;
> +            dsize += skip;
> +            while (!found && size >= 4)
> +            {
> +                found = (*(uint32_t *)src &~ 0x03000000) == 0x00030000;
> +                if (!found)
> +                {
> +                    *dst++ = *src++;
> +                    --size;
> +                    ++dsize;
> +                }
> +            }
> +        }
> +        if (found)
> +        {
> +            *dst++ = *src++;
> +            *dst++ = *src++;
> +            ++src;
> +            size -= 3;
> +            dsize += 2;
> +        }
> +    }
> +    while (size > 0)
> +    {
> +        *dst++ = *src++;
> +        --size;
> +        ++dsize;
> +    }
> +    return dsize;
> +}
> +
>  #define FN_ASSIGN(X, Y) \
>      dsp->put_vc1_mspel_pixels_tab[0][X+4*Y] = ff_put_vc1_mspel_mc##X##Y##_16_neon; \
>      dsp->put_vc1_mspel_pixels_tab[1][X+4*Y] = ff_put_vc1_mspel_mc##X##Y##_neon
> @@ -130,4 +188,6 @@ av_cold void ff_vc1dsp_init_neon(VC1DSPContext *dsp)
>      dsp->avg_no_rnd_vc1_chroma_pixels_tab[0] = ff_avg_vc1_chroma_mc8_neon;
>      dsp->put_no_rnd_vc1_chroma_pixels_tab[1] = ff_put_vc1_chroma_mc4_neon;
>      dsp->avg_no_rnd_vc1_chroma_pixels_tab[1] = ff_avg_vc1_chroma_mc4_neon;
> +
> +    dsp->vc1_unescape_buffer = vc1_unescape_buffer_neon;
>  }
> diff --git a/libavcodec/arm/vc1dsp_neon.S b/libavcodec/arm/vc1dsp_neon.S
> index 4ef083102b..9d7333cf12 100644
> --- a/libavcodec/arm/vc1dsp_neon.S
> +++ b/libavcodec/arm/vc1dsp_neon.S
> @@ -1804,3 +1804,121 @@ function ff_vc1_h_loop_filter16_neon, export=1
>  4:      vpop            {d8-d15}
>          pop             {r4-r6,pc}
>  endfunc
> +
> +@ Copy at most the specified number of bytes from source to destination buffer,
> +@ stopping at a multiple of 16 bytes, none of which are the start of an escape sequence
> +@ On entry:
> +@   r0 -> source buffer
> +@   r1 = max number of bytes to copy
> +@   r2 -> destination buffer, optimally 8-byte aligned
> +@ On exit:
> +@   r0 = number of bytes not copied
> +function ff_vc1_unescape_buffer_helper_neon, export=1
> +        @ Offset by 48 to screen out cases that are too short for us to handle,
> +        @ and also make it easy to test for loop termination, or to determine
> +        @ whether we need an odd number of half-iterations of the loop.
> +        subs    r1, r1, #48
> +        bmi     90f
> +
> +        @ Set up useful constants
> +        vmov.i32        q0, #0x3000000
> +        vmov.i32        q1, #0x30000
> +
> +        tst             r1, #16
> +        bne             1f
> +
> +          vld1.8          {q8, q9}, [r0]!
> +          vbic            q12, q8, q0
> +          vext.8          q13, q8, q9, #1
> +          vext.8          q14, q8, q9, #2
> +          vext.8          q15, q8, q9, #3
> +          veor            q12, q12, q1
> +          vbic            q13, q13, q0
> +          vbic            q14, q14, q0
> +          vbic            q15, q15, q0
> +          vceq.i32        q12, q12, #0
> +          veor            q13, q13, q1
> +          veor            q14, q14, q1
> +          veor            q15, q15, q1
> +          vceq.i32        q13, q13, #0
> +          vceq.i32        q14, q14, #0
> +          vceq.i32        q15, q15, #0
> +          add             r1, r1, #16
> +          b               3f
> +
> +1:      vld1.8          {q10, q11}, [r0]!
> +        vbic            q12, q10, q0
> +        vext.8          q13, q10, q11, #1
> +        vext.8          q14, q10, q11, #2
> +        vext.8          q15, q10, q11, #3
> +        veor            q12, q12, q1
> +        vbic            q13, q13, q0
> +        vbic            q14, q14, q0
> +        vbic            q15, q15, q0
> +        vceq.i32        q12, q12, #0
> +        veor            q13, q13, q1
> +        veor            q14, q14, q1
> +        veor            q15, q15, q1
> +        vceq.i32        q13, q13, #0
> +        vceq.i32        q14, q14, #0
> +        vceq.i32        q15, q15, #0
> +        @ Drop through...
> +2:        vmov            q8, q11
> +          vld1.8          {q9}, [r0]!
> +        vorr            q13, q12, q13
> +        vorr            q15, q14, q15
> +          vbic            q12, q8, q0
> +        vorr            q3, q13, q15
> +          vext.8          q13, q8, q9, #1
> +          vext.8          q14, q8, q9, #2
> +          vext.8          q15, q8, q9, #3
> +          veor            q12, q12, q1
> +        vorr            d6, d6, d7
> +          vbic            q13, q13, q0
> +          vbic            q14, q14, q0
> +          vbic            q15, q15, q0
> +          vceq.i32        q12, q12, #0
> +        vmov            r3, r12, d6
> +          veor            q13, q13, q1
> +          veor            q14, q14, q1
> +          veor            q15, q15, q1
> +          vceq.i32        q13, q13, #0
> +          vceq.i32        q14, q14, #0
> +          vceq.i32        q15, q15, #0
> +        orrs            r3, r3, r12
> +        bne             90f
> +        vst1.64         {q10}, [r2]!
> +3:          vmov            q10, q9
> +            vld1.8          {q11}, [r0]!
> +          vorr            q13, q12, q13
> +          vorr            q15, q14, q15
> +            vbic            q12, q10, q0
> +          vorr            q3, q13, q15
> +            vext.8          q13, q10, q11, #1
> +            vext.8          q14, q10, q11, #2
> +            vext.8          q15, q10, q11, #3
> +            veor            q12, q12, q1
> +          vorr            d6, d6, d7
> +            vbic            q13, q13, q0
> +            vbic            q14, q14, q0
> +            vbic            q15, q15, q0
> +            vceq.i32        q12, q12, #0
> +          vmov            r3, r12, d6
> +            veor            q13, q13, q1
> +            veor            q14, q14, q1
> +            veor            q15, q15, q1
> +            vceq.i32        q13, q13, #0
> +            vceq.i32        q14, q14, #0
> +            vceq.i32        q15, q15, #0
> +          orrs            r3, r3, r12
> +          bne             91f
> +          vst1.64         {q8}, [r2]!
> +        subs            r1, r1, #32
> +        bpl             2b
> +
> +90:     add             r0, r1, #48
> +        bx              lr
> +
> +91:     sub             r1, r1, #16
> +        b               90b
> +endfunc
> diff --git a/libavcodec/vc1dec.c b/libavcodec/vc1dec.c
> index 1c92b9d401..6a30b5b664 100644
> --- a/libavcodec/vc1dec.c
> +++ b/libavcodec/vc1dec.c
> @@ -490,7 +490,7 @@ static av_cold int vc1_decode_init(AVCodecContext *avctx)
>              size = next - start - 4;
>              if (size <= 0)
>                  continue;
> -            buf2_size = vc1_unescape_buffer(start + 4, size, buf2);
> +            buf2_size = v->vc1dsp.vc1_unescape_buffer(start + 4, size, buf2);
>              init_get_bits(&gb, buf2, buf2_size * 8);
>              switch (AV_RB32(start)) {
>              case VC1_CODE_SEQHDR:
> @@ -680,7 +680,7 @@ static int vc1_decode_frame(AVCodecContext *avctx, void *data,
>                  case VC1_CODE_FRAME:
>                      if (avctx->hwaccel)
>                          buf_start = start;
> -                    buf_size2 = vc1_unescape_buffer(start + 4, size, buf2);
> +                    buf_size2 = v->vc1dsp.vc1_unescape_buffer(start + 4, size, buf2);
>                      break;
>                  case VC1_CODE_FIELD: {
>                      int buf_size3;
> @@ -697,8 +697,8 @@ static int vc1_decode_frame(AVCodecContext *avctx, void *data,
>                          ret = AVERROR(ENOMEM);
>                          goto err;
>                      }
> -                    buf_size3 = vc1_unescape_buffer(start + 4, size,
> -                                                    slices[n_slices].buf);
> +                    buf_size3 = v->vc1dsp.vc1_unescape_buffer(start + 4, size,
> +                                                              slices[n_slices].buf);
>                      init_get_bits(&slices[n_slices].gb, slices[n_slices].buf,
>                                    buf_size3 << 3);
>                      slices[n_slices].mby_start = avctx->coded_height + 31 >> 5;
> @@ -709,7 +709,7 @@ static int vc1_decode_frame(AVCodecContext *avctx, void *data,
>                      break;
>                  }
>                  case VC1_CODE_ENTRYPOINT: /* it should be before frame data */
> -                    buf_size2 = vc1_unescape_buffer(start + 4, size, buf2);
> +                    buf_size2 = v->vc1dsp.vc1_unescape_buffer(start + 4, size, buf2);
>                      init_get_bits(&s->gb, buf2, buf_size2 * 8);
>                      ff_vc1_decode_entry_point(avctx, v, &s->gb);
>                      break;
> @@ -726,8 +726,8 @@ static int vc1_decode_frame(AVCodecContext *avctx, void *data,
>                          ret = AVERROR(ENOMEM);
>                          goto err;
>                      }
> -                    buf_size3 = vc1_unescape_buffer(start + 4, size,
> -                                                    slices[n_slices].buf);
> +                    buf_size3 = v->vc1dsp.vc1_unescape_buffer(start + 4, size,
> +                                                              slices[n_slices].buf);
>                      init_get_bits(&slices[n_slices].gb, slices[n_slices].buf,
>                                    buf_size3 << 3);
>                      slices[n_slices].mby_start = get_bits(&slices[n_slices].gb, 9);
> @@ -761,7 +761,7 @@ static int vc1_decode_frame(AVCodecContext *avctx, void *data,
>                      ret = AVERROR(ENOMEM);
>                      goto err;
>                  }
> -                buf_size3 = vc1_unescape_buffer(divider + 4, buf + buf_size - divider - 4, slices[n_slices].buf);
> +                buf_size3 = v->vc1dsp.vc1_unescape_buffer(divider + 4, buf + buf_size - divider - 4, slices[n_slices].buf);
>                  init_get_bits(&slices[n_slices].gb, slices[n_slices].buf,
>                                buf_size3 << 3);
>                  slices[n_slices].mby_start = s->mb_height + 1 >> 1;
> @@ -770,9 +770,9 @@ static int vc1_decode_frame(AVCodecContext *avctx, void *data,
>                  n_slices1 = n_slices - 1;
>                  n_slices++;
>              }
> -            buf_size2 = vc1_unescape_buffer(buf, divider - buf, buf2);
> +            buf_size2 = v->vc1dsp.vc1_unescape_buffer(buf, divider - buf, buf2);
>          } else {
> -            buf_size2 = vc1_unescape_buffer(buf, buf_size, buf2);
> +            buf_size2 = v->vc1dsp.vc1_unescape_buffer(buf, buf_size, buf2);
>          }
>          init_get_bits(&s->gb, buf2, buf_size2*8);
>      } else{
> diff --git a/libavcodec/vc1dsp.c b/libavcodec/vc1dsp.c
> index a29b91bf3d..11d493f002 100644
> --- a/libavcodec/vc1dsp.c
> +++ b/libavcodec/vc1dsp.c
> @@ -34,6 +34,7 @@
>  #include "rnd_avg.h"
>  #include "vc1dsp.h"
>  #include "startcode.h"
> +#include "vc1_common.h"
>  
>  /* Apply overlap transform to horizontal edge */
>  static void vc1_v_overlap_c(uint8_t *src, int stride)
> @@ -1030,6 +1031,7 @@ av_cold void ff_vc1dsp_init(VC1DSPContext *dsp)
>  #endif /* CONFIG_WMV3IMAGE_DECODER || CONFIG_VC1IMAGE_DECODER */
>  
>      dsp->startcode_find_candidate = ff_startcode_find_candidate_c;
> +    dsp->vc1_unescape_buffer      = vc1_unescape_buffer;
>  
>      if (ARCH_AARCH64)
>          ff_vc1dsp_init_aarch64(dsp);
> diff --git a/libavcodec/vc1dsp.h b/libavcodec/vc1dsp.h
> index c6443acb20..8be1198071 100644
> --- a/libavcodec/vc1dsp.h
> +++ b/libavcodec/vc1dsp.h
> @@ -80,6 +80,9 @@ typedef struct VC1DSPContext {
>       * one or more further zero bytes and a one byte.
>       */
>      int (*startcode_find_candidate)(const uint8_t *buf, int size);
> +
> +    /* Copy a buffer, removing startcode emulation escape bytes as we go */
> +    int (*vc1_unescape_buffer)(const uint8_t *src, int size, uint8_t *dst);
>  } VC1DSPContext;
>  
>  void ff_vc1dsp_init(VC1DSPContext* c);

1. You should add some benchmarks to the commit message.
2. The unescaping process for VC1 is basically the same as for H.264 and
HEVC*, and for those we already have better-optimized code in
libavcodec/h2645_parse.c. Can you check the performance of this code
here against (re)using the code from h2645_parse.c?
3. Btw: The code in h2645_parse.c could even be optimized further along
the lines of
https://ffmpeg.org/pipermail/ffmpeg-devel/2019-June/245203.html (the
H.264 and VC1 parsers use a quite suboptimal startcode search; that
patch is part of a patchset I submitted ages ago to improve it).

- Andreas

*: Except for the fact that VC-1 seems to allow 0x00 0x00 0x03 0xXY with
0xXY > 3 (where the 0x03 is not escaped) to occur inside an EBDU; it also
allows 0x00 0x00 0x02 (while the informative process for encoders is the
same as for H.2645, it does not produce the byte sequences disallowed by
H.264).

* Re: [FFmpeg-devel] [PATCH 0/6] avcodec/vc1: Arm optimisations
  2022-03-17 18:58 [FFmpeg-devel] [PATCH 0/6] avcodec/vc1: Arm optimisations Ben Avison
                   ` (5 preceding siblings ...)
  2022-03-17 18:58 ` [FFmpeg-devel] [PATCH 6/6] avcodec/vc1: Introduce fast path for unescaping bitstream buffer Ben Avison
@ 2022-03-19 23:06 ` Martin Storsjö
  2022-03-19 23:07   ` Martin Storsjö
  2022-03-21 17:37   ` Ben Avison
  2022-03-25 18:52 ` [FFmpeg-devel] [PATCH v2 00/10] " Ben Avison
  7 siblings, 2 replies; 55+ messages in thread
From: Martin Storsjö @ 2022-03-19 23:06 UTC (permalink / raw)
  To: FFmpeg development discussions and patches; +Cc: Ben Avison

Hi Ben,

On Thu, 17 Mar 2022, Ben Avison wrote:

> The VC1 decoder was missing lots of important fast paths for Arm, especially
> for 64-bit Arm. This submission fills in implementations for all functions
> where a fast path already existed and the fallback C implementation was
> taking 1% or more of the runtime, and adds a new fast path to permit
> vc1_unescape_buffer() to be overridden.
>
> I've measured the playback speed on a 1.5 GHz Cortex-A72 (Raspberry Pi 4)
> using `ffmpeg -i <bitstream> -f null -` for a couple of example streams:
>
> Architecture:  AArch32    AArch32    AArch64    AArch64
> Stream:        1          2          1          2
> Before speed:  1.22x      0.82x      1.00x      0.67x
> After speed:   1.31x      0.98x      1.39x      1.06x
> Improvement:   7.4%       20%        39%        58%
>
> `make fate` passes on both AArch32 and AArch64.

Thanks for the patches! I have looked at them briefly (I haven't 
started reading the implementation in detail yet, though).

As you are writing assembly for these functions, I would very much 
appreciate if you could add checkasm tests for all the functions you're 
implementing. I see that there exists a test for the blockdsp functions, 
but all the other ones are missing a test.

I try to request such tests for all new assembly. Such a test allows 
testing all interesting cornercases of the DSP functions with one concise 
test, instead of having to run the full fate testsuite. It also allows 
catching a number of other possible lingering issues, like using the full 
64-bit register when the argument only sets 32 bits and the upper bits 
are undefined, or failing to restore callee-saved registers, etc. It also 
allows for easy benchmarking of the functions on their own, which is very 
useful for tuning of the implementation. And it finally allows easily 
checking that the assembly works correctly when built with a different 
toolchain for a different platform - without needing to run the full 
decoding tests.

Especially as you've been implementing the functions, you're probably more 
familiar with the expectations and behaviours (and potential cornercases 
that are worth testing) of the functions than most other developers in the 
community at the moment, which is good for writing useful testcases.

There's plenty of existing examples of such tests - the h264dsp, vp8dsp 
and vp9dsp cases might be relevant.


The other main issue I'd like to request is to indent the assembly 
similarly to the rest of the existing assembly. For the 32 bit assembly, 
your patches do match the surrounding code, but for the 64 bit assembly, 
your patches align the operands column differently than the rest. (I think 
your code aligns the operands with 16 chars to the left of the operands, 
while our code aligns it with 24 chars to the left, both in 32 and 64 bit 
arm assembly.)


Finally, the 32 bit assembly fails to build for me both with (recent) 
clang and old binutils, with errors like these:

src/libavcodec/arm/vc1dsp_neon.S: Assembler messages:
src/libavcodec/arm/vc1dsp_neon.S:1579: Error: bad type for scalar -- `vmov r0,d4[1]'
src/libavcodec/arm/vc1dsp_neon.S:1582: Error: bad type for scalar -- `vmov r2,d5[1]'
src/libavcodec/arm/vc1dsp_neon.S:1592: Error: bad type for scalar -- `vmov r2,d8[1]'
src/libavcodec/arm/vc1dsp_neon.S:1595: Error: bad type for scalar -- `vmov r12,d9[1]'

Qualifying the "vmov" into "vmov.32" seems to fix it.

// Martin


* Re: [FFmpeg-devel] [PATCH 0/6] avcodec/vc1: Arm optimisations
  2022-03-19 23:06 ` [FFmpeg-devel] [PATCH 0/6] avcodec/vc1: Arm optimisations Martin Storsjö
@ 2022-03-19 23:07   ` Martin Storsjö
  2022-03-21 17:37   ` Ben Avison
  1 sibling, 0 replies; 55+ messages in thread
From: Martin Storsjö @ 2022-03-19 23:07 UTC (permalink / raw)
  To: FFmpeg development discussions and patches; +Cc: Ben Avison

On Sun, 20 Mar 2022, Martin Storsjö wrote:

> The other main issue I'd like to request is to indent the assembly similarly 
> to the rest of the existing assembly. For the 32 bit assembly, your patches 
> do match the surrounding code, but for the 64 bit assembly, your patches 
> align the operands column differently than the rest. (I think your code 
> aligns the operands with 16 chars to the left of the operands, while our code 
> aligns it with 24 chars to the left, both in 32 and 64 bit arm assembly.)

Oh, sidenote - I do see that the last patch in the set uses much more 
inconsistent indentation, with varying indentation between lines. Is this 
intentional to signify some structure in the code, or just accidental? I 
think it'd be preferable to have it use normal straight indentation all 
the way throughout.

// Martin

* Re: [FFmpeg-devel] [PATCH 6/6] avcodec/vc1: Introduce fast path for unescaping bitstream buffer
  2022-03-18 19:10   ` Andreas Rheinhardt
@ 2022-03-21 15:51     ` Ben Avison
  2022-03-21 20:44       ` Martin Storsjö
  0 siblings, 1 reply; 55+ messages in thread
From: Ben Avison @ 2022-03-21 15:51 UTC (permalink / raw)
  To: FFmpeg development discussions and patches, Andreas Rheinhardt

On 18/03/2022 19:10, Andreas Rheinhardt wrote:
> Ben Avison:
>> +static int vc1_unescape_buffer_neon(const uint8_t *src, int size, uint8_t *dst)
>> +{
>> +    /* Dealing with starting and stopping, and removing escape bytes, are
>> +     * comparatively less time-sensitive, so are more clearly expressed using
>> +     * a C wrapper around the assembly inner loop. Note that we assume a
>> +     * little-endian machine that supports unaligned loads. */
> 
> You should nevertheless use AV_RL32 for your unaligned LE loads

Thanks - I wasn't aware of that. I'll add it in.

> 1. You should add some benchmarks to the commit message.

Do you mean for each commit, or this one in particular? Are there any 
particular standard files you'd expect to see benchmarked, or will the 
ones I used in the cover-letter do? (Those were just snippets from 
problematic BluRay rips, but that does mean I don't have the rights to 
redistribute them.) I believe there should be conformance bitstreams for 
VC-1 somewhere, but I wasn't able to locate them.

During development, I wrote a simple benchmarker for this particular 
patch, which measures the throughput of processing random data (which 
doesn't contain the escape sequence at any point). I've just pushed it 
here if anyone's interested:

https://github.com/bavison/test-unescape

The compile-time define VERSION there takes a few different values:
1: the original C implementation of vc1_unescape_buffer()
2: an early prototype version I wrote that uses unaligned 32-bit loads, 
again in pure C
3: the NEON assembly versions

The sort of speeds this measures are:
             AArch32    AArch64
version 1   210 MB/s   292 MB/s
version 2   461 MB/s   435 MB/s
version 3  1294 MB/s  1554 MB/s

> 2. The unescaping process for VC1 is basically the same as for H.264 and
> HEVC* and for those we already have better optimized code in
> libavcodec/h2645_parse.c. Can you check the performance of this code
> here against (re)using the code from h2645_parse.c?

I've hacked that around a bit to match the calling conditions of 
vc1_unescape_buffer(), though not adapted it for the slightly different 
rules you noted for VC-1 as opposed to H.264/265. Hopefully it should 
still give some indication of the approximate performance that could be 
expected, but I didn't take time to fully understand everything it was 
doing, so do please say if I've messed something up.

This can be selected by #defining VERSION 4:

             AArch32    AArch64
version 4   737 MB/s  1286 MB/s

This suggests it's much better than the original C, but my NEON versions 
still have the edge, especially on AArch32. The NEON code is very much a 
brute force check, but it's effectively able to do the testing in 
parallel with the memcpy - each byte only gets loaded once.

Ben

* Re: [FFmpeg-devel] [PATCH 0/6] avcodec/vc1: Arm optimisations
  2022-03-19 23:06 ` [FFmpeg-devel] [PATCH 0/6] avcodec/vc1: Arm optimisations Martin Storsjö
  2022-03-19 23:07   ` Martin Storsjö
@ 2022-03-21 17:37   ` Ben Avison
  2022-03-21 22:29     ` Martin Storsjö
  1 sibling, 1 reply; 55+ messages in thread
From: Ben Avison @ 2022-03-21 17:37 UTC (permalink / raw)
  To: Martin Storsjö, FFmpeg development discussions and patches

Hi Martin,

Thanks very much for taking a look.

On 19/03/2022 23:06, Martin Storsjö wrote:
> As you are writing assembly for these functions, I would very much 
> appreciate if you could add checkasm tests for all the functions you're 
> implementing. I see that there exists a test for the blockdsp functions, 
> but all the other ones are missing a test.

I think I'd have a bit of a learning curve ahead of me there! I did 
write my own fuzz testers to check the validity of my assembly 
implementations, and I could share them (they'd probably need a bit of 
tidying up since I wasn't intending them for public consumption) but 
they were written in ignorance of the checkasm framework, so probably 
wouldn't slot in neatly.

Is there any writeup of checkasm anywhere, discussing how it's used, 
what sorts of things it tests, any speed/memory limits that tests should 
try to adhere to - that sort of thing?

> The other main issue I'd like to request is to indent the assembly 
> similarly to the rest of the existing assembly. For the 32 bit assembly, 
> your patches do match the surrounding code, but for the 64 bit assembly, 
> your patches align the operands column differently than the rest.

Since I was creating new source files for the 64-bit stuff, I assumed I 
had a bit of leeway in indentation style - but I can easily change it.

For what it's worth, the opcodes in AArch64 are significantly shorter 
than in AArch32, since the vector element size qualifiers go on the 
operands instead of the opcodes, so there's less need for extra indentation.

> Finally, the 32 bit assembly fails to build for me both with (recent) 
> clang and old binutils, with errors like these:
> 
> src/libavcodec/arm/vc1dsp_neon.S: Assembler messages:
> src/libavcodec/arm/vc1dsp_neon.S:1579: Error: bad type for scalar -- 
> `vmov r0,d4[1]'

Thanks - the Armv8-A ARM says (section F6.1.139) that the data type can 
be omitted here, and in that case it is equivalent to '32', so that's a 
bug in clang. But easy to work around.

> Oh, sidenote - I do see that the last patch in the set uses much more
> inconsistent indentation, with varying indentation between lines. Is
> this intentional to signify some structure in the code, or just
> accidental?

That was deliberate! The inner loop there is unrolled x2, and then 
adjacent iterations are overlapped 180 degrees out of phase. This is 
because each iteration starts off busy, with lots of instructions to 
execute, keeping pipelines full, and towards the end, it thins out, 
meaning we can benefit by using what would otherwise be stalls to 
speculatively start to process the next iteration before we've completed 
the current one.

Effectively, if you only read a series of instructions with matching 
indentation, you get one logical iteration of the loop - for example, in 
the AArch32 version, you can follow through the process from loading the 
source buffer into q10 (line 1849) until we store from it to the 
destination buffer, having determined that it doesn't contain the start 
of any escape sequences (line 1890).

It's a trick I've seen used a few times elsewhere, which is why I didn't 
bother explaining it in a comment. I could add one, or if you still 
don't like it once you've understood what it means, I'd be happy to take 
it out.

Ben

* Re: [FFmpeg-devel] [PATCH 6/6] avcodec/vc1: Introduce fast path for unescaping bitstream buffer
  2022-03-21 15:51     ` Ben Avison
@ 2022-03-21 20:44       ` Martin Storsjö
  0 siblings, 0 replies; 55+ messages in thread
From: Martin Storsjö @ 2022-03-21 20:44 UTC (permalink / raw)
  To: FFmpeg development discussions and patches; +Cc: Andreas Rheinhardt

On Mon, 21 Mar 2022, Ben Avison wrote:

> On 18/03/2022 19:10, Andreas Rheinhardt wrote:
>> Ben Avison:
>>> +static int vc1_unescape_buffer_neon(const uint8_t *src, int size, uint8_t 
>>> *dst)
>>> +{
>>> +    /* Dealing with starting and stopping, and removing escape bytes, are
>>> +     * comparatively less time-sensitive, so are more clearly expressed 
>>> using
>>> +     * a C wrapper around the assembly inner loop. Note that we assume a
>>> +     * little-endian machine that supports unaligned loads. */
>> 
>> You should nevertheless use AV_RL32 for your unaligned LE loads
>
> Thanks - I wasn't aware of that. I'll add it in.
>
>> 1. You should add some benchmarks to the commit message.
>
> Do you mean for each commit, or this one in particular? Are there any 
> particular standard files you'd expect to see benchmarked, or will the ones I 
> used in the cover-letter do?

With checkasm tests available, it'd be nice to have per-function 
benchmarks in each of the patches that adds/tweaks a new function - so 
you can see that the NEON version of a function is e.g. 8x faster 
than the corresponding C function. That usually verifies that this 
particular assembly function is beneficial (there have been cases where 
people have contributed code which turned out to be slower than what the C 
compiler produces).

Then overall, it can probably be nice to have a high level benchmark in 
e.g. the cover letter, like "speeds up decoding <random clip> from xx fps 
to yy fps on hardware zz".

(I'll make a longer reply to the other mail.)

// Martin


* Re: [FFmpeg-devel] [PATCH 0/6] avcodec/vc1: Arm optimisations
  2022-03-21 17:37   ` Ben Avison
@ 2022-03-21 22:29     ` Martin Storsjö
  0 siblings, 0 replies; 55+ messages in thread
From: Martin Storsjö @ 2022-03-21 22:29 UTC (permalink / raw)
  To: Ben Avison; +Cc: FFmpeg development discussions and patches

On Mon, 21 Mar 2022, Ben Avison wrote:

>
> On 19/03/2022 23:06, Martin Storsjö wrote:
>> As you are writing assembly for these functions, I would very much 
>> appreciate if you could add checkasm tests for all the functions you're 
>> implementing. I see that there exists a test for the blockdsp functions, 
>> but all the other ones are missing a test.
>
> I think I'd have a bit of a learning curve ahead of me there! I did write my 
> own fuzz testers to check the validity of my assembly implementations, and I 
> could share them (they'd probably need a bit of tidying up since I wasn't 
> intending them for public consumption) but they were written in ignorance of 
> the checkasm framework, so probably wouldn't slot in neatly.
>
> Is there any writeup of checkasm anywhere, discussing how it's used, what 
> sorts of things it tests, any speed/memory limits that tests should try to 
> adhere to - that sort of thing?

I'm not aware of any guide in itself, but I can try to do a short writeup 
here.

Checkasm is essentially a unit test framework for assembly functions. A 
test runs two versions of the same function, one reference (e.g. C 
implementation) and one test version (e.g. NEON implementation) against 
randomish input data, and compares the output to make sure that it matches 
(for the relevant part of the output buffer).

For each test, the main point lies in knowing what the expected/valid 
ranges of inputs are, so that you pick random inputs that are valid - and 
specifically test e.g. the buffer sizes that are used in practice by the 
decoder.

If a function usually has e.g. different codepaths internally, depending 
on some input values, you can make the test exercise all the different 
codepaths. Or if the function has potential overflows in some case, you 
can make part of the input buffer always contain the worst-case scenario, 
so that each run always tests for the overflows, even if the rest of the 
buffers are random data.

Another aspect of tests is what part of the output to check. E.g. 
functions in video codecs often write a rectangular block in a buffer. At 
the very least, a test can check that the contents within the expected 
region of the output buffer matches. But tests can also (optionally) take 
it one step further, and check that e.g. the function didn't accidentally 
write outside of the edges of the indended rectangle in the output buffer. 
Or in some cases it's expected that a function may overwrite e.g. up to 16 
bytes past the end of the payload in each row - then you intentionally 
wouldn't check that area.

Additionally, after comparing the tested version with the reference, 
checkasm can also optionally benchmark your functions. This runs the 
benchmark specifically of only the assembly function, nothing else, by 
running the function with the same input parameters e.g. 1000 times. It 
also measures the C version of the function, so that you can compare 
against that and see the speedup of your assembly work, in isolation.

(On Linux on ARM, it uses the perf timers by default. If you have enabled 
user mode access to the cycle counter registers, which I highly recommend, 
and configure with --disable-linux-perf, you get much more precise timing 
- to the point that you can measure the impact of different instruction 
scheduling setups on in-order cores, like the Cortex A53.)

Additionally, for the testing (but not for benchmarking), the functions 
get wrapped with extra setup to try to find lingering nonfunctional issues 
that don't show up when you just run the code.

E.g. one common issue is with functions that take a 32-bit value as 
argument. On a 64 bit architecture with arguments in registers, the upper 
32 bits of such a register are undefined - while in practice they're often 
zero. This can lead to issues later down the line, when e.g. a different 
or updated compiler suddenly happens to pass nonzero bits in the undefined 
part. The test wrapping tries to arrange so that these bits end up as 
nonzero, to catch such hidden bugs.

Additionally, it checks to make sure you've restored all callee saved 
registers. In most cases, if you happen to forget to restore e.g. a callee 
saved SIMD register, the effects of it normally don't show up soon (or 
at all), but may only show up much later depending on what the compiler 
did in a calling function. But all functions covered in checkasm get this 
checked for free.


After building checkasm, if you run e.g. ./tests/checkasm/checkasm, it 
runs all the tests for all functions, for all SIMD instruction sets 
available. (On ARM there's usually only NEON, but e.g. on X86 it first 
enables only MMX, tests all functions available there, then increasing 
levels with SSE2, SSSE3, etc, to test all potential implementations.) If 
you run e.g. checkasm --test=blockdsp or --test=h264dsp, it will only run 
the tests for that subsystem. If you further add --bench=h264_idct, it 
will benchmark all functions with a name starting with h264_idct.


One of the simplest tests to have a look at, to understand the structure, 
is tests/checkasm/blockdsp.c. First, the cpu feature mask is set so that 
av_get_cpu_flags() returns 0. Then the test main function, 
checkasm_check_blockdsp, is called, which initializes the DSP context 
(which then only gets assigned the reference C implementation of the 
function). In this case the test does nothing, as it compares the 
C implementation with itself. The next time around, av_get_cpu_flags 
returns NEON, and checkasm_check_blockdsp gets called again, where the DSP 
context now gets the NEON function assigned.

In blockdsp.c, the first call to check_func(h.func, 
"blockdsp.clear_block") stores a copy of the previous function pointer, 
the C reference function, in a map. On the second call, it digs up 
the previous version and keeps the new current version. These function 
pointers are used via the macros call_ref() and call_new(), with the same 
parameters as if you'd call the function directly. After running 
both, you inspect the output of them to see if they match, and if not 
you fail the test. Finally, the bench_new() macro checks if you've asked 
to benchmark this particular function. If this function is one of the 
functions to benchmark, it runs it N times with the provided parameters.
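
Condensed into code, that pattern looks roughly like this (schematic
only - the real test lives in tests/checkasm/blockdsp.c and differs in
its details; rnd(), LOCAL_ALIGNED_32 etc. come from the checkasm and
libavutil headers):

    declare_func(void, int16_t *block);

    if (check_func(h->clear_block, "blockdsp.clear_block")) {
        LOCAL_ALIGNED_32(int16_t, blk0, [64]);
        LOCAL_ALIGNED_32(int16_t, blk1, [64]);
        for (int i = 0; i < 64; i++)
            blk0[i] = blk1[i] = (rnd() & 0xFFFF) - 0x8000;
        call_ref(blk0);                 /* the stored C reference */
        call_new(blk1);                 /* the NEON (or other SIMD) version */
        if (memcmp(blk0, blk1, 64 * sizeof(*blk0)))
            fail();
        bench_new(blk1);                /* only measured when benchmarking */
    }
    report("clear_block");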

For your patches, the existing tests in h264dsp, vp8dsp, vp9dsp probably 
are good examples of such tests.

A small gotcha when/if you're adding a new checkasm test in a new file, 
under a new name. If you just run checkasm without parameters, it runs all 
the tests by default (as long as the test is hooked up in the main tests[] 
array in checkasm.c). But when running fate, it runs checkasm individually 
with one module at a time. So if adding a new test module in checkasm, be 
sure to add it to the test listing in tests/fate/checkasm.mak too.


The inverse transforms are tricky to test, because you probably can't feed 
them any random input data. The existing h264dsp, vp8dsp and vp9dsp 
inverse transform tests take random pixels and do a naive forward 
transform of them, so that you only get transform coefficients within the 
possible range.
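
In code, that input-constraining step is roughly the following
(naive_vc1_fwd_trans_8x8 is a made-up name - the test has to bring its
own reference forward transform matching the inverse transform under
test):

    int16_t residual[64], block[64];
    for (int i = 0; i < 64; i++)
        residual[i] = (rnd() & 0x1FF) - 0x100;  /* bounded residual range */
    naive_vc1_fwd_trans_8x8(residual, block);   /* yields legal coefficients */
    /* then run call_ref()/call_new() on copies of block and compare */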

For deblocking filters, those tests start with random-ish input data, but 
try to arrange coefficients in a way so that each block contains all 
possible combinations of data (above or below the threshold values). Or a 
test can run the functions multiple times, with input data arranged to 
trigger each special case.

For the unescape function, I'm not sure if we have any good examples of 
existing testcases that work on a similar function though. Try to come up 
with all cases of interesting input to the function (short buffers, long 
buffers, mod-4/non-mod-4 length, nothing to unescape, lots of things to 
unescape close together).
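
A loop over sizes with randomly planted escape sequences would cover
most of that list. A sketch (the sizes and the planting scheme are
arbitrary choices, and this sits inside a check_func() body like the
one shown earlier):

    for (int size = 1; size <= 192; size++) {   /* short/long, any mod 4 */
        int n0, n1;
        for (int i = 0; i < size; i++)
            src[i] = rnd();
        if (size >= 4 && (rnd() & 1)) {         /* sometimes plant an escape */
            int pos = rnd() % (size - 3);
            src[pos]     = 0x00;
            src[pos + 1] = 0x00;
            src[pos + 2] = 0x03;
            src[pos + 3] = rnd() & 3;           /* next byte <= 3 => escaped */
        }
        n0 = call_ref(src, size, dst0);
        n1 = call_new(src, size, dst1);
        if (n0 != n1 || memcmp(dst0, dst1, n0))
            fail();
    }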


>> The other main issue I'd like to request is to indent the assembly 
>> similarly to the rest of the existing assembly. For the 32 bit assembly, 
>> your patches do match the surrounding code, but for the 64 bit assembly, 
>> your patches align the operands column differently than the rest.
>
> Since I was creating new source files for the 64-bit stuff, I assumed I had a 
> bit of leeway in indentation style - but I can easily change it.

Ok, thanks, that'd be appreciated. Yeah I try to maintain consistency 
across files here.

> For what it's worth, the opcodes in AArch64 are significantly shorter than in 
> AArch32, since the vector element size qualifiers go on the operands instead 
> of the opcodes, so there's less need for extra indentation.

Yup, that's true. But for functions where instruction-like macros are 
used, the macro names often are a bit longer than regular instructions, so 
there the extra space is appreciated. And consistency is still nice when 
both 32 and 64 bit arm use the same indentation style; in many cases, code 
is ported between the two by just copying and slightly adjusting/rewriting 
e.g. register names and tweaking instruction names.

>> Finally, the 32 bit assembly fails to build for me both with (recent) clang 
>> and old binutils, with errors like these:
>> 
>> src/libavcodec/arm/vc1dsp_neon.S: Assembler messages:
>> src/libavcodec/arm/vc1dsp_neon.S:1579: Error: bad type for scalar -- `vmov 
>> r0,d4[1]'
>
> Thanks - the Armv8-A ARM says (section F6.1.139) that the data type can be 
> omitted here, and in that case it is equivalent to '32', so that's a bug in 
> clang. But easy to work around.

Ok, good. Yeah, bugs or not, we try to stick with the subset of assembly 
that builds on all toolchains that are regularly used for building.

>> Oh, sidenote - I do see that the last patch in the set uses much more
>> inconsistent indentation, with varying indentation between lines. Is
>> this intentional to signify some structure in the code, or just
>> accidental?
>
> That was deliberate! The inner loop there is unrolled x2, and then adjacent 
> iterations are overlapped 180 degrees out of phase. This is because each 
> iteration starts off busy, with lots of instructions to execute, keeping 
> pipelines full, and towards the end, it thins out, meaning we can benefit by 
> using what would otherwise be stalls to speculatively start to process the 
> next iteration before we've completed the current one.
>
> Effectively, if you only read a series of instructions with matching 
> indentation, you get one logical iteration of the loop - for example, in the 
> AArch32 version, you can follow through the process from loading the source 
> buffer into q10 (line 1849) until we store from it to the destination buffer, 
> having determined that it doesn't contain the start of any escape sequences 
> (line 1890).
>
> It's a trick I've seen used a few times elsewhere, which is why I didn't 
> bother explaining it in a comment. I could add one, or if you still don't 
> like it once you've understood what it means, I'd be happy to take it out.

Right, I see. (I didn't try to read the code and follow it yet, I just 
browsed your patches and testbuilt them.) I think it can be valuable to 
keep this nonstandard indentation as a readability/maintainability aid 
then. (But do shift the operand column 8 chars to the right for the 64 bit 
version.)

// Martin

* [FFmpeg-devel] [PATCH v2 00/10] avcodec/vc1: Arm optimisations
  2022-03-17 18:58 [FFmpeg-devel] [PATCH 0/6] avcodec/vc1: Arm optimisations Ben Avison
                   ` (6 preceding siblings ...)
  2022-03-19 23:06 ` [FFmpeg-devel] [PATCH 0/6] avcodec/vc1: Arm optimisations Martin Storsjö
@ 2022-03-25 18:52 ` Ben Avison
  2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 01/10] checkasm: Add vc1dsp in-loop deblocking filter tests Ben Avison
                     ` (9 more replies)
  7 siblings, 10 replies; 55+ messages in thread
From: Ben Avison @ 2022-03-25 18:52 UTC (permalink / raw)
  To: ffmpeg-devel; +Cc: Ben Avison

The VC1 decoder was missing lots of important fast paths for Arm, especially
for 64-bit Arm. This submission fills in implementations for all functions
where a fast path already existed and the fallback C implementation was
taking 1% or more of the runtime, and adds a new fast path to permit
vc1_unescape_buffer() to be overridden.

I've measured the playback speed on a 1.5 GHz Cortex-A72 (Raspberry Pi 4)
using `ffmpeg -i <bitstream> -f null -` for a couple of example streams:

Architecture:  AArch32    AArch32    AArch64    AArch64
Stream:        1          2          1          2
Before speed:  1.22x      0.82x      1.00x      0.67x
After speed:   1.31x      0.98x      1.39x      1.06x
Improvement:   7.4%       20%        39%        58%

`make fate` passes on both AArch32 and AArch64.

Changes in v2:

* Use AV_RL32 when performing unaligned loads from C.
* Work around bug in some assemblers which require a size specifier on VMOV
  scalar-to-general-purpose-register for AArch32.
* Increase operand indentation in AArch64 assembly.
* Add checkasm tests for each fast path for which they did not yet exist.
* Add benchmarks (generated via checkasm) to individual commits.
* Remove AArch64 blockdsp fast paths since it was impossible to demonstrate
  that they had any appreciable effect on timings.

Ben Avison (10):
  checkasm: Add vc1dsp in-loop deblocking filter tests
  checkasm: Add vc1dsp inverse transform tests
  checkasm: Add idctdsp add/put-pixels-clamped tests
  avcodec/vc1: Introduce fast path for unescaping bitstream buffer
  avcodec/vc1: Arm 64-bit NEON deblocking filter fast paths
  avcodec/vc1: Arm 32-bit NEON deblocking filter fast paths
  avcodec/vc1: Arm 64-bit NEON inverse transform fast paths
  avcodec/idctdsp: Arm 64-bit NEON block add and clamp fast paths
  avcodec/vc1: Arm 64-bit NEON unescape fast path
  avcodec/vc1: Arm 32-bit NEON unescape fast path

 libavcodec/aarch64/Makefile               |    4 +-
 libavcodec/aarch64/idctdsp_init_aarch64.c |   26 +-
 libavcodec/aarch64/idctdsp_neon.S         |  130 ++
 libavcodec/aarch64/vc1dsp_init_aarch64.c  |   94 ++
 libavcodec/aarch64/vc1dsp_neon.S          | 1552 +++++++++++++++++++++
 libavcodec/arm/idctdsp_init_arm.c         |    2 +
 libavcodec/arm/vc1dsp_init_neon.c         |   75 +
 libavcodec/arm/vc1dsp_neon.S              |  761 ++++++++++
 libavcodec/vc1dec.c                       |   20 +-
 libavcodec/vc1dsp.c                       |    2 +
 libavcodec/vc1dsp.h                       |    3 +
 tests/checkasm/Makefile                   |    2 +
 tests/checkasm/checkasm.c                 |    6 +
 tests/checkasm/checkasm.h                 |    2 +
 tests/checkasm/idctdsp.c                  |   85 ++
 tests/checkasm/vc1dsp.c                   |  411 ++++++
 tests/fate/checkasm.mak                   |    2 +
 17 files changed, 3158 insertions(+), 19 deletions(-)
 create mode 100644 libavcodec/aarch64/idctdsp_neon.S
 create mode 100644 libavcodec/aarch64/vc1dsp_neon.S
 create mode 100644 tests/checkasm/idctdsp.c
 create mode 100644 tests/checkasm/vc1dsp.c

-- 
2.25.1


* [FFmpeg-devel] [PATCH 01/10] checkasm: Add vc1dsp in-loop deblocking filter tests
  2022-03-25 18:52 ` [FFmpeg-devel] [PATCH v2 00/10] " Ben Avison
@ 2022-03-25 18:52   ` Ben Avison
  2022-03-25 22:53     ` Martin Storsjö
                       ` (2 more replies)
  2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 02/10] checkasm: Add vc1dsp inverse transform tests Ben Avison
                     ` (8 subsequent siblings)
  9 siblings, 3 replies; 55+ messages in thread
From: Ben Avison @ 2022-03-25 18:52 UTC (permalink / raw)
  To: ffmpeg-devel; +Cc: Ben Avison

Note that the benchmarking results for these functions are highly dependent
upon the input data. Therefore, each function is benchmarked twice,
corresponding to the best and worst case complexity of the reference C
implementation. The performance of a real stream decode will fall somewhere
between these two extremes.
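
(As the test arranges below, the same stepped buffer serves for both
measurements: with pq == 1 the a0 >= pq early-out is taken immediately,
giving the best case, while with pq == 31 the full filtering path runs,
giving the worst case.)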

Signed-off-by: Ben Avison <bavison@riscosopen.org>
---
 tests/checkasm/Makefile   |  1 +
 tests/checkasm/checkasm.c |  3 ++
 tests/checkasm/checkasm.h |  1 +
 tests/checkasm/vc1dsp.c   | 94 +++++++++++++++++++++++++++++++++++++++
 tests/fate/checkasm.mak   |  1 +
 5 files changed, 100 insertions(+)
 create mode 100644 tests/checkasm/vc1dsp.c

diff --git a/tests/checkasm/Makefile b/tests/checkasm/Makefile
index f768b1144e..7133a6ee66 100644
--- a/tests/checkasm/Makefile
+++ b/tests/checkasm/Makefile
@@ -11,6 +11,7 @@ AVCODECOBJS-$(CONFIG_H264PRED)          += h264pred.o
 AVCODECOBJS-$(CONFIG_H264QPEL)          += h264qpel.o
 AVCODECOBJS-$(CONFIG_LLVIDDSP)          += llviddsp.o
 AVCODECOBJS-$(CONFIG_LLVIDENCDSP)       += llviddspenc.o
+AVCODECOBJS-$(CONFIG_VC1DSP)            += vc1dsp.o
 AVCODECOBJS-$(CONFIG_VP8DSP)            += vp8dsp.o
 AVCODECOBJS-$(CONFIG_VIDEODSP)          += videodsp.o
 
diff --git a/tests/checkasm/checkasm.c b/tests/checkasm/checkasm.c
index 748d6a9f3a..c2efd81b6d 100644
--- a/tests/checkasm/checkasm.c
+++ b/tests/checkasm/checkasm.c
@@ -147,6 +147,9 @@ static const struct {
     #if CONFIG_V210_ENCODER
         { "v210enc", checkasm_check_v210enc },
     #endif
+    #if CONFIG_VC1DSP
+        { "vc1dsp", checkasm_check_vc1dsp },
+    #endif
     #if CONFIG_VP8DSP
         { "vp8dsp", checkasm_check_vp8dsp },
     #endif
diff --git a/tests/checkasm/checkasm.h b/tests/checkasm/checkasm.h
index c3192d8c23..52ab18a5b1 100644
--- a/tests/checkasm/checkasm.h
+++ b/tests/checkasm/checkasm.h
@@ -78,6 +78,7 @@ void checkasm_check_sw_scale(void);
 void checkasm_check_utvideodsp(void);
 void checkasm_check_v210dec(void);
 void checkasm_check_v210enc(void);
+void checkasm_check_vc1dsp(void);
 void checkasm_check_vf_eq(void);
 void checkasm_check_vf_gblur(void);
 void checkasm_check_vf_hflip(void);
diff --git a/tests/checkasm/vc1dsp.c b/tests/checkasm/vc1dsp.c
new file mode 100644
index 0000000000..db916d08f9
--- /dev/null
+++ b/tests/checkasm/vc1dsp.c
@@ -0,0 +1,94 @@
+/*
+ * Copyright (c) 2022 Ben Avison
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with FFmpeg; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+ */
+
+#include <string.h>
+
+#include "checkasm.h"
+
+#include "libavcodec/vc1dsp.h"
+
+#include "libavutil/common.h"
+#include "libavutil/internal.h"
+#include "libavutil/intreadwrite.h"
+#include "libavutil/mem_internal.h"
+
+#define RANDOMIZE_BUFFER8_MID_WEIGHTED(name, size)  \
+    do {                                            \
+        uint8_t *p##0 = name##0, *p##1 = name##1;   \
+        int i = (size);                             \
+        while (i-- > 0) {                           \
+            int x = 0x80 | (rnd() & 0x7F);          \
+            x >>= rnd() % 9;                        \
+            if (rnd() & 1)                          \
+                x = -x;                             \
+            *p##1++ = *p##0++ = 0x80 + x;           \
+        }                                           \
+    } while (0)
+
+#define CHECK_LOOP_FILTER(func)                                             \
+    do {                                                                    \
+        if (check_func(h.func, "vc1dsp." #func)) {                          \
+            declare_func_emms(AV_CPU_FLAG_MMX, void, uint8_t *, int, int);  \
+            for (int count = 1000; count > 0; --count) {                    \
+                int pq = rnd() % 31 + 1;                                    \
+                RANDOMIZE_BUFFER8_MID_WEIGHTED(filter_buf, 24 * 24);        \
+                call_ref(filter_buf0 + 4 * 24 + 4, 24, pq);                 \
+                call_new(filter_buf1 + 4 * 24 + 4, 24, pq);                 \
+                if (memcmp(filter_buf0, filter_buf1, 24 * 24))              \
+                    fail();                                                 \
+            }                                                               \
+        }                                                                   \
+        for (int j = 0; j < 24; ++j)                                        \
+            for (int i = 0; i < 24; ++i)                                    \
+                filter_buf1[24*j + i] = 0x60 + 0x40 * (i >= 4 && j >= 4);   \
+        if (check_func(h.func, "vc1dsp." #func "_bestcase")) {              \
+            declare_func_emms(AV_CPU_FLAG_MMX, void, uint8_t *, int, int);  \
+            bench_new(filter_buf1 + 4 * 24 + 4, 24, 1);                     \
+            (void) checked_call;                                            \
+        }                                                                   \
+        if (check_func(h.func, "vc1dsp." #func "_worstcase")) {             \
+            declare_func_emms(AV_CPU_FLAG_MMX, void, uint8_t *, int, int);  \
+            bench_new(filter_buf1 + 4 * 24 + 4, 24, 31);                    \
+            (void) checked_call;                                            \
+        }                                                                   \
+    } while (0)
+
+void checkasm_check_vc1dsp(void)
+{
+    /* Deblocking filter buffers are big enough to hold a 16x16 block,
+     * plus 4 rows/columns above/left to hold filter inputs (depending on
+     * whether v or h neighbouring block edge) plus 4 rows/columns
+     * right/below to catch write overflows */
+    LOCAL_ALIGNED_4(uint8_t, filter_buf0, [24 * 24]);
+    LOCAL_ALIGNED_4(uint8_t, filter_buf1, [24 * 24]);
+
+    VC1DSPContext h;
+
+    ff_vc1dsp_init(&h);
+
+    CHECK_LOOP_FILTER(vc1_v_loop_filter4);
+    CHECK_LOOP_FILTER(vc1_h_loop_filter4);
+    CHECK_LOOP_FILTER(vc1_v_loop_filter8);
+    CHECK_LOOP_FILTER(vc1_h_loop_filter8);
+    CHECK_LOOP_FILTER(vc1_v_loop_filter16);
+    CHECK_LOOP_FILTER(vc1_h_loop_filter16);
+
+    report("loop_filter");
+}
diff --git a/tests/fate/checkasm.mak b/tests/fate/checkasm.mak
index 6db8f09d12..99e6bb13c4 100644
--- a/tests/fate/checkasm.mak
+++ b/tests/fate/checkasm.mak
@@ -32,6 +32,7 @@ FATE_CHECKASM = fate-checkasm-aacpsdsp                                  \
                 fate-checkasm-utvideodsp                                \
                 fate-checkasm-v210dec                                   \
                 fate-checkasm-v210enc                                   \
+                fate-checkasm-vc1dsp                                    \
                 fate-checkasm-vf_blend                                  \
                 fate-checkasm-vf_colorspace                             \
                 fate-checkasm-vf_eq                                     \
-- 
2.25.1


* [FFmpeg-devel] [PATCH 02/10] checkasm: Add vc1dsp inverse transform tests
  2022-03-25 18:52 ` [FFmpeg-devel] [PATCH v2 00/10] " Ben Avison
  2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 01/10] checkasm: Add vc1dsp in-loop deblocking filter tests Ben Avison
@ 2022-03-25 18:52   ` Ben Avison
  2022-03-29 12:41     ` Martin Storsjö
  2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 03/10] checkasm: Add idctdsp add/put-pixels-clamped tests Ben Avison
                     ` (7 subsequent siblings)
  9 siblings, 1 reply; 55+ messages in thread
From: Ben Avison @ 2022-03-25 18:52 UTC (permalink / raw)
  To: ffmpeg-devel; +Cc: Ben Avison

This test deliberately doesn't exercise the full range of inputs described in
the committee draft VC-1 standard. It says:

input coefficients in frequency domain, D, satisfy   -2048 <= D < 2047
intermediate coefficients, E, satisfy                -4096 <= E < 4095
fully inverse-transformed coefficients, R, satisfy    -512 <= R <  511

For one thing, the inequalities look odd. Did they mean them to go the
other way round? That would make more sense because the equations generally
both add and subtract coefficients multiplied by constants, including powers
of 2. Requiring the most-negative values to be valid extends the number of
bits needed to represent the intermediate values just for the sake of that
one case!

For another thing, the extreme values don't appear to occur in real streams -
both in my experience and as supported by the following comment in the AArch32
decoder:

    tNhalf is half of the value of tN (as described in vc1_inv_trans_8x8_c).
    This is done because sometimes files have input that causes tN + tM to
    overflow. To avoid this overflow, we compute tNhalf, then compute
    tNhalf + tM (which doesn't overflow), and then we use vhadd to compute
    (tNhalf + (tNhalf + tM)) >> 1 which does not overflow because it is
    one instruction.
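
In C terms the quoted trick amounts to roughly the following (an
illustrative sketch, not the decoder source; hadd16 models what NEON's
vhadd does with its wider internal intermediate):

#include <stdint.h>

static int16_t hadd16(int16_t a, int16_t b)
{
    /* models vhadd.s16: (a + b) >> 1 computed via a 17-bit intermediate */
    return (int16_t)(((int32_t)a + b) >> 1);
}

static int16_t avg_no_overflow(int16_t tNhalf, int16_t tM)
{
    /* tNhalf + tM fits in 16 bits for in-range inputs; the halving add
     * then gives (tN + tM) >> 1, modulo the bit already discarded when
     * tNhalf was formed, without any 16-bit value overflowing. */
    return hadd16(tNhalf, (int16_t)(tNhalf + tM));
}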

My AArch64 decoder goes further than this. It calculates tNhalf and tM,
then does an SRA (essentially a fused halve and add) to compute
(tN + tM) >> 1 without ever having to hold (tNhalf + tM) in a 16-bit
element, and hence without overflowing. It only encounters difficulties
if either tNhalf or tM overflows in isolation.

I haven't had sight of the final standard, so it's possible that these
issues were dealt with during finalisation, which could explain the lack
of usage of extreme inputs in real streams. Or a preponderance of decoders
that only support 16-bit intermediate values in their inverse transforms
might have caused encoders to steer clear of such cases.

I have effectively followed this approach in the test, and limited the
scale of the coefficients sufficiently that both the existing AArch32
decoder and my new AArch64 decoder pass.

Signed-off-by: Ben Avison <bavison@riscosopen.org>
---
 tests/checkasm/vc1dsp.c | 258 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 258 insertions(+)

diff --git a/tests/checkasm/vc1dsp.c b/tests/checkasm/vc1dsp.c
index db916d08f9..0823ccad31 100644
--- a/tests/checkasm/vc1dsp.c
+++ b/tests/checkasm/vc1dsp.c
@@ -29,6 +29,200 @@
 #include "libavutil/intreadwrite.h"
 #include "libavutil/mem_internal.h"
 
+typedef struct matrix {
+    size_t width;
+    size_t height;
+    float d[];
+} matrix;
+
+static const matrix T8 = { 8, 8, {
+        12,  12,  12,  12,  12,  12,  12,  12,
+        16,  15,   9,   4,  -4,  -9, -15, -16,
+        16,   6,  -6, -16, -16,  -6,   6,  16,
+        15,  -4, -16,  -9,   9,  16,   4, -15,
+        12, -12, -12,  12,  12, -12, -12,  12,
+         9, -16,   4,  15, -15,  -4,  16,  -9,
+         6, -16,  16,  -6,  -6,  16, -16,   6,
+         4,  -9,  15, -16,  16, -15,   9,  -4
+} };
+
+static const matrix T4 = { 4, 4, {
+        17,  17,  17,  17,
+        22,  10, -10, -22,
+        17, -17, -17,  17,
+        10, -22,  22, -10
+} };
+
+static const matrix T8t = { 8, 8, {
+        12,  16,  16,  15,  12,   9,   6,   4,
+        12,  15,   6,  -4, -12, -16, -16,  -9,
+        12,   9,  -6, -16, -12,   4,  16,  15,
+        12,   4, -16,  -9,  12,  15,  -6, -16,
+        12,  -4, -16,   9,  12, -15,  -6,  16,
+        12,  -9,  -6,  16, -12,  -4,  16, -15,
+        12, -15,   6,   4, -12,  16, -16,   9,
+        12, -16,  16, -15,  12,  -9,   6,  -4
+} };
+
+static const matrix T4t = { 4, 4, {
+        17,  22,  17,  10,
+        17,  10, -17, -22,
+        17, -10, -17,  22,
+        17, -22,  17, -10
+} };
+
+static matrix *new_matrix(size_t width, size_t height)
+{
+    matrix *out = av_mallocz(sizeof (matrix) + height * width * sizeof (float));
+    if (out == NULL) {
+        fprintf(stderr, "Memory allocation failure\n");
+        exit(EXIT_FAILURE);
+    }
+    out->width = width;
+    out->height = height;
+    return out;
+}
+
+static matrix *multiply(const matrix *a, const matrix *b)
+{
+    matrix *out;
+    if (a->width != b->height) {
+        fprintf(stderr, "Incompatible multiplication\n");
+        exit(EXIT_FAILURE);
+    }
+    out = new_matrix(b->width, a->height);
+    for (int j = 0; j < out->height; ++j)
+        for (int i = 0; i < out->width; ++i) {
+            float sum = 0;
+            for (int k = 0; k < a->width; ++k)
+                sum += a->d[j * a->width + k] * b->d[k * b->width + i];
+            out->d[j * out->width + i] = sum;
+        }
+    return out;
+}
+
+static void normalise(matrix *a)
+{
+    for (int j = 0; j < a->height; ++j)
+        for (int i = 0; i < a->width; ++i) {
+            float *p = a->d + j * a->width + i;
+            *p *= 64;
+            if (a->height == 4)
+                *p /= (const unsigned[]) { 289, 292, 289, 292 } [j];
+            else
+                *p /= (const unsigned[]) { 288, 289, 292, 289, 288, 289, 292, 289 } [j];
+            if (a->width == 4)
+                *p /= (const unsigned[]) { 289, 292, 289, 292 } [i];
+            else
+                *p /= (const unsigned[]) { 288, 289, 292, 289, 288, 289, 292, 289 } [i];
+        }
+}
+
+static void divide_and_round_nearest(matrix *a, float by)
+{
+    for (int j = 0; j < a->height; ++j)
+        for (int i = 0; i < a->width; ++i) {
+            float *p = a->d + j * a->width + i;
+            *p = rintf(*p / by);
+        }
+}
+
+static void tweak(matrix *a)
+{
+    for (int j = 4; j < a->height; ++j)
+        for (int i = 0; i < a->width; ++i) {
+            float *p = a->d + j * a->width + i;
+            *p += 1;
+        }
+}
+
+/* The VC-1 spec places restrictions on the values permitted at three
+ * different stages:
+ * - D: the input coefficients in frequency domain
+ * - E: the intermediate coefficients, inverse-transformed only horizontally
+ * - R: the fully inverse-transformed coefficients
+ *
+ * To fully cater for the ranges specified requires various intermediate
+ * values to be held to 17-bit precision; yet these conditions do not appear
+ * to be utilised in real-world streams. At least some assembly
+ * implementations have chosen to restrict these values to 16-bit precision,
+ * to accelerate the decoding of real-world streams at the cost of strict
+ * adherence to the spec. To avoid our test marking these as failures,
+ * reduce our random inputs.
+ */
+#define ATTENUATION 4
+
+static matrix *generate_inverse_quantized_transform_coefficients(size_t width, size_t height)
+{
+    matrix *raw, *tmp, *D, *E, *R;
+    raw = new_matrix(width, height);
+    for (int i = 0; i < width * height; ++i)
+        raw->d[i] = (int) (rnd() % (1024/ATTENUATION)) - 512/ATTENUATION;
+    tmp = multiply(height == 8 ? &T8 : &T4, raw);
+    D = multiply(tmp, width == 8 ? &T8t : &T4t);
+    normalise(D);
+    divide_and_round_nearest(D, 1);
+    for (int i = 0; i < width * height; ++i) {
+        if (D->d[i] < -2048/ATTENUATION || D->d[i] > 2048/ATTENUATION-1) {
+            /* Rare, so simply try again */
+            av_free(raw);
+            av_free(tmp);
+            av_free(D);
+            return generate_inverse_quantized_transform_coefficients(width, height);
+        }
+    }
+    E = multiply(D, width == 8 ? &T8 : &T4);
+    divide_and_round_nearest(E, 8);
+    for (int i = 0; i < width * height; ++i)
+        if (E->d[i] < -4096/ATTENUATION || E->d[i] > 4096/ATTENUATION-1) {
+            /* Rare, so simply try again */
+            av_free(raw);
+            av_free(tmp);
+            av_free(D);
+            av_free(E);
+            return generate_inverse_quantized_transform_coefficients(width, height);
+        }
+    R = multiply(height == 8 ? &T8t : &T4t, E);
+    tweak(R);
+    divide_and_round_nearest(R, 128);
+    for (int i = 0; i < width * height; ++i)
+        if (R->d[i] < -512/ATTENUATION || R->d[i] > 512/ATTENUATION-1) {
+            /* Rare, so simply try again */
+            av_free(raw);
+            av_free(tmp);
+            av_free(D);
+            av_free(E);
+            av_free(R);
+            return generate_inverse_quantized_transform_coefficients(width, height);
+        }
+    av_free(raw);
+    av_free(tmp);
+    av_free(E);
+    av_free(R);
+    return D;
+}
+
+#define RANDOMIZE_BUFFER16(name, size)        \
+    do {                                      \
+        int i;                                \
+        for (i = 0; i < size; ++i) {          \
+            uint16_t r = rnd();               \
+            AV_WN16A(name##0 + i, r);         \
+            AV_WN16A(name##1 + i, r);         \
+        }                                     \
+    } while (0)
+
+#define RANDOMIZE_BUFFER8(name, size)         \
+    do {                                      \
+        int i;                                \
+        for (i = 0; i < size; ++i) {          \
+            uint8_t r = rnd();                \
+            name##0[i] = r;                   \
+            name##1[i] = r;                   \
+        }                                     \
+    } while (0)
+
+
 #define RANDOMIZE_BUFFER8_MID_WEIGHTED(name, size)  \
     do {                                            \
         uint8_t *p##0 = name##0, *p##1 = name##1;   \
@@ -42,6 +236,28 @@
         }                                           \
     } while (0)
 
+#define CHECK_INV_TRANS(func, width, height)                                            \
+    do {                                                                                \
+        if (check_func(h.func, "vc1dsp." #func)) {                                      \
+            matrix *coeffs;                                                             \
+            declare_func_emms(AV_CPU_FLAG_MMX, void, uint8_t *, ptrdiff_t, int16_t *);  \
+            RANDOMIZE_BUFFER16(inv_trans_in, 10 * 8);                                   \
+            RANDOMIZE_BUFFER8(inv_trans_out, 10 * 24);                                  \
+            coeffs = generate_inverse_quantized_transform_coefficients(width, height);  \
+            for (int j = 0; j < height; ++j)                                            \
+                for (int i = 0; i < width; ++i) {                                       \
+                    int idx = 8 + j * 8 + i;                                            \
+                    inv_trans_in1[idx] = inv_trans_in0[idx] = coeffs->d[j * width + i]; \
+                }                                                                       \
+            call_ref(inv_trans_out0 + 24 + 8, 24, inv_trans_in0 + 8);                   \
+            call_new(inv_trans_out1 + 24 + 8, 24, inv_trans_in1 + 8);                   \
+            if (memcmp(inv_trans_out0, inv_trans_out1, 10 * 24))                        \
+                fail();                                                                 \
+            bench_new(inv_trans_out1 + 24 + 8, 24, inv_trans_in1 + 8);                  \
+            av_free(coeffs);                                                            \
+        }                                                                               \
+    } while (0)
+
 #define CHECK_LOOP_FILTER(func)                                             \
     do {                                                                    \
         if (check_func(h.func, "vc1dsp." #func)) {                          \
@@ -72,6 +288,20 @@
 
 void checkasm_check_vc1dsp(void)
 {
+    /* Inverse transform input coefficients are stored in a 16-bit buffer
+     * with row stride of 8 coefficients irrespective of transform size.
+     * vc1_inv_trans_8x8 differs from the others in two ways: coefficients
+     * are stored in column-major order, and the outputs are written back
+     * to the input buffer, so we oversize it slightly to catch overruns. */
+    LOCAL_ALIGNED_16(int16_t, inv_trans_in0, [10 * 8]);
+    LOCAL_ALIGNED_16(int16_t, inv_trans_in1, [10 * 8]);
+
+    /* For all but vc1_inv_trans_8x8, the inverse transform is narrowed and
+     * added with saturation to an array of unsigned 8-bit values. Oversize
+     * this by 8 samples left and right and one row above and below. */
+    LOCAL_ALIGNED_8(uint8_t, inv_trans_out0, [10 * 24]);
+    LOCAL_ALIGNED_8(uint8_t, inv_trans_out1, [10 * 24]);
+
     /* Deblocking filter buffers are big enough to hold a 16x16 block,
      * plus 4 rows/columns above/left to hold filter inputs (depending on
      * whether v or h neighbouring block edge) plus 4 rows/columns
@@ -83,6 +313,34 @@ void checkasm_check_vc1dsp(void)
 
     ff_vc1dsp_init(&h);
 
+    if (check_func(h.vc1_inv_trans_8x8, "vc1dsp.vc1_inv_trans_8x8")) {
+        matrix *coeffs;
+        declare_func_emms(AV_CPU_FLAG_MMX, void, int16_t *);
+        RANDOMIZE_BUFFER16(inv_trans_in, 10 * 8);
+        coeffs = generate_inverse_quantized_transform_coefficients(8, 8);
+        for (int j = 0; j < 8; ++j)
+            for (int i = 0; i < 8; ++i) {
+                int idx = 8 + i * 8 + j;
+                inv_trans_in1[idx] = inv_trans_in0[idx] = coeffs->d[j * 8 + i];
+            }
+        call_ref(inv_trans_in0 + 8);
+        call_new(inv_trans_in1 + 8);
+        if (memcmp(inv_trans_in0,  inv_trans_in1,  10 * 8 * sizeof (int16_t)))
+            fail();
+        bench_new(inv_trans_in1 + 8);
+        av_free(coeffs);
+    }
+
+    CHECK_INV_TRANS(vc1_inv_trans_8x4, 8, 4);
+    CHECK_INV_TRANS(vc1_inv_trans_4x8, 4, 8);
+    CHECK_INV_TRANS(vc1_inv_trans_4x4, 4, 4);
+    CHECK_INV_TRANS(vc1_inv_trans_8x8_dc, 8, 8);
+    CHECK_INV_TRANS(vc1_inv_trans_8x4_dc, 8, 4);
+    CHECK_INV_TRANS(vc1_inv_trans_4x8_dc, 4, 8);
+    CHECK_INV_TRANS(vc1_inv_trans_4x4_dc, 4, 4);
+
+    report("inv_trans");
+
     CHECK_LOOP_FILTER(vc1_v_loop_filter4);
     CHECK_LOOP_FILTER(vc1_h_loop_filter4);
     CHECK_LOOP_FILTER(vc1_v_loop_filter8);
-- 
2.25.1


* [FFmpeg-devel] [PATCH 03/10] checkasm: Add idctdsp add/put-pixels-clamped tests
  2022-03-25 18:52 ` [FFmpeg-devel] [PATCH v2 00/10] " Ben Avison
  2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 01/10] checkasm: Add vc1dsp in-loop deblocking filter tests Ben Avison
  2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 02/10] checkasm: Add vc1dsp inverse transform tests Ben Avison
@ 2022-03-25 18:52   ` Ben Avison
  2022-03-29 13:13     ` Martin Storsjö
  2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 04/10] avcodec/vc1: Introduce fast path for unescaping bitstream buffer Ben Avison
                     ` (6 subsequent siblings)
  9 siblings, 1 reply; 55+ messages in thread
From: Ben Avison @ 2022-03-25 18:52 UTC (permalink / raw)
  To: ffmpeg-devel; +Cc: Ben Avison

Disable ff_add_pixels_clamped_arm, which was found to fail the test. As this
is normally only used on Arm cores prior to Armv6 (ARM11), it seems quite
unlikely that anyone is still using it, so I haven't put in the effort to
debug it.

Signed-off-by: Ben Avison <bavison@riscosopen.org>
---
 libavcodec/arm/idctdsp_init_arm.c |  2 +
 tests/checkasm/Makefile           |  1 +
 tests/checkasm/checkasm.c         |  3 ++
 tests/checkasm/checkasm.h         |  1 +
 tests/checkasm/idctdsp.c          | 85 +++++++++++++++++++++++++++++++
 tests/fate/checkasm.mak           |  1 +
 6 files changed, 93 insertions(+)
 create mode 100644 tests/checkasm/idctdsp.c

diff --git a/libavcodec/arm/idctdsp_init_arm.c b/libavcodec/arm/idctdsp_init_arm.c
index ebc90e4b49..8c8f7daf06 100644
--- a/libavcodec/arm/idctdsp_init_arm.c
+++ b/libavcodec/arm/idctdsp_init_arm.c
@@ -83,7 +83,9 @@ av_cold void ff_idctdsp_init_arm(IDCTDSPContext *c, AVCodecContext *avctx,
         }
     }
 
+#if 0 // FIXME: this implementation fails checkasm test
     c->add_pixels_clamped = ff_add_pixels_clamped_arm;
+#endif
 
     if (have_armv5te(cpu_flags))
         ff_idctdsp_init_armv5te(c, avctx, high_bit_depth);
diff --git a/tests/checkasm/Makefile b/tests/checkasm/Makefile
index 7133a6ee66..f6b1008855 100644
--- a/tests/checkasm/Makefile
+++ b/tests/checkasm/Makefile
@@ -9,6 +9,7 @@ AVCODECOBJS-$(CONFIG_G722DSP)           += g722dsp.o
 AVCODECOBJS-$(CONFIG_H264DSP)           += h264dsp.o
 AVCODECOBJS-$(CONFIG_H264PRED)          += h264pred.o
 AVCODECOBJS-$(CONFIG_H264QPEL)          += h264qpel.o
+AVCODECOBJS-$(CONFIG_IDCTDSP)           += idctdsp.o
 AVCODECOBJS-$(CONFIG_LLVIDDSP)          += llviddsp.o
 AVCODECOBJS-$(CONFIG_LLVIDENCDSP)       += llviddspenc.o
 AVCODECOBJS-$(CONFIG_VC1DSP)            += vc1dsp.o
diff --git a/tests/checkasm/checkasm.c b/tests/checkasm/checkasm.c
index c2efd81b6d..57134f96ea 100644
--- a/tests/checkasm/checkasm.c
+++ b/tests/checkasm/checkasm.c
@@ -123,6 +123,9 @@ static const struct {
     #if CONFIG_HUFFYUV_DECODER
         { "huffyuvdsp", checkasm_check_huffyuvdsp },
     #endif
+    #if CONFIG_IDCTDSP
+        { "idctdsp", checkasm_check_idctdsp },
+    #endif
     #if CONFIG_JPEG2000_DECODER
         { "jpeg2000dsp", checkasm_check_jpeg2000dsp },
     #endif
diff --git a/tests/checkasm/checkasm.h b/tests/checkasm/checkasm.h
index 52ab18a5b1..a86db140e3 100644
--- a/tests/checkasm/checkasm.h
+++ b/tests/checkasm/checkasm.h
@@ -64,6 +64,7 @@ void checkasm_check_hevc_idct(void);
 void checkasm_check_hevc_pel(void);
 void checkasm_check_hevc_sao(void);
 void checkasm_check_huffyuvdsp(void);
+void checkasm_check_idctdsp(void);
 void checkasm_check_jpeg2000dsp(void);
 void checkasm_check_llviddsp(void);
 void checkasm_check_llviddspenc(void);
diff --git a/tests/checkasm/idctdsp.c b/tests/checkasm/idctdsp.c
new file mode 100644
index 0000000000..d94728b672
--- /dev/null
+++ b/tests/checkasm/idctdsp.c
@@ -0,0 +1,85 @@
+/*
+ * Copyright (c) 2022 Ben Avison
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with FFmpeg; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+ */
+
+#include <string.h>
+
+#include "checkasm.h"
+
+#include "libavcodec/idctdsp.h"
+
+#include "libavutil/common.h"
+#include "libavutil/internal.h"
+#include "libavutil/intreadwrite.h"
+#include "libavutil/mem_internal.h"
+
+#define RANDOMIZE_BUFFER16(name, size)        \
+    do {                                      \
+        int i;                                \
+        for (i = 0; i < size; ++i) {          \
+            uint16_t r = rnd();               \
+            AV_WN16A(name##0 + i, r);         \
+            AV_WN16A(name##1 + i, r);         \
+        }                                     \
+    } while (0)
+
+#define RANDOMIZE_BUFFER8(name, size)         \
+    do {                                      \
+        int i;                                \
+        for (i = 0; i < size; ++i) {          \
+            uint8_t r = rnd();                \
+            name##0[i] = r;                   \
+            name##1[i] = r;                   \
+        }                                     \
+    } while (0)
+
+#define CHECK_ADD_PUT_CLAMPED(func)                                                             \
+    do {                                                                                        \
+        if (check_func(h.func, "idctdsp." #func)) {                                             \
+            declare_func_emms(AV_CPU_FLAG_MMX, void, const int16_t *, uint8_t *, ptrdiff_t);    \
+            RANDOMIZE_BUFFER16(src, 64);                                                        \
+            RANDOMIZE_BUFFER8(dst, 10 * 24);                                                    \
+            call_ref(src0, dst0 + 24 + 8, 24);                                                  \
+            call_new(src1, dst1 + 24 + 8, 24);                                                  \
+            if (memcmp(dst0, dst1, 10 * 24))                                                    \
+                fail();                                                                         \
+            bench_new(src1, dst1 + 24 + 8, 24);                                                 \
+        }                                                                                       \
+    } while (0)
+
+void checkasm_check_idctdsp(void)
+{
+    /* Source buffers are only as big as needed, since any over-read won't affect results */
+    LOCAL_ALIGNED_16(int16_t, src0, [64]);
+    LOCAL_ALIGNED_16(int16_t, src1, [64]);
+    /* Destination buffers have borders of one row above/below and 8 columns left/right to catch overflows */
+    LOCAL_ALIGNED_8(uint8_t, dst0, [10 * 24]);
+    LOCAL_ALIGNED_8(uint8_t, dst1, [10 * 24]);
+
+    AVCodecContext avctx = { 0 };
+    IDCTDSPContext h;
+
+    ff_idctdsp_init(&h, &avctx);
+
+    CHECK_ADD_PUT_CLAMPED(add_pixels_clamped);
+    CHECK_ADD_PUT_CLAMPED(put_pixels_clamped);
+    CHECK_ADD_PUT_CLAMPED(put_signed_pixels_clamped);
+
+    report("idctdsp");
+}
diff --git a/tests/fate/checkasm.mak b/tests/fate/checkasm.mak
index 99e6bb13c4..c6273db183 100644
--- a/tests/fate/checkasm.mak
+++ b/tests/fate/checkasm.mak
@@ -19,6 +19,7 @@ FATE_CHECKASM = fate-checkasm-aacpsdsp                                  \
                 fate-checkasm-hevc_pel                                  \
                 fate-checkasm-hevc_sao                                  \
                 fate-checkasm-huffyuvdsp                                \
+                fate-checkasm-idctdsp                                   \
                 fate-checkasm-jpeg2000dsp                               \
                 fate-checkasm-llviddsp                                  \
                 fate-checkasm-llviddspenc                               \
-- 
2.25.1


* [FFmpeg-devel] [PATCH 04/10] avcodec/vc1: Introduce fast path for unescaping bitstream buffer
  2022-03-25 18:52 ` [FFmpeg-devel] [PATCH v2 00/10] " Ben Avison
                     ` (2 preceding siblings ...)
  2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 03/10] checkasm: Add idctdsp add/put-pixels-clamped tests Ben Avison
@ 2022-03-25 18:52   ` Ben Avison
  2022-03-29 20:37     ` Martin Storsjö
  2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 05/10] avcodec/vc1: Arm 64-bit NEON deblocking filter fast paths Ben Avison
                     ` (5 subsequent siblings)
  9 siblings, 1 reply; 55+ messages in thread
From: Ben Avison @ 2022-03-25 18:52 UTC (permalink / raw)
  To: ffmpeg-devel; +Cc: Ben Avison

Includes a checkasm test.
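
For context, the operation being made overridable is the removal of
VC-1 startcode emulation prevention bytes: a 0x03 byte that follows two
zero bytes and precedes a byte less than 4 is an escape byte and is
dropped. Roughly (a sketch of the rule, not the exact FFmpeg source):

#include <stdint.h>

/* Sketch only: in the escaped bitstream, 00 00 03 0X (X < 4) carries
 * the payload bytes 00 00 0X; the 03 exists purely to stop the payload
 * emulating a startcode. Returns the number of bytes written to dst. */
static int unescape_sketch(const uint8_t *src, int size, uint8_t *dst)
{
    int dsize = 0;
    for (int i = 0; i < size; i++) {
        if (i >= 2 && i + 1 < size &&
            src[i] == 0x03 && !src[i - 1] && !src[i - 2] && src[i + 1] < 4)
            continue;           /* drop the emulation prevention byte */
        dst[dsize++] = src[i];
    }
    return dsize;
}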

Signed-off-by: Ben Avison <bavison@riscosopen.org>
---
 libavcodec/vc1dec.c     | 20 +++++++-------
 libavcodec/vc1dsp.c     |  2 ++
 libavcodec/vc1dsp.h     |  3 +++
 tests/checkasm/vc1dsp.c | 59 +++++++++++++++++++++++++++++++++++++++++
 4 files changed, 74 insertions(+), 10 deletions(-)

diff --git a/libavcodec/vc1dec.c b/libavcodec/vc1dec.c
index 1c92b9d401..6a30b5b664 100644
--- a/libavcodec/vc1dec.c
+++ b/libavcodec/vc1dec.c
@@ -490,7 +490,7 @@ static av_cold int vc1_decode_init(AVCodecContext *avctx)
             size = next - start - 4;
             if (size <= 0)
                 continue;
-            buf2_size = vc1_unescape_buffer(start + 4, size, buf2);
+            buf2_size = v->vc1dsp.vc1_unescape_buffer(start + 4, size, buf2);
             init_get_bits(&gb, buf2, buf2_size * 8);
             switch (AV_RB32(start)) {
             case VC1_CODE_SEQHDR:
@@ -680,7 +680,7 @@ static int vc1_decode_frame(AVCodecContext *avctx, void *data,
                 case VC1_CODE_FRAME:
                     if (avctx->hwaccel)
                         buf_start = start;
-                    buf_size2 = vc1_unescape_buffer(start + 4, size, buf2);
+                    buf_size2 = v->vc1dsp.vc1_unescape_buffer(start + 4, size, buf2);
                     break;
                 case VC1_CODE_FIELD: {
                     int buf_size3;
@@ -697,8 +697,8 @@ static int vc1_decode_frame(AVCodecContext *avctx, void *data,
                         ret = AVERROR(ENOMEM);
                         goto err;
                     }
-                    buf_size3 = vc1_unescape_buffer(start + 4, size,
-                                                    slices[n_slices].buf);
+                    buf_size3 = v->vc1dsp.vc1_unescape_buffer(start + 4, size,
+                                                              slices[n_slices].buf);
                     init_get_bits(&slices[n_slices].gb, slices[n_slices].buf,
                                   buf_size3 << 3);
                     slices[n_slices].mby_start = avctx->coded_height + 31 >> 5;
@@ -709,7 +709,7 @@ static int vc1_decode_frame(AVCodecContext *avctx, void *data,
                     break;
                 }
                 case VC1_CODE_ENTRYPOINT: /* it should be before frame data */
-                    buf_size2 = vc1_unescape_buffer(start + 4, size, buf2);
+                    buf_size2 = v->vc1dsp.vc1_unescape_buffer(start + 4, size, buf2);
                     init_get_bits(&s->gb, buf2, buf_size2 * 8);
                     ff_vc1_decode_entry_point(avctx, v, &s->gb);
                     break;
@@ -726,8 +726,8 @@ static int vc1_decode_frame(AVCodecContext *avctx, void *data,
                         ret = AVERROR(ENOMEM);
                         goto err;
                     }
-                    buf_size3 = vc1_unescape_buffer(start + 4, size,
-                                                    slices[n_slices].buf);
+                    buf_size3 = v->vc1dsp.vc1_unescape_buffer(start + 4, size,
+                                                              slices[n_slices].buf);
                     init_get_bits(&slices[n_slices].gb, slices[n_slices].buf,
                                   buf_size3 << 3);
                     slices[n_slices].mby_start = get_bits(&slices[n_slices].gb, 9);
@@ -761,7 +761,7 @@ static int vc1_decode_frame(AVCodecContext *avctx, void *data,
                     ret = AVERROR(ENOMEM);
                     goto err;
                 }
-                buf_size3 = vc1_unescape_buffer(divider + 4, buf + buf_size - divider - 4, slices[n_slices].buf);
+                buf_size3 = v->vc1dsp.vc1_unescape_buffer(divider + 4, buf + buf_size - divider - 4, slices[n_slices].buf);
                 init_get_bits(&slices[n_slices].gb, slices[n_slices].buf,
                               buf_size3 << 3);
                 slices[n_slices].mby_start = s->mb_height + 1 >> 1;
@@ -770,9 +770,9 @@ static int vc1_decode_frame(AVCodecContext *avctx, void *data,
                 n_slices1 = n_slices - 1;
                 n_slices++;
             }
-            buf_size2 = vc1_unescape_buffer(buf, divider - buf, buf2);
+            buf_size2 = v->vc1dsp.vc1_unescape_buffer(buf, divider - buf, buf2);
         } else {
-            buf_size2 = vc1_unescape_buffer(buf, buf_size, buf2);
+            buf_size2 = v->vc1dsp.vc1_unescape_buffer(buf, buf_size, buf2);
         }
         init_get_bits(&s->gb, buf2, buf_size2*8);
     } else{
diff --git a/libavcodec/vc1dsp.c b/libavcodec/vc1dsp.c
index a29b91bf3d..11d493f002 100644
--- a/libavcodec/vc1dsp.c
+++ b/libavcodec/vc1dsp.c
@@ -34,6 +34,7 @@
 #include "rnd_avg.h"
 #include "vc1dsp.h"
 #include "startcode.h"
+#include "vc1_common.h"
 
 /* Apply overlap transform to horizontal edge */
 static void vc1_v_overlap_c(uint8_t *src, int stride)
@@ -1030,6 +1031,7 @@ av_cold void ff_vc1dsp_init(VC1DSPContext *dsp)
 #endif /* CONFIG_WMV3IMAGE_DECODER || CONFIG_VC1IMAGE_DECODER */
 
     dsp->startcode_find_candidate = ff_startcode_find_candidate_c;
+    dsp->vc1_unescape_buffer      = vc1_unescape_buffer;
 
     if (ARCH_AARCH64)
         ff_vc1dsp_init_aarch64(dsp);
diff --git a/libavcodec/vc1dsp.h b/libavcodec/vc1dsp.h
index c6443acb20..8be1198071 100644
--- a/libavcodec/vc1dsp.h
+++ b/libavcodec/vc1dsp.h
@@ -80,6 +80,9 @@ typedef struct VC1DSPContext {
      * one or more further zero bytes and a one byte.
      */
     int (*startcode_find_candidate)(const uint8_t *buf, int size);
+
+    /* Copy a buffer, removing startcode emulation escape bytes as we go */
+    int (*vc1_unescape_buffer)(const uint8_t *src, int size, uint8_t *dst);
 } VC1DSPContext;
 
 void ff_vc1dsp_init(VC1DSPContext* c);
diff --git a/tests/checkasm/vc1dsp.c b/tests/checkasm/vc1dsp.c
index 0823ccad31..0ab5892403 100644
--- a/tests/checkasm/vc1dsp.c
+++ b/tests/checkasm/vc1dsp.c
@@ -286,6 +286,20 @@ static matrix *generate_inverse_quantized_transform_coefficients(size_t width, s
         }                                                                   \
     } while (0)
 
+#define TEST_UNESCAPE                                                                                   \
+    do {                                                                                            \
+        for (int count = 100; count > 0; --count) {                                                 \
+            escaped_offset = rnd() & 7;                                                             \
+            unescaped_offset = rnd() & 7;                                                           \
+            escaped_len = (1u << (rnd() % 8) + 3) - (rnd() & 7);                                    \
+            RANDOMIZE_BUFFER8(unescaped, UNESCAPE_BUF_SIZE);                                        \
+            len0 = call_ref(escaped0 + escaped_offset, escaped_len, unescaped0 + unescaped_offset); \
+            len1 = call_new(escaped1 + escaped_offset, escaped_len, unescaped1 + unescaped_offset); \
+            if (len0 != len1 || memcmp(unescaped0, unescaped1, len0))                               \
+                fail();                                                                             \
+        }                                                                                           \
+    } while (0)
+
 void checkasm_check_vc1dsp(void)
 {
     /* Inverse transform input coefficients are stored in a 16-bit buffer
@@ -309,6 +323,14 @@ void checkasm_check_vc1dsp(void)
     LOCAL_ALIGNED_4(uint8_t, filter_buf0, [24 * 24]);
     LOCAL_ALIGNED_4(uint8_t, filter_buf1, [24 * 24]);
 
+    /* This appears to be a typical length of buffer in use */
+#define LOG2_UNESCAPE_BUF_SIZE 17
+#define UNESCAPE_BUF_SIZE (1u<<LOG2_UNESCAPE_BUF_SIZE)
+    LOCAL_ALIGNED_8(uint8_t, escaped0, [UNESCAPE_BUF_SIZE]);
+    LOCAL_ALIGNED_8(uint8_t, escaped1, [UNESCAPE_BUF_SIZE]);
+    LOCAL_ALIGNED_8(uint8_t, unescaped0, [UNESCAPE_BUF_SIZE]);
+    LOCAL_ALIGNED_8(uint8_t, unescaped1, [UNESCAPE_BUF_SIZE]);
+
     VC1DSPContext h;
 
     ff_vc1dsp_init(&h);
@@ -349,4 +371,41 @@ void checkasm_check_vc1dsp(void)
     CHECK_LOOP_FILTER(vc1_h_loop_filter16);
 
     report("loop_filter");
+
+    if (check_func(h.vc1_unescape_buffer, "vc1dsp.vc1_unescape_buffer")) {
+        int len0, len1, escaped_offset, unescaped_offset, escaped_len;
+        declare_func_emms(AV_CPU_FLAG_MMX, int, const uint8_t *, int, uint8_t *);
+
+        /* Test data which consists of escape sequences packed as tightly as possible */
+        for (int x = 0; x < UNESCAPE_BUF_SIZE; ++x)
+            escaped1[x] = escaped0[x] = 3 * (x % 3 == 0);
+        TEST_UNESCAPE;
+
+        /* Test random data */
+        RANDOMIZE_BUFFER8(escaped, UNESCAPE_BUF_SIZE);
+        TEST_UNESCAPE;
+
+        /* Test data with escape sequences at random intervals */
+        for (int x = 0; x <= UNESCAPE_BUF_SIZE - 4;) {
+            int gap, gap_msb;
+            escaped1[x+0] = escaped0[x+0] = 0;
+            escaped1[x+1] = escaped0[x+1] = 0;
+            escaped1[x+2] = escaped0[x+2] = 3;
+            escaped1[x+3] = escaped0[x+3] = rnd() & 3;
+            gap_msb = 2u << (rnd() % 8);
+            gap = (rnd() &~ -gap_msb) | gap_msb;
+            x += gap;
+        }
+        TEST_UNESCAPE;
+
+        /* Test data which is known to contain no escape sequences */
+        memset(escaped0, 0xFF, UNESCAPE_BUF_SIZE);
+        memset(escaped1, 0xFF, UNESCAPE_BUF_SIZE);
+        TEST_UNESCAPE;
+
+        /* Benchmark the no-escape-sequences case */
+        bench_new(escaped1, UNESCAPE_BUF_SIZE, unescaped1);
+    }
+
+    report("unescape_buffer");
 }
-- 
2.25.1


* [FFmpeg-devel] [PATCH 05/10] avcodec/vc1: Arm 64-bit NEON deblocking filter fast paths
  2022-03-25 18:52 ` [FFmpeg-devel] [PATCH v2 00/10] " Ben Avison
                     ` (3 preceding siblings ...)
  2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 04/10] avcodec/vc1: Introduce fast path for unescaping bitstream buffer Ben Avison
@ 2022-03-25 18:52   ` Ben Avison
  2022-03-30 12:35     ` Martin Storsjö
  2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 06/10] avcodec/vc1: Arm 32-bit " Ben Avison
                     ` (4 subsequent siblings)
  9 siblings, 1 reply; 55+ messages in thread
From: Ben Avison @ 2022-03-25 18:52 UTC (permalink / raw)
  To: ffmpeg-devel; +Cc: Ben Avison

checkasm benchmarks on 1.5 GHz Cortex-A72 are as follows. Note that the C
version can still outperform the NEON version in specific cases. The balance
between different code paths is stream-dependent, but in practice the best
case happens about 5% of the time, the worst case happens about 40% of the
time, and the complexity of the remaining cases falls somewhere in between.
Therefore, taking the average of the best and worst case timings is
probably a conservative estimate of the degree by which the NEON code
improves performance.

vc1dsp.vc1_h_loop_filter4_bestcase_c: 10.7
vc1dsp.vc1_h_loop_filter4_bestcase_neon: 43.5
vc1dsp.vc1_h_loop_filter4_worstcase_c: 184.5
vc1dsp.vc1_h_loop_filter4_worstcase_neon: 73.7
vc1dsp.vc1_h_loop_filter8_bestcase_c: 31.2
vc1dsp.vc1_h_loop_filter8_bestcase_neon: 62.2
vc1dsp.vc1_h_loop_filter8_worstcase_c: 358.2
vc1dsp.vc1_h_loop_filter8_worstcase_neon: 88.2
vc1dsp.vc1_h_loop_filter16_bestcase_c: 51.0
vc1dsp.vc1_h_loop_filter16_bestcase_neon: 107.7
vc1dsp.vc1_h_loop_filter16_worstcase_c: 722.7
vc1dsp.vc1_h_loop_filter16_worstcase_neon: 140.5
vc1dsp.vc1_v_loop_filter4_bestcase_c: 9.7
vc1dsp.vc1_v_loop_filter4_bestcase_neon: 43.0
vc1dsp.vc1_v_loop_filter4_worstcase_c: 178.7
vc1dsp.vc1_v_loop_filter4_worstcase_neon: 69.0
vc1dsp.vc1_v_loop_filter8_bestcase_c: 30.2
vc1dsp.vc1_v_loop_filter8_bestcase_neon: 50.7
vc1dsp.vc1_v_loop_filter8_worstcase_c: 353.0
vc1dsp.vc1_v_loop_filter8_worstcase_neon: 69.2
vc1dsp.vc1_v_loop_filter16_bestcase_c: 60.0
vc1dsp.vc1_v_loop_filter16_bestcase_neon: 90.0
vc1dsp.vc1_v_loop_filter16_worstcase_c: 714.2
vc1dsp.vc1_v_loop_filter16_worstcase_neon: 97.2
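
As a worked example of that estimate, for vc1_v_loop_filter16 the C
version averages (60.0 + 714.2) / 2 = 387.1 while the NEON version
averages (90.0 + 97.2) / 2 = 93.6, i.e. roughly a 4x speedup under the
best/worst averaging described above.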

Signed-off-by: Ben Avison <bavison@riscosopen.org>
---
 libavcodec/aarch64/Makefile              |   1 +
 libavcodec/aarch64/vc1dsp_init_aarch64.c |  14 +
 libavcodec/aarch64/vc1dsp_neon.S         | 698 +++++++++++++++++++++++
 3 files changed, 713 insertions(+)
 create mode 100644 libavcodec/aarch64/vc1dsp_neon.S

diff --git a/libavcodec/aarch64/Makefile b/libavcodec/aarch64/Makefile
index 954461f81d..5b25e4dfb9 100644
--- a/libavcodec/aarch64/Makefile
+++ b/libavcodec/aarch64/Makefile
@@ -48,6 +48,7 @@ NEON-OBJS-$(CONFIG_IDCTDSP)             += aarch64/simple_idct_neon.o
 NEON-OBJS-$(CONFIG_MDCT)                += aarch64/mdct_neon.o
 NEON-OBJS-$(CONFIG_MPEGAUDIODSP)        += aarch64/mpegaudiodsp_neon.o
 NEON-OBJS-$(CONFIG_PIXBLOCKDSP)         += aarch64/pixblockdsp_neon.o
+NEON-OBJS-$(CONFIG_VC1DSP)              += aarch64/vc1dsp_neon.o
 NEON-OBJS-$(CONFIG_VP8DSP)              += aarch64/vp8dsp_neon.o
 
 # decoders/encoders
diff --git a/libavcodec/aarch64/vc1dsp_init_aarch64.c b/libavcodec/aarch64/vc1dsp_init_aarch64.c
index 13dfd74940..edfb296b75 100644
--- a/libavcodec/aarch64/vc1dsp_init_aarch64.c
+++ b/libavcodec/aarch64/vc1dsp_init_aarch64.c
@@ -25,6 +25,13 @@
 
 #include "config.h"
 
+void ff_vc1_v_loop_filter4_neon(uint8_t *src, int stride, int pq);
+void ff_vc1_h_loop_filter4_neon(uint8_t *src, int stride, int pq);
+void ff_vc1_v_loop_filter8_neon(uint8_t *src, int stride, int pq);
+void ff_vc1_h_loop_filter8_neon(uint8_t *src, int stride, int pq);
+void ff_vc1_v_loop_filter16_neon(uint8_t *src, int stride, int pq);
+void ff_vc1_h_loop_filter16_neon(uint8_t *src, int stride, int pq);
+
 void ff_put_vc1_chroma_mc8_neon(uint8_t *dst, uint8_t *src, ptrdiff_t stride,
                                 int h, int x, int y);
 void ff_avg_vc1_chroma_mc8_neon(uint8_t *dst, uint8_t *src, ptrdiff_t stride,
@@ -39,6 +46,13 @@ av_cold void ff_vc1dsp_init_aarch64(VC1DSPContext *dsp)
     int cpu_flags = av_get_cpu_flags();
 
     if (have_neon(cpu_flags)) {
+        dsp->vc1_v_loop_filter4  = ff_vc1_v_loop_filter4_neon;
+        dsp->vc1_h_loop_filter4  = ff_vc1_h_loop_filter4_neon;
+        dsp->vc1_v_loop_filter8  = ff_vc1_v_loop_filter8_neon;
+        dsp->vc1_h_loop_filter8  = ff_vc1_h_loop_filter8_neon;
+        dsp->vc1_v_loop_filter16 = ff_vc1_v_loop_filter16_neon;
+        dsp->vc1_h_loop_filter16 = ff_vc1_h_loop_filter16_neon;
+
         dsp->put_no_rnd_vc1_chroma_pixels_tab[0] = ff_put_vc1_chroma_mc8_neon;
         dsp->avg_no_rnd_vc1_chroma_pixels_tab[0] = ff_avg_vc1_chroma_mc8_neon;
         dsp->put_no_rnd_vc1_chroma_pixels_tab[1] = ff_put_vc1_chroma_mc4_neon;
diff --git a/libavcodec/aarch64/vc1dsp_neon.S b/libavcodec/aarch64/vc1dsp_neon.S
new file mode 100644
index 0000000000..70391b4179
--- /dev/null
+++ b/libavcodec/aarch64/vc1dsp_neon.S
@@ -0,0 +1,698 @@
+/*
+ * VC1 AArch64 NEON optimisations
+ *
+ * Copyright (c) 2022 Ben Avison <bavison@riscosopen.org>
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include "libavutil/aarch64/asm.S"
+
+.align  5
+.Lcoeffs:
+.quad   0x00050002
+
+// VC-1 in-loop deblocking filter for 4 pixel pairs at boundary of vertically-neighbouring blocks
+// On entry:
+//   x0 -> top-left pel of lower block
+//   w1 = row stride, bytes
+//   w2 = PQUANT bitstream parameter
+function ff_vc1_v_loop_filter4_neon, export=1
+        sub             x3, x0, w1, sxtw #2
+        sxtw            x1, w1                  // technically, stride is signed int
+        ldr             d0, .Lcoeffs
+        ld1             {v1.s}[0], [x0], x1     // P5
+        ld1             {v2.s}[0], [x3], x1     // P1
+        ld1             {v3.s}[0], [x3], x1     // P2
+        ld1             {v4.s}[0], [x0], x1     // P6
+        ld1             {v5.s}[0], [x3], x1     // P3
+        ld1             {v6.s}[0], [x0], x1     // P7
+        ld1             {v7.s}[0], [x3]         // P4
+        ld1             {v16.s}[0], [x0]        // P8
+        ushll           v17.8h, v1.8b, #1       // 2*P5
+        dup             v18.8h, w2              // pq
+        ushll           v2.8h, v2.8b, #1        // 2*P1
+        uxtl            v3.8h, v3.8b            // P2
+        uxtl            v4.8h, v4.8b            // P6
+        uxtl            v19.8h, v5.8b           // P3
+        mls             v2.4h, v3.4h, v0.h[1]   // 2*P1-5*P2
+        uxtl            v3.8h, v6.8b            // P7
+        mls             v17.4h, v4.4h, v0.h[1]  // 2*P5-5*P6
+        ushll           v5.8h, v5.8b, #1        // 2*P3
+        uxtl            v6.8h, v7.8b            // P4
+        mla             v17.4h, v3.4h, v0.h[1]  // 2*P5-5*P6+5*P7
+        uxtl            v3.8h, v16.8b           // P8
+        mla             v2.4h, v19.4h, v0.h[1]  // 2*P1-5*P2+5*P3
+        uxtl            v1.8h, v1.8b            // P5
+        mls             v5.4h, v6.4h, v0.h[1]   // 2*P3-5*P4
+        mls             v17.4h, v3.4h, v0.h[0]  // 2*P5-5*P6+5*P7-2*P8
+        sub             v3.4h, v6.4h, v1.4h     // P4-P5
+        mls             v2.4h, v6.4h, v0.h[0]   // 2*P1-5*P2+5*P3-2*P4
+        mla             v5.4h, v1.4h, v0.h[1]   // 2*P3-5*P4+5*P5
+        mls             v5.4h, v4.4h, v0.h[0]   // 2*P3-5*P4+5*P5-2*P6
+        abs             v4.4h, v3.4h
+        srshr           v7.4h, v17.4h, #3
+        srshr           v2.4h, v2.4h, #3
+        sshr            v4.4h, v4.4h, #1        // clip
+        srshr           v5.4h, v5.4h, #3
+        abs             v7.4h, v7.4h            // a2
+        sshr            v3.4h, v3.4h, #8        // clip_sign
+        abs             v2.4h, v2.4h            // a1
+        cmeq            v16.4h, v4.4h, #0       // test clip == 0
+        abs             v17.4h, v5.4h           // a0
+        sshr            v5.4h, v5.4h, #8        // a0_sign
+        cmhs            v19.4h, v2.4h, v7.4h    // test a1 >= a2
+        cmhs            v18.4h, v17.4h, v18.4h  // test a0 >= pq
+        sub             v3.4h, v3.4h, v5.4h     // clip_sign - a0_sign
+        bsl             v19.8b, v7.8b, v2.8b    // a3
+        orr             v2.8b, v16.8b, v18.8b   // test clip == 0 || a0 >= pq
+        uqsub           v5.4h, v17.4h, v19.4h   // a0 >= a3 ? a0-a3 : 0  (a0 > a3 in all cases where filtering is enabled, so makes more sense to subtract this way round than the opposite and then taking the abs)
+        cmhs            v7.4h, v19.4h, v17.4h   // test a3 >= a0
+        mul             v0.4h, v5.4h, v0.h[1]   // a0 >= a3 ? 5*(a0-a3) : 0
+        orr             v5.8b, v2.8b, v7.8b     // test clip == 0 || a0 >= pq || a3 >= a0
+        mov             w0, v5.s[1]             // move to gp reg
+        ushr            v0.4h, v0.4h, #3        // a0 >= a3 ? (5*(a0-a3))>>3 : 0
+        cmhs            v5.4h, v0.4h, v4.4h
+        tbnz            w0, #0, 1f              // none of the 4 pixel pairs should be updated if this one is not filtered
+        bsl             v5.8b, v4.8b, v0.8b     // FFMIN(d, clip)
+        bic             v0.8b, v5.8b, v2.8b     // set each d to zero if it should not be filtered because clip == 0 || a0 >= pq (a3 > a0 case already zeroed by saturating sub)
+        mls             v6.4h, v0.4h, v3.4h     // invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P4
+        mla             v1.4h, v0.4h, v3.4h     // invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P5
+        sqxtun          v0.8b, v6.8h
+        sqxtun          v1.8b, v1.8h
+        st1             {v0.s}[0], [x3], x1
+        st1             {v1.s}[0], [x3]
+1:      ret
+endfunc
+
+// VC-1 in-loop deblocking filter for 4 pixel pairs at boundary of horizontally-neighbouring blocks
+// On entry:
+//   x0 -> top-left pel of right block
+//   w1 = row stride, bytes
+//   w2 = PQUANT bitstream parameter
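+// The edge is vertical here, so whole rows are loaded and transposed with
+// trn1/trn2 so that the four samples of each of P1..P8 end up grouped in lanes.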
+function ff_vc1_h_loop_filter4_neon, export=1
+        sub             x3, x0, #4              // where to start reading
+        sxtw            x1, w1                  // technically, stride is signed int
+        ldr             d0, .Lcoeffs
+        ld1             {v1.8b}, [x3], x1
+        sub             x0, x0, #1              // where to start writing
+        ld1             {v2.8b}, [x3], x1
+        ld1             {v3.8b}, [x3], x1
+        ld1             {v4.8b}, [x3]
+        dup             v5.8h, w2               // pq
+        trn1            v6.8b, v1.8b, v2.8b
+        trn2            v1.8b, v1.8b, v2.8b
+        trn1            v2.8b, v3.8b, v4.8b
+        trn2            v3.8b, v3.8b, v4.8b
+        trn1            v4.4h, v6.4h, v2.4h     // P1, P5
+        trn1            v7.4h, v1.4h, v3.4h     // P2, P6
+        trn2            v2.4h, v6.4h, v2.4h     // P3, P7
+        trn2            v1.4h, v1.4h, v3.4h     // P4, P8
+        ushll           v3.8h, v4.8b, #1        // 2*P1, 2*P5
+        uxtl            v6.8h, v7.8b            // P2, P6
+        uxtl            v7.8h, v2.8b            // P3, P7
+        uxtl            v1.8h, v1.8b            // P4, P8
+        mls             v3.8h, v6.8h, v0.h[1]   // 2*P1-5*P2, 2*P5-5*P6
+        ushll           v2.8h, v2.8b, #1        // 2*P3, 2*P7
+        uxtl            v4.8h, v4.8b            // P1, P5
+        mla             v3.8h, v7.8h, v0.h[1]   // 2*P1-5*P2+5*P3, 2*P5-5*P6+5*P7
+        mov             d6, v6.d[1]             // P6
+        mls             v3.8h, v1.8h, v0.h[0]   // 2*P1-5*P2+5*P3-2*P4, 2*P5-5*P6+5*P7-2*P8
+        mov             d4, v4.d[1]             // P5
+        mls             v2.4h, v1.4h, v0.h[1]   // 2*P3-5*P4
+        mla             v2.4h, v4.4h, v0.h[1]   // 2*P3-5*P4+5*P5
+        sub             v7.4h, v1.4h, v4.4h     // P4-P5
+        mls             v2.4h, v6.4h, v0.h[0]   // 2*P3-5*P4+5*P5-2*P6
+        srshr           v3.8h, v3.8h, #3
+        abs             v6.4h, v7.4h
+        sshr            v7.4h, v7.4h, #8        // clip_sign
+        srshr           v2.4h, v2.4h, #3
+        abs             v3.8h, v3.8h            // a1, a2
+        sshr            v6.4h, v6.4h, #1        // clip
+        mov             d16, v3.d[1]            // a2
+        abs             v17.4h, v2.4h           // a0
+        cmeq            v18.4h, v6.4h, #0       // test clip == 0
+        sshr            v2.4h, v2.4h, #8        // a0_sign
+        cmhs            v19.4h, v3.4h, v16.4h   // test a1 >= a2
+        cmhs            v5.4h, v17.4h, v5.4h    // test a0 >= pq
+        sub             v2.4h, v7.4h, v2.4h     // clip_sign - a0_sign
+        bsl             v19.8b, v16.8b, v3.8b   // a3
+        orr             v3.8b, v18.8b, v5.8b    // test clip == 0 || a0 >= pq
+        uqsub           v5.4h, v17.4h, v19.4h   // a0 >= a3 ? a0-a3 : 0  (a0 > a3 in all cases where filtering is enabled, so it makes more sense to subtract this way round than to subtract the other way and take the abs)
+        cmhs            v7.4h, v19.4h, v17.4h   // test a3 >= a0
+        mul             v0.4h, v5.4h, v0.h[1]   // a0 >= a3 ? 5*(a0-a3) : 0
+        orr             v5.8b, v3.8b, v7.8b     // test clip == 0 || a0 >= pq || a3 >= a0
+        mov             w2, v5.s[1]             // move to gp reg
+        ushr            v0.4h, v0.4h, #3        // a0 >= a3 ? (5*(a0-a3))>>3 : 0
+        cmhs            v5.4h, v0.4h, v6.4h
+        tbnz            w2, #0, 1f              // none of the 4 pixel pairs should be updated if this one is not filtered
+        bsl             v5.8b, v6.8b, v0.8b     // FFMIN(d, clip)
+        bic             v0.8b, v5.8b, v3.8b     // set each d to zero if it should not be filtered because clip == 0 || a0 >= pq (a3 > a0 case already zeroed by saturating sub)
+        mla             v4.4h, v0.4h, v2.4h     // invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P5
+        mls             v1.4h, v0.4h, v2.4h     // invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P4
+        sqxtun          v3.8b, v4.8h
+        sqxtun          v2.8b, v1.8h
+        st2             {v2.b, v3.b}[0], [x0], x1
+        st2             {v2.b, v3.b}[1], [x0], x1
+        st2             {v2.b, v3.b}[2], [x0], x1
+        st2             {v2.b, v3.b}[3], [x0]
+1:      ret
+endfunc
+
+// VC-1 in-loop deblocking filter for 8 pixel pairs at boundary of vertically-neighbouring blocks
+// On entry:
+//   x0 -> top-left pel of lower block
+//   w1 = row stride, bytes
+//   w2 = PQUANT bitstream parameter
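+// Same filter as the 4-pair version, widened to 8 lanes; the mask built
+// below (0x0000ffff00000000) implements the rule that the 2nd pair of
+// each group of 4 gates the other pairs in its group.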
+function ff_vc1_v_loop_filter8_neon, export=1
+        sub             x3, x0, w1, sxtw #2
+        sxtw            x1, w1                  // technically, stride is signed int
+        ldr             d0, .Lcoeffs
+        ld1             {v1.8b}, [x0], x1       // P5
+        movi            v2.2d, #0x0000ffff00000000  // selects the 2nd pair's flag in each group of 4
+        ld1             {v3.8b}, [x3], x1       // P1
+        ld1             {v4.8b}, [x3], x1       // P2
+        ld1             {v5.8b}, [x0], x1       // P6
+        ld1             {v6.8b}, [x3], x1       // P3
+        ld1             {v7.8b}, [x0], x1       // P7
+        ushll           v16.8h, v1.8b, #1       // 2*P5
+        ushll           v3.8h, v3.8b, #1        // 2*P1
+        ld1             {v17.8b}, [x3]          // P4
+        uxtl            v4.8h, v4.8b            // P2
+        ld1             {v18.8b}, [x0]          // P8
+        uxtl            v5.8h, v5.8b            // P6
+        dup             v19.8h, w2              // pq
+        uxtl            v20.8h, v6.8b           // P3
+        mls             v3.8h, v4.8h, v0.h[1]   // 2*P1-5*P2
+        uxtl            v4.8h, v7.8b            // P7
+        ushll           v6.8h, v6.8b, #1        // 2*P3
+        mls             v16.8h, v5.8h, v0.h[1]  // 2*P5-5*P6
+        uxtl            v7.8h, v17.8b           // P4
+        uxtl            v17.8h, v18.8b          // P8
+        mla             v16.8h, v4.8h, v0.h[1]  // 2*P5-5*P6+5*P7
+        uxtl            v1.8h, v1.8b            // P5
+        mla             v3.8h, v20.8h, v0.h[1]  // 2*P1-5*P2+5*P3
+        sub             v4.8h, v7.8h, v1.8h     // P4-P5
+        mls             v6.8h, v7.8h, v0.h[1]   // 2*P3-5*P4
+        mls             v16.8h, v17.8h, v0.h[0] // 2*P5-5*P6+5*P7-2*P8
+        abs             v17.8h, v4.8h
+        sshr            v4.8h, v4.8h, #8        // clip_sign
+        mls             v3.8h, v7.8h, v0.h[0]   // 2*P1-5*P2+5*P3-2*P4
+        sshr            v17.8h, v17.8h, #1      // clip
+        mla             v6.8h, v1.8h, v0.h[1]   // 2*P3-5*P4+5*P5
+        srshr           v16.8h, v16.8h, #3
+        mls             v6.8h, v5.8h, v0.h[0]   // 2*P3-5*P4+5*P5-2*P6
+        cmeq            v5.8h, v17.8h, #0       // test clip == 0
+        srshr           v3.8h, v3.8h, #3
+        abs             v16.8h, v16.8h          // a2
+        abs             v3.8h, v3.8h            // a1
+        srshr           v6.8h, v6.8h, #3
+        cmhs            v18.8h, v3.8h, v16.8h   // test a1 >= a2
+        abs             v20.8h, v6.8h           // a0
+        sshr            v6.8h, v6.8h, #8        // a0_sign
+        bsl             v18.16b, v16.16b, v3.16b // a3
+        cmhs            v3.8h, v20.8h, v19.8h   // test a0 >= pq
+        sub             v4.8h, v4.8h, v6.8h     // clip_sign - a0_sign
+        uqsub           v6.8h, v20.8h, v18.8h   // a0 >= a3 ? a0-a3 : 0  (a0 > a3 in all cases where filtering is enabled, so it makes more sense to subtract this way round than to subtract the other way and take the abs)
+        cmhs            v16.8h, v18.8h, v20.8h  // test a3 >= a0
+        orr             v3.16b, v5.16b, v3.16b  // test clip == 0 || a0 >= pq
+        mul             v0.8h, v6.8h, v0.h[1]   // a0 >= a3 ? 5*(a0-a3) : 0
+        orr             v5.16b, v3.16b, v16.16b // test clip == 0 || a0 >= pq || a3 >= a0
+        cmtst           v2.2d, v5.2d, v2.2d     // if the 2nd of each group of 4 is not filtered, then none of the others in the group should be either
+        mov             w0, v5.s[1]             // move to gp reg
+        ushr            v0.8h, v0.8h, #3        // a0 >= a3 ? (5*(a0-a3))>>3 : 0
+        mov             w2, v5.s[3]
+        orr             v2.16b, v3.16b, v2.16b
+        cmhs            v3.8h, v0.8h, v17.8h
+        and             w0, w0, w2
+        bsl             v3.16b, v17.16b, v0.16b // FFMIN(d, clip)
+        tbnz            w0, #0, 1f              // none of the 8 pixel pairs should be updated in this case
+        bic             v0.16b, v3.16b, v2.16b  // set each d to zero if it should not be filtered
+        mls             v7.8h, v0.8h, v4.8h     // invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P4
+        mla             v1.8h, v0.8h, v4.8h     // invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P5
+        sqxtun          v0.8b, v7.8h
+        sqxtun          v1.8b, v1.8h
+        st1             {v0.8b}, [x3], x1
+        st1             {v1.8b}, [x3]
+1:      ret
+endfunc
+
+// VC-1 in-loop deblocking filter for 8 pixel pairs at boundary of horizontally-neighbouring blocks
+// On entry:
+//   x0 -> top-left pel of right block
+//   w1 = row stride, bytes
+//   w2 = PQUANT bitstream parameter
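+// Eight rows are loaded and transposed; the two groups of 4 rows are gated
+// and stored separately under w2 and w3.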
+function ff_vc1_h_loop_filter8_neon, export=1
+        sub             x3, x0, #4              // where to start reading
+        sxtw            x1, w1                  // technically, stride is signed int
+        ldr             d0, .Lcoeffs
+        ld1             {v1.8b}, [x3], x1       // P1[0], P2[0]...
+        sub             x0, x0, #1              // where to start writing
+        ld1             {v2.8b}, [x3], x1
+        add             x4, x0, x1, lsl #2
+        ld1             {v3.8b}, [x3], x1
+        ld1             {v4.8b}, [x3], x1
+        ld1             {v5.8b}, [x3], x1
+        ld1             {v6.8b}, [x3], x1
+        ld1             {v7.8b}, [x3], x1
+        trn1            v16.8b, v1.8b, v2.8b    // P1[0], P1[1], P3[0]...
+        ld1             {v17.8b}, [x3]
+        trn2            v1.8b, v1.8b, v2.8b     // P2[0], P2[1], P4[0]...
+        trn1            v2.8b, v3.8b, v4.8b     // P1[2], P1[3], P3[2]...
+        trn2            v3.8b, v3.8b, v4.8b     // P2[2], P2[3], P4[2]...
+        dup             v4.8h, w2               // pq
+        trn1            v18.8b, v5.8b, v6.8b    // P1[4], P1[5], P3[4]...
+        trn2            v5.8b, v5.8b, v6.8b     // P2[4], P2[5], P4[4]...
+        trn1            v6.4h, v16.4h, v2.4h    // P1[0], P1[1], P1[2], P1[3], P5[0]...
+        trn1            v19.4h, v1.4h, v3.4h    // P2[0], P2[1], P2[2], P2[3], P6[0]...
+        trn1            v20.8b, v7.8b, v17.8b   // P1[6], P1[7], P3[6]...
+        trn2            v7.8b, v7.8b, v17.8b    // P2[6], P2[7], P4[6]...
+        trn2            v2.4h, v16.4h, v2.4h    // P3[0], P3[1], P3[2], P3[3], P7[0]...
+        trn2            v1.4h, v1.4h, v3.4h     // P4[0], P4[1], P4[2], P4[3], P8[0]...
+        trn1            v3.4h, v18.4h, v20.4h   // P1[4], P1[5], P1[6], P1[7], P5[4]...
+        trn1            v16.4h, v5.4h, v7.4h    // P2[4], P2[5], P2[6], P2[7], P6[4]...
+        trn2            v17.4h, v18.4h, v20.4h  // P3[4], P3[5], P3[6], P3[7], P7[4]...
+        trn2            v5.4h, v5.4h, v7.4h     // P4[4], P4[5], P4[6], P4[7], P8[4]...
+        trn1            v7.2s, v6.2s, v3.2s     // P1
+        trn1            v18.2s, v19.2s, v16.2s  // P2
+        trn2            v3.2s, v6.2s, v3.2s     // P5
+        trn2            v6.2s, v19.2s, v16.2s   // P6
+        trn1            v16.2s, v2.2s, v17.2s   // P3
+        trn2            v2.2s, v2.2s, v17.2s    // P7
+        ushll           v7.8h, v7.8b, #1        // 2*P1
+        trn1            v17.2s, v1.2s, v5.2s    // P4
+        ushll           v19.8h, v3.8b, #1       // 2*P5
+        trn2            v1.2s, v1.2s, v5.2s     // P8
+        uxtl            v5.8h, v18.8b           // P2
+        uxtl            v6.8h, v6.8b            // P6
+        uxtl            v18.8h, v16.8b          // P3
+        mls             v7.8h, v5.8h, v0.h[1]   // 2*P1-5*P2
+        uxtl            v2.8h, v2.8b            // P7
+        ushll           v5.8h, v16.8b, #1       // 2*P3
+        mls             v19.8h, v6.8h, v0.h[1]  // 2*P5-5*P6
+        uxtl            v16.8h, v17.8b          // P4
+        uxtl            v1.8h, v1.8b            // P8
+        mla             v19.8h, v2.8h, v0.h[1]  // 2*P5-5*P6+5*P7
+        uxtl            v2.8h, v3.8b            // P5
+        mla             v7.8h, v18.8h, v0.h[1]  // 2*P1-5*P2+5*P3
+        sub             v3.8h, v16.8h, v2.8h    // P4-P5
+        mls             v5.8h, v16.8h, v0.h[1]  // 2*P3-5*P4
+        mls             v19.8h, v1.8h, v0.h[0]  // 2*P5-5*P6+5*P7-2*P8
+        abs             v1.8h, v3.8h
+        sshr            v3.8h, v3.8h, #8        // clip_sign
+        mls             v7.8h, v16.8h, v0.h[0]  // 2*P1-5*P2+5*P3-2*P4
+        sshr            v1.8h, v1.8h, #1        // clip
+        mla             v5.8h, v2.8h, v0.h[1]   // 2*P3-5*P4+5*P5
+        srshr           v17.8h, v19.8h, #3
+        mls             v5.8h, v6.8h, v0.h[0]   // 2*P3-5*P4+5*P5-2*P6
+        cmeq            v6.8h, v1.8h, #0        // test clip == 0
+        srshr           v7.8h, v7.8h, #3
+        abs             v17.8h, v17.8h          // a2
+        abs             v7.8h, v7.8h            // a1
+        srshr           v5.8h, v5.8h, #3
+        cmhs            v18.8h, v7.8h, v17.8h   // test a1 >= a2
+        abs             v19.8h, v5.8h           // a0
+        sshr            v5.8h, v5.8h, #8        // a0_sign
+        bsl             v18.16b, v17.16b, v7.16b // a3
+        cmhs            v4.8h, v19.8h, v4.8h    // test a0 >= pq
+        sub             v3.8h, v3.8h, v5.8h     // clip_sign - a0_sign
+        uqsub           v5.8h, v19.8h, v18.8h   // a0 >= a3 ? a0-a3 : 0  (a0 > a3 in all cases where filtering is enabled, so it makes more sense to subtract this way round than to subtract the other way and take the abs)
+        cmhs            v7.8h, v18.8h, v19.8h   // test a3 >= a0
+        orr             v4.16b, v6.16b, v4.16b  // test clip == 0 || a0 >= pq
+        mul             v0.8h, v5.8h, v0.h[1]   // a0 >= a3 ? 5*(a0-a3) : 0
+        orr             v5.16b, v4.16b, v7.16b  // test clip == 0 || a0 >= pq || a3 >= a0
+        mov             w2, v5.s[1]             // move to gp reg
+        ushr            v0.8h, v0.8h, #3        // a0 >= a3 ? (5*(a0-a3))>>3 : 0
+        mov             w3, v5.s[3]
+        cmhs            v5.8h, v0.8h, v1.8h
+        and             w5, w2, w3
+        bsl             v5.16b, v1.16b, v0.16b  // FFMIN(d, clip)
+        tbnz            w5, #0, 2f              // none of the 8 pixel pairs should be updated in this case
+        bic             v0.16b, v5.16b, v4.16b  // set each d to zero if it should not be filtered because clip == 0 || a0 >= pq (a3 > a0 case already zeroed by saturating sub)
+        mla             v2.8h, v0.8h, v3.8h     // invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P5
+        mls             v16.8h, v0.8h, v3.8h    // invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P4
+        sqxtun          v1.8b, v2.8h
+        sqxtun          v0.8b, v16.8h
+        tbnz            w2, #0, 1f              // none of the first 4 pixel pairs should be updated if so
+        st2             {v0.b, v1.b}[0], [x0], x1
+        st2             {v0.b, v1.b}[1], [x0], x1
+        st2             {v0.b, v1.b}[2], [x0], x1
+        st2             {v0.b, v1.b}[3], [x0]
+1:      tbnz            w3, #0, 2f              // none of the second 4 pixel pairs should be updated if so
+        st2             {v0.b, v1.b}[4], [x4], x1
+        st2             {v0.b, v1.b}[5], [x4], x1
+        st2             {v0.b, v1.b}[6], [x4], x1
+        st2             {v0.b, v1.b}[7], [x4]
+2:      ret
+endfunc
+
+// VC-1 in-loop deblocking filter for 16 pixel pairs at boundary of vertically-neighbouring blocks
+// On entry:
+//   x0 -> top-left pel of lower block
+//   w1 = row stride, bytes
+//   w2 = PQUANT bitstream parameter
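+// The 16 columns are processed as two interleaved 8-lane halves, [0..7]
+// and [8..15].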
+function ff_vc1_v_loop_filter16_neon, export=1
+        sub             x3, x0, w1, sxtw #2
+        sxtw            x1, w1                  // technically, stride is signed int
+        ldr             d0, .Lcoeffs
+        ld1             {v1.16b}, [x0], x1      // P5
+        movi            v2.2d, #0x0000ffff00000000  // selects the 2nd pair's flag in each group of 4
+        ld1             {v3.16b}, [x3], x1      // P1
+        ld1             {v4.16b}, [x3], x1      // P2
+        ld1             {v5.16b}, [x0], x1      // P6
+        ld1             {v6.16b}, [x3], x1      // P3
+        ld1             {v7.16b}, [x0], x1      // P7
+        ushll           v16.8h, v1.8b, #1       // 2*P5[0..7]
+        ushll           v17.8h, v3.8b, #1       // 2*P1[0..7]
+        ld1             {v18.16b}, [x3]         // P4
+        uxtl            v19.8h, v4.8b           // P2[0..7]
+        ld1             {v20.16b}, [x0]         // P8
+        uxtl            v21.8h, v5.8b           // P6[0..7]
+        dup             v22.8h, w2              // pq
+        ushll2          v3.8h, v3.16b, #1       // 2*P1[8..15]
+        mls             v17.8h, v19.8h, v0.h[1] // 2*P1[0..7]-5*P2[0..7]
+        ushll2          v19.8h, v1.16b, #1      // 2*P5[8..15]
+        uxtl2           v4.8h, v4.16b           // P2[8..15]
+        mls             v16.8h, v21.8h, v0.h[1] // 2*P5[0..7]-5*P6[0..7]
+        uxtl2           v5.8h, v5.16b           // P6[8..15]
+        uxtl            v23.8h, v6.8b           // P3[0..7]
+        uxtl            v24.8h, v7.8b           // P7[0..7]
+        mls             v3.8h, v4.8h, v0.h[1]   // 2*P1[8..15]-5*P2[8..15]
+        ushll           v4.8h, v6.8b, #1        // 2*P3[0..7]
+        uxtl            v25.8h, v18.8b          // P4[0..7]
+        mls             v19.8h, v5.8h, v0.h[1]  // 2*P5[8..15]-5*P6[8..15]
+        uxtl2           v26.8h, v6.16b          // P3[8..15]
+        mla             v17.8h, v23.8h, v0.h[1] // 2*P1[0..7]-5*P2[0..7]+5*P3[0..7]
+        uxtl2           v7.8h, v7.16b           // P7[8..15]
+        ushll2          v6.8h, v6.16b, #1       // 2*P3[8..15]
+        mla             v16.8h, v24.8h, v0.h[1] // 2*P5[0..7]-5*P6[0..7]+5*P7[0..7]
+        uxtl2           v18.8h, v18.16b         // P4[8..15]
+        uxtl            v23.8h, v20.8b          // P8[0..7]
+        mls             v4.8h, v25.8h, v0.h[1]  // 2*P3[0..7]-5*P4[0..7]
+        uxtl            v24.8h, v1.8b           // P5[0..7]
+        uxtl2           v20.8h, v20.16b         // P8[8..15]
+        mla             v3.8h, v26.8h, v0.h[1]  // 2*P1[8..15]-5*P2[8..15]+5*P3[8..15]
+        uxtl2           v1.8h, v1.16b           // P5[8..15]
+        sub             v26.8h, v25.8h, v24.8h  // P4[0..7]-P5[0..7]
+        mla             v19.8h, v7.8h, v0.h[1]  // 2*P5[8..15]-5*P6[8..15]+5*P7[8..15]
+        sub             v7.8h, v18.8h, v1.8h    // P4[8..15]-P5[8..15]
+        mls             v6.8h, v18.8h, v0.h[1]  // 2*P3[8..15]-5*P4[8..15]
+        abs             v27.8h, v26.8h
+        sshr            v26.8h, v26.8h, #8      // clip_sign[0..7]
+        mls             v17.8h, v25.8h, v0.h[0] // 2*P1[0..7]-5*P2[0..7]+5*P3[0..7]-2*P4[0..7]
+        abs             v28.8h, v7.8h
+        sshr            v27.8h, v27.8h, #1      // clip[0..7]
+        mls             v16.8h, v23.8h, v0.h[0] // 2*P5[0..7]-5*P6[0..7]+5*P7[0..7]-2*P8[0..7]
+        sshr            v7.8h, v7.8h, #8        // clip_sign[8..15]
+        sshr            v23.8h, v28.8h, #1      // clip[8..15]
+        mla             v4.8h, v24.8h, v0.h[1]  // 2*P3[0..7]-5*P4[0..7]+5*P5[0..7]
+        cmeq            v28.8h, v27.8h, #0      // test clip[0..7] == 0
+        srshr           v17.8h, v17.8h, #3
+        mls             v3.8h, v18.8h, v0.h[0]  // 2*P1[8..15]-5*P2[8..15]+5*P3[8..15]-2*P4[8..15]
+        cmeq            v29.8h, v23.8h, #0      // test clip[8..15] == 0
+        srshr           v16.8h, v16.8h, #3
+        mls             v19.8h, v20.8h, v0.h[0] // 2*P5[8..15]-5*P6[8..15]+5*P7[8..15]-2*P8[8..15]
+        abs             v17.8h, v17.8h          // a1[0..7]
+        mla             v6.8h, v1.8h, v0.h[1]   // 2*P3[8..15]-5*P4[8..15]+5*P5[8..15]
+        srshr           v3.8h, v3.8h, #3
+        mls             v4.8h, v21.8h, v0.h[0]  // 2*P3[0..7]-5*P4[0..7]+5*P5[0..7]-2*P6[0..7]
+        abs             v16.8h, v16.8h          // a2[0..7]
+        srshr           v19.8h, v19.8h, #3
+        mls             v6.8h, v5.8h, v0.h[0]   // 2*P3[8..15]-5*P4[8..15]+5*P5[8..15]-2*P6[8..15]
+        cmhs            v5.8h, v17.8h, v16.8h   // test a1[0..7] >= a2[0..7]
+        abs             v3.8h, v3.8h            // a1[8..15]
+        srshr           v4.8h, v4.8h, #3
+        abs             v19.8h, v19.8h          // a2[8..15]
+        bsl             v5.16b, v16.16b, v17.16b // a3[0..7]
+        srshr           v6.8h, v6.8h, #3
+        cmhs            v16.8h, v3.8h, v19.8h   // test a1[8..15] >= a2[8..15]
+        abs             v17.8h, v4.8h           // a0[0..7]
+        sshr            v4.8h, v4.8h, #8        // a0_sign[0..7]
+        bsl             v16.16b, v19.16b, v3.16b // a3[8..15]
+        uqsub           v3.8h, v17.8h, v5.8h    // a0[0..7] >= a3[0..7] ? a0[0..7]-a3[0..7] : 0  (a0 > a3 in all cases where filtering is enabled, so it makes more sense to subtract this way round than to subtract the other way and take the abs)
+        abs             v19.8h, v6.8h           // a0[8..15]
+        cmhs            v20.8h, v17.8h, v22.8h  // test a0[0..7] >= pq
+        cmhs            v5.8h, v5.8h, v17.8h    // test a3[0..7] >= a0[0..7]
+        sub             v4.8h, v26.8h, v4.8h    // clip_sign[0..7] - a0_sign[0..7]
+        sshr            v6.8h, v6.8h, #8        // a0_sign[8..15]
+        mul             v3.8h, v3.8h, v0.h[1]   // a0[0..7] >= a3[0..7] ? 5*(a0[0..7]-a3[0..7]) : 0
+        uqsub           v17.8h, v19.8h, v16.8h  // a0[8..15] >= a3[8..15] ? a0[8..15]-a3[8..15] : 0  (a0 > a3 in all cases where filtering is enabled, so it makes more sense to subtract this way round than to subtract the other way and take the abs)
+        orr             v20.16b, v28.16b, v20.16b // test clip[0..7] == 0 || a0[0..7] >= pq
+        cmhs            v21.8h, v19.8h, v22.8h  // test a0[8..15] >= pq
+        cmhs            v16.8h, v16.8h, v19.8h  // test a3[8..15] >= a0[8..15]
+        mul             v0.8h, v17.8h, v0.h[1]  // a0[8..15] >= a3[8..15] ? 5*(a0[8..15]-a3[8..15]) : 0
+        sub             v6.8h, v7.8h, v6.8h     // clip_sign[8..15] - a0_sign[8..15]
+        orr             v5.16b, v20.16b, v5.16b // test clip[0..7] == 0 || a0[0..7] >= pq || a3[0..7] >= a0[0..7]
+        ushr            v3.8h, v3.8h, #3        // a0[0..7] >= a3[0..7] ? (5*(a0[0..7]-a3[0..7]))>>3 : 0
+        orr             v7.16b, v29.16b, v21.16b // test clip[8..15] == 0 || a0[8..15] >= pq
+        cmtst           v17.2d, v5.2d, v2.2d    // if the 2nd of each group of 4 is not filtered, then none of the others in the group should be either
+        mov             w0, v5.s[1]             // move to gp reg
+        cmhs            v19.8h, v3.8h, v27.8h
+        ushr            v0.8h, v0.8h, #3        // a0[8..15] >= a3[8..15] ? (5*(a0[8..15]-a3[8..15]))>>3 : 0
+        mov             w2, v5.s[3]
+        orr             v5.16b, v7.16b, v16.16b // test clip[8..15] == 0 || a0[8..15] >= pq || a3[8..15] >= a0[8..15]
+        orr             v16.16b, v20.16b, v17.16b
+        bsl             v19.16b, v27.16b, v3.16b // FFMIN(d[0..7], clip[0..7])
+        cmtst           v2.2d, v5.2d, v2.2d
+        cmhs            v3.8h, v0.8h, v23.8h
+        mov             w4, v5.s[1]
+        mov             w5, v5.s[3]
+        and             w0, w0, w2
+        bic             v5.16b, v19.16b, v16.16b // set each d[0..7] to zero if it should not be filtered because clip[0..7] == 0 || a0[0..7] >= pq (a3 > a0 case already zeroed by saturating sub)
+        orr             v2.16b, v7.16b, v2.16b
+        bsl             v3.16b, v23.16b, v0.16b // FFMIN(d[8..15], clip[8..15])
+        mls             v25.8h, v5.8h, v4.8h    // invert d[0..7] depending on clip_sign[0..7] & a0_sign[0..7], or zero it if they match, and accumulate into P4[0..7]
+        and             w2, w4, w5
+        bic             v0.16b, v3.16b, v2.16b  // set each d[8..15] to zero if it should not be filtered because clip[8..15] == 0 || a0[8..15] >= pq (a3 > a0 case already zeroed by saturating sub)
+        mla             v24.8h, v5.8h, v4.8h    // invert d[0..7] depending on clip_sign[0..7] & a0_sign[0..7], or zero it if they match, and accumulate into P5[0..7]
+        and             w0, w0, w2
+        mls             v18.8h, v0.8h, v6.8h    // invert d[8..15] depending on clip_sign[8..15] & a0_sign[8..15], or zero it if they match, and accumulate into P4[8..15]
+        sqxtun          v2.8b, v25.8h
+        tbnz            w0, #0, 1f              // none of the 16 pixel pairs should be updated in this case
+        mla             v1.8h, v0.8h, v6.8h     // invert d[8..15] depending on clip_sign[8..15] & a0_sign[8..15], or zero it if they match, and accumulate into P5[8..15]
+        sqxtun          v0.8b, v24.8h
+        sqxtun2         v2.16b, v18.8h
+        sqxtun2         v0.16b, v1.8h
+        st1             {v2.16b}, [x3], x1
+        st1             {v0.16b}, [x3]
+1:      ret
+endfunc
+
+// VC-1 in-loop deblocking filter for 16 pixel pairs at boundary of horizontally-neighbouring blocks
+// On entry:
+//   x0 -> top-left pel of right block
+//   w1 = row stride, bytes
+//   w2 = PQUANT bitstream parameter
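+// Sixteen rows of 8 bytes are loaded and transposed; the four groups of 4
+// rows are gated and stored separately under w2, w3, w7 and w8.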
+function ff_vc1_h_loop_filter16_neon, export=1
+        sub             x3, x0, #4              // where to start reading
+        sxtw            x1, w1                  // technically, stride is signed int
+        ldr             d0, .Lcoeffs
+        ld1             {v1.8b}, [x3], x1       // P1[0], P2[0]...
+        sub             x0, x0, #1              // where to start writing
+        ld1             {v2.8b}, [x3], x1
+        add             x4, x0, x1, lsl #3
+        ld1             {v3.8b}, [x3], x1
+        add             x5, x0, x1, lsl #2
+        ld1             {v4.8b}, [x3], x1
+        add             x6, x4, x1, lsl #2
+        ld1             {v5.8b}, [x3], x1
+        ld1             {v6.8b}, [x3], x1
+        ld1             {v7.8b}, [x3], x1
+        trn1            v16.8b, v1.8b, v2.8b    // P1[0], P1[1], P3[0]...
+        ld1             {v17.8b}, [x3], x1
+        trn2            v1.8b, v1.8b, v2.8b     // P2[0], P2[1], P4[0]...
+        ld1             {v2.8b}, [x3], x1
+        trn1            v18.8b, v3.8b, v4.8b    // P1[2], P1[3], P3[2]...
+        ld1             {v19.8b}, [x3], x1
+        trn2            v3.8b, v3.8b, v4.8b     // P2[2], P2[3], P4[2]...
+        ld1             {v4.8b}, [x3], x1
+        trn1            v20.8b, v5.8b, v6.8b    // P1[4], P1[5], P3[4]...
+        ld1             {v21.8b}, [x3], x1
+        trn2            v5.8b, v5.8b, v6.8b     // P2[4], P2[5], P4[4]...
+        ld1             {v6.8b}, [x3], x1
+        trn1            v22.8b, v7.8b, v17.8b   // P1[6], P1[7], P3[6]...
+        ld1             {v23.8b}, [x3], x1
+        trn2            v7.8b, v7.8b, v17.8b    // P2[6], P2[7], P4[6]...
+        ld1             {v17.8b}, [x3], x1
+        trn1            v24.8b, v2.8b, v19.8b   // P1[8], P1[9], P3[8]...
+        ld1             {v25.8b}, [x3]
+        trn2            v2.8b, v2.8b, v19.8b    // P2[8], P2[9], P4[8]...
+        trn1            v19.4h, v16.4h, v18.4h  // P1[0], P1[1], P1[2], P1[3], P5[0]...
+        trn1            v26.8b, v4.8b, v21.8b   // P1[10], P1[11], P3[10]...
+        trn2            v4.8b, v4.8b, v21.8b    // P2[10], P2[11], P4[10]...
+        trn1            v21.4h, v1.4h, v3.4h    // P2[0], P2[1], P2[2], P2[3], P6[0]...
+        trn1            v27.4h, v20.4h, v22.4h  // P1[4], P1[5], P1[6], P1[7], P5[4]...
+        trn1            v28.8b, v6.8b, v23.8b   // P1[12], P1[13], P3[12]...
+        trn2            v6.8b, v6.8b, v23.8b    // P2[12], P2[13], P4[12]...
+        trn1            v23.4h, v5.4h, v7.4h    // P2[4], P2[5], P2[6], P2[7], P6[4]...
+        trn1            v29.4h, v24.4h, v26.4h  // P1[8], P1[9], P1[10], P1[11], P5[8]...
+        trn1            v30.8b, v17.8b, v25.8b  // P1[14], P1[15], P3[14]...
+        trn2            v17.8b, v17.8b, v25.8b  // P2[14], P2[15], P4[14]...
+        trn1            v25.4h, v2.4h, v4.4h    // P2[8], P2[9], P2[10], P2[11], P6[8]...
+        trn1            v31.2s, v19.2s, v27.2s  // P1[0..7]
+        trn2            v19.2s, v19.2s, v27.2s  // P5[0..7]
+        trn1            v27.2s, v21.2s, v23.2s  // P2[0..7]
+        trn2            v21.2s, v21.2s, v23.2s  // P6[0..7]
+        trn1            v23.4h, v28.4h, v30.4h  // P1[12], P1[13], P1[14], P1[15], P5[12]...
+        trn2            v16.4h, v16.4h, v18.4h  // P3[0], P3[1], P3[2], P3[3], P7[0]...
+        trn1            v18.4h, v6.4h, v17.4h   // P2[12], P2[13], P2[14], P2[15], P6[12]...
+        trn2            v20.4h, v20.4h, v22.4h  // P3[4], P3[5], P3[6], P3[7], P7[4]...
+        trn2            v22.4h, v24.4h, v26.4h  // P3[8], P3[9], P3[10], P3[11], P7[8]...
+        trn1            v24.2s, v29.2s, v23.2s  // P1[8..15]
+        trn2            v23.2s, v29.2s, v23.2s  // P5[8..15]
+        trn1            v26.2s, v25.2s, v18.2s  // P2[8..15]
+        trn2            v18.2s, v25.2s, v18.2s  // P6[8..15]
+        trn2            v25.4h, v28.4h, v30.4h  // P3[12], P3[13], P3[14], P3[15], P7[12]...
+        trn2            v1.4h, v1.4h, v3.4h     // P4[0], P4[1], P4[2], P4[3], P8[0]...
+        trn2            v3.4h, v5.4h, v7.4h     // P4[4], P4[5], P4[6], P4[7], P8[4]...
+        trn2            v2.4h, v2.4h, v4.4h     // P4[8], P4[9], P4[10], P4[11], P8[8]...
+        trn2            v4.4h, v6.4h, v17.4h    // P4[12], P4[13], P4[14], P4[15], P8[12]...
+        ushll           v5.8h, v31.8b, #1       // 2*P1[0..7]
+        ushll           v6.8h, v19.8b, #1       // 2*P5[0..7]
+        trn1            v7.2s, v16.2s, v20.2s   // P3[0..7]
+        uxtl            v17.8h, v27.8b          // P2[0..7]
+        trn2            v16.2s, v16.2s, v20.2s  // P7[0..7]
+        uxtl            v20.8h, v21.8b          // P6[0..7]
+        trn1            v21.2s, v22.2s, v25.2s  // P3[8..15]
+        ushll           v24.8h, v24.8b, #1      // 2*P1[8..15]
+        trn2            v22.2s, v22.2s, v25.2s  // P7[8..15]
+        ushll           v25.8h, v23.8b, #1      // 2*P5[8..15]
+        trn1            v27.2s, v1.2s, v3.2s    // P4[0..7]
+        uxtl            v26.8h, v26.8b          // P2[8..15]
+        mls             v5.8h, v17.8h, v0.h[1]  // 2*P1[0..7]-5*P2[0..7]
+        uxtl            v17.8h, v18.8b          // P6[8..15]
+        mls             v6.8h, v20.8h, v0.h[1]  // 2*P5[0..7]-5*P6[0..7]
+        trn1            v18.2s, v2.2s, v4.2s    // P4[8..15]
+        uxtl            v28.8h, v7.8b           // P3[0..7]
+        mls             v24.8h, v26.8h, v0.h[1] // 2*P1[8..15]-5*P2[8..15]
+        uxtl            v16.8h, v16.8b          // P7[0..7]
+        uxtl            v26.8h, v21.8b          // P3[8..15]
+        mls             v25.8h, v17.8h, v0.h[1] // 2*P5[8..15]-5*P6[8..15]
+        uxtl            v22.8h, v22.8b          // P7[8..15]
+        ushll           v7.8h, v7.8b, #1        // 2*P3[0..7]
+        uxtl            v27.8h, v27.8b          // P4[0..7]
+        trn2            v1.2s, v1.2s, v3.2s     // P8[0..7]
+        ushll           v3.8h, v21.8b, #1       // 2*P3[8..15]
+        trn2            v2.2s, v2.2s, v4.2s     // P8[8..15]
+        uxtl            v4.8h, v18.8b           // P4[8..15]
+        mla             v5.8h, v28.8h, v0.h[1]  // 2*P1[0..7]-5*P2[0..7]+5*P3[0..7]
+        uxtl            v1.8h, v1.8b            // P8[0..7]
+        mla             v6.8h, v16.8h, v0.h[1]  // 2*P5[0..7]-5*P6[0..7]+5*P7[0..7]
+        uxtl            v2.8h, v2.8b            // P8[8..15]
+        uxtl            v16.8h, v19.8b          // P5[0..7]
+        mla             v24.8h, v26.8h, v0.h[1] // 2*P1[8..15]-5*P2[8..15]+5*P3[8..15]
+        uxtl            v18.8h, v23.8b          // P5[8..15]
+        dup             v19.8h, w2              // pq
+        mla             v25.8h, v22.8h, v0.h[1] // 2*P5[8..15]-5*P6[8..15]+5*P7[8..15]
+        sub             v21.8h, v27.8h, v16.8h  // P4[0..7]-P5[0..7]
+        sub             v22.8h, v4.8h, v18.8h   // P4[8..15]-P5[8..15]
+        mls             v7.8h, v27.8h, v0.h[1]  // 2*P3[0..7]-5*P4[0..7]
+        abs             v23.8h, v21.8h
+        mls             v3.8h, v4.8h, v0.h[1]   // 2*P3[8..15]-5*P4[8..15]
+        abs             v26.8h, v22.8h
+        sshr            v21.8h, v21.8h, #8      // clip_sign[0..7]
+        mls             v5.8h, v27.8h, v0.h[0]  // 2*P1[0..7]-5*P2[0..7]+5*P3[0..7]-2*P4[0..7]
+        sshr            v23.8h, v23.8h, #1      // clip[0..7]
+        sshr            v26.8h, v26.8h, #1      // clip[8..15]
+        mls             v6.8h, v1.8h, v0.h[0]   // 2*P5[0..7]-5*P6[0..7]+5*P7[0..7]-2*P8[0..7]
+        sshr            v1.8h, v22.8h, #8       // clip_sign[8..15]
+        cmeq            v22.8h, v23.8h, #0      // test clip[0..7] == 0
+        mls             v24.8h, v4.8h, v0.h[0]  // 2*P1[8..15]-5*P2[8..15]+5*P3[8..15]-2*P4[8..15]
+        cmeq            v28.8h, v26.8h, #0      // test clip[8..15] == 0
+        srshr           v5.8h, v5.8h, #3
+        mls             v25.8h, v2.8h, v0.h[0]  // 2*P5[8..15]-5*P6[8..15]+5*P7[8..15]-2*P8[8..15]
+        srshr           v2.8h, v6.8h, #3
+        mla             v7.8h, v16.8h, v0.h[1]  // 2*P3[0..7]-5*P4[0..7]+5*P5[0..7]
+        srshr           v6.8h, v24.8h, #3
+        mla             v3.8h, v18.8h, v0.h[1]  // 2*P3[8..15]-5*P4[8..15]+5*P5[8..15]
+        abs             v5.8h, v5.8h            // a1[0..7]
+        srshr           v24.8h, v25.8h, #3
+        mls             v3.8h, v17.8h, v0.h[0]  // 2*P3[8..15]-5*P4[8..15]+5*P5[8..15]-2*P6[8..15]
+        abs             v2.8h, v2.8h            // a2[0..7]
+        abs             v6.8h, v6.8h            // a1[8..15]
+        mls             v7.8h, v20.8h, v0.h[0]  // 2*P3[0..7]-5*P4[0..7]+5*P5[0..7]-2*P6[0..7]
+        abs             v17.8h, v24.8h          // a2[8..15]
+        cmhs            v20.8h, v5.8h, v2.8h    // test a1[0..7] >= a2[0..7]
+        srshr           v3.8h, v3.8h, #3
+        cmhs            v24.8h, v6.8h, v17.8h   // test a1[8..15] >= a2[8..15]
+        srshr           v7.8h, v7.8h, #3
+        bsl             v20.16b, v2.16b, v5.16b // a3[0..7]
+        abs             v2.8h, v3.8h            // a0[8..15]
+        sshr            v3.8h, v3.8h, #8        // a0_sign[8..15]
+        bsl             v24.16b, v17.16b, v6.16b // a3[8..15]
+        abs             v5.8h, v7.8h            // a0[0..7]
+        sshr            v6.8h, v7.8h, #8        // a0_sign[0..7]
+        cmhs            v7.8h, v2.8h, v19.8h    // test a0[8..15] >= pq
+        sub             v1.8h, v1.8h, v3.8h     // clip_sign[8..15] - a0_sign[8..15]
+        uqsub           v3.8h, v2.8h, v24.8h    // a0[8..15] >= a3[8..15] ? a0[8..15]-a3[8..15] : 0  (a0 > a3 in all cases where filtering is enabled, so it makes more sense to subtract this way round than to subtract the other way and take the abs)
+        cmhs            v2.8h, v24.8h, v2.8h    // test a3[8..15] >= a0[8..15]
+        uqsub           v17.8h, v5.8h, v20.8h   // a0[0..7] >= a3[0..7] ? a0[0..7]-a3[0..7] : 0  (a0 > a3 in all cases where filtering is enabled, so it makes more sense to subtract this way round than to subtract the other way and take the abs)
+        cmhs            v19.8h, v5.8h, v19.8h   // test a0[0..7] >= pq
+        orr             v7.16b, v28.16b, v7.16b // test clip[8..15] == 0 || a0[8..15] >= pq
+        sub             v6.8h, v21.8h, v6.8h    // clip_sign[0..7] - a0_sign[0..7]
+        mul             v3.8h, v3.8h, v0.h[1]   // a0[8..15] >= a3[8..15] ? 5*(a0[8..15]-a3[8..15]) : 0
+        cmhs            v5.8h, v20.8h, v5.8h    // test a3[0..7] >= a0[0..7]
+        orr             v19.16b, v22.16b, v19.16b // test clip[0..7] == 0 || a0[0..7] >= pq
+        mul             v0.8h, v17.8h, v0.h[1]  // a0[0..7] >= a3[0..7] ? 5*(a0[0..7]-a3[0..7]) : 0
+        orr             v2.16b, v7.16b, v2.16b  // test clip[8..15] == 0 || a0[8..15] >= pq || a3[8..15] >= a0[8..15]
+        orr             v5.16b, v19.16b, v5.16b // test clip[0..7] == 0 || a0[0..7] >= pq || a3[0..7] >= a0[0..7]
+        ushr            v3.8h, v3.8h, #3        // a0[8..15] >= a3[8..15] ? (5*(a0[8..15]-a3[8..15]))>>3 : 0
+        mov             w7, v2.s[1]
+        mov             w8, v2.s[3]
+        ushr            v0.8h, v0.8h, #3        // a0[0..7] >= a3[0..7] ? (5*(a0[0..7]-a3[0..7]))>>3 : 0
+        mov             w2, v5.s[1]             // move to gp reg
+        cmhs            v2.8h, v3.8h, v26.8h
+        mov             w3, v5.s[3]
+        cmhs            v5.8h, v0.8h, v23.8h
+        bsl             v2.16b, v26.16b, v3.16b // FFMIN(d[8..15], clip[8..15])
+        and             w9, w7, w8
+        bsl             v5.16b, v23.16b, v0.16b // FFMIN(d[0..7], clip[0..7])
+        and             w10, w2, w3
+        bic             v0.16b, v2.16b, v7.16b  // set each d[8..15] to zero if it should not be filtered because clip[8..15] == 0 || a0[8..15] >= pq (a3 > a0 case already zeroed by saturating sub)
+        and             w9, w10, w9
+        bic             v2.16b, v5.16b, v19.16b // set each d[0..7] to zero if it should not be filtered because clip[0..7] == 0 || a0[0..7] >= pq (a3 > a0 case already zeroed by saturating sub)
+        mls             v4.8h, v0.8h, v1.8h     // invert d[8..15] depending on clip_sign[8..15] & a0_sign[8..15], or zero it if they match, and accumulate into P4
+        tbnz            w9, #0, 4f              // none of the 16 pixel pairs should be updated in this case
+        mls             v27.8h, v2.8h, v6.8h    // invert d[0..7] depending on clip_sign[0..7] & a0_sign[0..7], or zero it if they match, and accumulate into P4
+        mla             v16.8h, v2.8h, v6.8h    // invert d[0..7] depending on clip_sign[0..7] & a0_sign[0..7], or zero it if they match, and accumulate into P5
+        sqxtun          v2.8b, v4.8h
+        mla             v18.8h, v0.8h, v1.8h    // invert d[8..15] depending on clip_sign[8..15] & a0_sign[8..15], or zero it if they match, and accumulate into P5
+        sqxtun          v0.8b, v27.8h
+        sqxtun          v1.8b, v16.8h
+        sqxtun          v3.8b, v18.8h
+        tbnz            w2, #0, 1f
+        st2             {v0.b, v1.b}[0], [x0], x1
+        st2             {v0.b, v1.b}[1], [x0], x1
+        st2             {v0.b, v1.b}[2], [x0], x1
+        st2             {v0.b, v1.b}[3], [x0]
+1:      tbnz            w3, #0, 2f
+        st2             {v0.b, v1.b}[4], [x5], x1
+        st2             {v0.b, v1.b}[5], [x5], x1
+        st2             {v0.b, v1.b}[6], [x5], x1
+        st2             {v0.b, v1.b}[7], [x5]
+2:      tbnz            w7, #0, 3f
+        st2             {v2.b, v3.b}[0], [x4], x1
+        st2             {v2.b, v3.b}[1], [x4], x1
+        st2             {v2.b, v3.b}[2], [x4], x1
+        st2             {v2.b, v3.b}[3], [x4]
+3:      tbnz            w8, #0, 4f
+        st2             {v2.b, v3.b}[4], [x6], x1
+        st2             {v2.b, v3.b}[5], [x6], x1
+        st2             {v2.b, v3.b}[6], [x6], x1
+        st2             {v2.b, v3.b}[7], [x6]
+4:      ret
+endfunc
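
For reference, the scalar arithmetic that each of the routines above
vectorises can be sketched in C, reconstructed from the instruction
comments (P1..P8 are the eight pels across the block edge, with P4/P5
straddling it). This is an illustration with a hypothetical helper name,
not the decoder's actual C reference path; FFMIN and av_clip_uint8 are
FFmpeg's existing helpers, and an arithmetic right shift is assumed.

    #include <stdlib.h>              /* abs() */
    #include "libavutil/common.h"    /* FFMIN, av_clip_uint8 */

    /* Filter one pixel pair across the block edge (illustrative only). */
    static void filter_pair(int P1, int P2, int P3, int P4, int P5,
                            int P6, int P7, int P8, int pq,
                            uint8_t *p4, uint8_t *p5)
    {
        int a0_raw = (2*P3 - 5*P4 + 5*P5 - 2*P6 + 4) >> 3;  /* srshr #3 */
        int a1     = abs((2*P1 - 5*P2 + 5*P3 - 2*P4 + 4) >> 3);
        int a2     = abs((2*P5 - 5*P6 + 5*P7 - 2*P8 + 4) >> 3);
        int a0     = abs(a0_raw);
        int a3     = FFMIN(a1, a2);
        int c_raw  = P4 - P5;
        int clip   = abs(c_raw) >> 1;
        int d, factor;

        if (clip == 0 || a0 >= pq || a3 >= a0)
            return;                             /* pair left unfiltered */
        d = FFMIN((5 * (a0 - a3)) >> 3, clip);
        /* clip_sign and a0_sign are 0 or -1 (arithmetic shifts in the
         * asm); their difference is 0 when they match, zeroing the update */
        factor = (c_raw < 0 ? -1 : 0) - (a0_raw < 0 ? -1 : 0);
        *p4 = av_clip_uint8(P4 - d * factor);
        *p5 = av_clip_uint8(P5 + d * factor);
    }

On top of this, the vector versions apply the rule that the 2nd pair of
each group of 4 gates the whole group: if that pair is left unfiltered,
the other three in its group are too.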
-- 
2.25.1


* [FFmpeg-devel] [PATCH 06/10] avcodec/vc1: Arm 32-bit NEON deblocking filter fast paths
  2022-03-25 18:52 ` [FFmpeg-devel] [PATCH v2 00/10] " Ben Avison
                     ` (4 preceding siblings ...)
  2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 05/10] avcodec/vc1: Arm 64-bit NEON deblocking filter fast paths Ben Avison
@ 2022-03-25 18:52   ` Ben Avison
  2022-03-25 19:27     ` Lynne
                       ` (2 more replies)
  2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 07/10] avcodec/vc1: Arm 64-bit NEON inverse transform " Ben Avison
                     ` (3 subsequent siblings)
  9 siblings, 3 replies; 55+ messages in thread
From: Ben Avison @ 2022-03-25 18:52 UTC (permalink / raw)
  To: ffmpeg-devel; +Cc: Ben Avison

checkasm benchmarks on a 1.5 GHz Cortex-A72 are as follows. Note that the C
version can still outperform the NEON version in specific cases. The balance
between different code paths is stream-dependent, but in practice the best
case happens about 5% of the time and the worst case about 40% of the time,
with the complexity of the remaining cases falling somewhere in between.
Taking the average of the best-case and worst-case timings is therefore
probably a conservative estimate of the degree by which the NEON code
improves performance; a worked example follows the timings below.

vc1dsp.vc1_h_loop_filter4_bestcase_c: 19.0
vc1dsp.vc1_h_loop_filter4_bestcase_neon: 48.5
vc1dsp.vc1_h_loop_filter4_worstcase_c: 144.7
vc1dsp.vc1_h_loop_filter4_worstcase_neon: 76.2
vc1dsp.vc1_h_loop_filter8_bestcase_c: 41.0
vc1dsp.vc1_h_loop_filter8_bestcase_neon: 75.0
vc1dsp.vc1_h_loop_filter8_worstcase_c: 294.0
vc1dsp.vc1_h_loop_filter8_worstcase_neon: 102.7
vc1dsp.vc1_h_loop_filter16_bestcase_c: 54.7
vc1dsp.vc1_h_loop_filter16_bestcase_neon: 130.0
vc1dsp.vc1_h_loop_filter16_worstcase_c: 569.7
vc1dsp.vc1_h_loop_filter16_worstcase_neon: 186.7
vc1dsp.vc1_v_loop_filter4_bestcase_c: 20.2
vc1dsp.vc1_v_loop_filter4_bestcase_neon: 47.2
vc1dsp.vc1_v_loop_filter4_worstcase_c: 164.2
vc1dsp.vc1_v_loop_filter4_worstcase_neon: 68.5
vc1dsp.vc1_v_loop_filter8_bestcase_c: 43.5
vc1dsp.vc1_v_loop_filter8_bestcase_neon: 55.2
vc1dsp.vc1_v_loop_filter8_worstcase_c: 316.2
vc1dsp.vc1_v_loop_filter8_worstcase_neon: 72.7
vc1dsp.vc1_v_loop_filter16_bestcase_c: 62.2
vc1dsp.vc1_v_loop_filter16_bestcase_neon: 103.7
vc1dsp.vc1_v_loop_filter16_worstcase_c: 646.5
vc1dsp.vc1_v_loop_filter16_worstcase_neon: 110.7
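
For example, for vc1_v_loop_filter8 the averages work out as
(43.5 + 316.2) / 2 = 179.85 cycles for the C version against
(55.2 + 72.7) / 2 = 63.95 cycles for the NEON version, i.e. roughly a
2.8x speedup; the same calculation for vc1_h_loop_filter16 gives 312.2
against 158.35, roughly 2.0x.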

Signed-off-by: Ben Avison <bavison@riscosopen.org>
---
 libavcodec/arm/vc1dsp_init_neon.c |  14 +
 libavcodec/arm/vc1dsp_neon.S      | 643 ++++++++++++++++++++++++++++++
 2 files changed, 657 insertions(+)
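
As context for the hunks below: ff_vc1dsp_init() fills VC1DSPContext with
the C implementations and the per-arch init functions then override
individual pointers, so callers go through the context unchanged. A
minimal sketch of the dispatch (the wrapper function and its arguments
are placeholders; in the decoder the context is initialised once, not
per call):

    #include "libavcodec/vc1dsp.h"

    void deblock_horiz_edge(uint8_t *dest, int stride, int pq)
    {
        VC1DSPContext dsp;
        ff_vc1dsp_init(&dsp);    /* C defaults, then e.g. NEON overrides */
        dsp.vc1_v_loop_filter8(dest, stride, pq);    /* 8 pixel pairs */
    }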

diff --git a/libavcodec/arm/vc1dsp_init_neon.c b/libavcodec/arm/vc1dsp_init_neon.c
index 2cca784f5a..f5f5c702d7 100644
--- a/libavcodec/arm/vc1dsp_init_neon.c
+++ b/libavcodec/arm/vc1dsp_init_neon.c
@@ -32,6 +32,13 @@ void ff_vc1_inv_trans_4x8_dc_neon(uint8_t *dest, ptrdiff_t stride, int16_t *bloc
 void ff_vc1_inv_trans_8x4_dc_neon(uint8_t *dest, ptrdiff_t stride, int16_t *block);
 void ff_vc1_inv_trans_4x4_dc_neon(uint8_t *dest, ptrdiff_t stride, int16_t *block);
 
+void ff_vc1_v_loop_filter4_neon(uint8_t *src, int stride, int pq);
+void ff_vc1_h_loop_filter4_neon(uint8_t *src, int stride, int pq);
+void ff_vc1_v_loop_filter8_neon(uint8_t *src, int stride, int pq);
+void ff_vc1_h_loop_filter8_neon(uint8_t *src, int stride, int pq);
+void ff_vc1_v_loop_filter16_neon(uint8_t *src, int stride, int pq);
+void ff_vc1_h_loop_filter16_neon(uint8_t *src, int stride, int pq);
+
 void ff_put_pixels8x8_neon(uint8_t *block, const uint8_t *pixels,
                            ptrdiff_t line_size, int rnd);
 
@@ -92,6 +99,13 @@ av_cold void ff_vc1dsp_init_neon(VC1DSPContext *dsp)
     dsp->vc1_inv_trans_8x4_dc = ff_vc1_inv_trans_8x4_dc_neon;
     dsp->vc1_inv_trans_4x4_dc = ff_vc1_inv_trans_4x4_dc_neon;
 
+    dsp->vc1_v_loop_filter4  = ff_vc1_v_loop_filter4_neon;
+    dsp->vc1_h_loop_filter4  = ff_vc1_h_loop_filter4_neon;
+    dsp->vc1_v_loop_filter8  = ff_vc1_v_loop_filter8_neon;
+    dsp->vc1_h_loop_filter8  = ff_vc1_h_loop_filter8_neon;
+    dsp->vc1_v_loop_filter16 = ff_vc1_v_loop_filter16_neon;
+    dsp->vc1_h_loop_filter16 = ff_vc1_h_loop_filter16_neon;
+
     dsp->put_vc1_mspel_pixels_tab[1][ 0] = ff_put_pixels8x8_neon;
     FN_ASSIGN(1, 0);
     FN_ASSIGN(2, 0);
diff --git a/libavcodec/arm/vc1dsp_neon.S b/libavcodec/arm/vc1dsp_neon.S
index 93f043bf08..a639e81171 100644
--- a/libavcodec/arm/vc1dsp_neon.S
+++ b/libavcodec/arm/vc1dsp_neon.S
@@ -1161,3 +1161,646 @@ function ff_vc1_inv_trans_4x4_dc_neon, export=1
         vst1.32         {d1[1]},  [r0,:32]
         bx              lr
 endfunc
+
+@ VC-1 in-loop deblocking filter for 4 pixel pairs at boundary of vertically-neighbouring blocks
+@ On entry:
+@   r0 -> top-left pel of lower block
+@   r1 = row stride, bytes
+@   r2 = PQUANT bitstream parameter
+function ff_vc1_v_loop_filter4_neon, export=1
+        sub             r3, r0, r1, lsl #2
+        vldr            d0, .Lcoeffs
+        vld1.32         {d1[0]}, [r0], r1       @ P5
+        vld1.32         {d2[0]}, [r3], r1       @ P1
+        vld1.32         {d3[0]}, [r3], r1       @ P2
+        vld1.32         {d4[0]}, [r0], r1       @ P6
+        vld1.32         {d5[0]}, [r3], r1       @ P3
+        vld1.32         {d6[0]}, [r0], r1       @ P7
+        vld1.32         {d7[0]}, [r3]           @ P4
+        vld1.32         {d16[0]}, [r0]          @ P8
+        vshll.u8        q9, d1, #1              @ 2*P5
+        vdup.16         d17, r2                 @ pq
+        vshll.u8        q10, d2, #1             @ 2*P1
+        vmovl.u8        q11, d3                 @ P2
+        vmovl.u8        q1, d4                  @ P6
+        vmovl.u8        q12, d5                 @ P3
+        vmls.i16        d20, d22, d0[1]         @ 2*P1-5*P2
+        vmovl.u8        q11, d6                 @ P7
+        vmls.i16        d18, d2, d0[1]          @ 2*P5-5*P6
+        vshll.u8        q2, d5, #1              @ 2*P3
+        vmovl.u8        q3, d7                  @ P4
+        vmla.i16        d18, d22, d0[1]         @ 2*P5-5*P6+5*P7
+        vmovl.u8        q11, d16                @ P8
+        vmla.u16        d20, d24, d0[1]         @ 2*P1-5*P2+5*P3
+        vmovl.u8        q12, d1                 @ P5
+        vmls.u16        d4, d6, d0[1]           @ 2*P3-5*P4
+        vmls.u16        d18, d22, d0[0]         @ 2*P5-5*P6+5*P7-2*P8
+        vsub.i16        d1, d6, d24             @ P4-P5
+        vmls.i16        d20, d6, d0[0]          @ 2*P1-5*P2+5*P3-2*P4
+        vmla.i16        d4, d24, d0[1]          @ 2*P3-5*P4+5*P5
+        vmls.i16        d4, d2, d0[0]           @ 2*P3-5*P4+5*P5-2*P6
+        vabs.s16        d2, d1
+        vrshr.s16       d3, d18, #3
+        vrshr.s16       d5, d20, #3
+        vshr.s16        d2, d2, #1              @ clip
+        vrshr.s16       d4, d4, #3
+        vabs.s16        d3, d3                  @ a2
+        vshr.s16        d1, d1, #8              @ clip_sign
+        vabs.s16        d5, d5                  @ a1
+        vceq.i16        d7, d2, #0              @ test clip == 0
+        vabs.s16        d16, d4                 @ a0
+        vshr.s16        d4, d4, #8              @ a0_sign
+        vcge.s16        d18, d5, d3             @ test a1 >= a2
+        vcge.s16        d17, d16, d17           @ test a0 >= pq
+        vbsl            d18, d3, d5             @ a3
+        vsub.i16        d1, d1, d4              @ clip_sign - a0_sign
+        vorr            d3, d7, d17             @ test clip == 0 || a0 >= pq
+        vqsub.u16       d4, d16, d18            @ a0 >= a3 ? a0-a3 : 0  (a0 > a3 in all cases where filtering is enabled, so it makes more sense to subtract this way round than to subtract the other way and take the abs)
+        vcge.s16        d5, d18, d16            @ test a3 >= a0
+        vmul.i16        d0, d4, d0[1]           @ a0 >= a3 ? 5*(a0-a3) : 0
+        vorr            d4, d3, d5              @ test clip == 0 || a0 >= pq || a3 >= a0
+        vmov.32         r0, d4[1]               @ move to gp reg
+        vshr.u16        d0, d0, #3              @ a0 >= a3 ? (5*(a0-a3))>>3 : 0
+        vcge.s16        d4, d0, d2
+        tst             r0, #1
+        bne             1f                      @ none of the 4 pixel pairs should be updated if this one is not filtered
+        vbsl            d4, d2, d0              @ FFMIN(d, clip)
+        vbic            d0, d4, d3              @ set each d to zero if it should not be filtered because clip == 0 || a0 >= pq (a3 > a0 case already zeroed by saturating sub)
+        vmls.i16        d6, d0, d1              @ invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P4
+        vmla.i16        d24, d0, d1             @ invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P5
+        vqmovun.s16     d0, q3
+        vqmovun.s16     d1, q12
+        vst1.32         {d0[0]}, [r3], r1
+        vst1.32         {d1[0]}, [r3]
+1:      bx              lr
+endfunc
+
+@ VC-1 in-loop deblocking filter for 4 pixel pairs at boundary of horizontally-neighbouring blocks
+@ On entry:
+@   r0 -> top-left pel of right block
+@   r1 = row stride, bytes
+@   r2 = PQUANT bitstream parameter
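+@ As in the AArch64 version, whole rows are loaded and transposed with vtrn
+@ so that the four samples of each of P1..P8 end up grouped in lanes.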
+function ff_vc1_h_loop_filter4_neon, export=1
+        sub             r3, r0, #4              @ where to start reading
+        vldr            d0, .Lcoeffs
+        vld1.32         {d2}, [r3], r1
+        sub             r0, r0, #1              @ where to start writing
+        vld1.32         {d4}, [r3], r1
+        vld1.32         {d3}, [r3], r1
+        vld1.32         {d5}, [r3]
+        vdup.16         d1, r2                  @ pq
+        vtrn.8          q1, q2
+        vtrn.16         d2, d3                  @ P1, P5, P3, P7
+        vtrn.16         d4, d5                  @ P2, P6, P4, P8
+        vshll.u8        q3, d2, #1              @ 2*P1, 2*P5
+        vmovl.u8        q8, d4                  @ P2, P6
+        vmovl.u8        q9, d3                  @ P3, P7
+        vmovl.u8        q2, d5                  @ P4, P8
+        vmls.i16        q3, q8, d0[1]           @ 2*P1-5*P2, 2*P5-5*P6
+        vshll.u8        q10, d3, #1             @ 2*P3, 2*P7
+        vmovl.u8        q1, d2                  @ P1, P5
+        vmla.i16        q3, q9, d0[1]           @ 2*P1-5*P2+5*P3, 2*P5-5*P6+5*P7
+        vmls.i16        q3, q2, d0[0]           @ 2*P1-5*P2+5*P3-2*P4, 2*P5-5*P6+5*P7-2*P8
+        vmov            d2, d3                  @ needs to be in an even-numbered vector for when we come to narrow it later
+        vmls.i16        d20, d4, d0[1]          @ 2*P3-5*P4
+        vmla.i16        d20, d3, d0[1]          @ 2*P3-5*P4+5*P5
+        vsub.i16        d3, d4, d2              @ P4-P5
+        vmls.i16        d20, d17, d0[0]         @ 2*P3-5*P4+5*P5-2*P6
+        vrshr.s16       q3, q3, #3
+        vabs.s16        d5, d3
+        vshr.s16        d3, d3, #8              @ clip_sign
+        vrshr.s16       d16, d20, #3
+        vabs.s16        q3, q3                  @ a1, a2
+        vshr.s16        d5, d5, #1              @ clip
+        vabs.s16        d17, d16                @ a0
+        vceq.i16        d18, d5, #0             @ test clip == 0
+        vshr.s16        d16, d16, #8            @ a0_sign
+        vcge.s16        d19, d6, d7             @ test a1 >= a2
+        vcge.s16        d1, d17, d1             @ test a0 >= pq
+        vsub.i16        d16, d3, d16            @ clip_sign - a0_sign
+        vbsl            d19, d7, d6             @ a3
+        vorr            d1, d18, d1             @ test clip == 0 || a0 >= pq
+        vqsub.u16       d3, d17, d19            @ a0 >= a3 ? a0-a3 : 0  (a0 > a3 in all cases where filtering is enabled, so it makes more sense to subtract this way round than to subtract the other way and take the abs)
+        vcge.s16        d6, d19, d17            @ test a3 >= a0
+        vmul.i16        d0, d3, d0[1]           @ a0 >= a3 ? 5*(a0-a3) : 0
+        vorr            d3, d1, d6              @ test clip == 0 || a0 >= pq || a3 >= a0
+        vmov.32         r2, d3[1]               @ move to gp reg
+        vshr.u16        d0, d0, #3              @ a0 >= a3 ? (5*(a0-a3))>>3 : 0
+        vcge.s16        d3, d0, d5
+        tst             r2, #1
+        bne             1f                      @ none of the 4 pixel pairs should be updated if this one is not filtered
+        vbsl            d3, d5, d0              @ FFMIN(d, clip)
+        vbic            d0, d3, d1              @ set each d to zero if it should not be filtered because clip == 0 || a0 >= pq (a3 > a0 case already zeroed by saturating sub)
+        vmla.i16        d2, d0, d16             @ invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P5
+        vmls.i16        d4, d0, d16             @ invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P4
+        vqmovun.s16     d1, q1
+        vqmovun.s16     d0, q2
+        vst2.8          {d0[0], d1[0]}, [r0], r1
+        vst2.8          {d0[1], d1[1]}, [r0], r1
+        vst2.8          {d0[2], d1[2]}, [r0], r1
+        vst2.8          {d0[3], d1[3]}, [r0]
+1:      bx              lr
+endfunc
+
+@ VC-1 in-loop deblocking filter for 8 pixel pairs at boundary of vertically-neighbouring blocks
+@ On entry:
+@   r0 -> top-left pel of lower block
+@   r1 = row stride, bytes
+@   r2 = PQUANT bitstream parameter
+function ff_vc1_v_loop_filter8_neon, export=1
+        sub             r3, r0, r1, lsl #2
+        vldr            d0, .Lcoeffs
+        vld1.32         {d1}, [r0], r1          @ P5
+        vld1.32         {d2}, [r3], r1          @ P1
+        vld1.32         {d3}, [r3], r1          @ P2
+        vld1.32         {d4}, [r0], r1          @ P6
+        vld1.32         {d5}, [r3], r1          @ P3
+        vld1.32         {d6}, [r0], r1          @ P7
+        vshll.u8        q8, d1, #1              @ 2*P5
+        vshll.u8        q9, d2, #1              @ 2*P1
+        vld1.32         {d7}, [r3]              @ P4
+        vmovl.u8        q1, d3                  @ P2
+        vld1.32         {d20}, [r0]             @ P8
+        vmovl.u8        q11, d4                 @ P6
+        vdup.16         q12, r2                 @ pq
+        vmovl.u8        q13, d5                 @ P3
+        vmls.i16        q9, q1, d0[1]           @ 2*P1-5*P2
+        vmovl.u8        q1, d6                  @ P7
+        vshll.u8        q2, d5, #1              @ 2*P3
+        vmls.i16        q8, q11, d0[1]          @ 2*P5-5*P6
+        vmovl.u8        q3, d7                  @ P4
+        vmovl.u8        q10, d20                @ P8
+        vmla.i16        q8, q1, d0[1]           @ 2*P5-5*P6+5*P7
+        vmovl.u8        q1, d1                  @ P5
+        vmla.i16        q9, q13, d0[1]          @ 2*P1-5*P2+5*P3
+        vsub.i16        q13, q3, q1             @ P4-P5
+        vmls.i16        q2, q3, d0[1]           @ 2*P3-5*P4
+        vmls.i16        q8, q10, d0[0]          @ 2*P5-5*P6+5*P7-2*P8
+        vabs.s16        q10, q13
+        vshr.s16        q13, q13, #8            @ clip_sign
+        vmls.i16        q9, q3, d0[0]           @ 2*P1-5*P2+5*P3-2*P4
+        vshr.s16        q10, q10, #1            @ clip
+        vmla.i16        q2, q1, d0[1]           @ 2*P3-5*P4+5*P5
+        vrshr.s16       q8, q8, #3
+        vmls.i16        q2, q11, d0[0]          @ 2*P3-5*P4+5*P5-2*P6
+        vceq.i16        q11, q10, #0            @ test clip == 0
+        vrshr.s16       q9, q9, #3
+        vabs.s16        q8, q8                  @ a2
+        vabs.s16        q9, q9                  @ a1
+        vrshr.s16       q2, q2, #3
+        vcge.s16        q14, q9, q8             @ test a1 >= a2
+        vabs.s16        q15, q2                 @ a0
+        vshr.s16        q2, q2, #8              @ a0_sign
+        vbsl            q14, q8, q9             @ a3
+        vcge.s16        q8, q15, q12            @ test a0 >= pq
+        vsub.i16        q2, q13, q2             @ clip_sign - a0_sign
+        vqsub.u16       q9, q15, q14            @ a0 >= a3 ? a0-a3 : 0  (a0 > a3 in all cases where filtering is enabled, so it makes more sense to subtract this way round than to subtract the other way and take the abs)
+        vcge.s16        q12, q14, q15           @ test a3 >= a0
+        vorr            q8, q11, q8             @ test clip == 0 || a0 >= pq
+        vmul.i16        q0, q9, d0[1]           @ a0 >= a3 ? 5*(a0-a3) : 0
+        vorr            q9, q8, q12             @ test clip == 0 || a0 >= pq || a3 >= a0
+        vshl.i64        q11, q9, #16
+        vmov.32         r0, d18[1]              @ move to gp reg
+        vshr.u16        q0, q0, #3              @ a0 >= a3 ? (5*(a0-a3))>>3 : 0
+        vmov.32         r2, d19[1]
+        vshr.s64        q9, q11, #48
+        vcge.s16        q11, q0, q10
+        vorr            q8, q8, q9
+        and             r0, r0, r2
+        vbsl            q11, q10, q0            @ FFMIN(d, clip)
+        tst             r0, #1
+        bne             1f                      @ none of the 8 pixel pairs should be updated in this case
+        vbic            q0, q11, q8             @ set each d to zero if it should not be filtered
+        vmls.i16        q3, q0, q2              @ invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P4
+        vmla.i16        q1, q0, q2              @ invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P5
+        vqmovun.s16     d0, q3
+        vqmovun.s16     d1, q1
+        vst1.32         {d0}, [r3], r1
+        vst1.32         {d1}, [r3]
+1:      bx              lr
+endfunc
+
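+@ Multiplier table: loaded into d0, giving halfword lanes d0[0] = 2 and d0[1] = 5.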
+.align  5
+.Lcoeffs:
+.quad   0x00050002
+
+@ VC-1 in-loop deblocking filter for 8 pixel pairs at boundary of horizontally-neighbouring blocks
+@ On entry:
+@   r0 -> top-left pel of right block
+@   r1 = row stride, bytes
+@   r2 = PQUANT bitstream parameter
+function ff_vc1_h_loop_filter8_neon, export=1
+        push            {lr}
+        sub             r3, r0, #4              @ where to start reading
+        vldr            d0, .Lcoeffs
+        vld1.32         {d2}, [r3], r1          @ P1[0], P2[0]...
+        sub             r0, r0, #1              @ where to start writing
+        vld1.32         {d4}, [r3], r1
+        add             r12, r0, r1, lsl #2
+        vld1.32         {d3}, [r3], r1
+        vld1.32         {d5}, [r3], r1
+        vld1.32         {d6}, [r3], r1
+        vld1.32         {d16}, [r3], r1
+        vld1.32         {d7}, [r3], r1
+        vld1.32         {d17}, [r3]
+        vtrn.8          q1, q2                  @ P1[0], P1[1], P3[0]... P1[2], P1[3], P3[2]... P2[0], P2[1], P4[0]... P2[2], P2[3], P4[2]...
+        vdup.16         q9, r2                  @ pq
+        vtrn.16         d2, d3                  @ P1[0], P1[1], P1[2], P1[3], P5[0]... P3[0], P3[1], P3[2], P3[3], P7[0]...
+        vtrn.16         d4, d5                  @ P2[0], P2[1], P2[2], P2[3], P6[0]... P4[0], P4[1], P4[2], P4[3], P8[0]...
+        vtrn.8          q3, q8                  @ P1[4], P1[5], P3[4]... P1[6], P1[7], P3[6]... P2[4], P2[5], P4[4]... P2[6], P2[7], P4[6]...
+        vtrn.16         d6, d7                  @ P1[4], P1[5], P1[6], P1[7], P5[4]... P3[4], P3[5], P3[6], P3[7], P7[4]...
+        vtrn.16         d16, d17                @ P2[4], P2[5], P2[6], P2[7], P6[4]... P4[4], P4[5], P4[6], P4[7], P8[4]...
+        vtrn.32         d2, d6                  @ P1, P5
+        vtrn.32         d4, d16                 @ P2, P6
+        vtrn.32         d3, d7                  @ P3, P7
+        vtrn.32         d5, d17                 @ P4, P8
+        vshll.u8        q10, d2, #1             @ 2*P1
+        vshll.u8        q11, d6, #1             @ 2*P5
+        vmovl.u8        q12, d4                 @ P2
+        vmovl.u8        q13, d16                @ P6
+        vmovl.u8        q14, d3                 @ P3
+        vmls.i16        q10, q12, d0[1]         @ 2*P1-5*P2
+        vmovl.u8        q12, d7                 @ P7
+        vshll.u8        q1, d3, #1              @ 2*P3
+        vmls.i16        q11, q13, d0[1]         @ 2*P5-5*P6
+        vmovl.u8        q2, d5                  @ P4
+        vmovl.u8        q8, d17                 @ P8
+        vmla.i16        q11, q12, d0[1]         @ 2*P5-5*P6+5*P7
+        vmovl.u8        q3, d6                  @ P5
+        vmla.i16        q10, q14, d0[1]         @ 2*P1-5*P2+5*P3
+        vsub.i16        q12, q2, q3             @ P4-P5
+        vmls.i16        q1, q2, d0[1]           @ 2*P3-5*P4
+        vmls.i16        q11, q8, d0[0]          @ 2*P5-5*P6+5*P7-2*P8
+        vabs.s16        q8, q12
+        vshr.s16        q12, q12, #8            @ clip_sign
+        vmls.i16        q10, q2, d0[0]          @ 2*P1-5*P2+5*P3-2*P4
+        vshr.s16        q8, q8, #1              @ clip
+        vmla.i16        q1, q3, d0[1]           @ 2*P3-5*P4+5*P5
+        vrshr.s16       q11, q11, #3
+        vmls.i16        q1, q13, d0[0]          @ 2*P3-5*P4+5*P5-2*P6
+        vceq.i16        q13, q8, #0             @ test clip == 0
+        vrshr.s16       q10, q10, #3
+        vabs.s16        q11, q11                @ a2
+        vabs.s16        q10, q10                @ a1
+        vrshr.s16       q1, q1, #3
+        vcge.s16        q14, q10, q11           @ test a1 >= a2
+        vabs.s16        q15, q1                 @ a0
+        vshr.s16        q1, q1, #8              @ a0_sign
+        vbsl            q14, q11, q10           @ a3
+        vcge.s16        q9, q15, q9             @ test a0 >= pq
+        vsub.i16        q1, q12, q1             @ clip_sign - a0_sign
+        vqsub.u16       q10, q15, q14           @ a0 >= a3 ? a0-a3 : 0  (a0 > a3 in all cases where filtering is enabled, so it makes more sense to subtract this way round than the opposite way and then take the abs)
+        vcge.s16        q11, q14, q15           @ test a3 >= a0
+        vorr            q9, q13, q9             @ test clip == 0 || a0 >= pq
+        vmul.i16        q0, q10, d0[1]          @ a0 >= a3 ? 5*(a0-a3) : 0
+        vorr            q10, q9, q11            @ test clip == 0 || a0 >= pq || a3 >= a0
+        vmov.32         r2, d20[1]              @ move to gp reg
+        vshr.u16        q0, q0, #3              @ a0 >= a3 ? (5*(a0-a3))>>3 : 0
+        vmov.32         r3, d21[1]
+        vcge.s16        q10, q0, q8
+        and             r14, r2, r3
+        vbsl            q10, q8, q0             @ FFMIN(d, clip)
+        tst             r14, #1
+        bne             2f                      @ none of the 8 pixel pairs should be updated in this case
+        vbic            q0, q10, q9             @ set each d to zero if it should not be filtered because clip == 0 || a0 >= pq (a3 > a0 case already zeroed by saturating sub)
+        vmla.i16        q3, q0, q1              @ invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P5
+        vmls.i16        q2, q0, q1              @ invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P4
+        vqmovun.s16     d1, q3
+        vqmovun.s16     d0, q2
+        tst             r2, #1
+        bne             1f                      @ none of the first 4 pixel pairs should be updated if so
+        vst2.8          {d0[0], d1[0]}, [r0], r1
+        vst2.8          {d0[1], d1[1]}, [r0], r1
+        vst2.8          {d0[2], d1[2]}, [r0], r1
+        vst2.8          {d0[3], d1[3]}, [r0]
+1:      tst             r3, #1
+        bne             2f                      @ none of the second 4 pixel pairs should be updated if so
+        vst2.8          {d0[4], d1[4]}, [r12], r1
+        vst2.8          {d0[5], d1[5]}, [r12], r1
+        vst2.8          {d0[6], d1[6]}, [r12], r1
+        vst2.8          {d0[7], d1[7]}, [r12]
+2:      pop             {pc}
+endfunc
+
+@ VC-1 in-loop deblocking filter for 16 pixel pairs at boundary of vertically-neighbouring blocks
+@ On entry:
+@   r0 -> top-left pel of lower block
+@   r1 = row stride, bytes
+@   r2 = PQUANT bitstream parameter
+function ff_vc1_v_loop_filter16_neon, export=1
+        vpush           {d8-d15}
+        sub             r3, r0, r1, lsl #2
+        vldr            d0, .Lcoeffs
+        vld1.64         {q1}, [r0], r1          @ P5
+        vld1.64         {q2}, [r3], r1          @ P1
+        vld1.64         {q3}, [r3], r1          @ P2
+        vld1.64         {q4}, [r0], r1          @ P6
+        vld1.64         {q5}, [r3], r1          @ P3
+        vld1.64         {q6}, [r0], r1          @ P7
+        vshll.u8        q7, d2, #1              @ 2*P5[0..7]
+        vshll.u8        q8, d4, #1              @ 2*P1[0..7]
+        vld1.64         {q9}, [r3]              @ P4
+        vmovl.u8        q10, d6                 @ P2[0..7]
+        vld1.64         {q11}, [r0]             @ P8
+        vmovl.u8        q12, d8                 @ P6[0..7]
+        vdup.16         q13, r2                 @ pq
+        vshll.u8        q2, d5, #1              @ 2*P1[8..15]
+        vmls.i16        q8, q10, d0[1]          @ 2*P1[0..7]-5*P2[0..7]
+        vshll.u8        q10, d3, #1             @ 2*P5[8..15]
+        vmovl.u8        q3, d7                  @ P2[8..15]
+        vmls.i16        q7, q12, d0[1]          @ 2*P5[0..7]-5*P6[0..7]
+        vmovl.u8        q4, d9                  @ P6[8..15]
+        vmovl.u8        q14, d10                @ P3[0..7]
+        vmovl.u8        q15, d12                @ P7[0..7]
+        vmls.i16        q2, q3, d0[1]           @ 2*P1[8..15]-5*P2[8..15]
+        vshll.u8        q3, d10, #1             @ 2*P3[0..7]
+        vmls.i16        q10, q4, d0[1]          @ 2*P5[8..15]-5*P6[8..15]
+        vmovl.u8        q6, d13                 @ P7[8..15]
+        vmla.i16        q8, q14, d0[1]          @ 2*P1[0..7]-5*P2[0..7]+5*P3[0..7]
+        vmovl.u8        q14, d18                @ P4[0..7]
+        vmovl.u8        q9, d19                 @ P4[8..15]
+        vmla.i16        q7, q15, d0[1]          @ 2*P5[0..7]-5*P6[0..7]+5*P7[0..7]
+        vmovl.u8        q15, d11                @ P3[8..15]
+        vshll.u8        q5, d11, #1             @ 2*P3[8..15]
+        vmls.i16        q3, q14, d0[1]          @ 2*P3[0..7]-5*P4[0..7]
+        vmla.i16        q2, q15, d0[1]          @ 2*P1[8..15]-5*P2[8..15]+5*P3[8..15]
+        vmovl.u8        q15, d22                @ P8[0..7]
+        vmovl.u8        q11, d23                @ P8[8..15]
+        vmla.i16        q10, q6, d0[1]          @ 2*P5[8..15]-5*P6[8..15]+5*P7[8..15]
+        vmovl.u8        q6, d2                  @ P5[0..7]
+        vmovl.u8        q1, d3                  @ P5[8..15]
+        vmls.i16        q5, q9, d0[1]           @ 2*P3[8..15]-5*P4[8..15]
+        vmls.i16        q8, q14, d0[0]          @ 2*P1[0..7]-5*P2[0..7]+5*P3[0..7]-2*P4[0..7]
+        vmls.i16        q7, q15, d0[0]          @ 2*P5[0..7]-5*P6[0..7]+5*P7[0..7]-2*P8[0..7]
+        vsub.i16        q15, q14, q6            @ P4[0..7]-P5[0..7]
+        vmla.i16        q3, q6, d0[1]           @ 2*P3[0..7]-5*P4[0..7]+5*P5[0..7]
+        vrshr.s16       q8, q8, #3
+        vmls.i16        q2, q9, d0[0]           @ 2*P1[8..15]-5*P2[8..15]+5*P3[8..15]-2*P4[8..15]
+        vrshr.s16       q7, q7, #3
+        vmls.i16        q10, q11, d0[0]         @ 2*P5[8..15]-5*P6[8..15]+5*P7[8..15]-2*P8[8..15]
+        vabs.s16        q11, q15
+        vabs.s16        q8, q8                  @ a1[0..7]
+        vmla.i16        q5, q1, d0[1]           @ 2*P3[8..15]-5*P4[8..15]+5*P5[8..15]
+        vshr.s16        q15, q15, #8            @ clip_sign[0..7]
+        vrshr.s16       q2, q2, #3
+        vmls.i16        q3, q12, d0[0]          @ 2*P3[0..7]-5*P4[0..7]+5*P5[0..7]-2*P6[0..7]
+        vabs.s16        q7, q7                  @ a2[0..7]
+        vrshr.s16       q10, q10, #3
+        vsub.i16        q12, q9, q1             @ P4[8..15]-P5[8..15]
+        vshr.s16        q11, q11, #1            @ clip[0..7]
+        vmls.i16        q5, q4, d0[0]           @ 2*P3[8..15]-5*P4[8..15]+5*P5[8..15]-2*P6[8..15]
+        vcge.s16        q4, q8, q7              @ test a1[0..7] >= a2[0..7]
+        vabs.s16        q2, q2                  @ a1[8..15]
+        vrshr.s16       q3, q3, #3
+        vabs.s16        q10, q10                @ a2[8..15]
+        vbsl            q4, q7, q8              @ a3[0..7]
+        vabs.s16        q7, q12
+        vshr.s16        q8, q12, #8             @ clip_sign[8..15]
+        vrshr.s16       q5, q5, #3
+        vcge.s16        q12, q2, q10            @ test a1[8..15] >= a2[8..15]
+        vshr.s16        q7, q7, #1              @ clip[8..15]
+        vbsl            q12, q10, q2            @ a3[8..15]
+        vabs.s16        q2, q3                  @ a0[0..7]
+        vceq.i16        q10, q11, #0            @ test clip[0..7] == 0
+        vshr.s16        q3, q3, #8              @ a0_sign[0..7]
+        vsub.i16        q3, q15, q3             @ clip_sign[0..7] - a0_sign[0..7]
+        vcge.s16        q15, q2, q13            @ test a0[0..7] >= pq
+        vorr            q10, q10, q15           @ test clip[0..7] == 0 || a0[0..7] >= pq
+        vqsub.u16       q15, q2, q4             @ a0[0..7] >= a3[0..7] ? a0[0..7]-a3[0..7] : 0  (a0 > a3 in all cases where filtering is enabled, so it makes more sense to subtract this way round than the opposite way and then take the abs)
+        vcge.s16        q2, q4, q2              @ test a3[0..7] >= a0[0..7]
+        vabs.s16        q4, q5                  @ a0[8..15]
+        vshr.s16        q5, q5, #8              @ a0_sign[8..15]
+        vmul.i16        q15, q15, d0[1]         @ a0[0..7] >= a3[0..7] ? 5*(a0[0..7]-a3[0..7]) : 0
+        vcge.s16        q13, q4, q13            @ test a0[8..15] >= pq
+        vorr            q2, q10, q2             @ test clip[0..7] == 0 || a0[0..7] >= pq || a3[0..7] >= a0[0..7]
+        vsub.i16        q5, q8, q5              @ clip_sign[8..15] - a0_sign[8..15]
+        vceq.i16        q8, q7, #0              @ test clip[8..15] == 0
+        vshr.u16        q15, q15, #3            @ a0[0..7] >= a3[0..7] ? (5*(a0[0..7]-a3[0..7]))>>3 : 0
+        vmov.32         r0, d4[1]               @ move to gp reg
+        vorr            q8, q8, q13             @ test clip[8..15] == 0 || a0[8..15] >= pq
+        vqsub.u16       q13, q4, q12            @ a0[8..15] >= a3[8..15] ? a0[8..15]-a3[8..15] : 0  (a0 > a3 in all cases where filtering is enabled, so it makes more sense to subtract this way round than the opposite way and then take the abs)
+        vmov.32         r2, d5[1]
+        vcge.s16        q4, q12, q4             @ test a3[8..15] >= a0[8..15]
+        vshl.i64        q2, q2, #16
+        vcge.s16        q12, q15, q11
+        vmul.i16        q0, q13, d0[1]          @ a0[8..15] >= a3[8..15] ? 5*(a0[8..15]-a3[8..15]) : 0
+        vorr            q4, q8, q4              @ test clip[8..15] == 0 || a0[8..15] >= pq || a3[8..15] >= a0[8..15]
+        vshr.s64        q2, q2, #48
+        and             r0, r0, r2
+        vbsl            q12, q11, q15           @ FFMIN(d[0..7], clip[0..7])
+        vshl.i64        q11, q4, #16
+        vmov.32         r2, d8[1]
+        vshr.u16        q0, q0, #3              @ a0[8..15] >= a3[8..15] ? (5*(a0[8..15]-a3[8..15]))>>3 : 0
+        vorr            q2, q10, q2
+        vmov.32         r12, d9[1]
+        vshr.s64        q4, q11, #48
+        vcge.s16        q10, q0, q7
+        vbic            q2, q12, q2             @ set each d[0..7] to zero if it should not be filtered because clip[0..7] == 0 || a0[0..7] >= pq (a3 > a0 case already zeroed by saturating sub)
+        vorr            q4, q8, q4
+        and             r2, r2, r12
+        vbsl            q10, q7, q0             @ FFMIN(d[8..15], clip[8..15])
+        vmls.i16        q14, q2, q3             @ invert d[0..7] depending on clip_sign[0..7] & a0_sign[0..7], or zero it if they match, and accumulate into P4[0..7]
+        and             r0, r0, r2
+        vbic            q0, q10, q4             @ set each d[8..15] to zero if it should not be filtered because clip[8..15] == 0 || a0[8..15] >= pq (a3 > a0 case already zeroed by saturating sub)
+        tst             r0, #1
+        bne             1f                      @ none of the 16 pixel pairs should be updated in this case
+        vmla.i16        q6, q2, q3              @ invert d[0..7] depending on clip_sign[0..7] & a0_sign[0..7], or zero it if they match, and accumulate into P5[0..7]
+        vmls.i16        q9, q0, q5              @ invert d[8..15] depending on clip_sign[8..15] & a0_sign[8..15], or zero it if they match, and accumulate into P4[8..15]
+        vqmovun.s16     d4, q14
+        vmla.i16        q1, q0, q5              @ invert d[8..15] depending on clip_sign[8..15] & a0_sign[8..15], or zero it if they match, and accumulate into P5[8..15]
+        vqmovun.s16     d0, q6
+        vqmovun.s16     d5, q9
+        vqmovun.s16     d1, q1
+        vst1.64         {q2}, [r3], r1
+        vst1.64         {q0}, [r3]
+1:      vpop            {d8-d15}
+        bx              lr
+endfunc
+
+@ VC-1 in-loop deblocking filter for 16 pixel pairs at boundary of horizontally-neighbouring blocks
+@ On entry:
+@   r0 -> top-left pel of right block
+@   r1 = row stride, bytes
+@   r2 = PQUANT bitstream parameter
+function ff_vc1_h_loop_filter16_neon, export=1
+        push            {r4-r6,lr}
+        vpush           {d8-d15}
+        sub             r3, r0, #4              @ where to start reading
+        vldr            d0, .Lcoeffs
+        vld1.32         {d2}, [r3], r1          @ P1[0], P2[0]...
+        sub             r0, r0, #1              @ where to start writing
+        vld1.32         {d3}, [r3], r1
+        add             r4, r0, r1, lsl #2
+        vld1.32         {d10}, [r3], r1
+        vld1.32         {d11}, [r3], r1
+        vld1.32         {d16}, [r3], r1
+        vld1.32         {d4}, [r3], r1
+        vld1.32         {d8}, [r3], r1
+        vtrn.8          d2, d3                  @ P1[0], P1[1], P3[0]... P2[0], P2[1], P4[0]...
+        vld1.32         {d14}, [r3], r1
+        vld1.32         {d5}, [r3], r1
+        vtrn.8          d10, d11                @ P1[2], P1[3], P3[2]... P2[2], P2[3], P4[2]...
+        vld1.32         {d6}, [r3], r1
+        vld1.32         {d12}, [r3], r1
+        vtrn.8          d16, d4                 @ P1[4], P1[5], P3[4]... P2[4], P2[5], P4[4]...
+        vld1.32         {d13}, [r3], r1
+        vtrn.16         d2, d10                 @ P1[0], P1[1], P1[2], P1[3], P5[0]... P3[0], P3[1], P3[2], P3[3], P7[0]...
+        vld1.32         {d1}, [r3], r1
+        vtrn.8          d8, d14                 @ P1[6], P1[7], P3[6]... P2[6], P2[7], P4[6]...
+        vld1.32         {d7}, [r3], r1
+        vtrn.16         d3, d11                 @ P2[0], P2[1], P2[2], P2[3], P6[0]... P4[0], P4[1], P4[2], P4[3], P8[0]...
+        vld1.32         {d9}, [r3], r1
+        vtrn.8          d5, d6                  @ P1[8], P1[9], P3[8]... P2[8], P2[9], P4[8]...
+        vld1.32         {d15}, [r3]
+        vtrn.16         d16, d8                 @ P1[4], P1[5], P1[6], P1[7], P5[4]... P3[4], P3[5], P3[6], P3[7], P7[4]...
+        vtrn.16         d4, d14                 @ P2[4], P2[5], P2[6], P2[7], P6[4]... P4[4], P4[5], P4[6], P4[7], P8[4]...
+        vtrn.8          d12, d13                @ P1[10], P1[11], P3[10]... P2[10], P2[11], P4[10]...
+        vdup.16         q9, r2                  @ pq
+        vtrn.8          d1, d7                  @ P1[12], P1[13], P3[12]... P2[12], P2[13], P4[12]...
+        vtrn.32         d2, d16                 @ P1[0..7], P5[0..7]
+        vtrn.16         d5, d12                 @ P1[8], P1[9], P1[10], P1[11], P5[8]... P3[8], P3[9], P3[10], P3[11], P7[8]...
+        vtrn.16         d6, d13                 @ P2[8], P2[9], P2[10], P2[11], P6[8]... P4[8], P4[9], P4[10], P4[11], P8[8]...
+        vtrn.8          d9, d15                 @ P1[14], P1[15], P3[14]... P2[14], P2[15], P4[14]...
+        vtrn.32         d3, d4                  @ P2[0..7], P6[0..7]
+        vshll.u8        q10, d2, #1             @ 2*P1[0..7]
+        vtrn.32         d10, d8                 @ P3[0..7], P7[0..7]
+        vshll.u8        q11, d16, #1            @ 2*P5[0..7]
+        vtrn.32         d11, d14                @ P4[0..7], P8[0..7]
+        vtrn.16         d1, d9                  @ P1[12], P1[13], P1[14], P1[15], P5[12]... P3[12], P3[13], P3[14], P3[15], P7[12]...
+        vtrn.16         d7, d15                 @ P2[12], P2[13], P2[14], P2[15], P6[12]... P4[12], P4[13], P4[14], P4[15], P8[12]...
+        vmovl.u8        q1, d3                  @ P2[0..7]
+        vmovl.u8        q12, d4                 @ P6[0..7]
+        vtrn.32         d5, d1                  @ P1[8..15], P5[8..15]
+        vtrn.32         d6, d7                  @ P2[8..15], P6[8..15]
+        vtrn.32         d12, d9                 @ P3[8..15], P7[8..15]
+        vtrn.32         d13, d15                @ P4[8..15], P8[8..15]
+        vmls.i16        q10, q1, d0[1]          @ 2*P1[0..7]-5*P2[0..7]
+        vmovl.u8        q1, d10                 @ P3[0..7]
+        vshll.u8        q2, d5, #1              @ 2*P1[8..15]
+        vshll.u8        q13, d1, #1             @ 2*P5[8..15]
+        vmls.i16        q11, q12, d0[1]         @ 2*P5[0..7]-5*P6[0..7]
+        vmovl.u8        q14, d6                 @ P2[8..15]
+        vmovl.u8        q3, d7                  @ P6[8..15]
+        vmovl.u8        q15, d8                 @ P7[0..7]
+        vmla.i16        q10, q1, d0[1]          @ 2*P1[0..7]-5*P2[0..7]+5*P3[0..7]
+        vmovl.u8        q1, d12                 @ P3[8..15]
+        vmls.i16        q2, q14, d0[1]          @ 2*P1[8..15]-5*P2[8..15]
+        vmovl.u8        q4, d9                  @ P7[8..15]
+        vshll.u8        q14, d10, #1            @ 2*P3[0..7]
+        vmls.i16        q13, q3, d0[1]          @ 2*P5[8..15]-5*P6[8..15]
+        vmovl.u8        q5, d11                 @ P4[0..7]
+        vmla.i16        q11, q15, d0[1]         @ 2*P5[0..7]-5*P6[0..7]+5*P7[0..7]
+        vshll.u8        q15, d12, #1            @ 2*P3[8..15]
+        vmovl.u8        q6, d13                 @ P4[8..15]
+        vmla.i16        q2, q1, d0[1]           @ 2*P1[8..15]-5*P2[8..15]+5*P3[8..15]
+        vmovl.u8        q1, d14                 @ P8[0..7]
+        vmovl.u8        q7, d15                 @ P8[8..15]
+        vmla.i16        q13, q4, d0[1]          @ 2*P5[8..15]-5*P6[8..15]+5*P7[8..15]
+        vmovl.u8        q4, d16                 @ P5[0..7]
+        vmovl.u8        q8, d1                  @ P5[8..15]
+        vmls.i16        q14, q5, d0[1]          @ 2*P3[0..7]-5*P4[0..7]
+        vmls.i16        q15, q6, d0[1]          @ 2*P3[8..15]-5*P4[8..15]
+        vmls.i16        q10, q5, d0[0]          @ 2*P1[0..7]-5*P2[0..7]+5*P3[0..7]-2*P4[0..7]
+        vmls.i16        q11, q1, d0[0]          @ 2*P5[0..7]-5*P6[0..7]+5*P7[0..7]-2*P8[0..7]
+        vsub.i16        q1, q5, q4              @ P4[0..7]-P5[0..7]
+        vmls.i16        q2, q6, d0[0]           @ 2*P1[8..15]-5*P2[8..15]+5*P3[8..15]-2*P4[8..15]
+        vrshr.s16       q10, q10, #3
+        vmls.i16        q13, q7, d0[0]          @ 2*P5[8..15]-5*P6[8..15]+5*P7[8..15]-2*P8[8..15]
+        vsub.i16        q7, q6, q8              @ P4[8..15]-P5[8..15]
+        vrshr.s16       q11, q11, #3
+        vmla.i16        q14, q4, d0[1]          @ 2*P3[0..7]-5*P4[0..7]+5*P5[0..7]
+        vrshr.s16       q2, q2, #3
+        vmla.i16        q15, q8, d0[1]          @ 2*P3[8..15]-5*P4[8..15]+5*P5[8..15]
+        vabs.s16        q10, q10                @ a1[0..7]
+        vrshr.s16       q13, q13, #3
+        vmls.i16        q15, q3, d0[0]          @ 2*P3[8..15]-5*P4[8..15]+5*P5[8..15]-2*P6[8..15]
+        vabs.s16        q3, q11                 @ a2[0..7]
+        vabs.s16        q2, q2                  @ a1[8..15]
+        vmls.i16        q14, q12, d0[0]         @ 2*P3[0..7]-5*P4[0..7]+5*P5[0..7]-2*P6[0..7]
+        vabs.s16        q11, q1
+        vabs.s16        q12, q13                @ a2[8..15]
+        vcge.s16        q13, q10, q3            @ test a1[0..7] >= a2[0..7]
+        vshr.s16        q1, q1, #8              @ clip_sign[0..7]
+        vrshr.s16       q15, q15, #3
+        vshr.s16        q11, q11, #1            @ clip[0..7]
+        vrshr.s16       q14, q14, #3
+        vbsl            q13, q3, q10            @ a3[0..7]
+        vcge.s16        q3, q2, q12             @ test a1[8..15] >= a2[8..15]
+        vabs.s16        q10, q15                @ a0[8..15]
+        vshr.s16        q15, q15, #8            @ a0_sign[8..15]
+        vbsl            q3, q12, q2             @ a3[8..15]
+        vabs.s16        q2, q14                 @ a0[0..7]
+        vabs.s16        q12, q7
+        vshr.s16        q7, q7, #8              @ clip_sign[8..15]
+        vshr.s16        q14, q14, #8            @ a0_sign[0..7]
+        vshr.s16        q12, q12, #1            @ clip[8..15]
+        vsub.i16        q7, q7, q15             @ clip_sign[8..15] - a0_sign[8..15]
+        vqsub.u16       q15, q10, q3            @ a0[8..15] >= a3[8..15] ? a0[8..15]-a3[8..15] : 0  (a0 > a3 in all cases where filtering is enabled, so it makes more sense to subtract this way round than the opposite way and then take the abs)
+        vcge.s16        q3, q3, q10             @ test a3[8..15] >= a0[8..15]
+        vcge.s16        q10, q10, q9            @ test a0[8..15] >= pq
+        vcge.s16        q9, q2, q9              @ test a0[0..7] >= pq
+        vsub.i16        q1, q1, q14             @ clip_sign[0..7] - a0_sign[0..7]
+        vqsub.u16       q14, q2, q13            @ a0[0..7] >= a3[0..7] ? a0[0..7]-a3[0..7] : 0  (a0 > a3 in all cases where filtering is enabled, so it makes more sense to subtract this way round than the opposite way and then take the abs)
+        vcge.s16        q2, q13, q2             @ test a3[0..7] >= a0[0..7]
+        vmul.i16        q13, q15, d0[1]         @ a0[8..15] >= a3[8..15] ? 5*(a0[8..15]-a3[8..15]) : 0
+        vceq.i16        q15, q11, #0            @ test clip[0..7] == 0
+        vmul.i16        q0, q14, d0[1]          @ a0[0..7] >= a3[0..7] ? 5*(a0[0..7]-a3[0..7]) : 0
+        vorr            q9, q15, q9             @ test clip[0..7] == 0 || a0[0..7] >= pq
+        vceq.i16        q14, q12, #0            @ test clip[8..15] == 0
+        vshr.u16        q13, q13, #3            @ a0[8..15] >= a3[8..15] ? (5*(a0[8..15]-a3[8..15]))>>3 : 0
+        vorr            q2, q9, q2              @ test clip[0..7] == 0 || a0[0..7] >= pq || a3[0..7] >= a0[0..7]
+        vshr.u16        q0, q0, #3              @ a0[0..7] >= a3[0..7] ? (5*(a0[0..7]-a3[0..7]))>>3 : 0
+        vorr            q10, q14, q10           @ test clip[8..15] == 0 || a0[8..15] >= pq
+        vcge.s16        q14, q13, q12
+        vmov.32         r2, d4[1]               @ move to gp reg
+        vorr            q3, q10, q3             @ test clip[8..15] == 0 || a0[8..15] >= pq || a3[8..15] >= a0[8..15]
+        vmov.32         r3, d5[1]
+        vcge.s16        q2, q0, q11
+        vbsl            q14, q12, q13           @ FFMIN(d[8..15], clip[8..15])
+        vbsl            q2, q11, q0             @ FFMIN(d[0..7], clip[0..7])
+        vmov.32         r5, d6[1]
+        vbic            q0, q14, q10            @ set each d[8..15] to zero if it should not be filtered because clip[8..15] == 0 || a0[8..15] >= pq (a3 > a0 case already zeroed by saturating sub)
+        vmov.32         r6, d7[1]
+        and             r12, r2, r3
+        vbic            q2, q2, q9              @ set each d[0..7] to zero if it should not be filtered because clip[0..7] == 0 || a0[0..7] >= pq (a3 > a0 case already zeroed by saturating sub)
+        vmls.i16        q6, q0, q7              @ invert d[8..15] depending on clip_sign[8..15] & a0_sign[8..15], or zero it if they match, and accumulate into P4
+        vmls.i16        q5, q2, q1              @ invert d[0..7] depending on clip_sign[0..7] & a0_sign[0..7], or zero it if they match, and accumulate into P4
+        and             r14, r5, r6
+        vmla.i16        q4, q2, q1              @ invert d[0..7] depending on clip_sign[0..7] & a0_sign[0..7], or zero it if they match, and accumulate into P5
+        and             r12, r12, r14
+        vqmovun.s16     d4, q6
+        vmla.i16        q8, q0, q7              @ invert d[8..15] depending on clip_sign[8..15] & a0_sign[8..15], or zero it if they match, and accumulate into P5
+        tst             r12, #1
+        bne             4f                      @ none of the 16 pixel pairs should be updated in this case
+        vqmovun.s16     d2, q5
+        vqmovun.s16     d3, q4
+        vqmovun.s16     d5, q8
+        tst             r2, #1
+        bne             1f
+        vst2.8          {d2[0], d3[0]}, [r0], r1
+        vst2.8          {d2[1], d3[1]}, [r0], r1
+        vst2.8          {d2[2], d3[2]}, [r0], r1
+        vst2.8          {d2[3], d3[3]}, [r0]
+1:      add             r0, r4, r1, lsl #2
+        tst             r3, #1
+        bne             2f
+        vst2.8          {d2[4], d3[4]}, [r4], r1
+        vst2.8          {d2[5], d3[5]}, [r4], r1
+        vst2.8          {d2[6], d3[6]}, [r4], r1
+        vst2.8          {d2[7], d3[7]}, [r4]
+2:      add             r4, r0, r1, lsl #2
+        tst             r5, #1
+        bne             3f
+        vst2.8          {d4[0], d5[0]}, [r0], r1
+        vst2.8          {d4[1], d5[1]}, [r0], r1
+        vst2.8          {d4[2], d5[2]}, [r0], r1
+        vst2.8          {d4[3], d5[3]}, [r0]
+3:      tst             r6, #1
+        bne             4f
+        vst2.8          {d4[4], d5[4]}, [r4], r1
+        vst2.8          {d4[5], d5[5]}, [r4], r1
+        vst2.8          {d4[6], d5[6]}, [r4], r1
+        vst2.8          {d4[7], d5[7]}, [r4]
+4:      vpop            {d8-d15}
+        pop             {r4-r6,pc}
+endfunc
-- 
2.25.1

_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".

^ permalink raw reply	[flat|nested] 55+ messages in thread

* [FFmpeg-devel] [PATCH 07/10] avcodec/vc1: Arm 64-bit NEON inverse transform fast paths
  2022-03-25 18:52 ` [FFmpeg-devel] [PATCH v2 00/10] " Ben Avison
                     ` (5 preceding siblings ...)
  2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 06/10] avcodec/vc1: Arm 32-bit " Ben Avison
@ 2022-03-25 18:52   ` Ben Avison
  2022-03-30 13:49     ` Martin Storsjö
  2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 08/10] avcodec/idctdsp: Arm 64-bit NEON block add and clamp " Ben Avison
                     ` (2 subsequent siblings)
  9 siblings, 1 reply; 55+ messages in thread
From: Ben Avison @ 2022-03-25 18:52 UTC (permalink / raw)
  To: ffmpeg-devel; +Cc: Ben Avison

checkasm benchmarks on a 1.5 GHz Cortex-A72 are as follows.

vc1dsp.vc1_inv_trans_4x4_c: 158.2
vc1dsp.vc1_inv_trans_4x4_neon: 65.7
vc1dsp.vc1_inv_trans_4x4_dc_c: 86.5
vc1dsp.vc1_inv_trans_4x4_dc_neon: 26.5
vc1dsp.vc1_inv_trans_4x8_c: 335.2
vc1dsp.vc1_inv_trans_4x8_neon: 106.2
vc1dsp.vc1_inv_trans_4x8_dc_c: 151.2
vc1dsp.vc1_inv_trans_4x8_dc_neon: 25.5
vc1dsp.vc1_inv_trans_8x4_c: 365.7
vc1dsp.vc1_inv_trans_8x4_neon: 97.2
vc1dsp.vc1_inv_trans_8x4_dc_c: 139.7
vc1dsp.vc1_inv_trans_8x4_dc_neon: 16.5
vc1dsp.vc1_inv_trans_8x8_c: 547.7
vc1dsp.vc1_inv_trans_8x8_neon: 137.0
vc1dsp.vc1_inv_trans_8x8_dc_c: 268.2
vc1dsp.vc1_inv_trans_8x8_dc_neon: 30.5
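
These figures come from checkasm's benchmark mode; as a rough sketch
(the exact invocation depends on the build tree), they can be
reproduced with:

  make checkasm
  ./tests/checkasm/checkasm --test=vc1dsp --bench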

Signed-off-by: Ben Avison <bavison@riscosopen.org>
---
 libavcodec/aarch64/vc1dsp_init_aarch64.c |  19 +
 libavcodec/aarch64/vc1dsp_neon.S         | 678 +++++++++++++++++++++++
 2 files changed, 697 insertions(+)

diff --git a/libavcodec/aarch64/vc1dsp_init_aarch64.c b/libavcodec/aarch64/vc1dsp_init_aarch64.c
index edfb296b75..b672b2aa99 100644
--- a/libavcodec/aarch64/vc1dsp_init_aarch64.c
+++ b/libavcodec/aarch64/vc1dsp_init_aarch64.c
@@ -25,6 +25,16 @@
 
 #include "config.h"
 
+void ff_vc1_inv_trans_8x8_neon(int16_t *block);
+void ff_vc1_inv_trans_8x4_neon(uint8_t *dest, ptrdiff_t stride, int16_t *block);
+void ff_vc1_inv_trans_4x8_neon(uint8_t *dest, ptrdiff_t stride, int16_t *block);
+void ff_vc1_inv_trans_4x4_neon(uint8_t *dest, ptrdiff_t stride, int16_t *block);
+
+void ff_vc1_inv_trans_8x8_dc_neon(uint8_t *dest, ptrdiff_t stride, int16_t *block);
+void ff_vc1_inv_trans_8x4_dc_neon(uint8_t *dest, ptrdiff_t stride, int16_t *block);
+void ff_vc1_inv_trans_4x8_dc_neon(uint8_t *dest, ptrdiff_t stride, int16_t *block);
+void ff_vc1_inv_trans_4x4_dc_neon(uint8_t *dest, ptrdiff_t stride, int16_t *block);
+
 void ff_vc1_v_loop_filter4_neon(uint8_t *src, int stride, int pq);
 void ff_vc1_h_loop_filter4_neon(uint8_t *src, int stride, int pq);
 void ff_vc1_v_loop_filter8_neon(uint8_t *src, int stride, int pq);
@@ -46,6 +56,15 @@ av_cold void ff_vc1dsp_init_aarch64(VC1DSPContext *dsp)
     int cpu_flags = av_get_cpu_flags();
 
     if (have_neon(cpu_flags)) {
+        dsp->vc1_inv_trans_8x8 = ff_vc1_inv_trans_8x8_neon;
+        dsp->vc1_inv_trans_8x4 = ff_vc1_inv_trans_8x4_neon;
+        dsp->vc1_inv_trans_4x8 = ff_vc1_inv_trans_4x8_neon;
+        dsp->vc1_inv_trans_4x4 = ff_vc1_inv_trans_4x4_neon;
+        dsp->vc1_inv_trans_8x8_dc = ff_vc1_inv_trans_8x8_dc_neon;
+        dsp->vc1_inv_trans_8x4_dc = ff_vc1_inv_trans_8x4_dc_neon;
+        dsp->vc1_inv_trans_4x8_dc = ff_vc1_inv_trans_4x8_dc_neon;
+        dsp->vc1_inv_trans_4x4_dc = ff_vc1_inv_trans_4x4_dc_neon;
+
         dsp->vc1_v_loop_filter4  = ff_vc1_v_loop_filter4_neon;
         dsp->vc1_h_loop_filter4  = ff_vc1_h_loop_filter4_neon;
         dsp->vc1_v_loop_filter8  = ff_vc1_v_loop_filter8_neon;
diff --git a/libavcodec/aarch64/vc1dsp_neon.S b/libavcodec/aarch64/vc1dsp_neon.S
index 70391b4179..e68e0fce53 100644
--- a/libavcodec/aarch64/vc1dsp_neon.S
+++ b/libavcodec/aarch64/vc1dsp_neon.S
@@ -22,7 +22,685 @@
 
 #include "libavutil/aarch64/asm.S"
 
+// VC-1 8x8 inverse transform
+// On entry:
+//   x0 -> array of 16-bit inverse transform coefficients, in column-major order
+// On exit:
+//   array at x0 updated to hold transformed block; also now held in row-major order
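+// For orientation, the 8-point transform vectorised below is roughly
+// (a paraphrase, not the exact libavcodec C source; u1..u4 are ad-hoc
+// names, and src[] indices are in elements, eliding the stride of 8):
+//   even: t1 = 12*(src[0] + src[4]);   t2 = 12*(src[0] - src[4]);
+//         t3 = 16*src[2] + 6*src[6];   t4 = 6*src[2] - 16*src[6];
+//         t5 = t1 + t3;  t6 = t2 + t4;  t7 = t2 - t4;  t8 = t1 - t3;
+//   odd:  u1 = 16*src[1] + 15*src[3] +  9*src[5] +  4*src[7];
+//         u2 = 15*src[1] -  4*src[3] - 16*src[5] -  9*src[7];
+//         u3 =  9*src[1] - 16*src[3] +  4*src[5] + 15*src[7];
+//         u4 =  4*src[1] -  9*src[3] + 15*src[5] - 16*src[7];
+//   dst[0..3] = (t5 + u1, t6 + u2, t7 + u3, t8 + u4), each (+4) >> 3
+//   dst[4..7] = (t8 - u4, t7 - u3, t6 - u2, t5 - u1), each (+4) >> 3
+// The second pass is the same but rounds with (+64) >> 7, or (+65) >> 7
+// on the difference terms; the code below keeps the even-part values
+// halved throughout to gain intermediate headroom.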
+function ff_vc1_inv_trans_8x8_neon, export=1
+        ld1             {v1.16b, v2.16b}, [x0], #32
+        ld1             {v3.16b, v4.16b}, [x0], #32
+        ld1             {v5.16b, v6.16b}, [x0], #32
+        shl             v1.8h, v1.8h, #2        //         8/2 * src[0]
+        sub             x1, x0, #3*32
+        ld1             {v16.16b, v17.16b}, [x0]
+        shl             v7.8h, v2.8h, #4        //          16 * src[8]
+        shl             v18.8h, v2.8h, #2       //           4 * src[8]
+        shl             v19.8h, v4.8h, #4       //                        16 * src[24]
+        ldr             d0, .Lcoeffs_it8
+        shl             v5.8h, v5.8h, #2        //                                      8/2 * src[32]
+        shl             v20.8h, v6.8h, #4       //                                       16 * src[40]
+        shl             v21.8h, v6.8h, #2       //                                        4 * src[40]
+        shl             v22.8h, v17.8h, #4      //                                                      16 * src[56]
+        ssra            v20.8h, v19.8h, #2      //                         4 * src[24] + 16 * src[40]
+        mul             v23.8h, v3.8h, v0.h[0]  //                       6/2 * src[16]
+        sub             v19.8h, v19.8h, v21.8h  //                        16 * src[24] -  4 * src[40]
+        ssra            v7.8h, v22.8h, #2       //          16 * src[8]                               +  4 * src[56]
+        sub             v18.8h, v22.8h, v18.8h  //        -  4 * src[8]                               + 16 * src[56]
+        shl             v3.8h, v3.8h, #3        //                      16/2 * src[16]
+        mls             v20.8h, v2.8h, v0.h[2]  //        - 15 * src[8] +  4 * src[24] + 16 * src[40]
+        ssra            v1.8h, v1.8h, #1        //        12/2 * src[0]
+        ssra            v5.8h, v5.8h, #1        //                                     12/2 * src[32]
+        mla             v7.8h, v4.8h, v0.h[2]   //          16 * src[8] + 15 * src[24]                +  4 * src[56]
+        shl             v21.8h, v16.8h, #3      //                                                    16/2 * src[48]
+        mls             v19.8h, v2.8h, v0.h[1]  //        -  9 * src[8] + 16 * src[24] -  4 * src[40]
+        sub             v2.8h, v23.8h, v21.8h   // t4/2 =                6/2 * src[16]              - 16/2 * src[48]
+        mla             v18.8h, v4.8h, v0.h[1]  //        -  4 * src[8] +  9 * src[24]                + 16 * src[56]
+        add             v4.8h, v1.8h, v5.8h     // t1/2 = 12/2 * src[0]              + 12/2 * src[32]
+        sub             v1.8h, v1.8h, v5.8h     // t2/2 = 12/2 * src[0]              - 12/2 * src[32]
+        mla             v3.8h, v16.8h, v0.h[0]  // t3/2 =               16/2 * src[16]              +  6/2 * src[48]
+        mla             v7.8h, v6.8h, v0.h[1]   //  t1  =   16 * src[8] + 15 * src[24] +  9 * src[40] +  4 * src[56]
+        add             v5.8h, v1.8h, v2.8h     // t6/2 = t2/2 + t4/2
+        sub             v16.8h, v1.8h, v2.8h    // t7/2 = t2/2 - t4/2
+        mla             v20.8h, v17.8h, v0.h[1] // -t2  = - 15 * src[8] +  4 * src[24] + 16 * src[40] +  9 * src[56]
+        add             v21.8h, v1.8h, v2.8h    // t6/2 = t2/2 + t4/2
+        add             v22.8h, v4.8h, v3.8h    // t5/2 = t1/2 + t3/2
+        mls             v19.8h, v17.8h, v0.h[2] // -t3  = -  9 * src[8] + 16 * src[24] -  4 * src[40] - 15 * src[56]
+        sub             v17.8h, v4.8h, v3.8h    // t8/2 = t1/2 - t3/2
+        add             v23.8h, v4.8h, v3.8h    // t5/2 = t1/2 + t3/2
+        mls             v18.8h, v6.8h, v0.h[2]  // -t4  = -  4 * src[8] +  9 * src[24] - 15 * src[40] + 16 * src[56]
+        sub             v1.8h, v1.8h, v2.8h     // t7/2 = t2/2 - t4/2
+        sub             v2.8h, v4.8h, v3.8h     // t8/2 = t1/2 - t3/2
+        neg             v3.8h, v7.8h            // -t1
+        neg             v4.8h, v20.8h           // +t2
+        neg             v6.8h, v19.8h           // +t3
+        ssra            v22.8h, v7.8h, #1       // (t5 + t1) >> 1
+        ssra            v1.8h, v19.8h, #1       // (t7 - t3) >> 1
+        neg             v7.8h, v18.8h           // +t4
+        ssra            v5.8h, v4.8h, #1        // (t6 + t2) >> 1
+        ssra            v16.8h, v6.8h, #1       // (t7 + t3) >> 1
+        ssra            v2.8h, v18.8h, #1       // (t8 - t4) >> 1
+        ssra            v17.8h, v7.8h, #1       // (t8 + t4) >> 1
+        ssra            v21.8h, v20.8h, #1      // (t6 - t2) >> 1
+        ssra            v23.8h, v3.8h, #1       // (t5 - t1) >> 1
+        srshr           v3.8h, v22.8h, #2       // (t5 + t1 + 4) >> 3
+        srshr           v4.8h, v5.8h, #2        // (t6 + t2 + 4) >> 3
+        srshr           v5.8h, v16.8h, #2       // (t7 + t3 + 4) >> 3
+        srshr           v6.8h, v17.8h, #2       // (t8 + t4 + 4) >> 3
+        srshr           v2.8h, v2.8h, #2        // (t8 - t4 + 4) >> 3
+        srshr           v1.8h, v1.8h, #2        // (t7 - t3 + 4) >> 3
+        srshr           v7.8h, v21.8h, #2       // (t6 - t2 + 4) >> 3
+        srshr           v16.8h, v23.8h, #2      // (t5 - t1 + 4) >> 3
+        trn2            v17.8h, v3.8h, v4.8h
+        trn2            v18.8h, v5.8h, v6.8h
+        trn2            v19.8h, v2.8h, v1.8h
+        trn2            v20.8h, v7.8h, v16.8h
+        trn1            v21.4s, v17.4s, v18.4s
+        trn2            v17.4s, v17.4s, v18.4s
+        trn1            v18.4s, v19.4s, v20.4s
+        trn2            v19.4s, v19.4s, v20.4s
+        trn1            v3.8h, v3.8h, v4.8h
+        trn2            v4.2d, v21.2d, v18.2d
+        trn1            v20.2d, v17.2d, v19.2d
+        trn1            v5.8h, v5.8h, v6.8h
+        trn1            v1.8h, v2.8h, v1.8h
+        trn1            v2.8h, v7.8h, v16.8h
+        trn1            v6.2d, v21.2d, v18.2d
+        trn2            v7.2d, v17.2d, v19.2d
+        shl             v16.8h, v20.8h, #4      //                        16 * src[24]
+        shl             v17.8h, v4.8h, #4       //                                       16 * src[40]
+        trn1            v18.4s, v3.4s, v5.4s
+        trn1            v19.4s, v1.4s, v2.4s
+        shl             v21.8h, v7.8h, #4       //                                                      16 * src[56]
+        shl             v22.8h, v6.8h, #2       //           4 * src[8]
+        shl             v23.8h, v4.8h, #2       //                                        4 * src[40]
+        trn2            v3.4s, v3.4s, v5.4s
+        trn2            v1.4s, v1.4s, v2.4s
+        shl             v2.8h, v6.8h, #4        //          16 * src[8]
+        sub             v5.8h, v16.8h, v23.8h   //                        16 * src[24] -  4 * src[40]
+        ssra            v17.8h, v16.8h, #2      //                         4 * src[24] + 16 * src[40]
+        sub             v16.8h, v21.8h, v22.8h  //        -  4 * src[8]                               + 16 * src[56]
+        trn1            v22.2d, v18.2d, v19.2d
+        trn2            v18.2d, v18.2d, v19.2d
+        trn1            v19.2d, v3.2d, v1.2d
+        ssra            v2.8h, v21.8h, #2       //          16 * src[8]                               +  4 * src[56]
+        mls             v17.8h, v6.8h, v0.h[2]  //        - 15 * src[8] +  4 * src[24] + 16 * src[40]
+        shl             v21.8h, v22.8h, #2      //         8/2 * src[0]
+        shl             v18.8h, v18.8h, #2      //                                      8/2 * src[32]
+        mls             v5.8h, v6.8h, v0.h[1]   //        -  9 * src[8] + 16 * src[24] -  4 * src[40]
+        shl             v6.8h, v19.8h, #3       //                      16/2 * src[16]
+        trn2            v1.2d, v3.2d, v1.2d
+        mla             v16.8h, v20.8h, v0.h[1] //        -  4 * src[8] +  9 * src[24]                + 16 * src[56]
+        ssra            v21.8h, v21.8h, #1      //        12/2 * src[0]
+        ssra            v18.8h, v18.8h, #1      //                                     12/2 * src[32]
+        mul             v3.8h, v19.8h, v0.h[0]  //                       6/2 * src[16]
+        shl             v19.8h, v1.8h, #3       //                                                    16/2 * src[48]
+        mla             v2.8h, v20.8h, v0.h[2]  //          16 * src[8] + 15 * src[24]                +  4 * src[56]
+        add             v20.8h, v21.8h, v18.8h  // t1/2 = 12/2 * src[0]              + 12/2 * src[32]
+        mla             v6.8h, v1.8h, v0.h[0]   // t3/2 =               16/2 * src[16]              +  6/2 * src[48]
+        sub             v1.8h, v21.8h, v18.8h   // t2/2 = 12/2 * src[0]              - 12/2 * src[32]
+        sub             v3.8h, v3.8h, v19.8h    // t4/2 =                6/2 * src[16]              - 16/2 * src[48]
+        mla             v17.8h, v7.8h, v0.h[1]  // -t2  = - 15 * src[8] +  4 * src[24] + 16 * src[40] +  9 * src[56]
+        mls             v5.8h, v7.8h, v0.h[2]   // -t3  = -  9 * src[8] + 16 * src[24] -  4 * src[40] - 15 * src[56]
+        add             v7.8h, v1.8h, v3.8h     // t6/2 = t2/2 + t4/2
+        add             v18.8h, v20.8h, v6.8h   // t5/2 = t1/2 + t3/2
+        mls             v16.8h, v4.8h, v0.h[2]  // -t4  = -  4 * src[8] +  9 * src[24] - 15 * src[40] + 16 * src[56]
+        sub             v19.8h, v1.8h, v3.8h    // t7/2 = t2/2 - t4/2
+        neg             v21.8h, v17.8h          // +t2
+        mla             v2.8h, v4.8h, v0.h[1]   //  t1  =   16 * src[8] + 15 * src[24] +  9 * src[40] +  4 * src[56]
+        sub             v0.8h, v20.8h, v6.8h    // t8/2 = t1/2 - t3/2
+        neg             v4.8h, v5.8h            // +t3
+        sub             v22.8h, v1.8h, v3.8h    // t7/2 = t2/2 - t4/2
+        sub             v23.8h, v20.8h, v6.8h   // t8/2 = t1/2 - t3/2
+        neg             v24.8h, v16.8h          // +t4
+        add             v6.8h, v20.8h, v6.8h    // t5/2 = t1/2 + t3/2
+        add             v1.8h, v1.8h, v3.8h     // t6/2 = t2/2 + t4/2
+        ssra            v7.8h, v21.8h, #1       // (t6 + t2) >> 1
+        neg             v3.8h, v2.8h            // -t1
+        ssra            v18.8h, v2.8h, #1       // (t5 + t1) >> 1
+        ssra            v19.8h, v4.8h, #1       // (t7 + t3) >> 1
+        ssra            v0.8h, v24.8h, #1       // (t8 + t4) >> 1
+        srsra           v23.8h, v16.8h, #1      // (t8 - t4 + 1) >> 1
+        srsra           v22.8h, v5.8h, #1       // (t7 - t3 + 1) >> 1
+        srsra           v1.8h, v17.8h, #1       // (t6 - t2 + 1) >> 1
+        srsra           v6.8h, v3.8h, #1        // (t5 - t1 + 1) >> 1
+        srshr           v2.8h, v18.8h, #6       // (t5 + t1 + 64) >> 7
+        srshr           v3.8h, v7.8h, #6        // (t6 + t2 + 64) >> 7
+        srshr           v4.8h, v19.8h, #6       // (t7 + t3 + 64) >> 7
+        srshr           v5.8h, v0.8h, #6        // (t8 + t4 + 64) >> 7
+        srshr           v16.8h, v23.8h, #6      // (t8 - t4 + 65) >> 7
+        srshr           v17.8h, v22.8h, #6      // (t7 - t3 + 65) >> 7
+        st1             {v2.16b, v3.16b}, [x1], #32
+        srshr           v0.8h, v1.8h, #6        // (t6 - t2 + 65) >> 7
+        srshr           v1.8h, v6.8h, #6        // (t5 - t1 + 65) >> 7
+        st1             {v4.16b, v5.16b}, [x1], #32
+        st1             {v16.16b, v17.16b}, [x1], #32
+        st1             {v0.16b, v1.16b}, [x1]
+        ret
+endfunc
+
+// VC-1 8x4 inverse transform
+// On entry:
+//   x0 -> array of 8-bit samples, in row-major order
+//   x1 = row stride for 8-bit sample array
+//   x2 -> array of 16-bit inverse transform coefficients, in row-major order
+// On exit:
+//   array at x0 updated by saturated addition of (narrowed) transformed block
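+// For orientation, the 4-point transform used for the vertical pass here
+// is roughly (a paraphrase, not the exact libavcodec C source):
+//   t1 = 17*(src[0] + src[2]);    t2 = 17*(src[0] - src[2]);
+//   t3 = 22*src[1] + 10*src[3];   t4 = 22*src[3] - 10*src[1];
+//   dst[0] = (t1 + t3 + 64) >> 7;  dst[1] = (t2 - t4 + 64) >> 7;
+//   dst[2] = (t2 + t4 + 64) >> 7;  dst[3] = (t1 - t3 + 64) >> 7;
+// The results are then added to the 8-bit destination rows with
+// saturation; t3 and t4 are kept halved in the code for headroom.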
+function ff_vc1_inv_trans_8x4_neon, export=1
+        ld1             {v1.8b, v2.8b, v3.8b, v4.8b}, [x2], #32
+        mov             x3, x0
+        ld1             {v16.8b, v17.8b, v18.8b, v19.8b}, [x2]
+        ldr             q0, .Lcoeffs_it8        // includes 4-point coefficients in upper half of vector
+        ld1             {v5.8b}, [x0], x1
+        trn2            v6.4h, v1.4h, v3.4h
+        trn2            v7.4h, v2.4h, v4.4h
+        trn1            v1.4h, v1.4h, v3.4h
+        trn1            v2.4h, v2.4h, v4.4h
+        trn2            v3.4h, v16.4h, v18.4h
+        trn2            v4.4h, v17.4h, v19.4h
+        trn1            v16.4h, v16.4h, v18.4h
+        trn1            v17.4h, v17.4h, v19.4h
+        ld1             {v18.8b}, [x0], x1
+        trn1            v19.2s, v6.2s, v3.2s
+        trn2            v3.2s, v6.2s, v3.2s
+        trn1            v6.2s, v7.2s, v4.2s
+        trn2            v4.2s, v7.2s, v4.2s
+        trn1            v7.2s, v1.2s, v16.2s
+        trn1            v20.2s, v2.2s, v17.2s
+        shl             v21.4h, v19.4h, #4      //          16 * src[1]
+        trn2            v1.2s, v1.2s, v16.2s
+        shl             v16.4h, v3.4h, #4       //                        16 * src[3]
+        trn2            v2.2s, v2.2s, v17.2s
+        shl             v17.4h, v6.4h, #4       //                                      16 * src[5]
+        ld1             {v22.8b}, [x0], x1
+        shl             v23.4h, v4.4h, #4       //                                                    16 * src[7]
+        mul             v24.4h, v1.4h, v0.h[0]  //                       6/2 * src[2]
+        ld1             {v25.8b}, [x0]
+        shl             v26.4h, v19.4h, #2      //           4 * src[1]
+        shl             v27.4h, v6.4h, #2       //                                       4 * src[5]
+        ssra            v21.4h, v23.4h, #2      //          16 * src[1]                             +  4 * src[7]
+        ssra            v17.4h, v16.4h, #2      //                         4 * src[3] + 16 * src[5]
+        sub             v23.4h, v23.4h, v26.4h  //        -  4 * src[1]                             + 16 * src[7]
+        sub             v16.4h, v16.4h, v27.4h  //                        16 * src[3] -  4 * src[5]
+        shl             v7.4h, v7.4h, #2        //         8/2 * src[0]
+        shl             v20.4h, v20.4h, #2      //                                     8/2 * src[4]
+        mla             v21.4h, v3.4h, v0.h[2]  //          16 * src[1] + 15 * src[3]               +  4 * src[7]
+        shl             v1.4h, v1.4h, #3        //                      16/2 * src[2]
+        mls             v17.4h, v19.4h, v0.h[2] //        - 15 * src[1] +  4 * src[3] + 16 * src[5]
+        ssra            v7.4h, v7.4h, #1        //        12/2 * src[0]
+        mls             v16.4h, v19.4h, v0.h[1] //        -  9 * src[1] + 16 * src[3] -  4 * src[5]
+        ssra            v20.4h, v20.4h, #1      //                                    12/2 * src[4]
+        mla             v23.4h, v3.4h, v0.h[1]  //        -  4 * src[1] +  9 * src[3]               + 16 * src[7]
+        shl             v3.4h, v2.4h, #3        //                                                  16/2 * src[6]
+        mla             v1.4h, v2.4h, v0.h[0]   // t3/2 =               16/2 * src[2]             +  6/2 * src[6]
+        mla             v21.4h, v6.4h, v0.h[1]  //  t1  =   16 * src[1] + 15 * src[3] +  9 * src[5] +  4 * src[7]
+        mla             v17.4h, v4.4h, v0.h[1]  // -t2  = - 15 * src[1] +  4 * src[3] + 16 * src[5] +  9 * src[7]
+        sub             v2.4h, v24.4h, v3.4h    // t4/2 =                6/2 * src[2]             - 16/2 * src[6]
+        mls             v16.4h, v4.4h, v0.h[2]  // -t3  = -  9 * src[1] + 16 * src[3] -  4 * src[5] - 15 * src[7]
+        add             v3.4h, v7.4h, v20.4h    // t1/2 = 12/2 * src[0]             + 12/2 * src[4]
+        mls             v23.4h, v6.4h, v0.h[2]  // -t4  = -  4 * src[1] +  9 * src[3] - 15 * src[5] + 16 * src[7]
+        sub             v4.4h, v7.4h, v20.4h    // t2/2 = 12/2 * src[0]             - 12/2 * src[4]
+        neg             v6.4h, v21.4h           // -t1
+        add             v7.4h, v3.4h, v1.4h     // t5/2 = t1/2 + t3/2
+        sub             v19.4h, v3.4h, v1.4h    // t8/2 = t1/2 - t3/2
+        add             v20.4h, v4.4h, v2.4h    // t6/2 = t2/2 + t4/2
+        sub             v24.4h, v4.4h, v2.4h    // t7/2 = t2/2 - t4/2
+        add             v26.4h, v3.4h, v1.4h    // t5/2 = t1/2 + t3/2
+        add             v27.4h, v4.4h, v2.4h    // t6/2 = t2/2 + t4/2
+        sub             v2.4h, v4.4h, v2.4h     // t7/2 = t2/2 - t4/2
+        sub             v1.4h, v3.4h, v1.4h     // t8/2 = t1/2 - t3/2
+        neg             v3.4h, v17.4h           // +t2
+        neg             v4.4h, v16.4h           // +t3
+        neg             v28.4h, v23.4h          // +t4
+        ssra            v7.4h, v21.4h, #1       // (t5 + t1) >> 1
+        ssra            v1.4h, v23.4h, #1       // (t8 - t4) >> 1
+        ssra            v20.4h, v3.4h, #1       // (t6 + t2) >> 1
+        ssra            v24.4h, v4.4h, #1       // (t7 + t3) >> 1
+        ssra            v19.4h, v28.4h, #1      // (t8 + t4) >> 1
+        ssra            v2.4h, v16.4h, #1       // (t7 - t3) >> 1
+        ssra            v27.4h, v17.4h, #1      // (t6 - t2) >> 1
+        ssra            v26.4h, v6.4h, #1       // (t5 - t1) >> 1
+        trn1            v1.2d, v7.2d, v1.2d
+        trn1            v2.2d, v20.2d, v2.2d
+        trn1            v3.2d, v24.2d, v27.2d
+        trn1            v4.2d, v19.2d, v26.2d
+        srshr           v1.8h, v1.8h, #2        // (t5 + t1 + 4) >> 3, (t8 - t4 + 4) >> 3
+        srshr           v2.8h, v2.8h, #2        // (t6 + t2 + 4) >> 3, (t7 - t3 + 4) >> 3
+        srshr           v3.8h, v3.8h, #2        // (t7 + t3 + 4) >> 3, (t6 - t2 + 4) >> 3
+        srshr           v4.8h, v4.8h, #2        // (t8 + t4 + 4) >> 3, (t5 - t1 + 4) >> 3
+        trn2            v6.8h, v1.8h, v2.8h
+        trn1            v1.8h, v1.8h, v2.8h
+        trn2            v2.8h, v3.8h, v4.8h
+        trn1            v3.8h, v3.8h, v4.8h
+        trn2            v4.4s, v6.4s, v2.4s
+        trn1            v7.4s, v1.4s, v3.4s
+        trn2            v1.4s, v1.4s, v3.4s
+        mul             v3.8h, v4.8h, v0.h[5]   //                                                           22/2 * src[24]
+        trn1            v2.4s, v6.4s, v2.4s
+        mul             v4.8h, v4.8h, v0.h[4]   //                                                           10/2 * src[24]
+        mul             v6.8h, v7.8h, v0.h[6]   //            17 * src[0]
+        mul             v1.8h, v1.8h, v0.h[6]   //                                            17 * src[16]
+        mls             v3.8h, v2.8h, v0.h[4]   //  t4/2 =                - 10/2 * src[8]                  + 22/2 * src[24]
+        mla             v4.8h, v2.8h, v0.h[5]   //  t3/2 =                  22/2 * src[8]                  + 10/2 * src[24]
+        add             v0.8h, v6.8h, v1.8h     //   t1  =    17 * src[0]                 +   17 * src[16]
+        sub             v1.8h, v6.8h, v1.8h     //   t2  =    17 * src[0]                 -   17 * src[16]
+        neg             v2.8h, v3.8h            // -t4/2
+        neg             v6.8h, v4.8h            // -t3/2
+        ssra            v4.8h, v0.8h, #1        // (t1 + t3) >> 1
+        ssra            v2.8h, v1.8h, #1        // (t2 - t4) >> 1
+        ssra            v3.8h, v1.8h, #1        // (t2 + t4) >> 1
+        ssra            v6.8h, v0.8h, #1        // (t1 - t3) >> 1
+        srshr           v0.8h, v4.8h, #6        // (t1 + t3 + 64) >> 7
+        srshr           v1.8h, v2.8h, #6        // (t2 - t4 + 64) >> 7
+        srshr           v2.8h, v3.8h, #6        // (t2 + t4 + 64) >> 7
+        srshr           v3.8h, v6.8h, #6        // (t1 - t3 + 64) >> 7
+        uaddw           v0.8h, v0.8h, v5.8b
+        uaddw           v1.8h, v1.8h, v18.8b
+        uaddw           v2.8h, v2.8h, v22.8b
+        uaddw           v3.8h, v3.8h, v25.8b
+        sqxtun          v0.8b, v0.8h
+        sqxtun          v1.8b, v1.8h
+        sqxtun          v2.8b, v2.8h
+        sqxtun          v3.8b, v3.8h
+        st1             {v0.8b}, [x3], x1
+        st1             {v1.8b}, [x3], x1
+        st1             {v2.8b}, [x3], x1
+        st1             {v3.8b}, [x3]
+        ret
+endfunc
+
+// VC-1 4x8 inverse transform
+// On entry:
+//   x0 -> array of 8-bit samples, in row-major order
+//   x1 = row stride for 8-bit sample array
+//   x2 -> array of 16-bit inverse transform coefficients, in row-major order (row stride is 8 coefficients)
+// On exit:
+//   array at x0 updated by saturated addition of (narrowed) transformed block
+function ff_vc1_inv_trans_4x8_neon, export=1
+        mov             x3, #16
+        ldr             q0, .Lcoeffs_it8        // includes 4-point coefficients in upper half of vector
+        mov             x4, x0
+        ld1             {v1.d}[0], [x2], x3     // 00 01 02 03
+        ld1             {v2.d}[0], [x2], x3     // 10 11 12 13
+        ld1             {v3.d}[0], [x2], x3     // 20 21 22 23
+        ld1             {v4.d}[0], [x2], x3     // 30 31 32 33
+        ld1             {v1.d}[1], [x2], x3     // 40 41 42 43
+        ld1             {v2.d}[1], [x2], x3     // 50 51 52 53
+        ld1             {v3.d}[1], [x2], x3     // 60 61 62 63
+        ld1             {v4.d}[1], [x2]         // 70 71 72 73
+        ld1             {v5.s}[0], [x0], x1
+        ld1             {v6.s}[0], [x0], x1
+        ld1             {v7.s}[0], [x0], x1
+        trn2            v16.8h, v1.8h, v2.8h    // 01 11 03 13 41 51 43 53
+        trn1            v1.8h, v1.8h, v2.8h     // 00 10 02 12 40 50 42 52
+        trn2            v2.8h, v3.8h, v4.8h     // 21 31 23 33 61 71 63 73
+        trn1            v3.8h, v3.8h, v4.8h     // 20 30 22 32 60 70 62 72
+        ld1             {v4.s}[0], [x0], x1
+        trn2            v17.4s, v16.4s, v2.4s   // 03 13 23 33 43 53 63 73
+        trn1            v18.4s, v1.4s, v3.4s    // 00 10 20 30 40 50 60 70
+        trn1            v2.4s, v16.4s, v2.4s    // 01 11 21 31 41 51 61 71
+        mul             v16.8h, v17.8h, v0.h[4] //                                                          10/2 * src[3]
+        ld1             {v5.s}[1], [x0], x1
+        mul             v17.8h, v17.8h, v0.h[5] //                                                          22/2 * src[3]
+        ld1             {v6.s}[1], [x0], x1
+        trn2            v1.4s, v1.4s, v3.4s     // 02 12 22 32 42 52 62 72
+        mul             v3.8h, v18.8h, v0.h[6]  //            17 * src[0]
+        ld1             {v7.s}[1], [x0], x1
+        mul             v1.8h, v1.8h, v0.h[6]   //                                            17 * src[2]
+        ld1             {v4.s}[1], [x0]
+        mla             v16.8h, v2.8h, v0.h[5]  //  t3/2 =                  22/2 * src[1]                 + 10/2 * src[3]
+        mls             v17.8h, v2.8h, v0.h[4]  //  t4/2 =                - 10/2 * src[1]                 + 22/2 * src[3]
+        add             v2.8h, v3.8h, v1.8h     //   t1  =    17 * src[0]                 +   17 * src[2]
+        sub             v1.8h, v3.8h, v1.8h     //   t2  =    17 * src[0]                 -   17 * src[2]
+        neg             v3.8h, v16.8h           // -t3/2
+        ssra            v16.8h, v2.8h, #1       // (t1 + t3) >> 1
+        neg             v18.8h, v17.8h          // -t4/2
+        ssra            v17.8h, v1.8h, #1       // (t2 + t4) >> 1
+        ssra            v3.8h, v2.8h, #1        // (t1 - t3) >> 1
+        ssra            v18.8h, v1.8h, #1       // (t2 - t4) >> 1
+        srshr           v1.8h, v16.8h, #2       // (t1 + t3 + 4) >> 3
+        srshr           v2.8h, v17.8h, #2       // (t2 + t4 + 4) >> 3
+        srshr           v3.8h, v3.8h, #2        // (t1 - t3 + 4) >> 3
+        srshr           v16.8h, v18.8h, #2      // (t2 - t4 + 4) >> 3
+        trn2            v17.8h, v2.8h, v3.8h    // 12 13 32 33 52 53 72 73
+        trn2            v18.8h, v1.8h, v16.8h   // 10 11 30 31 50 51 70 71
+        trn1            v1.8h, v1.8h, v16.8h    // 00 01 20 21 40 41 60 61
+        trn1            v2.8h, v2.8h, v3.8h     // 02 03 22 23 42 43 62 63
+        trn1            v3.4s, v18.4s, v17.4s   // 10 11 12 13 50 51 52 53
+        trn2            v16.4s, v18.4s, v17.4s  // 30 31 32 33 70 71 72 73
+        trn1            v17.4s, v1.4s, v2.4s    // 00 01 02 03 40 41 42 43
+        mov             d18, v3.d[1]            // 50 51 52 53
+        shl             v19.4h, v3.4h, #4       //          16 * src[8]
+        mov             d20, v16.d[1]           // 70 71 72 73
+        shl             v21.4h, v16.4h, #4      //                        16 * src[24]
+        mov             d22, v17.d[1]           // 40 41 42 43
+        shl             v23.4h, v3.4h, #2       //           4 * src[8]
+        shl             v24.4h, v18.4h, #4      //                                       16 * src[40]
+        shl             v25.4h, v20.4h, #4      //                                                      16 * src[56]
+        shl             v26.4h, v18.4h, #2      //                                        4 * src[40]
+        trn2            v1.4s, v1.4s, v2.4s     // 20 21 22 23 60 61 62 63
+        ssra            v24.4h, v21.4h, #2      //                         4 * src[24] + 16 * src[40]
+        sub             v2.4h, v25.4h, v23.4h   //        -  4 * src[8]                               + 16 * src[56]
+        shl             v17.4h, v17.4h, #2      //         8/2 * src[0]
+        sub             v21.4h, v21.4h, v26.4h  //                        16 * src[24] -  4 * src[40]
+        shl             v22.4h, v22.4h, #2      //                                      8/2 * src[32]
+        mov             d23, v1.d[1]            // 60 61 62 63
+        ssra            v19.4h, v25.4h, #2      //          16 * src[8]                               +  4 * src[56]
+        mul             v25.4h, v1.4h, v0.h[0]  //                       6/2 * src[16]
+        shl             v1.4h, v1.4h, #3        //                      16/2 * src[16]
+        mls             v24.4h, v3.4h, v0.h[2]  //        - 15 * src[8] +  4 * src[24] + 16 * src[40]
+        ssra            v17.4h, v17.4h, #1      //        12/2 * src[0]
+        mls             v21.4h, v3.4h, v0.h[1]  //        -  9 * src[8] + 16 * src[24] -  4 * src[40]
+        ssra            v22.4h, v22.4h, #1      //                                     12/2 * src[32]
+        mla             v2.4h, v16.4h, v0.h[1]  //        -  4 * src[8] +  9 * src[24]                + 16 * src[56]
+        shl             v3.4h, v23.4h, #3       //                                                    16/2 * src[48]
+        mla             v19.4h, v16.4h, v0.h[2] //          16 * src[8] + 15 * src[24]                +  4 * src[56]
+        mla             v1.4h, v23.4h, v0.h[0]  // t3/2 =               16/2 * src[16]              +  6/2 * src[48]
+        mla             v24.4h, v20.4h, v0.h[1] // -t2  = - 15 * src[8] +  4 * src[24] + 16 * src[40] +  9 * src[56]
+        add             v16.4h, v17.4h, v22.4h  // t1/2 = 12/2 * src[0]              + 12/2 * src[32]
+        sub             v3.4h, v25.4h, v3.4h    // t4/2 =                6/2 * src[16]              - 16/2 * src[48]
+        sub             v17.4h, v17.4h, v22.4h  // t2/2 = 12/2 * src[0]              - 12/2 * src[32]
+        mls             v21.4h, v20.4h, v0.h[2] // -t3  = -  9 * src[8] + 16 * src[24] -  4 * src[40] - 15 * src[56]
+        mla             v19.4h, v18.4h, v0.h[1] //  t1  =   16 * src[8] + 15 * src[24] +  9 * src[40] +  4 * src[56]
+        add             v20.4h, v16.4h, v1.4h   // t5/2 = t1/2 + t3/2
+        mls             v2.4h, v18.4h, v0.h[2]  // -t4  = -  4 * src[8] +  9 * src[24] - 15 * src[40] + 16 * src[56]
+        sub             v0.4h, v16.4h, v1.4h    // t8/2 = t1/2 - t3/2
+        add             v18.4h, v17.4h, v3.4h   // t6/2 = t2/2 + t4/2
+        sub             v22.4h, v17.4h, v3.4h   // t7/2 = t2/2 - t4/2
+        neg             v23.4h, v24.4h          // +t2
+        sub             v25.4h, v17.4h, v3.4h   // t7/2 = t2/2 - t4/2
+        add             v3.4h, v17.4h, v3.4h    // t6/2 = t2/2 + t4/2
+        neg             v17.4h, v21.4h          // +t3
+        sub             v26.4h, v16.4h, v1.4h   // t8/2 = t1/2 - t3/2
+        add             v1.4h, v16.4h, v1.4h    // t5/2 = t1/2 + t3/2
+        neg             v16.4h, v19.4h          // -t1
+        neg             v27.4h, v2.4h           // +t4
+        ssra            v20.4h, v19.4h, #1      // (t5 + t1) >> 1
+        srsra           v0.4h, v2.4h, #1        // (t8 - t4 + 1) >> 1
+        ssra            v18.4h, v23.4h, #1      // (t6 + t2) >> 1
+        srsra           v22.4h, v21.4h, #1      // (t7 - t3 + 1) >> 1
+        ssra            v25.4h, v17.4h, #1      // (t7 + t3) >> 1
+        srsra           v3.4h, v24.4h, #1       // (t6 - t2 + 1) >> 1
+        ssra            v26.4h, v27.4h, #1      // (t8 + t4) >> 1
+        srsra           v1.4h, v16.4h, #1       // (t5 - t1 + 1) >> 1
+        trn1            v0.2d, v20.2d, v0.2d
+        trn1            v2.2d, v18.2d, v22.2d
+        trn1            v3.2d, v25.2d, v3.2d
+        trn1            v1.2d, v26.2d, v1.2d
+        srshr           v0.8h, v0.8h, #6        // (t5 + t1 + 64) >> 7, (t8 - t4 + 65) >> 7
+        srshr           v2.8h, v2.8h, #6        // (t6 + t2 + 64) >> 7, (t7 - t3 + 65) >> 7
+        srshr           v3.8h, v3.8h, #6        // (t7 + t3 + 64) >> 7, (t6 - t2 + 65) >> 7
+        srshr           v1.8h, v1.8h, #6        // (t8 + t4 + 64) >> 7, (t5 - t1 + 65) >> 7
+        uaddw           v0.8h, v0.8h, v5.8b
+        uaddw           v2.8h, v2.8h, v6.8b
+        uaddw           v3.8h, v3.8h, v7.8b
+        uaddw           v1.8h, v1.8h, v4.8b
+        sqxtun          v0.8b, v0.8h
+        sqxtun          v2.8b, v2.8h
+        sqxtun          v3.8b, v3.8h
+        sqxtun          v1.8b, v1.8h
+        st1             {v0.s}[0], [x4], x1
+        st1             {v2.s}[0], [x4], x1
+        st1             {v3.s}[0], [x4], x1
+        st1             {v1.s}[0], [x4], x1
+        st1             {v0.s}[1], [x4], x1
+        st1             {v2.s}[1], [x4], x1
+        st1             {v3.s}[1], [x4], x1
+        st1             {v1.s}[1], [x4]
+        ret
+endfunc
+
+// VC-1 4x4 inverse transform
+// On entry:
+//   x0 -> array of 8-bit samples, in row-major order
+//   x1 = row stride for 8-bit sample array
+//   x2 -> array of 16-bit inverse transform coefficients, in row-major order (row stride is 8 coefficients)
+// On exit:
+//   array at x0 updated by saturated addition of (narrowed) transformed block
+function ff_vc1_inv_trans_4x4_neon, export=1
+        mov             x3, #16
+        ldr             d0, .Lcoeffs_it4
+        mov             x4, x0
+        ld1             {v1.d}[0], [x2], x3     // 00 01 02 03
+        ld1             {v2.d}[0], [x2], x3     // 10 11 12 13
+        ld1             {v3.d}[0], [x2], x3     // 20 21 22 23
+        ld1             {v4.d}[0], [x2]         // 30 31 32 33
+        ld1             {v5.s}[0], [x0], x1
+        ld1             {v5.s}[1], [x0], x1
+        ld1             {v6.s}[0], [x0], x1
+        trn2            v7.4h, v1.4h, v2.4h     // 01 11 03 13
+        trn1            v1.4h, v1.4h, v2.4h     // 00 10 02 12
+        ld1             {v6.s}[1], [x0]
+        trn2            v2.4h, v3.4h, v4.4h     // 21 31 23 33
+        trn1            v3.4h, v3.4h, v4.4h     // 20 30 22 32
+        trn2            v4.2s, v7.2s, v2.2s     // 03 13 23 33
+        trn1            v16.2s, v1.2s, v3.2s    // 00 10 20 30
+        trn1            v2.2s, v7.2s, v2.2s     // 01 11 21 31
+        trn2            v1.2s, v1.2s, v3.2s     // 02 12 22 32
+        mul             v3.4h, v4.4h, v0.h[0]   //                                                          10/2 * src[3]
+        mul             v4.4h, v4.4h, v0.h[1]   //                                                          22/2 * src[3]
+        mul             v7.4h, v16.4h, v0.h[2]  //            17 * src[0]
+        mul             v1.4h, v1.4h, v0.h[2]   //                                            17 * src[2]
+        mla             v3.4h, v2.4h, v0.h[1]   //  t3/2 =                  22/2 * src[1]                 + 10/2 * src[3]
+        mls             v4.4h, v2.4h, v0.h[0]   //  t4/2 =                - 10/2 * src[1]                 + 22/2 * src[3]
+        add             v2.4h, v7.4h, v1.4h     //   t1  =    17 * src[0]                 +   17 * src[2]
+        sub             v1.4h, v7.4h, v1.4h     //   t2  =    17 * src[0]                 -   17 * src[2]
+        neg             v7.4h, v3.4h            // -t3/2
+        neg             v16.4h, v4.4h           // -t4/2
+        ssra            v3.4h, v2.4h, #1        // (t1 + t3) >> 1
+        ssra            v4.4h, v1.4h, #1        // (t2 + t4) >> 1
+        ssra            v16.4h, v1.4h, #1       // (t2 - t4) >> 1
+        ssra            v7.4h, v2.4h, #1        // (t1 - t3) >> 1
+        srshr           v1.4h, v3.4h, #2        // (t1 + t3 + 4) >> 3
+        srshr           v2.4h, v4.4h, #2        // (t2 + t4 + 4) >> 3
+        srshr           v3.4h, v16.4h, #2       // (t2 - t4 + 4) >> 3
+        srshr           v4.4h, v7.4h, #2        // (t1 - t3 + 4) >> 3
+        trn2            v7.4h, v1.4h, v3.4h     // 10 11 30 31
+        trn1            v1.4h, v1.4h, v3.4h     // 00 01 20 21
+        trn2            v3.4h, v2.4h, v4.4h     // 12 13 32 33
+        trn1            v2.4h, v2.4h, v4.4h     // 02 03 22 23
+        trn2            v4.2s, v7.2s, v3.2s     // 30 31 32 33
+        trn1            v16.2s, v1.2s, v2.2s    // 00 01 02 03
+        trn1            v3.2s, v7.2s, v3.2s     // 10 11 12 13
+        trn2            v1.2s, v1.2s, v2.2s     // 20 21 22 23
+        mul             v2.4h, v4.4h, v0.h[1]   //                                                           22/2 * src[24]
+        mul             v4.4h, v4.4h, v0.h[0]   //                                                           10/2 * src[24]
+        mul             v7.4h, v16.4h, v0.h[2]  //            17 * src[0]
+        mul             v1.4h, v1.4h, v0.h[2]   //                                            17 * src[16]
+        mls             v2.4h, v3.4h, v0.h[0]   //  t4/2 =                - 10/2 * src[8]                  + 22/2 * src[24]
+        mla             v4.4h, v3.4h, v0.h[1]   //  t3/2 =                  22/2 * src[8]                  + 10/2 * src[24]
+        add             v0.4h, v7.4h, v1.4h     //   t1  =    17 * src[0]                 +   17 * src[16]
+        sub             v1.4h, v7.4h, v1.4h     //   t2  =    17 * src[0]                 -   17 * src[16]
+        neg             v3.4h, v2.4h            // -t4/2
+        neg             v7.4h, v4.4h            // -t3/2
+        ssra            v4.4h, v0.4h, #1        // (t1 + t3) >> 1
+        ssra            v3.4h, v1.4h, #1        // (t2 - t4) >> 1
+        ssra            v2.4h, v1.4h, #1        // (t2 + t4) >> 1
+        ssra            v7.4h, v0.4h, #1        // (t1 - t3) >> 1
+        trn1            v0.2d, v4.2d, v3.2d
+        trn1            v1.2d, v2.2d, v7.2d
+        srshr           v0.8h, v0.8h, #6        // (t1 + t3 + 64) >> 7, (t2 - t4 + 64) >> 7
+        srshr           v1.8h, v1.8h, #6        // (t2 + t4 + 64) >> 7, (t1 - t3 + 64) >> 7
+        uaddw           v0.8h, v0.8h, v5.8b
+        uaddw           v1.8h, v1.8h, v6.8b
+        sqxtun          v0.8b, v0.8h
+        sqxtun          v1.8b, v1.8h
+        st1             {v0.s}[0], [x4], x1
+        st1             {v0.s}[1], [x4], x1
+        st1             {v1.s}[0], [x4], x1
+        st1             {v1.s}[1], [x4]
+        ret
+endfunc
+
+// VC-1 8x8 inverse transform, DC case
+// On entry:
+//   x0 -> array of 8-bit samples, in row-major order
+//   x1 = row stride for 8-bit sample array
+//   x2 -> 16-bit inverse transform DC coefficient
+// On exit:
+//   array at x0 updated by saturated addition of (narrowed) transformed block
+function ff_vc1_inv_trans_8x8_dc_neon, export=1
+        ldrsh           w2, [x2]
+        mov             x3, x0
+        ld1             {v0.8b}, [x0], x1
+        ld1             {v1.8b}, [x0], x1
+        ld1             {v2.8b}, [x0], x1
+        add             w2, w2, w2, lsl #1
+        ld1             {v3.8b}, [x0], x1
+        ld1             {v4.8b}, [x0], x1
+        add             w2, w2, #1
+        ld1             {v5.8b}, [x0], x1
+        asr             w2, w2, #1
+        ld1             {v6.8b}, [x0], x1
+        add             w2, w2, w2, lsl #1
+        ld1             {v7.8b}, [x0]
+        add             w0, w2, #16
+        asr             w0, w0, #5
+        dup             v16.8h, w0
+        uaddw           v0.8h, v16.8h, v0.8b
+        uaddw           v1.8h, v16.8h, v1.8b
+        uaddw           v2.8h, v16.8h, v2.8b
+        uaddw           v3.8h, v16.8h, v3.8b
+        uaddw           v4.8h, v16.8h, v4.8b
+        uaddw           v5.8h, v16.8h, v5.8b
+        sqxtun          v0.8b, v0.8h
+        uaddw           v6.8h, v16.8h, v6.8b
+        sqxtun          v1.8b, v1.8h
+        uaddw           v7.8h, v16.8h, v7.8b
+        sqxtun          v2.8b, v2.8h
+        sqxtun          v3.8b, v3.8h
+        sqxtun          v4.8b, v4.8h
+        st1             {v0.8b}, [x3], x1
+        sqxtun          v0.8b, v5.8h
+        st1             {v1.8b}, [x3], x1
+        sqxtun          v1.8b, v6.8h
+        st1             {v2.8b}, [x3], x1
+        sqxtun          v2.8b, v7.8h
+        st1             {v3.8b}, [x3], x1
+        st1             {v4.8b}, [x3], x1
+        st1             {v0.8b}, [x3], x1
+        st1             {v1.8b}, [x3], x1
+        st1             {v2.8b}, [x3]
+        ret
+endfunc
+
+// VC-1 8x4 inverse transform, DC case
+// On entry:
+//   x0 -> array of 8-bit samples, in row-major order
+//   x1 = row stride for 8-bit sample array
+//   x2 -> 16-bit inverse transform DC coefficient
+// On exit:
+//   array at x0 updated by saturated addition of (narrowed) transformed block
+function ff_vc1_inv_trans_8x4_dc_neon, export=1
+        ldrsh           w2, [x2]
+        mov             x3, x0
+        ld1             {v0.8b}, [x0], x1
+        ld1             {v1.8b}, [x0], x1
+        ld1             {v2.8b}, [x0], x1
+        add             w2, w2, w2, lsl #1
+        ld1             {v3.8b}, [x0]
+        add             w0, w2, #1
+        asr             w0, w0, #1
+        add             w0, w0, w0, lsl #4
+        add             w0, w0, #64
+        asr             w0, w0, #7
+        dup             v4.8h, w0
+        uaddw           v0.8h, v4.8h, v0.8b
+        uaddw           v1.8h, v4.8h, v1.8b
+        uaddw           v2.8h, v4.8h, v2.8b
+        uaddw           v3.8h, v4.8h, v3.8b
+        sqxtun          v0.8b, v0.8h
+        sqxtun          v1.8b, v1.8h
+        sqxtun          v2.8b, v2.8h
+        sqxtun          v3.8b, v3.8h
+        st1             {v0.8b}, [x3], x1
+        st1             {v1.8b}, [x3], x1
+        st1             {v2.8b}, [x3], x1
+        st1             {v3.8b}, [x3]
+        ret
+endfunc
+
+// VC-1 4x8 inverse transform, DC case
+// On entry:
+//   x0 -> array of 8-bit samples, in row-major order
+//   x1 = row stride for 8-bit sample array
+//   x2 -> 16-bit inverse transform DC coefficient
+// On exit:
+//   array at x0 updated by saturated addition of (narrowed) transformed block
+function ff_vc1_inv_trans_4x8_dc_neon, export=1
+        ldrsh           w2, [x2]
+        mov             x3, x0
+        ld1             {v0.s}[0], [x0], x1
+        ld1             {v1.s}[0], [x0], x1
+        ld1             {v2.s}[0], [x0], x1
+        add             w2, w2, w2, lsl #4
+        ld1             {v3.s}[0], [x0], x1
+        add             w2, w2, #4
+        asr             w2, w2, #3
+        add             w2, w2, w2, lsl #1
+        ld1             {v0.s}[1], [x0], x1
+        add             w2, w2, #16
+        asr             w2, w2, #5
+        dup             v4.8h, w2
+        ld1             {v1.s}[1], [x0], x1
+        ld1             {v2.s}[1], [x0], x1
+        ld1             {v3.s}[1], [x0]
+        uaddw           v0.8h, v4.8h, v0.8b
+        uaddw           v1.8h, v4.8h, v1.8b
+        uaddw           v2.8h, v4.8h, v2.8b
+        uaddw           v3.8h, v4.8h, v3.8b
+        sqxtun          v0.8b, v0.8h
+        sqxtun          v1.8b, v1.8h
+        sqxtun          v2.8b, v2.8h
+        sqxtun          v3.8b, v3.8h
+        st1             {v0.s}[0], [x3], x1
+        st1             {v1.s}[0], [x3], x1
+        st1             {v2.s}[0], [x3], x1
+        st1             {v3.s}[0], [x3], x1
+        st1             {v0.s}[1], [x3], x1
+        st1             {v1.s}[1], [x3], x1
+        st1             {v2.s}[1], [x3], x1
+        st1             {v3.s}[1], [x3]
+        ret
+endfunc
+
+// VC-1 4x4 inverse transform, DC case
+// On entry:
+//   x0 -> array of 8-bit samples, in row-major order
+//   x1 = row stride for 8-bit sample array
+//   x2 -> 16-bit inverse transform DC coefficient
+// On exit:
+//   array at x0 updated by saturated addition of (narrowed) transformed block
+function ff_vc1_inv_trans_4x4_dc_neon, export=1
+        ldrsh           w2, [x2]
+        mov             x3, x0
+        ld1             {v0.s}[0], [x0], x1
+        ld1             {v1.s}[0], [x0], x1
+        ld1             {v0.s}[1], [x0], x1
+        add             w2, w2, w2, lsl #4
+        ld1             {v1.s}[1], [x0]
+        add             w0, w2, #4
+        asr             w0, w0, #3
+        add             w0, w0, w0, lsl #4
+        add             w0, w0, #64
+        asr             w0, w0, #7
+        dup             v2.8h, w0
+        uaddw           v0.8h, v2.8h, v0.8b
+        uaddw           v1.8h, v2.8h, v1.8b
+        sqxtun          v0.8b, v0.8h
+        sqxtun          v1.8b, v1.8h
+        st1             {v0.s}[0], [x3], x1
+        st1             {v1.s}[0], [x3], x1
+        st1             {v0.s}[1], [x3], x1
+        st1             {v1.s}[1], [x3]
+        ret
+endfunc
+
 .align  5
+.Lcoeffs_it8:
+.quad   0x000F00090003
+.Lcoeffs_it4:
+.quad   0x0011000B0005
 .Lcoeffs:
 .quad   0x00050002
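
For reference, the scalar arithmetic behind the 4-point passes above, written
out in C (a sketch only: the function name is invented here, and the canonical
scalar versions live in libavcodec/vc1dsp.c; the NEON code computes t3 and t4
pre-halved so 22/2 and 10/2 fit the multiply-accumulate pattern, and folds each
pass's rounding constant into srshr, as the per-instruction comments describe):

    #include <stdint.h>

    /* Sketch of one 4-point row pass; not the committed implementation. */
    static void vc1_inv_trans_4_row_ref(const int16_t *src, int16_t *dst)
    {
        int t1 = 17 * (src[0] + src[2]);
        int t2 = 17 * (src[0] - src[2]);
        int t3 = 22 * src[1] + 10 * src[3];
        int t4 = 22 * src[3] - 10 * src[1];

        dst[0] = (t1 + t3 + 4) >> 3;    /* first pass rounds with +4 >> 3;  */
        dst[1] = (t2 + t4 + 4) >> 3;    /* the final pass instead uses      */
        dst[2] = (t2 - t4 + 4) >> 3;    /* +64 >> 7, matching the srshr #6  */
        dst[3] = (t1 - t3 + 4) >> 3;    /* comments above                   */
    }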
 
-- 
2.25.1

^ permalink raw reply	[flat|nested] 55+ messages in thread

* [FFmpeg-devel] [PATCH 08/10] avcodec/idctdsp: Arm 64-bit NEON block add and clamp fast paths
  2022-03-25 18:52 ` [FFmpeg-devel] [PATCH v2 00/10] " Ben Avison
                     ` (6 preceding siblings ...)
  2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 07/10] avcodec/vc1: Arm 64-bit NEON inverse transform " Ben Avison
@ 2022-03-25 18:52   ` Ben Avison
  2022-03-30 14:14     ` Martin Storsjö
  2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 09/10] avcodec/vc1: Arm 64-bit NEON unescape fast path Ben Avison
  2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 10/10] avcodec/vc1: Arm 32-bit " Ben Avison
  9 siblings, 1 reply; 55+ messages in thread
From: Ben Avison @ 2022-03-25 18:52 UTC (permalink / raw)
  To: ffmpeg-devel; +Cc: Ben Avison

checkasm benchmarks on 1.5 GHz Cortex-A72 are as follows.

idctdsp.add_pixels_clamped_c: 323.0
idctdsp.add_pixels_clamped_neon: 41.5
idctdsp.put_pixels_clamped_c: 243.0
idctdsp.put_pixels_clamped_neon: 30.0
idctdsp.put_signed_pixels_clamped_c: 225.7
idctdsp.put_signed_pixels_clamped_neon: 37.7

Signed-off-by: Ben Avison <bavison@riscosopen.org>
---
 libavcodec/aarch64/Makefile               |   3 +-
 libavcodec/aarch64/idctdsp_init_aarch64.c |  26 +++--
 libavcodec/aarch64/idctdsp_neon.S         | 130 ++++++++++++++++++++++
 3 files changed, 150 insertions(+), 9 deletions(-)
 create mode 100644 libavcodec/aarch64/idctdsp_neon.S
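
Reference semantics of the three hooks, as a C sketch (function name invented
for illustration; compare the scalar versions in libavcodec/idctdsp.c).
av_clip_uint8() saturates to 0..255, which is what sqxtun provides on the
NEON side:

    #include <stdint.h>
    #include <stddef.h>
    #include "libavutil/common.h"

    /* put_pixels_clamped: clamp an 8x8 block of int16 to uint8. */
    static void put_pixels_clamped_ref(const int16_t *block, uint8_t *pixels,
                                       ptrdiff_t line_size)
    {
        for (int i = 0; i < 8; i++) {
            for (int j = 0; j < 8; j++)
                pixels[j] = av_clip_uint8(block[j]);
            pixels += line_size;
            block  += 8;
        }
    }

    /* put_signed_pixels_clamped biases: av_clip_uint8(block[j] + 128);
     * add_pixels_clamped accumulates:  av_clip_uint8(pixels[j] + block[j]). */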

diff --git a/libavcodec/aarch64/Makefile b/libavcodec/aarch64/Makefile
index 5b25e4dfb9..c8935f205e 100644
--- a/libavcodec/aarch64/Makefile
+++ b/libavcodec/aarch64/Makefile
@@ -44,7 +44,8 @@ NEON-OBJS-$(CONFIG_H264PRED)            += aarch64/h264pred_neon.o
 NEON-OBJS-$(CONFIG_H264QPEL)            += aarch64/h264qpel_neon.o             \
                                            aarch64/hpeldsp_neon.o
 NEON-OBJS-$(CONFIG_HPELDSP)             += aarch64/hpeldsp_neon.o
-NEON-OBJS-$(CONFIG_IDCTDSP)             += aarch64/simple_idct_neon.o
+NEON-OBJS-$(CONFIG_IDCTDSP)             += aarch64/idctdsp_neon.o              \
+                                           aarch64/simple_idct_neon.o
 NEON-OBJS-$(CONFIG_MDCT)                += aarch64/mdct_neon.o
 NEON-OBJS-$(CONFIG_MPEGAUDIODSP)        += aarch64/mpegaudiodsp_neon.o
 NEON-OBJS-$(CONFIG_PIXBLOCKDSP)         += aarch64/pixblockdsp_neon.o
diff --git a/libavcodec/aarch64/idctdsp_init_aarch64.c b/libavcodec/aarch64/idctdsp_init_aarch64.c
index 742a3372e3..eec21aa5a2 100644
--- a/libavcodec/aarch64/idctdsp_init_aarch64.c
+++ b/libavcodec/aarch64/idctdsp_init_aarch64.c
@@ -27,19 +27,29 @@
 #include "libavcodec/idctdsp.h"
 #include "idct.h"
 
+void ff_put_pixels_clamped_neon(const int16_t *, uint8_t *, ptrdiff_t);
+void ff_put_signed_pixels_clamped_neon(const int16_t *, uint8_t *, ptrdiff_t);
+void ff_add_pixels_clamped_neon(const int16_t *, uint8_t *, ptrdiff_t);
+
 av_cold void ff_idctdsp_init_aarch64(IDCTDSPContext *c, AVCodecContext *avctx,
                                      unsigned high_bit_depth)
 {
     int cpu_flags = av_get_cpu_flags();
 
-    if (have_neon(cpu_flags) && !avctx->lowres && !high_bit_depth) {
-        if (avctx->idct_algo == FF_IDCT_AUTO ||
-            avctx->idct_algo == FF_IDCT_SIMPLEAUTO ||
-            avctx->idct_algo == FF_IDCT_SIMPLENEON) {
-            c->idct_put  = ff_simple_idct_put_neon;
-            c->idct_add  = ff_simple_idct_add_neon;
-            c->idct      = ff_simple_idct_neon;
-            c->perm_type = FF_IDCT_PERM_PARTTRANS;
+    if (have_neon(cpu_flags)) {
+        if (!avctx->lowres && !high_bit_depth) {
+            if (avctx->idct_algo == FF_IDCT_AUTO ||
+                avctx->idct_algo == FF_IDCT_SIMPLEAUTO ||
+                avctx->idct_algo == FF_IDCT_SIMPLENEON) {
+                c->idct_put  = ff_simple_idct_put_neon;
+                c->idct_add  = ff_simple_idct_add_neon;
+                c->idct      = ff_simple_idct_neon;
+                c->perm_type = FF_IDCT_PERM_PARTTRANS;
+            }
         }
+
+        c->add_pixels_clamped        = ff_add_pixels_clamped_neon;
+        c->put_pixels_clamped        = ff_put_pixels_clamped_neon;
+        c->put_signed_pixels_clamped = ff_put_signed_pixels_clamped_neon;
     }
 }
diff --git a/libavcodec/aarch64/idctdsp_neon.S b/libavcodec/aarch64/idctdsp_neon.S
new file mode 100644
index 0000000000..7f47611206
--- /dev/null
+++ b/libavcodec/aarch64/idctdsp_neon.S
@@ -0,0 +1,130 @@
+/*
+ * IDCT AArch64 NEON optimisations
+ *
+ * Copyright (c) 2022 Ben Avison <bavison@riscosopen.org>
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include "libavutil/aarch64/asm.S"
+
+// Clamp 16-bit signed block coefficients to unsigned 8-bit
+// On entry:
+//   x0 -> array of 64x 16-bit coefficients
+//   x1 -> 8-bit results
+//   x2 = row stride for results, bytes
+function ff_put_pixels_clamped_neon, export=1
+        ld1             {v0.16b, v1.16b, v2.16b, v3.16b}, [x0], #64
+        ld1             {v4.16b, v5.16b, v6.16b, v7.16b}, [x0]
+        sqxtun          v0.8b, v0.8h
+        sqxtun          v1.8b, v1.8h
+        sqxtun          v2.8b, v2.8h
+        sqxtun          v3.8b, v3.8h
+        sqxtun          v4.8b, v4.8h
+        st1             {v0.8b}, [x1], x2
+        sqxtun          v0.8b, v5.8h
+        st1             {v1.8b}, [x1], x2
+        sqxtun          v1.8b, v6.8h
+        st1             {v2.8b}, [x1], x2
+        sqxtun          v2.8b, v7.8h
+        st1             {v3.8b}, [x1], x2
+        st1             {v4.8b}, [x1], x2
+        st1             {v0.8b}, [x1], x2
+        st1             {v1.8b}, [x1], x2
+        st1             {v2.8b}, [x1]
+        ret
+endfunc
+
+// Clamp 16-bit signed block coefficients to signed 8-bit (biased by 128)
+// On entry:
+//   x0 -> array of 64x 16-bit coefficients
+//   x1 -> 8-bit results
+//   x2 = row stride for results, bytes
+function ff_put_signed_pixels_clamped_neon, export=1
+        ld1             {v0.16b, v1.16b, v2.16b, v3.16b}, [x0], #64
+        movi            v4.8b, #128
+        ld1             {v16.16b, v17.16b, v18.16b, v19.16b}, [x0]
+        sqxtn           v0.8b, v0.8h
+        sqxtn           v1.8b, v1.8h
+        sqxtn           v2.8b, v2.8h
+        sqxtn           v3.8b, v3.8h
+        sqxtn           v5.8b, v16.8h
+        add             v0.8b, v0.8b, v4.8b
+        sqxtn           v6.8b, v17.8h
+        add             v1.8b, v1.8b, v4.8b
+        sqxtn           v7.8b, v18.8h
+        add             v2.8b, v2.8b, v4.8b
+        sqxtn           v16.8b, v19.8h
+        add             v3.8b, v3.8b, v4.8b
+        st1             {v0.8b}, [x1], x2
+        add             v0.8b, v5.8b, v4.8b
+        st1             {v1.8b}, [x1], x2
+        add             v1.8b, v6.8b, v4.8b
+        st1             {v2.8b}, [x1], x2
+        add             v2.8b, v7.8b, v4.8b
+        st1             {v3.8b}, [x1], x2
+        add             v3.8b, v16.8b, v4.8b
+        st1             {v0.8b}, [x1], x2
+        st1             {v1.8b}, [x1], x2
+        st1             {v2.8b}, [x1], x2
+        st1             {v3.8b}, [x1]
+        ret
+endfunc
+
+// Add 16-bit signed block coefficients to unsigned 8-bit
+// On entry:
+//   x0 -> array of 64x 16-bit coefficients
+//   x1 -> 8-bit input and results
+//   x2 = row stride for 8-bit input and results, bytes
+function ff_add_pixels_clamped_neon, export=1
+        ld1             {v0.16b, v1.16b, v2.16b, v3.16b}, [x0], #64
+        mov             x3, x1
+        ld1             {v4.8b}, [x1], x2
+        ld1             {v5.8b}, [x1], x2
+        ld1             {v6.8b}, [x1], x2
+        ld1             {v7.8b}, [x1], x2
+        ld1             {v16.16b, v17.16b, v18.16b, v19.16b}, [x0]
+        uaddw           v0.8h, v0.8h, v4.8b
+        uaddw           v1.8h, v1.8h, v5.8b
+        uaddw           v2.8h, v2.8h, v6.8b
+        ld1             {v4.8b}, [x1], x2
+        uaddw           v3.8h, v3.8h, v7.8b
+        ld1             {v5.8b}, [x1], x2
+        sqxtun          v0.8b, v0.8h
+        ld1             {v6.8b}, [x1], x2
+        sqxtun          v1.8b, v1.8h
+        ld1             {v7.8b}, [x1]
+        sqxtun          v2.8b, v2.8h
+        sqxtun          v3.8b, v3.8h
+        uaddw           v4.8h, v16.8h, v4.8b
+        st1             {v0.8b}, [x3], x2
+        uaddw           v0.8h, v17.8h, v5.8b
+        st1             {v1.8b}, [x3], x2
+        uaddw           v1.8h, v18.8h, v6.8b
+        st1             {v2.8b}, [x3], x2
+        uaddw           v2.8h, v19.8h, v7.8b
+        sqxtun          v4.8b, v4.8h
+        sqxtun          v0.8b, v0.8h
+        st1             {v3.8b}, [x3], x2
+        sqxtun          v1.8b, v1.8h
+        sqxtun          v2.8b, v2.8h
+        st1             {v4.8b}, [x3], x2
+        st1             {v0.8b}, [x3], x2
+        st1             {v1.8b}, [x3], x2
+        st1             {v2.8b}, [x3]
+        ret
+endfunc
-- 
2.25.1

^ permalink raw reply	[flat|nested] 55+ messages in thread

* [FFmpeg-devel] [PATCH 09/10] avcodec/vc1: Arm 64-bit NEON unescape fast path
  2022-03-25 18:52 ` [FFmpeg-devel] [PATCH v2 00/10] " Ben Avison
                     ` (7 preceding siblings ...)
  2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 08/10] avcodec/idctdsp: Arm 64-bit NEON block add and clamp " Ben Avison
@ 2022-03-25 18:52   ` Ben Avison
  2022-03-30 14:35     ` Martin Storsjö
  2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 10/10] avcodec/vc1: Arm 32-bit " Ben Avison
  9 siblings, 1 reply; 55+ messages in thread
From: Ben Avison @ 2022-03-25 18:52 UTC (permalink / raw)
  To: ffmpeg-devel; +Cc: Ben Avison

checkasm benchmarks on 1.5 GHz Cortex-A72 are as follows.

vc1dsp.vc1_unescape_buffer_c: 655617.7
vc1dsp.vc1_unescape_buffer_neon: 118237.0

Signed-off-by: Ben Avison <bavison@riscosopen.org>
---
 libavcodec/aarch64/vc1dsp_init_aarch64.c |  61 ++++++++
 libavcodec/aarch64/vc1dsp_neon.S         | 176 +++++++++++++++++++++++
 2 files changed, 237 insertions(+)
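
The wrapper's 32-bit escape test is compact enough to deserve unpacking. On a
little-endian machine AV_RL32() assembles p[0] | p[1]<<8 | p[2]<<16 | p[3]<<24,
so masking with ~0x03000000 clears only bits 24-25 before the comparison. A
byte-wise equivalent, as a sketch (the helper name is invented here):

    #include <stdint.h>

    /* Equivalent to (AV_RL32(p) & ~0x03000000) == 0x00030000: a VC-1
     * escape starts at 00 00 03 followed by a byte in 0x00..0x03. */
    static int is_escape_start(const uint8_t *p)
    {
        return p[0] == 0x00 && p[1] == 0x00 && p[2] == 0x03 && p[3] <= 0x03;
    }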

diff --git a/libavcodec/aarch64/vc1dsp_init_aarch64.c b/libavcodec/aarch64/vc1dsp_init_aarch64.c
index b672b2aa99..161d5a972b 100644
--- a/libavcodec/aarch64/vc1dsp_init_aarch64.c
+++ b/libavcodec/aarch64/vc1dsp_init_aarch64.c
@@ -21,6 +21,7 @@
 #include "libavutil/attributes.h"
 #include "libavutil/cpu.h"
 #include "libavutil/aarch64/cpu.h"
+#include "libavutil/intreadwrite.h"
 #include "libavcodec/vc1dsp.h"
 
 #include "config.h"
@@ -51,6 +52,64 @@ void ff_put_vc1_chroma_mc4_neon(uint8_t *dst, uint8_t *src, ptrdiff_t stride,
 void ff_avg_vc1_chroma_mc4_neon(uint8_t *dst, uint8_t *src, ptrdiff_t stride,
                                 int h, int x, int y);
 
+int ff_vc1_unescape_buffer_helper_neon(const uint8_t *src, int size, uint8_t *dst);
+
+static int vc1_unescape_buffer_neon(const uint8_t *src, int size, uint8_t *dst)
+{
+    /* Dealing with starting and stopping, and removing escape bytes, are
+     * comparatively less time-sensitive, so are more clearly expressed using
+     * a C wrapper around the assembly inner loop. Note that we assume a
+     * little-endian machine that supports unaligned loads. */
+    int dsize = 0;
+    while (size >= 4)
+    {
+        int found = 0;
+        while (!found && (((uintptr_t) dst) & 7) && size >= 4)
+        {
+            found = (AV_RL32(src) &~ 0x03000000) == 0x00030000;
+            if (!found)
+            {
+                *dst++ = *src++;
+                --size;
+                ++dsize;
+            }
+        }
+        if (!found)
+        {
+            int skip = size - ff_vc1_unescape_buffer_helper_neon(src, size, dst);
+            dst += skip;
+            src += skip;
+            size -= skip;
+            dsize += skip;
+            while (!found && size >= 4)
+            {
+                found = (AV_RL32(src) &~ 0x03000000) == 0x00030000;
+                if (!found)
+                {
+                    *dst++ = *src++;
+                    --size;
+                    ++dsize;
+                }
+            }
+        }
+        if (found)
+        {
+            *dst++ = *src++;
+            *dst++ = *src++;
+            ++src;
+            size -= 3;
+            dsize += 2;
+        }
+    }
+    while (size > 0)
+    {
+        *dst++ = *src++;
+        --size;
+        ++dsize;
+    }
+    return dsize;
+}
+
 av_cold void ff_vc1dsp_init_aarch64(VC1DSPContext *dsp)
 {
     int cpu_flags = av_get_cpu_flags();
@@ -76,5 +135,7 @@ av_cold void ff_vc1dsp_init_aarch64(VC1DSPContext *dsp)
         dsp->avg_no_rnd_vc1_chroma_pixels_tab[0] = ff_avg_vc1_chroma_mc8_neon;
         dsp->put_no_rnd_vc1_chroma_pixels_tab[1] = ff_put_vc1_chroma_mc4_neon;
         dsp->avg_no_rnd_vc1_chroma_pixels_tab[1] = ff_avg_vc1_chroma_mc4_neon;
+
+        dsp->vc1_unescape_buffer = vc1_unescape_buffer_neon;
     }
 }
diff --git a/libavcodec/aarch64/vc1dsp_neon.S b/libavcodec/aarch64/vc1dsp_neon.S
index e68e0fce53..529c21d285 100644
--- a/libavcodec/aarch64/vc1dsp_neon.S
+++ b/libavcodec/aarch64/vc1dsp_neon.S
@@ -1374,3 +1374,179 @@ function ff_vc1_h_loop_filter16_neon, export=1
         st2             {v2.b, v3.b}[7], [x6]
 4:      ret
 endfunc
+
+// Copy at most the specified number of bytes from source to destination buffer,
+// stopping at a multiple of 32 bytes, none of which are the start of an escape sequence
+// On entry:
+//   x0 -> source buffer
+//   w1 = max number of bytes to copy
+//   x2 -> destination buffer, optimally 8-byte aligned
+// On exit:
+//   w0 = number of bytes not copied
+function ff_vc1_unescape_buffer_helper_neon, export=1
+        // Offset by 80 to screen out cases that are too short for us to handle,
+        // and also make it easy to test for loop termination, or to determine
+        // whether we need an odd number of half-iterations of the loop.
+        subs            w1, w1, #80
+        b.mi            90f
+
+        // Set up useful constants
+        movi            v20.4s, #3, lsl #24
+        movi            v21.4s, #3, lsl #16
+
+        tst             w1, #32
+        b.ne            1f
+
+          ld1             {v0.16b, v1.16b, v2.16b}, [x0], #48
+          ext             v25.16b, v0.16b, v1.16b, #1
+          ext             v26.16b, v0.16b, v1.16b, #2
+          ext             v27.16b, v0.16b, v1.16b, #3
+          ext             v29.16b, v1.16b, v2.16b, #1
+          ext             v30.16b, v1.16b, v2.16b, #2
+          ext             v31.16b, v1.16b, v2.16b, #3
+          bic             v24.16b, v0.16b, v20.16b
+          bic             v25.16b, v25.16b, v20.16b
+          bic             v26.16b, v26.16b, v20.16b
+          bic             v27.16b, v27.16b, v20.16b
+          bic             v28.16b, v1.16b, v20.16b
+          bic             v29.16b, v29.16b, v20.16b
+          bic             v30.16b, v30.16b, v20.16b
+          bic             v31.16b, v31.16b, v20.16b
+          eor             v24.16b, v24.16b, v21.16b
+          eor             v25.16b, v25.16b, v21.16b
+          eor             v26.16b, v26.16b, v21.16b
+          eor             v27.16b, v27.16b, v21.16b
+          eor             v28.16b, v28.16b, v21.16b
+          eor             v29.16b, v29.16b, v21.16b
+          eor             v30.16b, v30.16b, v21.16b
+          eor             v31.16b, v31.16b, v21.16b
+          cmeq            v24.4s, v24.4s, #0
+          cmeq            v25.4s, v25.4s, #0
+          cmeq            v26.4s, v26.4s, #0
+          cmeq            v27.4s, v27.4s, #0
+          add             w1, w1, #32
+          b               3f
+
+1:      ld1             {v3.16b, v4.16b, v5.16b}, [x0], #48
+        ext             v25.16b, v3.16b, v4.16b, #1
+        ext             v26.16b, v3.16b, v4.16b, #2
+        ext             v27.16b, v3.16b, v4.16b, #3
+        ext             v29.16b, v4.16b, v5.16b, #1
+        ext             v30.16b, v4.16b, v5.16b, #2
+        ext             v31.16b, v4.16b, v5.16b, #3
+        bic             v24.16b, v3.16b, v20.16b
+        bic             v25.16b, v25.16b, v20.16b
+        bic             v26.16b, v26.16b, v20.16b
+        bic             v27.16b, v27.16b, v20.16b
+        bic             v28.16b, v4.16b, v20.16b
+        bic             v29.16b, v29.16b, v20.16b
+        bic             v30.16b, v30.16b, v20.16b
+        bic             v31.16b, v31.16b, v20.16b
+        eor             v24.16b, v24.16b, v21.16b
+        eor             v25.16b, v25.16b, v21.16b
+        eor             v26.16b, v26.16b, v21.16b
+        eor             v27.16b, v27.16b, v21.16b
+        eor             v28.16b, v28.16b, v21.16b
+        eor             v29.16b, v29.16b, v21.16b
+        eor             v30.16b, v30.16b, v21.16b
+        eor             v31.16b, v31.16b, v21.16b
+        cmeq            v24.4s, v24.4s, #0
+        cmeq            v25.4s, v25.4s, #0
+        cmeq            v26.4s, v26.4s, #0
+        cmeq            v27.4s, v27.4s, #0
+        // Drop through...
+2:        mov             v0.16b, v5.16b
+          ld1             {v1.16b, v2.16b}, [x0], #32
+        cmeq            v28.4s, v28.4s, #0
+        cmeq            v29.4s, v29.4s, #0
+        cmeq            v30.4s, v30.4s, #0
+        cmeq            v31.4s, v31.4s, #0
+        orr             v24.16b, v24.16b, v25.16b
+        orr             v26.16b, v26.16b, v27.16b
+        orr             v28.16b, v28.16b, v29.16b
+        orr             v30.16b, v30.16b, v31.16b
+          ext             v25.16b, v0.16b, v1.16b, #1
+        orr             v22.16b, v24.16b, v26.16b
+          ext             v26.16b, v0.16b, v1.16b, #2
+          ext             v27.16b, v0.16b, v1.16b, #3
+          ext             v29.16b, v1.16b, v2.16b, #1
+        orr             v23.16b, v28.16b, v30.16b
+          ext             v30.16b, v1.16b, v2.16b, #2
+          ext             v31.16b, v1.16b, v2.16b, #3
+          bic             v24.16b, v0.16b, v20.16b
+          bic             v25.16b, v25.16b, v20.16b
+          bic             v26.16b, v26.16b, v20.16b
+        orr             v22.16b, v22.16b, v23.16b
+          bic             v27.16b, v27.16b, v20.16b
+          bic             v28.16b, v1.16b, v20.16b
+          bic             v29.16b, v29.16b, v20.16b
+          bic             v30.16b, v30.16b, v20.16b
+          bic             v31.16b, v31.16b, v20.16b
+        addv            s22, v22.4s
+          eor             v24.16b, v24.16b, v21.16b
+          eor             v25.16b, v25.16b, v21.16b
+          eor             v26.16b, v26.16b, v21.16b
+          eor             v27.16b, v27.16b, v21.16b
+          eor             v28.16b, v28.16b, v21.16b
+        mov             w3, v22.s[0]
+          eor             v29.16b, v29.16b, v21.16b
+          eor             v30.16b, v30.16b, v21.16b
+          eor             v31.16b, v31.16b, v21.16b
+          cmeq            v24.4s, v24.4s, #0
+          cmeq            v25.4s, v25.4s, #0
+          cmeq            v26.4s, v26.4s, #0
+          cmeq            v27.4s, v27.4s, #0
+        cbnz            w3, 90f
+        st1             {v3.16b, v4.16b}, [x2], #32
+3:          mov             v3.16b, v2.16b
+            ld1             {v4.16b, v5.16b}, [x0], #32
+          cmeq            v28.4s, v28.4s, #0
+          cmeq            v29.4s, v29.4s, #0
+          cmeq            v30.4s, v30.4s, #0
+          cmeq            v31.4s, v31.4s, #0
+          orr             v24.16b, v24.16b, v25.16b
+          orr             v26.16b, v26.16b, v27.16b
+          orr             v28.16b, v28.16b, v29.16b
+          orr             v30.16b, v30.16b, v31.16b
+            ext             v25.16b, v3.16b, v4.16b, #1
+          orr             v22.16b, v24.16b, v26.16b
+            ext             v26.16b, v3.16b, v4.16b, #2
+            ext             v27.16b, v3.16b, v4.16b, #3
+            ext             v29.16b, v4.16b, v5.16b, #1
+          orr             v23.16b, v28.16b, v30.16b
+            ext             v30.16b, v4.16b, v5.16b, #2
+            ext             v31.16b, v4.16b, v5.16b, #3
+            bic             v24.16b, v3.16b, v20.16b
+            bic             v25.16b, v25.16b, v20.16b
+            bic             v26.16b, v26.16b, v20.16b
+          orr             v22.16b, v22.16b, v23.16b
+            bic             v27.16b, v27.16b, v20.16b
+            bic             v28.16b, v4.16b, v20.16b
+            bic             v29.16b, v29.16b, v20.16b
+            bic             v30.16b, v30.16b, v20.16b
+            bic             v31.16b, v31.16b, v20.16b
+          addv            s22, v22.4s
+            eor             v24.16b, v24.16b, v21.16b
+            eor             v25.16b, v25.16b, v21.16b
+            eor             v26.16b, v26.16b, v21.16b
+            eor             v27.16b, v27.16b, v21.16b
+            eor             v28.16b, v28.16b, v21.16b
+          mov             w3, v22.s[0]
+            eor             v29.16b, v29.16b, v21.16b
+            eor             v30.16b, v30.16b, v21.16b
+            eor             v31.16b, v31.16b, v21.16b
+            cmeq            v24.4s, v24.4s, #0
+            cmeq            v25.4s, v25.4s, #0
+            cmeq            v26.4s, v26.4s, #0
+            cmeq            v27.4s, v27.4s, #0
+          cbnz            w3, 91f
+          st1             {v0.16b, v1.16b}, [x2], #32
+        subs            w1, w1, #64
+        b.pl            2b
+
+90:     add             w0, w1, #80
+        ret
+
+91:     sub             w1, w1, #32
+        b               90b
+endfunc
-- 
2.25.1

^ permalink raw reply	[flat|nested] 55+ messages in thread

* [FFmpeg-devel] [PATCH 10/10] avcodec/vc1: Arm 32-bit NEON unescape fast path
  2022-03-25 18:52 ` [FFmpeg-devel] [PATCH v2 00/10] " Ben Avison
                     ` (8 preceding siblings ...)
  2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 09/10] avcodec/vc1: Arm 64-bit NEON unescape fast path Ben Avison
@ 2022-03-25 18:52   ` Ben Avison
  2022-03-30 14:35     ` Martin Storsjö
  9 siblings, 1 reply; 55+ messages in thread
From: Ben Avison @ 2022-03-25 18:52 UTC (permalink / raw)
  To: ffmpeg-devel; +Cc: Ben Avison

checkasm benchmarks on 1.5 GHz Cortex-A72 are as follows.

vc1dsp.vc1_unescape_buffer_c: 918624.7
vc1dsp.vc1_unescape_buffer_neon: 142958.0

Signed-off-by: Ben Avison <bavison@riscosopen.org>
---
 libavcodec/arm/vc1dsp_init_neon.c |  61 +++++++++++++++
 libavcodec/arm/vc1dsp_neon.S      | 118 ++++++++++++++++++++++++++++++
 2 files changed, 179 insertions(+)
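
As in the AArch64 version, escape handling stays in C around a NEON inner
loop. A minimal worked example of the unescape semantics (byte values invented
for illustration): each 00 00 03 sequence followed by a byte in 0x00..0x03 has
the 0x03 removed, so here seven escaped bytes unescape to six and the function
returns 6:

    #include <stdint.h>

    static const uint8_t escaped[]   = { 0x10, 0x00, 0x00, 0x03, 0x02, 0x00, 0x20 };
    static const uint8_t unescaped[] = { 0x10, 0x00, 0x00, 0x02, 0x00, 0x20 };
    /* vc1_unescape_buffer_neon(escaped, 7, dst) writes unescaped[] to dst. */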

diff --git a/libavcodec/arm/vc1dsp_init_neon.c b/libavcodec/arm/vc1dsp_init_neon.c
index f5f5c702d7..48cb816b70 100644
--- a/libavcodec/arm/vc1dsp_init_neon.c
+++ b/libavcodec/arm/vc1dsp_init_neon.c
@@ -19,6 +19,7 @@
 #include <stdint.h>
 
 #include "libavutil/attributes.h"
+#include "libavutil/intreadwrite.h"
 #include "libavcodec/vc1dsp.h"
 #include "vc1dsp.h"
 
@@ -84,6 +85,64 @@ void ff_put_vc1_chroma_mc4_neon(uint8_t *dst, uint8_t *src, ptrdiff_t stride,
 void ff_avg_vc1_chroma_mc4_neon(uint8_t *dst, uint8_t *src, ptrdiff_t stride,
                                 int h, int x, int y);
 
+int ff_vc1_unescape_buffer_helper_neon(const uint8_t *src, int size, uint8_t *dst);
+
+static int vc1_unescape_buffer_neon(const uint8_t *src, int size, uint8_t *dst)
+{
+    /* Dealing with starting and stopping, and removing escape bytes, are
+     * comparatively less time-sensitive, so are more clearly expressed using
+     * a C wrapper around the assembly inner loop. Note that we assume a
+     * little-endian machine that supports unaligned loads. */
+    int dsize = 0;
+    while (size >= 4)
+    {
+        int found = 0;
+        while (!found && (((uintptr_t) dst) & 7) && size >= 4)
+        {
+            found = (AV_RL32(src) &~ 0x03000000) == 0x00030000;
+            if (!found)
+            {
+                *dst++ = *src++;
+                --size;
+                ++dsize;
+            }
+        }
+        if (!found)
+        {
+            int skip = size - ff_vc1_unescape_buffer_helper_neon(src, size, dst);
+            dst += skip;
+            src += skip;
+            size -= skip;
+            dsize += skip;
+            while (!found && size >= 4)
+            {
+                found = (AV_RL32(src) &~ 0x03000000) == 0x00030000;
+                if (!found)
+                {
+                    *dst++ = *src++;
+                    --size;
+                    ++dsize;
+                }
+            }
+        }
+        if (found)
+        {
+            *dst++ = *src++;
+            *dst++ = *src++;
+            ++src;
+            size -= 3;
+            dsize += 2;
+        }
+    }
+    while (size > 0)
+    {
+        *dst++ = *src++;
+        --size;
+        ++dsize;
+    }
+    return dsize;
+}
+
 #define FN_ASSIGN(X, Y) \
     dsp->put_vc1_mspel_pixels_tab[0][X+4*Y] = ff_put_vc1_mspel_mc##X##Y##_16_neon; \
     dsp->put_vc1_mspel_pixels_tab[1][X+4*Y] = ff_put_vc1_mspel_mc##X##Y##_neon
@@ -130,4 +189,6 @@ av_cold void ff_vc1dsp_init_neon(VC1DSPContext *dsp)
     dsp->avg_no_rnd_vc1_chroma_pixels_tab[0] = ff_avg_vc1_chroma_mc8_neon;
     dsp->put_no_rnd_vc1_chroma_pixels_tab[1] = ff_put_vc1_chroma_mc4_neon;
     dsp->avg_no_rnd_vc1_chroma_pixels_tab[1] = ff_avg_vc1_chroma_mc4_neon;
+
+    dsp->vc1_unescape_buffer = vc1_unescape_buffer_neon;
 }
diff --git a/libavcodec/arm/vc1dsp_neon.S b/libavcodec/arm/vc1dsp_neon.S
index a639e81171..8e97bc5e58 100644
--- a/libavcodec/arm/vc1dsp_neon.S
+++ b/libavcodec/arm/vc1dsp_neon.S
@@ -1804,3 +1804,121 @@ function ff_vc1_h_loop_filter16_neon, export=1
 4:      vpop            {d8-d15}
         pop             {r4-r6,pc}
 endfunc
+
+@ Copy at most the specified number of bytes from source to destination buffer,
+@ stopping at a multiple of 16 bytes, none of which are the start of an escape sequence
+@ On entry:
+@   r0 -> source buffer
+@   r1 = max number of bytes to copy
+@   r2 -> destination buffer, optimally 8-byte aligned
+@ On exit:
+@   r0 = number of bytes not copied
+function ff_vc1_unescape_buffer_helper_neon, export=1
+        @ Offset by 48 to screen out cases that are too short for us to handle,
+        @ and also make it easy to test for loop termination, or to determine
+        @ whether we need an odd number of half-iterations of the loop.
+        subs    r1, r1, #48
+        bmi     90f
+
+        @ Set up useful constants
+        vmov.i32        q0, #0x3000000
+        vmov.i32        q1, #0x30000
+
+        tst             r1, #16
+        bne             1f
+
+          vld1.8          {q8, q9}, [r0]!
+          vbic            q12, q8, q0
+          vext.8          q13, q8, q9, #1
+          vext.8          q14, q8, q9, #2
+          vext.8          q15, q8, q9, #3
+          veor            q12, q12, q1
+          vbic            q13, q13, q0
+          vbic            q14, q14, q0
+          vbic            q15, q15, q0
+          vceq.i32        q12, q12, #0
+          veor            q13, q13, q1
+          veor            q14, q14, q1
+          veor            q15, q15, q1
+          vceq.i32        q13, q13, #0
+          vceq.i32        q14, q14, #0
+          vceq.i32        q15, q15, #0
+          add             r1, r1, #16
+          b               3f
+
+1:      vld1.8          {q10, q11}, [r0]!
+        vbic            q12, q10, q0
+        vext.8          q13, q10, q11, #1
+        vext.8          q14, q10, q11, #2
+        vext.8          q15, q10, q11, #3
+        veor            q12, q12, q1
+        vbic            q13, q13, q0
+        vbic            q14, q14, q0
+        vbic            q15, q15, q0
+        vceq.i32        q12, q12, #0
+        veor            q13, q13, q1
+        veor            q14, q14, q1
+        veor            q15, q15, q1
+        vceq.i32        q13, q13, #0
+        vceq.i32        q14, q14, #0
+        vceq.i32        q15, q15, #0
+        @ Drop through...
+2:        vmov            q8, q11
+          vld1.8          {q9}, [r0]!
+        vorr            q13, q12, q13
+        vorr            q15, q14, q15
+          vbic            q12, q8, q0
+        vorr            q3, q13, q15
+          vext.8          q13, q8, q9, #1
+          vext.8          q14, q8, q9, #2
+          vext.8          q15, q8, q9, #3
+          veor            q12, q12, q1
+        vorr            d6, d6, d7
+          vbic            q13, q13, q0
+          vbic            q14, q14, q0
+          vbic            q15, q15, q0
+          vceq.i32        q12, q12, #0
+        vmov            r3, r12, d6
+          veor            q13, q13, q1
+          veor            q14, q14, q1
+          veor            q15, q15, q1
+          vceq.i32        q13, q13, #0
+          vceq.i32        q14, q14, #0
+          vceq.i32        q15, q15, #0
+        orrs            r3, r3, r12
+        bne             90f
+        vst1.64         {q10}, [r2]!
+3:          vmov            q10, q9
+            vld1.8          {q11}, [r0]!
+          vorr            q13, q12, q13
+          vorr            q15, q14, q15
+            vbic            q12, q10, q0
+          vorr            q3, q13, q15
+            vext.8          q13, q10, q11, #1
+            vext.8          q14, q10, q11, #2
+            vext.8          q15, q10, q11, #3
+            veor            q12, q12, q1
+          vorr            d6, d6, d7
+            vbic            q13, q13, q0
+            vbic            q14, q14, q0
+            vbic            q15, q15, q0
+            vceq.i32        q12, q12, #0
+          vmov            r3, r12, d6
+            veor            q13, q13, q1
+            veor            q14, q14, q1
+            veor            q15, q15, q1
+            vceq.i32        q13, q13, #0
+            vceq.i32        q14, q14, #0
+            vceq.i32        q15, q15, #0
+          orrs            r3, r3, r12
+          bne             91f
+          vst1.64         {q8}, [r2]!
+        subs            r1, r1, #32
+        bpl             2b
+
+90:     add             r0, r1, #48
+        bx              lr
+
+91:     sub             r1, r1, #16
+        b               90b
+endfunc
-- 
2.25.1

^ permalink raw reply	[flat|nested] 55+ messages in thread

* Re: [FFmpeg-devel] [PATCH 06/10] avcodec/vc1: Arm 32-bit NEON deblocking filter fast paths
  2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 06/10] avcodec/vc1: Arm 32-bit " Ben Avison
@ 2022-03-25 19:27     ` Lynne
  2022-03-25 19:49       ` Martin Storsjö
  2022-03-30 12:37     ` Martin Storsjö
  2022-03-30 13:03     ` Martin Storsjö
  2 siblings, 1 reply; 55+ messages in thread
From: Lynne @ 2022-03-25 19:27 UTC (permalink / raw)
  To: FFmpeg development discussions and patches

25 Mar 2022, 19:52 by bavison@riscosopen.org:

> checkasm benchmarks on 1.5 GHz Cortex-A72 are as follows. Note that the C
> version can still outperform the NEON version in specific cases. The balance
> between different code paths is stream-dependent, but in practice the best
> case happens about 5% of the time, the worst case happens about 40% of the
> time, and the complexity of the remaining cases fall somewhere in between.
> Therefore, taking the average of the best and worst case timings is
> probably a conservative estimate of the degree by which the NEON code
> improves performance.
>
> vc1dsp.vc1_h_loop_filter4_bestcase_c: 19.0
> vc1dsp.vc1_h_loop_filter4_bestcase_neon: 48.5
> vc1dsp.vc1_h_loop_filter4_worstcase_c: 144.7
> vc1dsp.vc1_h_loop_filter4_worstcase_neon: 76.2
> vc1dsp.vc1_h_loop_filter8_bestcase_c: 41.0
> vc1dsp.vc1_h_loop_filter8_bestcase_neon: 75.0
> vc1dsp.vc1_h_loop_filter8_worstcase_c: 294.0
> vc1dsp.vc1_h_loop_filter8_worstcase_neon: 102.7
> vc1dsp.vc1_h_loop_filter16_bestcase_c: 54.7
> vc1dsp.vc1_h_loop_filter16_bestcase_neon: 130.0
> vc1dsp.vc1_h_loop_filter16_worstcase_c: 569.7
> vc1dsp.vc1_h_loop_filter16_worstcase_neon: 186.7
> vc1dsp.vc1_v_loop_filter4_bestcase_c: 20.2
> vc1dsp.vc1_v_loop_filter4_bestcase_neon: 47.2
> vc1dsp.vc1_v_loop_filter4_worstcase_c: 164.2
> vc1dsp.vc1_v_loop_filter4_worstcase_neon: 68.5
> vc1dsp.vc1_v_loop_filter8_bestcase_c: 43.5
> vc1dsp.vc1_v_loop_filter8_bestcase_neon: 55.2
> vc1dsp.vc1_v_loop_filter8_worstcase_c: 316.2
> vc1dsp.vc1_v_loop_filter8_worstcase_neon: 72.7
> vc1dsp.vc1_v_loop_filter16_bestcase_c: 62.2
> vc1dsp.vc1_v_loop_filter16_bestcase_neon: 103.7
> vc1dsp.vc1_v_loop_filter16_worstcase_c: 646.5
> vc1dsp.vc1_v_loop_filter16_worstcase_neon: 110.7
>
> Signed-off-by: Ben Avison <bavison@riscosopen.org>
> ---
>  libavcodec/arm/vc1dsp_init_neon.c |  14 +
>  libavcodec/arm/vc1dsp_neon.S      | 643 ++++++++++++++++++++++++++++++
>  2 files changed, 657 insertions(+)
>
> diff --git a/libavcodec/arm/vc1dsp_init_neon.c b/libavcodec/arm/vc1dsp_init_neon.c
> index 2cca784f5a..f5f5c702d7 100644
> --- a/libavcodec/arm/vc1dsp_init_neon.c
> +++ b/libavcodec/arm/vc1dsp_init_neon.c
> @@ -32,6 +32,13 @@ void ff_vc1_inv_trans_4x8_dc_neon(uint8_t *dest, ptrdiff_t stride, int16_t *bloc
>  void ff_vc1_inv_trans_8x4_dc_neon(uint8_t *dest, ptrdiff_t stride, int16_t *block);
>  void ff_vc1_inv_trans_4x4_dc_neon(uint8_t *dest, ptrdiff_t stride, int16_t *block);
>  
> +void ff_vc1_v_loop_filter4_neon(uint8_t *src, int stride, int pq);
> +void ff_vc1_h_loop_filter4_neon(uint8_t *src, int stride, int pq);
> +void ff_vc1_v_loop_filter8_neon(uint8_t *src, int stride, int pq);
> +void ff_vc1_h_loop_filter8_neon(uint8_t *src, int stride, int pq);
> +void ff_vc1_v_loop_filter16_neon(uint8_t *src, int stride, int pq);
> +void ff_vc1_h_loop_filter16_neon(uint8_t *src, int stride, int pq);
> +
>  void ff_put_pixels8x8_neon(uint8_t *block, const uint8_t *pixels,
>  ptrdiff_t line_size, int rnd);
>  
> @@ -92,6 +99,13 @@ av_cold void ff_vc1dsp_init_neon(VC1DSPContext *dsp)
>  dsp->vc1_inv_trans_8x4_dc = ff_vc1_inv_trans_8x4_dc_neon;
>  dsp->vc1_inv_trans_4x4_dc = ff_vc1_inv_trans_4x4_dc_neon;
>  
> +    dsp->vc1_v_loop_filter4  = ff_vc1_v_loop_filter4_neon;
> +    dsp->vc1_h_loop_filter4  = ff_vc1_h_loop_filter4_neon;
> +    dsp->vc1_v_loop_filter8  = ff_vc1_v_loop_filter8_neon;
> +    dsp->vc1_h_loop_filter8  = ff_vc1_h_loop_filter8_neon;
> +    dsp->vc1_v_loop_filter16 = ff_vc1_v_loop_filter16_neon;
> +    dsp->vc1_h_loop_filter16 = ff_vc1_h_loop_filter16_neon;
> +
>  dsp->put_vc1_mspel_pixels_tab[1][ 0] = ff_put_pixels8x8_neon;
>  FN_ASSIGN(1, 0);
>  FN_ASSIGN(2, 0);
> diff --git a/libavcodec/arm/vc1dsp_neon.S b/libavcodec/arm/vc1dsp_neon.S
> index 93f043bf08..a639e81171 100644
> --- a/libavcodec/arm/vc1dsp_neon.S
> +++ b/libavcodec/arm/vc1dsp_neon.S
> @@ -1161,3 +1161,646 @@ function ff_vc1_inv_trans_4x4_dc_neon, export=1
>  vst1.32         {d1[1]},  [r0,:32]
>  bx              lr
>  endfunc
> +
> +@ VC-1 in-loop deblocking filter for 4 pixel pairs at boundary of vertically-neighbouring blocks
> +@ On entry:
> +@   r0 -> top-left pel of lower block
> +@   r1 = row stride, bytes
> +@   r2 = PQUANT bitstream parameter
> +function ff_vc1_v_loop_filter4_neon, export=1
> +        sub             r3, r0, r1, lsl #2
> +        vldr            d0, .Lcoeffs
> +        vld1.32         {d1[0]}, [r0], r1       @ P5
> +        vld1.32         {d2[0]}, [r3], r1       @ P1
> +        vld1.32         {d3[0]}, [r3], r1       @ P2
> +        vld1.32         {d4[0]}, [r0], r1       @ P6
> +        vld1.32         {d5[0]}, [r3], r1       @ P3
> +        vld1.32         {d6[0]}, [r0], r1       @ P7
> +        vld1.32         {d7[0]}, [r3]           @ P4
> +        vld1.32         {d16[0]}, [r0]          @ P8
>

Nice patches, but 2 notes so far:
What's with the weird comment syntax used only in this commit?
A different indentation style is used. We try to indent our Arm assembly as:
<8 spaces><instruction><spaces until column 24><instruction arguments>.
Take a look at e.g. libavcodec/aarch64/vp9itxfm_neon.S. It's just something that
stuck around.

* Re: [FFmpeg-devel] [PATCH 06/10] avcodec/vc1: Arm 32-bit NEON deblocking filter fast paths
  2022-03-25 19:27     ` Lynne
@ 2022-03-25 19:49       ` Martin Storsjö
  2022-03-25 19:55         ` Lynne
  0 siblings, 1 reply; 55+ messages in thread
From: Martin Storsjö @ 2022-03-25 19:49 UTC (permalink / raw)
  To: FFmpeg development discussions and patches

On Fri, 25 Mar 2022, Lynne wrote:

> 25 Mar 2022, 19:52 by bavison@riscosopen.org:
>
>> +@ VC-1 in-loop deblocking filter for 4 pixel pairs at boundary of vertically-neighbouring blocks
>> +@ On entry:
>> +@   r0 -> top-left pel of lower block
>> +@   r1 = row stride, bytes
>> +@   r2 = PQUANT bitstream parameter
>> +function ff_vc1_v_loop_filter4_neon, export=1
>> +        sub             r3, r0, r1, lsl #2
>> +        vldr            d0, .Lcoeffs
>> +        vld1.32         {d1[0]}, [r0], r1       @ P5
>> +        vld1.32         {d2[0]}, [r3], r1       @ P1
>> +        vld1.32         {d3[0]}, [r3], r1       @ P2
>> +        vld1.32         {d4[0]}, [r0], r1       @ P6
>> +        vld1.32         {d5[0]}, [r3], r1       @ P3
>> +        vld1.32         {d6[0]}, [r0], r1       @ P7
>> +        vld1.32         {d7[0]}, [r3]           @ P4
>> +        vld1.32         {d16[0]}, [r0]          @ P8
>>
>
> Nice patches, but 2 notes so far:

Indeed, at first glance this seems great so far - I haven't applied and 
poked at them more closely yet.

> What's with the weird comment syntax used only in this commit?

In 32 bit arm assembly, @ is a native assembler comment character, and 
lots of our existing 32 bit assembly uses that so far.

> Different indentation style used. We try to indent our Arm assembly to:
> <8 spaces><instruction><spaces until column 24><instruction arguments>.

Hmm, I haven't applied this patch locally and checked yet, but at least 
from browsing just the patch, it seems to be quite correctly indented?

We already discussed this in the previous iteration of his patchset, and 
the cover letter mentioned that he had fixed it to match the convention 
now. (And even in the previous iteration, the 32 bit assembly matched the 
existing code.)

// Martin


* Re: [FFmpeg-devel] [PATCH 06/10] avcodec/vc1: Arm 32-bit NEON deblocking filter fast paths
  2022-03-25 19:49       ` Martin Storsjö
@ 2022-03-25 19:55         ` Lynne
  0 siblings, 0 replies; 55+ messages in thread
From: Lynne @ 2022-03-25 19:55 UTC (permalink / raw)
  To: FFmpeg development discussions and patches

25 Mar 2022, 20:49 by martin@martin.st:

> On Fri, 25 Mar 2022, Lynne wrote:
>
>> 25 Mar 2022, 19:52 by bavison@riscosopen.org:
>>
>>> +@ VC-1 in-loop deblocking filter for 4 pixel pairs at boundary of vertically-neighbouring blocks
>>> +@ On entry:
>>> +@   r0 -> top-left pel of lower block
>>> +@   r1 = row stride, bytes
>>> +@   r2 = PQUANT bitstream parameter
>>> +function ff_vc1_v_loop_filter4_neon, export=1
>>> +        sub             r3, r0, r1, lsl #2
>>> +        vldr            d0, .Lcoeffs
>>> +        vld1.32         {d1[0]}, [r0], r1       @ P5
>>> +        vld1.32         {d2[0]}, [r3], r1       @ P1
>>> +        vld1.32         {d3[0]}, [r3], r1       @ P2
>>> +        vld1.32         {d4[0]}, [r0], r1       @ P6
>>> +        vld1.32         {d5[0]}, [r3], r1       @ P3
>>> +        vld1.32         {d6[0]}, [r0], r1       @ P7
>>> +        vld1.32         {d7[0]}, [r3]           @ P4
>>> +        vld1.32         {d16[0]}, [r0]          @ P8
>>>
>>
>> Nice patches, but 2 notes so far:
>>
>
> Indeed, the first glance seems great so far, I haven't applied and poked them closer yet.
>
>> What's with the weird comment syntax used only in this commit?
>>
>
> In 32 bit arm assembly, @ is a native assembler comment character, and lots of our existing 32 bit assembly uses that so far.
>
>> Different indentation style used. We try to indent our Arm assembly to:
>> <8 spaces><instruction><spaces until column 24><instruction arguments>.
>>
>
> Hmm, I haven't applied this patch locally and checked yet, but at least from browsing just the patch, it seems to be quite correctly indented?
>
> We already discussed this in the previous iteration of his patchset, and the cover letter mentioned that he had fixed it to match the convention now. (And even in the previous iteration, the 32 bit assembly matched the existing code.)
>

Oh, right, my email client mangled them.
All looks good to me then.

* Re: [FFmpeg-devel] [PATCH 01/10] checkasm: Add vc1dsp in-loop deblocking filter tests
  2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 01/10] checkasm: Add vc1dsp in-loop deblocking filter tests Ben Avison
@ 2022-03-25 22:53     ` Martin Storsjö
  2022-03-28 18:28       ` Ben Avison
  2022-03-29 12:24     ` Martin Storsjö
  2022-03-29 12:43     ` Martin Storsjö
  2 siblings, 1 reply; 55+ messages in thread
From: Martin Storsjö @ 2022-03-25 22:53 UTC (permalink / raw)
  To: FFmpeg development discussions and patches; +Cc: Ben Avison

On Fri, 25 Mar 2022, Ben Avison wrote:

> Note that the benchmarking results for these functions are highly dependent
> upon the input data. Therefore, each function is benchmarked twice,
> corresponding to the best and worst case complexity of the reference C
> implementation. The performance of a real stream decode will fall somewhere
> between these two extremes.
>
> Signed-off-by: Ben Avison <bavison@riscosopen.org>
> ---
> tests/checkasm/Makefile   |  1 +
> tests/checkasm/checkasm.c |  3 ++
> tests/checkasm/checkasm.h |  1 +
> tests/checkasm/vc1dsp.c   | 94 +++++++++++++++++++++++++++++++++++++++
> tests/fate/checkasm.mak   |  1 +
> 5 files changed, 100 insertions(+)
> create mode 100644 tests/checkasm/vc1dsp.c
>
> +#define CHECK_LOOP_FILTER(func)                                             \
> +    do {                                                                    \
> +        if (check_func(h.func, "vc1dsp." #func)) {                          \
> +            declare_func_emms(AV_CPU_FLAG_MMX, void, uint8_t *, int, int);  \
> +            for (int count = 1000; count > 0; --count) {                    \
> +                int pq = rnd() % 31 + 1;                                    \
> +                RANDOMIZE_BUFFER8_MID_WEIGHTED(filter_buf, 24 * 24);        \
> +                call_ref(filter_buf0 + 4 * 24 + 4, 24, pq);                 \
> +                call_new(filter_buf1 + 4 * 24 + 4, 24, pq);                 \
> +                if (memcmp(filter_buf0, filter_buf1, 24 * 24))              \
> +                    fail();                                                 \
> +            }                                                               \
> +        }                                                                   \
> +        for (int j = 0; j < 24; ++j)                                        \
> +            for (int i = 0; i < 24; ++i)                                    \
> +                filter_buf1[24*j + i] = 0x60 + 0x40 * (i >= 4 && j >= 4);   \
> +        if (check_func(h.func, "vc1dsp." #func "_bestcase")) {              \
> +            declare_func_emms(AV_CPU_FLAG_MMX, void, uint8_t *, int, int);  \
> +            bench_new(filter_buf1 + 4 * 24 + 4, 24, 1);                     \
> +            (void) checked_call;                                            \
> +        }                                                                   \
> +        if (check_func(h.func, "vc1dsp." #func "_worstcase")) {             \
> +            declare_func_emms(AV_CPU_FLAG_MMX, void, uint8_t *, int, int);  \
> +            bench_new(filter_buf1 + 4 * 24 + 4, 24, 31);                    \
> +            (void) checked_call;                                            \
> +        }                                                                   \

(not a full review, just something that cropped up in initial build 
testing)

Why do you have the "(void) checked_call;" here? The checked_call isn't 
something that is universally defined; its availability depends on the 
OS/arch combination; on other combinations, call_new/call_ref just call 
the function straight away without a wrapper. In particular, on macOS on 
arm64, we don't use checked_call, due to differences in how parameters are 
packed on the stack in the darwin ABI compared to AAPCS.

// Martin


* Re: [FFmpeg-devel] [PATCH 01/10] checkasm: Add vc1dsp in-loop deblocking filter tests
  2022-03-25 22:53     ` Martin Storsjö
@ 2022-03-28 18:28       ` Ben Avison
  2022-03-29 11:47         ` Martin Storsjö
  0 siblings, 1 reply; 55+ messages in thread
From: Ben Avison @ 2022-03-28 18:28 UTC (permalink / raw)
  To: ffmpeg-devel

On 25/03/2022 22:53, Martin Storsjö wrote:
> On Fri, 25 Mar 2022, Ben Avison wrote:
> 
>> +#define CHECK_LOOP_FILTER(func)                                             \
>> +    do {                                                                    \
>> +        if (check_func(h.func, "vc1dsp." #func)) {                          \
>> +            declare_func_emms(AV_CPU_FLAG_MMX, void, uint8_t *, int, int);  \
>> +            for (int count = 1000; count > 0; --count) {                    \
>> +                int pq = rnd() % 31 + 1;                                    \
>> +                RANDOMIZE_BUFFER8_MID_WEIGHTED(filter_buf, 24 * 24);        \
>> +                call_ref(filter_buf0 + 4 * 24 + 4, 24, pq);                 \
>> +                call_new(filter_buf1 + 4 * 24 + 4, 24, pq);                 \
>> +                if (memcmp(filter_buf0, filter_buf1, 24 * 24))              \
>> +                    fail();                                                 \
>> +            }                                                               \
>> +        }                                                                   \
>> +        for (int j = 0; j < 24; ++j)                                        \
>> +            for (int i = 0; i < 24; ++i)                                    \
>> +                filter_buf1[24*j + i] = 0x60 + 0x40 * (i >= 4 && j >= 4);   \
>> +        if (check_func(h.func, "vc1dsp." #func "_bestcase")) {              \
>> +            declare_func_emms(AV_CPU_FLAG_MMX, void, uint8_t *, int, int);  \
>> +            bench_new(filter_buf1 + 4 * 24 + 4, 24, 1);                     \
>> +            (void) checked_call;                                            \
>> +        }                                                                   \
>> +        if (check_func(h.func, "vc1dsp." #func "_worstcase")) {             \
>> +            declare_func_emms(AV_CPU_FLAG_MMX, void, uint8_t *, int, int);  \
>> +            bench_new(filter_buf1 + 4 * 24 + 4, 24, 31);                    \
>> +            (void) checked_call;                                            \
>> +        }                                                                   \
> 
> (not a full review, just something that cropped up in initial build 
> testing)
> 
> Why do you have the "(void) checked_call;" here? The checked_call isn't 
> something that is universally defined; its availability depends on the 
> OS/arch combination; on other combinations, call_new/call_ref just call 
> the function straight away without a wrapper.

OK, I missed that subtlety. My aim was to avoid the "unused variable" 
compiler warnings generated as a result of there being twice as many 
benchmark tests as correctness tests. I believe we need separate calls 
of check_func() to initialise the cycle counts for each benchmark, and 
copying the sequence of macros from checkasm/blockdsp.c, I was placing 
the declare_func_emms() invocations inside the if block that used 
check_func(). That meant that checked_call was initialised, but since 
the correctness test (call_ref / call_new) was in a different block 
scope, this checked_call declaration was never used.

Upon further investigation, I think it's valid to move the 
declare_func_emms() invocation up to the next largest block scope. That 
means it would only appear once rather than 3 times, and it wouldn't 
need the cast-to-void any more. Please do correct me if I'm wrong.

Ben

* Re: [FFmpeg-devel] [PATCH 01/10] checkasm: Add vc1dsp in-loop deblocking filter tests
  2022-03-28 18:28       ` Ben Avison
@ 2022-03-29 11:47         ` Martin Storsjö
  0 siblings, 0 replies; 55+ messages in thread
From: Martin Storsjö @ 2022-03-29 11:47 UTC (permalink / raw)
  To: FFmpeg development discussions and patches

On Mon, 28 Mar 2022, Ben Avison wrote:

> On 25/03/2022 22:53, Martin Storsjö wrote:
>> On Fri, 25 Mar 2022, Ben Avison wrote:
>> 
>>> +#define CHECK_LOOP_FILTER(func)                                             \
>>> +    do {                                                                    \
>>> +        if (check_func(h.func, "vc1dsp." #func)) {                          \
>>> +            declare_func_emms(AV_CPU_FLAG_MMX, void, uint8_t *, int, int);  \
>>> +            for (int count = 1000; count > 0; --count) {                    \
>>> +                int pq = rnd() % 31 + 1;                                    \
>>> +                RANDOMIZE_BUFFER8_MID_WEIGHTED(filter_buf, 24 * 24);        \
>>> +                call_ref(filter_buf0 + 4 * 24 + 4, 24, pq);                 \
>>> +                call_new(filter_buf1 + 4 * 24 + 4, 24, pq);                 \
>>> +                if (memcmp(filter_buf0, filter_buf1, 24 * 24))              \
>>> +                    fail();                                                 \
>>> +            }                                                               \
>>> +        }                                                                   \
>>> +        for (int j = 0; j < 24; ++j)                                        \
>>> +            for (int i = 0; i < 24; ++i)                                    \
>>> +                filter_buf1[24*j + i] = 0x60 + 0x40 * (i >= 4 && j >= 4);   \
>>> +        if (check_func(h.func, "vc1dsp." #func "_bestcase")) {              \
>>> +            declare_func_emms(AV_CPU_FLAG_MMX, void, uint8_t *, int, int);  \
>>> +            bench_new(filter_buf1 + 4 * 24 + 4, 24, 1);                     \
>>> +            (void) checked_call;                                            \
>>> +        }                                                                   \
>>> +        if (check_func(h.func, "vc1dsp." #func "_worstcase")) {             \
>>> +            declare_func_emms(AV_CPU_FLAG_MMX, void, uint8_t *, int, int);  \
>>> +            bench_new(filter_buf1 + 4 * 24 + 4, 24, 31);                    \
>>> +            (void) checked_call;                                            \
>>> +        }                                                                   \
>> 
>> (not a full review, just something that cropped up in initial build 
>> testing)
>> 
>> Why do you have the "(void) checked_call;" here? The checked_call isn't 
>> something that is universally defined; its availability depends on the 
>> OS/arch combination; on other combinations, call_new/call_ref just call 
>> the function straight away without a wrapper.
>
> OK, I missed that subtlety. My aim was to avoid the "unused variable" 
> compiler warnings generated as a result of there being twice as many 
> benchmark tests as correctness tests.

Oh, I see. I just ran into it when trying to compile on macOS, then edited 
it out and saw that it built fine there, but didn't try building for other 
platforms with the same modification.

> I believe we need separate calls of check_func() to initialise the cycle 
> counts for each benchmark, and copying the sequence of macros from 
> checkasm/blockdsp.c,

FWIW I think blockdsp.c might have been a bad example in that regard, as 
it expands the whole testcase with macros. (I chose it mainly as it was 
one of the shortest testcases.)

I think e.g. vp8dsp would have been a better example - with the toplevel 
checkasm_check_*() function just calling individual functions for the 
tests for various function groups. As check_func() can take a format 
string, you don't usually need the macro expansion for filling that in.
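
E.g. something like

     if (check_func(h.vc1_v_loop_filter4, "vc1dsp.vc1_%s_loop_filter%d", "v", 4))

fills in the test name without needing the stringizing macro (the format
string here is just an illustration).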

> I was placing the declare_func_emms() invocations inside the if block 
> that used check_func(). That meant that checked_call was initialised, 
> but since the correctness test (call_ref / call_new) was in a different 
> block scope, this checked_call declaration was never used.
>
> Upon further investigation, I think it's valid to move the 
> declare_func_emms() invocation up to the next largest block scope. That 
> means it would only appear once rather than 3 times, and it wouldn't 
> need the cast-to-void any more. Please do correct me if I'm wrong.

Yes, that seems correct to do. And looking at other examples, e.g. vp8dsp, 
that also uses such a structure, with declare_func_*() outside of 
check_func() - in a function like check_loopfilter_simple().

// Martin

* Re: [FFmpeg-devel] [PATCH 01/10] checkasm: Add vc1dsp in-loop deblocking filter tests
  2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 01/10] checkasm: Add vc1dsp in-loop deblocking filter tests Ben Avison
  2022-03-25 22:53     ` Martin Storsjö
@ 2022-03-29 12:24     ` Martin Storsjö
  2022-03-29 12:43     ` Martin Storsjö
  2 siblings, 0 replies; 55+ messages in thread
From: Martin Storsjö @ 2022-03-29 12:24 UTC (permalink / raw)
  To: FFmpeg development discussions and patches; +Cc: Ben Avison

On Fri, 25 Mar 2022, Ben Avison wrote:

> Note that the benchmarking results for these functions are highly dependent
> upon the input data. Therefore, each function is benchmarked twice,
> corresponding to the best and worst case complexity of the reference C
> implementation. The performance of a real stream decode will fall somewhere
> between these two extremes.

Great idea to do separate benchmarking of the best/worst cases like this - 
that addresses a recurring issue in benchmarking loop filters.

(Another issue with benchmarking of loop filters is that the same 
function is run repeatedly without resetting the input data in between - so 
depending on the exact setup, it's possible that the decision about 
whether to filter or not is taken differently in the first and last runs. 
But this implementation seems very good in that aspect!)

> +++ b/tests/checkasm/vc1dsp.c
> @@ -0,0 +1,94 @@
> +/*
> + * Copyright (c) 2022 Ben Avison
> + *
> + * This file is part of FFmpeg.
> + *
> + * FFmpeg is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + *
> + * FFmpeg is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License along
> + * with FFmpeg; if not, write to the Free Software Foundation, Inc.,
> + * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
> + */
> +
> +#include <string.h>
> +
> +#include "checkasm.h"
> +
> +#include "libavcodec/vc1dsp.h"
> +
> +#include "libavutil/common.h"
> +#include "libavutil/internal.h"
> +#include "libavutil/intreadwrite.h"
> +#include "libavutil/mem_internal.h"
> +
> +#define RANDOMIZE_BUFFER8_MID_WEIGHTED(name, size)  \
> +    do {                                            \
> +        uint8_t *p##0 = name##0, *p##1 = name##1;   \
> +        int i = (size);                             \
> +        while (i-- > 0) {                           \
> +            int x = 0x80 | (rnd() & 0x7F);          \
> +            x >>= rnd() % 9;                        \
> +            if (rnd() & 1)                          \
> +                x = -x;                             \
> +            *p##1++ = *p##0++ = 0x80 + x;           \
> +        }                                           \
> +    } while (0)
> +
> +#define CHECK_LOOP_FILTER(func)                                             \
> +    do {                                                                    \
> +        if (check_func(h.func, "vc1dsp." #func)) {                          \
> +            declare_func_emms(AV_CPU_FLAG_MMX, void, uint8_t *, int, int);  \
> +            for (int count = 1000; count > 0; --count) {                    \
> +                int pq = rnd() % 31 + 1;                                    \
> +                RANDOMIZE_BUFFER8_MID_WEIGHTED(filter_buf, 24 * 24);        \
> +                call_ref(filter_buf0 + 4 * 24 + 4, 24, pq);                 \
> +                call_new(filter_buf1 + 4 * 24 + 4, 24, pq);                 \
> +                if (memcmp(filter_buf0, filter_buf1, 24 * 24))              \
> +                    fail();                                                 \
> +            }                                                               \
> +        }                                                                   \
> +        for (int j = 0; j < 24; ++j)                                        \
> +            for (int i = 0; i < 24; ++i)                                    \
> +                filter_buf1[24*j + i] = 0x60 + 0x40 * (i >= 4 && j >= 4);   \
> +        if (check_func(h.func, "vc1dsp." #func "_bestcase")) {              \
> +            declare_func_emms(AV_CPU_FLAG_MMX, void, uint8_t *, int, int);  \
> +            bench_new(filter_buf1 + 4 * 24 + 4, 24, 1);                     \
> +            (void) checked_call;                                            \
> +        }                                                                   \
> +        if (check_func(h.func, "vc1dsp." #func "_worstcase")) {             \
> +            declare_func_emms(AV_CPU_FLAG_MMX, void, uint8_t *, int, int);  \
> +            bench_new(filter_buf1 + 4 * 24 + 4, 24, 31);                    \
> +            (void) checked_call;                                            \
> +        }                                                                   \
> +    } while (0)
> +
> +void checkasm_check_vc1dsp(void)
> +{
> +    /* Deblocking filter buffers are big enough to hold a 16x16 block,
> +     * plus 4 rows/columns above/left to hold filter inputs (depending on
> +     * whether v or h neighbouring block edge) plus 4 rows/columns
> +     * right/below to catch write overflows */
> +    LOCAL_ALIGNED_4(uint8_t, filter_buf0, [24 * 24]);
> +    LOCAL_ALIGNED_4(uint8_t, filter_buf1, [24 * 24]);
> +
> +    VC1DSPContext h;
> +
> +    ff_vc1dsp_init(&h);
> +
> +    CHECK_LOOP_FILTER(vc1_v_loop_filter4);
> +    CHECK_LOOP_FILTER(vc1_h_loop_filter4);
> +    CHECK_LOOP_FILTER(vc1_v_loop_filter8);
> +    CHECK_LOOP_FILTER(vc1_h_loop_filter8);
> +    CHECK_LOOP_FILTER(vc1_v_loop_filter16);
> +    CHECK_LOOP_FILTER(vc1_h_loop_filter16);
> +
> +    report("loop_filter");
> +}

This looks great to me overall. I think it'd be nice to unmacro 
CHECK_LOOP_FILTER though and make a separate check_loopfilter() function 
like in vp8dsp.c instead, and move the declare_func_emms outside of 
check_func() as you concluded.
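
Roughly along these lines (only a sketch, reusing the buffers and the
RANDOMIZE_BUFFER8_MID_WEIGHTED macro from this patch; the helper name and
layout are just suggestions):

     static void check_loop_filter(void)
     {
         /* 16x16 block plus 4 guard rows/columns on each side */
         LOCAL_ALIGNED_4(uint8_t, filter_buf0, [24 * 24]);
         LOCAL_ALIGNED_4(uint8_t, filter_buf1, [24 * 24]);
         VC1DSPContext h;

         declare_func_emms(AV_CPU_FLAG_MMX, void, uint8_t *, int, int);

         ff_vc1dsp_init(&h);

         if (check_func(h.vc1_v_loop_filter4, "vc1dsp.vc1_v_loop_filter4")) {
             for (int count = 1000; count > 0; --count) {
                 int pq = rnd() % 31 + 1;
                 RANDOMIZE_BUFFER8_MID_WEIGHTED(filter_buf, 24 * 24);
                 call_ref(filter_buf0 + 4 * 24 + 4, 24, pq);
                 call_new(filter_buf1 + 4 * 24 + 4, 24, pq);
                 if (memcmp(filter_buf0, filter_buf1, 24 * 24))
                     fail();
             }
         }
         /* ... the other five sizes, then the _bestcase/_worstcase
          * bench_new() blocks, which no longer need (void) checked_call
          * because declare_func_emms() is at function scope ... */
     }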

// Martin


* Re: [FFmpeg-devel] [PATCH 02/10] checkasm: Add vc1dsp inverse transform tests
  2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 02/10] checkasm: Add vc1dsp inverse transform tests Ben Avison
@ 2022-03-29 12:41     ` Martin Storsjö
  0 siblings, 0 replies; 55+ messages in thread
From: Martin Storsjö @ 2022-03-29 12:41 UTC (permalink / raw)
  To: FFmpeg development discussions and patches; +Cc: Ben Avison

On Fri, 25 Mar 2022, Ben Avison wrote:

> This test deliberately doesn't exercise the full range of inputs described in
> the committee draft VC-1 standard. It says:
>
> input coefficients in frequency domain, D, satisfy   -2048 <= D < 2047
> intermediate coefficients, E, satisfy                -4096 <= E < 4095
> fully inverse-transformed coefficients, R, satisfy    -512 <= R <  511
>
> For one thing, the inequalities look odd. Did they mean them to go the
> other way round? That would make more sense because the equations generally
> both add and subtract coefficients multiplied by constants, including powers
> of 2. Requiring the most-negative values to be valid extends the number of
> bits to represent the intermediate values just for the sake of that one case!
>
> For another thing, the extreme values don't look to occur in real streams -
> both in my experience and supported by the following comment in the AArch32
> decoder:
>
>    tNhalf is half of the value of tN (as described in vc1_inv_trans_8x8_c).
>    This is done because sometimes files have input that causes tN + tM to
>    overflow. To avoid this overflow, we compute tNhalf, then compute
>    tNhalf + tM (which doesn't overflow), and then we use vhadd to compute
>    (tNhalf + (tNhalf + tM)) >> 1 which does not overflow because it is
>    one instruction.
>
> My AArch64 decoder goes further than this. It calculates tNhalf and tM
> then does an SRA (essentially a fused halve and add) to compute
> (tN + tM) >> 1 without ever having to hold (tNhalf + tM) in a 16-bit
> element, avoiding overflow. It only encounters difficulties if either
> tNhalf or tM overflows in isolation.
>
> I haven't had sight of the final standard, so it's possible that these
> issues were dealt with during finalisation, which could explain the lack
> of usage of extreme inputs in real streams. Or a preponderance of decoders
> that only support 16-bit intermediate values in their inverse transforms
> might have caused encoders to steer clear of such cases.
>
> I have effectively followed this approach in the test, and limited the
> scale of the coefficients sufficient that both the existing AArch32 decoder
> and my new AArch64 decoder both pass.
>
> Signed-off-by: Ben Avison <bavison@riscosopen.org>
> ---
> tests/checkasm/vc1dsp.c | 258 ++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 258 insertions(+)

The reasoning sounds sensible to me.

I didn't try to follow the exact logic for how the input data is produced, 
but it seems reasonable.

It'd be nice to unmacro the function and wrap it in a separate standalone 
test function like check_idct() in vp8dsp, check_itxfm in vp9dsp or 
check_idct in hevc_idct.c. You may want to have a deeply nested loop to 
check e.g.

     for (int w = 4; w <= 8; w += 4) {
         for (int h = 4; h <= 8; h += 4) {
             for (int dc = 0; dc <= 1; dc++) {
                 if (w == 8 && h == 8 && dc == 0)
                     continue; // Tested separately
                 [... actual test ...]
                 (or call a separate check_idct_func(w,h,dc) function
                 to avoid unnecessarily deep indentation of a lot of code)
             }
         }
     }

// Martin


* Re: [FFmpeg-devel] [PATCH 01/10] checkasm: Add vc1dsp in-loop deblocking filter tests
  2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 01/10] checkasm: Add vc1dsp in-loop deblocking filter tests Ben Avison
  2022-03-25 22:53     ` Martin Storsjö
  2022-03-29 12:24     ` Martin Storsjö
@ 2022-03-29 12:43     ` Martin Storsjö
  2 siblings, 0 replies; 55+ messages in thread
From: Martin Storsjö @ 2022-03-29 12:43 UTC (permalink / raw)
  To: FFmpeg development discussions and patches; +Cc: Ben Avison

On Fri, 25 Mar 2022, Ben Avison wrote:

> Note that the benchmarking results for these functions are highly dependent
> upon the input data. Therefore, each function is benchmarked twice,
> corresponding to the best and worst case complexity of the reference C
> implementation. The performance of a real stream decode will fall somewhere
> between these two extremes.
>
> Signed-off-by: Ben Avison <bavison@riscosopen.org>
> ---
> tests/checkasm/Makefile   |  1 +
> tests/checkasm/checkasm.c |  3 ++
> tests/checkasm/checkasm.h |  1 +
> tests/checkasm/vc1dsp.c   | 94 +++++++++++++++++++++++++++++++++++++++
> tests/fate/checkasm.mak   |  1 +
> 5 files changed, 100 insertions(+)
> create mode 100644 tests/checkasm/vc1dsp.c

Actually, this test already paid off - thanks! It caught a real issue with 
the existing x86 loopfilter assembly. The stride parameter is 'int', but 
the assembly uses it as a full register without clearing/sign extending 
the upper half.

Instead of complicating the assembly, the usual remedy is to change the 
parameter to ptrdiff_t, to avoid the issue altogether - I'll send a patch 
for that.
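
I.e. in vc1dsp.h, changing e.g.

     void (*vc1_v_loop_filter4)(uint8_t *src, int stride, int pq);

to

     void (*vc1_v_loop_filter4)(uint8_t *src, ptrdiff_t stride, int pq);

(and likewise for the other loop filter pointers).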

// Martin



* Re: [FFmpeg-devel] [PATCH 03/10] checkasm: Add idctdsp add/put-pixels-clamped tests
  2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 03/10] checkasm: Add idctdsp add/put-pixels-clamped tests Ben Avison
@ 2022-03-29 13:13     ` Martin Storsjö
  2022-03-29 19:56       ` Martin Storsjö
  2022-03-29 20:22       ` Ben Avison
  0 siblings, 2 replies; 55+ messages in thread
From: Martin Storsjö @ 2022-03-29 13:13 UTC (permalink / raw)
  To: FFmpeg development discussions and patches; +Cc: Ben Avison

On Fri, 25 Mar 2022, Ben Avison wrote:

> Disable ff_add_pixels_clamped_arm, which was found to fail the test. As this
> is normally only used for Arms prior to Armv6 (ARM11) it seems quite unlikely
> that anyone is still using this, so I haven't put in the effort to debug it.

I had a look at this function, and I see that the overflow checks are 
using

         tst             r6,  #0x100

to see whether the addition overflowed (either above or below). However, 
if block[] was e.g. 0x200, it's possible to overflow without setting this 
bit at all.

If the valid range of block[] values were e.g. [-255,255], then this kind 
of overflow checking would work though. (As there exists assembly for 
armv6, this function probably hasn't been used much in modern times, so 
this doesn't say much about what values actually are used here.)

Secondly, the clamping seems to be done with

         movne           r6,  r5,  lsr #24

However that should use asr, not lsr, I think, to get proper clamping in 
both ends?


Thirdly - the added test also occasionally fails for the other existing 
functions (armv6, neon) and the newly added aarch64 neon version. If you 
have e.g. src[] = 32767, dst[] = 255, then the widening 8->16 addition 
will overflow, as there's no operation that both widens and clamps at the 
same time.

I think this is reason to limit the range of src[] at least somewhat in 
the test, since I don't think the full 16 bit signed range actually is 
relevant here.
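
As a concrete illustration (a C model of the 8->16 widening addition, not 
the actual NEON code):

     int16_t sum = (int16_t)(32767 + 255);   /* wraps to -32514 */

after which a signed clamp to [0,255] lands on 0 instead of 255.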

// Martin


* Re: [FFmpeg-devel] [PATCH 03/10] checkasm: Add idctdsp add/put-pixels-clamped tests
  2022-03-29 13:13     ` Martin Storsjö
@ 2022-03-29 19:56       ` Martin Storsjö
  2022-03-29 20:22       ` Ben Avison
  1 sibling, 0 replies; 55+ messages in thread
From: Martin Storsjö @ 2022-03-29 19:56 UTC (permalink / raw)
  To: FFmpeg development discussions and patches; +Cc: Ben Avison

On Tue, 29 Mar 2022, Martin Storsjö wrote:

> On Fri, 25 Mar 2022, Ben Avison wrote:
>
>> Disable ff_add_pixels_clamped_arm, which was found to fail the test. As 
>> this
>> is normally only used for Arms prior to Armv6 (ARM11) it seems quite 
>> unlikely
>> that anyone is still using this, so I haven't put in the effort to debug 
>> it.
>
> I had a look at this function, and I see that the overflow checks are using
>
>        tst             r6,  #0x100
>
> to see whether the addition overflowed (either above or below). However, if 
> block[] was e.g. 0x200, it's possible to overflow without setting this bit at 
> all.
>
> If the valid range of block[] values were e.g. [-255,255], then this kind of
> overflow checking would work though. (As there exists assembly for armv6,
> this function probably hasn't been used much in modern times, so this
> doesn't say much about what values actually are used here.)
>
> Secondly, the clamping seems to be done with
>
>        movne           r6,  r5,  lsr #24
>
> However that should use asr, not lsr, I think, to get proper clamping in both 
> ends?

On second thought, no, lsr #24 should be correct here. But "tst r6, 
#0x100" probably is the main issue, given the range of input values set by 
the current test. No idea what the actual value range is, for the decoders 
that use this function though.

// Martin

* Re: [FFmpeg-devel] [PATCH 03/10] checkasm: Add idctdsp add/put-pixels-clamped tests
  2022-03-29 13:13     ` Martin Storsjö
  2022-03-29 19:56       ` Martin Storsjö
@ 2022-03-29 20:22       ` Ben Avison
  2022-03-29 20:30         ` Martin Storsjö
  1 sibling, 1 reply; 55+ messages in thread
From: Ben Avison @ 2022-03-29 20:22 UTC (permalink / raw)
  To: ffmpeg-devel

On 29/03/2022 14:13, Martin Storsjö wrote:
> On Fri, 25 Mar 2022, Ben Avison wrote:
> 
>> Disable ff_add_pixels_clamped_arm, which was found to fail the test. 
> 
> I had a look at this function, and I see that the overflow checks are using
> 
>          tst             r6,  #0x100
> 
> to see whether the addition overflowed (either above or below). However, 
> if block[] was e.g. 0x200, it's possible to overflow without setting 
> this bit at all.

Yes, thinking about it, that test is only valid if the signed 16-bit 
value from block[] lies in the range -0x100..+0x100 inclusive, otherwise 
there exists at least one unsigned 8-bit value which should have clamped 
but won't.
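
As a C model of that check (hypothetical names, not the real register 
allocation):

     int sum = dst + block;          /* e.g. dst = 255, block = 0x200 */
     int overflow = sum & 0x100;     /* the "tst r6, #0x100" test */

Here sum = 0x2FF, so bit 8 is clear and no clamping is signalled, even 
though 767 is well out of range.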

> Secondly, the clamping seems to be done with
> 
>          movne           r6,  r5,  lsr #24
> 
> However that should use asr, not lsr, I think, to get proper clamping in 
> both ends?

r5 is the NOTted version, so all that's doing is selecting 0x000000FF if 
there was positive overflow, and 0x00000000 if there was negative 
overflow. Given that bit 8 and above need to be zero to facilitate 
repacking the 8-bit samples, that's the right thing to do.

> Thirdly - the added test also occasionally fails for the other existing 
> functions (armv6, neon) and the newly added aarch64 neon version. If you 
> have e.g. src[] = 32767, dst[] = 255, then the widening 8->16 addition 
> will overflow, as there's no operation that both widens and clamps at 
> the same time.

So it does. I obviously just didn't hit those cases in my test runs!

I can't easily test all codecs that use this function, but I just tried 
instrumenting the VC-1 case and it doesn't appear to actually use this 
particular function, so I'm none the wiser!

Should I just limit the 16-bit values to +/-0x100 and re-enable the 
armv4 fast path then?

Ben

* Re: [FFmpeg-devel] [PATCH 03/10] checkasm: Add idctdsp add/put-pixels-clamped tests
  2022-03-29 20:22       ` Ben Avison
@ 2022-03-29 20:30         ` Martin Storsjö
  0 siblings, 0 replies; 55+ messages in thread
From: Martin Storsjö @ 2022-03-29 20:30 UTC (permalink / raw)
  To: FFmpeg development discussions and patches

On Tue, 29 Mar 2022, Ben Avison wrote:

>> Thirdly - the added test also occasionally fails for the other existing 
>> functions (armv6, neon) and the newly added aarch64 neon version. If you 
>> have e.g. src[] = 32767, dst[] = 255, then the widening 8->16 addition 
>> will overflow, as there's no operation that both widens and clamps at 
>> the same time.
>
> So it does. I obviously just didn't hit those cases in my test runs!
>
> I can't easily test all codecs that use this function, but I just tried 
> instrumenting the VC-1 case and it doesn't appear to actually use this 
> particular function, so I'm none the wiser!
>
> Should I just limit the 16-bit values to +/-0x100 and re-enable the 
> armv4 fast path then?

Yes, I think that'd be the safest path forward. Worst case, the test would 
be slightly too narrow and could miss some valid case - but that's at 
least better than having the test give false positives for perfectly 
correct assembly that would work just fine for actual decoder use.
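
E.g. something like

     block[i] = (rnd() % 0x201) - 0x100;   /* uniform in [-0x100, +0x100] */

in the randomization would do - the exact distribution doesn't matter much.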

// Martin


* Re: [FFmpeg-devel] [PATCH 04/10] avcodec/vc1: Introduce fast path for unescaping bitstream buffer
  2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 04/10] avcodec/vc1: Introduce fast path for unescaping bitstream buffer Ben Avison
@ 2022-03-29 20:37     ` Martin Storsjö
  2022-03-31 13:58       ` Ben Avison
  0 siblings, 1 reply; 55+ messages in thread
From: Martin Storsjö @ 2022-03-29 20:37 UTC (permalink / raw)
  To: FFmpeg development discussions and patches; +Cc: Ben Avison

On Fri, 25 Mar 2022, Ben Avison wrote:

> void ff_vc1dsp_init(VC1DSPContext* c);
> diff --git a/tests/checkasm/vc1dsp.c b/tests/checkasm/vc1dsp.c
> index 0823ccad31..0ab5892403 100644
> --- a/tests/checkasm/vc1dsp.c
> +++ b/tests/checkasm/vc1dsp.c
> @@ -286,6 +286,20 @@ static matrix *generate_inverse_quantized_transform_coefficients(size_t width, s
>         }                                                                   \
>     } while (0)
>
> +#define TEST_UNESCAPE                                                                                   \
> +    do {                                                                                            \
> +        for (int count = 100; count > 0; --count) {                                                 \
> +            escaped_offset = rnd() & 7;                                                             \
> +            unescaped_offset = rnd() & 7;                                                           \
> +            escaped_len = (1u << (rnd() % 8) + 3) - (rnd() & 7);                                    \
> +            RANDOMIZE_BUFFER8(unescaped, UNESCAPE_BUF_SIZE);                                        \

The output buffer will be overwritten in the end, but I guess this 
initialization is useful for making sure that the test doesn't 
accidentally rely on the output from the previous iteration, right?

> +            len0 = call_ref(escaped0 + escaped_offset, escaped_len, unescaped0 + unescaped_offset); \
> +            len1 = call_new(escaped1 + escaped_offset, escaped_len, unescaped1 + unescaped_offset); \
> +            if (len0 != len1 || memcmp(unescaped0, unescaped1, len0))                               \

Don't you need to include unescaped_offset here too? Otherwise you're just 
checking areas of the buffer that weren't necessarily written.
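
I.e. presumably something like:

     if (len0 != len1 || memcmp(unescaped0 + unescaped_offset,
                                unescaped1 + unescaped_offset, len0))
         fail();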


> +                fail();                                                                             \
> +        }                                                                                           \
> +    } while (0)
> +

As with the rest of the checkasm tests - please unmacro most things where 
possible (except for the RANDOMIZE_* macros, those are ok to keep macroed 
if you want to). And sorry for leading you down a path with a bad example 
in that respect.

> void checkasm_check_vc1dsp(void)
> {
>     /* Inverse transform input coefficients are stored in a 16-bit buffer
> @@ -309,6 +323,14 @@ void checkasm_check_vc1dsp(void)
>     LOCAL_ALIGNED_4(uint8_t, filter_buf0, [24 * 24]);
>     LOCAL_ALIGNED_4(uint8_t, filter_buf1, [24 * 24]);
>
> +    /* This appears to be a typical length of buffer in use */
> +#define LOG2_UNESCAPE_BUF_SIZE 17
> +#define UNESCAPE_BUF_SIZE (1u<<LOG2_UNESCAPE_BUF_SIZE)
> +    LOCAL_ALIGNED_8(uint8_t, escaped0, [UNESCAPE_BUF_SIZE]);
> +    LOCAL_ALIGNED_8(uint8_t, escaped1, [UNESCAPE_BUF_SIZE]);
> +    LOCAL_ALIGNED_8(uint8_t, unescaped0, [UNESCAPE_BUF_SIZE]);
> +    LOCAL_ALIGNED_8(uint8_t, unescaped1, [UNESCAPE_BUF_SIZE]);
> +
>     VC1DSPContext h;
>
>     ff_vc1dsp_init(&h);
> @@ -349,4 +371,41 @@ void checkasm_check_vc1dsp(void)
>     CHECK_LOOP_FILTER(vc1_h_loop_filter16);
>
>     report("loop_filter");
> +
> +    if (check_func(h.vc1_unescape_buffer, "vc1dsp.vc1_unescape_buffer")) {
> +        int len0, len1, escaped_offset, unescaped_offset, escaped_len;
> +        declare_func_emms(AV_CPU_FLAG_MMX, int, const uint8_t *, int, uint8_t *);
> +
> +        /* Test data which consists of escape sequences packed as tightly as possible */
> +        for (int x = 0; x < UNESCAPE_BUF_SIZE; ++x)
> +            escaped1[x] = escaped0[x] = 3 * (x % 3 == 0);
> +        TEST_UNESCAPE;
> +
> +        /* Test random data */
> +        RANDOMIZE_BUFFER8(escaped, UNESCAPE_BUF_SIZE);
> +        TEST_UNESCAPE;
> +
> +        /* Test data with escape sequences at random intervals */
> +        for (int x = 0; x <= UNESCAPE_BUF_SIZE - 4;) {
> +            int gap, gap_msb;
> +            escaped1[x+0] = escaped0[x+0] = 0;
> +            escaped1[x+1] = escaped0[x+1] = 0;
> +            escaped1[x+2] = escaped0[x+2] = 3;
> +            escaped1[x+3] = escaped0[x+3] = rnd() & 3;
> +            gap_msb = 2u << (rnd() % 8);
> +            gap = (rnd() &~ -gap_msb) | gap_msb;
> +            x += gap;
> +        }
> +        TEST_UNESCAPE;
> +
> +        /* Test data which is known to contain no escape sequences */
> +        memset(escaped0, 0xFF, UNESCAPE_BUF_SIZE);
> +        memset(escaped1, 0xFF, UNESCAPE_BUF_SIZE);
> +        TEST_UNESCAPE;
> +
> +        /* Benchmark the no-escape-sequences case */
> +        bench_new(escaped1, UNESCAPE_BUF_SIZE, unescaped1);
> +    }
> +
> +    report("unescape_buffer");
> }

The test looks great otherwise! But please split the code for it into a 
standalone function, e.g. check_unescape(), so the main 
checkasm_check_vc1dsp() just is a list of calls to check_loopfilter(), 
check_idct(), check_unescape() etc.
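
I.e. ending up with something like:

     void checkasm_check_vc1dsp(void)
     {
         check_loopfilter();   /* each helper calls report() itself */
         check_idct();
         check_unescape();
     }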

// Martin


* Re: [FFmpeg-devel] [PATCH 05/10] avcodec/vc1: Arm 64-bit NEON deblocking filter fast paths
  2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 05/10] avcodec/vc1: Arm 64-bit NEON deblocking filter fast paths Ben Avison
@ 2022-03-30 12:35     ` Martin Storsjö
  2022-03-31 15:15       ` Ben Avison
  0 siblings, 1 reply; 55+ messages in thread
From: Martin Storsjö @ 2022-03-30 12:35 UTC (permalink / raw)
  To: FFmpeg development discussions and patches; +Cc: Ben Avison

On Fri, 25 Mar 2022, Ben Avison wrote:

> checkasm benchmarks on 1.5 GHz Cortex-A72 are as follows. Note that the C
> version can still outperform the NEON version in specific cases. The balance
> between different code paths is stream-dependent, but in practice the best
> case happens about 5% of the time, the worst case happens about 40% of the
> time, and the complexity of the remaining cases falls somewhere in between.
> Therefore, taking the average of the best and worst case timings is
> probably a conservative estimate of the degree by which the NEON code
> improves performance.
>
> vc1dsp.vc1_h_loop_filter4_bestcase_c: 10.7
> vc1dsp.vc1_h_loop_filter4_bestcase_neon: 43.5
> vc1dsp.vc1_h_loop_filter4_worstcase_c: 184.5
> vc1dsp.vc1_h_loop_filter4_worstcase_neon: 73.7
> vc1dsp.vc1_h_loop_filter8_bestcase_c: 31.2
> vc1dsp.vc1_h_loop_filter8_bestcase_neon: 62.2
> vc1dsp.vc1_h_loop_filter8_worstcase_c: 358.2
> vc1dsp.vc1_h_loop_filter8_worstcase_neon: 88.2
> vc1dsp.vc1_h_loop_filter16_bestcase_c: 51.0
> vc1dsp.vc1_h_loop_filter16_bestcase_neon: 107.7
> vc1dsp.vc1_h_loop_filter16_worstcase_c: 722.7
> vc1dsp.vc1_h_loop_filter16_worstcase_neon: 140.5
> vc1dsp.vc1_v_loop_filter4_bestcase_c: 9.7
> vc1dsp.vc1_v_loop_filter4_bestcase_neon: 43.0
> vc1dsp.vc1_v_loop_filter4_worstcase_c: 178.7
> vc1dsp.vc1_v_loop_filter4_worstcase_neon: 69.0
> vc1dsp.vc1_v_loop_filter8_bestcase_c: 30.2
> vc1dsp.vc1_v_loop_filter8_bestcase_neon: 50.7
> vc1dsp.vc1_v_loop_filter8_worstcase_c: 353.0
> vc1dsp.vc1_v_loop_filter8_worstcase_neon: 69.2
> vc1dsp.vc1_v_loop_filter16_bestcase_c: 60.0
> vc1dsp.vc1_v_loop_filter16_bestcase_neon: 90.0
> vc1dsp.vc1_v_loop_filter16_worstcase_c: 714.2
> vc1dsp.vc1_v_loop_filter16_worstcase_neon: 97.2
>
> Signed-off-by: Ben Avison <bavison@riscosopen.org>
> ---
> libavcodec/aarch64/Makefile              |   1 +
> libavcodec/aarch64/vc1dsp_init_aarch64.c |  14 +
> libavcodec/aarch64/vc1dsp_neon.S         | 698 +++++++++++++++++++++++
> 3 files changed, 713 insertions(+)
> create mode 100644 libavcodec/aarch64/vc1dsp_neon.S
>
> diff --git a/libavcodec/aarch64/vc1dsp_neon.S b/libavcodec/aarch64/vc1dsp_neon.S
> new file mode 100644
> index 0000000000..70391b4179
> --- /dev/null
> +++ b/libavcodec/aarch64/vc1dsp_neon.S
> @@ -0,0 +1,698 @@
> +/*
> + * VC1 AArch64 NEON optimisations
> + *
> + * Copyright (c) 2022 Ben Avison <bavison@riscosopen.org>
> + *
> + * This file is part of FFmpeg.
> + *
> + * FFmpeg is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU Lesser General Public
> + * License as published by the Free Software Foundation; either
> + * version 2.1 of the License, or (at your option) any later version.
> + *
> + * FFmpeg is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * Lesser General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public
> + * License along with FFmpeg; if not, write to the Free Software
> + * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
> + */
> +
> +#include "libavutil/aarch64/asm.S"
> +
> +.align  5
> +.Lcoeffs:
> +.quad   0x00050002
> +

This constant is problematic when building with MSVC/armasm64.exe (via 
gas-preprocessor). (gas-preprocessor, processing assembly for use with 
armasm, can't handle any completely general gas assembly, but works fine 
as long as one sticks to the general code patterns/macros we use.)

The issue here is that this is a naked label before the first 
function/endfunc/const/endconst block. In practice it works fine if this 
follows after an existing function though. (gas-preprocessor currently 
needs an explicit .text directive, to set it up to emit code to the .text 
section. I guess I could look into handling that implicitly too.)

In practice, this issue disappears further ahead in the patch stack when 
other functions are added above this in the source file though, so it's 
not really an issue - I just thought I'd mention it.

> +// VC-1 in-loop deblocking filter for 4 pixel pairs at boundary of vertically-neighbouring blocks
> +// On entry:
> +//   x0 -> top-left pel of lower block
> +//   w1 = row stride, bytes
> +//   w2 = PQUANT bitstream parameter
> +function ff_vc1_v_loop_filter4_neon, export=1
> +        sub             x3, x0, w1, sxtw #2
> +        sxtw            x1, w1                  // technically, stride is signed int
> +        ldr             d0, .Lcoeffs
> +        ld1             {v1.s}[0], [x0], x1     // P5
> +        ld1             {v2.s}[0], [x3], x1     // P1
> +        ld1             {v3.s}[0], [x3], x1     // P2
> +        ld1             {v4.s}[0], [x0], x1     // P6
> +        ld1             {v5.s}[0], [x3], x1     // P3
> +        ld1             {v6.s}[0], [x0], x1     // P7
> +        ld1             {v7.s}[0], [x3]         // P4
> +        ld1             {v16.s}[0], [x0]        // P8
> +        ushll           v17.8h, v1.8b, #1       // 2*P5
> +        dup             v18.8h, w2              // pq
> +        ushll           v2.8h, v2.8b, #1        // 2*P1
> +        uxtl            v3.8h, v3.8b            // P2
> +        uxtl            v4.8h, v4.8b            // P6
> +        uxtl            v19.8h, v5.8b           // P3

Overall, the code looks sensible to me.

Would it make sense to share the core of the filter between the 
horizontal/vertical cases with e.g. a macro? (I didn't check in detail 
whether there are many differences in the core of the filter. At most some 
differences in condition registers for partial writeout in the horizontal 
forms?)

If it's shareable, I guess the core of the filter could even be factorized 
out into a separate sub-function to avoid duplicating it across the 
functions? (For the smaller filters it's probably no big deal, but for the 
bigger filters it could maybe be worthwhile?) It probably costs a couple of 
cycles extra though, so if that's too costly here, just macroing it to 
avoid duplication is fine too.

// Martin


* Re: [FFmpeg-devel] [PATCH 06/10] avcodec/vc1: Arm 32-bit NEON deblocking filter fast paths
  2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 06/10] avcodec/vc1: Arm 32-bit " Ben Avison
  2022-03-25 19:27     ` Lynne
@ 2022-03-30 12:37     ` Martin Storsjö
  2022-03-30 13:03     ` Martin Storsjö
  2 siblings, 0 replies; 55+ messages in thread
From: Martin Storsjö @ 2022-03-30 12:37 UTC (permalink / raw)
  To: FFmpeg development discussions and patches; +Cc: Ben Avison

On Fri, 25 Mar 2022, Ben Avison wrote:

> checkasm benchmarks on 1.5 GHz Cortex-A72 are as follows. Note that the C
> version can still outperform the NEON version in specific cases. The balance
> between different code paths is stream-dependent, but in practice the best
> case happens about 5% of the time, the worst case happens about 40% of the
> time, and the complexity of the remaining cases falls somewhere in between.
> Therefore, taking the average of the best and worst case timings is
> probably a conservative estimate of the degree by which the NEON code
> improves performance.
>
> vc1dsp.vc1_h_loop_filter4_bestcase_c: 19.0
> vc1dsp.vc1_h_loop_filter4_bestcase_neon: 48.5
> vc1dsp.vc1_h_loop_filter4_worstcase_c: 144.7
> vc1dsp.vc1_h_loop_filter4_worstcase_neon: 76.2
> vc1dsp.vc1_h_loop_filter8_bestcase_c: 41.0
> vc1dsp.vc1_h_loop_filter8_bestcase_neon: 75.0
> vc1dsp.vc1_h_loop_filter8_worstcase_c: 294.0
> vc1dsp.vc1_h_loop_filter8_worstcase_neon: 102.7
> vc1dsp.vc1_h_loop_filter16_bestcase_c: 54.7
> vc1dsp.vc1_h_loop_filter16_bestcase_neon: 130.0
> vc1dsp.vc1_h_loop_filter16_worstcase_c: 569.7
> vc1dsp.vc1_h_loop_filter16_worstcase_neon: 186.7
> vc1dsp.vc1_v_loop_filter4_bestcase_c: 20.2
> vc1dsp.vc1_v_loop_filter4_bestcase_neon: 47.2
> vc1dsp.vc1_v_loop_filter4_worstcase_c: 164.2
> vc1dsp.vc1_v_loop_filter4_worstcase_neon: 68.5
> vc1dsp.vc1_v_loop_filter8_bestcase_c: 43.5
> vc1dsp.vc1_v_loop_filter8_bestcase_neon: 55.2
> vc1dsp.vc1_v_loop_filter8_worstcase_c: 316.2
> vc1dsp.vc1_v_loop_filter8_worstcase_neon: 72.7
> vc1dsp.vc1_v_loop_filter16_bestcase_c: 62.2
> vc1dsp.vc1_v_loop_filter16_bestcase_neon: 103.7
> vc1dsp.vc1_v_loop_filter16_worstcase_c: 646.5
> vc1dsp.vc1_v_loop_filter16_worstcase_neon: 110.7
>
> Signed-off-by: Ben Avison <bavison@riscosopen.org>
> ---
> libavcodec/arm/vc1dsp_init_neon.c |  14 +
> libavcodec/arm/vc1dsp_neon.S      | 643 ++++++++++++++++++++++++++++++
> 2 files changed, 657 insertions(+)

Looks like a close analogue to the arm64 case (i.e. looks good!), with only 
the open question of code sharing/reuse between horizontal and vertical 
remaining.

// Martin


* Re: [FFmpeg-devel] [PATCH 06/10] avcodec/vc1: Arm 32-bit NEON deblocking filter fast paths
  2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 06/10] avcodec/vc1: Arm 32-bit " Ben Avison
  2022-03-25 19:27     ` Lynne
  2022-03-30 12:37     ` Martin Storsjö
@ 2022-03-30 13:03     ` Martin Storsjö
  2 siblings, 0 replies; 55+ messages in thread
From: Martin Storsjö @ 2022-03-30 13:03 UTC (permalink / raw)
  To: FFmpeg development discussions and patches; +Cc: Ben Avison

On Fri, 25 Mar 2022, Ben Avison wrote:

> checkasm benchmarks on 1.5 GHz Cortex-A72 are as follows. Note that the C
> version can still outperform the NEON version in specific cases. The balance
> between different code paths is stream-dependent, but in practice the best
> case happens about 5% of the time, the worst case happens about 40% of the
> time, and the complexity of the remaining cases falls somewhere in between.
> Therefore, taking the average of the best and worst case timings is
> probably a conservative estimate of the degree by which the NEON code
> improves performance.
>
> vc1dsp.vc1_h_loop_filter4_bestcase_c: 19.0
> vc1dsp.vc1_h_loop_filter4_bestcase_neon: 48.5
> vc1dsp.vc1_h_loop_filter4_worstcase_c: 144.7
> vc1dsp.vc1_h_loop_filter4_worstcase_neon: 76.2
> vc1dsp.vc1_h_loop_filter8_bestcase_c: 41.0
> vc1dsp.vc1_h_loop_filter8_bestcase_neon: 75.0
> vc1dsp.vc1_h_loop_filter8_worstcase_c: 294.0
> vc1dsp.vc1_h_loop_filter8_worstcase_neon: 102.7
> vc1dsp.vc1_h_loop_filter16_bestcase_c: 54.7
> vc1dsp.vc1_h_loop_filter16_bestcase_neon: 130.0
> vc1dsp.vc1_h_loop_filter16_worstcase_c: 569.7
> vc1dsp.vc1_h_loop_filter16_worstcase_neon: 186.7
> vc1dsp.vc1_v_loop_filter4_bestcase_c: 20.2
> vc1dsp.vc1_v_loop_filter4_bestcase_neon: 47.2
> vc1dsp.vc1_v_loop_filter4_worstcase_c: 164.2
> vc1dsp.vc1_v_loop_filter4_worstcase_neon: 68.5
> vc1dsp.vc1_v_loop_filter8_bestcase_c: 43.5
> vc1dsp.vc1_v_loop_filter8_bestcase_neon: 55.2
> vc1dsp.vc1_v_loop_filter8_worstcase_c: 316.2
> vc1dsp.vc1_v_loop_filter8_worstcase_neon: 72.7
> vc1dsp.vc1_v_loop_filter16_bestcase_c: 62.2
> vc1dsp.vc1_v_loop_filter16_bestcase_neon: 103.7
> vc1dsp.vc1_v_loop_filter16_worstcase_c: 646.5
> vc1dsp.vc1_v_loop_filter16_worstcase_neon: 110.7
>
> Signed-off-by: Ben Avison <bavison@riscosopen.org>
> ---
> libavcodec/arm/vc1dsp_init_neon.c |  14 +
> libavcodec/arm/vc1dsp_neon.S      | 643 ++++++++++++++++++++++++++++++
> 2 files changed, 657 insertions(+)

> +@ VC-1 in-loop deblocking filter for 8 pixel pairs at boundary of vertically-neighbouring blocks
> +@ On entry:
> +@   r0 -> top-left pel of lower block
> +@   r1 = row stride, bytes
> +@   r2 = PQUANT bitstream parameter
> +function ff_vc1_v_loop_filter8_neon, export=1
> +        sub             r3, r0, r1, lsl #2
> +        vldr            d0, .Lcoeffs
> +        vld1.32         {d1}, [r0], r1          @ P5
> +        vld1.32         {d2}, [r3], r1          @ P1
> +        vld1.32         {d3}, [r3], r1          @ P2
> +        vld1.32         {d4}, [r0], r1          @ P6
> +        vld1.32         {d5}, [r3], r1          @ P3
> +        vld1.32         {d6}, [r0], r1          @ P7

Oh btw - I presume these loads can be done with alignment? And same for 
some of the stores too? At least for some older cores, the alignment 
specifier helps a lot - so for 32-bit assembly, I try to add as many 
alignment specifiers as possible.
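
For illustration, the kind of thing I mean - assuming (which would need 
verifying) that the block pointer and stride keep each row 8-byte 
aligned:

        vld1.32         {d1}, [r0, :64], r1     @ P5, with alignment hint
        vld1.32         {d2}, [r3, :64], r1     @ P1, with alignment hint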

// Martin


* Re: [FFmpeg-devel] [PATCH 07/10] avcodec/vc1: Arm 64-bit NEON inverse transform fast paths
  2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 07/10] avcodec/vc1: Arm 64-bit NEON inverse transform " Ben Avison
@ 2022-03-30 13:49     ` Martin Storsjö
  2022-03-30 14:01       ` Martin Storsjö
  2022-03-31 15:37       ` Ben Avison
  0 siblings, 2 replies; 55+ messages in thread
From: Martin Storsjö @ 2022-03-30 13:49 UTC (permalink / raw)
  To: FFmpeg development discussions and patches; +Cc: Ben Avison

On Fri, 25 Mar 2022, Ben Avison wrote:

> checkasm benchmarks on 1.5 GHz Cortex-A72 are as follows.
>
> vc1dsp.vc1_inv_trans_4x4_c: 158.2
> vc1dsp.vc1_inv_trans_4x4_neon: 65.7
> vc1dsp.vc1_inv_trans_4x4_dc_c: 86.5
> vc1dsp.vc1_inv_trans_4x4_dc_neon: 26.5
> vc1dsp.vc1_inv_trans_4x8_c: 335.2
> vc1dsp.vc1_inv_trans_4x8_neon: 106.2
> vc1dsp.vc1_inv_trans_4x8_dc_c: 151.2
> vc1dsp.vc1_inv_trans_4x8_dc_neon: 25.5
> vc1dsp.vc1_inv_trans_8x4_c: 365.7
> vc1dsp.vc1_inv_trans_8x4_neon: 97.2
> vc1dsp.vc1_inv_trans_8x4_dc_c: 139.7
> vc1dsp.vc1_inv_trans_8x4_dc_neon: 16.5
> vc1dsp.vc1_inv_trans_8x8_c: 547.7
> vc1dsp.vc1_inv_trans_8x8_neon: 137.0
> vc1dsp.vc1_inv_trans_8x8_dc_c: 268.2
> vc1dsp.vc1_inv_trans_8x8_dc_neon: 30.5
>
> Signed-off-by: Ben Avison <bavison@riscosopen.org>
> ---
> libavcodec/aarch64/vc1dsp_init_aarch64.c |  19 +
> libavcodec/aarch64/vc1dsp_neon.S         | 678 +++++++++++++++++++++++
> 2 files changed, 697 insertions(+)

Looks generally reasonable. Is it possible to factorize out the individual 
transforms (so that you'd e.g. invoke the same macro twice in the 8x8 and 
4x4 functions) without too much loss? The downshift which differs between 
the two could either be left outside of the macro, or the downshift amount 
could be made a macro parameter.
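
For illustration, roughly this shape (a sketch only - register choices 
and shift values are placeholders, not taken from the patch):

.macro  downshift_pass  shift
        // the shared transform arithmetic would sit above this;
        // only the final rounding downshift is parameterised
        srshr           v16.8h, v16.8h, #\shift
        srshr           v17.8h, v17.8h, #\shift
        srshr           v18.8h, v18.8h, #\shift
        srshr           v19.8h, v19.8h, #\shift
.endm

so both passes could expand the same core, passing e.g. 3 or 7 as the 
shift.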

// Martin


* Re: [FFmpeg-devel] [PATCH 07/10] avcodec/vc1: Arm 64-bit NEON inverse transform fast paths
  2022-03-30 13:49     ` Martin Storsjö
@ 2022-03-30 14:01       ` Martin Storsjö
  2022-03-31 15:37       ` Ben Avison
  1 sibling, 0 replies; 55+ messages in thread
From: Martin Storsjö @ 2022-03-30 14:01 UTC (permalink / raw)
  To: FFmpeg development discussions and patches; +Cc: Ben Avison

On Wed, 30 Mar 2022, Martin Storsjö wrote:

> On Fri, 25 Mar 2022, Ben Avison wrote:
>
>> checkasm benchmarks on 1.5 GHz Cortex-A72 are as follows.
>> 
>> vc1dsp.vc1_inv_trans_4x4_c: 158.2
>> vc1dsp.vc1_inv_trans_4x4_neon: 65.7
>> vc1dsp.vc1_inv_trans_4x4_dc_c: 86.5
>> vc1dsp.vc1_inv_trans_4x4_dc_neon: 26.5
>> vc1dsp.vc1_inv_trans_4x8_c: 335.2
>> vc1dsp.vc1_inv_trans_4x8_neon: 106.2
>> vc1dsp.vc1_inv_trans_4x8_dc_c: 151.2
>> vc1dsp.vc1_inv_trans_4x8_dc_neon: 25.5
>> vc1dsp.vc1_inv_trans_8x4_c: 365.7
>> vc1dsp.vc1_inv_trans_8x4_neon: 97.2
>> vc1dsp.vc1_inv_trans_8x4_dc_c: 139.7
>> vc1dsp.vc1_inv_trans_8x4_dc_neon: 16.5
>> vc1dsp.vc1_inv_trans_8x8_c: 547.7
>> vc1dsp.vc1_inv_trans_8x8_neon: 137.0
>> vc1dsp.vc1_inv_trans_8x8_dc_c: 268.2
>> vc1dsp.vc1_inv_trans_8x8_dc_neon: 30.5
>> 
>> Signed-off-by: Ben Avison <bavison@riscosopen.org>
>> ---
>> libavcodec/aarch64/vc1dsp_init_aarch64.c |  19 +
>> libavcodec/aarch64/vc1dsp_neon.S         | 678 +++++++++++++++++++++++
>> 2 files changed, 697 insertions(+)
>
> Looks generally reasonable. Is it possible to factorize out the individual 
> transforms (so that you'd e.g. invoke the same macro twice in the 8x8 and 4x4 
> functions) without too much loss? The downshift which differs between the two 
> could either be left outside of the macro, or the downshift amount could be 
> made a macro parameter.

Another aspect: I forgot the aspect that we have existing arm assembly for 
the idct. In some cases, there's value in keeping the implementations 
similar if possible and relevant. But your implementation seems quite 
straightforward, and seems to get better benchmark numbers on the same 
cores, so I guess it's fine to diverge and add a new from-scratch 
implementation here.

// Martin

* Re: [FFmpeg-devel] [PATCH 08/10] avcodec/idctdsp: Arm 64-bit NEON block add and clamp fast paths
  2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 08/10] avcodec/idctdsp: Arm 64-bit NEON block add and clamp " Ben Avison
@ 2022-03-30 14:14     ` Martin Storsjö
  2022-03-31 16:47       ` Ben Avison
  0 siblings, 1 reply; 55+ messages in thread
From: Martin Storsjö @ 2022-03-30 14:14 UTC (permalink / raw)
  To: FFmpeg development discussions and patches; +Cc: Ben Avison

On Fri, 25 Mar 2022, Ben Avison wrote:

> checkasm benchmarks on 1.5 GHz Cortex-A72 are as follows.
>
> idctdsp.add_pixels_clamped_c: 323.0
> idctdsp.add_pixels_clamped_neon: 41.5
> idctdsp.put_pixels_clamped_c: 243.0
> idctdsp.put_pixels_clamped_neon: 30.0
> idctdsp.put_signed_pixels_clamped_c: 225.7
> idctdsp.put_signed_pixels_clamped_neon: 37.7
>
> Signed-off-by: Ben Avison <bavison@riscosopen.org>
> ---
> libavcodec/aarch64/Makefile               |   3 +-
> libavcodec/aarch64/idctdsp_init_aarch64.c |  26 +++--
> libavcodec/aarch64/idctdsp_neon.S         | 130 ++++++++++++++++++++++
> 3 files changed, 150 insertions(+), 9 deletions(-)
> create mode 100644 libavcodec/aarch64/idctdsp_neon.S

Generally LGTM

> +// Clamp 16-bit signed block coefficients to signed 8-bit (biased by 128)
> +// On entry:
> +//   x0 -> array of 64x 16-bit coefficients
> +//   x1 -> 8-bit results
> +//   x2 = row stride for results, bytes
> +function ff_put_signed_pixels_clamped_neon, export=1
> +        ld1             {v0.16b, v1.16b, v2.16b, v3.16b}, [x0], #64
> +        movi            v4.8b, #128
> +        ld1             {v16.16b, v17.16b, v18.16b, v19.16b}, [x0]
> +        sqxtn           v0.8b, v0.8h
> +        sqxtn           v1.8b, v1.8h
> +        sqxtn           v2.8b, v2.8h
> +        sqxtn           v3.8b, v3.8h
> +        sqxtn           v5.8b, v16.8h
> +        add             v0.8b, v0.8b, v4.8b

Here you could save 4 add instructions with sqxtn2 and adding .16b 
vectors, but I'm not sure if it's worthwhile. (It reduces the checkasm 
numbers by 0.7 for Cortex-A72, by 0.3 for A73, but increases the runtime 
by 1.0 on A53.) Strangely enough, I get much smaller numbers on my A72 
than you got. I get these:

idctdsp.add_pixels_clamped_c: 306.7
idctdsp.add_pixels_clamped_neon: 25.7
idctdsp.put_pixels_clamped_c: 217.2
idctdsp.put_pixels_clamped_neon: 15.2
idctdsp.put_signed_pixels_clamped_c: 216.7
idctdsp.put_signed_pixels_clamped_neon: 19.2

(The _c numbers are of course highly compiler dependent, but the assembly 
numbers should generally match quite closely. And AFAIK they should be 
measured in clock cycles, so CPU frequency shouldn't really play a role 
either.)
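
For clarity, the variant I was suggesting is along these lines 
(register roles assumed to match the quoted function):

        movi            v4.16b, #128
        sqxtn           v0.8b,  v0.8h           // narrow row 0 into low half
        sqxtn2          v0.16b, v1.8h           // narrow row 1 into high half
        add             v0.16b, v0.16b, v4.16b  // one bias add per two rows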

// Martin


* Re: [FFmpeg-devel] [PATCH 09/10] avcodec/vc1: Arm 64-bit NEON unescape fast path
  2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 09/10] avcodec/vc1: Arm 64-bit NEON unescape fast path Ben Avison
@ 2022-03-30 14:35     ` Martin Storsjö
  0 siblings, 0 replies; 55+ messages in thread
From: Martin Storsjö @ 2022-03-30 14:35 UTC (permalink / raw)
  To: FFmpeg development discussions and patches; +Cc: Ben Avison

On Fri, 25 Mar 2022, Ben Avison wrote:

> checkasm benchmarks on 1.5 GHz Cortex-A72 are as follows.
>
> vc1dsp.vc1_unescape_buffer_c: 655617.7
> vc1dsp.vc1_unescape_buffer_neon: 118237.0
>
> Signed-off-by: Ben Avison <bavison@riscosopen.org>
> ---
> libavcodec/aarch64/vc1dsp_init_aarch64.c |  61 ++++++++
> libavcodec/aarch64/vc1dsp_neon.S         | 176 +++++++++++++++++++++++
> 2 files changed, 237 insertions(+)

LGTM

// Martin


* Re: [FFmpeg-devel] [PATCH 10/10] avcodec/vc1: Arm 32-bit NEON unescape fast path
  2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 10/10] avcodec/vc1: Arm 32-bit " Ben Avison
@ 2022-03-30 14:35     ` Martin Storsjö
  0 siblings, 0 replies; 55+ messages in thread
From: Martin Storsjö @ 2022-03-30 14:35 UTC (permalink / raw)
  To: FFmpeg development discussions and patches; +Cc: Ben Avison

On Fri, 25 Mar 2022, Ben Avison wrote:

> checkasm benchmarks on 1.5 GHz Cortex-A72 are as follows.
>
> vc1dsp.vc1_unescape_buffer_c: 918624.7
> vc1dsp.vc1_unescape_buffer_neon: 142958.0
>
> Signed-off-by: Ben Avison <bavison@riscosopen.org>
> ---
> libavcodec/arm/vc1dsp_init_neon.c |  61 +++++++++++++++
> libavcodec/arm/vc1dsp_neon.S      | 118 ++++++++++++++++++++++++++++++
> 2 files changed, 179 insertions(+)

LGTM

// Martin


* Re: [FFmpeg-devel] [PATCH 04/10] avcodec/vc1: Introduce fast path for unescaping bitstream buffer
  2022-03-29 20:37     ` Martin Storsjö
@ 2022-03-31 13:58       ` Ben Avison
  2022-03-31 14:07         ` Martin Storsjö
  0 siblings, 1 reply; 55+ messages in thread
From: Ben Avison @ 2022-03-31 13:58 UTC (permalink / raw)
  To: FFmpeg development discussions and patches, Martin Storsjö

On 29/03/2022 21:37, Martin Storsjö wrote:
> On Fri, 25 Mar 2022, Ben Avison wrote:
>> +#define TEST_UNESCAPE                                            \
>> +    do {                                                         \
>> +        for (int count = 100; count > 0; --count) {              \
>> +            escaped_offset = rnd() & 7;                          \
>> +            unescaped_offset = rnd() & 7;                        \
>> +            escaped_len = (1u << (rnd() % 8) + 3) - (rnd() & 7); \
>> +            RANDOMIZE_BUFFER8(unescaped, UNESCAPE_BUF_SIZE);     \
> 
> The output buffer will be overwritten in the end, but I guess this 
> initialization is useful for making sure that the test doesn't 
> accidentally rely on the output from the previous iteration, right?

The main idea was to catch examples of writing to the buffer beyond the 
length reported (and less likely, writes before the start of the 
buffer). I suppose it's possible that someone might want to deliberately 
overwrite in specific conditions, but the test could always be loosened 
up at that point once those conditions become clearer.

>> +            len0 = call_ref(escaped0 + escaped_offset, escaped_len, unescaped0 + unescaped_offset); \
>> +            len1 = call_new(escaped1 + escaped_offset, escaped_len, unescaped1 + unescaped_offset); \
>> +            if (len0 != len1 || memcmp(unescaped0, unescaped1, len0))                               \
> 
> Don't you need to include unescaped_offset here too? Otherwise you're 
> just checking areas of the buffer that wasn't necessarily written.

I realise I should have made the memcmp length UNESCAPE_BUF_SIZE here to 
achieve what I intended. Testing len0 bytes from the start of the buffer 
neither checks all the written bytes nor checks the byte after those 
written :-$
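
In other words, the check should have been (sketch of the corrected 
line):

            if (len0 != len1 || memcmp(unescaped0, unescaped1, UNESCAPE_BUF_SIZE)) \

which compares every written byte and also catches writes beyond the 
reported length.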

> As with the rest of the checkasm tests - please unmacro most things 
> where possible (except for the RANDOMIZE_* macros, those are ok to keep 
> macroed if you want to).

In the case of TEST_UNESCAPE, I think it has to remain as a macro, 
otherwise the next function up ends up with a declare_func_emms() and a 
bench_new() but no call_ref() or call_new(), which means some builds end 
up with an unused function warning.

I can, however, split all the unescape tests out of 
checkasm_check_vc1dsp into a separate function (and separate functions 
for inverse-transform and deblocking tests).
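
Roughly this shape, in other words (function names are placeholders):

static void check_unescape(void)
{
    /* declare_func()/call_ref()/call_new() all share this scope */
}

void checkasm_check_vc1dsp(void)
{
    check_loop_filters();
    check_inv_trans();
    check_unescape();
}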

Ben

* Re: [FFmpeg-devel] [PATCH 04/10] avcodec/vc1: Introduce fast path for unescaping bitstream buffer
  2022-03-31 13:58       ` Ben Avison
@ 2022-03-31 14:07         ` Martin Storsjö
  0 siblings, 0 replies; 55+ messages in thread
From: Martin Storsjö @ 2022-03-31 14:07 UTC (permalink / raw)
  To: Ben Avison; +Cc: FFmpeg development discussions and patches

On Thu, 31 Mar 2022, Ben Avison wrote:

> On 29/03/2022 21:37, Martin Storsjö wrote:
>> On Fri, 25 Mar 2022, Ben Avison wrote:
>> As with the rest of the checkasm tests - please unmacro most things where 
>> possible (except for the RANDOMIZE_* macros, those are ok to keep macroed 
>> if you want to).
>
> In the case of TEST_UNESCAPE, I think it has to remain as a macro, otherwise 
> the next function up ends up with a declare_func_emms() and a bench_new() but 
> no call_ref() or call_new(), which means some builds end up with an unused 
> function warning.

Oh, right - yes, call_ref and call_new need to be in the same scope as 
declare_func.

> I can, however, split all the unescape tests out of checkasm_check_vc1dsp 
> into a separate function (and separate functions for inverse-transform and 
> deblocking tests).

Awesome, thanks!

// Martin

* Re: [FFmpeg-devel] [PATCH 05/10] avcodec/vc1: Arm 64-bit NEON deblocking filter fast paths
  2022-03-30 12:35     ` Martin Storsjö
@ 2022-03-31 15:15       ` Ben Avison
  2022-03-31 21:21         ` Martin Storsjö
  0 siblings, 1 reply; 55+ messages in thread
From: Ben Avison @ 2022-03-31 15:15 UTC (permalink / raw)
  To: Martin Storsjö, FFmpeg development discussions and patches

On 30/03/2022 13:35, Martin Storsjö wrote:
> Overall, the code looks sensible to me.
> 
> Would it make sense to share the core of the filter between the 
> horizontal/vertical cases with e.g. a macro? (I didn't check in detail 
> if there's much differences in the core of the filter. At most some 
> differences in condition registers for partial writeout in the 
> horizontal forms?)

Well, looking at the comments at the right-hand side of the source, 
which give the logical meaning of the results of each instruction, I 
admit there's a resemblance in the middle of the 8-pixel-pair function. 
However, the physical register assignments are quite different, and 
attempting to reassign the registers in one to match the other isn't a 
trivial task. It's hard enough when you start register assignment from 
the top of a function and work your way down, as I have done here.

In the 16-pixel-pair case, the input values arrive in a different order 
in the two functions - in one case they are loaded in 
regularly-increasing address order, in the other they fall out of a 
matrix transposition - and as a result even the logical order of 
instructions is quite different in the two cases.

In the 4-pixel-pair case, the values are packed differently into 
registers in the two cases, because in the v case, we're loading 4 
pixels between row-strides, which means it's easy to place each row in 
its own vector, whereas in the h case we load 4 rows of 8 pixels each 
and transpose, which leaves the values in 4 vectors rather than 8. Some 
of the filtering steps can be performed with the data packed in this way 
(calculating a1 and a2) while waiting for it to be restructured in order 
to calculate the other metrics, but it's not worth packing the data 
together in this way in the v case given that it starts off already 
separated. So the two implementations end up quite different in the 
operations they perform, not just the scheduling of instructions and in 
register assignment terms.
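
To illustrate the h-case packing (registers chosen arbitrarily here):

        vld1.8          {d0}, [r0], r1          @ row 0, 8 pixels
        vld1.8          {d1}, [r0], r1          @ row 1
        vld1.8          {d2}, [r0], r1          @ row 2
        vld1.8          {d3}, [r0], r1          @ row 3
        vtrn.8          d0, d1                  @ interleave rows 0/1
        vtrn.8          d2, d3                  @ interleave rows 2/3
        vtrn.16         d0, d2                  @ d0 = columns 0 and 4
        vtrn.16         d1, d3                  @ d1 = columns 1 and 5

after which each of d0-d3 holds a pair of 4-pixel columns, rather than 
each column having a vector of its own as in the v case.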

Some background: as you may have guessed, I didn't start out writing 
these functions as they currently appear. Prototype versions didn't care 
much for scheduling or keeping to a small number of registers. They were 
primarily for checking the correctness of the mathematics, and they'd 
use all available vectors, sometimes shuffling values between registers 
or to the stack to make room. Once I'd verified correctness, I then 
reworked them to keep to a minimal number of registers and to minimise 
stalls as far as possible.

I'm targeting the Cortex-A72, since that's what the Raspberry Pi 4 uses 
and it's on the cusp of having enough power to decode VC-1 BluRay 
streams, so I deliberately didn't take too much consideration of the 
requirements of earlier cores. Yes, it's an out-of-order core, but I 
reckoned there are probably limits to how wisely it can select 
instructions to execute (there have got to be limits to instruction 
queue lengths, for example). So based on the pipeline structure 
documented in Arm's Cortex-A72 software optimization guide, I arranged 
the instructions to best keep all pipelines busy as much as possible, 
then assigned registers to keep the instructions in this order.

For the most part, I was able to keep the number of vectors used low 
enough that no callee-saving was required - or failing that, at least 
avoiding having to spill values to the stack mid-function. But it came 
pretty close at times - witness for example the peculiar order in which 
vectors had to be loaded in the AArch32 version of 
ff_vc1_h_loop_filter16_neon. There's reason behind that!

In short, I'd really rather not tamper with these larger assembly 
functions any more unless I really have to.

Ben

* Re: [FFmpeg-devel] [PATCH 07/10] avcodec/vc1: Arm 64-bit NEON inverse transform fast paths
  2022-03-30 13:49     ` Martin Storsjö
  2022-03-30 14:01       ` Martin Storsjö
@ 2022-03-31 15:37       ` Ben Avison
  2022-03-31 21:32         ` Martin Storsjö
  1 sibling, 1 reply; 55+ messages in thread
From: Ben Avison @ 2022-03-31 15:37 UTC (permalink / raw)
  To: Martin Storsjö, FFmpeg development discussions and patches

On 30/03/2022 14:49, Martin Storsjö wrote:
> Looks generally reasonable. Is it possible to factorize out the 
> individual transforms (so that you'd e.g. invoke the same macro twice in 
> the 8x8 and 4x4 functions) without too much loss?

There is a close analogy here with the vertical/horizontal deblocking 
filters, because while there are similarities between the two matrix 
multiplications within a transform, one of them follows a series of 
loads and the other follows a matrix transposition.

If you look for example at ff_vc1_inv_trans_8x8_neon, you'll see I was 
able to do a fair amount of overlap between sections of the function - 
particularly between the transpose and the second matrix multiplication, 
but to a lesser extent between the loads and the first matrix 
multiplication and between the second multiplication and the stores. 
This sort of overlapping is tricky to maintain when using macros. Also, 
it means that the order of operations within each matrix multiply ended 
up quite different.

At first sight, you might think that the multiplies from the 8x8 
function (which you might also view as kind of 8-tap filter) would be 
re-usable for the size-8 multiplies in the 8x4 or 4x8 function. Yes, the 
instructions are similar, save for using .4h elements rather than .8h 
elements, but that has significant impacts on scheduling. For example, 
the Cortex-A72, which is my primary target, can only do NEON bit-shifts 
in one pipeline at once, irrespective of whether the vectors are 64-bit 
or 128-bit long, while other instructions don't have such restrictions.

So while in theory you could factor some of this code out more, I 
suspect any attempt to do so would have a detrimental effect on performance.

Ben

* Re: [FFmpeg-devel] [PATCH 08/10] avcodec/idctdsp: Arm 64-bit NEON block add and clamp fast paths
  2022-03-30 14:14     ` Martin Storsjö
@ 2022-03-31 16:47       ` Ben Avison
  2022-03-31 21:42         ` Martin Storsjö
  0 siblings, 1 reply; 55+ messages in thread
From: Ben Avison @ 2022-03-31 16:47 UTC (permalink / raw)
  To: FFmpeg development discussions and patches, Martin Storsjö

On 30/03/2022 15:14, Martin Storsjö wrote:
> On Fri, 25 Mar 2022, Ben Avison wrote:
>> +// Clamp 16-bit signed block coefficients to signed 8-bit (biased by 128)
>> +// On entry:
>> +//   x0 -> array of 64x 16-bit coefficients
>> +//   x1 -> 8-bit results
>> +//   x2 = row stride for results, bytes
>> +function ff_put_signed_pixels_clamped_neon, export=1
>> +        ld1             {v0.16b, v1.16b, v2.16b, v3.16b}, [x0], #64
>> +        movi            v4.8b, #128
>> +        ld1             {v16.16b, v17.16b, v18.16b, v19.16b}, [x0]
>> +        sqxtn           v0.8b, v0.8h
>> +        sqxtn           v1.8b, v1.8h
>> +        sqxtn           v2.8b, v2.8h
>> +        sqxtn           v3.8b, v3.8h
>> +        sqxtn           v5.8b, v16.8h
>> +        add             v0.8b, v0.8b, v4.8b
> 
> Here you could save 4 add instructions with sqxtn2 and adding .16b 
> vectors, but I'm not sure if it's worthwhile. (It reduces the checkasm 
> numbers by 0.7 for Cortex-A72, by 0.3 for A73, but increases the runtime 
> by 1.0 on A53.) Strangely enough, I get much smaller numbers on my A72 
> than you got.

That's weird. As you say, it should be independent of clock-frequency. 
FWIW, I'm benchmarking on a Raspberry Pi 4; I'd assume all its board 
variants' Cortex-A72 cores are of identical revision.

Now I run it again, I'm getting these figures:

idctdsp.add_pixels_clamped_c: 313.3
idctdsp.add_pixels_clamped_neon: 24.3
idctdsp.put_pixels_clamped_c: 220.3
idctdsp.put_pixels_clamped_neon: 15.5
idctdsp.put_signed_pixels_clamped_c: 210.5
idctdsp.put_signed_pixels_clamped_neon: 19.5

which is more in line with what you see! I am getting a lot of 
variability between runs though - from a small sample, I'm seeing 
add_pixels_clamped_neon coming out as anything from 21 to 30, which is 
well above the sort of differences you're seeing between alternate 
implementations.

This sort of case is always going to be difficult to schedule optimally 
for multiple cores - factors like how much dual-issuing is possible, 
latency before values can be used, load speed and the granularity of 
scoreboarding parts of vectors, all vary widely.

In the case of the Cortex-A72, the critical path goes
ld1 of first 16 bytes -> sqxtn:  5 cycles
sqxtn -> add:                    4 cycles
add -> st1 of first 8 bytes:     3 cycles

It then bangs out one store per cycle, a total of 8. Everything else can 
largely be fitted in around this - so for example, other than I-cache 
usage, there shouldn't be a disadvantage to the adds being non-Q-form as 
they should dual-issue with the sqxtns and st1s - you'll notice I have 
them alternating.
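
i.e. in the steady state the sequence looks something like this 
(register numbers chosen for illustration, not copied from the patch):

        add             v1.8b, v1.8b, v4.8b     // bias next row (dual-issues)
        st1             {v0.8b}, [x1], x2       // store previously-biased row
        add             v2.8b, v2.8b, v4.8b
        st1             {v1.8b}, [x1], x2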

I'd have expected anything interfering with this (such as by updating 
half the vector input required by any Q-form add) to slow things down.

Ben

* Re: [FFmpeg-devel] [PATCH 05/10] avcodec/vc1: Arm 64-bit NEON deblocking filter fast paths
  2022-03-31 15:15       ` Ben Avison
@ 2022-03-31 21:21         ` Martin Storsjö
  0 siblings, 0 replies; 55+ messages in thread
From: Martin Storsjö @ 2022-03-31 21:21 UTC (permalink / raw)
  To: Ben Avison; +Cc: FFmpeg development discussions and patches

On Thu, 31 Mar 2022, Ben Avison wrote:

> On 30/03/2022 13:35, Martin Storsjö wrote:
>> Overall, the code looks sensible to me.
>> 
>> Would it make sense to share the core of the filter between the 
>> horizontal/vertical cases with e.g. a macro? (I didn't check in detail if 
>> there's much differences in the core of the filter. At most some 
>> differences in condition registers for partial writeout in the horizontal 
>> forms?)
>
> Well, looking at the comments at the right-hand side of the source, which 
> give the logical meaning of the results of each instruction, I admit there's 
> a resemblance in the middle of the 8-pixel-pair function.

Actually, I didn't try to follow/compare it to that level, I just assumed 
them to be similar.

> However, the physical register assignments are quite different, and 
> attempting to reassign the registers in one to match the other isn't a 
> trivial task. It's hard enough when you start register assignment from 
> the top of a function and work your way down, as I have done here.
>
> In the 16-pixel-pair case, the input values arrive in a different order in 
> the two functions - in one case they are loaded in regularly-increasing 
> address order, in the other they fall out of a matrix transposition - and as 
> a result even the logical order of instructions is quite different in the 
> two cases.
>
> In the 4-pixel-pair case, the values are packed differently into registers in 
> the two cases, because in the v case, we're loading 4 pixels between 
> row-strides, which means it's easy to place each row in its own vector, 
> whereas in the h case we load 4 rows of 8 pixels each and transpose, which 
> leaves the values in 4 vectors rather than 8. Some of the filtering steps can 
> be performed with the data packed in this way (calculating a1 and a2) while 
> waiting for it to be restructured in order to calculate the other metrics, 
> but it's not worth packing the data together in this way in the v case given 
> that it starts off already separated. So the two implementations end up quite 
> different in the operations they perform, not just the scheduling of 
> instructions and in register assignment terms.
>
> Some background: as you may have guessed, I didn't start out writing these 
> functions as they currently appear. Prototype versions didn't care much for 
> scheduling or keeping to a small number of registers. They were primarily for 
> checking the correctness of the mathematics, and they'd use all available 
> vectors, sometimes shuffling values between registers or to the stack to make 
> room. Once I'd verified correctness, I then reworked them to keep to a 
> minimal number of registers and to minimise stalls as far as possible.
>
> I'm targeting the Cortex-A72, since that's what the Raspberry Pi 4 uses and 
> it's on the cusp of having enough power to decode VC-1 BluRay streams, so I 
> deliberately didn't take too much consideration of the requirements of 
> earlier cores. Yes, it's an out-of-order core, but I reckoned there are 
> probably limits to how wisely it can select instructions to execute (there 
> have got to be limits to instruction queue lengths, for example). So based on 
> the pipeline structure documented in Arm's Cortex-A72 software optimization 
> guide, I arranged the instructions to best keep all pipelines busy as much as 
> possible, then assigned registers to keep the instructions in this order.
>
> For the most part, I was able to keep the number of vectors used low enough 
> that no callee-saving was required - or failing that, at least avoiding 
> having to spill values to the stack mid-function. But it came pretty close at 
> times - witness for example the peculiar order in which vectors had to be 
> loaded in the AArch32 version of ff_vc1_h_loop_filter16_neon. There's reason 
> behind that!
>
> In short, I'd really rather not tamper with these larger assembly functions 
> any more unless I really have to.

Ok, fair enough.

FWIW, my point of view was from implementing the loop filters for VP9 and 
AV1, where I did the core filter as one shared implementation for both 
variants, and where the frontend functions just load (and transpose) data 
into the registers used as input for the common core filter, and vice 
versa.

But I presume that a custom implementation for each of them can be more 
optimal, at the cost of more code to maintain (but if there are no bugs, 
it usually doesn't need maintenance either).

Thus - fair enough, this code probably is ok then.

// Martin


* Re: [FFmpeg-devel] [PATCH 07/10] avcodec/vc1: Arm 64-bit NEON inverse transform fast paths
  2022-03-31 15:37       ` Ben Avison
@ 2022-03-31 21:32         ` Martin Storsjö
  0 siblings, 0 replies; 55+ messages in thread
From: Martin Storsjö @ 2022-03-31 21:32 UTC (permalink / raw)
  To: Ben Avison; +Cc: FFmpeg development discussions and patches

On Thu, 31 Mar 2022, Ben Avison wrote:

> On 30/03/2022 14:49, Martin Storsjö wrote:
>> Looks generally reasonable. Is it possible to factorize out the individual 
>> transforms (so that you'd e.g. invoke the same macro twice in the 8x8 and 
>> 4x4 functions) without too much loss?
>
> There is a close analogy here with the vertical/horizontal deblocking 
> filters, because while there are similarities between the two matrix 
> multiplications within a transform, one of them follows a series of loads and 
> the other follows a matrix transposition.
>
> If you look for example at ff_vc1_inv_trans_8x8_neon, you'll see I was able 
> to do a fair amount of overlap between sections of the function - 
> particularly between the transpose and the second matrix multiplication, but 
> to a lesser extent between the loads and the first matrix multiplication and 
> between the second multiplication and the stores. This sort of overlapping is 
> tricky to maintain when using macros. Also, it means that the order of 
> operations within each matrix multiply ended up quite different.
>
> At first sight, you might think that the multiplies from the 8x8 function 
> (which you might also view as kind of 8-tap filter) would be re-usable for 
> the size-8 multiplies in the 8x4 or 4x8 function. Yes, the instructions are 
> similar, save for using .4h elements rather than .8h elements, but that has 
> significant impacts on scheduling. For example, the Cortex-A72, which is my 
> primary target, can only do NEON bit-shifts in one pipeline at once, 
> irrespective of whether the vectors are 64-bit or 128-bit long, while other 
> instructions don't have such restrictions.
>
> So while in theory you could factor some of this code out more, I suspect any 
> attempt to do so would have a detrimental effect on performance.

Ok, fair enough. Yes, it's always a trade off between code simplicity and 
getting the optimal interleaving. As you've spent the effort on making it 
efficient with respect to that, let's go with that then!

(FWIW, for future endeavours, having the checkasm tests in place while 
developing/tuning the implementation does allow getting good empirical 
data on how much you gain from different alternative scheduling choices. I 
usually don't follow the optimization guides for any specific core, but 
track the benchmark numbers for a couple different cores and try to pick a 
scheduling that is a decent compromise for all of them.)

Also, for future work - if you have checkasm tests in place while working 
on the assembly, I usually amend the test with debug printouts that 
visualize the output of the reference and the tested function, and a map 
showing which elements differ - which makes tracking down issues a whole 
lot easier. I don't think any of the checkasm tests in ffmpeg have such 
printouts though, but within e.g. the dav1d project, the checkasm tool is 
extended with helpers for comparing and printing such debug aids.
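
Even something as simple as this goes a long way (an ad-hoc sketch, not 
an existing checkasm helper - buf_ref/buf_new/len stand for whatever 
the test has at hand):

    for (int i = 0; i < len; i++)
        fprintf(stderr, "%3d: ref %02x new %02x%s\n", i,
                buf_ref[i], buf_new[i],
                buf_ref[i] != buf_new[i] ? "  <--" : "");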

// Martin

* Re: [FFmpeg-devel] [PATCH 08/10] avcodec/idctdsp: Arm 64-bit NEON block add and clamp fast paths
  2022-03-31 16:47       ` Ben Avison
@ 2022-03-31 21:42         ` Martin Storsjö
  0 siblings, 0 replies; 55+ messages in thread
From: Martin Storsjö @ 2022-03-31 21:42 UTC (permalink / raw)
  To: Ben Avison; +Cc: FFmpeg development discussions and patches

On Thu, 31 Mar 2022, Ben Avison wrote:

> On 30/03/2022 15:14, Martin Storsjö wrote:
>> On Fri, 25 Mar 2022, Ben Avison wrote:
>>> +// Clamp 16-bit signed block coefficients to signed 8-bit (biased by 128)
>>> +// On entry:
>>> +//   x0 -> array of 64x 16-bit coefficients
>>> +//   x1 -> 8-bit results
>>> +//   x2 = row stride for results, bytes
>>> +function ff_put_signed_pixels_clamped_neon, export=1
>>> +        ld1             {v0.16b, v1.16b, v2.16b, v3.16b}, [x0], #64
>>> +        movi            v4.8b, #128
>>> +        ld1             {v16.16b, v17.16b, v18.16b, v19.16b}, [x0]
>>> +        sqxtn           v0.8b, v0.8h
>>> +        sqxtn           v1.8b, v1.8h
>>> +        sqxtn           v2.8b, v2.8h
>>> +        sqxtn           v3.8b, v3.8h
>>> +        sqxtn           v5.8b, v16.8h
>>> +        add             v0.8b, v0.8b, v4.8b
>> 
>> Here you could save 4 add instructions with sqxtn2 and adding .16b vectors, 
>> but I'm not sure if it's worthwhile. (It reduces the checkasm numbers by 0.7 
>> for Cortex-A72, by 0.3 for A73, but increases the runtime by 1.0 on A53.) 
>> Strangely enough, I get much smaller numbers on my A72 than you got.
>
> That's weird. As you say, it should be independent of clock-frequency. FWIW, 
> I'm benchmarking on a Raspberry Pi 4; I'd assume all its board variants' 
> Cortex-A72 cores are of identical revision.
>
> Now I run it again, I'm getting these figures:
>
> idctdsp.add_pixels_clamped_c: 313.3
> idctdsp.add_pixels_clamped_neon: 24.3
> idctdsp.put_pixels_clamped_c: 220.3
> idctdsp.put_pixels_clamped_neon: 15.5
> idctdsp.put_signed_pixels_clamped_c: 210.5
> idctdsp.put_signed_pixels_clamped_neon: 19.5
>
> which is more in line with what you see! I am getting a lot of variability 
> between runs though - from a small sample, I'm seeing add_pixels_clamped_neon 
> coming out as anything from 21 to 30, which is well above the sort of 
> differences you're seeing between alternate implementations.

That's indeed weird. I don't have a Raspberry Pi 4 myself though, but for 
functions in this size range on the devboards I test on, I get essentially 
perfectly stable numbers each time - which is great for empirically 
testing different implementation strategies.

> This sort of case is always going to be difficult to schedule optimally for 
> multiple cores - factors like how much dual-issuing is possible, latency 
> before values can be used, load speed and the granularity of scoreboarding 
> parts of vectors, all vary widely.

Yup, indeed. In most cases, an implementation that is good for one core is 
usually decent for the other ones as well, but sometimes it ends up a 
compromise, where optimizing for one makes things worse for another one. 
As long as the chosen implementation isn't very suboptimal for some common 
cores, it probably doesn't matter much though.

// Martin

Thread overview: 55+ messages
2022-03-17 18:58 [FFmpeg-devel] [PATCH 0/6] avcodec/vc1: Arm optimisations Ben Avison
2022-03-17 18:58 ` [FFmpeg-devel] [PATCH 1/6] avcodec/vc1: Arm 64-bit NEON deblocking filter fast paths Ben Avison
2022-03-17 18:58 ` [FFmpeg-devel] [PATCH 2/6] avcodec/vc1: Arm 32-bit " Ben Avison
2022-03-17 18:58 ` [FFmpeg-devel] [PATCH 3/6] avcodec/vc1: Arm 64-bit NEON inverse transform " Ben Avison
2022-03-17 18:58 ` [FFmpeg-devel] [PATCH 4/6] avcodec/idctdsp: Arm 64-bit NEON block add and clamp " Ben Avison
2022-03-17 18:58 ` [FFmpeg-devel] [PATCH 5/6] avcodec/blockdsp: Arm 64-bit NEON block clear " Ben Avison
2022-03-17 18:58 ` [FFmpeg-devel] [PATCH 6/6] avcodec/vc1: Introduce fast path for unescaping bitstream buffer Ben Avison
2022-03-18 19:10   ` Andreas Rheinhardt
2022-03-21 15:51     ` Ben Avison
2022-03-21 20:44       ` Martin Storsjö
2022-03-19 23:06 ` [FFmpeg-devel] [PATCH 0/6] avcodec/vc1: Arm optimisations Martin Storsjö
2022-03-19 23:07   ` Martin Storsjö
2022-03-21 17:37   ` Ben Avison
2022-03-21 22:29     ` Martin Storsjö
2022-03-25 18:52 ` [FFmpeg-devel] [PATCH v2 00/10] " Ben Avison
2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 01/10] checkasm: Add vc1dsp in-loop deblocking filter tests Ben Avison
2022-03-25 22:53     ` Martin Storsjö
2022-03-28 18:28       ` Ben Avison
2022-03-29 11:47         ` Martin Storsjö
2022-03-29 12:24     ` Martin Storsjö
2022-03-29 12:43     ` Martin Storsjö
2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 02/10] checkasm: Add vc1dsp inverse transform tests Ben Avison
2022-03-29 12:41     ` Martin Storsjö
2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 03/10] checkasm: Add idctdsp add/put-pixels-clamped tests Ben Avison
2022-03-29 13:13     ` Martin Storsjö
2022-03-29 19:56       ` Martin Storsjö
2022-03-29 20:22       ` Ben Avison
2022-03-29 20:30         ` Martin Storsjö
2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 04/10] avcodec/vc1: Introduce fast path for unescaping bitstream buffer Ben Avison
2022-03-29 20:37     ` Martin Storsjö
2022-03-31 13:58       ` Ben Avison
2022-03-31 14:07         ` Martin Storsjö
2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 05/10] avcodec/vc1: Arm 64-bit NEON deblocking filter fast paths Ben Avison
2022-03-30 12:35     ` Martin Storsjö
2022-03-31 15:15       ` Ben Avison
2022-03-31 21:21         ` Martin Storsjö
2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 06/10] avcodec/vc1: Arm 32-bit " Ben Avison
2022-03-25 19:27     ` Lynne
2022-03-25 19:49       ` Martin Storsjö
2022-03-25 19:55         ` Lynne
2022-03-30 12:37     ` Martin Storsjö
2022-03-30 13:03     ` Martin Storsjö
2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 07/10] avcodec/vc1: Arm 64-bit NEON inverse transform " Ben Avison
2022-03-30 13:49     ` Martin Storsjö
2022-03-30 14:01       ` Martin Storsjö
2022-03-31 15:37       ` Ben Avison
2022-03-31 21:32         ` Martin Storsjö
2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 08/10] avcodec/idctdsp: Arm 64-bit NEON block add and clamp " Ben Avison
2022-03-30 14:14     ` Martin Storsjö
2022-03-31 16:47       ` Ben Avison
2022-03-31 21:42         ` Martin Storsjö
2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 09/10] avcodec/vc1: Arm 64-bit NEON unescape fast path Ben Avison
2022-03-30 14:35     ` Martin Storsjö
2022-03-25 18:52   ` [FFmpeg-devel] [PATCH 10/10] avcodec/vc1: Arm 32-bit " Ben Avison
2022-03-30 14:35     ` Martin Storsjö
