* [FFmpeg-devel] [PATCH v3 00/10] avcodec/vc1: Arm optimisations
From: Ben Avison @ 2022-03-31 17:23 UTC
To: ffmpeg-devel; +Cc: Ben Avison
The VC1 decoder was missing lots of important fast paths for Arm, especially
for 64-bit Arm. This submission fills in implementations for all functions
where a fast-path hook already existed but the fallback C implementation
was taking 1% or more of the runtime, and adds a new hook to permit
vc1_unescape_buffer() to be overridden.
I've measured the playback speed on a 1.5 GHz Cortex-A72 (Raspberry Pi 4)
using `ffmpeg -i <bitstream> -f null -` for a couple of example streams:
Architecture:   AArch32   AArch32   AArch64   AArch64
Stream:               1         2         1         2
Before speed:     1.22x     0.82x     1.00x     0.67x
After speed:      1.31x     0.98x     1.39x     1.06x
Improvement:       7.4%       20%       39%       58%
`make fate` passes on both AArch32 and AArch64.
Changes in v3:
* Refactor checkasm tests to convert some macros into functions.
* Remove cast-to-void of checked_call.
* Limit 16-bit values in idctdsp checkasm test to +/-0x100.
* Reinstate ff_add_pixels_clamped_arm.
* Adapt vc1 deblocking filters to specify stride as ptrdiff_t.
* Add align specifiers to a few VLD/VST instructions in the AArch32
  deblocking filter, and adapt the checkasm test not to use less-aligned
  buffers than are encountered in normal use.
* Correct unescape buffer memcmp length.
* Update benchmarks for AArch64 idctdsp.
Ben Avison (10):
checkasm: Add vc1dsp in-loop deblocking filter tests
checkasm: Add vc1dsp inverse transform tests
checkasm: Add idctdsp add/put-pixels-clamped tests
avcodec/vc1: Introduce fast path for unescaping bitstream buffer
avcodec/vc1: Arm 64-bit NEON deblocking filter fast paths
avcodec/vc1: Arm 32-bit NEON deblocking filter fast paths
avcodec/vc1: Arm 64-bit NEON inverse transform fast paths
avcodec/idctdsp: Arm 64-bit NEON block add and clamp fast paths
avcodec/vc1: Arm 64-bit NEON unescape fast path
avcodec/vc1: Arm 32-bit NEON unescape fast path
 libavcodec/aarch64/Makefile               |    4 +-
 libavcodec/aarch64/idctdsp_init_aarch64.c |   26 +-
 libavcodec/aarch64/idctdsp_neon.S         |  130 ++
 libavcodec/aarch64/vc1dsp_init_aarch64.c  |   94 ++
 libavcodec/aarch64/vc1dsp_neon.S          | 1546 +++++++++++++++++++++
 libavcodec/arm/vc1dsp_init_neon.c         |   75 +
 libavcodec/arm/vc1dsp_neon.S              |  761 ++++++++++
 libavcodec/vc1dec.c                       |   20 +-
 libavcodec/vc1dsp.c                       |    2 +
 libavcodec/vc1dsp.h                       |    3 +
 tests/checkasm/Makefile                   |    2 +
 tests/checkasm/checkasm.c                 |    6 +
 tests/checkasm/checkasm.h                 |    2 +
 tests/checkasm/idctdsp.c                  |   98 ++
 tests/checkasm/vc1dsp.c                   |  452 ++++++
 tests/fate/checkasm.mak                   |    2 +
 16 files changed, 3204 insertions(+), 19 deletions(-)
create mode 100644 libavcodec/aarch64/idctdsp_neon.S
create mode 100644 libavcodec/aarch64/vc1dsp_neon.S
create mode 100644 tests/checkasm/idctdsp.c
create mode 100644 tests/checkasm/vc1dsp.c
--
2.25.1
_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
To unsubscribe, visit link above, or email
ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".
* [FFmpeg-devel] [PATCH v3 01/10] checkasm: Add vc1dsp in-loop deblocking filter tests
From: Ben Avison @ 2022-03-31 17:23 UTC
To: ffmpeg-devel; +Cc: Ben Avison
Note that the benchmarking results for these functions are highly dependent
upon the input data. Therefore, each function is benchmarked twice,
corresponding to the best and worst case complexity of the reference C
implementation. The performance of a real stream decode will fall somewhere
between these two extremes.
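(The two cases appear in checkasm's output as separate entries with
_bestcase and _worstcase suffixes, which is how the benchmark figures
quoted later in this series are labelled.)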
Signed-off-by: Ben Avison <bavison@riscosopen.org>
---
 tests/checkasm/Makefile   |   1 +
 tests/checkasm/checkasm.c |   3 ++
 tests/checkasm/checkasm.h |   1 +
 tests/checkasm/vc1dsp.c   | 102 ++++++++++++++++++++++++++++++++++++++
 tests/fate/checkasm.mak   |   1 +
 5 files changed, 108 insertions(+)
create mode 100644 tests/checkasm/vc1dsp.c
diff --git a/tests/checkasm/Makefile b/tests/checkasm/Makefile
index f768b1144e..7133a6ee66 100644
--- a/tests/checkasm/Makefile
+++ b/tests/checkasm/Makefile
@@ -11,6 +11,7 @@ AVCODECOBJS-$(CONFIG_H264PRED) += h264pred.o
AVCODECOBJS-$(CONFIG_H264QPEL) += h264qpel.o
AVCODECOBJS-$(CONFIG_LLVIDDSP) += llviddsp.o
AVCODECOBJS-$(CONFIG_LLVIDENCDSP) += llviddspenc.o
+AVCODECOBJS-$(CONFIG_VC1DSP) += vc1dsp.o
AVCODECOBJS-$(CONFIG_VP8DSP) += vp8dsp.o
AVCODECOBJS-$(CONFIG_VIDEODSP) += videodsp.o
diff --git a/tests/checkasm/checkasm.c b/tests/checkasm/checkasm.c
index 748d6a9f3a..c2efd81b6d 100644
--- a/tests/checkasm/checkasm.c
+++ b/tests/checkasm/checkasm.c
@@ -147,6 +147,9 @@ static const struct {
#if CONFIG_V210_ENCODER
{ "v210enc", checkasm_check_v210enc },
#endif
+ #if CONFIG_VC1DSP
+ { "vc1dsp", checkasm_check_vc1dsp },
+ #endif
#if CONFIG_VP8DSP
{ "vp8dsp", checkasm_check_vp8dsp },
#endif
diff --git a/tests/checkasm/checkasm.h b/tests/checkasm/checkasm.h
index c3192d8c23..52ab18a5b1 100644
--- a/tests/checkasm/checkasm.h
+++ b/tests/checkasm/checkasm.h
@@ -78,6 +78,7 @@ void checkasm_check_sw_scale(void);
void checkasm_check_utvideodsp(void);
void checkasm_check_v210dec(void);
void checkasm_check_v210enc(void);
+void checkasm_check_vc1dsp(void);
void checkasm_check_vf_eq(void);
void checkasm_check_vf_gblur(void);
void checkasm_check_vf_hflip(void);
diff --git a/tests/checkasm/vc1dsp.c b/tests/checkasm/vc1dsp.c
new file mode 100644
index 0000000000..2fd6c74d6c
--- /dev/null
+++ b/tests/checkasm/vc1dsp.c
@@ -0,0 +1,102 @@
+/*
+ * Copyright (c) 2022 Ben Avison
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with FFmpeg; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+ */
+
+#include <string.h>
+
+#include "checkasm.h"
+
+#include "libavcodec/vc1dsp.h"
+
+#include "libavutil/common.h"
+#include "libavutil/internal.h"
+#include "libavutil/intreadwrite.h"
+#include "libavutil/mem_internal.h"
+
+#define VC1DSP_TEST(func) { #func, offsetof(VC1DSPContext, func) },
+
+typedef struct {
+ const char *name;
+ size_t offset;
+} test;
+
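+/* Fill a pair of buffers with identical pseudo-random values, biased
+ * towards the middle of the 8-bit range with a roughly exponentially
+ * distributed deviation either side of 0x80 - closer to typical
+ * deblocker input than uniformly random bytes would be */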
+#define RANDOMIZE_BUFFER8_MID_WEIGHTED(name, size) \
+ do { \
+ uint8_t *p##0 = name##0, *p##1 = name##1; \
+ int i = (size); \
+ while (i-- > 0) { \
+ int x = 0x80 | (rnd() & 0x7F); \
+ x >>= rnd() % 9; \
+ if (rnd() & 1) \
+ x = -x; \
+ *p##1++ = *p##0++ = 0x80 + x; \
+ } \
+ } while (0)
+
+static void check_loop_filter(void)
+{
+ /* Deblocking filter buffers are big enough to hold a 16x16 block,
+ * plus 16 columns to the left and 4 rows above to hold the filter
+ * inputs (whichever of the vertically- or horizontally-neighbouring
+ * block edges is being filtered; oversized horizontally to maintain
+ * 16-byte alignment) plus 16 columns and 4 rows below to catch
+ * write overflows */
+ LOCAL_ALIGNED_16(uint8_t, filter_buf0, [24 * 48]);
+ LOCAL_ALIGNED_16(uint8_t, filter_buf1, [24 * 48]);
+
+ VC1DSPContext h;
+
+ const test tests[] = {
+ VC1DSP_TEST(vc1_v_loop_filter4)
+ VC1DSP_TEST(vc1_h_loop_filter4)
+ VC1DSP_TEST(vc1_v_loop_filter8)
+ VC1DSP_TEST(vc1_h_loop_filter8)
+ VC1DSP_TEST(vc1_v_loop_filter16)
+ VC1DSP_TEST(vc1_h_loop_filter16)
+ };
+
+ ff_vc1dsp_init(&h);
+
+ for (size_t t = 0; t < FF_ARRAY_ELEMS(tests); ++t) {
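+ /* Fetch the function pointer from the DSP context using the byte
+ * offset recorded in the test table */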
+ void (*func)(uint8_t *, ptrdiff_t, int) = *(void **)((intptr_t) &h + tests[t].offset);
+ declare_func_emms(AV_CPU_FLAG_MMX, void, uint8_t *, ptrdiff_t, int);
+ if (check_func(func, "vc1dsp.%s", tests[t].name)) {
+ for (int count = 1000; count > 0; --count) {
+ int pq = rnd() % 31 + 1;
+ RANDOMIZE_BUFFER8_MID_WEIGHTED(filter_buf, 24 * 48);
+ call_ref(filter_buf0 + 4 * 48 + 16, 48, pq);
+ call_new(filter_buf1 + 4 * 48 + 16, 48, pq);
+ if (memcmp(filter_buf0, filter_buf1, 24 * 48))
+ fail();
+ }
+ }
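+ /* For benchmarking, use a fixed input with a 0x40 step across the
+ * block edge: with pq == 1 the a0 >= pq test rejects filtering
+ * immediately (the best case for the C code), while with pq == 31
+ * every pixel pair gets filtered (the worst case) */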
+ for (int j = 0; j < 24; ++j)
+ for (int i = 0; i < 48; ++i)
+ filter_buf1[j * 48 + i] = 0x60 + 0x40 * (i >= 16 && j >= 4);
+ if (check_func(func, "vc1dsp.%s_bestcase", tests[t].name))
+ bench_new(filter_buf1 + 4 * 48 + 16, 48, 1);
+ if (check_func(func, "vc1dsp.%s_worstcase", tests[t].name))
+ bench_new(filter_buf1 + 4 * 48 + 16, 48, 31);
+ }
+}
+
+void checkasm_check_vc1dsp(void)
+{
+ check_loop_filter();
+ report("loop_filter");
+}
diff --git a/tests/fate/checkasm.mak b/tests/fate/checkasm.mak
index 6db8f09d12..99e6bb13c4 100644
--- a/tests/fate/checkasm.mak
+++ b/tests/fate/checkasm.mak
@@ -32,6 +32,7 @@ FATE_CHECKASM = fate-checkasm-aacpsdsp \
fate-checkasm-utvideodsp \
fate-checkasm-v210dec \
fate-checkasm-v210enc \
+ fate-checkasm-vc1dsp \
fate-checkasm-vf_blend \
fate-checkasm-vf_colorspace \
fate-checkasm-vf_eq \
--
2.25.1
* [FFmpeg-devel] [PATCH v3 02/10] checkasm: Add vc1dsp inverse transform tests
From: Ben Avison @ 2022-03-31 17:23 UTC
To: ffmpeg-devel; +Cc: Ben Avison
This test deliberately doesn't exercise the full range of inputs described in
the committee draft VC-1 standard. It says:
input coefficients in frequency domain, D, satisfy -2048 <= D < 2047
intermediate coefficients, E, satisfy -4096 <= E < 4095
fully inverse-transformed coefficients, R, satisfy -512 <= R < 511
For one thing, the inequalities look odd. Did they mean them to go the
other way round? That would make more sense because the equations generally
both add and subtract coefficients multiplied by constants, including powers
of 2. Requiring the most-negative values to be valid extends the number
of bits needed to represent the intermediate values just for the sake of
that one case!
For another thing, the extreme values don't appear to occur in real
streams - in my experience, and as supported by the following comment in
the AArch32 decoder:
tNhalf is half of the value of tN (as described in vc1_inv_trans_8x8_c).
This is done because sometimes files have input that causes tN + tM to
overflow. To avoid this overflow, we compute tNhalf, then compute
tNhalf + tM (which doesn't overflow), and then we use vhadd to compute
(tNhalf + (tNhalf + tM)) >> 1 which does not overflow because it is
one instruction.
My AArch64 decoder goes further than this. It calculates tNhalf and tM,
then does an SRA (essentially a fused halve and add) to compute
(tN + tM) >> 1 without ever having to hold tNhalf + tM in a 16-bit
element, so that intermediate cannot overflow. It only encounters
difficulties if tNhalf or tM overflows in isolation.
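As a rough C sketch of the idea (hypothetical names, and it glosses over
the rounding of odd values, which the real assembly takes care of):

    /* Compute (tN + tM) >> 1 without ever forming tN + tM, so the sum
     * cannot overflow a 16-bit element; on AArch64 the shift and
     * accumulate is a single SRA instruction */
    int16_t half_sum(int16_t tN_half, int16_t tM)
    {
        return tN_half + (tM >> 1); /* tN_half holds tN >> 1 */
    }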
I haven't had sight of the final standard, so it's possible that these
issues were dealt with during finalisation, which could explain why
extreme inputs don't show up in real streams. Or a preponderance of
decoders that only support 16-bit intermediate values in their inverse
transforms might have caused encoders to steer clear of such cases.
I have effectively followed this approach in the test, limiting the scale
of the coefficients sufficiently that both the existing AArch32 decoder
and my new AArch64 decoder pass.
Signed-off-by: Ben Avison <bavison@riscosopen.org>
---
tests/checkasm/vc1dsp.c | 283 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 283 insertions(+)
diff --git a/tests/checkasm/vc1dsp.c b/tests/checkasm/vc1dsp.c
index 2fd6c74d6c..7d4457306f 100644
--- a/tests/checkasm/vc1dsp.c
+++ b/tests/checkasm/vc1dsp.c
@@ -30,12 +30,208 @@
#include "libavutil/mem_internal.h"
#define VC1DSP_TEST(func) { #func, offsetof(VC1DSPContext, func) },
+#define VC1DSP_SIZED_TEST(func, width, height) { #func, offsetof(VC1DSPContext, func), width, height },
typedef struct {
const char *name;
size_t offset;
+ int width;
+ int height;
} test;
+typedef struct matrix {
+ size_t width;
+ size_t height;
+ float d[];
+} matrix;
+
+static const matrix T8 = { 8, 8, {
+ 12, 12, 12, 12, 12, 12, 12, 12,
+ 16, 15, 9, 4, -4, -9, -15, -16,
+ 16, 6, -6, -16, -16, -6, 6, 16,
+ 15, -4, -16, -9, 9, 16, 4, -15,
+ 12, -12, -12, 12, 12, -12, -12, 12,
+ 9, -16, 4, 15, -15, -4, 16, -9,
+ 6, -16, 16, -6, -6, 16, -16, 6,
+ 4, -9, 15, -16, 16, -15, 9, -4
+} };
+
+static const matrix T4 = { 4, 4, {
+ 17, 17, 17, 17,
+ 22, 10, -10, -22,
+ 17, -17, -17, 17,
+ 10, -22, 22, -10
+} };
+
+static const matrix T8t = { 8, 8, {
+ 12, 16, 16, 15, 12, 9, 6, 4,
+ 12, 15, 6, -4, -12, -16, -16, -9,
+ 12, 9, -6, -16, -12, 4, 16, 15,
+ 12, 4, -16, -9, 12, 15, -6, -16,
+ 12, -4, -16, 9, 12, -15, -6, 16,
+ 12, -9, -6, 16, -12, -4, 16, -15,
+ 12, -15, 6, 4, -12, 16, -16, 9,
+ 12, -16, 16, -15, 12, -9, 6, -4
+} };
+
+static const matrix T4t = { 4, 4, {
+ 17, 22, 17, 10,
+ 17, 10, -17, -22,
+ 17, -10, -17, 22,
+ 17, -22, 17, -10
+} };
+
+static matrix *new_matrix(size_t width, size_t height)
+{
+ matrix *out = av_mallocz(sizeof (matrix) + height * width * sizeof (float));
+ if (out == NULL) {
+ fprintf(stderr, "Memory allocation failure\n");
+ exit(EXIT_FAILURE);
+ }
+ out->width = width;
+ out->height = height;
+ return out;
+}
+
+static matrix *multiply(const matrix *a, const matrix *b)
+{
+ matrix *out;
+ if (a->width != b->height) {
+ fprintf(stderr, "Incompatible multiplication\n");
+ exit(EXIT_FAILURE);
+ }
+ out = new_matrix(b->width, a->height);
+ for (int j = 0; j < out->height; ++j)
+ for (int i = 0; i < out->width; ++i) {
+ float sum = 0;
+ for (int k = 0; k < a->width; ++k)
+ sum += a->d[j * a->width + k] * b->d[k * b->width + i];
+ out->d[j * out->width + i] = sum;
+ }
+ return out;
+}
+
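+/* The rows of T8/T4 are orthogonal but not normalised, so rescale:
+ * dividing by a quarter of the squared norm of each relevant row and
+ * column vector and multiplying by 64 scales the product by
+ * 1024 / (rownorm^2 * colnorm^2), matching the inverse transform's
+ * divisions by 8 and then 128, so that D inverse-transforms back to
+ * approximately the original values */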
+static void normalise(matrix *a)
+{
+ for (int j = 0; j < a->height; ++j)
+ for (int i = 0; i < a->width; ++i) {
+ float *p = a->d + j * a->width + i;
+ *p *= 64;
+ if (a->height == 4)
+ *p /= (const unsigned[]) { 289, 292, 289, 292 } [j];
+ else
+ *p /= (const unsigned[]) { 288, 289, 292, 289, 288, 289, 292, 289 } [j];
+ if (a->width == 4)
+ *p /= (const unsigned[]) { 289, 292, 289, 292 } [i];
+ else
+ *p /= (const unsigned[]) { 288, 289, 292, 289, 288, 289, 292, 289 } [i];
+ }
+}
+
+static void divide_and_round_nearest(matrix *a, float by)
+{
+ for (int j = 0; j < a->height; ++j)
+ for (int i = 0; i < a->width; ++i) {
+ float *p = a->d + j * a->width + i;
+ *p = rintf(*p / by);
+ }
+}
+
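+/* The reference inverse transform adds an extra 1 to rows 4 and above
+ * ahead of the final rounding shift (see vc1_inv_trans_8x8_c), so the
+ * expected result R must mirror that */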
+static void tweak(matrix *a)
+{
+ for (int j = 4; j < a->height; ++j)
+ for (int i = 0; i < a->width; ++i) {
+ float *p = a->d + j * a->width + i;
+ *p += 1;
+ }
+}
+
+/* The VC-1 spec places restrictions on the values permitted at three
+ * different stages:
+ * - D: the input coefficients in frequency domain
+ * - E: the intermediate coefficients, inverse-transformed only horizontally
+ * - R: the fully inverse-transformed coefficients
+ *
+ * To fully cater for the ranges specified requires various intermediate
+ * values to be held to 17-bit precision; yet these conditions do not appear
+ * to be utilised in real-world streams. At least some assembly
+ * implementations have chosen to restrict these values to 16-bit precision,
+ * to accelerate the decoding of real-world streams at the cost of strict
+ * adherence to the spec. To avoid our test marking these as failures,
+ * reduce our random inputs.
+ */
+#define ATTENUATION 4
+
+static matrix *generate_inverse_quantized_transform_coefficients(size_t width, size_t height)
+{
+ matrix *raw, *tmp, *D, *E, *R;
+ raw = new_matrix(width, height);
+ for (int i = 0; i < width * height; ++i)
+ raw->d[i] = (int) (rnd() % (1024/ATTENUATION)) - 512/ATTENUATION;
+ tmp = multiply(height == 8 ? &T8 : &T4, raw);
+ D = multiply(tmp, width == 8 ? &T8t : &T4t);
+ normalise(D);
+ divide_and_round_nearest(D, 1);
+ for (int i = 0; i < width * height; ++i) {
+ if (D->d[i] < -2048/ATTENUATION || D->d[i] > 2048/ATTENUATION-1) {
+ /* Rare, so simply try again */
+ av_free(raw);
+ av_free(tmp);
+ av_free(D);
+ return generate_inverse_quantized_transform_coefficients(width, height);
+ }
+ }
+ E = multiply(D, width == 8 ? &T8 : &T4);
+ divide_and_round_nearest(E, 8);
+ for (int i = 0; i < width * height; ++i)
+ if (E->d[i] < -4096/ATTENUATION || E->d[i] > 4096/ATTENUATION-1) {
+ /* Rare, so simply try again */
+ av_free(raw);
+ av_free(tmp);
+ av_free(D);
+ av_free(E);
+ return generate_inverse_quantized_transform_coefficients(width, height);
+ }
+ R = multiply(height == 8 ? &T8t : &T4t, E);
+ tweak(R);
+ divide_and_round_nearest(R, 128);
+ for (int i = 0; i < width * height; ++i)
+ if (R->d[i] < -512/ATTENUATION || R->d[i] > 512/ATTENUATION-1) {
+ /* Rare, so simply try again */
+ av_free(raw);
+ av_free(tmp);
+ av_free(D);
+ av_free(E);
+ av_free(R);
+ return generate_inverse_quantized_transform_coefficients(width, height);
+ }
+ av_free(raw);
+ av_free(tmp);
+ av_free(E);
+ av_free(R);
+ return D;
+}
+
+#define RANDOMIZE_BUFFER16(name, size) \
+ do { \
+ int i; \
+ for (i = 0; i < size; ++i) { \
+ uint16_t r = rnd(); \
+ AV_WN16A(name##0 + i, r); \
+ AV_WN16A(name##1 + i, r); \
+ } \
+ } while (0)
+
+#define RANDOMIZE_BUFFER8(name, size) \
+ do { \
+ int i; \
+ for (i = 0; i < size; ++i) { \
+ uint8_t r = rnd(); \
+ name##0[i] = r; \
+ name##1[i] = r; \
+ } \
+ } while (0)
+
#define RANDOMIZE_BUFFER8_MID_WEIGHTED(name, size) \
do { \
uint8_t *p##0 = name##0, *p##1 = name##1; \
@@ -49,6 +245,89 @@ typedef struct {
} \
} while (0)
+static void check_inv_trans_inplace(void)
+{
+ /* Inverse transform input coefficients are stored in a 16-bit buffer
+ * with row stride of 8 coefficients irrespective of transform size.
+ * vc1_inv_trans_8x8 differs from the others in two ways: coefficients
+ * are stored in column-major order, and the outputs are written back
+ * to the input buffer, so we oversize it slightly to catch overruns. */
+ LOCAL_ALIGNED_16(int16_t, inv_trans_in0, [10 * 8]);
+ LOCAL_ALIGNED_16(int16_t, inv_trans_in1, [10 * 8]);
+
+ VC1DSPContext h;
+
+ ff_vc1dsp_init(&h);
+
+ if (check_func(h.vc1_inv_trans_8x8, "vc1dsp.vc1_inv_trans_8x8")) {
+ matrix *coeffs;
+ declare_func_emms(AV_CPU_FLAG_MMX, void, int16_t *);
+ RANDOMIZE_BUFFER16(inv_trans_in, 10 * 8);
+ coeffs = generate_inverse_quantized_transform_coefficients(8, 8);
+ for (int j = 0; j < 8; ++j)
+ for (int i = 0; i < 8; ++i) {
+ int idx = 8 + i * 8 + j;
+ inv_trans_in1[idx] = inv_trans_in0[idx] = coeffs->d[j * 8 + i];
+ }
+ call_ref(inv_trans_in0 + 8);
+ call_new(inv_trans_in1 + 8);
+ if (memcmp(inv_trans_in0, inv_trans_in1, 10 * 8 * sizeof (int16_t)))
+ fail();
+ bench_new(inv_trans_in1 + 8);
+ av_free(coeffs);
+ }
+}
+
+static void check_inv_trans_adding(void)
+{
+ /* Inverse transform input coefficients are stored in a 16-bit buffer
+ * with row stride of 8 coefficients irrespective of transform size. */
+ LOCAL_ALIGNED_16(int16_t, inv_trans_in0, [8 * 8]);
+ LOCAL_ALIGNED_16(int16_t, inv_trans_in1, [8 * 8]);
+
+ /* For all but vc1_inv_trans_8x8, the inverse transform is narrowed and
+ * added with saturation to an array of unsigned 8-bit values. Oversize
+ * this by 8 samples left and right and one row above and below. */
+ LOCAL_ALIGNED_8(uint8_t, inv_trans_out0, [10 * 24]);
+ LOCAL_ALIGNED_8(uint8_t, inv_trans_out1, [10 * 24]);
+
+ VC1DSPContext h;
+
+ const test tests[] = {
+ VC1DSP_SIZED_TEST(vc1_inv_trans_8x4, 8, 4)
+ VC1DSP_SIZED_TEST(vc1_inv_trans_4x8, 4, 8)
+ VC1DSP_SIZED_TEST(vc1_inv_trans_4x4, 4, 4)
+ VC1DSP_SIZED_TEST(vc1_inv_trans_8x8_dc, 8, 8)
+ VC1DSP_SIZED_TEST(vc1_inv_trans_8x4_dc, 8, 4)
+ VC1DSP_SIZED_TEST(vc1_inv_trans_4x8_dc, 4, 8)
+ VC1DSP_SIZED_TEST(vc1_inv_trans_4x4_dc, 4, 4)
+ };
+
+ ff_vc1dsp_init(&h);
+
+ for (size_t t = 0; t < FF_ARRAY_ELEMS(tests); ++t) {
+ void (*func)(uint8_t *, ptrdiff_t, int16_t *) = *(void **)((intptr_t) &h + tests[t].offset);
+ if (check_func(func, "vc1dsp.%s", tests[t].name)) {
+ matrix *coeffs;
+ declare_func_emms(AV_CPU_FLAG_MMX, void, uint8_t *, ptrdiff_t, int16_t *);
+ RANDOMIZE_BUFFER16(inv_trans_in, 8 * 8);
+ RANDOMIZE_BUFFER8(inv_trans_out, 10 * 24);
+ coeffs = generate_inverse_quantized_transform_coefficients(tests[t].width, tests[t].height);
+ for (int j = 0; j < tests[t].height; ++j)
+ for (int i = 0; i < tests[t].width; ++i) {
+ int idx = j * 8 + i;
+ inv_trans_in1[idx] = inv_trans_in0[idx] = coeffs->d[j * tests[t].width + i];
+ }
+ call_ref(inv_trans_out0 + 24 + 8, 24, inv_trans_in0);
+ call_new(inv_trans_out1 + 24 + 8, 24, inv_trans_in1);
+ if (memcmp(inv_trans_out0, inv_trans_out1, 10 * 24))
+ fail();
+ bench_new(inv_trans_out1 + 24 + 8, 24, inv_trans_in1);
+ av_free(coeffs);
+ }
+ }
+}
+
static void check_loop_filter(void)
{
/* Deblocking filter buffers are big enough to hold a 16x16 block,
@@ -97,6 +376,10 @@ static void check_loop_filter(void)
void checkasm_check_vc1dsp(void)
{
+ check_inv_trans_inplace();
+ check_inv_trans_adding();
+ report("inv_trans");
+
check_loop_filter();
report("loop_filter");
}
--
2.25.1
* [FFmpeg-devel] [PATCH v3 03/10] checkasm: Add idctdsp add/put-pixels-clamped tests
From: Ben Avison @ 2022-03-31 17:23 UTC
To: ffmpeg-devel; +Cc: Ben Avison
Signed-off-by: Ben Avison <bavison@riscosopen.org>
---
 tests/checkasm/Makefile   |  1 +
 tests/checkasm/checkasm.c |  3 ++
 tests/checkasm/checkasm.h |  1 +
 tests/checkasm/idctdsp.c  | 98 +++++++++++++++++++++++++++++++++++++++
 tests/fate/checkasm.mak   |  1 +
 5 files changed, 104 insertions(+)
create mode 100644 tests/checkasm/idctdsp.c
diff --git a/tests/checkasm/Makefile b/tests/checkasm/Makefile
index 7133a6ee66..f6b1008855 100644
--- a/tests/checkasm/Makefile
+++ b/tests/checkasm/Makefile
@@ -9,6 +9,7 @@ AVCODECOBJS-$(CONFIG_G722DSP) += g722dsp.o
AVCODECOBJS-$(CONFIG_H264DSP) += h264dsp.o
AVCODECOBJS-$(CONFIG_H264PRED) += h264pred.o
AVCODECOBJS-$(CONFIG_H264QPEL) += h264qpel.o
+AVCODECOBJS-$(CONFIG_IDCTDSP) += idctdsp.o
AVCODECOBJS-$(CONFIG_LLVIDDSP) += llviddsp.o
AVCODECOBJS-$(CONFIG_LLVIDENCDSP) += llviddspenc.o
AVCODECOBJS-$(CONFIG_VC1DSP) += vc1dsp.o
diff --git a/tests/checkasm/checkasm.c b/tests/checkasm/checkasm.c
index c2efd81b6d..57134f96ea 100644
--- a/tests/checkasm/checkasm.c
+++ b/tests/checkasm/checkasm.c
@@ -123,6 +123,9 @@ static const struct {
#if CONFIG_HUFFYUV_DECODER
{ "huffyuvdsp", checkasm_check_huffyuvdsp },
#endif
+ #if CONFIG_IDCTDSP
+ { "idctdsp", checkasm_check_idctdsp },
+ #endif
#if CONFIG_JPEG2000_DECODER
{ "jpeg2000dsp", checkasm_check_jpeg2000dsp },
#endif
diff --git a/tests/checkasm/checkasm.h b/tests/checkasm/checkasm.h
index 52ab18a5b1..a86db140e3 100644
--- a/tests/checkasm/checkasm.h
+++ b/tests/checkasm/checkasm.h
@@ -64,6 +64,7 @@ void checkasm_check_hevc_idct(void);
void checkasm_check_hevc_pel(void);
void checkasm_check_hevc_sao(void);
void checkasm_check_huffyuvdsp(void);
+void checkasm_check_idctdsp(void);
void checkasm_check_jpeg2000dsp(void);
void checkasm_check_llviddsp(void);
void checkasm_check_llviddspenc(void);
diff --git a/tests/checkasm/idctdsp.c b/tests/checkasm/idctdsp.c
new file mode 100644
index 0000000000..02724536a7
--- /dev/null
+++ b/tests/checkasm/idctdsp.c
@@ -0,0 +1,98 @@
+/*
+ * Copyright (c) 2022 Ben Avison
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with FFmpeg; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+ */
+
+#include <string.h>
+
+#include "checkasm.h"
+
+#include "libavcodec/idctdsp.h"
+
+#include "libavutil/common.h"
+#include "libavutil/internal.h"
+#include "libavutil/intreadwrite.h"
+#include "libavutil/mem_internal.h"
+
+#define IDCTDSP_TEST(func) { #func, offsetof(IDCTDSPContext, func) },
+
+typedef struct {
+ const char *name;
+ size_t offset;
+} test;
+
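+/* Restrict the 16-bit input values to +/-0x100 (see series changelog):
+ * implementations may assume the narrower range that occurs in practice
+ * rather than supporting the full int16_t range */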
+#define RANDOMIZE_BUFFER16(name, size) \
+ do { \
+ int i; \
+ for (i = 0; i < size; ++i) { \
+ uint16_t r = rnd() % 0x201 - 0x100; \
+ AV_WN16A(name##0 + i, r); \
+ AV_WN16A(name##1 + i, r); \
+ } \
+ } while (0)
+
+#define RANDOMIZE_BUFFER8(name, size) \
+ do { \
+ int i; \
+ for (i = 0; i < size; ++i) { \
+ uint8_t r = rnd(); \
+ name##0[i] = r; \
+ name##1[i] = r; \
+ } \
+ } while (0)
+
+static void check_add_put_clamped(void)
+{
+ /* Source buffers are only as big as needed, since any over-read won't affect results */
+ LOCAL_ALIGNED_16(int16_t, src0, [64]);
+ LOCAL_ALIGNED_16(int16_t, src1, [64]);
+ /* Destination buffers have borders of one row above/below and 8 columns left/right to catch overflows */
+ LOCAL_ALIGNED_8(uint8_t, dst0, [10 * 24]);
+ LOCAL_ALIGNED_8(uint8_t, dst1, [10 * 24]);
+
+ AVCodecContext avctx = { 0 };
+ IDCTDSPContext h;
+
+ const test tests[] = {
+ IDCTDSP_TEST(add_pixels_clamped)
+ IDCTDSP_TEST(put_pixels_clamped)
+ IDCTDSP_TEST(put_signed_pixels_clamped)
+ };
+
+ ff_idctdsp_init(&h, &avctx);
+
+ for (size_t t = 0; t < FF_ARRAY_ELEMS(tests); ++t) {
+ void (*func)(const int16_t *, uint8_t *, ptrdiff_t) = *(void **)((intptr_t) &h + tests[t].offset);
+ if (check_func(func, "idctdsp.%s", tests[t].name)) {
+ declare_func_emms(AV_CPU_FLAG_MMX, void, const int16_t *, uint8_t *, ptrdiff_t);
+ RANDOMIZE_BUFFER16(src, 64);
+ RANDOMIZE_BUFFER8(dst, 10 * 24);
+ call_ref(src0, dst0 + 24 + 8, 24);
+ call_new(src1, dst1 + 24 + 8, 24);
+ if (memcmp(dst0, dst1, 10 * 24))
+ fail();
+ bench_new(src1, dst1 + 24 + 8, 24);
+ }
+ }
+}
+
+void checkasm_check_idctdsp(void)
+{
+ check_add_put_clamped();
+ report("idctdsp");
+}
diff --git a/tests/fate/checkasm.mak b/tests/fate/checkasm.mak
index 99e6bb13c4..c6273db183 100644
--- a/tests/fate/checkasm.mak
+++ b/tests/fate/checkasm.mak
@@ -19,6 +19,7 @@ FATE_CHECKASM = fate-checkasm-aacpsdsp \
fate-checkasm-hevc_pel \
fate-checkasm-hevc_sao \
fate-checkasm-huffyuvdsp \
+ fate-checkasm-idctdsp \
fate-checkasm-jpeg2000dsp \
fate-checkasm-llviddsp \
fate-checkasm-llviddspenc \
--
2.25.1
* [FFmpeg-devel] [PATCH v3 04/10] avcodec/vc1: Introduce fast path for unescaping bitstream buffer
From: Ben Avison @ 2022-03-31 17:23 UTC
To: ffmpeg-devel; +Cc: Ben Avison
Includes a checkasm test.
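For reference, the transformation being specialised is the removal of
startcode emulation prevention bytes. In outline (a simplified sketch of
the C fallback in vc1_common.h, with a hypothetical name; not the new
fast path):

    /* A 0x03 byte that follows two zero bytes and precedes a byte less
     * than 4 is an emulation prevention byte: drop it and copy
     * everything else through unchanged. Returns the unescaped length. */
    static int unescape_sketch(const uint8_t *src, int size, uint8_t *dst)
    {
        int dsize = 0;
        for (int i = 0; i < size; i++) {
            if (i >= 2 && i < size - 1 &&
                src[i] == 3 && !src[i - 1] && !src[i - 2] && src[i + 1] < 4)
                continue; /* skip the escape byte itself */
            dst[dsize++] = src[i];
        }
        return dsize;
    }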
Signed-off-by: Ben Avison <bavison@riscosopen.org>
---
 libavcodec/vc1dec.c     | 20 ++++++------
 libavcodec/vc1dsp.c     |  2 ++
 libavcodec/vc1dsp.h     |  3 ++
 tests/checkasm/vc1dsp.c | 67 +++++++++++++++++++++++++++++++++++
 4 files changed, 82 insertions(+), 10 deletions(-)
diff --git a/libavcodec/vc1dec.c b/libavcodec/vc1dec.c
index e279ffd1c1..0426e8a752 100644
--- a/libavcodec/vc1dec.c
+++ b/libavcodec/vc1dec.c
@@ -491,7 +491,7 @@ static av_cold int vc1_decode_init(AVCodecContext *avctx)
size = next - start - 4;
if (size <= 0)
continue;
- buf2_size = vc1_unescape_buffer(start + 4, size, buf2);
+ buf2_size = v->vc1dsp.vc1_unescape_buffer(start + 4, size, buf2);
init_get_bits(&gb, buf2, buf2_size * 8);
switch (AV_RB32(start)) {
case VC1_CODE_SEQHDR:
@@ -681,7 +681,7 @@ static int vc1_decode_frame(AVCodecContext *avctx, void *data,
case VC1_CODE_FRAME:
if (avctx->hwaccel)
buf_start = start;
- buf_size2 = vc1_unescape_buffer(start + 4, size, buf2);
+ buf_size2 = v->vc1dsp.vc1_unescape_buffer(start + 4, size, buf2);
break;
case VC1_CODE_FIELD: {
int buf_size3;
@@ -698,8 +698,8 @@ static int vc1_decode_frame(AVCodecContext *avctx, void *data,
ret = AVERROR(ENOMEM);
goto err;
}
- buf_size3 = vc1_unescape_buffer(start + 4, size,
- slices[n_slices].buf);
+ buf_size3 = v->vc1dsp.vc1_unescape_buffer(start + 4, size,
+ slices[n_slices].buf);
init_get_bits(&slices[n_slices].gb, slices[n_slices].buf,
buf_size3 << 3);
slices[n_slices].mby_start = avctx->coded_height + 31 >> 5;
@@ -710,7 +710,7 @@ static int vc1_decode_frame(AVCodecContext *avctx, void *data,
break;
}
case VC1_CODE_ENTRYPOINT: /* it should be before frame data */
- buf_size2 = vc1_unescape_buffer(start + 4, size, buf2);
+ buf_size2 = v->vc1dsp.vc1_unescape_buffer(start + 4, size, buf2);
init_get_bits(&s->gb, buf2, buf_size2 * 8);
ff_vc1_decode_entry_point(avctx, v, &s->gb);
break;
@@ -727,8 +727,8 @@ static int vc1_decode_frame(AVCodecContext *avctx, void *data,
ret = AVERROR(ENOMEM);
goto err;
}
- buf_size3 = vc1_unescape_buffer(start + 4, size,
- slices[n_slices].buf);
+ buf_size3 = v->vc1dsp.vc1_unescape_buffer(start + 4, size,
+ slices[n_slices].buf);
init_get_bits(&slices[n_slices].gb, slices[n_slices].buf,
buf_size3 << 3);
slices[n_slices].mby_start = get_bits(&slices[n_slices].gb, 9);
@@ -762,7 +762,7 @@ static int vc1_decode_frame(AVCodecContext *avctx, void *data,
ret = AVERROR(ENOMEM);
goto err;
}
- buf_size3 = vc1_unescape_buffer(divider + 4, buf + buf_size - divider - 4, slices[n_slices].buf);
+ buf_size3 = v->vc1dsp.vc1_unescape_buffer(divider + 4, buf + buf_size - divider - 4, slices[n_slices].buf);
init_get_bits(&slices[n_slices].gb, slices[n_slices].buf,
buf_size3 << 3);
slices[n_slices].mby_start = s->mb_height + 1 >> 1;
@@ -771,9 +771,9 @@ static int vc1_decode_frame(AVCodecContext *avctx, void *data,
n_slices1 = n_slices - 1;
n_slices++;
}
- buf_size2 = vc1_unescape_buffer(buf, divider - buf, buf2);
+ buf_size2 = v->vc1dsp.vc1_unescape_buffer(buf, divider - buf, buf2);
} else {
- buf_size2 = vc1_unescape_buffer(buf, buf_size, buf2);
+ buf_size2 = v->vc1dsp.vc1_unescape_buffer(buf, buf_size, buf2);
}
init_get_bits(&s->gb, buf2, buf_size2*8);
} else{
diff --git a/libavcodec/vc1dsp.c b/libavcodec/vc1dsp.c
index f651d7d461..f1b7bb2397 100644
--- a/libavcodec/vc1dsp.c
+++ b/libavcodec/vc1dsp.c
@@ -34,6 +34,7 @@
#include "rnd_avg.h"
#include "vc1dsp.h"
#include "startcode.h"
+#include "vc1_common.h"
/* Apply overlap transform to horizontal edge */
static void vc1_v_overlap_c(uint8_t *src, ptrdiff_t stride)
@@ -1030,6 +1031,7 @@ av_cold void ff_vc1dsp_init(VC1DSPContext *dsp)
#endif /* CONFIG_WMV3IMAGE_DECODER || CONFIG_VC1IMAGE_DECODER */
dsp->startcode_find_candidate = ff_startcode_find_candidate_c;
+ dsp->vc1_unescape_buffer = vc1_unescape_buffer;
if (ARCH_AARCH64)
ff_vc1dsp_init_aarch64(dsp);
diff --git a/libavcodec/vc1dsp.h b/libavcodec/vc1dsp.h
index fe60025a2a..7ed1776ca7 100644
--- a/libavcodec/vc1dsp.h
+++ b/libavcodec/vc1dsp.h
@@ -80,6 +80,9 @@ typedef struct VC1DSPContext {
* one or more further zero bytes and a one byte.
*/
int (*startcode_find_candidate)(const uint8_t *buf, int size);
+
+ /* Copy a buffer, removing startcode emulation escape bytes as we go;
+ * returns the number of bytes written to dst */
+ int (*vc1_unescape_buffer)(const uint8_t *src, int size, uint8_t *dst);
} VC1DSPContext;
void ff_vc1dsp_init(VC1DSPContext* c);
diff --git a/tests/checkasm/vc1dsp.c b/tests/checkasm/vc1dsp.c
index 7d4457306f..52628d15e4 100644
--- a/tests/checkasm/vc1dsp.c
+++ b/tests/checkasm/vc1dsp.c
@@ -374,6 +374,70 @@ static void check_loop_filter(void)
}
}
+#define TEST_UNESCAPE \
+ do { \
+ for (int count = 100; count > 0; --count) { \
+ escaped_offset = rnd() & 7; \
+ unescaped_offset = rnd() & 7; \
+ escaped_len = (1u << (rnd() % 8) + 3) - (rnd() & 7); \
+ RANDOMIZE_BUFFER8(unescaped, UNESCAPE_BUF_SIZE); \
+ len0 = call_ref(escaped0 + escaped_offset, escaped_len, unescaped0 + unescaped_offset); \
+ len1 = call_new(escaped1 + escaped_offset, escaped_len, unescaped1 + unescaped_offset); \
+ if (len0 != len1 || memcmp(unescaped0, unescaped1, UNESCAPE_BUF_SIZE)) \
+ fail(); \
+ } \
+ } while (0)
+
+static void check_unescape(void)
+{
+ /* This appears to be a typical length of buffer in use */
+#define LOG2_UNESCAPE_BUF_SIZE 17
+#define UNESCAPE_BUF_SIZE (1u<<LOG2_UNESCAPE_BUF_SIZE)
+ LOCAL_ALIGNED_8(uint8_t, escaped0, [UNESCAPE_BUF_SIZE]);
+ LOCAL_ALIGNED_8(uint8_t, escaped1, [UNESCAPE_BUF_SIZE]);
+ LOCAL_ALIGNED_8(uint8_t, unescaped0, [UNESCAPE_BUF_SIZE]);
+ LOCAL_ALIGNED_8(uint8_t, unescaped1, [UNESCAPE_BUF_SIZE]);
+
+ VC1DSPContext h;
+
+ ff_vc1dsp_init(&h);
+
+ if (check_func(h.vc1_unescape_buffer, "vc1dsp.vc1_unescape_buffer")) {
+ int len0, len1, escaped_offset, unescaped_offset, escaped_len;
+ declare_func_emms(AV_CPU_FLAG_MMX, int, const uint8_t *, int, uint8_t *);
+
+ /* Test data which consists of escape sequences packed as tightly as possible */
+ for (int x = 0; x < UNESCAPE_BUF_SIZE; ++x)
+ escaped1[x] = escaped0[x] = 3 * (x % 3 == 0);
+ TEST_UNESCAPE;
+
+ /* Test random data */
+ RANDOMIZE_BUFFER8(escaped, UNESCAPE_BUF_SIZE);
+ TEST_UNESCAPE;
+
+ /* Test data with escape sequences at random intervals */
+ for (int x = 0; x <= UNESCAPE_BUF_SIZE - 4;) {
+ int gap, gap_msb;
+ escaped1[x+0] = escaped0[x+0] = 0;
+ escaped1[x+1] = escaped0[x+1] = 0;
+ escaped1[x+2] = escaped0[x+2] = 3;
+ escaped1[x+3] = escaped0[x+3] = rnd() & 3;
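+ /* Choose a log-uniformly distributed gap in the range [2, 511]:
+ * fix the top bit, then randomise the bits below it */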
+ gap_msb = 2u << (rnd() % 8);
+ gap = (rnd() &~ -gap_msb) | gap_msb;
+ x += gap;
+ }
+ TEST_UNESCAPE;
+
+ /* Test data which is known to contain no escape sequences */
+ memset(escaped0, 0xFF, UNESCAPE_BUF_SIZE);
+ memset(escaped1, 0xFF, UNESCAPE_BUF_SIZE);
+ TEST_UNESCAPE;
+
+ /* Benchmark the no-escape-sequences case */
+ bench_new(escaped1, UNESCAPE_BUF_SIZE, unescaped1);
+ }
+}
+
void checkasm_check_vc1dsp(void)
{
check_inv_trans_inplace();
@@ -382,4 +446,7 @@ void checkasm_check_vc1dsp(void)
check_loop_filter();
report("loop_filter");
+
+ check_unescape();
+ report("unescape_buffer");
}
--
2.25.1
* [FFmpeg-devel] [PATCH v3 05/10] avcodec/vc1: Arm 64-bit NEON deblocking filter fast paths
From: Ben Avison @ 2022-03-31 17:23 UTC
To: ffmpeg-devel; +Cc: Ben Avison
checkasm benchmarks on 1.5 GHz Cortex-A72 are as follows. Note that the C
version can still outperform the NEON version in specific cases. The balance
between different code paths is stream-dependent, but in practice the best
case happens about 5% of the time, the worst case happens about 40% of the
time, and the complexity of the remaining cases falls somewhere in between.
Therefore, taking the average of the best and worst case timings is
probably a conservative estimate of the degree by which the NEON code
improves performance.
vc1dsp.vc1_h_loop_filter4_bestcase_c: 10.7
vc1dsp.vc1_h_loop_filter4_bestcase_neon: 43.5
vc1dsp.vc1_h_loop_filter4_worstcase_c: 184.5
vc1dsp.vc1_h_loop_filter4_worstcase_neon: 73.7
vc1dsp.vc1_h_loop_filter8_bestcase_c: 31.2
vc1dsp.vc1_h_loop_filter8_bestcase_neon: 62.2
vc1dsp.vc1_h_loop_filter8_worstcase_c: 358.2
vc1dsp.vc1_h_loop_filter8_worstcase_neon: 88.2
vc1dsp.vc1_h_loop_filter16_bestcase_c: 51.0
vc1dsp.vc1_h_loop_filter16_bestcase_neon: 107.7
vc1dsp.vc1_h_loop_filter16_worstcase_c: 722.7
vc1dsp.vc1_h_loop_filter16_worstcase_neon: 140.5
vc1dsp.vc1_v_loop_filter4_bestcase_c: 9.7
vc1dsp.vc1_v_loop_filter4_bestcase_neon: 43.0
vc1dsp.vc1_v_loop_filter4_worstcase_c: 178.7
vc1dsp.vc1_v_loop_filter4_worstcase_neon: 69.0
vc1dsp.vc1_v_loop_filter8_bestcase_c: 30.2
vc1dsp.vc1_v_loop_filter8_bestcase_neon: 50.7
vc1dsp.vc1_v_loop_filter8_worstcase_c: 353.0
vc1dsp.vc1_v_loop_filter8_worstcase_neon: 69.2
vc1dsp.vc1_v_loop_filter16_bestcase_c: 60.0
vc1dsp.vc1_v_loop_filter16_bestcase_neon: 90.0
vc1dsp.vc1_v_loop_filter16_worstcase_c: 714.2
vc1dsp.vc1_v_loop_filter16_worstcase_neon: 97.2
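Averaging the vc1_v_loop_filter16 numbers, for example, gives
(60.0 + 714.2) / 2 = 387.1 cycles for the C version against
(90.0 + 97.2) / 2 = 93.6 for NEON - roughly a 4x speedup by that
conservative measure.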
Signed-off-by: Ben Avison <bavison@riscosopen.org>
---
 libavcodec/aarch64/Makefile              |   1 +
 libavcodec/aarch64/vc1dsp_init_aarch64.c |  14 +
 libavcodec/aarch64/vc1dsp_neon.S         | 692 +++++++++++++++++++++++
 3 files changed, 707 insertions(+)
create mode 100644 libavcodec/aarch64/vc1dsp_neon.S
diff --git a/libavcodec/aarch64/Makefile b/libavcodec/aarch64/Makefile
index 954461f81d..5b25e4dfb9 100644
--- a/libavcodec/aarch64/Makefile
+++ b/libavcodec/aarch64/Makefile
@@ -48,6 +48,7 @@ NEON-OBJS-$(CONFIG_IDCTDSP) += aarch64/simple_idct_neon.o
NEON-OBJS-$(CONFIG_MDCT) += aarch64/mdct_neon.o
NEON-OBJS-$(CONFIG_MPEGAUDIODSP) += aarch64/mpegaudiodsp_neon.o
NEON-OBJS-$(CONFIG_PIXBLOCKDSP) += aarch64/pixblockdsp_neon.o
+NEON-OBJS-$(CONFIG_VC1DSP) += aarch64/vc1dsp_neon.o
NEON-OBJS-$(CONFIG_VP8DSP) += aarch64/vp8dsp_neon.o
# decoders/encoders
diff --git a/libavcodec/aarch64/vc1dsp_init_aarch64.c b/libavcodec/aarch64/vc1dsp_init_aarch64.c
index 13dfd74940..8f96e4802d 100644
--- a/libavcodec/aarch64/vc1dsp_init_aarch64.c
+++ b/libavcodec/aarch64/vc1dsp_init_aarch64.c
@@ -25,6 +25,13 @@
#include "config.h"
+void ff_vc1_v_loop_filter4_neon(uint8_t *src, ptrdiff_t stride, int pq);
+void ff_vc1_h_loop_filter4_neon(uint8_t *src, ptrdiff_t stride, int pq);
+void ff_vc1_v_loop_filter8_neon(uint8_t *src, ptrdiff_t stride, int pq);
+void ff_vc1_h_loop_filter8_neon(uint8_t *src, ptrdiff_t stride, int pq);
+void ff_vc1_v_loop_filter16_neon(uint8_t *src, ptrdiff_t stride, int pq);
+void ff_vc1_h_loop_filter16_neon(uint8_t *src, ptrdiff_t stride, int pq);
+
void ff_put_vc1_chroma_mc8_neon(uint8_t *dst, uint8_t *src, ptrdiff_t stride,
int h, int x, int y);
void ff_avg_vc1_chroma_mc8_neon(uint8_t *dst, uint8_t *src, ptrdiff_t stride,
@@ -39,6 +46,13 @@ av_cold void ff_vc1dsp_init_aarch64(VC1DSPContext *dsp)
int cpu_flags = av_get_cpu_flags();
if (have_neon(cpu_flags)) {
+ dsp->vc1_v_loop_filter4 = ff_vc1_v_loop_filter4_neon;
+ dsp->vc1_h_loop_filter4 = ff_vc1_h_loop_filter4_neon;
+ dsp->vc1_v_loop_filter8 = ff_vc1_v_loop_filter8_neon;
+ dsp->vc1_h_loop_filter8 = ff_vc1_h_loop_filter8_neon;
+ dsp->vc1_v_loop_filter16 = ff_vc1_v_loop_filter16_neon;
+ dsp->vc1_h_loop_filter16 = ff_vc1_h_loop_filter16_neon;
+
dsp->put_no_rnd_vc1_chroma_pixels_tab[0] = ff_put_vc1_chroma_mc8_neon;
dsp->avg_no_rnd_vc1_chroma_pixels_tab[0] = ff_avg_vc1_chroma_mc8_neon;
dsp->put_no_rnd_vc1_chroma_pixels_tab[1] = ff_put_vc1_chroma_mc4_neon;
diff --git a/libavcodec/aarch64/vc1dsp_neon.S b/libavcodec/aarch64/vc1dsp_neon.S
new file mode 100644
index 0000000000..1ea9fa75ff
--- /dev/null
+++ b/libavcodec/aarch64/vc1dsp_neon.S
@@ -0,0 +1,692 @@
+/*
+ * VC1 AArch64 NEON optimisations
+ *
+ * Copyright (c) 2022 Ben Avison <bavison@riscosopen.org>
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include "libavutil/aarch64/asm.S"
+
+.align 5
+.Lcoeffs:
+.quad 0x00050002 // h[0] = 2, h[1] = 5: multiplier constants used via v0.h[0] and v0.h[1]
+
+// VC-1 in-loop deblocking filter for 4 pixel pairs at boundary of vertically-neighbouring blocks
+// On entry:
+// x0 -> top-left pel of lower block
+// x1 = row stride, bytes
+// w2 = PQUANT bitstream parameter
+function ff_vc1_v_loop_filter4_neon, export=1
+ sub x3, x0, w1, sxtw #2
+ ldr d0, .Lcoeffs
+ ld1 {v1.s}[0], [x0], x1 // P5
+ ld1 {v2.s}[0], [x3], x1 // P1
+ ld1 {v3.s}[0], [x3], x1 // P2
+ ld1 {v4.s}[0], [x0], x1 // P6
+ ld1 {v5.s}[0], [x3], x1 // P3
+ ld1 {v6.s}[0], [x0], x1 // P7
+ ld1 {v7.s}[0], [x3] // P4
+ ld1 {v16.s}[0], [x0] // P8
+ ushll v17.8h, v1.8b, #1 // 2*P5
+ dup v18.8h, w2 // pq
+ ushll v2.8h, v2.8b, #1 // 2*P1
+ uxtl v3.8h, v3.8b // P2
+ uxtl v4.8h, v4.8b // P6
+ uxtl v19.8h, v5.8b // P3
+ mls v2.4h, v3.4h, v0.h[1] // 2*P1-5*P2
+ uxtl v3.8h, v6.8b // P7
+ mls v17.4h, v4.4h, v0.h[1] // 2*P5-5*P6
+ ushll v5.8h, v5.8b, #1 // 2*P3
+ uxtl v6.8h, v7.8b // P4
+ mla v17.4h, v3.4h, v0.h[1] // 2*P5-5*P6+5*P7
+ uxtl v3.8h, v16.8b // P8
+ mla v2.4h, v19.4h, v0.h[1] // 2*P1-5*P2+5*P3
+ uxtl v1.8h, v1.8b // P5
+ mls v5.4h, v6.4h, v0.h[1] // 2*P3-5*P4
+ mls v17.4h, v3.4h, v0.h[0] // 2*P5-5*P6+5*P7-2*P8
+ sub v3.4h, v6.4h, v1.4h // P4-P5
+ mls v2.4h, v6.4h, v0.h[0] // 2*P1-5*P2+5*P3-2*P4
+ mla v5.4h, v1.4h, v0.h[1] // 2*P3-5*P4+5*P5
+ mls v5.4h, v4.4h, v0.h[0] // 2*P3-5*P4+5*P5-2*P6
+ abs v4.4h, v3.4h
+ srshr v7.4h, v17.4h, #3
+ srshr v2.4h, v2.4h, #3
+ sshr v4.4h, v4.4h, #1 // clip
+ srshr v5.4h, v5.4h, #3
+ abs v7.4h, v7.4h // a2
+ sshr v3.4h, v3.4h, #8 // clip_sign
+ abs v2.4h, v2.4h // a1
+ cmeq v16.4h, v4.4h, #0 // test clip == 0
+ abs v17.4h, v5.4h // a0
+ sshr v5.4h, v5.4h, #8 // a0_sign
+ cmhs v19.4h, v2.4h, v7.4h // test a1 >= a2
+ cmhs v18.4h, v17.4h, v18.4h // test a0 >= pq
+ sub v3.4h, v3.4h, v5.4h // clip_sign - a0_sign
+ bsl v19.8b, v7.8b, v2.8b // a3
+ orr v2.8b, v16.8b, v18.8b // test clip == 0 || a0 >= pq
+ uqsub v5.4h, v17.4h, v19.4h // a0 >= a3 ? a0-a3 : 0 (a0 > a3 in all cases where filtering is enabled, so makes more sense to subtract this way round than the opposite and then taking the abs)
+ cmhs v7.4h, v19.4h, v17.4h // test a3 >= a0
+ mul v0.4h, v5.4h, v0.h[1] // a0 >= a3 ? 5*(a0-a3) : 0
+ orr v5.8b, v2.8b, v7.8b // test clip == 0 || a0 >= pq || a3 >= a0
+ mov w0, v5.s[1] // move to gp reg
+ ushr v0.4h, v0.4h, #3 // a0 >= a3 ? (5*(a0-a3))>>3 : 0
+ cmhs v5.4h, v0.4h, v4.4h
+ tbnz w0, #0, 1f // none of the 4 pixel pairs should be updated if this one is not filtered
+ bsl v5.8b, v4.8b, v0.8b // FFMIN(d, clip)
+ bic v0.8b, v5.8b, v2.8b // set each d to zero if it should not be filtered because clip == 0 || a0 >= pq (a3 > a0 case already zeroed by saturating sub)
+ mls v6.4h, v0.4h, v3.4h // invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P4
+ mla v1.4h, v0.4h, v3.4h // invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P5
+ sqxtun v0.8b, v6.8h
+ sqxtun v1.8b, v1.8h
+ st1 {v0.s}[0], [x3], x1
+ st1 {v1.s}[0], [x3]
+1: ret
+endfunc
+
+// VC-1 in-loop deblocking filter for 4 pixel pairs at boundary of horizontally-neighbouring blocks
+// On entry:
+// x0 -> top-left pel of right block
+// x1 = row stride, bytes
+// w2 = PQUANT bitstream parameter
+function ff_vc1_h_loop_filter4_neon, export=1
+ sub x3, x0, #4 // where to start reading
+ ldr d0, .Lcoeffs
+ ld1 {v1.8b}, [x3], x1
+ sub x0, x0, #1 // where to start writing
+ ld1 {v2.8b}, [x3], x1
+ ld1 {v3.8b}, [x3], x1
+ ld1 {v4.8b}, [x3]
+ dup v5.8h, w2 // pq
+ trn1 v6.8b, v1.8b, v2.8b
+ trn2 v1.8b, v1.8b, v2.8b
+ trn1 v2.8b, v3.8b, v4.8b
+ trn2 v3.8b, v3.8b, v4.8b
+ trn1 v4.4h, v6.4h, v2.4h // P1, P5
+ trn1 v7.4h, v1.4h, v3.4h // P2, P6
+ trn2 v2.4h, v6.4h, v2.4h // P3, P7
+ trn2 v1.4h, v1.4h, v3.4h // P4, P8
+ ushll v3.8h, v4.8b, #1 // 2*P1, 2*P5
+ uxtl v6.8h, v7.8b // P2, P6
+ uxtl v7.8h, v2.8b // P3, P7
+ uxtl v1.8h, v1.8b // P4, P8
+ mls v3.8h, v6.8h, v0.h[1] // 2*P1-5*P2, 2*P5-5*P6
+ ushll v2.8h, v2.8b, #1 // 2*P3, 2*P7
+ uxtl v4.8h, v4.8b // P1, P5
+ mla v3.8h, v7.8h, v0.h[1] // 2*P1-5*P2+5*P3, 2*P5-5*P6+5*P7
+ mov d6, v6.d[1] // P6
+ mls v3.8h, v1.8h, v0.h[0] // 2*P1-5*P2+5*P3-2*P4, 2*P5-5*P6+5*P7-2*P8
+ mov d4, v4.d[1] // P5
+ mls v2.4h, v1.4h, v0.h[1] // 2*P3-5*P4
+ mla v2.4h, v4.4h, v0.h[1] // 2*P3-5*P4+5*P5
+ sub v7.4h, v1.4h, v4.4h // P4-P5
+ mls v2.4h, v6.4h, v0.h[0] // 2*P3-5*P4+5*P5-2*P6
+ srshr v3.8h, v3.8h, #3
+ abs v6.4h, v7.4h
+ sshr v7.4h, v7.4h, #8 // clip_sign
+ srshr v2.4h, v2.4h, #3
+ abs v3.8h, v3.8h // a1, a2
+ sshr v6.4h, v6.4h, #1 // clip
+ mov d16, v3.d[1] // a2
+ abs v17.4h, v2.4h // a0
+ cmeq v18.4h, v6.4h, #0 // test clip == 0
+ sshr v2.4h, v2.4h, #8 // a0_sign
+ cmhs v19.4h, v3.4h, v16.4h // test a1 >= a2
+ cmhs v5.4h, v17.4h, v5.4h // test a0 >= pq
+ sub v2.4h, v7.4h, v2.4h // clip_sign - a0_sign
+ bsl v19.8b, v16.8b, v3.8b // a3
+ orr v3.8b, v18.8b, v5.8b // test clip == 0 || a0 >= pq
+ uqsub v5.4h, v17.4h, v19.4h // a0 >= a3 ? a0-a3 : 0 (a0 > a3 in all cases where filtering is enabled, so makes more sense to subtract this way round than the opposite and then taking the abs)
+ cmhs v7.4h, v19.4h, v17.4h // test a3 >= a0
+ mul v0.4h, v5.4h, v0.h[1] // a0 >= a3 ? 5*(a0-a3) : 0
+ orr v5.8b, v3.8b, v7.8b // test clip == 0 || a0 >= pq || a3 >= a0
+ mov w2, v5.s[1] // move to gp reg
+ ushr v0.4h, v0.4h, #3 // a0 >= a3 ? (5*(a0-a3))>>3 : 0
+ cmhs v5.4h, v0.4h, v6.4h
+ tbnz w2, #0, 1f // none of the 4 pixel pairs should be updated if this one is not filtered
+ bsl v5.8b, v6.8b, v0.8b // FFMIN(d, clip)
+ bic v0.8b, v5.8b, v3.8b // set each d to zero if it should not be filtered because clip == 0 || a0 >= pq (a3 > a0 case already zeroed by saturating sub)
+ mla v4.4h, v0.4h, v2.4h // invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P5
+ mls v1.4h, v0.4h, v2.4h // invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P4
+ sqxtun v3.8b, v4.8h
+ sqxtun v2.8b, v1.8h
+ st2 {v2.b, v3.b}[0], [x0], x1
+ st2 {v2.b, v3.b}[1], [x0], x1
+ st2 {v2.b, v3.b}[2], [x0], x1
+ st2 {v2.b, v3.b}[3], [x0]
+1: ret
+endfunc
+
+// VC-1 in-loop deblocking filter for 8 pixel pairs at boundary of vertically-neighbouring blocks
+// On entry:
+// x0 -> top-left pel of lower block
+// x1 = row stride, bytes
+// w2 = PQUANT bitstream parameter
+function ff_vc1_v_loop_filter8_neon, export=1
+ sub x3, x0, w1, sxtw #2
+ ldr d0, .Lcoeffs
+ ld1 {v1.8b}, [x0], x1 // P5
+ movi v2.2d, #0x0000ffff00000000
+ ld1 {v3.8b}, [x3], x1 // P1
+ ld1 {v4.8b}, [x3], x1 // P2
+ ld1 {v5.8b}, [x0], x1 // P6
+ ld1 {v6.8b}, [x3], x1 // P3
+ ld1 {v7.8b}, [x0], x1 // P7
+ ushll v16.8h, v1.8b, #1 // 2*P5
+ ushll v3.8h, v3.8b, #1 // 2*P1
+ ld1 {v17.8b}, [x3] // P4
+ uxtl v4.8h, v4.8b // P2
+ ld1 {v18.8b}, [x0] // P8
+ uxtl v5.8h, v5.8b // P6
+ dup v19.8h, w2 // pq
+ uxtl v20.8h, v6.8b // P3
+ mls v3.8h, v4.8h, v0.h[1] // 2*P1-5*P2
+ uxtl v4.8h, v7.8b // P7
+ ushll v6.8h, v6.8b, #1 // 2*P3
+ mls v16.8h, v5.8h, v0.h[1] // 2*P5-5*P6
+ uxtl v7.8h, v17.8b // P4
+ uxtl v17.8h, v18.8b // P8
+ mla v16.8h, v4.8h, v0.h[1] // 2*P5-5*P6+5*P7
+ uxtl v1.8h, v1.8b // P5
+ mla v3.8h, v20.8h, v0.h[1] // 2*P1-5*P2+5*P3
+ sub v4.8h, v7.8h, v1.8h // P4-P5
+ mls v6.8h, v7.8h, v0.h[1] // 2*P3-5*P4
+ mls v16.8h, v17.8h, v0.h[0] // 2*P5-5*P6+5*P7-2*P8
+ abs v17.8h, v4.8h
+ sshr v4.8h, v4.8h, #8 // clip_sign
+ mls v3.8h, v7.8h, v0.h[0] // 2*P1-5*P2+5*P3-2*P4
+ sshr v17.8h, v17.8h, #1 // clip
+ mla v6.8h, v1.8h, v0.h[1] // 2*P3-5*P4+5*P5
+ srshr v16.8h, v16.8h, #3
+ mls v6.8h, v5.8h, v0.h[0] // 2*P3-5*P4+5*P5-2*P6
+ cmeq v5.8h, v17.8h, #0 // test clip == 0
+ srshr v3.8h, v3.8h, #3
+ abs v16.8h, v16.8h // a2
+ abs v3.8h, v3.8h // a1
+ srshr v6.8h, v6.8h, #3
+ cmhs v18.8h, v3.8h, v16.8h // test a1 >= a2
+ abs v20.8h, v6.8h // a0
+ sshr v6.8h, v6.8h, #8 // a0_sign
+ bsl v18.16b, v16.16b, v3.16b // a3
+ cmhs v3.8h, v20.8h, v19.8h // test a0 >= pq
+ sub v4.8h, v4.8h, v6.8h // clip_sign - a0_sign
+ uqsub v6.8h, v20.8h, v18.8h // a0 >= a3 ? a0-a3 : 0 (a0 > a3 in all cases where filtering is enabled, so makes more sense to subtract this way round than the opposite and then taking the abs)
+ cmhs v16.8h, v18.8h, v20.8h // test a3 >= a0
+ orr v3.16b, v5.16b, v3.16b // test clip == 0 || a0 >= pq
+ mul v0.8h, v6.8h, v0.h[1] // a0 >= a3 ? 5*(a0-a3) : 0
+ orr v5.16b, v3.16b, v16.16b // test clip == 0 || a0 >= pq || a3 >= a0
+ cmtst v2.2d, v5.2d, v2.2d // if 2nd of each group of 4 is not filtered, then none of the others in the group should be either
+ mov w0, v5.s[1] // move to gp reg
+ ushr v0.8h, v0.8h, #3 // a0 >= a3 ? (5*(a0-a3))>>3 : 0
+ mov w2, v5.s[3]
+ orr v2.16b, v3.16b, v2.16b
+ cmhs v3.8h, v0.8h, v17.8h
+ and w0, w0, w2
+ bsl v3.16b, v17.16b, v0.16b // FFMIN(d, clip)
+ tbnz w0, #0, 1f // none of the 8 pixel pairs should be updated in this case
+ bic v0.16b, v3.16b, v2.16b // set each d to zero if it should not be filtered
+ mls v7.8h, v0.8h, v4.8h // invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P4
+ mla v1.8h, v0.8h, v4.8h // invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P5
+ sqxtun v0.8b, v7.8h
+ sqxtun v1.8b, v1.8h
+ st1 {v0.8b}, [x3], x1
+ st1 {v1.8b}, [x3]
+1: ret
+endfunc
+
+// VC-1 in-loop deblocking filter for 8 pixel pairs at boundary of horizontally-neighbouring blocks
+// On entry:
+// x0 -> top-left pel of right block
+// x1 = row stride, bytes
+// w2 = PQUANT bitstream parameter
+function ff_vc1_h_loop_filter8_neon, export=1
+ sub x3, x0, #4 // where to start reading
+ ldr d0, .Lcoeffs
+ ld1 {v1.8b}, [x3], x1 // P1[0], P2[0]...
+ sub x0, x0, #1 // where to start writing
+ ld1 {v2.8b}, [x3], x1
+ add x4, x0, x1, lsl #2
+ ld1 {v3.8b}, [x3], x1
+ ld1 {v4.8b}, [x3], x1
+ ld1 {v5.8b}, [x3], x1
+ ld1 {v6.8b}, [x3], x1
+ ld1 {v7.8b}, [x3], x1
+ trn1 v16.8b, v1.8b, v2.8b // P1[0], P1[1], P3[0]...
+ ld1 {v17.8b}, [x3]
+ trn2 v1.8b, v1.8b, v2.8b // P2[0], P2[1], P4[0]...
+ trn1 v2.8b, v3.8b, v4.8b // P1[2], P1[3], P3[2]...
+ trn2 v3.8b, v3.8b, v4.8b // P2[2], P2[3], P4[2]...
+ dup v4.8h, w2 // pq
+ trn1 v18.8b, v5.8b, v6.8b // P1[4], P1[5], P3[4]...
+ trn2 v5.8b, v5.8b, v6.8b // P2[4], P2[5], P4[4]...
+ trn1 v6.4h, v16.4h, v2.4h // P1[0], P1[1], P1[2], P1[3], P5[0]...
+ trn1 v19.4h, v1.4h, v3.4h // P2[0], P2[1], P2[2], P2[3], P6[0]...
+ trn1 v20.8b, v7.8b, v17.8b // P1[6], P1[7], P3[6]...
+ trn2 v7.8b, v7.8b, v17.8b // P2[6], P2[7], P4[6]...
+ trn2 v2.4h, v16.4h, v2.4h // P3[0], P3[1], P3[2], P3[3], P7[0]...
+ trn2 v1.4h, v1.4h, v3.4h // P4[0], P4[1], P4[2], P4[3], P8[0]...
+ trn1 v3.4h, v18.4h, v20.4h // P1[4], P1[5], P1[6], P1[7], P5[4]...
+ trn1 v16.4h, v5.4h, v7.4h // P2[4], P2[5], P2[6], P2[7], P6[4]...
+ trn2 v17.4h, v18.4h, v20.4h // P3[4], P3[5], P3[6], P3[7], P7[4]...
+ trn2 v5.4h, v5.4h, v7.4h // P4[4], P4[5], P4[6], P4[7], P8[4]...
+ trn1 v7.2s, v6.2s, v3.2s // P1
+ trn1 v18.2s, v19.2s, v16.2s // P2
+ trn2 v3.2s, v6.2s, v3.2s // P5
+ trn2 v6.2s, v19.2s, v16.2s // P6
+ trn1 v16.2s, v2.2s, v17.2s // P3
+ trn2 v2.2s, v2.2s, v17.2s // P7
+ ushll v7.8h, v7.8b, #1 // 2*P1
+ trn1 v17.2s, v1.2s, v5.2s // P4
+ ushll v19.8h, v3.8b, #1 // 2*P5
+ trn2 v1.2s, v1.2s, v5.2s // P8
+ uxtl v5.8h, v18.8b // P2
+ uxtl v6.8h, v6.8b // P6
+ uxtl v18.8h, v16.8b // P3
+ mls v7.8h, v5.8h, v0.h[1] // 2*P1-5*P2
+ uxtl v2.8h, v2.8b // P7
+ ushll v5.8h, v16.8b, #1 // 2*P3
+ mls v19.8h, v6.8h, v0.h[1] // 2*P5-5*P6
+ uxtl v16.8h, v17.8b // P4
+ uxtl v1.8h, v1.8b // P8
+ mla v19.8h, v2.8h, v0.h[1] // 2*P5-5*P6+5*P7
+ uxtl v2.8h, v3.8b // P5
+ mla v7.8h, v18.8h, v0.h[1] // 2*P1-5*P2+5*P3
+ sub v3.8h, v16.8h, v2.8h // P4-P5
+ mls v5.8h, v16.8h, v0.h[1] // 2*P3-5*P4
+ mls v19.8h, v1.8h, v0.h[0] // 2*P5-5*P6+5*P7-2*P8
+ abs v1.8h, v3.8h
+ sshr v3.8h, v3.8h, #8 // clip_sign
+ mls v7.8h, v16.8h, v0.h[0] // 2*P1-5*P2+5*P3-2*P4
+ sshr v1.8h, v1.8h, #1 // clip
+ mla v5.8h, v2.8h, v0.h[1] // 2*P3-5*P4+5*P5
+ srshr v17.8h, v19.8h, #3
+ mls v5.8h, v6.8h, v0.h[0] // 2*P3-5*P4+5*P5-2*P6
+ cmeq v6.8h, v1.8h, #0 // test clip == 0
+ srshr v7.8h, v7.8h, #3
+ abs v17.8h, v17.8h // a2
+ abs v7.8h, v7.8h // a1
+ srshr v5.8h, v5.8h, #3
+ cmhs v18.8h, v7.8h, v17.8h // test a1 >= a2
+ abs v19.8h, v5.8h // a0
+ sshr v5.8h, v5.8h, #8 // a0_sign
+ bsl v18.16b, v17.16b, v7.16b // a3
+ cmhs v4.8h, v19.8h, v4.8h // test a0 >= pq
+ sub v3.8h, v3.8h, v5.8h // clip_sign - a0_sign
+ uqsub v5.8h, v19.8h, v18.8h // a0 >= a3 ? a0-a3 : 0 (a0 > a3 in all cases where filtering is enabled, so makes more sense to subtract this way round than the opposite and then taking the abs)
+ cmhs v7.8h, v18.8h, v19.8h // test a3 >= a0
+ orr v4.16b, v6.16b, v4.16b // test clip == 0 || a0 >= pq
+ mul v0.8h, v5.8h, v0.h[1] // a0 >= a3 ? 5*(a0-a3) : 0
+ orr v5.16b, v4.16b, v7.16b // test clip == 0 || a0 >= pq || a3 >= a0
+ mov w2, v5.s[1] // move to gp reg
+ ushr v0.8h, v0.8h, #3 // a0 >= a3 ? (5*(a0-a3))>>3 : 0
+ mov w3, v5.s[3]
+ cmhs v5.8h, v0.8h, v1.8h
+ and w5, w2, w3
+ bsl v5.16b, v1.16b, v0.16b // FFMIN(d, clip)
+ tbnz w5, #0, 2f // none of the 8 pixel pairs should be updated in this case
+ bic v0.16b, v5.16b, v4.16b // set each d to zero if it should not be filtered because clip == 0 || a0 >= pq (a3 > a0 case already zeroed by saturating sub)
+ mla v2.8h, v0.8h, v3.8h // invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P5
+ mls v16.8h, v0.8h, v3.8h // invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P4
+ sqxtun v1.8b, v2.8h
+ sqxtun v0.8b, v16.8h
+ tbnz w2, #0, 1f // none of the first 4 pixel pairs should be updated if so
+ st2 {v0.b, v1.b}[0], [x0], x1
+ st2 {v0.b, v1.b}[1], [x0], x1
+ st2 {v0.b, v1.b}[2], [x0], x1
+ st2 {v0.b, v1.b}[3], [x0]
+1: tbnz w3, #0, 2f // none of the second 4 pixel pairs should be updated if so
+ st2 {v0.b, v1.b}[4], [x4], x1
+ st2 {v0.b, v1.b}[5], [x4], x1
+ st2 {v0.b, v1.b}[6], [x4], x1
+ st2 {v0.b, v1.b}[7], [x4]
+2: ret
+endfunc
+
+// VC-1 in-loop deblocking filter for 16 pixel pairs at boundary of vertically-neighbouring blocks
+// On entry:
+// x0 -> top-left pel of lower block
+// x1 = row stride, bytes
+// w2 = PQUANT bitstream parameter
+function ff_vc1_v_loop_filter16_neon, export=1
+ sub x3, x0, w1, sxtw #2
+ ldr d0, .Lcoeffs
+ ld1 {v1.16b}, [x0], x1 // P5
+ movi v2.2d, #0x0000ffff00000000
+ ld1 {v3.16b}, [x3], x1 // P1
+ ld1 {v4.16b}, [x3], x1 // P2
+ ld1 {v5.16b}, [x0], x1 // P6
+ ld1 {v6.16b}, [x3], x1 // P3
+ ld1 {v7.16b}, [x0], x1 // P7
+ ushll v16.8h, v1.8b, #1 // 2*P5[0..7]
+ ushll v17.8h, v3.8b, #1 // 2*P1[0..7]
+ ld1 {v18.16b}, [x3] // P4
+ uxtl v19.8h, v4.8b // P2[0..7]
+ ld1 {v20.16b}, [x0] // P8
+ uxtl v21.8h, v5.8b // P6[0..7]
+ dup v22.8h, w2 // pq
+ ushll2 v3.8h, v3.16b, #1 // 2*P1[8..15]
+ mls v17.8h, v19.8h, v0.h[1] // 2*P1[0..7]-5*P2[0..7]
+ ushll2 v19.8h, v1.16b, #1 // 2*P5[8..15]
+ uxtl2 v4.8h, v4.16b // P2[8..15]
+ mls v16.8h, v21.8h, v0.h[1] // 2*P5[0..7]-5*P6[0..7]
+ uxtl2 v5.8h, v5.16b // P6[8..15]
+ uxtl v23.8h, v6.8b // P3[0..7]
+ uxtl v24.8h, v7.8b // P7[0..7]
+ mls v3.8h, v4.8h, v0.h[1] // 2*P1[8..15]-5*P2[8..15]
+ ushll v4.8h, v6.8b, #1 // 2*P3[0..7]
+ uxtl v25.8h, v18.8b // P4[0..7]
+ mls v19.8h, v5.8h, v0.h[1] // 2*P5[8..15]-5*P6[8..15]
+ uxtl2 v26.8h, v6.16b // P3[8..15]
+ mla v17.8h, v23.8h, v0.h[1] // 2*P1[0..7]-5*P2[0..7]+5*P3[0..7]
+ uxtl2 v7.8h, v7.16b // P7[8..15]
+ ushll2 v6.8h, v6.16b, #1 // 2*P3[8..15]
+ mla v16.8h, v24.8h, v0.h[1] // 2*P5[0..7]-5*P6[0..7]+5*P7[0..7]
+ uxtl2 v18.8h, v18.16b // P4[8..15]
+ uxtl v23.8h, v20.8b // P8[0..7]
+ mls v4.8h, v25.8h, v0.h[1] // 2*P3[0..7]-5*P4[0..7]
+ uxtl v24.8h, v1.8b // P5[0..7]
+ uxtl2 v20.8h, v20.16b // P8[8..15]
+ mla v3.8h, v26.8h, v0.h[1] // 2*P1[8..15]-5*P2[8..15]+5*P3[8..15]
+ uxtl2 v1.8h, v1.16b // P5[8..15]
+ sub v26.8h, v25.8h, v24.8h // P4[0..7]-P5[0..7]
+ mla v19.8h, v7.8h, v0.h[1] // 2*P5[8..15]-5*P6[8..15]+5*P7[8..15]
+ sub v7.8h, v18.8h, v1.8h // P4[8..15]-P5[8..15]
+ mls v6.8h, v18.8h, v0.h[1] // 2*P3[8..15]-5*P4[8..15]
+ abs v27.8h, v26.8h
+ sshr v26.8h, v26.8h, #8 // clip_sign[0..7]
+ mls v17.8h, v25.8h, v0.h[0] // 2*P1[0..7]-5*P2[0..7]+5*P3[0..7]-2*P4[0..7]
+ abs v28.8h, v7.8h
+ sshr v27.8h, v27.8h, #1 // clip[0..7]
+ mls v16.8h, v23.8h, v0.h[0] // 2*P5[0..7]-5*P6[0..7]+5*P7[0..7]-2*P8[0..7]
+ sshr v7.8h, v7.8h, #8 // clip_sign[8..15]
+ sshr v23.8h, v28.8h, #1 // clip[8..15]
+ mla v4.8h, v24.8h, v0.h[1] // 2*P3[0..7]-5*P4[0..7]+5*P5[0..7]
+ cmeq v28.8h, v27.8h, #0 // test clip[0..7] == 0
+ srshr v17.8h, v17.8h, #3
+ mls v3.8h, v18.8h, v0.h[0] // 2*P1[8..15]-5*P2[8..15]+5*P3[8..15]-2*P4[8..15]
+ cmeq v29.8h, v23.8h, #0 // test clip[8..15] == 0
+ srshr v16.8h, v16.8h, #3
+ mls v19.8h, v20.8h, v0.h[0] // 2*P5[8..15]-5*P6[8..15]+5*P7[8..15]-2*P8[8..15]
+ abs v17.8h, v17.8h // a1[0..7]
+ mla v6.8h, v1.8h, v0.h[1] // 2*P3[8..15]-5*P4[8..15]+5*P5[8..15]
+ srshr v3.8h, v3.8h, #3
+ mls v4.8h, v21.8h, v0.h[0] // 2*P3[0..7]-5*P4[0..7]+5*P5[0..7]-2*P6[0..7]
+ abs v16.8h, v16.8h // a2[0..7]
+ srshr v19.8h, v19.8h, #3
+ mls v6.8h, v5.8h, v0.h[0] // 2*P3[8..15]-5*P4[8..15]+5*P5[8..15]-2*P6[8..15]
+ cmhs v5.8h, v17.8h, v16.8h // test a1[0..7] >= a2[0..7]
+ abs v3.8h, v3.8h // a1[8..15]
+ srshr v4.8h, v4.8h, #3
+ abs v19.8h, v19.8h // a2[8..15]
+ bsl v5.16b, v16.16b, v17.16b // a3[0..7]
+ srshr v6.8h, v6.8h, #3
+ cmhs v16.8h, v3.8h, v19.8h // test a1[8..15] >= a2[8..15]
+ abs v17.8h, v4.8h // a0[0..7]
+ sshr v4.8h, v4.8h, #8 // a0_sign[0..7]
+ bsl v16.16b, v19.16b, v3.16b // a3[8..15]
+ uqsub v3.8h, v17.8h, v5.8h // a0[0..7] >= a3[0..7] ? a0[0..7]-a3[0..7] : 0 (a0 > a3 in all cases where filtering is enabled, so makes more sense to subtract this way round than the opposite and then taking the abs)
+ abs v19.8h, v6.8h // a0[8..15]
+ cmhs v20.8h, v17.8h, v22.8h // test a0[0..7] >= pq
+ cmhs v5.8h, v5.8h, v17.8h // test a3[0..7] >= a0[0..7]
+ sub v4.8h, v26.8h, v4.8h // clip_sign[0..7] - a0_sign[0..7]
+ sshr v6.8h, v6.8h, #8 // a0_sign[8..15]
+ mul v3.8h, v3.8h, v0.h[1] // a0[0..7] >= a3[0..7] ? 5*(a0[0..7]-a3[0..7]) : 0
+ uqsub v17.8h, v19.8h, v16.8h // a0[8..15] >= a3[8..15] ? a0[8..15]-a3[8..15] : 0 (a0 > a3 in all cases where filtering is enabled, so makes more sense to subtract this way round than the opposite and then taking the abs)
+ orr v20.16b, v28.16b, v20.16b // test clip[0..7] == 0 || a0[0..7] >= pq
+ cmhs v21.8h, v19.8h, v22.8h // test a0[8..15] >= pq
+ cmhs v16.8h, v16.8h, v19.8h // test a3[8..15] >= a0[8..15]
+ mul v0.8h, v17.8h, v0.h[1] // a0[8..15] >= a3[8..15] ? 5*(a0[8..15]-a3[8..15]) : 0
+ sub v6.8h, v7.8h, v6.8h // clip_sign[8..15] - a0_sign[8..15]
+ orr v5.16b, v20.16b, v5.16b // test clip[0..7] == 0 || a0[0..7] >= pq || a3[0..7] >= a0[0..7]
+ ushr v3.8h, v3.8h, #3 // a0[0..7] >= a3[0..7] ? (5*(a0[0..7]-a3[0..7]))>>3 : 0
+ orr v7.16b, v29.16b, v21.16b // test clip[8..15] == 0 || a0[8..15] >= pq
+ cmtst v17.2d, v5.2d, v2.2d // if 2nd of each group of 4 is not filtered, then none of the others in the group should be either
+ mov w0, v5.s[1] // move to gp reg
+ cmhs v19.8h, v3.8h, v27.8h
+ ushr v0.8h, v0.8h, #3 // a0[8..15] >= a3[8..15] ? (5*(a0[8..15]-a3[8..15]))>>3 : 0
+ mov w2, v5.s[3]
+ orr v5.16b, v7.16b, v16.16b // test clip[8..15] == 0 || a0[8..15] >= pq || a3[8..15] >= a0[8..15]
+ orr v16.16b, v20.16b, v17.16b
+ bsl v19.16b, v27.16b, v3.16b // FFMIN(d[0..7], clip[0..7])
+ cmtst v2.2d, v5.2d, v2.2d
+ cmhs v3.8h, v0.8h, v23.8h
+ mov w4, v5.s[1]
+ mov w5, v5.s[3]
+ and w0, w0, w2
+ bic v5.16b, v19.16b, v16.16b // set each d[0..7] to zero if it should not be filtered because clip[0..7] == 0 || a0[0..7] >= pq (a3 > a0 case already zeroed by saturating sub)
+ orr v2.16b, v7.16b, v2.16b
+ bsl v3.16b, v23.16b, v0.16b // FFMIN(d[8..15], clip[8..15])
+ mls v25.8h, v5.8h, v4.8h // invert d[0..7] depending on clip_sign[0..7] & a0_sign[0..7], or zero it if they match, and accumulate into P4[0..7]
+ and w2, w4, w5
+ bic v0.16b, v3.16b, v2.16b // set each d[8..15] to zero if it should not be filtered because clip[8..15] == 0 || a0[8..15] >= pq (a3 > a0 case already zeroed by saturating sub)
+ mla v24.8h, v5.8h, v4.8h // invert d[0..7] depending on clip_sign[0..7] & a0_sign[0..7], or zero it if they match, and accumulate into P5[0..7]
+ and w0, w0, w2
+ mls v18.8h, v0.8h, v6.8h // invert d[8..15] depending on clip_sign[8..15] & a0_sign[8..15], or zero it if they match, and accumulate into P4[8..15]
+ sqxtun v2.8b, v25.8h
+ tbnz w0, #0, 1f // none of the 16 pixel pairs should be updated in this case
+ mla v1.8h, v0.8h, v6.8h // invert d[8..15] depending on clip_sign[8..15] & a0_sign[8..15], or zero it if they match, and accumulate into P5[8..15]
+ sqxtun v0.8b, v24.8h
+ sqxtun2 v2.16b, v18.8h
+ sqxtun2 v0.16b, v1.8h
+ st1 {v2.16b}, [x3], x1
+ st1 {v0.16b}, [x3]
+1: ret
+endfunc
+
+// VC-1 in-loop deblocking filter for 16 pixel pairs at boundary of horizontally-neighbouring blocks
+// On entry:
+// x0 -> top-left pel of right block
+// x1 = row stride, bytes
+// w2 = PQUANT bitstream parameter
+function ff_vc1_h_loop_filter16_neon, export=1
+ sub x3, x0, #4 // where to start reading
+ ldr d0, .Lcoeffs
+ ld1 {v1.8b}, [x3], x1 // P1[0], P2[0]...
+ sub x0, x0, #1 // where to start writing
+ ld1 {v2.8b}, [x3], x1
+ add x4, x0, x1, lsl #3
+ ld1 {v3.8b}, [x3], x1
+ add x5, x0, x1, lsl #2
+ ld1 {v4.8b}, [x3], x1
+ add x6, x4, x1, lsl #2
+ ld1 {v5.8b}, [x3], x1
+ ld1 {v6.8b}, [x3], x1
+ ld1 {v7.8b}, [x3], x1
+ trn1 v16.8b, v1.8b, v2.8b // P1[0], P1[1], P3[0]...
+ ld1 {v17.8b}, [x3], x1
+ trn2 v1.8b, v1.8b, v2.8b // P2[0], P2[1], P4[0]...
+ ld1 {v2.8b}, [x3], x1
+ trn1 v18.8b, v3.8b, v4.8b // P1[2], P1[3], P3[2]...
+ ld1 {v19.8b}, [x3], x1
+ trn2 v3.8b, v3.8b, v4.8b // P2[2], P2[3], P4[2]...
+ ld1 {v4.8b}, [x3], x1
+ trn1 v20.8b, v5.8b, v6.8b // P1[4], P1[5], P3[4]...
+ ld1 {v21.8b}, [x3], x1
+ trn2 v5.8b, v5.8b, v6.8b // P2[4], P2[5], P4[4]...
+ ld1 {v6.8b}, [x3], x1
+ trn1 v22.8b, v7.8b, v17.8b // P1[6], P1[7], P3[6]...
+ ld1 {v23.8b}, [x3], x1
+ trn2 v7.8b, v7.8b, v17.8b // P2[6], P2[7], P4[6]...
+ ld1 {v17.8b}, [x3], x1
+ trn1 v24.8b, v2.8b, v19.8b // P1[8], P1[9], P3[8]...
+ ld1 {v25.8b}, [x3]
+ trn2 v2.8b, v2.8b, v19.8b // P2[8], P2[9], P4[8]...
+ trn1 v19.4h, v16.4h, v18.4h // P1[0], P1[1], P1[2], P1[3], P5[0]...
+ trn1 v26.8b, v4.8b, v21.8b // P1[10], P1[11], P3[10]...
+ trn2 v4.8b, v4.8b, v21.8b // P2[10], P2[11], P4[10]...
+ trn1 v21.4h, v1.4h, v3.4h // P2[0], P2[1], P2[2], P2[3], P6[0]...
+ trn1 v27.4h, v20.4h, v22.4h // P1[4], P1[5], P1[6], P1[7], P5[4]...
+ trn1 v28.8b, v6.8b, v23.8b // P1[12], P1[13], P3[12]...
+ trn2 v6.8b, v6.8b, v23.8b // P2[12], P2[13], P4[12]...
+ trn1 v23.4h, v5.4h, v7.4h // P2[4], P2[5], P2[6], P2[7], P6[4]...
+ trn1 v29.4h, v24.4h, v26.4h // P1[8], P1[9], P1[10], P1[11], P5[8]...
+ trn1 v30.8b, v17.8b, v25.8b // P1[14], P1[15], P3[14]...
+ trn2 v17.8b, v17.8b, v25.8b // P2[14], P2[15], P4[14]...
+ trn1 v25.4h, v2.4h, v4.4h // P2[8], P2[9], P2[10], P2[11], P6[8]...
+ trn1 v31.2s, v19.2s, v27.2s // P1[0..7]
+ trn2 v19.2s, v19.2s, v27.2s // P5[0..7]
+ trn1 v27.2s, v21.2s, v23.2s // P2[0..7]
+ trn2 v21.2s, v21.2s, v23.2s // P6[0..7]
+ trn1 v23.4h, v28.4h, v30.4h // P1[12], P1[13], P1[14], P1[15], P5[12]...
+ trn2 v16.4h, v16.4h, v18.4h // P3[0], P3[1], P3[2], P3[3], P7[0]...
+ trn1 v18.4h, v6.4h, v17.4h // P2[12], P2[13], P2[14], P2[15], P6[12]...
+ trn2 v20.4h, v20.4h, v22.4h // P3[4], P3[5], P3[6], P3[7], P7[4]...
+ trn2 v22.4h, v24.4h, v26.4h // P3[8], P3[9], P3[10], P3[11], P7[8]...
+ trn1 v24.2s, v29.2s, v23.2s // P1[8..15]
+ trn2 v23.2s, v29.2s, v23.2s // P5[8..15]
+ trn1 v26.2s, v25.2s, v18.2s // P2[8..15]
+ trn2 v18.2s, v25.2s, v18.2s // P6[8..15]
+ trn2 v25.4h, v28.4h, v30.4h // P3[12], P3[13], P3[14], P3[15], P7[12]...
+ trn2 v1.4h, v1.4h, v3.4h // P4[0], P4[1], P4[2], P4[3], P8[0]...
+ trn2 v3.4h, v5.4h, v7.4h // P4[4], P4[5], P4[6], P4[7], P8[4]...
+ trn2 v2.4h, v2.4h, v4.4h // P4[8], P4[9], P4[10], P4[11], P8[8]...
+ trn2 v4.4h, v6.4h, v17.4h // P4[12], P4[13], P4[14], P4[15], P8[12]...
+ ushll v5.8h, v31.8b, #1 // 2*P1[0..7]
+ ushll v6.8h, v19.8b, #1 // 2*P5[0..7]
+ trn1 v7.2s, v16.2s, v20.2s // P3[0..7]
+ uxtl v17.8h, v27.8b // P2[0..7]
+ trn2 v16.2s, v16.2s, v20.2s // P7[0..7]
+ uxtl v20.8h, v21.8b // P6[0..7]
+ trn1 v21.2s, v22.2s, v25.2s // P3[8..15]
+ ushll v24.8h, v24.8b, #1 // 2*P1[8..15]
+ trn2 v22.2s, v22.2s, v25.2s // P7[8..15]
+ ushll v25.8h, v23.8b, #1 // 2*P5[8..15]
+ trn1 v27.2s, v1.2s, v3.2s // P4[0..7]
+ uxtl v26.8h, v26.8b // P2[8..15]
+ mls v5.8h, v17.8h, v0.h[1] // 2*P1[0..7]-5*P2[0..7]
+ uxtl v17.8h, v18.8b // P6[8..15]
+ mls v6.8h, v20.8h, v0.h[1] // 2*P5[0..7]-5*P6[0..7]
+ trn1 v18.2s, v2.2s, v4.2s // P4[8..15]
+ uxtl v28.8h, v7.8b // P3[0..7]
+ mls v24.8h, v26.8h, v0.h[1] // 2*P1[8..15]-5*P2[8..15]
+ uxtl v16.8h, v16.8b // P7[0..7]
+ uxtl v26.8h, v21.8b // P3[8..15]
+ mls v25.8h, v17.8h, v0.h[1] // 2*P5[8..15]-5*P6[8..15]
+ uxtl v22.8h, v22.8b // P7[8..15]
+ ushll v7.8h, v7.8b, #1 // 2*P3[0..7]
+ uxtl v27.8h, v27.8b // P4[0..7]
+ trn2 v1.2s, v1.2s, v3.2s // P8[0..7]
+ ushll v3.8h, v21.8b, #1 // 2*P3[8..15]
+ trn2 v2.2s, v2.2s, v4.2s // P8[8..15]
+ uxtl v4.8h, v18.8b // P4[8..15]
+ mla v5.8h, v28.8h, v0.h[1] // 2*P1[0..7]-5*P2[0..7]+5*P3[0..7]
+ uxtl v1.8h, v1.8b // P8[0..7]
+ mla v6.8h, v16.8h, v0.h[1] // 2*P5[0..7]-5*P6[0..7]+5*P7[0..7]
+ uxtl v2.8h, v2.8b // P8[8..15]
+ uxtl v16.8h, v19.8b // P5[0..7]
+ mla v24.8h, v26.8h, v0.h[1] // 2*P1[8..15]-5*P2[8..15]+5*P3[8..15]
+ uxtl v18.8h, v23.8b // P5[8..15]
+ dup v19.8h, w2 // pq
+ mla v25.8h, v22.8h, v0.h[1] // 2*P5[8..15]-5*P6[8..15]+5*P7[8..15]
+ sub v21.8h, v27.8h, v16.8h // P4[0..7]-P5[0..7]
+ sub v22.8h, v4.8h, v18.8h // P4[8..15]-P5[8..15]
+ mls v7.8h, v27.8h, v0.h[1] // 2*P3[0..7]-5*P4[0..7]
+ abs v23.8h, v21.8h
+ mls v3.8h, v4.8h, v0.h[1] // 2*P3[8..15]-5*P4[8..15]
+ abs v26.8h, v22.8h
+ sshr v21.8h, v21.8h, #8 // clip_sign[0..7]
+ mls v5.8h, v27.8h, v0.h[0] // 2*P1[0..7]-5*P2[0..7]+5*P3[0..7]-2*P4[0..7]
+ sshr v23.8h, v23.8h, #1 // clip[0..7]
+ sshr v26.8h, v26.8h, #1 // clip[8..15]
+ mls v6.8h, v1.8h, v0.h[0] // 2*P5[0..7]-5*P6[0..7]+5*P7[0..7]-2*P8[0..7]
+ sshr v1.8h, v22.8h, #8 // clip_sign[8..15]
+ cmeq v22.8h, v23.8h, #0 // test clip[0..7] == 0
+ mls v24.8h, v4.8h, v0.h[0] // 2*P1[8..15]-5*P2[8..15]+5*P3[8..15]-2*P4[8..15]
+ cmeq v28.8h, v26.8h, #0 // test clip[8..15] == 0
+ srshr v5.8h, v5.8h, #3
+ mls v25.8h, v2.8h, v0.h[0] // 2*P5[8..15]-5*P6[8..15]+5*P7[8..15]-2*P8[8..15]
+ srshr v2.8h, v6.8h, #3
+ mla v7.8h, v16.8h, v0.h[1] // 2*P3[0..7]-5*P4[0..7]+5*P5[0..7]
+ srshr v6.8h, v24.8h, #3
+ mla v3.8h, v18.8h, v0.h[1] // 2*P3[8..15]-5*P4[8..15]+5*P5[8..15]
+ abs v5.8h, v5.8h // a1[0..7]
+ srshr v24.8h, v25.8h, #3
+ mls v3.8h, v17.8h, v0.h[0] // 2*P3[8..15]-5*P4[8..15]+5*P5[8..15]-2*P6[8..15]
+ abs v2.8h, v2.8h // a2[0..7]
+ abs v6.8h, v6.8h // a1[8..15]
+ mls v7.8h, v20.8h, v0.h[0] // 2*P3[0..7]-5*P4[0..7]+5*P5[0..7]-2*P6[0..7]
+ abs v17.8h, v24.8h // a2[8..15]
+ cmhs v20.8h, v5.8h, v2.8h // test a1[0..7] >= a2[0..7]
+ srshr v3.8h, v3.8h, #3
+ cmhs v24.8h, v6.8h, v17.8h // test a1[8..15] >= a2[8..15]
+ srshr v7.8h, v7.8h, #3
+ bsl v20.16b, v2.16b, v5.16b // a3[0..7]
+ abs v2.8h, v3.8h // a0[8..15]
+ sshr v3.8h, v3.8h, #8 // a0_sign[8..15]
+ bsl v24.16b, v17.16b, v6.16b // a3[8..15]
+ abs v5.8h, v7.8h // a0[0..7]
+ sshr v6.8h, v7.8h, #8 // a0_sign[0..7]
+ cmhs v7.8h, v2.8h, v19.8h // test a0[8..15] >= pq
+ sub v1.8h, v1.8h, v3.8h // clip_sign[8..15] - a0_sign[8..15]
+ uqsub v3.8h, v2.8h, v24.8h // a0[8..15] >= a3[8..15] ? a0[8..15]-a3[8..15] : 0 (a0 > a3 in all cases where filtering is enabled, so makes more sense to subtract this way round than the opposite and then taking the abs)
+ cmhs v2.8h, v24.8h, v2.8h // test a3[8..15] >= a0[8..15]
+ uqsub v17.8h, v5.8h, v20.8h // a0[0..7] >= a3[0..7] ? a0[0..7]-a3[0..7] : 0 (a0 > a3 in all cases where filtering is enabled, so makes more sense to subtract this way round than the opposite and then taking the abs)
+ cmhs v19.8h, v5.8h, v19.8h // test a0[0..7] >= pq
+ orr v7.16b, v28.16b, v7.16b // test clip[8..15] == 0 || a0[8..15] >= pq
+ sub v6.8h, v21.8h, v6.8h // clip_sign[0..7] - a0_sign[0..7]
+ mul v3.8h, v3.8h, v0.h[1] // a0[8..15] >= a3[8..15] ? 5*(a0[8..15]-a3[8..15]) : 0
+ cmhs v5.8h, v20.8h, v5.8h // test a3[0..7] >= a0[0..7]
+ orr v19.16b, v22.16b, v19.16b // test clip[0..7] == 0 || a0[0..7] >= pq
+ mul v0.8h, v17.8h, v0.h[1] // a0[0..7] >= a3[0..7] ? 5*(a0[0..7]-a3[0..7]) : 0
+ orr v2.16b, v7.16b, v2.16b // test clip[8..15] == 0 || a0[8..15] >= pq || a3[8..15] >= a0[8..15]
+ orr v5.16b, v19.16b, v5.16b // test clip[0..7] == 0 || a0[0..7] >= pq || a3[0..7] >= a0[0..7]
+ ushr v3.8h, v3.8h, #3 // a0[8..15] >= a3[8..15] ? (5*(a0[8..15]-a3[8..15]))>>3 : 0
+ mov w7, v2.s[1]
+ mov w8, v2.s[3]
+ ushr v0.8h, v0.8h, #3 // a0[0..7] >= a3[0..7] ? (5*(a0[0..7]-a3[0..7]))>>3 : 0
+ mov w2, v5.s[1] // move to gp reg
+ cmhs v2.8h, v3.8h, v26.8h
+ mov w3, v5.s[3]
+ cmhs v5.8h, v0.8h, v23.8h
+ bsl v2.16b, v26.16b, v3.16b // FFMIN(d[8..15], clip[8..15])
+ and w9, w7, w8
+ bsl v5.16b, v23.16b, v0.16b // FFMIN(d[0..7], clip[0..7])
+ and w10, w2, w3
+ bic v0.16b, v2.16b, v7.16b // set each d[8..15] to zero if it should not be filtered because clip[8..15] == 0 || a0[8..15] >= pq (a3 > a0 case already zeroed by saturating sub)
+ and w9, w10, w9
+ bic v2.16b, v5.16b, v19.16b // set each d[0..7] to zero if it should not be filtered because clip[0..7] == 0 || a0[0..7] >= pq (a3 > a0 case already zeroed by saturating sub)
+ mls v4.8h, v0.8h, v1.8h // invert d[8..15] depending on clip_sign[8..15] & a0_sign[8..15], or zero it if they match, and accumulate into P4
+ tbnz w9, #0, 4f // none of the 16 pixel pairs should be updated in this case
+ mls v27.8h, v2.8h, v6.8h // invert d[0..7] depending on clip_sign[0..7] & a0_sign[0..7], or zero it if they match, and accumulate into P4
+ mla v16.8h, v2.8h, v6.8h // invert d[0..7] depending on clip_sign[0..7] & a0_sign[0..7], or zero it if they match, and accumulate into P5
+ sqxtun v2.8b, v4.8h
+ mla v18.8h, v0.8h, v1.8h // invert d[8..15] depending on clip_sign[8..15] & a0_sign[8..15], or zero it if they match, and accumulate into P5
+ sqxtun v0.8b, v27.8h
+ sqxtun v1.8b, v16.8h
+ sqxtun v3.8b, v18.8h
+ tbnz w2, #0, 1f
+ st2 {v0.b, v1.b}[0], [x0], x1
+ st2 {v0.b, v1.b}[1], [x0], x1
+ st2 {v0.b, v1.b}[2], [x0], x1
+ st2 {v0.b, v1.b}[3], [x0]
+1: tbnz w3, #0, 2f
+ st2 {v0.b, v1.b}[4], [x5], x1
+ st2 {v0.b, v1.b}[5], [x5], x1
+ st2 {v0.b, v1.b}[6], [x5], x1
+ st2 {v0.b, v1.b}[7], [x5]
+2: tbnz w7, #0, 3f
+ st2 {v2.b, v3.b}[0], [x4], x1
+ st2 {v2.b, v3.b}[1], [x4], x1
+ st2 {v2.b, v3.b}[2], [x4], x1
+ st2 {v2.b, v3.b}[3], [x4]
+3: tbnz w8, #0, 4f
+ st2 {v2.b, v3.b}[4], [x6], x1
+ st2 {v2.b, v3.b}[5], [x6], x1
+ st2 {v2.b, v3.b}[6], [x6], x1
+ st2 {v2.b, v3.b}[7], [x6]
+4: ret
+endfunc
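
For orientation, the per-pair arithmetic that the filters in this patch
vectorise can be sketched in scalar C roughly as follows. This is
reconstructed from the instruction comments above; the function and the
P1..P8 naming are illustrative only (P4|P5 is the pair straddling the
block edge), not FFmpeg's internal API:

    #include <stdlib.h>

    /* One boundary pixel pair; a rough scalar model, not the real code. */
    static void vc1_loop_filter_pair_sketch(int P[9], int pq)
    {
        int a0v  = (2 * P[3] - 5 * P[4] + 5 * P[5] - 2 * P[6] + 4) >> 3;
        int a0   = abs(a0v);
        int a1   = abs((2 * P[1] - 5 * P[2] + 5 * P[3] - 2 * P[4] + 4) >> 3);
        int a2   = abs((2 * P[5] - 5 * P[6] + 5 * P[7] - 2 * P[8] + 4) >> 3);
        int a3   = a1 >= a2 ? a2 : a1;            /* FFMIN(a1, a2) */
        int clip = abs(P[4] - P[5]) >> 1;

        if (clip == 0 || a0 >= pq || a3 >= a0)
            return;                               /* pair left unfiltered */

        int d = (5 * (a0 - a3)) >> 3;             /* a0 > a3 here, so no abs */
        if (d > clip)
            d = clip;                             /* FFMIN(d, clip) */

        /* Invert d depending on clip_sign & a0_sign, or zero it if they
         * match, exactly as the mla/mls pairs above do. */
        {
            int clip_sign = P[4] - P[5] < 0 ? -1 : 0;
            int a0_sign   = a0v < 0 ? -1 : 0;
            int s         = clip_sign - a0_sign;  /* -1, 0 or +1 */
            P[4] -= d * s;
            P[5] += d * s;
        }
    }

The NEON versions evaluate this for 4, 8 or 16 pairs at once (saturating
the results to 0..255 when narrowing back to bytes), and the lane moves
into general-purpose registers followed by tbnz are how whole groups of
stores are skipped when every pair in a group goes unfiltered.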
--
2.25.1
* [FFmpeg-devel] [PATCH v3 06/10] avcodec/vc1: Arm 32-bit NEON deblocking filter fast paths
2022-03-31 17:23 [FFmpeg-devel] [PATCH v3 00/10] avcodec/vc1: Arm optimisations Ben Avison
` (4 preceding siblings ...)
2022-03-31 17:23 ` [FFmpeg-devel] [PATCH v3 05/10] avcodec/vc1: Arm 64-bit NEON deblocking filter fast paths Ben Avison
@ 2022-03-31 17:23 ` Ben Avison
2022-03-31 17:23 ` [FFmpeg-devel] [PATCH v3 07/10] avcodec/vc1: Arm 64-bit NEON inverse transform " Ben Avison
` (4 subsequent siblings)
10 siblings, 0 replies; 13+ messages in thread
From: Ben Avison @ 2022-03-31 17:23 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Ben Avison
checkasm benchmarks on 1.5 GHz Cortex-A72 are as follows. Note that the C
version can still outperform the NEON version in specific cases. The balance
between different code paths is stream-dependent, but in practice the best
case happens about 5% of the time, the worst case happens about 40% of the
time, and the complexity of the remaining cases falls somewhere in between.
Therefore, taking the average of the best and worst case timings is
probably a conservative estimate of the degree by which the NEON code
improves performance (a worked example follows the figures below).
vc1dsp.vc1_h_loop_filter4_bestcase_c: 19.0
vc1dsp.vc1_h_loop_filter4_bestcase_neon: 48.5
vc1dsp.vc1_h_loop_filter4_worstcase_c: 144.7
vc1dsp.vc1_h_loop_filter4_worstcase_neon: 76.2
vc1dsp.vc1_h_loop_filter8_bestcase_c: 41.0
vc1dsp.vc1_h_loop_filter8_bestcase_neon: 75.0
vc1dsp.vc1_h_loop_filter8_worstcase_c: 294.0
vc1dsp.vc1_h_loop_filter8_worstcase_neon: 102.7
vc1dsp.vc1_h_loop_filter16_bestcase_c: 54.7
vc1dsp.vc1_h_loop_filter16_bestcase_neon: 130.0
vc1dsp.vc1_h_loop_filter16_worstcase_c: 569.7
vc1dsp.vc1_h_loop_filter16_worstcase_neon: 186.7
vc1dsp.vc1_v_loop_filter4_bestcase_c: 20.2
vc1dsp.vc1_v_loop_filter4_bestcase_neon: 47.2
vc1dsp.vc1_v_loop_filter4_worstcase_c: 164.2
vc1dsp.vc1_v_loop_filter4_worstcase_neon: 68.5
vc1dsp.vc1_v_loop_filter8_bestcase_c: 43.5
vc1dsp.vc1_v_loop_filter8_bestcase_neon: 55.2
vc1dsp.vc1_v_loop_filter8_worstcase_c: 316.2
vc1dsp.vc1_v_loop_filter8_worstcase_neon: 72.7
vc1dsp.vc1_v_loop_filter16_bestcase_c: 62.2
vc1dsp.vc1_v_loop_filter16_bestcase_neon: 103.7
vc1dsp.vc1_v_loop_filter16_worstcase_c: 646.5
vc1dsp.vc1_v_loop_filter16_worstcase_neon: 110.7
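As a worked example of that averaging, take vc1_v_loop_filter16: the C
version averages (62.2 + 646.5) / 2 = 354.35 cycles against
(103.7 + 110.7) / 2 = 107.2 cycles for NEON, an estimated speedup of
about 3.3x.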
Signed-off-by: Ben Avison <bavison@riscosopen.org>
---
libavcodec/arm/vc1dsp_init_neon.c | 14 +
libavcodec/arm/vc1dsp_neon.S | 643 ++++++++++++++++++++++++++++++
2 files changed, 657 insertions(+)
diff --git a/libavcodec/arm/vc1dsp_init_neon.c b/libavcodec/arm/vc1dsp_init_neon.c
index 2cca784f5a..f5f5c702d7 100644
--- a/libavcodec/arm/vc1dsp_init_neon.c
+++ b/libavcodec/arm/vc1dsp_init_neon.c
@@ -32,6 +32,13 @@ void ff_vc1_inv_trans_4x8_dc_neon(uint8_t *dest, ptrdiff_t stride, int16_t *bloc
void ff_vc1_inv_trans_8x4_dc_neon(uint8_t *dest, ptrdiff_t stride, int16_t *block);
void ff_vc1_inv_trans_4x4_dc_neon(uint8_t *dest, ptrdiff_t stride, int16_t *block);
+void ff_vc1_v_loop_filter4_neon(uint8_t *src, ptrdiff_t stride, int pq);
+void ff_vc1_h_loop_filter4_neon(uint8_t *src, ptrdiff_t stride, int pq);
+void ff_vc1_v_loop_filter8_neon(uint8_t *src, ptrdiff_t stride, int pq);
+void ff_vc1_h_loop_filter8_neon(uint8_t *src, ptrdiff_t stride, int pq);
+void ff_vc1_v_loop_filter16_neon(uint8_t *src, ptrdiff_t stride, int pq);
+void ff_vc1_h_loop_filter16_neon(uint8_t *src, ptrdiff_t stride, int pq);
+
void ff_put_pixels8x8_neon(uint8_t *block, const uint8_t *pixels,
ptrdiff_t line_size, int rnd);
@@ -92,6 +99,13 @@ av_cold void ff_vc1dsp_init_neon(VC1DSPContext *dsp)
dsp->vc1_inv_trans_8x4_dc = ff_vc1_inv_trans_8x4_dc_neon;
dsp->vc1_inv_trans_4x4_dc = ff_vc1_inv_trans_4x4_dc_neon;
+ dsp->vc1_v_loop_filter4 = ff_vc1_v_loop_filter4_neon;
+ dsp->vc1_h_loop_filter4 = ff_vc1_h_loop_filter4_neon;
+ dsp->vc1_v_loop_filter8 = ff_vc1_v_loop_filter8_neon;
+ dsp->vc1_h_loop_filter8 = ff_vc1_h_loop_filter8_neon;
+ dsp->vc1_v_loop_filter16 = ff_vc1_v_loop_filter16_neon;
+ dsp->vc1_h_loop_filter16 = ff_vc1_h_loop_filter16_neon;
+
dsp->put_vc1_mspel_pixels_tab[1][ 0] = ff_put_pixels8x8_neon;
FN_ASSIGN(1, 0);
FN_ASSIGN(2, 0);
diff --git a/libavcodec/arm/vc1dsp_neon.S b/libavcodec/arm/vc1dsp_neon.S
index 93f043bf08..ba54221ef6 100644
--- a/libavcodec/arm/vc1dsp_neon.S
+++ b/libavcodec/arm/vc1dsp_neon.S
@@ -1161,3 +1161,646 @@ function ff_vc1_inv_trans_4x4_dc_neon, export=1
vst1.32 {d1[1]}, [r0,:32]
bx lr
endfunc
+
+@ VC-1 in-loop deblocking filter for 4 pixel pairs at boundary of vertically-neighbouring blocks
+@ On entry:
+@ r0 -> top-left pel of lower block
+@ r1 = row stride, bytes
+@ r2 = PQUANT bitstream parameter
+function ff_vc1_v_loop_filter4_neon, export=1
+ sub r3, r0, r1, lsl #2
+ vldr d0, .Lcoeffs
+ vld1.32 {d1[0]}, [r0], r1 @ P5
+ vld1.32 {d2[0]}, [r3], r1 @ P1
+ vld1.32 {d3[0]}, [r3], r1 @ P2
+ vld1.32 {d4[0]}, [r0], r1 @ P6
+ vld1.32 {d5[0]}, [r3], r1 @ P3
+ vld1.32 {d6[0]}, [r0], r1 @ P7
+ vld1.32 {d7[0]}, [r3] @ P4
+ vld1.32 {d16[0]}, [r0] @ P8
+ vshll.u8 q9, d1, #1 @ 2*P5
+ vdup.16 d17, r2 @ pq
+ vshll.u8 q10, d2, #1 @ 2*P1
+ vmovl.u8 q11, d3 @ P2
+ vmovl.u8 q1, d4 @ P6
+ vmovl.u8 q12, d5 @ P3
+ vmls.i16 d20, d22, d0[1] @ 2*P1-5*P2
+ vmovl.u8 q11, d6 @ P7
+ vmls.i16 d18, d2, d0[1] @ 2*P5-5*P6
+ vshll.u8 q2, d5, #1 @ 2*P3
+ vmovl.u8 q3, d7 @ P4
+ vmla.i16 d18, d22, d0[1] @ 2*P5-5*P6+5*P7
+ vmovl.u8 q11, d16 @ P8
+ vmla.i16 d20, d24, d0[1] @ 2*P1-5*P2+5*P3
+ vmovl.u8 q12, d1 @ P5
+ vmls.i16 d4, d6, d0[1] @ 2*P3-5*P4
+ vmls.i16 d18, d22, d0[0] @ 2*P5-5*P6+5*P7-2*P8
+ vsub.i16 d1, d6, d24 @ P4-P5
+ vmls.i16 d20, d6, d0[0] @ 2*P1-5*P2+5*P3-2*P4
+ vmla.i16 d4, d24, d0[1] @ 2*P3-5*P4+5*P5
+ vmls.i16 d4, d2, d0[0] @ 2*P3-5*P4+5*P5-2*P6
+ vabs.s16 d2, d1
+ vrshr.s16 d3, d18, #3
+ vrshr.s16 d5, d20, #3
+ vshr.s16 d2, d2, #1 @ clip
+ vrshr.s16 d4, d4, #3
+ vabs.s16 d3, d3 @ a2
+ vshr.s16 d1, d1, #8 @ clip_sign
+ vabs.s16 d5, d5 @ a1
+ vceq.i16 d7, d2, #0 @ test clip == 0
+ vabs.s16 d16, d4 @ a0
+ vshr.s16 d4, d4, #8 @ a0_sign
+ vcge.s16 d18, d5, d3 @ test a1 >= a2
+ vcge.s16 d17, d16, d17 @ test a0 >= pq
+ vbsl d18, d3, d5 @ a3
+ vsub.i16 d1, d1, d4 @ clip_sign - a0_sign
+ vorr d3, d7, d17 @ test clip == 0 || a0 >= pq
+ vqsub.u16 d4, d16, d18 @ a0 >= a3 ? a0-a3 : 0 (a0 > a3 in all cases where filtering is enabled, so makes more sense to subtract this way round than the opposite and then taking the abs)
+ vcge.s16 d5, d18, d16 @ test a3 >= a0
+ vmul.i16 d0, d4, d0[1] @ a0 >= a3 ? 5*(a0-a3) : 0
+ vorr d4, d3, d5 @ test clip == 0 || a0 >= pq || a3 >= a0
+ vmov.32 r0, d4[1] @ move to gp reg
+ vshr.u16 d0, d0, #3 @ a0 >= a3 ? (5*(a0-a3))>>3 : 0
+ vcge.s16 d4, d0, d2
+ tst r0, #1
+ bne 1f @ none of the 4 pixel pairs should be updated if this one is not filtered
+ vbsl d4, d2, d0 @ FFMIN(d, clip)
+ vbic d0, d4, d3 @ set each d to zero if it should not be filtered because clip == 0 || a0 >= pq (a3 > a0 case already zeroed by saturating sub)
+ vmls.i16 d6, d0, d1 @ invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P4
+ vmla.i16 d24, d0, d1 @ invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P5
+ vqmovun.s16 d0, q3
+ vqmovun.s16 d1, q12
+ vst1.32 {d0[0]}, [r3], r1
+ vst1.32 {d1[0]}, [r3]
+1: bx lr
+endfunc
+
+@ VC-1 in-loop deblocking filter for 4 pixel pairs at boundary of horizontally-neighbouring blocks
+@ On entry:
+@ r0 -> top-left pel of right block
+@ r1 = row stride, bytes
+@ r2 = PQUANT bitstream parameter
+function ff_vc1_h_loop_filter4_neon, export=1
+ sub r3, r0, #4 @ where to start reading
+ vldr d0, .Lcoeffs
+ vld1.32 {d2}, [r3], r1
+ sub r0, r0, #1 @ where to start writing
+ vld1.32 {d4}, [r3], r1
+ vld1.32 {d3}, [r3], r1
+ vld1.32 {d5}, [r3]
+ vdup.16 d1, r2 @ pq
+ vtrn.8 q1, q2
+ vtrn.16 d2, d3 @ P1, P5, P3, P7
+ vtrn.16 d4, d5 @ P2, P6, P4, P8
+ vshll.u8 q3, d2, #1 @ 2*P1, 2*P5
+ vmovl.u8 q8, d4 @ P2, P6
+ vmovl.u8 q9, d3 @ P3, P7
+ vmovl.u8 q2, d5 @ P4, P8
+ vmls.i16 q3, q8, d0[1] @ 2*P1-5*P2, 2*P5-5*P6
+ vshll.u8 q10, d3, #1 @ 2*P3, 2*P7
+ vmovl.u8 q1, d2 @ P1, P5
+ vmla.i16 q3, q9, d0[1] @ 2*P1-5*P2+5*P3, 2*P5-5*P6+5*P7
+ vmls.i16 q3, q2, d0[0] @ 2*P1-5*P2+5*P3-2*P4, 2*P5-5*P6+5*P7-2*P8
+ vmov d2, d3 @ needs to be in an even-numbered vector for when we come to narrow it later
+ vmls.i16 d20, d4, d0[1] @ 2*P3-5*P4
+ vmla.i16 d20, d3, d0[1] @ 2*P3-5*P4+5*P5
+ vsub.i16 d3, d4, d2 @ P4-P5
+ vmls.i16 d20, d17, d0[0] @ 2*P3-5*P4+5*P5-2*P6
+ vrshr.s16 q3, q3, #3
+ vabs.s16 d5, d3
+ vshr.s16 d3, d3, #8 @ clip_sign
+ vrshr.s16 d16, d20, #3
+ vabs.s16 q3, q3 @ a1, a2
+ vshr.s16 d5, d5, #1 @ clip
+ vabs.s16 d17, d16 @ a0
+ vceq.i16 d18, d5, #0 @ test clip == 0
+ vshr.s16 d16, d16, #8 @ a0_sign
+ vcge.s16 d19, d6, d7 @ test a1 >= a2
+ vcge.s16 d1, d17, d1 @ test a0 >= pq
+ vsub.i16 d16, d3, d16 @ clip_sign - a0_sign
+ vbsl d19, d7, d6 @ a3
+ vorr d1, d18, d1 @ test clip == 0 || a0 >= pq
+ vqsub.u16 d3, d17, d19 @ a0 >= a3 ? a0-a3 : 0 (a0 > a3 in all cases where filtering is enabled, so makes more sense to subtract this way round than the opposite and then taking the abs)
+ vcge.s16 d6, d19, d17 @ test a3 >= a0
+ vmul.i16 d0, d3, d0[1] @ a0 >= a3 ? 5*(a0-a3) : 0
+ vorr d3, d1, d6 @ test clip == 0 || a0 >= pq || a3 >= a0
+ vmov.32 r2, d3[1] @ move to gp reg
+ vshr.u16 d0, d0, #3 @ a0 >= a3 ? (5*(a0-a3))>>3 : 0
+ vcge.s16 d3, d0, d5
+ tst r2, #1
+ bne 1f @ none of the 4 pixel pairs should be updated if this one is not filtered
+ vbsl d3, d5, d0 @ FFMIN(d, clip)
+ vbic d0, d3, d1 @ set each d to zero if it should not be filtered because clip == 0 || a0 >= pq (a3 > a0 case already zeroed by saturating sub)
+ vmla.i16 d2, d0, d16 @ invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P5
+ vmls.i16 d4, d0, d16 @ invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P4
+ vqmovun.s16 d1, q1
+ vqmovun.s16 d0, q2
+ vst2.8 {d0[0], d1[0]}, [r0], r1
+ vst2.8 {d0[1], d1[1]}, [r0], r1
+ vst2.8 {d0[2], d1[2]}, [r0], r1
+ vst2.8 {d0[3], d1[3]}, [r0]
+1: bx lr
+endfunc
+
+@ VC-1 in-loop deblocking filter for 8 pixel pairs at boundary of vertically-neighbouring blocks
+@ On entry:
+@ r0 -> top-left pel of lower block
+@ r1 = row stride, bytes
+@ r2 = PQUANT bitstream parameter
+function ff_vc1_v_loop_filter8_neon, export=1
+ sub r3, r0, r1, lsl #2
+ vldr d0, .Lcoeffs
+ vld1.32 {d1}, [r0 :64], r1 @ P5
+ vld1.32 {d2}, [r3 :64], r1 @ P1
+ vld1.32 {d3}, [r3 :64], r1 @ P2
+ vld1.32 {d4}, [r0 :64], r1 @ P6
+ vld1.32 {d5}, [r3 :64], r1 @ P3
+ vld1.32 {d6}, [r0 :64], r1 @ P7
+ vshll.u8 q8, d1, #1 @ 2*P5
+ vshll.u8 q9, d2, #1 @ 2*P1
+ vld1.32 {d7}, [r3 :64] @ P4
+ vmovl.u8 q1, d3 @ P2
+ vld1.32 {d20}, [r0 :64] @ P8
+ vmovl.u8 q11, d4 @ P6
+ vdup.16 q12, r2 @ pq
+ vmovl.u8 q13, d5 @ P3
+ vmls.i16 q9, q1, d0[1] @ 2*P1-5*P2
+ vmovl.u8 q1, d6 @ P7
+ vshll.u8 q2, d5, #1 @ 2*P3
+ vmls.i16 q8, q11, d0[1] @ 2*P5-5*P6
+ vmovl.u8 q3, d7 @ P4
+ vmovl.u8 q10, d20 @ P8
+ vmla.i16 q8, q1, d0[1] @ 2*P5-5*P6+5*P7
+ vmovl.u8 q1, d1 @ P5
+ vmla.i16 q9, q13, d0[1] @ 2*P1-5*P2+5*P3
+ vsub.i16 q13, q3, q1 @ P4-P5
+ vmls.i16 q2, q3, d0[1] @ 2*P3-5*P4
+ vmls.i16 q8, q10, d0[0] @ 2*P5-5*P6+5*P7-2*P8
+ vabs.s16 q10, q13
+ vshr.s16 q13, q13, #8 @ clip_sign
+ vmls.i16 q9, q3, d0[0] @ 2*P1-5*P2+5*P3-2*P4
+ vshr.s16 q10, q10, #1 @ clip
+ vmla.i16 q2, q1, d0[1] @ 2*P3-5*P4+5*P5
+ vrshr.s16 q8, q8, #3
+ vmls.i16 q2, q11, d0[0] @ 2*P3-5*P4+5*P5-2*P6
+ vceq.i16 q11, q10, #0 @ test clip == 0
+ vrshr.s16 q9, q9, #3
+ vabs.s16 q8, q8 @ a2
+ vabs.s16 q9, q9 @ a1
+ vrshr.s16 q2, q2, #3
+ vcge.s16 q14, q9, q8 @ test a1 >= a2
+ vabs.s16 q15, q2 @ a0
+ vshr.s16 q2, q2, #8 @ a0_sign
+ vbsl q14, q8, q9 @ a3
+ vcge.s16 q8, q15, q12 @ test a0 >= pq
+ vsub.i16 q2, q13, q2 @ clip_sign - a0_sign
+ vqsub.u16 q9, q15, q14 @ a0 >= a3 ? a0-a3 : 0 (a0 > a3 in all cases where filtering is enabled, so makes more sense to subtract this way round than the opposite and then taking the abs)
+ vcge.s16 q12, q14, q15 @ test a3 >= a0
+ vorr q8, q11, q8 @ test clip == 0 || a0 >= pq
+ vmul.i16 q0, q9, d0[1] @ a0 >= a3 ? 5*(a0-a3) : 0
+ vorr q9, q8, q12 @ test clip == 0 || a0 >= pq || a3 >= a0
+ vshl.i64 q11, q9, #16
+ vmov.32 r0, d18[1] @ move to gp reg
+ vshr.u16 q0, q0, #3 @ a0 >= a3 ? (5*(a0-a3))>>3 : 0
+ vmov.32 r2, d19[1]
+ vshr.s64 q9, q11, #48
+ vcge.s16 q11, q0, q10
+ vorr q8, q8, q9
+ and r0, r0, r2
+ vbsl q11, q10, q0 @ FFMIN(d, clip)
+ tst r0, #1
+ bne 1f @ none of the 8 pixel pairs should be updated in this case
+ vbic q0, q11, q8 @ set each d to zero if it should not be filtered
+ vmls.i16 q3, q0, q2 @ invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P4
+ vmla.i16 q1, q0, q2 @ invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P5
+ vqmovun.s16 d0, q3
+ vqmovun.s16 d1, q1
+ vst1.32 {d0}, [r3 :64], r1
+ vst1.32 {d1}, [r3 :64]
+1: bx lr
+endfunc
+
+.align 5
+.Lcoeffs:
+.quad 0x00050002
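+@ Halfword lane 0 holds 2 and lane 1 holds 5, the tap magnitudes of the
+@ (2,-5,5,-2) kernel used throughout, read back as d0[0] and d0[1].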
+
+@ VC-1 in-loop deblocking filter for 8 pixel pairs at boundary of horizontally-neighbouring blocks
+@ On entry:
+@ r0 -> top-left pel of right block
+@ r1 = row stride, bytes
+@ r2 = PQUANT bitstream parameter
+function ff_vc1_h_loop_filter8_neon, export=1
+ push {lr}
+ sub r3, r0, #4 @ where to start reading
+ vldr d0, .Lcoeffs
+ vld1.32 {d2}, [r3], r1 @ P1[0], P2[0]...
+ sub r0, r0, #1 @ where to start writing
+ vld1.32 {d4}, [r3], r1
+ add r12, r0, r1, lsl #2
+ vld1.32 {d3}, [r3], r1
+ vld1.32 {d5}, [r3], r1
+ vld1.32 {d6}, [r3], r1
+ vld1.32 {d16}, [r3], r1
+ vld1.32 {d7}, [r3], r1
+ vld1.32 {d17}, [r3]
+ vtrn.8 q1, q2 @ P1[0], P1[1], P3[0]... P1[2], P1[3], P3[2]... P2[0], P2[1], P4[0]... P2[2], P2[3], P4[2]...
+ vdup.16 q9, r2 @ pq
+ vtrn.16 d2, d3 @ P1[0], P1[1], P1[2], P1[3], P5[0]... P3[0], P3[1], P3[2], P3[3], P7[0]...
+ vtrn.16 d4, d5 @ P2[0], P2[1], P2[2], P2[3], P6[0]... P4[0], P4[1], P4[2], P4[3], P8[0]...
+ vtrn.8 q3, q8 @ P1[4], P1[5], P3[4]... P1[6], P1[7], P3[6]... P2[4], P2[5], P4[4]... P2[6], P2[7], P4[6]...
+ vtrn.16 d6, d7 @ P1[4], P1[5], P1[6], P1[7], P5[4]... P3[4], P3[5], P3[6], P3[7], P7[4]...
+ vtrn.16 d16, d17 @ P2[4], P2[5], P2[6], P2[7], P6[4]... P4[4], P4[5], P4[6], P4[7], P8[4]...
+ vtrn.32 d2, d6 @ P1, P5
+ vtrn.32 d4, d16 @ P2, P6
+ vtrn.32 d3, d7 @ P3, P7
+ vtrn.32 d5, d17 @ P4, P8
+ vshll.u8 q10, d2, #1 @ 2*P1
+ vshll.u8 q11, d6, #1 @ 2*P5
+ vmovl.u8 q12, d4 @ P2
+ vmovl.u8 q13, d16 @ P6
+ vmovl.u8 q14, d3 @ P3
+ vmls.i16 q10, q12, d0[1] @ 2*P1-5*P2
+ vmovl.u8 q12, d7 @ P7
+ vshll.u8 q1, d3, #1 @ 2*P3
+ vmls.i16 q11, q13, d0[1] @ 2*P5-5*P6
+ vmovl.u8 q2, d5 @ P4
+ vmovl.u8 q8, d17 @ P8
+ vmla.i16 q11, q12, d0[1] @ 2*P5-5*P6+5*P7
+ vmovl.u8 q3, d6 @ P5
+ vmla.i16 q10, q14, d0[1] @ 2*P1-5*P2+5*P3
+ vsub.i16 q12, q2, q3 @ P4-P5
+ vmls.i16 q1, q2, d0[1] @ 2*P3-5*P4
+ vmls.i16 q11, q8, d0[0] @ 2*P5-5*P6+5*P7-2*P8
+ vabs.s16 q8, q12
+ vshr.s16 q12, q12, #8 @ clip_sign
+ vmls.i16 q10, q2, d0[0] @ 2*P1-5*P2+5*P3-2*P4
+ vshr.s16 q8, q8, #1 @ clip
+ vmla.i16 q1, q3, d0[1] @ 2*P3-5*P4+5*P5
+ vrshr.s16 q11, q11, #3
+ vmls.i16 q1, q13, d0[0] @ 2*P3-5*P4+5*P5-2*P6
+ vceq.i16 q13, q8, #0 @ test clip == 0
+ vrshr.s16 q10, q10, #3
+ vabs.s16 q11, q11 @ a2
+ vabs.s16 q10, q10 @ a1
+ vrshr.s16 q1, q1, #3
+ vcge.s16 q14, q10, q11 @ test a1 >= a2
+ vabs.s16 q15, q1 @ a0
+ vshr.s16 q1, q1, #8 @ a0_sign
+ vbsl q14, q11, q10 @ a3
+ vcge.s16 q9, q15, q9 @ test a0 >= pq
+ vsub.i16 q1, q12, q1 @ clip_sign - a0_sign
+ vqsub.u16 q10, q15, q14 @ a0 >= a3 ? a0-a3 : 0 (a0 > a3 in all cases where filtering is enabled, so makes more sense to subtract this way round than the opposite and then taking the abs)
+ vcge.s16 q11, q14, q15 @ test a3 >= a0
+ vorr q9, q13, q9 @ test clip == 0 || a0 >= pq
+ vmul.i16 q0, q10, d0[1] @ a0 >= a3 ? 5*(a0-a3) : 0
+ vorr q10, q9, q11 @ test clip == 0 || a0 >= pq || a3 >= a0
+ vmov.32 r2, d20[1] @ move to gp reg
+ vshr.u16 q0, q0, #3 @ a0 >= a3 ? (5*(a0-a3))>>3 : 0
+ vmov.32 r3, d21[1]
+ vcge.s16 q10, q0, q8
+ and r14, r2, r3
+ vbsl q10, q8, q0 @ FFMIN(d, clip)
+ tst r14, #1
+ bne 2f @ none of the 8 pixel pairs should be updated in this case
+ vbic q0, q10, q9 @ set each d to zero if it should not be filtered because clip == 0 || a0 >= pq (a3 > a0 case already zeroed by saturating sub)
+ vmla.i16 q3, q0, q1 @ invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P5
+ vmls.i16 q2, q0, q1 @ invert d depending on clip_sign & a0_sign, or zero it if they match, and accumulate into P4
+ vqmovun.s16 d1, q3
+ vqmovun.s16 d0, q2
+ tst r2, #1
+ bne 1f @ none of the first 4 pixel pairs should be updated if so
+ vst2.8 {d0[0], d1[0]}, [r0], r1
+ vst2.8 {d0[1], d1[1]}, [r0], r1
+ vst2.8 {d0[2], d1[2]}, [r0], r1
+ vst2.8 {d0[3], d1[3]}, [r0]
+1: tst r3, #1
+ bne 2f @ none of the second 4 pixel pairs should be updated if so
+ vst2.8 {d0[4], d1[4]}, [r12], r1
+ vst2.8 {d0[5], d1[5]}, [r12], r1
+ vst2.8 {d0[6], d1[6]}, [r12], r1
+ vst2.8 {d0[7], d1[7]}, [r12]
+2: pop {pc}
+endfunc
+
+@ VC-1 in-loop deblocking filter for 16 pixel pairs at boundary of vertically-neighbouring blocks
+@ On entry:
+@ r0 -> top-left pel of lower block
+@ r1 = row stride, bytes
+@ r2 = PQUANT bitstream parameter
+function ff_vc1_v_loop_filter16_neon, export=1
+ vpush {d8-d15}
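+@ d8-d15 are callee-saved under AAPCS; this 16-pixel-wide filter needs
+@ the whole NEON register file, hence the vpush/vpop pair.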
+ sub r3, r0, r1, lsl #2
+ vldr d0, .Lcoeffs
+ vld1.64 {q1}, [r0 :128], r1 @ P5
+ vld1.64 {q2}, [r3 :128], r1 @ P1
+ vld1.64 {q3}, [r3 :128], r1 @ P2
+ vld1.64 {q4}, [r0 :128], r1 @ P6
+ vld1.64 {q5}, [r3 :128], r1 @ P3
+ vld1.64 {q6}, [r0 :128], r1 @ P7
+ vshll.u8 q7, d2, #1 @ 2*P5[0..7]
+ vshll.u8 q8, d4, #1 @ 2*P1[0..7]
+ vld1.64 {q9}, [r3 :128] @ P4
+ vmovl.u8 q10, d6 @ P2[0..7]
+ vld1.64 {q11}, [r0 :128] @ P8
+ vmovl.u8 q12, d8 @ P6[0..7]
+ vdup.16 q13, r2 @ pq
+ vshll.u8 q2, d5, #1 @ 2*P1[8..15]
+ vmls.i16 q8, q10, d0[1] @ 2*P1[0..7]-5*P2[0..7]
+ vshll.u8 q10, d3, #1 @ 2*P5[8..15]
+ vmovl.u8 q3, d7 @ P2[8..15]
+ vmls.i16 q7, q12, d0[1] @ 2*P5[0..7]-5*P6[0..7]
+ vmovl.u8 q4, d9 @ P6[8..15]
+ vmovl.u8 q14, d10 @ P3[0..7]
+ vmovl.u8 q15, d12 @ P7[0..7]
+ vmls.i16 q2, q3, d0[1] @ 2*P1[8..15]-5*P2[8..15]
+ vshll.u8 q3, d10, #1 @ 2*P3[0..7]
+ vmls.i16 q10, q4, d0[1] @ 2*P5[8..15]-5*P6[8..15]
+ vmovl.u8 q6, d13 @ P7[8..15]
+ vmla.i16 q8, q14, d0[1] @ 2*P1[0..7]-5*P2[0..7]+5*P3[0..7]
+ vmovl.u8 q14, d18 @ P4[0..7]
+ vmovl.u8 q9, d19 @ P4[8..15]
+ vmla.i16 q7, q15, d0[1] @ 2*P5[0..7]-5*P6[0..7]+5*P7[0..7]
+ vmovl.u8 q15, d11 @ P3[8..15]
+ vshll.u8 q5, d11, #1 @ 2*P3[8..15]
+ vmls.i16 q3, q14, d0[1] @ 2*P3[0..7]-5*P4[0..7]
+ vmla.i16 q2, q15, d0[1] @ 2*P1[8..15]-5*P2[8..15]+5*P3[8..15]
+ vmovl.u8 q15, d22 @ P8[0..7]
+ vmovl.u8 q11, d23 @ P8[8..15]
+ vmla.i16 q10, q6, d0[1] @ 2*P5[8..15]-5*P6[8..15]+5*P7[8..15]
+ vmovl.u8 q6, d2 @ P5[0..7]
+ vmovl.u8 q1, d3 @ P5[8..15]
+ vmls.i16 q5, q9, d0[1] @ 2*P3[8..15]-5*P4[8..15]
+ vmls.i16 q8, q14, d0[0] @ 2*P1[0..7]-5*P2[0..7]+5*P3[0..7]-2*P4[0..7]
+ vmls.i16 q7, q15, d0[0] @ 2*P5[0..7]-5*P6[0..7]+5*P7[0..7]-2*P8[0..7]
+ vsub.i16 q15, q14, q6 @ P4[0..7]-P5[0..7]
+ vmla.i16 q3, q6, d0[1] @ 2*P3[0..7]-5*P4[0..7]+5*P5[0..7]
+ vrshr.s16 q8, q8, #3
+ vmls.i16 q2, q9, d0[0] @ 2*P1[8..15]-5*P2[8..15]+5*P3[8..15]-2*P4[8..15]
+ vrshr.s16 q7, q7, #3
+ vmls.i16 q10, q11, d0[0] @ 2*P5[8..15]-5*P6[8..15]+5*P7[8..15]-2*P8[8..15]
+ vabs.s16 q11, q15
+ vabs.s16 q8, q8 @ a1[0..7]
+ vmla.i16 q5, q1, d0[1] @ 2*P3[8..15]-5*P4[8..15]+5*P5[8..15]
+ vshr.s16 q15, q15, #8 @ clip_sign[0..7]
+ vrshr.s16 q2, q2, #3
+ vmls.i16 q3, q12, d0[0] @ 2*P3[0..7]-5*P4[0..7]+5*P5[0..7]-2*P6[0..7]
+ vabs.s16 q7, q7 @ a2[0..7]
+ vrshr.s16 q10, q10, #3
+ vsub.i16 q12, q9, q1 @ P4[8..15]-P5[8..15]
+ vshr.s16 q11, q11, #1 @ clip[0..7]
+ vmls.i16 q5, q4, d0[0] @ 2*P3[8..15]-5*P4[8..15]+5*P5[8..15]-2*P6[8..15]
+ vcge.s16 q4, q8, q7 @ test a1[0..7] >= a2[0..7]
+ vabs.s16 q2, q2 @ a1[8..15]
+ vrshr.s16 q3, q3, #3
+ vabs.s16 q10, q10 @ a2[8..15]
+ vbsl q4, q7, q8 @ a3[0..7]
+ vabs.s16 q7, q12
+ vshr.s16 q8, q12, #8 @ clip_sign[8..15]
+ vrshr.s16 q5, q5, #3
+ vcge.s16 q12, q2, q10 @ test a1[8..15] >= a2[8..15]
+ vshr.s16 q7, q7, #1 @ clip[8..15]
+ vbsl q12, q10, q2 @ a3[8..15]
+ vabs.s16 q2, q3 @ a0[0..7]
+ vceq.i16 q10, q11, #0 @ test clip[0..7] == 0
+ vshr.s16 q3, q3, #8 @ a0_sign[0..7]
+ vsub.i16 q3, q15, q3 @ clip_sign[0..7] - a0_sign[0..7]
+ vcge.s16 q15, q2, q13 @ test a0[0..7] >= pq
+ vorr q10, q10, q15 @ test clip[0..7] == 0 || a0[0..7] >= pq
+ vqsub.u16 q15, q2, q4 @ a0[0..7] >= a3[0..7] ? a0[0..7]-a3[0..7] : 0 (a0 > a3 in all cases where filtering is enabled, so makes more sense to subtract this way round than the opposite and then taking the abs)
+ vcge.s16 q2, q4, q2 @ test a3[0..7] >= a0[0..7]
+ vabs.s16 q4, q5 @ a0[8..15]
+ vshr.s16 q5, q5, #8 @ a0_sign[8..15]
+ vmul.i16 q15, q15, d0[1] @ a0[0..7] >= a3[0..7] ? 5*(a0[0..7]-a3[0..7]) : 0
+ vcge.s16 q13, q4, q13 @ test a0[8..15] >= pq
+ vorr q2, q10, q2 @ test clip[0..7] == 0 || a0[0..7] >= pq || a3[0..7] >= a0[0..7]
+ vsub.i16 q5, q8, q5 @ clip_sign[8..15] - a0_sign[8..15]
+ vceq.i16 q8, q7, #0 @ test clip[8..15] == 0
+ vshr.u16 q15, q15, #3 @ a0[0..7] >= a3[0..7] ? (5*(a0[0..7]-a3[0..7]))>>3 : 0
+ vmov.32 r0, d4[1] @ move to gp reg
+ vorr q8, q8, q13 @ test clip[8..15] == 0 || a0[8..15] >= pq
+ vqsub.u16 q13, q4, q12 @ a0[8..15] >= a3[8..15] ? a0[8..15]-a3[8..15] : 0 (a0 > a3 in all cases where filtering is enabled, so makes more sense to subtract this way round than the opposite and then taking the abs)
+ vmov.32 r2, d5[1]
+ vcge.s16 q4, q12, q4 @ test a3[8..15] >= a0[8..15]
+ vshl.i64 q2, q2, #16
+ vcge.s16 q12, q15, q11
+ vmul.i16 q0, q13, d0[1] @ a0[8..15] >= a3[8..15] ? 5*(a0[8..15]-a3[8..15]) : 0
+ vorr q4, q8, q4 @ test clip[8..15] == 0 || a0[8..15] >= pq || a3[8..15] >= a0[8..15]
+ vshr.s64 q2, q2, #48
+ and r0, r0, r2
+ vbsl q12, q11, q15 @ FFMIN(d[0..7], clip[0..7])
+ vshl.i64 q11, q4, #16
+ vmov.32 r2, d8[1]
+ vshr.u16 q0, q0, #3 @ a0[8..15] >= a3[8..15] ? (5*(a0[8..15]-a3[8..15]))>>3 : 0
+ vorr q2, q10, q2
+ vmov.32 r12, d9[1]
+ vshr.s64 q4, q11, #48
+ vcge.s16 q10, q0, q7
+ vbic q2, q12, q2 @ set each d[0..7] to zero if it should not be filtered because clip[0..7] == 0 || a0[0..7] >= pq (a3 > a0 case already zeroed by saturating sub)
+ vorr q4, q8, q4
+ and r2, r2, r12
+ vbsl q10, q7, q0 @ FFMIN(d[8..15], clip[8..15])
+ vmls.i16 q14, q2, q3 @ invert d[0..7] depending on clip_sign[0..7] & a0_sign[0..7], or zero it if they match, and accumulate into P4[0..7]
+ and r0, r0, r2
+ vbic q0, q10, q4 @ set each d[8..15] to zero if it should not be filtered because clip[8..15] == 0 || a0[8..15] >= pq (a3 > a0 case already zeroed by saturating sub)
+ tst r0, #1
+ bne 1f @ none of the 16 pixel pairs should be updated in this case
+ vmla.i16 q6, q2, q3 @ invert d[0..7] depending on clip_sign[0..7] & a0_sign[0..7], or zero it if they match, and accumulate into P5[0..7]
+ vmls.i16 q9, q0, q5 @ invert d[8..15] depending on clip_sign[8..15] & a0_sign[8..15], or zero it if they match, and accumulate into P4[8..15]
+ vqmovun.s16 d4, q14
+ vmla.i16 q1, q0, q5 @ invert d[8..15] depending on clip_sign[8..15] & a0_sign[8..15], or zero it if they match, and accumulate into P5[8..15]
+ vqmovun.s16 d0, q6
+ vqmovun.s16 d5, q9
+ vqmovun.s16 d1, q1
+ vst1.64 {q2}, [r3 :128], r1
+ vst1.64 {q0}, [r3 :128]
+1: vpop {d8-d15}
+ bx lr
+endfunc
+
+@ VC-1 in-loop deblocking filter for 16 pixel pairs at boundary of horizontally-neighbouring blocks
+@ On entry:
+@ r0 -> top-left pel of right block
+@ r1 = row stride, bytes
+@ r2 = PQUANT bitstream parameter
+function ff_vc1_h_loop_filter16_neon, export=1
+ push {r4-r6,lr}
+ vpush {d8-d15}
+ sub r3, r0, #4 @ where to start reading
+ vldr d0, .Lcoeffs
+ vld1.32 {d2}, [r3], r1 @ P1[0], P2[0]...
+ sub r0, r0, #1 @ where to start writing
+ vld1.32 {d3}, [r3], r1
+ add r4, r0, r1, lsl #2
+ vld1.32 {d10}, [r3], r1
+ vld1.32 {d11}, [r3], r1
+ vld1.32 {d16}, [r3], r1
+ vld1.32 {d4}, [r3], r1
+ vld1.32 {d8}, [r3], r1
+ vtrn.8 d2, d3 @ P1[0], P1[1], P3[0]... P2[0], P2[1], P4[0]...
+ vld1.32 {d14}, [r3], r1
+ vld1.32 {d5}, [r3], r1
+ vtrn.8 d10, d11 @ P1[2], P1[3], P3[2]... P2[2], P2[3], P4[2]...
+ vld1.32 {d6}, [r3], r1
+ vld1.32 {d12}, [r3], r1
+ vtrn.8 d16, d4 @ P1[4], P1[5], P3[4]... P2[4], P2[5], P4[4]...
+ vld1.32 {d13}, [r3], r1
+ vtrn.16 d2, d10 @ P1[0], P1[1], P1[2], P1[3], P5[0]... P3[0], P3[1], P3[2], P3[3], P7[0]...
+ vld1.32 {d1}, [r3], r1
+ vtrn.8 d8, d14 @ P1[6], P1[7], P3[6]... P2[6], P2[7], P4[6]...
+ vld1.32 {d7}, [r3], r1
+ vtrn.16 d3, d11 @ P2[0], P2[1], P2[2], P2[3], P6[0]... P4[0], P4[1], P4[2], P4[3], P8[0]...
+ vld1.32 {d9}, [r3], r1
+ vtrn.8 d5, d6 @ P1[8], P1[9], P3[8]... P2[8], P2[9], P4[8]...
+ vld1.32 {d15}, [r3]
+ vtrn.16 d16, d8 @ P1[4], P1[5], P1[6], P1[7], P5[4]... P3[4], P3[5], P3[6], P3[7], P7[4]...
+ vtrn.16 d4, d14 @ P2[4], P2[5], P2[6], P2[7], P6[4]... P4[4], P4[5], P4[6], P4[7], P8[4]...
+ vtrn.8 d12, d13 @ P1[10], P1[11], P3[10]... P2[10], P2[11], P4[10]...
+ vdup.16 q9, r2 @ pq
+ vtrn.8 d1, d7 @ P1[12], P1[13], P3[12]... P2[12], P2[13], P4[12]...
+ vtrn.32 d2, d16 @ P1[0..7], P5[0..7]
+ vtrn.16 d5, d12 @ P1[8], P1[9], P1[10], P1[11], P5[8]... P3[8], P3[9], P3[10], P3[11], P7[8]...
+ vtrn.16 d6, d13 @ P2[8], P2[9], P2[10], P2[11], P6[8]... P4[8], P4[9], P4[10], P4[11], P8[8]...
+ vtrn.8 d9, d15 @ P1[14], P1[15], P3[14]... P2[14], P2[15], P4[14]...
+ vtrn.32 d3, d4 @ P2[0..7], P6[0..7]
+ vshll.u8 q10, d2, #1 @ 2*P1[0..7]
+ vtrn.32 d10, d8 @ P3[0..7], P7[0..7]
+ vshll.u8 q11, d16, #1 @ 2*P5[0..7]
+ vtrn.32 d11, d14 @ P4[0..7], P8[0..7]
+ vtrn.16 d1, d9 @ P1[12], P1[13], P1[14], P1[15], P5[12]... P3[12], P3[13], P3[14], P3[15], P7[12]...
+ vtrn.16 d7, d15 @ P2[12], P2[13], P2[14], P2[15], P6[12]... P4[12], P4[13], P4[14], P4[15], P8[12]...
+ vmovl.u8 q1, d3 @ P2[0..7]
+ vmovl.u8 q12, d4 @ P6[0..7]
+ vtrn.32 d5, d1 @ P1[8..15], P5[8..15]
+ vtrn.32 d6, d7 @ P2[8..15], P6[8..15]
+ vtrn.32 d12, d9 @ P3[8..15], P7[8..15]
+ vtrn.32 d13, d15 @ P4[8..15], P8[8..15]
+ vmls.i16 q10, q1, d0[1] @ 2*P1[0..7]-5*P2[0..7]
+ vmovl.u8 q1, d10 @ P3[0..7]
+ vshll.u8 q2, d5, #1 @ 2*P1[8..15]
+ vshll.u8 q13, d1, #1 @ 2*P5[8..15]
+ vmls.i16 q11, q12, d0[1] @ 2*P5[0..7]-5*P6[0..7]
+ vmovl.u8 q14, d6 @ P2[8..15]
+ vmovl.u8 q3, d7 @ P6[8..15]
+ vmovl.u8 q15, d8 @ P7[0..7]
+ vmla.i16 q10, q1, d0[1] @ 2*P1[0..7]-5*P2[0..7]+5*P3[0..7]
+ vmovl.u8 q1, d12 @ P3[8..15]
+ vmls.i16 q2, q14, d0[1] @ 2*P1[8..15]-5*P2[8..15]
+ vmovl.u8 q4, d9 @ P7[8..15]
+ vshll.u8 q14, d10, #1 @ 2*P3[0..7]
+ vmls.i16 q13, q3, d0[1] @ 2*P5[8..15]-5*P6[8..15]
+ vmovl.u8 q5, d11 @ P4[0..7]
+ vmla.i16 q11, q15, d0[1] @ 2*P5[0..7]-5*P6[0..7]+5*P7[0..7]
+ vshll.u8 q15, d12, #1 @ 2*P3[8..15]
+ vmovl.u8 q6, d13 @ P4[8..15]
+ vmla.i16 q2, q1, d0[1] @ 2*P1[8..15]-5*P2[8..15]+5*P3[8..15]
+ vmovl.u8 q1, d14 @ P8[0..7]
+ vmovl.u8 q7, d15 @ P8[8..15]
+ vmla.i16 q13, q4, d0[1] @ 2*P5[8..15]-5*P6[8..15]+5*P7[8..15]
+ vmovl.u8 q4, d16 @ P5[0..7]
+ vmovl.u8 q8, d1 @ P5[8..15]
+ vmls.i16 q14, q5, d0[1] @ 2*P3[0..7]-5*P4[0..7]
+ vmls.i16 q15, q6, d0[1] @ 2*P3[8..15]-5*P4[8..15]
+ vmls.i16 q10, q5, d0[0] @ 2*P1[0..7]-5*P2[0..7]+5*P3[0..7]-2*P4[0..7]
+ vmls.i16 q11, q1, d0[0] @ 2*P5[0..7]-5*P6[0..7]+5*P7[0..7]-2*P8[0..7]
+ vsub.i16 q1, q5, q4 @ P4[0..7]-P5[0..7]
+ vmls.i16 q2, q6, d0[0] @ 2*P1[8..15]-5*P2[8..15]+5*P3[8..15]-2*P4[8..15]
+ vrshr.s16 q10, q10, #3
+ vmls.i16 q13, q7, d0[0] @ 2*P5[8..15]-5*P6[8..15]+5*P7[8..15]-2*P8[8..15]
+ vsub.i16 q7, q6, q8 @ P4[8..15]-P5[8..15]
+ vrshr.s16 q11, q11, #3
+ vmla.i16 q14, q4, d0[1] @ 2*P3[0..7]-5*P4[0..7]+5*P5[0..7]
+ vrshr.s16 q2, q2, #3
+ vmla.i16 q15, q8, d0[1] @ 2*P3[8..15]-5*P4[8..15]+5*P5[8..15]
+ vabs.s16 q10, q10 @ a1[0..7]
+ vrshr.s16 q13, q13, #3
+ vmls.i16 q15, q3, d0[0] @ 2*P3[8..15]-5*P4[8..15]+5*P5[8..15]-2*P6[8..15]
+ vabs.s16 q3, q11 @ a2[0..7]
+ vabs.s16 q2, q2 @ a1[8..15]
+ vmls.i16 q14, q12, d0[0] @ 2*P3[0..7]-5*P4[0..7]+5*P5[0..7]-2*P6[0..7]
+ vabs.s16 q11, q1
+ vabs.s16 q12, q13 @ a2[8..15]
+ vcge.s16 q13, q10, q3 @ test a1[0..7] >= a2[0..7]
+ vshr.s16 q1, q1, #8 @ clip_sign[0..7]
+ vrshr.s16 q15, q15, #3
+ vshr.s16 q11, q11, #1 @ clip[0..7]
+ vrshr.s16 q14, q14, #3
+ vbsl q13, q3, q10 @ a3[0..7]
+ vcge.s16 q3, q2, q12 @ test a1[8..15] >= a2[8..15]
+ vabs.s16 q10, q15 @ a0[8..15]
+ vshr.s16 q15, q15, #8 @ a0_sign[8..15]
+ vbsl q3, q12, q2 @ a3[8..15]
+ vabs.s16 q2, q14 @ a0[0..7]
+ vabs.s16 q12, q7
+ vshr.s16 q7, q7, #8 @ clip_sign[8..15]
+ vshr.s16 q14, q14, #8 @ a0_sign[0..7]
+ vshr.s16 q12, q12, #1 @ clip[8..15]
+ vsub.i16 q7, q7, q15 @ clip_sign[8..15] - a0_sign[8..15]
+ vqsub.u16 q15, q10, q3 @ a0[8..15] >= a3[8..15] ? a0[8..15]-a3[8..15] : 0 (a0 > a3 in all cases where filtering is enabled, so makes more sense to subtract this way round than the opposite and then taking the abs)
+ vcge.s16 q3, q3, q10 @ test a3[8..15] >= a0[8..15]
+ vcge.s16 q10, q10, q9 @ test a0[8..15] >= pq
+ vcge.s16 q9, q2, q9 @ test a0[0..7] >= pq
+ vsub.i16 q1, q1, q14 @ clip_sign[0..7] - a0_sign[0..7]
+ vqsub.u16 q14, q2, q13 @ a0[0..7] >= a3[0..7] ? a0[0..7]-a3[0..7] : 0 (a0 > a3 in all cases where filtering is enabled, so makes more sense to subtract this way round than the opposite and then taking the abs)
+ vcge.s16 q2, q13, q2 @ test a3[0..7] >= a0[0..7]
+ vmul.i16 q13, q15, d0[1] @ a0[8..15] >= a3[8..15] ? 5*(a0[8..15]-a3[8..15]) : 0
+ vceq.i16 q15, q11, #0 @ test clip[0..7] == 0
+ vmul.i16 q0, q14, d0[1] @ a0[0..7] >= a3[0..7] ? 5*(a0[0..7]-a3[0..7]) : 0
+ vorr q9, q15, q9 @ test clip[0..7] == 0 || a0[0..7] >= pq
+ vceq.i16 q14, q12, #0 @ test clip[8..15] == 0
+ vshr.u16 q13, q13, #3 @ a0[8..15] >= a3[8..15] ? (5*(a0[8..15]-a3[8..15]))>>3 : 0
+ vorr q2, q9, q2 @ test clip[0..7] == 0 || a0[0..7] >= pq || a3[0..7] >= a0[0..7]
+ vshr.u16 q0, q0, #3 @ a0[0..7] >= a3[0..7] ? (5*(a0[0..7]-a3[0..7]))>>3 : 0
+ vorr q10, q14, q10 @ test clip[8..15] == 0 || a0[8..15] >= pq
+ vcge.s16 q14, q13, q12
+ vmov.32 r2, d4[1] @ move to gp reg
+ vorr q3, q10, q3 @ test clip[8..15] == 0 || a0[8..15] >= pq || a3[8..15] >= a0[8..15]
+ vmov.32 r3, d5[1]
+ vcge.s16 q2, q0, q11
+ vbsl q14, q12, q13 @ FFMIN(d[8..15], clip[8..15])
+ vbsl q2, q11, q0 @ FFMIN(d[0..7], clip[0..7])
+ vmov.32 r5, d6[1]
+ vbic q0, q14, q10 @ set each d[8..15] to zero if it should not be filtered because clip[8..15] == 0 || a0[8..15] >= pq (a3 > a0 case already zeroed by saturating sub)
+ vmov.32 r6, d7[1]
+ and r12, r2, r3
+ vbic q2, q2, q9 @ set each d[0..7] to zero if it should not be filtered because clip[0..7] == 0 || a0[0..7] >= pq (a3 > a0 case already zeroed by saturating sub)
+ vmls.i16 q6, q0, q7 @ invert d[8..15] depending on clip_sign[8..15] & a0_sign[8..15], or zero it if they match, and accumulate into P4
+ vmls.i16 q5, q2, q1 @ invert d[0..7] depending on clip_sign[0..7] & a0_sign[0..7], or zero it if they match, and accumulate into P4
+ and r14, r5, r6
+ vmla.i16 q4, q2, q1 @ invert d[0..7] depending on clip_sign[0..7] & a0_sign[0..7], or zero it if they match, and accumulate into P5
+ and r12, r12, r14
+ vqmovun.s16 d4, q6
+ vmla.i16 q8, q0, q7 @ invert d[8..15] depending on clip_sign[8..15] & a0_sign[8..15], or zero it if they match, and accumulate into P5
+ tst r12, #1
+ bne 4f @ none of the 16 pixel pairs should be updated in this case
+ vqmovun.s16 d2, q5
+ vqmovun.s16 d3, q4
+ vqmovun.s16 d5, q8
+ tst r2, #1
+ bne 1f
+ vst2.8 {d2[0], d3[0]}, [r0], r1
+ vst2.8 {d2[1], d3[1]}, [r0], r1
+ vst2.8 {d2[2], d3[2]}, [r0], r1
+ vst2.8 {d2[3], d3[3]}, [r0]
+1: add r0, r4, r1, lsl #2
+ tst r3, #1
+ bne 2f
+ vst2.8 {d2[4], d3[4]}, [r4], r1
+ vst2.8 {d2[5], d3[5]}, [r4], r1
+ vst2.8 {d2[6], d3[6]}, [r4], r1
+ vst2.8 {d2[7], d3[7]}, [r4]
+2: add r4, r0, r1, lsl #2
+ tst r5, #1
+ bne 3f
+ vst2.8 {d4[0], d5[0]}, [r0], r1
+ vst2.8 {d4[1], d5[1]}, [r0], r1
+ vst2.8 {d4[2], d5[2]}, [r0], r1
+ vst2.8 {d4[3], d5[3]}, [r0]
+3: tst r6, #1
+ bne 4f
+ vst2.8 {d4[4], d5[4]}, [r4], r1
+ vst2.8 {d4[5], d5[5]}, [r4], r1
+ vst2.8 {d4[6], d5[6]}, [r4], r1
+ vst2.8 {d4[7], d5[7]}, [r4]
+4: vpop {d8-d15}
+ pop {r4-r6,pc}
+endfunc
--
2.25.1
* [FFmpeg-devel] [PATCH v3 07/10] avcodec/vc1: Arm 64-bit NEON inverse transform fast paths
2022-03-31 17:23 [FFmpeg-devel] [PATCH v3 00/10] avcodec/vc1: Arm optimisations Ben Avison
` (5 preceding siblings ...)
2022-03-31 17:23 ` [FFmpeg-devel] [PATCH v3 06/10] avcodec/vc1: Arm 32-bit " Ben Avison
@ 2022-03-31 17:23 ` Ben Avison
2022-03-31 17:23 ` [FFmpeg-devel] [PATCH v3 08/10] avcodec/idctdsp: Arm 64-bit NEON block add and clamp " Ben Avison
` (3 subsequent siblings)
10 siblings, 0 replies; 13+ messages in thread
From: Ben Avison @ 2022-03-31 17:23 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Ben Avison
checkasm benchmarks on 1.5 GHz Cortex-A72 are as follows (the speedup
ratios they imply are summarised after the figures).
vc1dsp.vc1_inv_trans_4x4_c: 158.2
vc1dsp.vc1_inv_trans_4x4_neon: 65.7
vc1dsp.vc1_inv_trans_4x4_dc_c: 86.5
vc1dsp.vc1_inv_trans_4x4_dc_neon: 26.5
vc1dsp.vc1_inv_trans_4x8_c: 335.2
vc1dsp.vc1_inv_trans_4x8_neon: 106.2
vc1dsp.vc1_inv_trans_4x8_dc_c: 151.2
vc1dsp.vc1_inv_trans_4x8_dc_neon: 25.5
vc1dsp.vc1_inv_trans_8x4_c: 365.7
vc1dsp.vc1_inv_trans_8x4_neon: 97.2
vc1dsp.vc1_inv_trans_8x4_dc_c: 139.7
vc1dsp.vc1_inv_trans_8x4_dc_neon: 16.5
vc1dsp.vc1_inv_trans_8x8_c: 547.7
vc1dsp.vc1_inv_trans_8x8_neon: 137.0
vc1dsp.vc1_inv_trans_8x8_dc_c: 268.2
vc1dsp.vc1_inv_trans_8x8_dc_neon: 30.5
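Taking ratios of the figures above, the NEON versions come out between
roughly 2.4x faster (vc1_inv_trans_4x4) and 8.8x faster
(vc1_inv_trans_8x8_dc) than the C implementations.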
Signed-off-by: Ben Avison <bavison@riscosopen.org>
---
libavcodec/aarch64/vc1dsp_init_aarch64.c | 19 +
libavcodec/aarch64/vc1dsp_neon.S | 678 +++++++++++++++++++++++
2 files changed, 697 insertions(+)
diff --git a/libavcodec/aarch64/vc1dsp_init_aarch64.c b/libavcodec/aarch64/vc1dsp_init_aarch64.c
index 8f96e4802d..e0eb52dd63 100644
--- a/libavcodec/aarch64/vc1dsp_init_aarch64.c
+++ b/libavcodec/aarch64/vc1dsp_init_aarch64.c
@@ -25,6 +25,16 @@
#include "config.h"
+void ff_vc1_inv_trans_8x8_neon(int16_t *block);
+void ff_vc1_inv_trans_8x4_neon(uint8_t *dest, ptrdiff_t stride, int16_t *block);
+void ff_vc1_inv_trans_4x8_neon(uint8_t *dest, ptrdiff_t stride, int16_t *block);
+void ff_vc1_inv_trans_4x4_neon(uint8_t *dest, ptrdiff_t stride, int16_t *block);
+
+void ff_vc1_inv_trans_8x8_dc_neon(uint8_t *dest, ptrdiff_t stride, int16_t *block);
+void ff_vc1_inv_trans_8x4_dc_neon(uint8_t *dest, ptrdiff_t stride, int16_t *block);
+void ff_vc1_inv_trans_4x8_dc_neon(uint8_t *dest, ptrdiff_t stride, int16_t *block);
+void ff_vc1_inv_trans_4x4_dc_neon(uint8_t *dest, ptrdiff_t stride, int16_t *block);
+
void ff_vc1_v_loop_filter4_neon(uint8_t *src, ptrdiff_t stride, int pq);
void ff_vc1_h_loop_filter4_neon(uint8_t *src, ptrdiff_t stride, int pq);
void ff_vc1_v_loop_filter8_neon(uint8_t *src, ptrdiff_t stride, int pq);
@@ -46,6 +56,15 @@ av_cold void ff_vc1dsp_init_aarch64(VC1DSPContext *dsp)
int cpu_flags = av_get_cpu_flags();
if (have_neon(cpu_flags)) {
+ dsp->vc1_inv_trans_8x8 = ff_vc1_inv_trans_8x8_neon;
+ dsp->vc1_inv_trans_8x4 = ff_vc1_inv_trans_8x4_neon;
+ dsp->vc1_inv_trans_4x8 = ff_vc1_inv_trans_4x8_neon;
+ dsp->vc1_inv_trans_4x4 = ff_vc1_inv_trans_4x4_neon;
+ dsp->vc1_inv_trans_8x8_dc = ff_vc1_inv_trans_8x8_dc_neon;
+ dsp->vc1_inv_trans_8x4_dc = ff_vc1_inv_trans_8x4_dc_neon;
+ dsp->vc1_inv_trans_4x8_dc = ff_vc1_inv_trans_4x8_dc_neon;
+ dsp->vc1_inv_trans_4x4_dc = ff_vc1_inv_trans_4x4_dc_neon;
+
dsp->vc1_v_loop_filter4 = ff_vc1_v_loop_filter4_neon;
dsp->vc1_h_loop_filter4 = ff_vc1_h_loop_filter4_neon;
dsp->vc1_v_loop_filter8 = ff_vc1_v_loop_filter8_neon;
diff --git a/libavcodec/aarch64/vc1dsp_neon.S b/libavcodec/aarch64/vc1dsp_neon.S
index 1ea9fa75ff..0201db4f78 100644
--- a/libavcodec/aarch64/vc1dsp_neon.S
+++ b/libavcodec/aarch64/vc1dsp_neon.S
@@ -22,7 +22,685 @@
#include "libavutil/aarch64/asm.S"
+// VC-1 8x8 inverse transform
+// On entry:
+// x0 -> array of 16-bit inverse transform coefficients, in column-major order
+// On exit:
+// array at x0 updated to hold transformed block; also now held in row-major order
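+// Each pass computes, per lane (per the instruction comments below):
+//   t1 = 12*(src[0]+src[32])   t3 = 16*src[16] + 6*src[48]
+//   t2 = 12*(src[0]-src[32])   t4 =  6*src[16] - 16*src[48]
+//   t5 = t1+t3  t6 = t2+t4  t7 = t2-t4  t8 = t1-t3
+// plus odd-part dot products of src[8,24,40,56] with +/-{16,15,9,4},
+// then a rounding shift: (x+4)>>3 after the first pass, and (x+64)>>7
+// ((x+65)>>7 for the bottom four rows) after the second.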
+function ff_vc1_inv_trans_8x8_neon, export=1
+ ld1 {v1.16b, v2.16b}, [x0], #32
+ ld1 {v3.16b, v4.16b}, [x0], #32
+ ld1 {v5.16b, v6.16b}, [x0], #32
+ shl v1.8h, v1.8h, #2 // 8/2 * src[0]
+ sub x1, x0, #3*32
+ ld1 {v16.16b, v17.16b}, [x0]
+ shl v7.8h, v2.8h, #4 // 16 * src[8]
+ shl v18.8h, v2.8h, #2 // 4 * src[8]
+ shl v19.8h, v4.8h, #4 // 16 * src[24]
+ ldr d0, .Lcoeffs_it8
+ shl v5.8h, v5.8h, #2 // 8/2 * src[32]
+ shl v20.8h, v6.8h, #4 // 16 * src[40]
+ shl v21.8h, v6.8h, #2 // 4 * src[40]
+ shl v22.8h, v17.8h, #4 // 16 * src[56]
+ ssra v20.8h, v19.8h, #2 // 4 * src[24] + 16 * src[40]
+ mul v23.8h, v3.8h, v0.h[0] // 6/2 * src[16]
+ sub v19.8h, v19.8h, v21.8h // 16 * src[24] - 4 * src[40]
+ ssra v7.8h, v22.8h, #2 // 16 * src[8] + 4 * src[56]
+ sub v18.8h, v22.8h, v18.8h // - 4 * src[8] + 16 * src[56]
+ shl v3.8h, v3.8h, #3 // 16/2 * src[16]
+ mls v20.8h, v2.8h, v0.h[2] // - 15 * src[8] + 4 * src[24] + 16 * src[40]
+ ssra v1.8h, v1.8h, #1 // 12/2 * src[0]
+ ssra v5.8h, v5.8h, #1 // 12/2 * src[32]
+ mla v7.8h, v4.8h, v0.h[2] // 16 * src[8] + 15 * src[24] + 4 * src[56]
+ shl v21.8h, v16.8h, #3 // 16/2 * src[48]
+ mls v19.8h, v2.8h, v0.h[1] // - 9 * src[8] + 16 * src[24] - 4 * src[40]
+ sub v2.8h, v23.8h, v21.8h // t4/2 = 6/2 * src[16] - 16/2 * src[48]
+ mla v18.8h, v4.8h, v0.h[1] // - 4 * src[8] + 9 * src[24] + 16 * src[56]
+ add v4.8h, v1.8h, v5.8h // t1/2 = 12/2 * src[0] + 12/2 * src[32]
+ sub v1.8h, v1.8h, v5.8h // t2/2 = 12/2 * src[0] - 12/2 * src[32]
+ mla v3.8h, v16.8h, v0.h[0] // t3/2 = 16/2 * src[16] + 6/2 * src[48]
+ mla v7.8h, v6.8h, v0.h[1] // t1 = 16 * src[8] + 15 * src[24] + 9 * src[40] + 4 * src[56]
+ add v5.8h, v1.8h, v2.8h // t6/2 = t2/2 + t4/2
+ sub v16.8h, v1.8h, v2.8h // t7/2 = t2/2 - t4/2
+ mla v20.8h, v17.8h, v0.h[1] // -t2 = - 15 * src[8] + 4 * src[24] + 16 * src[40] + 9 * src[56]
+ add v21.8h, v1.8h, v2.8h // t6/2 = t2/2 + t4/2
+ add v22.8h, v4.8h, v3.8h // t5/2 = t1/2 + t3/2
+ mls v19.8h, v17.8h, v0.h[2] // -t3 = - 9 * src[8] + 16 * src[24] - 4 * src[40] - 15 * src[56]
+ sub v17.8h, v4.8h, v3.8h // t8/2 = t1/2 - t3/2
+ add v23.8h, v4.8h, v3.8h // t5/2 = t1/2 + t3/2
+ mls v18.8h, v6.8h, v0.h[2] // -t4 = - 4 * src[8] + 9 * src[24] - 15 * src[40] + 16 * src[56]
+ sub v1.8h, v1.8h, v2.8h // t7/2 = t2/2 - t4/2
+ sub v2.8h, v4.8h, v3.8h // t8/2 = t1/2 - t3/2
+ neg v3.8h, v7.8h // -t1
+ neg v4.8h, v20.8h // +t2
+ neg v6.8h, v19.8h // +t3
+ ssra v22.8h, v7.8h, #1 // (t5 + t1) >> 1
+ ssra v1.8h, v19.8h, #1 // (t7 - t3) >> 1
+ neg v7.8h, v18.8h // +t4
+ ssra v5.8h, v4.8h, #1 // (t6 + t2) >> 1
+ ssra v16.8h, v6.8h, #1 // (t7 + t3) >> 1
+ ssra v2.8h, v18.8h, #1 // (t8 - t4) >> 1
+ ssra v17.8h, v7.8h, #1 // (t8 + t4) >> 1
+ ssra v21.8h, v20.8h, #1 // (t6 - t2) >> 1
+ ssra v23.8h, v3.8h, #1 // (t5 - t1) >> 1
+ srshr v3.8h, v22.8h, #2 // (t5 + t1 + 4) >> 3
+ srshr v4.8h, v5.8h, #2 // (t6 + t2 + 4) >> 3
+ srshr v5.8h, v16.8h, #2 // (t7 + t3 + 4) >> 3
+ srshr v6.8h, v17.8h, #2 // (t8 + t4 + 4) >> 3
+ srshr v2.8h, v2.8h, #2 // (t8 - t4 + 4) >> 3
+ srshr v1.8h, v1.8h, #2 // (t7 - t3 + 4) >> 3
+ srshr v7.8h, v21.8h, #2 // (t6 - t2 + 4) >> 3
+ srshr v16.8h, v23.8h, #2 // (t5 - t1 + 4) >> 3
+ trn2 v17.8h, v3.8h, v4.8h
+ trn2 v18.8h, v5.8h, v6.8h
+ trn2 v19.8h, v2.8h, v1.8h
+ trn2 v20.8h, v7.8h, v16.8h
+ trn1 v21.4s, v17.4s, v18.4s
+ trn2 v17.4s, v17.4s, v18.4s
+ trn1 v18.4s, v19.4s, v20.4s
+ trn2 v19.4s, v19.4s, v20.4s
+ trn1 v3.8h, v3.8h, v4.8h
+ trn2 v4.2d, v21.2d, v18.2d
+ trn1 v20.2d, v17.2d, v19.2d
+ trn1 v5.8h, v5.8h, v6.8h
+ trn1 v1.8h, v2.8h, v1.8h
+ trn1 v2.8h, v7.8h, v16.8h
+ trn1 v6.2d, v21.2d, v18.2d
+ trn2 v7.2d, v17.2d, v19.2d
+ shl v16.8h, v20.8h, #4 // 16 * src[24]
+ shl v17.8h, v4.8h, #4 // 16 * src[40]
+ trn1 v18.4s, v3.4s, v5.4s
+ trn1 v19.4s, v1.4s, v2.4s
+ shl v21.8h, v7.8h, #4 // 16 * src[56]
+ shl v22.8h, v6.8h, #2 // 4 * src[8]
+ shl v23.8h, v4.8h, #2 // 4 * src[40]
+ trn2 v3.4s, v3.4s, v5.4s
+ trn2 v1.4s, v1.4s, v2.4s
+ shl v2.8h, v6.8h, #4 // 16 * src[8]
+ sub v5.8h, v16.8h, v23.8h // 16 * src[24] - 4 * src[40]
+ ssra v17.8h, v16.8h, #2 // 4 * src[24] + 16 * src[40]
+ sub v16.8h, v21.8h, v22.8h // - 4 * src[8] + 16 * src[56]
+ trn1 v22.2d, v18.2d, v19.2d
+ trn2 v18.2d, v18.2d, v19.2d
+ trn1 v19.2d, v3.2d, v1.2d
+ ssra v2.8h, v21.8h, #2 // 16 * src[8] + 4 * src[56]
+ mls v17.8h, v6.8h, v0.h[2] // - 15 * src[8] + 4 * src[24] + 16 * src[40]
+ shl v21.8h, v22.8h, #2 // 8/2 * src[0]
+ shl v18.8h, v18.8h, #2 // 8/2 * src[32]
+ mls v5.8h, v6.8h, v0.h[1] // - 9 * src[8] + 16 * src[24] - 4 * src[40]
+ shl v6.8h, v19.8h, #3 // 16/2 * src[16]
+ trn2 v1.2d, v3.2d, v1.2d
+ mla v16.8h, v20.8h, v0.h[1] // - 4 * src[8] + 9 * src[24] + 16 * src[56]
+ ssra v21.8h, v21.8h, #1 // 12/2 * src[0]
+ ssra v18.8h, v18.8h, #1 // 12/2 * src[32]
+ mul v3.8h, v19.8h, v0.h[0] // 6/2 * src[16]
+ shl v19.8h, v1.8h, #3 // 16/2 * src[48]
+ mla v2.8h, v20.8h, v0.h[2] // 16 * src[8] + 15 * src[24] + 4 * src[56]
+ add v20.8h, v21.8h, v18.8h // t1/2 = 12/2 * src[0] + 12/2 * src[32]
+ mla v6.8h, v1.8h, v0.h[0] // t3/2 = 16/2 * src[16] + 6/2 * src[48]
+ sub v1.8h, v21.8h, v18.8h // t2/2 = 12/2 * src[0] - 12/2 * src[32]
+ sub v3.8h, v3.8h, v19.8h // t4/2 = 6/2 * src[16] - 16/2 * src[48]
+ mla v17.8h, v7.8h, v0.h[1] // -t2 = - 15 * src[8] + 4 * src[24] + 16 * src[40] + 9 * src[56]
+ mls v5.8h, v7.8h, v0.h[2] // -t3 = - 9 * src[8] + 16 * src[24] - 4 * src[40] - 15 * src[56]
+ add v7.8h, v1.8h, v3.8h // t6/2 = t2/2 + t4/2
+ add v18.8h, v20.8h, v6.8h // t5/2 = t1/2 + t3/2
+ mls v16.8h, v4.8h, v0.h[2] // -t4 = - 4 * src[8] + 9 * src[24] - 15 * src[40] + 16 * src[56]
+ sub v19.8h, v1.8h, v3.8h // t7/2 = t2/2 - t4/2
+ neg v21.8h, v17.8h // +t2
+ mla v2.8h, v4.8h, v0.h[1] // t1 = 16 * src[8] + 15 * src[24] + 9 * src[40] + 4 * src[56]
+ sub v0.8h, v20.8h, v6.8h // t8/2 = t1/2 - t3/2
+ neg v4.8h, v5.8h // +t3
+ sub v22.8h, v1.8h, v3.8h // t7/2 = t2/2 - t4/2
+ sub v23.8h, v20.8h, v6.8h // t8/2 = t1/2 - t3/2
+ neg v24.8h, v16.8h // +t4
+ add v6.8h, v20.8h, v6.8h // t5/2 = t1/2 + t3/2
+ add v1.8h, v1.8h, v3.8h // t6/2 = t2/2 + t4/2
+ ssra v7.8h, v21.8h, #1 // (t6 + t2) >> 1
+ neg v3.8h, v2.8h // -t1
+ ssra v18.8h, v2.8h, #1 // (t5 + t1) >> 1
+ ssra v19.8h, v4.8h, #1 // (t7 + t3) >> 1
+ ssra v0.8h, v24.8h, #1 // (t8 + t4) >> 1
+ srsra v23.8h, v16.8h, #1 // (t8 - t4 + 1) >> 1
+ srsra v22.8h, v5.8h, #1 // (t7 - t3 + 1) >> 1
+ srsra v1.8h, v17.8h, #1 // (t6 - t2 + 1) >> 1
+ srsra v6.8h, v3.8h, #1 // (t5 - t1 + 1) >> 1
+ srshr v2.8h, v18.8h, #6 // (t5 + t1 + 64) >> 7
+ srshr v3.8h, v7.8h, #6 // (t6 + t2 + 64) >> 7
+ srshr v4.8h, v19.8h, #6 // (t7 + t3 + 64) >> 7
+ srshr v5.8h, v0.8h, #6 // (t8 + t4 + 64) >> 7
+ srshr v16.8h, v23.8h, #6 // (t8 - t4 + 65) >> 7
+ srshr v17.8h, v22.8h, #6 // (t7 - t3 + 65) >> 7
+ st1 {v2.16b, v3.16b}, [x1], #32
+ srshr v0.8h, v1.8h, #6 // (t6 - t2 + 65) >> 7
+ srshr v1.8h, v6.8h, #6 // (t5 - t1 + 65) >> 7
+ st1 {v4.16b, v5.16b}, [x1], #32
+ st1 {v16.16b, v17.16b}, [x1], #32
+ st1 {v0.16b, v1.16b}, [x1]
+ ret
+endfunc
+
+// VC-1 8x4 inverse transform
+// On entry:
+// x0 -> array of 8-bit samples, in row-major order
+// x1 = row stride for 8-bit sample array
+// x2 -> array of 16-bit inverse transform coefficients, in row-major order
+// On exit:
+// array at x0 updated by saturated addition of (narrowed) transformed block
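+// For reference, a hedged C model of the 8-point transform used on the rows
+// below, reconstructed from the per-instruction comments rather than copied
+// from libavcodec/vc1dsp.c (u1..u4 are ad-hoc names for the odd-part sums):
+//   t1 = 12 * (src[0] + src[4]); t2 = 12 * (src[0] - src[4]);
+//   t3 = 16 * src[2] + 6 * src[6]; t4 = 6 * src[2] - 16 * src[6];
+//   t5 = t1 + t3; t6 = t2 + t4; t7 = t2 - t4; t8 = t1 - t3;
+//   u1 = 16 * src[1] + 15 * src[3] + 9 * src[5] + 4 * src[7];
+//   u2 = 15 * src[1] - 4 * src[3] - 16 * src[5] - 9 * src[7];
+//   u3 = 9 * src[1] - 16 * src[3] + 4 * src[5] + 15 * src[7];
+//   u4 = 4 * src[1] - 9 * src[3] + 15 * src[5] - 16 * src[7];
+//   dst[0] = (t5 + u1 + 4) >> 3; dst[7] = (t5 - u1 + 4) >> 3;
+//   dst[1] = (t6 + u2 + 4) >> 3; dst[6] = (t6 - u2 + 4) >> 3;
+//   dst[2] = (t7 + u3 + 4) >> 3; dst[5] = (t7 - u3 + 4) >> 3;
+//   dst[3] = (t8 + u4 + 4) >> 3; dst[4] = (t8 - u4 + 4) >> 3;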
+function ff_vc1_inv_trans_8x4_neon, export=1
+ ld1 {v1.8b, v2.8b, v3.8b, v4.8b}, [x2], #32
+ mov x3, x0
+ ld1 {v16.8b, v17.8b, v18.8b, v19.8b}, [x2]
+ ldr q0, .Lcoeffs_it8 // includes 4-point coefficients in upper half of vector
+ ld1 {v5.8b}, [x0], x1
+ trn2 v6.4h, v1.4h, v3.4h
+ trn2 v7.4h, v2.4h, v4.4h
+ trn1 v1.4h, v1.4h, v3.4h
+ trn1 v2.4h, v2.4h, v4.4h
+ trn2 v3.4h, v16.4h, v18.4h
+ trn2 v4.4h, v17.4h, v19.4h
+ trn1 v16.4h, v16.4h, v18.4h
+ trn1 v17.4h, v17.4h, v19.4h
+ ld1 {v18.8b}, [x0], x1
+ trn1 v19.2s, v6.2s, v3.2s
+ trn2 v3.2s, v6.2s, v3.2s
+ trn1 v6.2s, v7.2s, v4.2s
+ trn2 v4.2s, v7.2s, v4.2s
+ trn1 v7.2s, v1.2s, v16.2s
+ trn1 v20.2s, v2.2s, v17.2s
+ shl v21.4h, v19.4h, #4 // 16 * src[1]
+ trn2 v1.2s, v1.2s, v16.2s
+ shl v16.4h, v3.4h, #4 // 16 * src[3]
+ trn2 v2.2s, v2.2s, v17.2s
+ shl v17.4h, v6.4h, #4 // 16 * src[5]
+ ld1 {v22.8b}, [x0], x1
+ shl v23.4h, v4.4h, #4 // 16 * src[7]
+ mul v24.4h, v1.4h, v0.h[0] // 6/2 * src[2]
+ ld1 {v25.8b}, [x0]
+ shl v26.4h, v19.4h, #2 // 4 * src[1]
+ shl v27.4h, v6.4h, #2 // 4 * src[5]
+ ssra v21.4h, v23.4h, #2 // 16 * src[1] + 4 * src[7]
+ ssra v17.4h, v16.4h, #2 // 4 * src[3] + 16 * src[5]
+ sub v23.4h, v23.4h, v26.4h // - 4 * src[1] + 16 * src[7]
+ sub v16.4h, v16.4h, v27.4h // 16 * src[3] - 4 * src[5]
+ shl v7.4h, v7.4h, #2 // 8/2 * src[0]
+ shl v20.4h, v20.4h, #2 // 8/2 * src[4]
+ mla v21.4h, v3.4h, v0.h[2] // 16 * src[1] + 15 * src[3] + 4 * src[7]
+ shl v1.4h, v1.4h, #3 // 16/2 * src[2]
+ mls v17.4h, v19.4h, v0.h[2] // - 15 * src[1] + 4 * src[3] + 16 * src[5]
+ ssra v7.4h, v7.4h, #1 // 12/2 * src[0]
+ mls v16.4h, v19.4h, v0.h[1] // - 9 * src[1] + 16 * src[3] - 4 * src[5]
+ ssra v20.4h, v20.4h, #1 // 12/2 * src[4]
+ mla v23.4h, v3.4h, v0.h[1] // - 4 * src[1] + 9 * src[3] + 16 * src[7]
+ shl v3.4h, v2.4h, #3 // 16/2 * src[6]
+ mla v1.4h, v2.4h, v0.h[0] // t3/2 = 16/2 * src[2] + 6/2 * src[6]
+ mla v21.4h, v6.4h, v0.h[1] // t1 = 16 * src[1] + 15 * src[3] + 9 * src[5] + 4 * src[7]
+ mla v17.4h, v4.4h, v0.h[1] // -t2 = - 15 * src[1] + 4 * src[3] + 16 * src[5] + 9 * src[7]
+ sub v2.4h, v24.4h, v3.4h // t4/2 = 6/2 * src[2] - 16/2 * src[6]
+ mls v16.4h, v4.4h, v0.h[2] // -t3 = - 9 * src[1] + 16 * src[3] - 4 * src[5] - 15 * src[7]
+ add v3.4h, v7.4h, v20.4h // t1/2 = 12/2 * src[0] + 12/2 * src[4]
+ mls v23.4h, v6.4h, v0.h[2] // -t4 = - 4 * src[1] + 9 * src[3] - 15 * src[5] + 16 * src[7]
+ sub v4.4h, v7.4h, v20.4h // t2/2 = 12/2 * src[0] - 12/2 * src[4]
+ neg v6.4h, v21.4h // -t1
+ add v7.4h, v3.4h, v1.4h // t5/2 = t1/2 + t3/2
+ sub v19.4h, v3.4h, v1.4h // t8/2 = t1/2 - t3/2
+ add v20.4h, v4.4h, v2.4h // t6/2 = t2/2 + t4/2
+ sub v24.4h, v4.4h, v2.4h // t7/2 = t2/2 - t4/2
+ add v26.4h, v3.4h, v1.4h // t5/2 = t1/2 + t3/2
+ add v27.4h, v4.4h, v2.4h // t6/2 = t2/2 + t4/2
+ sub v2.4h, v4.4h, v2.4h // t7/2 = t2/2 - t4/2
+ sub v1.4h, v3.4h, v1.4h // t8/2 = t1/2 - t3/2
+ neg v3.4h, v17.4h // +t2
+ neg v4.4h, v16.4h // +t3
+ neg v28.4h, v23.4h // +t4
+ ssra v7.4h, v21.4h, #1 // (t5 + t1) >> 1
+ ssra v1.4h, v23.4h, #1 // (t8 - t4) >> 1
+ ssra v20.4h, v3.4h, #1 // (t6 + t2) >> 1
+ ssra v24.4h, v4.4h, #1 // (t7 + t3) >> 1
+ ssra v19.4h, v28.4h, #1 // (t8 + t4) >> 1
+ ssra v2.4h, v16.4h, #1 // (t7 - t3) >> 1
+ ssra v27.4h, v17.4h, #1 // (t6 - t2) >> 1
+ ssra v26.4h, v6.4h, #1 // (t5 - t1) >> 1
+ trn1 v1.2d, v7.2d, v1.2d
+ trn1 v2.2d, v20.2d, v2.2d
+ trn1 v3.2d, v24.2d, v27.2d
+ trn1 v4.2d, v19.2d, v26.2d
+ srshr v1.8h, v1.8h, #2 // (t5 + t1 + 4) >> 3, (t8 - t4 + 4) >> 3
+ srshr v2.8h, v2.8h, #2 // (t6 + t2 + 4) >> 3, (t7 - t3 + 4) >> 3
+ srshr v3.8h, v3.8h, #2 // (t7 + t3 + 4) >> 3, (t6 - t2 + 4) >> 3
+ srshr v4.8h, v4.8h, #2 // (t8 + t4 + 4) >> 3, (t5 - t1 + 4) >> 3
+ trn2 v6.8h, v1.8h, v2.8h
+ trn1 v1.8h, v1.8h, v2.8h
+ trn2 v2.8h, v3.8h, v4.8h
+ trn1 v3.8h, v3.8h, v4.8h
+ trn2 v4.4s, v6.4s, v2.4s
+ trn1 v7.4s, v1.4s, v3.4s
+ trn2 v1.4s, v1.4s, v3.4s
+ mul v3.8h, v4.8h, v0.h[5] // 22/2 * src[24]
+ trn1 v2.4s, v6.4s, v2.4s
+ mul v4.8h, v4.8h, v0.h[4] // 10/2 * src[24]
+ mul v6.8h, v7.8h, v0.h[6] // 17 * src[0]
+ mul v1.8h, v1.8h, v0.h[6] // 17 * src[16]
+ mls v3.8h, v2.8h, v0.h[4] // t4/2 = - 10/2 * src[8] + 22/2 * src[24]
+ mla v4.8h, v2.8h, v0.h[5] // t3/2 = 22/2 * src[8] + 10/2 * src[24]
+ add v0.8h, v6.8h, v1.8h // t1 = 17 * src[0] + 17 * src[16]
+ sub v1.8h, v6.8h, v1.8h // t2 = 17 * src[0] - 17 * src[16]
+ neg v2.8h, v3.8h // -t4/2
+ neg v6.8h, v4.8h // -t3/2
+ ssra v4.8h, v0.8h, #1 // (t1 + t3) >> 1
+ ssra v2.8h, v1.8h, #1 // (t2 - t4) >> 1
+ ssra v3.8h, v1.8h, #1 // (t2 + t4) >> 1
+ ssra v6.8h, v0.8h, #1 // (t1 - t3) >> 1
+ srshr v0.8h, v4.8h, #6 // (t1 + t3 + 64) >> 7
+ srshr v1.8h, v2.8h, #6 // (t2 - t4 + 64) >> 7
+ srshr v2.8h, v3.8h, #6 // (t2 + t4 + 64) >> 7
+ srshr v3.8h, v6.8h, #6 // (t1 - t3 + 64) >> 7
+ uaddw v0.8h, v0.8h, v5.8b
+ uaddw v1.8h, v1.8h, v18.8b
+ uaddw v2.8h, v2.8h, v22.8b
+ uaddw v3.8h, v3.8h, v25.8b
+ sqxtun v0.8b, v0.8h
+ sqxtun v1.8b, v1.8h
+ sqxtun v2.8b, v2.8h
+ sqxtun v3.8b, v3.8h
+ st1 {v0.8b}, [x3], x1
+ st1 {v1.8b}, [x3], x1
+ st1 {v2.8b}, [x3], x1
+ st1 {v3.8b}, [x3]
+ ret
+endfunc
+
+// VC-1 4x8 inverse transform
+// On entry:
+// x0 -> array of 8-bit samples, in row-major order
+// x1 = row stride for 8-bit sample array
+// x2 -> array of 16-bit inverse transform coefficients, in row-major order (row stride is 8 coefficients)
+// On exit:
+// array at x0 updated by saturated addition of (narrowed) transformed block
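+// For reference, a hedged C model of the 4-point transform used on the rows
+// below, reconstructed from the per-instruction comments rather than copied
+// from libavcodec/vc1dsp.c:
+//   t1 = 17 * (src[0] + src[2]); t2 = 17 * (src[0] - src[2]);
+//   t3 = 22 * src[1] + 10 * src[3]; t4 = 22 * src[3] - 10 * src[1];
+//   dst[0] = (t1 + t3 + 4) >> 3; dst[1] = (t2 - t4 + 4) >> 3;
+//   dst[2] = (t2 + t4 + 4) >> 3; dst[3] = (t1 - t3 + 4) >> 3;
+// The 8-point column pass then rounds with (x + 64) >> 7, or (x + 65) >> 7
+// for the bottom four rows, matching the srsra-based rounding below.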
+function ff_vc1_inv_trans_4x8_neon, export=1
+ mov x3, #16
+ ldr q0, .Lcoeffs_it8 // includes 4-point coefficients in upper half of vector
+ mov x4, x0
+ ld1 {v1.d}[0], [x2], x3 // 00 01 02 03
+ ld1 {v2.d}[0], [x2], x3 // 10 11 12 13
+ ld1 {v3.d}[0], [x2], x3 // 20 21 22 23
+ ld1 {v4.d}[0], [x2], x3 // 30 31 32 33
+ ld1 {v1.d}[1], [x2], x3 // 40 41 42 43
+ ld1 {v2.d}[1], [x2], x3 // 50 51 52 53
+ ld1 {v3.d}[1], [x2], x3 // 60 61 62 63
+ ld1 {v4.d}[1], [x2] // 70 71 72 73
+ ld1 {v5.s}[0], [x0], x1
+ ld1 {v6.s}[0], [x0], x1
+ ld1 {v7.s}[0], [x0], x1
+ trn2 v16.8h, v1.8h, v2.8h // 01 11 03 13 41 51 43 53
+ trn1 v1.8h, v1.8h, v2.8h // 00 10 02 12 40 50 42 52
+ trn2 v2.8h, v3.8h, v4.8h // 21 31 23 33 61 71 63 73
+ trn1 v3.8h, v3.8h, v4.8h // 20 30 22 32 60 70 62 72
+ ld1 {v4.s}[0], [x0], x1
+ trn2 v17.4s, v16.4s, v2.4s // 03 13 23 33 43 53 63 73
+ trn1 v18.4s, v1.4s, v3.4s // 00 10 20 30 40 50 60 70
+ trn1 v2.4s, v16.4s, v2.4s // 01 11 21 31 41 51 61 71
+ mul v16.8h, v17.8h, v0.h[4] // 10/2 * src[3]
+ ld1 {v5.s}[1], [x0], x1
+ mul v17.8h, v17.8h, v0.h[5] // 22/2 * src[3]
+ ld1 {v6.s}[1], [x0], x1
+ trn2 v1.4s, v1.4s, v3.4s // 02 12 22 32 42 52 62 72
+ mul v3.8h, v18.8h, v0.h[6] // 17 * src[0]
+ ld1 {v7.s}[1], [x0], x1
+ mul v1.8h, v1.8h, v0.h[6] // 17 * src[2]
+ ld1 {v4.s}[1], [x0]
+ mla v16.8h, v2.8h, v0.h[5] // t3/2 = 22/2 * src[1] + 10/2 * src[3]
+ mls v17.8h, v2.8h, v0.h[4] // t4/2 = - 10/2 * src[1] + 22/2 * src[3]
+ add v2.8h, v3.8h, v1.8h // t1 = 17 * src[0] + 17 * src[2]
+ sub v1.8h, v3.8h, v1.8h // t2 = 17 * src[0] - 17 * src[2]
+ neg v3.8h, v16.8h // -t3/2
+ ssra v16.8h, v2.8h, #1 // (t1 + t3) >> 1
+ neg v18.8h, v17.8h // -t4/2
+ ssra v17.8h, v1.8h, #1 // (t2 + t4) >> 1
+ ssra v3.8h, v2.8h, #1 // (t1 - t3) >> 1
+ ssra v18.8h, v1.8h, #1 // (t2 - t4) >> 1
+ srshr v1.8h, v16.8h, #2 // (t1 + t3 + 4) >> 3
+ srshr v2.8h, v17.8h, #2 // (t2 + t4 + 4) >> 3
+ srshr v3.8h, v3.8h, #2 // (t1 - t3 + 4) >> 3
+ srshr v16.8h, v18.8h, #2 // (t2 - t4 + 4) >> 3
+ trn2 v17.8h, v2.8h, v3.8h // 12 13 32 33 52 53 72 73
+ trn2 v18.8h, v1.8h, v16.8h // 10 11 30 31 50 51 70 71
+ trn1 v1.8h, v1.8h, v16.8h // 00 01 20 21 40 41 60 61
+ trn1 v2.8h, v2.8h, v3.8h // 02 03 22 23 42 43 62 63
+ trn1 v3.4s, v18.4s, v17.4s // 10 11 12 13 50 51 52 53
+ trn2 v16.4s, v18.4s, v17.4s // 30 31 32 33 70 71 72 73
+ trn1 v17.4s, v1.4s, v2.4s // 00 01 02 03 40 41 42 43
+ mov d18, v3.d[1] // 50 51 52 53
+ shl v19.4h, v3.4h, #4 // 16 * src[8]
+ mov d20, v16.d[1] // 70 71 72 73
+ shl v21.4h, v16.4h, #4 // 16 * src[24]
+ mov d22, v17.d[1] // 40 41 42 43
+ shl v23.4h, v3.4h, #2 // 4 * src[8]
+ shl v24.4h, v18.4h, #4 // 16 * src[40]
+ shl v25.4h, v20.4h, #4 // 16 * src[56]
+ shl v26.4h, v18.4h, #2 // 4 * src[40]
+ trn2 v1.4s, v1.4s, v2.4s // 20 21 22 23 60 61 62 63
+ ssra v24.4h, v21.4h, #2 // 4 * src[24] + 16 * src[40]
+ sub v2.4h, v25.4h, v23.4h // - 4 * src[8] + 16 * src[56]
+ shl v17.4h, v17.4h, #2 // 8/2 * src[0]
+ sub v21.4h, v21.4h, v26.4h // 16 * src[24] - 4 * src[40]
+ shl v22.4h, v22.4h, #2 // 8/2 * src[32]
+ mov d23, v1.d[1] // 60 61 62 63
+ ssra v19.4h, v25.4h, #2 // 16 * src[8] + 4 * src[56]
+ mul v25.4h, v1.4h, v0.h[0] // 6/2 * src[16]
+ shl v1.4h, v1.4h, #3 // 16/2 * src[16]
+ mls v24.4h, v3.4h, v0.h[2] // - 15 * src[8] + 4 * src[24] + 16 * src[40]
+ ssra v17.4h, v17.4h, #1 // 12/2 * src[0]
+ mls v21.4h, v3.4h, v0.h[1] // - 9 * src[8] + 16 * src[24] - 4 * src[40]
+ ssra v22.4h, v22.4h, #1 // 12/2 * src[32]
+ mla v2.4h, v16.4h, v0.h[1] // - 4 * src[8] + 9 * src[24] + 16 * src[56]
+ shl v3.4h, v23.4h, #3 // 16/2 * src[48]
+ mla v19.4h, v16.4h, v0.h[2] // 16 * src[8] + 15 * src[24] + 4 * src[56]
+ mla v1.4h, v23.4h, v0.h[0] // t3/2 = 16/2 * src[16] + 6/2 * src[48]
+ mla v24.4h, v20.4h, v0.h[1] // -t2 = - 15 * src[8] + 4 * src[24] + 16 * src[40] + 9 * src[56]
+ add v16.4h, v17.4h, v22.4h // t1/2 = 12/2 * src[0] + 12/2 * src[32]
+ sub v3.4h, v25.4h, v3.4h // t4/2 = 6/2 * src[16] - 16/2 * src[48]
+ sub v17.4h, v17.4h, v22.4h // t2/2 = 12/2 * src[0] - 12/2 * src[32]
+ mls v21.4h, v20.4h, v0.h[2] // -t3 = - 9 * src[8] + 16 * src[24] - 4 * src[40] - 15 * src[56]
+ mla v19.4h, v18.4h, v0.h[1] // t1 = 16 * src[8] + 15 * src[24] + 9 * src[40] + 4 * src[56]
+ add v20.4h, v16.4h, v1.4h // t5/2 = t1/2 + t3/2
+ mls v2.4h, v18.4h, v0.h[2] // -t4 = - 4 * src[8] + 9 * src[24] - 15 * src[40] + 16 * src[56]
+ sub v0.4h, v16.4h, v1.4h // t8/2 = t1/2 - t3/2
+ add v18.4h, v17.4h, v3.4h // t6/2 = t2/2 + t4/2
+ sub v22.4h, v17.4h, v3.4h // t7/2 = t2/2 - t4/2
+ neg v23.4h, v24.4h // +t2
+ sub v25.4h, v17.4h, v3.4h // t7/2 = t2/2 - t4/2
+ add v3.4h, v17.4h, v3.4h // t6/2 = t2/2 + t4/2
+ neg v17.4h, v21.4h // +t3
+ sub v26.4h, v16.4h, v1.4h // t8/2 = t1/2 - t3/2
+ add v1.4h, v16.4h, v1.4h // t5/2 = t1/2 + t3/2
+ neg v16.4h, v19.4h // -t1
+ neg v27.4h, v2.4h // +t4
+ ssra v20.4h, v19.4h, #1 // (t5 + t1) >> 1
+ srsra v0.4h, v2.4h, #1 // (t8 - t4 + 1) >> 1
+ ssra v18.4h, v23.4h, #1 // (t6 + t2) >> 1
+ srsra v22.4h, v21.4h, #1 // (t7 - t3 + 1) >> 1
+ ssra v25.4h, v17.4h, #1 // (t7 + t3) >> 1
+ srsra v3.4h, v24.4h, #1 // (t6 - t2 + 1) >> 1
+ ssra v26.4h, v27.4h, #1 // (t8 + t4) >> 1
+ srsra v1.4h, v16.4h, #1 // (t5 - t1 + 1) >> 1
+ trn1 v0.2d, v20.2d, v0.2d
+ trn1 v2.2d, v18.2d, v22.2d
+ trn1 v3.2d, v25.2d, v3.2d
+ trn1 v1.2d, v26.2d, v1.2d
+ srshr v0.8h, v0.8h, #6 // (t5 + t1 + 64) >> 7, (t8 - t4 + 65) >> 7
+ srshr v2.8h, v2.8h, #6 // (t6 + t2 + 64) >> 7, (t7 - t3 + 65) >> 7
+ srshr v3.8h, v3.8h, #6 // (t7 + t3 + 64) >> 7, (t6 - t2 + 65) >> 7
+ srshr v1.8h, v1.8h, #6 // (t8 + t4 + 64) >> 7, (t5 - t1 + 65) >> 7
+ uaddw v0.8h, v0.8h, v5.8b
+ uaddw v2.8h, v2.8h, v6.8b
+ uaddw v3.8h, v3.8h, v7.8b
+ uaddw v1.8h, v1.8h, v4.8b
+ sqxtun v0.8b, v0.8h
+ sqxtun v2.8b, v2.8h
+ sqxtun v3.8b, v3.8h
+ sqxtun v1.8b, v1.8h
+ st1 {v0.s}[0], [x4], x1
+ st1 {v2.s}[0], [x4], x1
+ st1 {v3.s}[0], [x4], x1
+ st1 {v1.s}[0], [x4], x1
+ st1 {v0.s}[1], [x4], x1
+ st1 {v2.s}[1], [x4], x1
+ st1 {v3.s}[1], [x4], x1
+ st1 {v1.s}[1], [x4]
+ ret
+endfunc
+
+// VC-1 4x4 inverse transform
+// On entry:
+// x0 -> array of 8-bit samples, in row-major order
+// x1 = row stride for 8-bit sample array
+// x2 -> array of 16-bit inverse transform coefficients, in row-major order (row stride is 8 coefficients)
+// On exit:
+// array at x0 updated by saturated addition of (narrowed) transformed block
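+// Both passes use the 4-point transform modelled above
+// ff_vc1_inv_trans_4x8_neon; the column pass rounds with (x + 64) >> 7.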
+function ff_vc1_inv_trans_4x4_neon, export=1
+ mov x3, #16
+ ldr d0, .Lcoeffs_it4
+ mov x4, x0
+ ld1 {v1.d}[0], [x2], x3 // 00 01 02 03
+ ld1 {v2.d}[0], [x2], x3 // 10 11 12 13
+ ld1 {v3.d}[0], [x2], x3 // 20 21 22 23
+ ld1 {v4.d}[0], [x2] // 30 31 32 33
+ ld1 {v5.s}[0], [x0], x1
+ ld1 {v5.s}[1], [x0], x1
+ ld1 {v6.s}[0], [x0], x1
+ trn2 v7.4h, v1.4h, v2.4h // 01 11 03 13
+ trn1 v1.4h, v1.4h, v2.4h // 00 10 02 12
+ ld1 {v6.s}[1], [x0]
+ trn2 v2.4h, v3.4h, v4.4h // 21 31 23 33
+ trn1 v3.4h, v3.4h, v4.4h // 20 30 22 32
+ trn2 v4.2s, v7.2s, v2.2s // 03 13 23 33
+ trn1 v16.2s, v1.2s, v3.2s // 00 10 20 30
+ trn1 v2.2s, v7.2s, v2.2s // 01 11 21 31
+ trn2 v1.2s, v1.2s, v3.2s // 02 12 22 32
+ mul v3.4h, v4.4h, v0.h[0] // 10/2 * src[3]
+ mul v4.4h, v4.4h, v0.h[1] // 22/2 * src[3]
+ mul v7.4h, v16.4h, v0.h[2] // 17 * src[0]
+ mul v1.4h, v1.4h, v0.h[2] // 17 * src[2]
+ mla v3.4h, v2.4h, v0.h[1] // t3/2 = 22/2 * src[1] + 10/2 * src[3]
+ mls v4.4h, v2.4h, v0.h[0] // t4/2 = - 10/2 * src[1] + 22/2 * src[3]
+ add v2.4h, v7.4h, v1.4h // t1 = 17 * src[0] + 17 * src[2]
+ sub v1.4h, v7.4h, v1.4h // t2 = 17 * src[0] - 17 * src[2]
+ neg v7.4h, v3.4h // -t3/2
+ neg v16.4h, v4.4h // -t4/2
+ ssra v3.4h, v2.4h, #1 // (t1 + t3) >> 1
+ ssra v4.4h, v1.4h, #1 // (t2 + t4) >> 1
+ ssra v16.4h, v1.4h, #1 // (t2 - t4) >> 1
+ ssra v7.4h, v2.4h, #1 // (t1 - t3) >> 1
+ srshr v1.4h, v3.4h, #2 // (t1 + t3 + 4) >> 3
+ srshr v2.4h, v4.4h, #2 // (t2 + t4 + 4) >> 3
+ srshr v3.4h, v16.4h, #2 // (t2 - t4 + 4) >> 3
+ srshr v4.4h, v7.4h, #2 // (t1 - t3 + 4) >> 3
+ trn2 v7.4h, v1.4h, v3.4h // 10 11 30 31
+ trn1 v1.4h, v1.4h, v3.4h // 00 01 20 21
+ trn2 v3.4h, v2.4h, v4.4h // 12 13 32 33
+ trn1 v2.4h, v2.4h, v4.4h // 02 03 22 23
+ trn2 v4.2s, v7.2s, v3.2s // 30 31 32 33
+ trn1 v16.2s, v1.2s, v2.2s // 00 01 02 03
+ trn1 v3.2s, v7.2s, v3.2s // 10 11 12 13
+ trn2 v1.2s, v1.2s, v2.2s // 20 21 22 23
+ mul v2.4h, v4.4h, v0.h[1] // 22/2 * src[24]
+ mul v4.4h, v4.4h, v0.h[0] // 10/2 * src[24]
+ mul v7.4h, v16.4h, v0.h[2] // 17 * src[0]
+ mul v1.4h, v1.4h, v0.h[2] // 17 * src[16]
+ mls v2.4h, v3.4h, v0.h[0] // t4/2 = - 10/2 * src[8] + 22/2 * src[24]
+ mla v4.4h, v3.4h, v0.h[1] // t3/2 = 22/2 * src[8] + 10/2 * src[24]
+ add v0.4h, v7.4h, v1.4h // t1 = 17 * src[0] + 17 * src[16]
+ sub v1.4h, v7.4h, v1.4h // t2 = 17 * src[0] - 17 * src[16]
+ neg v3.4h, v2.4h // -t4/2
+ neg v7.4h, v4.4h // -t3/2
+ ssra v4.4h, v0.4h, #1 // (t1 + t3) >> 1
+ ssra v3.4h, v1.4h, #1 // (t2 - t4) >> 1
+ ssra v2.4h, v1.4h, #1 // (t2 + t4) >> 1
+ ssra v7.4h, v0.4h, #1 // (t1 - t3) >> 1
+ trn1 v0.2d, v4.2d, v3.2d
+ trn1 v1.2d, v2.2d, v7.2d
+ srshr v0.8h, v0.8h, #6 // (t1 + t3 + 64) >> 7, (t2 - t4 + 64) >> 7
+ srshr v1.8h, v1.8h, #6 // (t2 + t4 + 64) >> 7, (t1 - t3 + 64) >> 7
+ uaddw v0.8h, v0.8h, v5.8b
+ uaddw v1.8h, v1.8h, v6.8b
+ sqxtun v0.8b, v0.8h
+ sqxtun v1.8b, v1.8h
+ st1 {v0.s}[0], [x4], x1
+ st1 {v0.s}[1], [x4], x1
+ st1 {v1.s}[0], [x4], x1
+ st1 {v1.s}[1], [x4]
+ ret
+endfunc
+
+// VC-1 8x8 inverse transform, DC case
+// On entry:
+// x0 -> array of 8-bit samples, in row-major order
+// x1 = row stride for 8-bit sample array
+// x2 -> 16-bit inverse transform DC coefficient
+// On exit:
+// array at x0 updated by saturated addition of (narrowed) transformed block
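+// A hedged scalar model of the DC scaling below, read off the shift/add
+// sequence rather than taken from the spec:
+//   dc = (3 * dc + 1) >> 1; dc = (3 * dc + 16) >> 5;
+// after which every sample becomes av_clip_uint8(sample + dc)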
+function ff_vc1_inv_trans_8x8_dc_neon, export=1
+ ldrsh w2, [x2]
+ mov x3, x0
+ ld1 {v0.8b}, [x0], x1
+ ld1 {v1.8b}, [x0], x1
+ ld1 {v2.8b}, [x0], x1
+ add w2, w2, w2, lsl #1
+ ld1 {v3.8b}, [x0], x1
+ ld1 {v4.8b}, [x0], x1
+ add w2, w2, #1
+ ld1 {v5.8b}, [x0], x1
+ asr w2, w2, #1
+ ld1 {v6.8b}, [x0], x1
+ add w2, w2, w2, lsl #1
+ ld1 {v7.8b}, [x0]
+ add w0, w2, #16
+ asr w0, w0, #5
+ dup v16.8h, w0
+ uaddw v0.8h, v16.8h, v0.8b
+ uaddw v1.8h, v16.8h, v1.8b
+ uaddw v2.8h, v16.8h, v2.8b
+ uaddw v3.8h, v16.8h, v3.8b
+ uaddw v4.8h, v16.8h, v4.8b
+ uaddw v5.8h, v16.8h, v5.8b
+ sqxtun v0.8b, v0.8h
+ uaddw v6.8h, v16.8h, v6.8b
+ sqxtun v1.8b, v1.8h
+ uaddw v7.8h, v16.8h, v7.8b
+ sqxtun v2.8b, v2.8h
+ sqxtun v3.8b, v3.8h
+ sqxtun v4.8b, v4.8h
+ st1 {v0.8b}, [x3], x1
+ sqxtun v0.8b, v5.8h
+ st1 {v1.8b}, [x3], x1
+ sqxtun v1.8b, v6.8h
+ st1 {v2.8b}, [x3], x1
+ sqxtun v2.8b, v7.8h
+ st1 {v3.8b}, [x3], x1
+ st1 {v4.8b}, [x3], x1
+ st1 {v0.8b}, [x3], x1
+ st1 {v1.8b}, [x3], x1
+ st1 {v2.8b}, [x3]
+ ret
+endfunc
+
+// VC-1 8x4 inverse transform, DC case
+// On entry:
+// x0 -> array of 8-bit samples, in row-major order
+// x1 = row stride for 8-bit sample array
+// x2 -> 16-bit inverse transform DC coefficient
+// On exit:
+// array at x0 updated by saturated addition of (narrowed) transformed block
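+// Hedged scalar model of the DC scaling below, read off the shift/add
+// sequence: dc = (3 * dc + 1) >> 1; dc = (17 * dc + 64) >> 7;
+// then each sample becomes av_clip_uint8(sample + dc)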
+function ff_vc1_inv_trans_8x4_dc_neon, export=1
+ ldrsh w2, [x2]
+ mov x3, x0
+ ld1 {v0.8b}, [x0], x1
+ ld1 {v1.8b}, [x0], x1
+ ld1 {v2.8b}, [x0], x1
+ add w2, w2, w2, lsl #1
+ ld1 {v3.8b}, [x0]
+ add w0, w2, #1
+ asr w0, w0, #1
+ add w0, w0, w0, lsl #4
+ add w0, w0, #64
+ asr w0, w0, #7
+ dup v4.8h, w0
+ uaddw v0.8h, v4.8h, v0.8b
+ uaddw v1.8h, v4.8h, v1.8b
+ uaddw v2.8h, v4.8h, v2.8b
+ uaddw v3.8h, v4.8h, v3.8b
+ sqxtun v0.8b, v0.8h
+ sqxtun v1.8b, v1.8h
+ sqxtun v2.8b, v2.8h
+ sqxtun v3.8b, v3.8h
+ st1 {v0.8b}, [x3], x1
+ st1 {v1.8b}, [x3], x1
+ st1 {v2.8b}, [x3], x1
+ st1 {v3.8b}, [x3]
+ ret
+endfunc
+
+// VC-1 4x8 inverse transform, DC case
+// On entry:
+// x0 -> array of 8-bit samples, in row-major order
+// x1 = row stride for 8-bit sample array
+// x2 -> 16-bit inverse transform DC coefficient
+// On exit:
+// array at x0 updated by saturated addition of (narrowed) transformed block
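+// Hedged scalar model of the DC scaling below, read off the shift/add
+// sequence: dc = (17 * dc + 4) >> 3; dc = (3 * dc + 16) >> 5;
+// then each sample becomes av_clip_uint8(sample + dc)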
+function ff_vc1_inv_trans_4x8_dc_neon, export=1
+ ldrsh w2, [x2]
+ mov x3, x0
+ ld1 {v0.s}[0], [x0], x1
+ ld1 {v1.s}[0], [x0], x1
+ ld1 {v2.s}[0], [x0], x1
+ add w2, w2, w2, lsl #4
+ ld1 {v3.s}[0], [x0], x1
+ add w2, w2, #4
+ asr w2, w2, #3
+ add w2, w2, w2, lsl #1
+ ld1 {v0.s}[1], [x0], x1
+ add w2, w2, #16
+ asr w2, w2, #5
+ dup v4.8h, w2
+ ld1 {v1.s}[1], [x0], x1
+ ld1 {v2.s}[1], [x0], x1
+ ld1 {v3.s}[1], [x0]
+ uaddw v0.8h, v4.8h, v0.8b
+ uaddw v1.8h, v4.8h, v1.8b
+ uaddw v2.8h, v4.8h, v2.8b
+ uaddw v3.8h, v4.8h, v3.8b
+ sqxtun v0.8b, v0.8h
+ sqxtun v1.8b, v1.8h
+ sqxtun v2.8b, v2.8h
+ sqxtun v3.8b, v3.8h
+ st1 {v0.s}[0], [x3], x1
+ st1 {v1.s}[0], [x3], x1
+ st1 {v2.s}[0], [x3], x1
+ st1 {v3.s}[0], [x3], x1
+ st1 {v0.s}[1], [x3], x1
+ st1 {v1.s}[1], [x3], x1
+ st1 {v2.s}[1], [x3], x1
+ st1 {v3.s}[1], [x3]
+ ret
+endfunc
+
+// VC-1 4x4 inverse transform, DC case
+// On entry:
+// x0 -> array of 8-bit samples, in row-major order
+// x1 = row stride for 8-bit sample array
+// x2 -> 16-bit inverse transform DC coefficient
+// On exit:
+// array at x0 updated by saturated addition of (narrowed) transformed block
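+// Hedged scalar model of the DC scaling below, read off the shift/add
+// sequence: dc = (17 * dc + 4) >> 3; dc = (17 * dc + 64) >> 7;
+// then each sample becomes av_clip_uint8(sample + dc)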
+function ff_vc1_inv_trans_4x4_dc_neon, export=1
+ ldrsh w2, [x2]
+ mov x3, x0
+ ld1 {v0.s}[0], [x0], x1
+ ld1 {v1.s}[0], [x0], x1
+ ld1 {v0.s}[1], [x0], x1
+ add w2, w2, w2, lsl #4
+ ld1 {v1.s}[1], [x0]
+ add w0, w2, #4
+ asr w0, w0, #3
+ add w0, w0, w0, lsl #4
+ add w0, w0, #64
+ asr w0, w0, #7
+ dup v2.8h, w0
+ uaddw v0.8h, v2.8h, v0.8b
+ uaddw v1.8h, v2.8h, v1.8b
+ sqxtun v0.8b, v0.8h
+ sqxtun v1.8b, v1.8h
+ st1 {v0.s}[0], [x3], x1
+ st1 {v1.s}[0], [x3], x1
+ st1 {v0.s}[1], [x3], x1
+ st1 {v1.s}[1], [x3]
+ ret
+endfunc
+
.align 5
+.Lcoeffs_it8:
+.quad 0x000F00090003
+.Lcoeffs_it4:
+.quad 0x0011000B0005
.Lcoeffs:
.quad 0x00050002
--
2.25.1
* [FFmpeg-devel] [PATCH v3 08/10] avcodec/idctdsp: Arm 64-bit NEON block add and clamp fast paths
2022-03-31 17:23 [FFmpeg-devel] [PATCH v3 00/10] avcodec/vc1: Arm optimisations Ben Avison
` (6 preceding siblings ...)
2022-03-31 17:23 ` [FFmpeg-devel] [PATCH v3 07/10] avcodec/vc1: Arm 64-bit NEON inverse transform " Ben Avison
@ 2022-03-31 17:23 ` Ben Avison
2022-03-31 17:23 ` [FFmpeg-devel] [PATCH v3 09/10] avcodec/vc1: Arm 64-bit NEON unescape fast path Ben Avison
` (2 subsequent siblings)
10 siblings, 0 replies; 13+ messages in thread
From: Ben Avison @ 2022-03-31 17:23 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Ben Avison
checkasm benchmarks on 1.5 GHz Cortex-A72 are as follows.
idctdsp.add_pixels_clamped_c: 313.3
idctdsp.add_pixels_clamped_neon: 24.3
idctdsp.put_pixels_clamped_c: 220.3
idctdsp.put_pixels_clamped_neon: 15.5
idctdsp.put_signed_pixels_clamped_c: 210.5
idctdsp.put_signed_pixels_clamped_neon: 19.5
Signed-off-by: Ben Avison <bavison@riscosopen.org>
---
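As an aid to review, the scalar behaviour these routines mirror is roughly
the following sketch (my reading of them, not the exact libavcodec C
source; av_clip_uint8() is from libavutil/common.h):

    static void put_pixels_clamped_ref(const int16_t *block, uint8_t *pixels,
                                       ptrdiff_t line_size)
    {
        for (int i = 0; i < 8; i++, pixels += line_size, block += 8)
            for (int j = 0; j < 8; j++)
                pixels[j] = av_clip_uint8(block[j]);
        // put_signed variant: pixels[j] = av_clip_uint8(block[j] + 128);
        // add variant: pixels[j] = av_clip_uint8(pixels[j] + block[j]);
    }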
libavcodec/aarch64/Makefile | 3 +-
libavcodec/aarch64/idctdsp_init_aarch64.c | 26 +++--
libavcodec/aarch64/idctdsp_neon.S | 130 ++++++++++++++++++++++
3 files changed, 150 insertions(+), 9 deletions(-)
create mode 100644 libavcodec/aarch64/idctdsp_neon.S
diff --git a/libavcodec/aarch64/Makefile b/libavcodec/aarch64/Makefile
index 5b25e4dfb9..c8935f205e 100644
--- a/libavcodec/aarch64/Makefile
+++ b/libavcodec/aarch64/Makefile
@@ -44,7 +44,8 @@ NEON-OBJS-$(CONFIG_H264PRED) += aarch64/h264pred_neon.o
NEON-OBJS-$(CONFIG_H264QPEL) += aarch64/h264qpel_neon.o \
aarch64/hpeldsp_neon.o
NEON-OBJS-$(CONFIG_HPELDSP) += aarch64/hpeldsp_neon.o
-NEON-OBJS-$(CONFIG_IDCTDSP) += aarch64/simple_idct_neon.o
+NEON-OBJS-$(CONFIG_IDCTDSP) += aarch64/idctdsp_neon.o \
+ aarch64/simple_idct_neon.o
NEON-OBJS-$(CONFIG_MDCT) += aarch64/mdct_neon.o
NEON-OBJS-$(CONFIG_MPEGAUDIODSP) += aarch64/mpegaudiodsp_neon.o
NEON-OBJS-$(CONFIG_PIXBLOCKDSP) += aarch64/pixblockdsp_neon.o
diff --git a/libavcodec/aarch64/idctdsp_init_aarch64.c b/libavcodec/aarch64/idctdsp_init_aarch64.c
index 742a3372e3..eec21aa5a2 100644
--- a/libavcodec/aarch64/idctdsp_init_aarch64.c
+++ b/libavcodec/aarch64/idctdsp_init_aarch64.c
@@ -27,19 +27,29 @@
#include "libavcodec/idctdsp.h"
#include "idct.h"
+void ff_put_pixels_clamped_neon(const int16_t *, uint8_t *, ptrdiff_t);
+void ff_put_signed_pixels_clamped_neon(const int16_t *, uint8_t *, ptrdiff_t);
+void ff_add_pixels_clamped_neon(const int16_t *, uint8_t *, ptrdiff_t);
+
av_cold void ff_idctdsp_init_aarch64(IDCTDSPContext *c, AVCodecContext *avctx,
unsigned high_bit_depth)
{
int cpu_flags = av_get_cpu_flags();
- if (have_neon(cpu_flags) && !avctx->lowres && !high_bit_depth) {
- if (avctx->idct_algo == FF_IDCT_AUTO ||
- avctx->idct_algo == FF_IDCT_SIMPLEAUTO ||
- avctx->idct_algo == FF_IDCT_SIMPLENEON) {
- c->idct_put = ff_simple_idct_put_neon;
- c->idct_add = ff_simple_idct_add_neon;
- c->idct = ff_simple_idct_neon;
- c->perm_type = FF_IDCT_PERM_PARTTRANS;
+ if (have_neon(cpu_flags)) {
+ if (!avctx->lowres && !high_bit_depth) {
+ if (avctx->idct_algo == FF_IDCT_AUTO ||
+ avctx->idct_algo == FF_IDCT_SIMPLEAUTO ||
+ avctx->idct_algo == FF_IDCT_SIMPLENEON) {
+ c->idct_put = ff_simple_idct_put_neon;
+ c->idct_add = ff_simple_idct_add_neon;
+ c->idct = ff_simple_idct_neon;
+ c->perm_type = FF_IDCT_PERM_PARTTRANS;
+ }
}
+
+ c->add_pixels_clamped = ff_add_pixels_clamped_neon;
+ c->put_pixels_clamped = ff_put_pixels_clamped_neon;
+ c->put_signed_pixels_clamped = ff_put_signed_pixels_clamped_neon;
}
}
diff --git a/libavcodec/aarch64/idctdsp_neon.S b/libavcodec/aarch64/idctdsp_neon.S
new file mode 100644
index 0000000000..7f47611206
--- /dev/null
+++ b/libavcodec/aarch64/idctdsp_neon.S
@@ -0,0 +1,130 @@
+/*
+ * IDCT AArch64 NEON optimisations
+ *
+ * Copyright (c) 2022 Ben Avison <bavison@riscosopen.org>
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include "libavutil/aarch64/asm.S"
+
+// Clamp 16-bit signed block coefficients to unsigned 8-bit
+// On entry:
+// x0 -> array of 64x 16-bit coefficients
+// x1 -> 8-bit results
+// x2 = row stride for results, bytes
+function ff_put_pixels_clamped_neon, export=1
+ ld1 {v0.16b, v1.16b, v2.16b, v3.16b}, [x0], #64
+ ld1 {v4.16b, v5.16b, v6.16b, v7.16b}, [x0]
+ sqxtun v0.8b, v0.8h
+ sqxtun v1.8b, v1.8h
+ sqxtun v2.8b, v2.8h
+ sqxtun v3.8b, v3.8h
+ sqxtun v4.8b, v4.8h
+ st1 {v0.8b}, [x1], x2
+ sqxtun v0.8b, v5.8h
+ st1 {v1.8b}, [x1], x2
+ sqxtun v1.8b, v6.8h
+ st1 {v2.8b}, [x1], x2
+ sqxtun v2.8b, v7.8h
+ st1 {v3.8b}, [x1], x2
+ st1 {v4.8b}, [x1], x2
+ st1 {v0.8b}, [x1], x2
+ st1 {v1.8b}, [x1], x2
+ st1 {v2.8b}, [x1]
+ ret
+endfunc
+
+// Clamp 16-bit signed block coefficients to signed 8-bit (biased by 128)
+// On entry:
+// x0 -> array of 64x 16-bit coefficients
+// x1 -> 8-bit results
+// x2 = row stride for results, bytes
+function ff_put_signed_pixels_clamped_neon, export=1
+ ld1 {v0.16b, v1.16b, v2.16b, v3.16b}, [x0], #64
+ movi v4.8b, #128
+ ld1 {v16.16b, v17.16b, v18.16b, v19.16b}, [x0]
+ sqxtn v0.8b, v0.8h
+ sqxtn v1.8b, v1.8h
+ sqxtn v2.8b, v2.8h
+ sqxtn v3.8b, v3.8h
+ sqxtn v5.8b, v16.8h
+ add v0.8b, v0.8b, v4.8b
+ sqxtn v6.8b, v17.8h
+ add v1.8b, v1.8b, v4.8b
+ sqxtn v7.8b, v18.8h
+ add v2.8b, v2.8b, v4.8b
+ sqxtn v16.8b, v19.8h
+ add v3.8b, v3.8b, v4.8b
+ st1 {v0.8b}, [x1], x2
+ add v0.8b, v5.8b, v4.8b
+ st1 {v1.8b}, [x1], x2
+ add v1.8b, v6.8b, v4.8b
+ st1 {v2.8b}, [x1], x2
+ add v2.8b, v7.8b, v4.8b
+ st1 {v3.8b}, [x1], x2
+ add v3.8b, v16.8b, v4.8b
+ st1 {v0.8b}, [x1], x2
+ st1 {v1.8b}, [x1], x2
+ st1 {v2.8b}, [x1], x2
+ st1 {v3.8b}, [x1]
+ ret
+endfunc
+
+// Add 16-bit signed block coefficients to unsigned 8-bit
+// On entry:
+// x0 -> array of 64x 16-bit coefficients
+// x1 -> 8-bit input and results
+// x2 = row stride for 8-bit input and results, bytes
+function ff_add_pixels_clamped_neon, export=1
+ ld1 {v0.16b, v1.16b, v2.16b, v3.16b}, [x0], #64
+ mov x3, x1
+ ld1 {v4.8b}, [x1], x2
+ ld1 {v5.8b}, [x1], x2
+ ld1 {v6.8b}, [x1], x2
+ ld1 {v7.8b}, [x1], x2
+ ld1 {v16.16b, v17.16b, v18.16b, v19.16b}, [x0]
+ uaddw v0.8h, v0.8h, v4.8b
+ uaddw v1.8h, v1.8h, v5.8b
+ uaddw v2.8h, v2.8h, v6.8b
+ ld1 {v4.8b}, [x1], x2
+ uaddw v3.8h, v3.8h, v7.8b
+ ld1 {v5.8b}, [x1], x2
+ sqxtun v0.8b, v0.8h
+ ld1 {v6.8b}, [x1], x2
+ sqxtun v1.8b, v1.8h
+ ld1 {v7.8b}, [x1]
+ sqxtun v2.8b, v2.8h
+ sqxtun v3.8b, v3.8h
+ uaddw v4.8h, v16.8h, v4.8b
+ st1 {v0.8b}, [x3], x2
+ uaddw v0.8h, v17.8h, v5.8b
+ st1 {v1.8b}, [x3], x2
+ uaddw v1.8h, v18.8h, v6.8b
+ st1 {v2.8b}, [x3], x2
+ uaddw v2.8h, v19.8h, v7.8b
+ sqxtun v4.8b, v4.8h
+ sqxtun v0.8b, v0.8h
+ st1 {v3.8b}, [x3], x2
+ sqxtun v1.8b, v1.8h
+ sqxtun v2.8b, v2.8h
+ st1 {v4.8b}, [x3], x2
+ st1 {v0.8b}, [x3], x2
+ st1 {v1.8b}, [x3], x2
+ st1 {v2.8b}, [x3]
+ ret
+endfunc
--
2.25.1
* [FFmpeg-devel] [PATCH v3 09/10] avcodec/vc1: Arm 64-bit NEON unescape fast path
2022-03-31 17:23 [FFmpeg-devel] [PATCH v3 00/10] avcodec/vc1: Arm optimisations Ben Avison
` (7 preceding siblings ...)
2022-03-31 17:23 ` [FFmpeg-devel] [PATCH v3 08/10] avcodec/idctdsp: Arm 64-bit NEON block add and clamp " Ben Avison
@ 2022-03-31 17:23 ` Ben Avison
2022-03-31 17:23 ` [FFmpeg-devel] [PATCH v3 10/10] avcodec/vc1: Arm 32-bit " Ben Avison
2022-03-31 21:50 ` [FFmpeg-devel] [PATCH v3 00/10] avcodec/vc1: Arm optimisations Martin Storsjö
10 siblings, 0 replies; 13+ messages in thread
From: Ben Avison @ 2022-03-31 17:23 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Ben Avison
checkasm benchmarks on 1.5 GHz Cortex-A72 are as follows.
vc1dsp.vc1_unescape_buffer_c: 655617.7
vc1dsp.vc1_unescape_buffer_neon: 118237.0
Signed-off-by: Ben Avison <bavison@riscosopen.org>
---
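As an aid to review: the predicate vectorised by the assembly helper is the
same escape test the C wrapper applies at the buffer edges. Assuming a
little-endian unaligned load, a four-byte window beginning at src starts an
escape sequence when it reads 00 00 03 0x with x <= 3, i.e.:

    int is_escape_start = (AV_RL32(src) & ~0x03000000u) == 0x00030000u;

The helper evaluates this at every byte offset of each 32-byte granule (the
ext #1..#3 shuffles supply the misaligned windows) and returns control to
the wrapper as soon as any offset matches.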
libavcodec/aarch64/vc1dsp_init_aarch64.c | 61 ++++++++
libavcodec/aarch64/vc1dsp_neon.S | 176 +++++++++++++++++++++++
2 files changed, 237 insertions(+)
diff --git a/libavcodec/aarch64/vc1dsp_init_aarch64.c b/libavcodec/aarch64/vc1dsp_init_aarch64.c
index e0eb52dd63..a7976fd596 100644
--- a/libavcodec/aarch64/vc1dsp_init_aarch64.c
+++ b/libavcodec/aarch64/vc1dsp_init_aarch64.c
@@ -21,6 +21,7 @@
#include "libavutil/attributes.h"
#include "libavutil/cpu.h"
#include "libavutil/aarch64/cpu.h"
+#include "libavutil/intreadwrite.h"
#include "libavcodec/vc1dsp.h"
#include "config.h"
@@ -51,6 +52,64 @@ void ff_put_vc1_chroma_mc4_neon(uint8_t *dst, uint8_t *src, ptrdiff_t stride,
void ff_avg_vc1_chroma_mc4_neon(uint8_t *dst, uint8_t *src, ptrdiff_t stride,
int h, int x, int y);
+int ff_vc1_unescape_buffer_helper_neon(const uint8_t *src, int size, uint8_t *dst);
+
+static int vc1_unescape_buffer_neon(const uint8_t *src, int size, uint8_t *dst)
+{
+ /* Dealing with starting and stopping, and removing escape bytes, are
+ * comparatively less time-sensitive, so are more clearly expressed using
+ * a C wrapper around the assembly inner loop. Note that we assume a
+ * little-endian machine that supports unaligned loads. */
+ int dsize = 0;
+ while (size >= 4)
+ {
+ int found = 0;
+ while (!found && (((uintptr_t) dst) & 7) && size >= 4)
+ {
+ found = (AV_RL32(src) &~ 0x03000000) == 0x00030000;
+ if (!found)
+ {
+ *dst++ = *src++;
+ --size;
+ ++dsize;
+ }
+ }
+ if (!found)
+ {
+ int skip = size - ff_vc1_unescape_buffer_helper_neon(src, size, dst);
+ dst += skip;
+ src += skip;
+ size -= skip;
+ dsize += skip;
+ while (!found && size >= 4)
+ {
+ found = (AV_RL32(src) &~ 0x03000000) == 0x00030000;
+ if (!found)
+ {
+ *dst++ = *src++;
+ --size;
+ ++dsize;
+ }
+ }
+ }
+ if (found)
+ {
+ *dst++ = *src++;
+ *dst++ = *src++;
+ ++src;
+ size -= 3;
+ dsize += 2;
+ }
+ }
+ while (size > 0)
+ {
+ *dst++ = *src++;
+ --size;
+ ++dsize;
+ }
+ return dsize;
+}
+
av_cold void ff_vc1dsp_init_aarch64(VC1DSPContext *dsp)
{
int cpu_flags = av_get_cpu_flags();
@@ -76,5 +135,7 @@ av_cold void ff_vc1dsp_init_aarch64(VC1DSPContext *dsp)
dsp->avg_no_rnd_vc1_chroma_pixels_tab[0] = ff_avg_vc1_chroma_mc8_neon;
dsp->put_no_rnd_vc1_chroma_pixels_tab[1] = ff_put_vc1_chroma_mc4_neon;
dsp->avg_no_rnd_vc1_chroma_pixels_tab[1] = ff_avg_vc1_chroma_mc4_neon;
+
+ dsp->vc1_unescape_buffer = vc1_unescape_buffer_neon;
}
}
diff --git a/libavcodec/aarch64/vc1dsp_neon.S b/libavcodec/aarch64/vc1dsp_neon.S
index 0201db4f78..9a96c2523c 100644
--- a/libavcodec/aarch64/vc1dsp_neon.S
+++ b/libavcodec/aarch64/vc1dsp_neon.S
@@ -1368,3 +1368,179 @@ function ff_vc1_h_loop_filter16_neon, export=1
st2 {v2.b, v3.b}[7], [x6]
4: ret
endfunc
+
+// Copy at most the specified number of bytes from source to destination buffer,
+// stopping at a multiple of 32 bytes, none of which are the start of an escape sequence
+// On entry:
+// x0 -> source buffer
+// w1 = max number of bytes to copy
+// x2 -> destination buffer, optimally 8-byte aligned
+// On exit:
+// w0 = number of bytes not copied
+function ff_vc1_unescape_buffer_helper_neon, export=1
+ // Offset by 80 to screen out cases that are too short for us to handle,
+ // and also make it easy to test for loop termination, or to determine
+ // whether we need an odd number of half-iterations of the loop.
+ subs w1, w1, #80
+ b.mi 90f
+
+ // Set up useful constants
+ movi v20.4s, #3, lsl #24
+ movi v21.4s, #3, lsl #16
+
+ tst w1, #32
+ b.ne 1f
+
+ ld1 {v0.16b, v1.16b, v2.16b}, [x0], #48
+ ext v25.16b, v0.16b, v1.16b, #1
+ ext v26.16b, v0.16b, v1.16b, #2
+ ext v27.16b, v0.16b, v1.16b, #3
+ ext v29.16b, v1.16b, v2.16b, #1
+ ext v30.16b, v1.16b, v2.16b, #2
+ ext v31.16b, v1.16b, v2.16b, #3
+ bic v24.16b, v0.16b, v20.16b
+ bic v25.16b, v25.16b, v20.16b
+ bic v26.16b, v26.16b, v20.16b
+ bic v27.16b, v27.16b, v20.16b
+ bic v28.16b, v1.16b, v20.16b
+ bic v29.16b, v29.16b, v20.16b
+ bic v30.16b, v30.16b, v20.16b
+ bic v31.16b, v31.16b, v20.16b
+ eor v24.16b, v24.16b, v21.16b
+ eor v25.16b, v25.16b, v21.16b
+ eor v26.16b, v26.16b, v21.16b
+ eor v27.16b, v27.16b, v21.16b
+ eor v28.16b, v28.16b, v21.16b
+ eor v29.16b, v29.16b, v21.16b
+ eor v30.16b, v30.16b, v21.16b
+ eor v31.16b, v31.16b, v21.16b
+ cmeq v24.4s, v24.4s, #0
+ cmeq v25.4s, v25.4s, #0
+ cmeq v26.4s, v26.4s, #0
+ cmeq v27.4s, v27.4s, #0
+ add w1, w1, #32
+ b 3f
+
+1: ld1 {v3.16b, v4.16b, v5.16b}, [x0], #48
+ ext v25.16b, v3.16b, v4.16b, #1
+ ext v26.16b, v3.16b, v4.16b, #2
+ ext v27.16b, v3.16b, v4.16b, #3
+ ext v29.16b, v4.16b, v5.16b, #1
+ ext v30.16b, v4.16b, v5.16b, #2
+ ext v31.16b, v4.16b, v5.16b, #3
+ bic v24.16b, v3.16b, v20.16b
+ bic v25.16b, v25.16b, v20.16b
+ bic v26.16b, v26.16b, v20.16b
+ bic v27.16b, v27.16b, v20.16b
+ bic v28.16b, v4.16b, v20.16b
+ bic v29.16b, v29.16b, v20.16b
+ bic v30.16b, v30.16b, v20.16b
+ bic v31.16b, v31.16b, v20.16b
+ eor v24.16b, v24.16b, v21.16b
+ eor v25.16b, v25.16b, v21.16b
+ eor v26.16b, v26.16b, v21.16b
+ eor v27.16b, v27.16b, v21.16b
+ eor v28.16b, v28.16b, v21.16b
+ eor v29.16b, v29.16b, v21.16b
+ eor v30.16b, v30.16b, v21.16b
+ eor v31.16b, v31.16b, v21.16b
+ cmeq v24.4s, v24.4s, #0
+ cmeq v25.4s, v25.4s, #0
+ cmeq v26.4s, v26.4s, #0
+ cmeq v27.4s, v27.4s, #0
+ // Drop through...
+2: mov v0.16b, v5.16b
+ ld1 {v1.16b, v2.16b}, [x0], #32
+ cmeq v28.4s, v28.4s, #0
+ cmeq v29.4s, v29.4s, #0
+ cmeq v30.4s, v30.4s, #0
+ cmeq v31.4s, v31.4s, #0
+ orr v24.16b, v24.16b, v25.16b
+ orr v26.16b, v26.16b, v27.16b
+ orr v28.16b, v28.16b, v29.16b
+ orr v30.16b, v30.16b, v31.16b
+ ext v25.16b, v0.16b, v1.16b, #1
+ orr v22.16b, v24.16b, v26.16b
+ ext v26.16b, v0.16b, v1.16b, #2
+ ext v27.16b, v0.16b, v1.16b, #3
+ ext v29.16b, v1.16b, v2.16b, #1
+ orr v23.16b, v28.16b, v30.16b
+ ext v30.16b, v1.16b, v2.16b, #2
+ ext v31.16b, v1.16b, v2.16b, #3
+ bic v24.16b, v0.16b, v20.16b
+ bic v25.16b, v25.16b, v20.16b
+ bic v26.16b, v26.16b, v20.16b
+ orr v22.16b, v22.16b, v23.16b
+ bic v27.16b, v27.16b, v20.16b
+ bic v28.16b, v1.16b, v20.16b
+ bic v29.16b, v29.16b, v20.16b
+ bic v30.16b, v30.16b, v20.16b
+ bic v31.16b, v31.16b, v20.16b
+ addv s22, v22.4s
+ eor v24.16b, v24.16b, v21.16b
+ eor v25.16b, v25.16b, v21.16b
+ eor v26.16b, v26.16b, v21.16b
+ eor v27.16b, v27.16b, v21.16b
+ eor v28.16b, v28.16b, v21.16b
+ mov w3, v22.s[0]
+ eor v29.16b, v29.16b, v21.16b
+ eor v30.16b, v30.16b, v21.16b
+ eor v31.16b, v31.16b, v21.16b
+ cmeq v24.4s, v24.4s, #0
+ cmeq v25.4s, v25.4s, #0
+ cmeq v26.4s, v26.4s, #0
+ cmeq v27.4s, v27.4s, #0
+ cbnz w3, 90f
+ st1 {v3.16b, v4.16b}, [x2], #32
+3: mov v3.16b, v2.16b
+ ld1 {v4.16b, v5.16b}, [x0], #32
+ cmeq v28.4s, v28.4s, #0
+ cmeq v29.4s, v29.4s, #0
+ cmeq v30.4s, v30.4s, #0
+ cmeq v31.4s, v31.4s, #0
+ orr v24.16b, v24.16b, v25.16b
+ orr v26.16b, v26.16b, v27.16b
+ orr v28.16b, v28.16b, v29.16b
+ orr v30.16b, v30.16b, v31.16b
+ ext v25.16b, v3.16b, v4.16b, #1
+ orr v22.16b, v24.16b, v26.16b
+ ext v26.16b, v3.16b, v4.16b, #2
+ ext v27.16b, v3.16b, v4.16b, #3
+ ext v29.16b, v4.16b, v5.16b, #1
+ orr v23.16b, v28.16b, v30.16b
+ ext v30.16b, v4.16b, v5.16b, #2
+ ext v31.16b, v4.16b, v5.16b, #3
+ bic v24.16b, v3.16b, v20.16b
+ bic v25.16b, v25.16b, v20.16b
+ bic v26.16b, v26.16b, v20.16b
+ orr v22.16b, v22.16b, v23.16b
+ bic v27.16b, v27.16b, v20.16b
+ bic v28.16b, v4.16b, v20.16b
+ bic v29.16b, v29.16b, v20.16b
+ bic v30.16b, v30.16b, v20.16b
+ bic v31.16b, v31.16b, v20.16b
+ addv s22, v22.4s
+ eor v24.16b, v24.16b, v21.16b
+ eor v25.16b, v25.16b, v21.16b
+ eor v26.16b, v26.16b, v21.16b
+ eor v27.16b, v27.16b, v21.16b
+ eor v28.16b, v28.16b, v21.16b
+ mov w3, v22.s[0]
+ eor v29.16b, v29.16b, v21.16b
+ eor v30.16b, v30.16b, v21.16b
+ eor v31.16b, v31.16b, v21.16b
+ cmeq v24.4s, v24.4s, #0
+ cmeq v25.4s, v25.4s, #0
+ cmeq v26.4s, v26.4s, #0
+ cmeq v27.4s, v27.4s, #0
+ cbnz w3, 91f
+ st1 {v0.16b, v1.16b}, [x2], #32
+ subs w1, w1, #64
+ b.pl 2b
+
+90: add w0, w1, #80
+ ret
+
+91: sub w1, w1, #32
+ b 90b
+endfunc
--
2.25.1
* [FFmpeg-devel] [PATCH v3 10/10] avcodec/vc1: Arm 32-bit NEON unescape fast path
2022-03-31 17:23 [FFmpeg-devel] [PATCH v3 00/10] avcodec/vc1: Arm optimisations Ben Avison
` (8 preceding siblings ...)
2022-03-31 17:23 ` [FFmpeg-devel] [PATCH v3 09/10] avcodec/vc1: Arm 64-bit NEON unescape fast path Ben Avison
@ 2022-03-31 17:23 ` Ben Avison
2022-03-31 21:50 ` [FFmpeg-devel] [PATCH v3 00/10] avcodec/vc1: Arm optimisations Martin Storsjö
10 siblings, 0 replies; 13+ messages in thread
From: Ben Avison @ 2022-03-31 17:23 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Ben Avison
checkasm benchmarks on 1.5 GHz Cortex-A72 are as follows.
vc1dsp.vc1_unescape_buffer_c: 918624.7
vc1dsp.vc1_unescape_buffer_neon: 142958.0
Signed-off-by: Ben Avison <bavison@riscosopen.org>
---
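As an aid to review: the C wrapper is identical to the one in the AArch64
patch; only the assembly helper differs, working in 16-byte granules since
AArch32 NEON has half as many 128-bit registers. The helper's contract is
unchanged; a hedged model of the wrapper's use of it:

    int remaining = ff_vc1_unescape_buffer_helper_neon(src, size, dst);
    // size - remaining is a multiple of 16, and no escape sequence
    // starts within the bytes already copied to dst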
libavcodec/arm/vc1dsp_init_neon.c | 61 +++++++++++++++
libavcodec/arm/vc1dsp_neon.S | 118 ++++++++++++++++++++++++++++++
2 files changed, 179 insertions(+)
diff --git a/libavcodec/arm/vc1dsp_init_neon.c b/libavcodec/arm/vc1dsp_init_neon.c
index f5f5c702d7..48cb816b70 100644
--- a/libavcodec/arm/vc1dsp_init_neon.c
+++ b/libavcodec/arm/vc1dsp_init_neon.c
@@ -19,6 +19,7 @@
#include <stdint.h>
#include "libavutil/attributes.h"
+#include "libavutil/intreadwrite.h"
#include "libavcodec/vc1dsp.h"
#include "vc1dsp.h"
@@ -84,6 +85,64 @@ void ff_put_vc1_chroma_mc4_neon(uint8_t *dst, uint8_t *src, ptrdiff_t stride,
void ff_avg_vc1_chroma_mc4_neon(uint8_t *dst, uint8_t *src, ptrdiff_t stride,
int h, int x, int y);
+int ff_vc1_unescape_buffer_helper_neon(const uint8_t *src, int size, uint8_t *dst);
+
+static int vc1_unescape_buffer_neon(const uint8_t *src, int size, uint8_t *dst)
+{
+ /* Dealing with starting and stopping, and removing escape bytes, are
+ * comparatively less time-sensitive, so are more clearly expressed using
+ * a C wrapper around the assembly inner loop. Note that we assume a
+ * little-endian machine that supports unaligned loads. */
+ int dsize = 0;
+ while (size >= 4)
+ {
+ int found = 0;
+ while (!found && (((uintptr_t) dst) & 7) && size >= 4)
+ {
+ found = (AV_RL32(src) &~ 0x03000000) == 0x00030000;
+ if (!found)
+ {
+ *dst++ = *src++;
+ --size;
+ ++dsize;
+ }
+ }
+ if (!found)
+ {
+ int skip = size - ff_vc1_unescape_buffer_helper_neon(src, size, dst);
+ dst += skip;
+ src += skip;
+ size -= skip;
+ dsize += skip;
+ while (!found && size >= 4)
+ {
+ found = (AV_RL32(src) &~ 0x03000000) == 0x00030000;
+ if (!found)
+ {
+ *dst++ = *src++;
+ --size;
+ ++dsize;
+ }
+ }
+ }
+ if (found)
+ {
+ *dst++ = *src++;
+ *dst++ = *src++;
+ ++src;
+ size -= 3;
+ dsize += 2;
+ }
+ }
+ while (size > 0)
+ {
+ *dst++ = *src++;
+ --size;
+ ++dsize;
+ }
+ return dsize;
+}
+
#define FN_ASSIGN(X, Y) \
dsp->put_vc1_mspel_pixels_tab[0][X+4*Y] = ff_put_vc1_mspel_mc##X##Y##_16_neon; \
dsp->put_vc1_mspel_pixels_tab[1][X+4*Y] = ff_put_vc1_mspel_mc##X##Y##_neon
@@ -130,4 +189,6 @@ av_cold void ff_vc1dsp_init_neon(VC1DSPContext *dsp)
dsp->avg_no_rnd_vc1_chroma_pixels_tab[0] = ff_avg_vc1_chroma_mc8_neon;
dsp->put_no_rnd_vc1_chroma_pixels_tab[1] = ff_put_vc1_chroma_mc4_neon;
dsp->avg_no_rnd_vc1_chroma_pixels_tab[1] = ff_avg_vc1_chroma_mc4_neon;
+
+ dsp->vc1_unescape_buffer = vc1_unescape_buffer_neon;
}
diff --git a/libavcodec/arm/vc1dsp_neon.S b/libavcodec/arm/vc1dsp_neon.S
index ba54221ef6..96014fbebc 100644
--- a/libavcodec/arm/vc1dsp_neon.S
+++ b/libavcodec/arm/vc1dsp_neon.S
@@ -1804,3 +1804,121 @@ function ff_vc1_h_loop_filter16_neon, export=1
4: vpop {d8-d15}
pop {r4-r6,pc}
endfunc
+
+@ Copy at most the specified number of bytes from source to destination buffer,
+@ stopping at a multiple of 16 bytes, none of which are the start of an escape sequence
+@ On entry:
+@ r0 -> source buffer
+@ r1 = max number of bytes to copy
+@ r2 -> destination buffer, optimally 8-byte aligned
+@ On exit:
+@ r0 = number of bytes not copied
+function ff_vc1_unescape_buffer_helper_neon, export=1
+ @ Offset by 48 to screen out cases that are too short for us to handle,
+ @ and also make it easy to test for loop termination, or to determine
+ @ whether we need an odd number of half-iterations of the loop.
+ subs r1, r1, #48
+ bmi 90f
+
+ @ Set up useful constants
+ vmov.i32 q0, #0x3000000
+ vmov.i32 q1, #0x30000
+
+ tst r1, #16
+ bne 1f
+
+ vld1.8 {q8, q9}, [r0]!
+ vbic q12, q8, q0
+ vext.8 q13, q8, q9, #1
+ vext.8 q14, q8, q9, #2
+ vext.8 q15, q8, q9, #3
+ veor q12, q12, q1
+ vbic q13, q13, q0
+ vbic q14, q14, q0
+ vbic q15, q15, q0
+ vceq.i32 q12, q12, #0
+ veor q13, q13, q1
+ veor q14, q14, q1
+ veor q15, q15, q1
+ vceq.i32 q13, q13, #0
+ vceq.i32 q14, q14, #0
+ vceq.i32 q15, q15, #0
+ add r1, r1, #16
+ b 3f
+
+1: vld1.8 {q10, q11}, [r0]!
+ vbic q12, q10, q0
+ vext.8 q13, q10, q11, #1
+ vext.8 q14, q10, q11, #2
+ vext.8 q15, q10, q11, #3
+ veor q12, q12, q1
+ vbic q13, q13, q0
+ vbic q14, q14, q0
+ vbic q15, q15, q0
+ vceq.i32 q12, q12, #0
+ veor q13, q13, q1
+ veor q14, q14, q1
+ veor q15, q15, q1
+ vceq.i32 q13, q13, #0
+ vceq.i32 q14, q14, #0
+ vceq.i32 q15, q15, #0
+ @ Drop through...
+2: vmov q8, q11
+ vld1.8 {q9}, [r0]!
+ vorr q13, q12, q13
+ vorr q15, q14, q15
+ vbic q12, q8, q0
+ vorr q3, q13, q15
+ vext.8 q13, q8, q9, #1
+ vext.8 q14, q8, q9, #2
+ vext.8 q15, q8, q9, #3
+ veor q12, q12, q1
+ vorr d6, d6, d7
+ vbic q13, q13, q0
+ vbic q14, q14, q0
+ vbic q15, q15, q0
+ vceq.i32 q12, q12, #0
+ vmov r3, r12, d6
+ veor q13, q13, q1
+ veor q14, q14, q1
+ veor q15, q15, q1
+ vceq.i32 q13, q13, #0
+ vceq.i32 q14, q14, #0
+ vceq.i32 q15, q15, #0
+ orrs r3, r3, r12
+ bne 90f
+ vst1.64 {q10}, [r2]!
+3: vmov q10, q9
+ vld1.8 {q11}, [r0]!
+ vorr q13, q12, q13
+ vorr q15, q14, q15
+ vbic q12, q10, q0
+ vorr q3, q13, q15
+ vext.8 q13, q10, q11, #1
+ vext.8 q14, q10, q11, #2
+ vext.8 q15, q10, q11, #3
+ veor q12, q12, q1
+ vorr d6, d6, d7
+ vbic q13, q13, q0
+ vbic q14, q14, q0
+ vbic q15, q15, q0
+ vceq.i32 q12, q12, #0
+ vmov r3, r12, d6
+ veor q13, q13, q1
+ veor q14, q14, q1
+ veor q15, q15, q1
+ vceq.i32 q13, q13, #0
+ vceq.i32 q14, q14, #0
+ vceq.i32 q15, q15, #0
+ orrs r3, r3, r12
+ bne 91f
+ vst1.64 {q8}, [r2]!
+ subs r1, r1, #32
+ bpl 2b
+
+90: add r0, r1, #48
+ bx lr
+
+91: sub r1, r1, #16
+ b 90b
+endfunc
--
2.25.1
* Re: [FFmpeg-devel] [PATCH v3 00/10] avcodec/vc1: Arm optimisations
2022-03-31 17:23 [FFmpeg-devel] [PATCH v3 00/10] avcodec/vc1: Arm optimisations Ben Avison
` (9 preceding siblings ...)
2022-03-31 17:23 ` [FFmpeg-devel] [PATCH v3 10/10] avcodec/vc1: Arm 32-bit " Ben Avison
@ 2022-03-31 21:50 ` Martin Storsjö
2022-04-01 7:08 ` Martin Storsjö
10 siblings, 1 reply; 13+ messages in thread
From: Martin Storsjö @ 2022-03-31 21:50 UTC (permalink / raw)
To: FFmpeg development discussions and patches; +Cc: Ben Avison
On Thu, 31 Mar 2022, Ben Avison wrote:
> The VC1 decoder was missing lots of important fast paths for Arm, especially
> for 64-bit Arm. This submission fills in implementations for all functions
> where a fast path already existed and the fallback C implementation was
> taking 1% or more of the runtime, and adds a new fast path to permit
> vc1_unescape_buffer() to be overridden.
>
> I've measured the playback speed on a 1.5 GHz Cortex-A72 (Raspberry Pi 4)
> using `ffmpeg -i <bitstream> -f null -` for a couple of example streams:
>
> Architecture: AArch32 AArch32 AArch64 AArch64
> Stream: 1 2 1 2
> Before speed: 1.22x 0.82x 1.00x 0.67x
> After speed: 1.31x 0.98x 1.39x 1.06x
> Improvement: 7.4% 20% 39% 58%
>
> `make fate` passes on both AArch32 and AArch64.
>
> Changes in v2:
>
> * Refactor checkasm tests to convert some macros into functions.
> * Remove cast-to-void of checked_call.
> * Limit 16-bit values in idctdsp checkasm test to +/-0x100.
> * Reinstate ff_add_pixels_clamped_arm.
> * Adapt vc1 deblocking filters to specify stride as ptrdiff_t.
> * Add align specifiers to a few VLD/VST instructions for AArch32 deblocking
> filter, and adapt checkasm test not to test with tighter alignment than is
> encountered in normal use.
> * Correct unescape buffer memcmp length.
> * Update benchmarks for AArch64 idctdsp.
Thanks! From a quick readthrough, this version of the patchset seems good
to me! I'll run it through some more testing, and push it if everything
seems to work fine (tomorrow or so).
// Martin
* Re: [FFmpeg-devel] [PATCH v3 00/10] avcodec/vc1: Arm optimisations
2022-03-31 21:50 ` [FFmpeg-devel] [PATCH v3 00/10] avcodec/vc1: Arm optimisations Martin Storsjö
@ 2022-04-01 7:08 ` Martin Storsjö
0 siblings, 0 replies; 13+ messages in thread
From: Martin Storsjö @ 2022-04-01 7:08 UTC (permalink / raw)
To: FFmpeg development discussions and patches; +Cc: Ben Avison
On Fri, 1 Apr 2022, Martin Storsjö wrote:
> On Thu, 31 Mar 2022, Ben Avison wrote:
>
>> [...]
>
> Thanks! From a quick readthrough, this version of the patchset seems good to
> me! I'll run it through some more testing, and push it if everything seems to
> work fine (tomorrow or so).
Pushed now - thanks for your contribution!
// Martin