From mboxrd@z Thu Jan 1 00:00:00 1970
From: Arnie Chang <arnie.chang@sifive.com>
To: ffmpeg-devel@ffmpeg.org
Cc: Arnie Chang <arnie.chang@sifive.com>
Date: Wed, 17 May 2023 18:24:25 +0800
Message-Id: <20230517102425.4402-1-arnie.chang@sifive.com>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="------------text/x-diff"
Subject: [FFmpeg-devel] [PATCH v2] lavc/h264chroma: RISC-V V add motion compensation for 8x8 chroma blocks
List-Id: FFmpeg development discussions and patches
Reply-To: FFmpeg development discussions and patches <ffmpeg-devel@ffmpeg.org>

This is a multi-part message in MIME format.
--------------text/x-diff
Content-Type: text/plain; charset=UTF-8; format=fixed
Content-Transfer-Encoding: 8bit

Optimize the put and avg filtering for 8x8 chroma blocks

Signed-off-by: Arnie Chang <arnie.chang@sifive.com>
---
 libavcodec/h264chroma.c                   |   2 +
 libavcodec/h264chroma.h                   |   1 +
 libavcodec/riscv/Makefile                 |   3 +
 libavcodec/riscv/h264_chroma_init_riscv.c |  39 ++
 libavcodec/riscv/h264_mc_chroma.S         | 492 ++++++++++++++++++++++
 libavcodec/riscv/h264_mc_chroma.h         |  34 ++
 6 files changed, 571 insertions(+)
 create mode 100644 libavcodec/riscv/h264_chroma_init_riscv.c
 create mode 100644 libavcodec/riscv/h264_mc_chroma.S
 create mode 100644 libavcodec/riscv/h264_mc_chroma.h

--------------text/x-diff
Content-Type: text/x-patch; name="v2-0001-lavc-h264chroma-RISC-V-V-add-motion-compensation-.patch"
Content-Transfer-Encoding: 8bit
Content-Disposition: attachment; filename="v2-0001-lavc-h264chroma-RISC-V-V-add-motion-compensation-.patch"

diff --git a/libavcodec/h264chroma.c b/libavcodec/h264chroma.c
index 60b86b6fba..1eeab7bc40 100644
--- a/libavcodec/h264chroma.c
+++ b/libavcodec/h264chroma.c
@@ -58,5 +58,7 @@ av_cold void ff_h264chroma_init(H264ChromaContext *c, int bit_depth)
         ff_h264chroma_init_mips(c, bit_depth);
 #elif ARCH_LOONGARCH64
         ff_h264chroma_init_loongarch(c, bit_depth);
+#elif ARCH_RISCV
+        ff_h264chroma_init_riscv(c, bit_depth);
 #endif
 }
diff --git a/libavcodec/h264chroma.h b/libavcodec/h264chroma.h
index b8f9c8f4fc..9c81c18a76 100644
--- a/libavcodec/h264chroma.h
+++ b/libavcodec/h264chroma.h
@@ -37,5 +37,6 @@ void ff_h264chroma_init_ppc(H264ChromaContext *c, int bit_depth);
 void ff_h264chroma_init_x86(H264ChromaContext *c, int bit_depth);
 void ff_h264chroma_init_mips(H264ChromaContext *c, int bit_depth);
 void ff_h264chroma_init_loongarch(H264ChromaContext *c, int bit_depth);
+void ff_h264chroma_init_riscv(H264ChromaContext *c, int bit_depth);
 
 #endif /* AVCODEC_H264CHROMA_H */
diff --git a/libavcodec/riscv/Makefile b/libavcodec/riscv/Makefile
index 965942f4df..08b76c93cb 100644
--- a/libavcodec/riscv/Makefile
+++ b/libavcodec/riscv/Makefile
@@ -19,3 +19,6 @@ OBJS-$(CONFIG_PIXBLOCKDSP) += riscv/pixblockdsp_init.o \
 RVV-OBJS-$(CONFIG_PIXBLOCKDSP) += riscv/pixblockdsp_rvv.o
 OBJS-$(CONFIG_VORBIS_DECODER) += riscv/vorbisdsp_init.o
 RVV-OBJS-$(CONFIG_VORBIS_DECODER) += riscv/vorbisdsp_rvv.o
+
+OBJS-$(CONFIG_H264CHROMA) += riscv/h264_chroma_init_riscv.o
+RVV-OBJS-$(CONFIG_H264CHROMA) += riscv/h264_mc_chroma.o
diff --git a/libavcodec/riscv/h264_chroma_init_riscv.c b/libavcodec/riscv/h264_chroma_init_riscv.c
new file mode 100644
index 0000000000..b6f98ba693
--- /dev/null
+++ b/libavcodec/riscv/h264_chroma_init_riscv.c
@@ -0,0 +1,39 @@
+/*
+ * Copyright (c) 2023 SiFive, Inc. All rights reserved.
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#include <stdint.h>
+
+#include "libavutil/attributes.h"
+#include "libavutil/cpu.h"
+#include "libavcodec/h264chroma.h"
+#include "config.h"
+#include "h264_mc_chroma.h"
+
+av_cold void ff_h264chroma_init_riscv(H264ChromaContext *c, int bit_depth)
+{
+#if HAVE_RVV
+    const int high_bit_depth = bit_depth > 8;
+
+    if (!high_bit_depth) {
+        c->put_h264_chroma_pixels_tab[0] = h264_put_chroma_mc8_rvv;
+        c->avg_h264_chroma_pixels_tab[0] = h264_avg_chroma_mc8_rvv;
+    }
+#endif
+}
\ No newline at end of file
diff --git a/libavcodec/riscv/h264_mc_chroma.S b/libavcodec/riscv/h264_mc_chroma.S
new file mode 100644
index 0000000000..a02866f633
--- /dev/null
+++ b/libavcodec/riscv/h264_mc_chroma.S
@@ -0,0 +1,492 @@
+/*
+ * Copyright (c) 2023 SiFive, Inc. All rights reserved.
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+        .text
+
+        .globl  h264_put_chroma_mc8_rvv
+        .p2align        1
+        .type   h264_put_chroma_mc8_rvv,@function
+h264_put_chroma_mc8_rvv:
+        slliw t2, a5, 3
+        mulw t1, a5, a4
+        sh3add a5, a4, t2
+        slliw a4, a4, 3
+        subw a5, t1, a5
+        subw a7, a4, t1
+        addiw a6, a5, 64
+        subw t0, t2, t1
+        vsetivli t3, 8, e8, m1, ta, mu
+        beqz t1, .LBB0_4
+        blez a3, .LBB0_17
+        li t4, 0
+        li t2, 0
+        addi a5, t3, 1
+        slli t3, a2, 2
+.LBB0_3:        # if (xy != 0)
+        add a4, a1, t4
+        vsetvli zero, a5, e8, m1, ta, ma
+        addiw t2, t2, 4
+        vle8.v v10, (a4)
+        add a4, a4, a2
+        vslidedown.vi v11, v10, 1
+        vsetivli zero, 8, e8, m1, ta, ma
+        vwmulu.vx v8, v10, a6
+        vwmaccu.vx v8, a7, v11
+        vsetvli zero, a5, e8, m1, ta, ma
+        vle8.v v12, (a4)
+        vsetivli zero, 8, e8, m1, ta, ma
+        add a4, a4, a2
+        vwmaccu.vx v8, t0, v12
+        vsetvli zero, a5, e8, m1, ta, ma
+        vslidedown.vi v13, v12, 1
+        vsetivli zero, 8, e8, m1, ta, ma
+        vwmulu.vx v10, v12, a6
+        vwmaccu.vx v8, t1, v13
+        vwmaccu.vx v10, a7, v13
+        vsetvli zero, a5, e8, m1, ta, ma
+        vle8.v v14, (a4)
+        vsetivli zero, 8, e8, m1, ta, ma
+        add a4, a4, a2
+        vwmaccu.vx v10, t0, v14
+        vsetvli zero, a5, e8, m1, ta, ma
+        vslidedown.vi v15, v14, 1
+        vsetivli zero, 8, e8, m1, ta, ma
+        vwmulu.vx v12, v14, a6
+        vwmaccu.vx v10, t1, v15
+        vwmaccu.vx v12, a7, v15
+        vsetvli zero, a5, e8, m1, ta, ma
+        vle8.v v14, (a4)
+        vsetivli zero, 8, e8, m1, ta, ma
+        add a4, a4, a2
+        vwmaccu.vx v12, t0, v14
+        vsetvli zero, a5, e8, m1, ta, ma
+        vslidedown.vi v15, v14, 1
+        vsetivli zero, 8, e8, m1, ta, ma
+        vwmulu.vx v16, v14, a6
+        vwmaccu.vx v12, t1, v15
+        vwmaccu.vx v16, a7, v15
+        vsetvli zero, a5, e8, m1, ta, ma
+        vle8.v v14, (a4)
+        vsetivli zero, 8, e8, m1, ta, ma
+        add a4, a0, t4
+        add t4, t4, t3
+        vwmaccu.vx v16, t0, v14
+        vsetvli zero, a5, e8, m1, ta, ma
+        vslidedown.vi v14, v14, 1
+        vsetivli zero, 8, e8, m1, ta, ma
+        vnclipu.wi v15, v8, 6
+        vwmaccu.vx v16, t1, v14
+        vse8.v v15, (a4)
+        add a4, a4, a2
+        vnclipu.wi v8, v10, 6
+        vse8.v v8, (a4)
+        add a4, a4, a2
+        vnclipu.wi v8, v12, 6
+        vse8.v v8, (a4)
+        add a4, a4, a2
+        vnclipu.wi v8, v16, 6
+        vse8.v v8, (a4)
+        blt t2, a3, .LBB0_3
+        j .LBB0_17
+.LBB0_4:
+        bnez a4, .LBB0_9
+        beqz t2, .LBB0_9
+        blez a3, .LBB0_17
+        li a4, 0
+        li t1, 0
+        slli a7, a2, 2
+.LBB0_8:        # if ((x8 - xy) == 0 && (y8 - xy) != 0)
+        add a5, a1, a4
+        vsetvli zero, zero, e8, m1, ta, ma
+        addiw t1, t1, 4
+        vle8.v v8, (a5)
+        add a5, a5, a2
+        add t2, a5, a2
+        vwmulu.vx v10, v8, a6
+        vle8.v v8, (a5)
+        vwmulu.vx v12, v8, a6
+        vle8.v v9, (t2)
+        add t2, t2, a2
+        add a5, t2, a2
+        vwmaccu.vx v10, t0, v8
+        vle8.v v8, (t2)
+        vle8.v v14, (a5)
+        add a5, a0, a4
+        add a4, a4, a7
+        vwmaccu.vx v12, t0, v9
+        vnclipu.wi v15, v10, 6
+        vwmulu.vx v10, v9, a6
+        vse8.v v15, (a5)
+        add a5, a5, a2
+        vnclipu.wi v9, v12, 6
+        vwmaccu.vx v10, t0, v8
+        vwmulu.vx v12, v8, a6
+        vse8.v v9, (a5)
+        add a5, a5, a2
+        vnclipu.wi v8, v10, 6
+        vwmaccu.vx v12, t0, v14
+        vse8.v v8, (a5)
+        add a5, a5, a2
+        vnclipu.wi v8, v12, 6
+        vse8.v v8, (a5)
+        blt t1, a3, .LBB0_8
+        j .LBB0_17
+.LBB0_9:
+        beqz a4, .LBB0_14
+        bnez t2, .LBB0_14
+        blez a3, .LBB0_17
+        li a4, 0
+        li t2, 0
+        addi t0, t3, 1
+        slli t1, a2, 2
+.LBB0_13:        # if ((x8 - xy) != 0 && (y8 - xy) == 0)
+        add a5, a1, a4
+        vsetvli zero, t0, e8, m1, ta, ma
+        addiw t2, t2, 4
+        vle8.v v8, (a5)
+        add a5, a5, a2
+        vslidedown.vi v9, v8, 1
+        vsetivli zero, 8, e8, m1, ta, ma
+        vwmulu.vx v10, v8, a6
+        vwmaccu.vx v10, a7, v9
+        vsetvli zero, t0, e8, m1, ta, ma
+        vle8.v v8, (a5)
+        add a5, a5, a2
+        vslidedown.vi v9, v8, 1
+        vsetivli zero, 8, e8, m1, ta, ma
+        vwmulu.vx v12, v8, a6
+        vwmaccu.vx v12, a7, v9
+        vsetvli zero, t0, e8, m1, ta, ma
+        vle8.v v8, (a5)
+        add a5, a5, a2
+        vslidedown.vi v9, v8, 1
+        vsetivli zero, 8, e8, m1, ta, ma
+        vwmulu.vx v14, v8, a6
+        vwmaccu.vx v14, a7, v9
+        vsetvli zero, t0, e8, m1, ta, ma
+        vle8.v v8, (a5)
+        add a5, a0, a4
+        add a4, a4, t1
+        vslidedown.vi v9, v8, 1
+        vsetivli zero, 8, e8, m1, ta, ma
+        vnclipu.wi v16, v10, 6
+        vse8.v v16, (a5)
+        add a5, a5, a2
+        vnclipu.wi v10, v12, 6
+        vwmulu.vx v12, v8, a6
+        vse8.v v10, (a5)
+        add a5, a5, a2
+        vnclipu.wi v8, v14, 6
+        vwmaccu.vx v12, a7, v9
+        vse8.v v8, (a5)
+        add a5, a5, a2
+        vnclipu.wi v8, v12, 6
+        vse8.v v8, (a5)
+        blt t2, a3, .LBB0_13
+        j .LBB0_17
+.LBB0_14:
+        blez a3, .LBB0_17
+        li a4, 0
+        li t2, 0
+        slli a7, a2, 2
+.LBB0_16:        # the final else, none of the above conditions are met
+        add t0, a1, a4
+        vsetvli zero, zero, e8, m1, ta, ma
+        add a5, a0, a4
+        add a4, a4, a7
+        addiw t2, t2, 4
+        vle8.v v8, (t0)
+        add t0, t0, a2
+        add t1, t0, a2
+        vwmulu.vx v10, v8, a6
+        vle8.v v8, (t0)
+        add t0, t1, a2
+        vle8.v v9, (t1)
+        vle8.v v12, (t0)
+        vnclipu.wi v13, v10, 6
+        vwmulu.vx v10, v8, a6
+        vse8.v v13, (a5)
+        add a5, a5, a2
+        vnclipu.wi v8, v10, 6
+        vwmulu.vx v10, v9, a6
+        vse8.v v8, (a5)
+        add a5, a5, a2
+        vnclipu.wi v8, v10, 6
+        vwmulu.vx v10, v12, a6
+        vse8.v v8, (a5)
+        add a5, a5, a2
+        vnclipu.wi v8, v10, 6
+        vse8.v v8, (a5)
+        blt t2, a3, .LBB0_16
+.LBB0_17:        # Exit h264_put_chroma_mc8_rvv
+        ret
+.Lfunc_end0:
+        .size h264_put_chroma_mc8_rvv, .Lfunc_end0-h264_put_chroma_mc8_rvv
+
+        .globl  h264_avg_chroma_mc8_rvv
+        .p2align        1
+        .type   h264_avg_chroma_mc8_rvv,@function
+h264_avg_chroma_mc8_rvv:
+        slliw t2, a5, 3
+        mulw t1, a5, a4
+        sh3add a5, a4, t2
+        slliw a4, a4, 3
+        subw a5, t1, a5
+        subw a7, a4, t1
+        addiw a6, a5, 64
+        subw t0, t2, t1
+        vsetivli t3, 8, e8, m1, ta, mu
+        beqz t1, .LBB1_4
+        blez a3, .LBB1_17
+        li t4, 0
+        li t2, 0
+        addi a5, t3, 1
+        slli t3, a2, 2
+.LBB1_3:        # if (xy != 0)
+        add a4, a1, t4
+        vsetvli zero, a5, e8, m1, ta, ma
+        addiw t2, t2, 4
+        vle8.v v10, (a4)
+        add a4, a4, a2
+        vslidedown.vi v11, v10, 1
+        vsetivli zero, 8, e8, m1, ta, ma
+        vwmulu.vx v8, v10, a6
+        vwmaccu.vx v8, a7, v11
+        vsetvli zero, a5, e8, m1, ta, ma
+        vle8.v v12, (a4)
+        vsetivli zero, 8, e8, m1, ta, ma
+        add a4, a4, a2
+        vwmaccu.vx v8, t0, v12
+        vsetvli zero, a5, e8, m1, ta, ma
+        vslidedown.vi v13, v12, 1
+        vsetivli zero, 8, e8, m1, ta, ma
+        vwmulu.vx v10, v12, a6
+        vwmaccu.vx v8, t1, v13
+        vwmaccu.vx v10, a7, v13
+        vsetvli zero, a5, e8, m1, ta, ma
+        vle8.v v14, (a4)
+        vsetivli zero, 8, e8, m1, ta, ma
+        add a4, a4, a2
+        vwmaccu.vx v10, t0, v14
+        vsetvli zero, a5, e8, m1, ta, ma
+        vslidedown.vi v15, v14, 1
+        vsetivli zero, 8, e8, m1, ta, ma
+        vwmulu.vx v12, v14, a6
+        vwmaccu.vx v10, t1, v15
+        vwmaccu.vx v12, a7, v15
+        vsetvli zero, a5, e8, m1, ta, ma
+        vle8.v v14, (a4)
+        vsetivli zero, 8, e8, m1, ta, ma
+        add a4, a4, a2
+        vwmaccu.vx v12, t0, v14
+        vsetvli zero, a5, e8, m1, ta, ma
+        vslidedown.vi v15, v14, 1
+        vsetivli zero, 8, e8, m1, ta, ma
+        vwmulu.vx v16, v14, a6
+        vwmaccu.vx v12, t1, v15
+        vwmaccu.vx v16, a7, v15
+        vsetvli zero, a5, e8, m1, ta, ma
+        vle8.v v14, (a4)
+        vsetivli zero, 8, e8, m1, ta, ma
+        add a4, a0, t4
+        add t4, t4, t3
+        vwmaccu.vx v16, t0, v14
+        vsetvli zero, a5, e8, m1, ta, ma
+        vslidedown.vi v14, v14, 1
+        vsetivli zero, 8, e8, m1, ta, ma
+        vnclipu.wi v15, v8, 6
+        vle8.v v8, (a4)
+        vwmaccu.vx v16, t1, v14
+        vaaddu.vv v8, v15, v8
+        vse8.v v8, (a4)
+        add a4, a4, a2
+        vnclipu.wi v8, v10, 6
+        vle8.v v9, (a4)
+        vaaddu.vv v8, v8, v9
+        vse8.v v8, (a4)
+        add a4, a4, a2
+        vnclipu.wi v8, v12, 6
+        vle8.v v9, (a4)
+        vaaddu.vv v8, v8, v9
+        vse8.v v8, (a4)
+        add a4, a4, a2
+        vnclipu.wi v8, v16, 6
+        vle8.v v9, (a4)
+        vaaddu.vv v8, v8, v9
+        vse8.v v8, (a4)
+        blt t2, a3, .LBB1_3
+        j .LBB1_17
+.LBB1_4:
+        bnez a4, .LBB1_9
+        beqz t2, .LBB1_9
+        blez a3, .LBB1_17
+        li t2, 0
+        li t1, 0
+        slli a7, a2, 2
+.LBB1_8:        # if ((x8 - xy) == 0 && (y8 - xy) != 0)
+        add a4, a1, t2
+        vsetvli zero, zero, e8, m1, ta, ma
+        addiw t1, t1, 4
+        vle8.v v8, (a4)
+        add a4, a4, a2
+        vwmulu.vx v10, v8, a6
+        vle8.v v8, (a4)
+        add a4, a4, a2
+        add a5, a4, a2
+        vle8.v v9, (a4)
+        add a4, a5, a2
+        vle8.v v12, (a5)
+        vwmaccu.vx v10, t0, v8
+        vle8.v v13, (a4)
+        add a4, a0, t2
+        add t2, t2, a7
+        vnclipu.wi v14, v10, 6
+        vwmulu.vx v10, v8, a6
+        vle8.v v8, (a4)
+        vaaddu.vv v8, v14, v8
+        vwmaccu.vx v10, t0, v9
+        vse8.v v8, (a4)
+        add a4, a4, a2
+        vnclipu.wi v8, v10, 6
+        vwmulu.vx v10, v9, a6
+        vle8.v v9, (a4)
+        vaaddu.vv v8, v8, v9
+        vwmaccu.vx v10, t0, v12
+        vse8.v v8, (a4)
+        add a4, a4, a2
+        vnclipu.wi v8, v10, 6
+        vwmulu.vx v10, v12, a6
+        vle8.v v9, (a4)
+        vaaddu.vv v8, v8, v9
+        vwmaccu.vx v10, t0, v13
+        vse8.v v8, (a4)
+        add a4, a4, a2
+        vnclipu.wi v8, v10, 6
+        vle8.v v9, (a4)
+        vaaddu.vv v8, v8, v9
+        vse8.v v8, (a4)
+        blt t1, a3, .LBB1_8
+        j .LBB1_17
+.LBB1_9:
+        beqz a4, .LBB1_14
+        bnez t2, .LBB1_14
+        blez a3, .LBB1_17
+        li a5, 0
+        li t2, 0
+        addi t0, t3, 1
+        slli t1, a2, 2
+.LBB1_13:        # if ((x8 - xy) != 0 && (y8 - xy) == 0)
+        add a4, a1, a5
+        vsetvli zero, t0, e8, m1, ta, ma
+        addiw t2, t2, 4
+        vle8.v v8, (a4)
+        add a4, a4, a2
+        vslidedown.vi v9, v8, 1
+        vsetivli zero, 8, e8, m1, ta, ma
+        vwmulu.vx v10, v8, a6
+        vwmaccu.vx v10, a7, v9
+        vsetvli zero, t0, e8, m1, ta, ma
+        vle8.v v8, (a4)
+        add a4, a4, a2
+        vslidedown.vi v9, v8, 1
+        vsetivli zero, 8, e8, m1, ta, ma
+        vwmulu.vx v12, v8, a6
+        vwmaccu.vx v12, a7, v9
+        vsetvli zero, t0, e8, m1, ta, ma
+        vle8.v v8, (a4)
+        add a4, a4, a2
+        vslidedown.vi v9, v8, 1
+        vsetivli zero, 8, e8, m1, ta, ma
+        vwmulu.vx v14, v8, a6
+        vwmaccu.vx v14, a7, v9
+        vsetvli zero, t0, e8, m1, ta, ma
+        vle8.v v8, (a4)
+        add a4, a0, a5
+        add a5, a5, t1
+        vslidedown.vi v9, v8, 1
+        vsetivli zero, 8, e8, m1, ta, ma
+        vnclipu.wi v16, v10, 6
+        vle8.v v10, (a4)
+        vaaddu.vv v10, v16, v10
+        vse8.v v10, (a4)
+        add a4, a4, a2
+        vnclipu.wi v10, v12, 6
+        vle8.v v11, (a4)
+        vwmulu.vx v12, v8, a6
+        vaaddu.vv v10, v10, v11
+        vwmaccu.vx v12, a7, v9
+        vse8.v v10, (a4)
+        add a4, a4, a2
+        vnclipu.wi v10, v14, 6
+        vle8.v v8, (a4)
+        vaaddu.vv v8, v10, v8
+        vse8.v v8, (a4)
+        add a4, a4, a2
+        vnclipu.wi v8, v12, 6
+        vle8.v v9, (a4)
+        vaaddu.vv v8, v8, v9
+        vse8.v v8, (a4)
+        blt t2, a3, .LBB1_13
+        j .LBB1_17
+.LBB1_14:
+        blez a3, .LBB1_17
+        li a4, 0
+        li t0, 0
+        slli a7, a2, 2
+.LBB1_16:        # the final else, none of the above conditions are met
+        add a5, a1, a4
+        vsetvli zero, zero, e8, m1, ta, ma
+        addiw t0, t0, 4
+        vle8.v v8, (a5)
+        add a5, a5, a2
+        add t1, a5, a2
+        vwmulu.vx v10, v8, a6
+        vle8.v v8, (a5)
+        add a5, t1, a2
+        vle8.v v9, (t1)
+        vle8.v v12, (a5)
+        add a5, a0, a4
+        add a4, a4, a7
+        vnclipu.wi v13, v10, 6
+        vle8.v v10, (a5)
+        vwmulu.vx v14, v8, a6
+        vaaddu.vv v10, v13, v10
+        vse8.v v10, (a5)
+        add a5, a5, a2
+        vnclipu.wi v8, v14, 6
+        vle8.v v10, (a5)
+        vaaddu.vv v8, v8, v10
+        vwmulu.vx v10, v9, a6
+        vse8.v v8, (a5)
+        add a5, a5, a2
+        vnclipu.wi v8, v10, 6
+        vle8.v v9, (a5)
+        vwmulu.vx v10, v12, a6
+        vaaddu.vv v8, v8, v9
+        vse8.v v8, (a5)
+        add a5, a5, a2
+        vnclipu.wi v8, v10, 6
+        vle8.v v9, (a5)
+        vaaddu.vv v8, v8, v9
+        vse8.v v8, (a5)
+        blt t0, a3, .LBB1_16
+.LBB1_17:        # Exit h264_avg_chroma_mc8_rvv
+        ret
+.Lfunc_end1:
+        .size h264_avg_chroma_mc8_rvv, .Lfunc_end1-h264_avg_chroma_mc8_rvv
diff --git a/libavcodec/riscv/h264_mc_chroma.h b/libavcodec/riscv/h264_mc_chroma.h
new file mode 100644
index 0000000000..cb350d0e4a
--- /dev/null
+++ b/libavcodec/riscv/h264_mc_chroma.h
@@ -0,0 +1,34 @@
+/*
+ * Copyright (c) 2023 SiFive, Inc. All rights reserved.
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+#ifndef AVCODEC_RISCV_H264_MC_CHROMA_H
+#define AVCODEC_RISCV_H264_MC_CHROMA_H
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <stddef.h>
+#include <stdint.h>
+#include "config.h"
+
+#if HAVE_RVV
+void h264_put_chroma_mc8_rvv(uint8_t *p_dst, const uint8_t *p_src, ptrdiff_t stride, int h, int x, int y);
+void h264_avg_chroma_mc8_rvv(uint8_t *p_dst, const uint8_t *p_src, ptrdiff_t stride, int h, int x, int y);
+#endif
+#endif
\ No newline at end of file

--------------text/x-diff
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: inline

_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".

--------------text/x-diff--