From mboxrd@z Thu Jan  1 00:00:00 1970
From: toqsxw@outlook.com
To: ffmpeg-devel@ffmpeg.org
Cc: Wu Jianhua
Date: Mon, 29 Apr 2024 23:24:41 +0800
Subject: [FFmpeg-devel] [PATCH 1/4] avcodec/x86/vvc: add alf filter luma and chroma avx2 optimizations
X-Mailer: git-send-email 2.44.0.windows.1
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
From: Wu Jianhua

vvc_alf_filter_chroma_4x4_10_c: 657.0
vvc_alf_filter_chroma_4x4_10_avx2: 138.0
vvc_alf_filter_chroma_4x8_10_c: 1264.7
vvc_alf_filter_chroma_4x8_10_avx2: 253.5
vvc_alf_filter_chroma_4x12_10_c: 1841.7
vvc_alf_filter_chroma_4x12_10_avx2: 375.5
vvc_alf_filter_chroma_4x16_10_c: 2442.7
vvc_alf_filter_chroma_4x16_10_avx2: 491.7
vvc_alf_filter_chroma_4x20_10_c: 3057.0
vvc_alf_filter_chroma_4x20_10_avx2: 607.2
vvc_alf_filter_chroma_4x24_10_c: 3667.0
vvc_alf_filter_chroma_4x24_10_avx2: 747.5
vvc_alf_filter_chroma_4x28_10_c: 4286.7
vvc_alf_filter_chroma_4x28_10_avx2: 849.0
vvc_alf_filter_chroma_4x32_10_c: 4886.0
vvc_alf_filter_chroma_4x32_10_avx2: 967.5
vvc_alf_filter_chroma_8x4_10_c: 1250.5
vvc_alf_filter_chroma_8x4_10_avx2: 261.0
vvc_alf_filter_chroma_8x8_10_c: 2430.7
vvc_alf_filter_chroma_8x8_10_avx2: 494.7
vvc_alf_filter_chroma_8x12_10_c: 3631.2
vvc_alf_filter_chroma_8x12_10_avx2: 734.5
vvc_alf_filter_chroma_8x16_10_c: 13675.7
vvc_alf_filter_chroma_8x16_10_avx2: 972.0
vvc_alf_filter_chroma_8x20_10_c: 6212.0
vvc_alf_filter_chroma_8x20_10_avx2: 1211.0
vvc_alf_filter_chroma_8x24_10_c: 7440.7
vvc_alf_filter_chroma_8x24_10_avx2: 1447.0
vvc_alf_filter_chroma_8x28_10_c: 8460.5
vvc_alf_filter_chroma_8x28_10_avx2: 1682.5
vvc_alf_filter_chroma_8x32_10_c: 9665.2
vvc_alf_filter_chroma_8x32_10_avx2: 1917.7
vvc_alf_filter_chroma_12x4_10_c: 1865.2
vvc_alf_filter_chroma_12x4_10_avx2: 391.7
vvc_alf_filter_chroma_12x8_10_c: 3625.2
vvc_alf_filter_chroma_12x8_10_avx2: 739.0
vvc_alf_filter_chroma_12x12_10_c: 5427.5
vvc_alf_filter_chroma_12x12_10_avx2: 1094.2
vvc_alf_filter_chroma_12x16_10_c: 7237.7
vvc_alf_filter_chroma_12x16_10_avx2: 1447.2
vvc_alf_filter_chroma_12x20_10_c: 9035.2
vvc_alf_filter_chroma_12x20_10_avx2: 1805.2
vvc_alf_filter_chroma_12x24_10_c: 11135.7
vvc_alf_filter_chroma_12x24_10_avx2: 2158.2
vvc_alf_filter_chroma_12x28_10_c: 12644.0
vvc_alf_filter_chroma_12x28_10_avx2: 2511.2
vvc_alf_filter_chroma_12x32_10_c: 14441.7
vvc_alf_filter_chroma_12x32_10_avx2: 2888.0
vvc_alf_filter_chroma_16x4_10_c: 2410.0
vvc_alf_filter_chroma_16x4_10_avx2: 251.7
vvc_alf_filter_chroma_16x8_10_c: 4943.0
vvc_alf_filter_chroma_16x8_10_avx2: 479.0
vvc_alf_filter_chroma_16x12_10_c: 7235.5
vvc_alf_filter_chroma_16x12_10_avx2: 9751.0
vvc_alf_filter_chroma_16x16_10_c: 10142.7
vvc_alf_filter_chroma_16x16_10_avx2: 935.5
vvc_alf_filter_chroma_16x20_10_c: 12029.0
vvc_alf_filter_chroma_16x20_10_avx2: 1174.5
vvc_alf_filter_chroma_16x24_10_c: 14414.2
vvc_alf_filter_chroma_16x24_10_avx2: 1410.5
vvc_alf_filter_chroma_16x28_10_c: 16813.0
vvc_alf_filter_chroma_16x28_10_avx2: 1713.0
vvc_alf_filter_chroma_16x32_10_c: 19228.5
vvc_alf_filter_chroma_16x32_10_avx2: 2256.0
vvc_alf_filter_chroma_20x4_10_c: 3015.2
vvc_alf_filter_chroma_20x4_10_avx2: 371.7
vvc_alf_filter_chroma_20x8_10_c: 6170.2
vvc_alf_filter_chroma_20x8_10_avx2: 721.0
vvc_alf_filter_chroma_20x12_10_c: 9019.7
vvc_alf_filter_chroma_20x12_10_avx2: 1102.7
vvc_alf_filter_chroma_20x16_10_c: 12040.2
vvc_alf_filter_chroma_20x16_10_avx2: 1422.5
vvc_alf_filter_chroma_20x20_10_c: 15010.7
vvc_alf_filter_chroma_20x20_10_avx2: 1765.7
vvc_alf_filter_chroma_20x24_10_c: 18017.7
vvc_alf_filter_chroma_20x24_10_avx2: 2124.7
vvc_alf_filter_chroma_20x28_10_c: 21025.5
vvc_alf_filter_chroma_20x28_10_avx2: 2488.2
vvc_alf_filter_chroma_20x32_10_c: 31128.5
vvc_alf_filter_chroma_20x32_10_avx2: 3205.2
vvc_alf_filter_chroma_24x4_10_c: 3701.2
vvc_alf_filter_chroma_24x4_10_avx2: 494.7
vvc_alf_filter_chroma_24x8_10_c: 7613.0
vvc_alf_filter_chroma_24x8_10_avx2: 957.2
vvc_alf_filter_chroma_24x12_10_c: 10816.7
vvc_alf_filter_chroma_24x12_10_avx2: 1427.7
vvc_alf_filter_chroma_24x16_10_c: 14390.5
vvc_alf_filter_chroma_24x16_10_avx2: 1948.2
vvc_alf_filter_chroma_24x20_10_c: 17989.5
vvc_alf_filter_chroma_24x20_10_avx2: 2363.7
vvc_alf_filter_chroma_24x24_10_c: 21581.7
vvc_alf_filter_chroma_24x24_10_avx2: 2839.7
vvc_alf_filter_chroma_24x28_10_c: 25179.2
vvc_alf_filter_chroma_24x28_10_avx2: 3313.2
vvc_alf_filter_chroma_24x32_10_c: 28776.2
vvc_alf_filter_chroma_24x32_10_avx2: 4154.7
vvc_alf_filter_chroma_28x4_10_c: 4331.2
vvc_alf_filter_chroma_28x4_10_avx2: 624.2
vvc_alf_filter_chroma_28x8_10_c: 8445.0
vvc_alf_filter_chroma_28x8_10_avx2: 1197.7
vvc_alf_filter_chroma_28x12_10_c: 12684.5
vvc_alf_filter_chroma_28x12_10_avx2: 1786.7
vvc_alf_filter_chroma_28x16_10_c: 16924.5
vvc_alf_filter_chroma_28x16_10_avx2: 2378.7
vvc_alf_filter_chroma_28x20_10_c: 38361.0
vvc_alf_filter_chroma_28x20_10_avx2: 2967.0
vvc_alf_filter_chroma_28x24_10_c: 25329.0
vvc_alf_filter_chroma_28x24_10_avx2: 3564.2
vvc_alf_filter_chroma_28x28_10_c: 29514.0
vvc_alf_filter_chroma_28x28_10_avx2: 4151.7
vvc_alf_filter_chroma_28x32_10_c: 33673.2
vvc_alf_filter_chroma_28x32_10_avx2: 5125.0
vvc_alf_filter_chroma_32x4_10_c: 4945.2
vvc_alf_filter_chroma_32x4_10_avx2: 485.7
vvc_alf_filter_chroma_32x8_10_c: 9658.7
vvc_alf_filter_chroma_32x8_10_avx2: 943.7
vvc_alf_filter_chroma_32x12_10_c: 16177.7
vvc_alf_filter_chroma_32x12_10_avx2: 1443.7
vvc_alf_filter_chroma_32x16_10_c: 19336.0
vvc_alf_filter_chroma_32x16_10_avx2: 1876.0
vvc_alf_filter_chroma_32x20_10_c: 24153.0
vvc_alf_filter_chroma_32x20_10_avx2: 2323.0
vvc_alf_filter_chroma_32x24_10_c: 28917.7
vvc_alf_filter_chroma_32x24_10_avx2: 2806.2
vvc_alf_filter_chroma_32x28_10_c: 33738.7
vvc_alf_filter_chroma_32x28_10_avx2: 3454.0
vvc_alf_filter_chroma_32x32_10_c: 38531.5
vvc_alf_filter_chroma_32x32_10_avx2: 4103.2
vvc_alf_filter_luma_4x4_10_c: 1076.2
vvc_alf_filter_luma_4x4_10_avx2: 240.0
vvc_alf_filter_luma_4x8_10_c: 2113.2
vvc_alf_filter_luma_4x8_10_avx2: 454.5
vvc_alf_filter_luma_4x12_10_c: 3179.2
vvc_alf_filter_luma_4x12_10_avx2: 669.0
vvc_alf_filter_luma_4x16_10_c: 4146.5
vvc_alf_filter_luma_4x16_10_avx2: 885.0
vvc_alf_filter_luma_4x20_10_c: 5168.2
vvc_alf_filter_luma_4x20_10_avx2: 1106.0
vvc_alf_filter_luma_4x24_10_c: 6168.2
vvc_alf_filter_luma_4x24_10_avx2: 1357.0
vvc_alf_filter_luma_4x28_10_c: 7330.0
vvc_alf_filter_luma_4x28_10_avx2: 1539.5
vvc_alf_filter_luma_4x32_10_c: 8202.0
vvc_alf_filter_luma_4x32_10_avx2: 1803.7
vvc_alf_filter_luma_8x4_10_c: 2100.5
vvc_alf_filter_luma_8x4_10_avx2: 479.7
vvc_alf_filter_luma_8x8_10_c: 4079.5
vvc_alf_filter_luma_8x8_10_avx2: 898.2
vvc_alf_filter_luma_8x12_10_c: 6209.2
vvc_alf_filter_luma_8x12_10_avx2: 1328.7
vvc_alf_filter_luma_8x16_10_c: 8177.5
vvc_alf_filter_luma_8x16_10_avx2: 1765.0
vvc_alf_filter_luma_8x20_10_c: 10400.5
vvc_alf_filter_luma_8x20_10_avx2: 2196.2
vvc_alf_filter_luma_8x24_10_c: 12222.7
vvc_alf_filter_luma_8x24_10_avx2: 2626.0
vvc_alf_filter_luma_8x28_10_c: 14235.5
vvc_alf_filter_luma_8x28_10_avx2: 3065.2
vvc_alf_filter_luma_8x32_10_c: 16702.2
vvc_alf_filter_luma_8x32_10_avx2: 3494.2
vvc_alf_filter_luma_12x4_10_c: 3142.0
vvc_alf_filter_luma_12x4_10_avx2: 699.5
vvc_alf_filter_luma_12x8_10_c: 6093.2
vvc_alf_filter_luma_12x8_10_avx2: 1335.5
vvc_alf_filter_luma_12x12_10_c: 9098.7
vvc_alf_filter_luma_12x12_10_avx2: 1988.5
vvc_alf_filter_luma_12x16_10_c: 12237.5
vvc_alf_filter_luma_12x16_10_avx2: 2635.0
vvc_alf_filter_luma_12x20_10_c: 15240.7
vvc_alf_filter_luma_12x20_10_avx2: 3289.5
vvc_alf_filter_luma_12x24_10_c: 18262.0
vvc_alf_filter_luma_12x24_10_avx2: 3937.2
vvc_alf_filter_luma_12x28_10_c: 21283.0
vvc_alf_filter_luma_12x28_10_avx2: 4585.2
vvc_alf_filter_luma_12x32_10_c: 24299.7
vvc_alf_filter_luma_12x32_10_avx2: 5333.5
vvc_alf_filter_luma_16x4_10_c: 5729.7
vvc_alf_filter_luma_16x4_10_avx2: 446.2
vvc_alf_filter_luma_16x8_10_c: 8256.5
vvc_alf_filter_luma_16x8_10_avx2: 876.7
vvc_alf_filter_luma_16x12_10_c: 12178.7
vvc_alf_filter_luma_16x12_10_avx2: 1332.7
vvc_alf_filter_luma_16x16_10_c: 16262.5
vvc_alf_filter_luma_16x16_10_avx2: 1734.5
vvc_alf_filter_luma_16x20_10_c: 20263.7
vvc_alf_filter_luma_16x20_10_avx2: 2147.2
vvc_alf_filter_luma_16x24_10_c: 24789.7
vvc_alf_filter_luma_16x24_10_avx2: 2591.7
vvc_alf_filter_luma_16x28_10_c: 28894.5
vvc_alf_filter_luma_16x28_10_avx2: 3228.7
vvc_alf_filter_luma_16x32_10_c: 33360.0
vvc_alf_filter_luma_16x32_10_avx2: 4117.5
vvc_alf_filter_luma_20x4_10_c: 5076.0
vvc_alf_filter_luma_20x4_10_avx2: 674.2
vvc_alf_filter_luma_20x8_10_c: 10138.2
vvc_alf_filter_luma_20x8_10_avx2: 1323.5
vvc_alf_filter_luma_20x12_10_c: 15171.5
vvc_alf_filter_luma_20x12_10_avx2: 2026.5
vvc_alf_filter_luma_20x16_10_c: 20315.0
vvc_alf_filter_luma_20x16_10_avx2: 2611.0
vvc_alf_filter_luma_20x20_10_c: 25367.0
vvc_alf_filter_luma_20x20_10_avx2: 3259.5
vvc_alf_filter_luma_20x24_10_c: 30443.5
vvc_alf_filter_luma_20x24_10_avx2: 3898.5
vvc_alf_filter_luma_20x28_10_c: 35439.7
vvc_alf_filter_luma_20x28_10_avx2: 4645.5
vvc_alf_filter_luma_20x32_10_c: 40609.0
vvc_alf_filter_luma_20x32_10_avx2: 5849.0
vvc_alf_filter_luma_24x4_10_c: 6245.5
vvc_alf_filter_luma_24x4_10_avx2: 901.2
vvc_alf_filter_luma_24x8_10_c: 12166.7
vvc_alf_filter_luma_24x8_10_avx2: 1754.7
vvc_alf_filter_luma_24x12_10_c: 18223.2
vvc_alf_filter_luma_24x12_10_avx2: 2621.5
vvc_alf_filter_luma_24x16_10_c: 24287.2
vvc_alf_filter_luma_24x16_10_avx2: 3474.2
vvc_alf_filter_luma_24x20_10_c: 38042.2
vvc_alf_filter_luma_24x20_10_avx2: 4335.7
vvc_alf_filter_luma_24x24_10_c: 36462.0
vvc_alf_filter_luma_24x24_10_avx2: 5199.5
vvc_alf_filter_luma_24x28_10_c: 42502.7
vvc_alf_filter_luma_24x28_10_avx2: 6133.5
vvc_alf_filter_luma_24x32_10_c: 48675.5
vvc_alf_filter_luma_24x32_10_avx2: 7575.0
vvc_alf_filter_luma_28x4_10_c: 7101.5
vvc_alf_filter_luma_28x4_10_avx2: 1128.2
vvc_alf_filter_luma_28x8_10_c: 14185.7
vvc_alf_filter_luma_28x8_10_avx2: 2189.0
vvc_alf_filter_luma_28x12_10_c: 21278.7
vvc_alf_filter_luma_28x12_10_avx2: 3347.2
vvc_alf_filter_luma_28x16_10_c: 28338.2
vvc_alf_filter_luma_28x16_10_avx2: 4462.7
vvc_alf_filter_luma_28x20_10_c: 37076.7
vvc_alf_filter_luma_28x20_10_avx2: 5729.0
vvc_alf_filter_luma_28x24_10_c: 42612.2
vvc_alf_filter_luma_28x24_10_avx2: 6508.7
vvc_alf_filter_luma_28x28_10_c: 49686.0
vvc_alf_filter_luma_28x28_10_avx2: 7666.0
vvc_alf_filter_luma_28x32_10_c: 65345.2
vvc_alf_filter_luma_28x32_10_avx2: 9330.2
vvc_alf_filter_luma_32x4_10_c: 8329.5
vvc_alf_filter_luma_32x4_10_avx2: 887.7
vvc_alf_filter_luma_32x8_10_c: 16941.7
vvc_alf_filter_luma_32x8_10_avx2: 1736.0
vvc_alf_filter_luma_32x12_10_c: 73347.7
vvc_alf_filter_luma_32x12_10_avx2: 2584.2
vvc_alf_filter_luma_32x16_10_c: 32359.5
vvc_alf_filter_luma_32x16_10_avx2: 3442.7
vvc_alf_filter_luma_32x20_10_c: 40482.5
vvc_alf_filter_luma_32x20_10_avx2: 4318.5
vvc_alf_filter_luma_32x24_10_c: 48674.7
vvc_alf_filter_luma_32x24_10_avx2: 5174.2
vvc_alf_filter_luma_32x28_10_c: 56715.7
vvc_alf_filter_luma_32x28_10_avx2: 6124.5
vvc_alf_filter_luma_32x32_10_c: 66720.0
vvc_alf_filter_luma_32x32_10_avx2: 7577.2

Signed-off-by: Wu Jianhua
---
 libavcodec/x86/vvc/Makefile      |   3 +-
 libavcodec/x86/vvc/vvc_alf.asm   | 441 +++++++++++++++++++++++++++++++
 libavcodec/x86/vvc/vvcdsp_init.c |  49 ++++
 3 files changed, 492 insertions(+), 1 deletion(-)
 create mode 100644 libavcodec/x86/vvc/vvc_alf.asm
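Note for reviewers (not part of the patch; this region after the diffstat is ignored by git am): the sketch below is a minimal scalar C illustration of the per-sample VVC ALF luma filtering that the AVX2 kernels vectorize 16 samples at a time, under the assumption of the usual clipped-difference form of ALF. The names (alf_clip3, alf_filter_luma_sample, p[12][2]) are placeholders, not FFmpeg API; the rounding constants mirror the dw64/dd448 data and the SHIFT / SHIFT + 3 paths in the asm.

#include <stdint.h>

static int alf_clip3(int x, int lo, int hi)
{
    return x < lo ? lo : (x > hi ? hi : x);
}

/* cur: centre sample; p[i][0]/p[i][1]: the two mirrored neighbours of tap i
 * (the "bottom"/"top" pairs the FILTER macro loads); near_vb: the row sits
 * next to the ALF virtual boundary, which selects the %%near_vb shift path. */
static int alf_filter_luma_sample(int cur, const int p[12][2],
                                  const int16_t *filter, const int16_t *clip,
                                  int near_vb, int pixel_max)
{
    int sum = 0;
    for (int i = 0; i < 12; i++) {
        int d0 = alf_clip3(p[i][0] - cur, -clip[i], clip[i]);
        int d1 = alf_clip3(p[i][1] - cur, -clip[i], clip[i]);
        sum += filter[i] * (d0 + d1);
    }
    if (near_vb)
        sum = (sum + 512) >> 10;  /* dd448 plus the dw64 bias, shifted by SHIFT + 3 */
    else
        sum = (sum + 64) >> 7;    /* dw64 bias, shifted by SHIFT */
    return alf_clip3(cur + sum, 0, pixel_max);  /* final CLIPW to the pixel range */
}

The chroma path is the same shape with 6 taps (ALF_NUM_COEFF_CHROMA) instead of 12.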
diff --git a/libavcodec/x86/vvc/Makefile b/libavcodec/x86/vvc/Makefile
index d1623bd46a..d6a66f860a 100644
--- a/libavcodec/x86/vvc/Makefile
+++ b/libavcodec/x86/vvc/Makefile
@@ -3,5 +3,6 @@ clean::
 
 OBJS-$(CONFIG_VVC_DECODER) += x86/vvc/vvcdsp_init.o \
                               x86/h26x/h2656dsp.o
-X86ASM-OBJS-$(CONFIG_VVC_DECODER) += x86/vvc/vvc_mc.o \
+X86ASM-OBJS-$(CONFIG_VVC_DECODER) += x86/vvc/vvc_alf.o \
+                                     x86/vvc/vvc_mc.o \
                                      x86/h26x/h2656_inter.o
diff --git a/libavcodec/x86/vvc/vvc_alf.asm b/libavcodec/x86/vvc/vvc_alf.asm
new file mode 100644
index 0000000000..cb1c86d1e5
--- /dev/null
+++ b/libavcodec/x86/vvc/vvc_alf.asm
@@ -0,0 +1,441 @@
+;******************************************************************************
+;* VVC Adaptive Loop Filter SIMD optimizations
+;*
+;* Copyright (c) 2023-2024 Nuo Mi
+;* Copyright (c) 2023-2024 Wu Jianhua
+;*
+;* This file is part of FFmpeg.
+;*
+;* FFmpeg is free software; you can redistribute it and/or
+;* modify it under the terms of the GNU Lesser General Public
+;* License as published by the Free Software Foundation; either
+;* version 2.1 of the License, or (at your option) any later version.
+;*
+;* FFmpeg is distributed in the hope that it will be useful,
+;* but WITHOUT ANY WARRANTY; without even the implied warranty of
+;* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+;* Lesser General Public License for more details.
+;*
+;* You should have received a copy of the GNU Lesser General Public
+;* License along with FFmpeg; if not, write to the Free Software
+;* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+;******************************************************************************
+
+%include "libavutil/x86/x86util.asm"
+
+SECTION_RODATA
+
+%macro PARAM_SHUFFE 1
+%assign i (%1 * 2)
+%assign j ((i + 1) << 8) + (i)
+param_shuffe_ %+ %1:
+%rep 2
+    times 4 dw j
+    times 4 dw (j + 0x0808)
+%endrep
+%endmacro
+
+PARAM_SHUFFE 0
+PARAM_SHUFFE 1
+PARAM_SHUFFE 2
+PARAM_SHUFFE 3
+
+dd448: times 8 dd 512 - 64
+dw64:  times 8 dd 64
+
+SECTION .text
+
+
+%define ALF_NUM_COEFF_LUMA   12
+%define ALF_NUM_COEFF_CHROMA 6
+%define ALF_NUM_COEFF_CC     7
+
+;%1-%3 out
+;%4 clip or filter
+%macro LOAD_LUMA_PARAMS_W16 4
+    lea offsetq, [3 * xq]                   ;xq * ALF_NUM_COEFF_LUMA / ALF_BLOCK_SIZE
+    movu m%1, [%4q + 2 * offsetq + 0 * 32]  ; 2 * for sizeof(int16_t)
+    movu m%2, [%4q + 2 * offsetq + 1 * 32]
+    movu m%3, [%4q + 2 * offsetq + 2 * 32]
+%endmacro
+
+%macro LOAD_LUMA_PARAMS_W16 6
+    LOAD_LUMA_PARAMS_W16 %1, %2, %3, %4
+    ;m%1 = 03 02 01 00
+    ;m%2 = 07 06 05 04
+    ;m%3 = 11 10 09 08
+
+    vshufpd m%5, m%1, m%2, 0011b        ;06 02 05 01
+    vshufpd m%6, m%3, m%5, 1001b        ;06 10 01 09
+
+    vshufpd m%1, m%1, m%6, 1100b        ;06 03 09 00
+    vshufpd m%2, m%2, m%6, 0110b        ;10 07 01 04
+    vshufpd m%3, m%3, m%5, 0110b        ;02 11 05 08
+
+    vpermpd m%1, m%1, 01111000b         ;09 06 03 00
+    vshufpd m%2, m%2, m%2, 1001b        ;10 07 04 01
+    vpermpd m%3, m%3, 10000111b         ;11 08 05 02
+%endmacro
+
+; %1-%3 out
+; %4 clip or filter
+; %5-%6 tmp
+%macro LOAD_LUMA_PARAMS 6
+    LOAD_LUMA_PARAMS_W16 %1, %2, %3, %4, %5, %6
+%endmacro
+
+%macro LOAD_CHROMA_PARAMS 4
+    ; LOAD_CHROMA_PARAMS_W %+ WIDTH %1, %2, %3, %4
+    movq xm%1, [%3q]
+    movd xm%2, [%3q + 8]
+    vpbroadcastq m%1, xm%1
+    vpbroadcastq m%2, xm%2
+%endmacro
+
+%macro LOAD_PARAMS 0
+%if LUMA
+    LOAD_LUMA_PARAMS   3, 4, 5, filter, 6, 7
+    LOAD_LUMA_PARAMS   6, 7, 8, clip, 9, 10
+%else
+    LOAD_CHROMA_PARAMS 3, 4, filter, 5
+    LOAD_CHROMA_PARAMS 6, 7, clip, 8
+%endif
+%endmacro
+
+; FILTER(param_idx)
+; input: m2, m9, m10
+; output: m0, m1
+; tmp: m11-m13
+%macro FILTER 1
+    %assign i (%1 % 4)
+    %assign j (%1 / 4 + 3)
+    %assign k (%1 / 4 + 6)
+    %define filters m %+ j
+    %define clips m %+ k
+
+    pshufb m12, clips, [param_shuffe_ %+ i]     ;clip
+    pxor m11, m11
+    psubw m11, m12                              ;-clip
+
+    vpsubw m9, m2
+    CLIPW m9, m11, m12
+
+    vpsubw m10, m2
+    CLIPW m10, m11, m12
+
+    vpunpckhwd m13, m9, m10
+    vpunpcklwd m9, m9, m10
+
+    pshufb m12, filters, [param_shuffe_ %+ i]   ;filter
+    vpunpcklwd m10, m12, m12
+    vpunpckhwd m12, m12, m12
+
+    vpmaddwd m9, m10
+    vpmaddwd m12, m13
+
+    paddd m0, m9
+    paddd m1, m12
+%endmacro
+
+; FILTER(param_idx, bottom, top, byte_offset)
+; input: param_idx, bottom, top, byte_offset
+; output: m0, m1
+; temp: m9, m10
%macro FILTER 4
+    LOAD_PIXELS m10, [%2 + %4]
+    LOAD_PIXELS m9, [%3 - %4]
+    FILTER %1
+%endmacro
+
+; GET_SRCS(line)
+; brief: get source lines
+; input: src, src_stride, vb_pos
+; output: s1...s6
+%macro GET_SRCS 1
+    lea s1q, [srcq + src_strideq]
+    lea s3q, [s1q + src_strideq]
+%if LUMA
+    lea s5q, [s3q + src_strideq]
+%endif
+    neg src_strideq
+    lea s2q, [srcq + src_strideq]
+    lea s4q, [s2q + src_strideq]
+%if LUMA
+    lea s6q, [s4q + src_strideq]
+%endif
+    neg src_strideq
+
+%if LUMA
+    cmp vb_posq, 0
+    je %%vb_bottom
+    cmp vb_posq, 4
+    jne %%vb_end
+%else
+    cmp vb_posq, 2
+    jne %%vb_end
+    cmp %1, 2
+    jge %%vb_bottom
+%endif
+
+%%vb_above:
+    ; above
+    ; p1 = (y + i == vb_pos - 1) ? p0 : p1;
+    ; p2 = (y + i == vb_pos - 1) ? p0 : p2;
+    ; p3 = (y + i >= vb_pos - 2) ? p1 : p3;
+    ; p4 = (y + i >= vb_pos - 2) ? p2 : p4;
+    ; p5 = (y + i >= vb_pos - 3) ? p3 : p5;
+    ; p6 = (y + i >= vb_pos - 3) ? p4 : p6;
+    dec vb_posq
+    cmp vb_posq, %1
+    cmove s1q, srcq
+    cmove s2q, srcq
+
+    dec vb_posq
+    cmp vb_posq, %1
+    cmovbe s3q, s1q
+    cmovbe s4q, s2q
+
+    dec vb_posq
+%if LUMA
+    cmp vb_posq, %1
+    cmovbe s5q, s3q
+    cmovbe s6q, s4q
+%endif
+    add vb_posq, 3
+    jmp %%vb_end
+
+%%vb_bottom:
+    ; bottom
+    ; p1 = (y + i == vb_pos    ) ? p0 : p1;
+    ; p2 = (y + i == vb_pos    ) ? p0 : p2;
+    ; p3 = (y + i <= vb_pos + 1) ? p1 : p3;
+    ; p4 = (y + i <= vb_pos + 1) ? p2 : p4;
+    ; p5 = (y + i <= vb_pos + 2) ? p3 : p5;
+    ; p6 = (y + i <= vb_pos + 2) ? p4 : p6;
+    cmp vb_posq, %1
+    cmove s1q, srcq
+    cmove s2q, srcq
+
+    inc vb_posq
+    cmp vb_posq, %1
+    cmovae s3q, s1q
+    cmovae s4q, s2q
+
+    inc vb_posq
+%if LUMA
+    cmp vb_posq, %1
+    cmovae s5q, s3q
+    cmovae s6q, s4q
+%endif
+    sub vb_posq, 2
+%%vb_end:
+%endmacro
+
+; SHIFT_VB(line)
+; brief: shift filter result
+; input: m0, m1, vb_pos
+; output: m0
+; temp: m9
+%macro SHIFT_VB 1
+%define SHIFT 7
+%if LUMA
+    cmp %1, 3
+    je %%near_above
+    cmp %1, 0
+    je %%near_below
+    jmp %%no_vb
+    %%near_above:
+        cmp vb_posq, 4
+        je %%near_vb
+        jmp %%no_vb
+    %%near_below:
+        cmp vb_posq, 0
+        je %%near_vb
+%else
+    cmp %1, 0
+    je %%no_vb
+    cmp %1, 3
+    je %%no_vb
+    cmp vb_posq, 2
+    je %%near_vb
+%endif
+%%no_vb:
+    vpsrad m0, SHIFT
+    vpsrad m1, SHIFT
+    jmp %%shift_end
+%%near_vb:
+    vpbroadcastd m9, [dd448]
+    paddd m0, m9
+    paddd m1, m9
+    vpsrad m0, SHIFT + 3
+    vpsrad m1, SHIFT + 3
+%%shift_end:
+    vpackssdw m0, m0, m1
+%endmacro
+
+; FILTER_VB(line)
+; brief: filter pixels for luma and chroma
+; input: line
+; output: m0, m1
+; temp: s0q...s1q
+%macro FILTER_VB 1
+    vpbroadcastd m0, [dw64]
+    vpbroadcastd m1, [dw64]
+
+    GET_SRCS %1
+%if LUMA
+    FILTER  0, s5q,  s6q,  0 * ps
+    FILTER  1, s3q,  s4q,  1 * ps
+    FILTER  2, s3q,  s4q,  0 * ps
+    FILTER  3, s3q,  s4q, -1 * ps
+    FILTER  4, s1q,  s2q,  2 * ps
+    FILTER  5, s1q,  s2q,  1 * ps
+    FILTER  6, s1q,  s2q,  0 * ps
+    FILTER  7, s1q,  s2q, -1 * ps
+    FILTER  8, s1q,  s2q, -2 * ps
+    FILTER  9, srcq, srcq, 3 * ps
+    FILTER 10, srcq, srcq, 2 * ps
+    FILTER 11, srcq, srcq, 1 * ps
+%else
+    FILTER  0, s3q,  s4q,  0 * ps
+    FILTER  1, s1q,  s2q,  1 * ps
+    FILTER  2, s1q,  s2q,  0 * ps
+    FILTER  3, s1q,  s2q, -1 * ps
+    FILTER  4, srcq, srcq, 2 * ps
+    FILTER  5, srcq, srcq, 1 * ps
+%endif
+    SHIFT_VB %1
+%endmacro
+
+; LOAD_PIXELS(dest, src)
+%macro LOAD_PIXELS 2
+%if ps == 2
+    movu %1, %2
+%else
+    vpmovzxbw %1, %2
+%endif
+%endmacro
+
+; STORE_PIXELS(dst, src)
+%macro STORE_PIXELS 2
+    %if ps == 2
+        movu %1, m%2
+    %else
+        vpackuswb m%2, m%2
+        vpermq m%2, m%2, 0x8
+        movu %1, xm%2
+    %endif
+%endmacro
+
+%macro FILTER_16x4 0
+%if LUMA
+    push clipq
+    push strideq
+    %define s1q clipq
+    %define s2q strideq
+%else
+    %define s1q s5q
+    %define s2q s6q
+%endif
+
+    %define s3q pixel_maxq
+    %define s4q offsetq
+    push xq
+
+    xor xq, xq
+%%filter_16x4_loop:
+    LOAD_PIXELS m2, [srcq]      ;p0
+
+    FILTER_VB xq
+
+    paddw m0, m2
+
+    ; clip to pixel
+    CLIPW m0, m14, m15
+
+    STORE_PIXELS [dstq], 0
+
+    lea srcq, [srcq + src_strideq]
+    lea dstq, [dstq + dst_strideq]
+    inc xq
+    cmp xq, 4
+    jl %%filter_16x4_loop
+
+    mov xq, src_strideq
+    neg xq
+    lea srcq, [srcq + xq * 4]
+    mov xq, dst_strideq
+    neg xq
+    lea dstq, [dstq + xq * 4]
+
+    pop xq
+
+%if LUMA
+    pop strideq
+    pop clipq
+%endif
+%endmacro
+
+; FILTER(bpc, luma/chroma)
+%macro ALF_FILTER 2
+%xdefine BPC %1
+%ifidn %2, luma
+    %xdefine LUMA 1
+%else
+    %xdefine LUMA 0
+%endif
+
+; ******************************
+; void vvc_alf_filter_%2_%1bpc_avx2(uint8_t *dst, ptrdiff_t dst_stride,
+;     const uint8_t *src, ptrdiff_t src_stride, const ptrdiff_t width, const ptrdiff_t height,
+;     const int16_t *filter, const int16_t *clip, ptrdiff_t stride, ptrdiff_t vb_pos, ptrdiff_t pixel_max);
+; ******************************
+cglobal vvc_alf_filter_%2_%1bpc, 11, 15, 16, -6*8, dst, dst_stride, src, src_stride, width, height, filter, clip, stride, vb_pos, pixel_max, \
+    offset, x, s5, s6
+%define ps (%1 / 8) ; pixel size
+    movd xm15, pixel_maxd
+    vpbroadcastw m15, xm15
+    pxor m14, m14
+
+.loop:
+    push srcq
+    push dstq
+    xor xd, xd
+
+    .loop_w:
+        LOAD_PARAMS
+        FILTER_16x4
+
+        add srcq, 16 * ps
+        add dstq, 16 * ps
+        add xd, 16
+        cmp xd, widthd
+        jl .loop_w
+
+    pop dstq
+    pop srcq
+    lea srcq, [srcq + 4 * src_strideq]
+    lea dstq, [dstq + 4 * dst_strideq]
+
+    lea filterq, [filterq + 2 * strideq]
+    lea clipq, [clipq + 2 * strideq]
+
+    sub vb_posq, 4
+    sub heightq, 4
+    jg .loop
+    RET
+%endmacro
+
+; FILTER(bpc)
+%macro ALF_FILTER 1
+    ALF_FILTER %1, luma
+    ALF_FILTER %1, chroma
+%endmacro
+
+%if ARCH_X86_64
+%if HAVE_AVX2_EXTERNAL
+INIT_YMM avx2
+ALF_FILTER 16
+ALF_FILTER 8
+%endif
+%endif
diff --git a/libavcodec/x86/vvc/vvcdsp_init.c b/libavcodec/x86/vvc/vvcdsp_init.c
index 985d750472..e672409cd7 100644
--- a/libavcodec/x86/vvc/vvcdsp_init.c
+++ b/libavcodec/x86/vvc/vvcdsp_init.c
@@ -87,6 +87,27 @@ AVG_PROTOTYPES( 8, avx2)
 AVG_PROTOTYPES(10, avx2)
 AVG_PROTOTYPES(12, avx2)
 
+#define ALF_BPC_PROTOTYPES(bpc, opt) \
+void BF(ff_vvc_alf_filter_luma, bpc, opt)(uint8_t *dst, ptrdiff_t dst_stride, \
+    const uint8_t *src, ptrdiff_t src_stride, ptrdiff_t width, ptrdiff_t height, \
+    const int16_t *filter, const int16_t *clip, ptrdiff_t stride, ptrdiff_t vb_pos, ptrdiff_t pixel_max); \
+void BF(ff_vvc_alf_filter_chroma, bpc, opt)(uint8_t *dst, ptrdiff_t dst_stride, \
+    const uint8_t *src, ptrdiff_t src_stride, ptrdiff_t width, ptrdiff_t height, \
+    const int16_t *filter, const int16_t *clip, ptrdiff_t stride, ptrdiff_t vb_pos, ptrdiff_t pixel_max); \
+
+#define ALF_PROTOTYPES(bpc, bd, opt) \
+void bf(ff_vvc_alf_filter_luma, bd, opt)(uint8_t *dst, ptrdiff_t dst_stride, const uint8_t *src, ptrdiff_t src_stride, \
+    int width, int height, const int16_t *filter, const int16_t *clip, const int vb_pos); \
+void bf(ff_vvc_alf_filter_chroma, bd, opt)(uint8_t *dst, ptrdiff_t dst_stride, const uint8_t *src, ptrdiff_t src_stride, \
+    int width, int height, const int16_t *filter, const int16_t *clip, const int vb_pos); \
+
+ALF_BPC_PROTOTYPES(8,  avx2)
+ALF_BPC_PROTOTYPES(16, avx2)
+
+ALF_PROTOTYPES(8,  8,  avx2)
+ALF_PROTOTYPES(16, 10, avx2)
+ALF_PROTOTYPES(16, 12, avx2)
+
 #if ARCH_X86_64
 #if HAVE_SSE4_EXTERNAL
 #define FW_PUT(name, depth, opt) \
@@ -181,6 +202,26 @@ void bf(ff_vvc_w_avg, bd, opt)(uint8_t *dst, ptrdiff_t dst_stride,
 AVG_FUNCS(8,  8,  avx2)
 AVG_FUNCS(16, 10, avx2)
 AVG_FUNCS(16, 12, avx2)
+
+#define ALF_FUNCS(bpc, bd, opt) \
+void bf(ff_vvc_alf_filter_luma, bd, opt)(uint8_t *dst, ptrdiff_t dst_stride, const uint8_t *src, ptrdiff_t src_stride, \
+    int width, int height, const int16_t *filter, const int16_t *clip, const int vb_pos) \
+{ \
+    const int param_stride = (width >> 2) * ALF_NUM_COEFF_LUMA; \
+    BF(ff_vvc_alf_filter_luma, bpc, opt)(dst, dst_stride, src, src_stride, width, height, \
+        filter, clip, param_stride, vb_pos, (1 << bd) - 1); \
+} \
+void bf(ff_vvc_alf_filter_chroma, bd, opt)(uint8_t *dst, ptrdiff_t dst_stride, const uint8_t *src, ptrdiff_t src_stride, \
+    int width, int height, const int16_t *filter, const int16_t *clip, const int vb_pos) \
+{ \
+    BF(ff_vvc_alf_filter_chroma, bpc, opt)(dst, dst_stride, src, src_stride, width, height, \
+        filter, clip, 0, vb_pos,(1 << bd) - 1); \
+} \
+
+ALF_FUNCS(8,  8,  avx2)
+ALF_FUNCS(16, 10, avx2)
+ALF_FUNCS(16, 12, avx2)
+
 #endif
 
 #define PEL_LINK(dst, C, W, idx1, idx2, name, D, opt) \
@@ -252,6 +293,11 @@ AVG_FUNCS(16, 12, avx2)
         c->inter.avg    = bf(ff_vvc_avg, bd, opt);    \
         c->inter.w_avg  = bf(ff_vvc_w_avg, bd, opt);  \
     } while (0)
+
+#define ALF_INIT(bd) do { \
+    c->alf.filter[LUMA]   = ff_vvc_alf_filter_luma_##bd##_avx2;   \
+    c->alf.filter[CHROMA] = ff_vvc_alf_filter_chroma_##bd##_avx2; \
+} while (0)
 #endif
 
 void ff_vvc_dsp_init_x86(VVCDSPContext *const c, const int bd)
@@ -287,12 +333,15 @@ void ff_vvc_dsp_init_x86(VVCDSPContext *const c, const int bd)
     if (EXTERNAL_AVX2(cpu_flags)) {
         switch (bd) {
         case 8:
+            ALF_INIT(8);
             AVG_INIT(8, avx2);
             break;
         case 10:
+            ALF_INIT(10);
             AVG_INIT(10, avx2);
             break;
        case 12:
+            ALF_INIT(12);
             AVG_INIT(12, avx2);
             break;
         default:
-- 
2.44.0.windows.1

_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".