* [FFmpeg-devel] [PATCH v12 1/8] avcodec/webp: remove unused definitions
2024-04-17 19:19 [FFmpeg-devel] [PATCH v12 0/8] [WIP] webp: add support for animated WebP decoding Thilo Borgmann via ffmpeg-devel
@ 2024-04-17 19:19 ` Thilo Borgmann via ffmpeg-devel
2024-04-17 19:19 ` [FFmpeg-devel] [PATCH v12 2/8] avcodec/webp: separate VP8 decoding Thilo Borgmann via ffmpeg-devel
` (7 subsequent siblings)
8 siblings, 0 replies; 15+ messages in thread
From: Thilo Borgmann via ffmpeg-devel @ 2024-04-17 19:19 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: thilo.borgmann
From: Thilo Borgmann via ffmpeg-devel <ffmpeg-devel@ffmpeg.org>
---
libavcodec/webp.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/libavcodec/webp.c b/libavcodec/webp.c
index dbcc5e73eb..3c153d78d1 100644
--- a/libavcodec/webp.c
+++ b/libavcodec/webp.c
@@ -60,8 +60,6 @@
#define VP8X_FLAG_ALPHA 0x10
#define VP8X_FLAG_ICC 0x20
-#define MAX_PALETTE_SIZE 256
-#define MAX_CACHE_BITS 11
#define NUM_CODE_LENGTH_CODES 19
#define HUFFMAN_CODES_PER_META_CODE 5
#define NUM_LITERAL_CODES 256
--
2.43.0
_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel
To unsubscribe, visit link above, or email
ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".
^ permalink raw reply [flat|nested] 15+ messages in thread
* [FFmpeg-devel] [PATCH v12 2/8] avcodec/webp: separate VP8 decoding
2024-04-17 19:19 [FFmpeg-devel] [PATCH v12 0/8] [WIP] webp: add support for animated WebP decoding Thilo Borgmann via ffmpeg-devel
2024-04-17 19:19 ` [FFmpeg-devel] [PATCH v12 1/8] avcodec/webp: remove unused definitions Thilo Borgmann via ffmpeg-devel
@ 2024-04-17 19:19 ` Thilo Borgmann via ffmpeg-devel
2024-04-17 19:19 ` [FFmpeg-devel] [PATCH v12 3/8] avcodec/bsf: Add awebp2webp bitstream filter Thilo Borgmann via ffmpeg-devel
` (6 subsequent siblings)
8 siblings, 0 replies; 15+ messages in thread
From: Thilo Borgmann via ffmpeg-devel @ 2024-04-17 19:19 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: thilo.borgmann
From: Thilo Borgmann via ffmpeg-devel <ffmpeg-devel@ffmpeg.org>
---
libavcodec/webp.c | 50 +++++++++++++++++++++++++++++++++++++++++------
1 file changed, 44 insertions(+), 6 deletions(-)
diff --git a/libavcodec/webp.c b/libavcodec/webp.c
index 3c153d78d1..3075321e86 100644
--- a/libavcodec/webp.c
+++ b/libavcodec/webp.c
@@ -195,6 +195,7 @@ typedef struct WebPContext {
AVFrame *alpha_frame; /* AVFrame for alpha data decompressed from VP8L */
AVPacket *pkt; /* AVPacket to be passed to the underlying VP8 decoder */
AVCodecContext *avctx; /* parent AVCodecContext */
+ AVCodecContext *avctx_vp8; /* wrapper context for VP8 decoder */
int initialized; /* set once the VP8 context is initialized */
int has_alpha; /* has a separate alpha chunk */
enum AlphaCompression alpha_compression; /* compression type for alpha chunk */
@@ -1299,12 +1300,13 @@ static int vp8_lossy_decode_frame(AVCodecContext *avctx, AVFrame *p,
int ret;
if (!s->initialized) {
- ff_vp8_decode_init(avctx);
+ VP8Context *s_vp8 = s->avctx_vp8->priv_data;
+ s_vp8->actually_webp = 1;
s->initialized = 1;
- s->v.actually_webp = 1;
}
avctx->pix_fmt = s->has_alpha ? AV_PIX_FMT_YUVA420P : AV_PIX_FMT_YUV420P;
s->lossless = 0;
+ s->avctx_vp8->pix_fmt = avctx->pix_fmt;
if (data_size > INT_MAX) {
av_log(avctx, AV_LOG_ERROR, "unsupported chunk size\n");
@@ -1315,14 +1317,32 @@ static int vp8_lossy_decode_frame(AVCodecContext *avctx, AVFrame *p,
s->pkt->data = data_start;
s->pkt->size = data_size;
- ret = ff_vp8_decode_frame(avctx, p, got_frame, s->pkt);
- if (ret < 0)
+ ret = avcodec_send_packet(s->avctx_vp8, s->pkt);
+ if (ret < 0) {
+ av_log(avctx, AV_LOG_ERROR, "Error submitting a packet for decoding\n");
return ret;
+ }
- if (!*got_frame)
+ ret = avcodec_receive_frame(s->avctx_vp8, p);
+ if (ret < 0) {
+ av_log(avctx, AV_LOG_ERROR, "VP8 decoding error: %s.\n", av_err2str(ret));
return AVERROR_INVALIDDATA;
+ }
+
+ ret = ff_decode_frame_props(avctx, p);
+ if (ret < 0) {
+ return ret;
+ }
+
+ if (!p->private_ref) {
+ ret = ff_attach_decode_data(p);
+ if (ret < 0) {
+ return ret;
+ }
+ }
- update_canvas_size(avctx, avctx->width, avctx->height);
+ *got_frame = 1;
+ update_canvas_size(avctx, s->avctx_vp8->width, s->avctx_vp8->height);
if (s->has_alpha) {
ret = vp8_lossy_decode_alpha(avctx, p, s->alpha_data,
@@ -1539,11 +1559,28 @@ exif_end:
static av_cold int webp_decode_init(AVCodecContext *avctx)
{
WebPContext *s = avctx->priv_data;
+ int ret;
+ const AVCodec *codec;
s->pkt = av_packet_alloc();
if (!s->pkt)
return AVERROR(ENOMEM);
+
+ /* Prepare everything needed for VP8 decoding */
+ codec = avcodec_find_decoder(AV_CODEC_ID_VP8);
+ if (!codec)
+ return AVERROR_BUG;
+ s->avctx_vp8 = avcodec_alloc_context3(codec);
+ if (!s->avctx_vp8)
+ return AVERROR(ENOMEM);
+ s->avctx_vp8->flags = avctx->flags;
+ s->avctx_vp8->flags2 = avctx->flags2;
+ s->avctx_vp8->pix_fmt = avctx->pix_fmt;
+ ret = avcodec_open2(s->avctx_vp8, codec, NULL);
+ if (ret < 0) {
+ return ret;
+ }
return 0;
}
@@ -1552,6 +1589,7 @@ static av_cold int webp_decode_close(AVCodecContext *avctx)
WebPContext *s = avctx->priv_data;
av_packet_free(&s->pkt);
+ avcodec_free_context(&s->avctx_vp8);
if (s->initialized)
return ff_vp8_decode_free(avctx);
--
2.43.0
* [FFmpeg-devel] [PATCH v12 3/8] avcodec/bsf: Add awebp2webp bitstream filter
2024-04-17 19:19 [FFmpeg-devel] [PATCH v12 0/8] [WIP] webp: add support for animated WebP decoding Thilo Borgmann via ffmpeg-devel
2024-04-17 19:19 ` [FFmpeg-devel] [PATCH v12 1/8] avcodec/webp: remove unused definitions Thilo Borgmann via ffmpeg-devel
2024-04-17 19:19 ` [FFmpeg-devel] [PATCH v12 2/8] avcodec/webp: separate VP8 decoding Thilo Borgmann via ffmpeg-devel
@ 2024-04-17 19:19 ` Thilo Borgmann via ffmpeg-devel
2024-04-17 19:37 ` Thilo Borgmann via ffmpeg-devel
2024-04-17 19:20 ` [FFmpeg-devel] [PATCH v12 4/8] libavcodec/webp: add support for animated WebP Thilo Borgmann via ffmpeg-devel
` (5 subsequent siblings)
8 siblings, 1 reply; 15+ messages in thread
From: Thilo Borgmann via ffmpeg-devel @ 2024-04-17 19:19 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: thilo.borgmann
From: Thilo Borgmann via ffmpeg-devel <ffmpeg-devel@ffmpeg.org>
Splits a packet containing a WebP animation into
one non-compliant packet per frame of the animation.
Skips the RIFF and WEBP chunks for all packets except
the first. Copies the ICC, EXIF and XMP chunks first
into each packet except the first.
---
configure | 1 +
libavcodec/bitstream_filters.c | 1 +
libavcodec/bsf/Makefile | 1 +
libavcodec/bsf/awebp2webp.c | 350 +++++++++++++++++++++++++++++++++
4 files changed, 353 insertions(+)
create mode 100644 libavcodec/bsf/awebp2webp.c
diff --git a/configure b/configure
index 55f1fc354d..2d08bc1fd8 100755
--- a/configure
+++ b/configure
@@ -3425,6 +3425,7 @@ aac_adtstoasc_bsf_select="adts_header mpeg4audio"
av1_frame_merge_bsf_select="cbs_av1"
av1_frame_split_bsf_select="cbs_av1"
av1_metadata_bsf_select="cbs_av1"
+awebp2webp_bsf_select=""
dts2pts_bsf_select="cbs_h264 h264parse"
eac3_core_bsf_select="ac3_parser"
evc_frame_merge_bsf_select="evcparse"
diff --git a/libavcodec/bitstream_filters.c b/libavcodec/bitstream_filters.c
index 12860c332b..af88283a8c 100644
--- a/libavcodec/bitstream_filters.c
+++ b/libavcodec/bitstream_filters.c
@@ -28,6 +28,7 @@ extern const FFBitStreamFilter ff_aac_adtstoasc_bsf;
extern const FFBitStreamFilter ff_av1_frame_merge_bsf;
extern const FFBitStreamFilter ff_av1_frame_split_bsf;
extern const FFBitStreamFilter ff_av1_metadata_bsf;
+extern const FFBitStreamFilter ff_awebp2webp_bsf;
extern const FFBitStreamFilter ff_chomp_bsf;
extern const FFBitStreamFilter ff_dump_extradata_bsf;
extern const FFBitStreamFilter ff_dca_core_bsf;
diff --git a/libavcodec/bsf/Makefile b/libavcodec/bsf/Makefile
index fb70ad0c21..48c67dd210 100644
--- a/libavcodec/bsf/Makefile
+++ b/libavcodec/bsf/Makefile
@@ -5,6 +5,7 @@ OBJS-$(CONFIG_AAC_ADTSTOASC_BSF) += bsf/aac_adtstoasc.o
OBJS-$(CONFIG_AV1_FRAME_MERGE_BSF) += bsf/av1_frame_merge.o
OBJS-$(CONFIG_AV1_FRAME_SPLIT_BSF) += bsf/av1_frame_split.o
OBJS-$(CONFIG_AV1_METADATA_BSF) += bsf/av1_metadata.o
+OBJS-$(CONFIG_AWEBP2WEBP_BSF) += bsf/awebp2webp.o
OBJS-$(CONFIG_CHOMP_BSF) += bsf/chomp.o
OBJS-$(CONFIG_DCA_CORE_BSF) += bsf/dca_core.o
OBJS-$(CONFIG_DTS2PTS_BSF) += bsf/dts2pts.o
diff --git a/libavcodec/bsf/awebp2webp.c b/libavcodec/bsf/awebp2webp.c
new file mode 100644
index 0000000000..ebd123c667
--- /dev/null
+++ b/libavcodec/bsf/awebp2webp.c
@@ -0,0 +1,350 @@
+/*
+ * Animated WebP into non-compliant WebP bitstream filter
+ * Copyright (c) 2024 Thilo Borgmann <thilo.borgmann _at_ mail.de>
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+/**
+ * @file
+ * Animated WebP into non-compliant WebP bitstream filter
+ * Splits a packet containing a WebP animation into
+ * one non-compliant packet per frame of the animation.
+ * Skips the RIFF and WEBP chunks for all packets except
+ * the first. Copies the ICC, EXIF and XMP chunks first
+ * into each packet except the first.
+ * @author Thilo Borgmann <thilo.borgmann _at_ mail.de>
+ */
+
+#include <stdio.h>
+#include <string.h>
+
+#include "codec_id.h"
+#include "bytestream.h"
+#include "libavutil/error.h"
+
+#include "bsf.h"
+#include "bsf_internal.h"
+#include "packet.h"
+
+#define VP8X_FLAG_ANIMATION 0x02
+#define VP8X_FLAG_XMP_METADATA 0x04
+#define VP8X_FLAG_EXIF_METADATA 0x08
+#define VP8X_FLAG_ALPHA 0x10
+#define VP8X_FLAG_ICC 0x20
+
+typedef struct WEBPBSFContext {
+ const AVClass *class;
+ GetByteContext gb;
+
+ AVPacket *last_pkt;
+ uint8_t *last_iccp;
+ uint8_t *last_exif;
+ uint8_t *last_xmp;
+
+ int iccp_size;
+ int exif_size;
+ int xmp_size;
+
+ int add_iccp;
+ int add_exif;
+ int add_xmp;
+
+ uint64_t last_pts;
+} WEBPBSFContext;
+
+static int save_chunk(WEBPBSFContext *ctx, uint8_t **buf, int *buf_size, uint32_t chunk_size)
+{
+ if (*buf || !buf_size || !chunk_size)
+ return 0;
+
+ *buf = av_malloc(chunk_size + 8);
+ if (!*buf)
+ return AVERROR(ENOMEM);
+
+ *buf_size = chunk_size + 8;
+
+ bytestream2_seek(&ctx->gb, -8, SEEK_CUR);
+ bytestream2_get_buffer(&ctx->gb, *buf, chunk_size + 8);
+
+ return 0;
+}
+
+static int awebp2webp_filter(AVBSFContext *ctx, AVPacket *out)
+{
+ WEBPBSFContext *s = ctx->priv_data;
+ AVPacket *in;
+ uint32_t chunk_type;
+ uint32_t chunk_size;
+ int64_t packet_start;
+ int64_t packet_end;
+ int64_t out_off;
+ int ret = 0;
+ int is_frame = 0;
+ int key_frame = 0;
+ int delay = 0;
+ int out_size = 0;
+ int has_anim = 0;
+
+ // initialize for new packet
+ if (!bytestream2_size(&s->gb)) {
+ if (s->last_pkt)
+ av_packet_unref(s->last_pkt);
+
+ ret = ff_bsf_get_packet(ctx, &s->last_pkt);
+ if (ret < 0)
+ goto fail;
+
+ bytestream2_init(&s->gb, s->last_pkt->data, s->last_pkt->size);
+
+ av_freep(&s->last_iccp);
+ av_freep(&s->last_exif);
+ av_freep(&s->last_xmp);
+
+ // read packet scanning for metadata && animation
+ while (bytestream2_get_bytes_left(&s->gb) > 0) {
+ chunk_type = bytestream2_get_le32(&s->gb);
+ chunk_size = bytestream2_get_le32(&s->gb);
+
+ if (chunk_size == UINT32_MAX)
+ return AVERROR_INVALIDDATA;
+ chunk_size += chunk_size & 1;
+
+ if (!bytestream2_get_bytes_left(&s->gb) ||
+ bytestream2_get_bytes_left(&s->gb) < chunk_size)
+ break;
+
+ if (chunk_type == MKTAG('R', 'I', 'F', 'F') && chunk_size > 4) {
+ chunk_size = 4;
+ }
+
+ switch (chunk_type) {
+ case MKTAG('I', 'C', 'C', 'P'):
+ if (!s->last_iccp) {
+ ret = save_chunk(s, &s->last_iccp, &s->iccp_size, chunk_size);
+ if (ret < 0)
+ goto fail;
+ } else {
+ bytestream2_skip(&s->gb, chunk_size);
+ }
+ break;
+
+ case MKTAG('E', 'X', 'I', 'F'):
+ if (!s->last_exif) {
+ ret = save_chunk(s, &s->last_exif, &s->exif_size, chunk_size);
+ if (ret < 0)
+ goto fail;
+ } else {
+ bytestream2_skip(&s->gb, chunk_size);
+ }
+ break;
+
+ case MKTAG('X', 'M', 'P', ' '):
+ if (!s->last_xmp) {
+ ret = save_chunk(s, &s->last_xmp, &s->xmp_size, chunk_size);
+ if (ret < 0)
+ goto fail;
+ } else {
+ bytestream2_skip(&s->gb, chunk_size);
+ }
+ break;
+
+ case MKTAG('A', 'N', 'M', 'F'):
+ has_anim = 1;
+ bytestream2_skip(&s->gb, chunk_size);
+ break;
+
+ default:
+ bytestream2_skip(&s->gb, chunk_size);
+ break;
+ }
+ }
+
+ // if no animation is found, pass-through the packet
+ if (!has_anim) {
+ av_packet_move_ref(out, s->last_pkt);
+ return 0;
+ }
+
+ // reset bytestream to beginning of packet
+ bytestream2_init(&s->gb, s->last_pkt->data, s->last_pkt->size);
+ }
+
+ // packet read completely, reset and ask for next packet
+ if (!bytestream2_get_bytes_left(&s->gb)) {
+ if (s->last_pkt)
+ av_packet_free(&s->last_pkt);
+ // reset to empty buffer for reinit with next real packet
+ bytestream2_init(&s->gb, NULL, 0);
+ return AVERROR(EAGAIN);
+ }
+
+ // start reading from packet until sub packet ready
+ packet_start = bytestream2_tell(&s->gb);
+ s->add_iccp = 1;
+ s->add_exif = 1;
+ s->add_xmp = 1;
+
+ while (bytestream2_get_bytes_left(&s->gb) > 0) {
+ chunk_type = bytestream2_get_le32(&s->gb);
+ chunk_size = bytestream2_get_le32(&s->gb);
+
+ if (chunk_size == UINT32_MAX)
+ return AVERROR_INVALIDDATA;
+ chunk_size += chunk_size & 1;
+
+ if (!bytestream2_get_bytes_left(&s->gb) ||
+ bytestream2_get_bytes_left(&s->gb) < chunk_size)
+ break;
+
+ if (chunk_type == MKTAG('R', 'I', 'F', 'F') && chunk_size > 4) {
+ chunk_size = 4;
+ key_frame = 1;
+ }
+
+ switch (chunk_type) {
+ case MKTAG('I', 'C', 'C', 'P'):
+ s->add_iccp = 0;
+ bytestream2_skip(&s->gb, chunk_size);
+ break;
+
+ case MKTAG('E', 'X', 'I', 'F'):
+ s->add_exif = 0;
+ bytestream2_skip(&s->gb, chunk_size);
+ break;
+
+ case MKTAG('X', 'M', 'P', ' '):
+ s->add_xmp = 0;
+ bytestream2_skip(&s->gb, chunk_size);
+ break;
+
+ case MKTAG('V', 'P', '8', ' '):
+ if (is_frame) {
+ bytestream2_seek(&s->gb, -8, SEEK_CUR);
+ goto flush;
+ }
+ bytestream2_skip(&s->gb, chunk_size);
+ is_frame = 1;
+ break;
+
+ case MKTAG('V', 'P', '8', 'L'):
+ if (is_frame) {
+ bytestream2_seek(&s->gb, -8, SEEK_CUR);
+ goto flush;
+ }
+ bytestream2_skip(&s->gb, chunk_size);
+ is_frame = 1;
+ break;
+
+ case MKTAG('A', 'N', 'M', 'F'):
+ if (is_frame) {
+ bytestream2_seek(&s->gb, -8, SEEK_CUR);
+ goto flush;
+ }
+ bytestream2_skip(&s->gb, 12);
+ delay = bytestream2_get_le24(&s->gb);
+ bytestream2_skip(&s->gb, 1);
+ break;
+
+ default:
+ bytestream2_skip(&s->gb, chunk_size);
+ break;
+ }
+
+ packet_end = bytestream2_tell(&s->gb);
+ }
+
+flush:
+ // generate packet from data read so far
+ out_size = packet_end - packet_start;
+ out_off = 0;
+
+ if (s->add_iccp && s->last_iccp)
+ out_size += s->iccp_size;
+ if (s->add_exif && s->last_exif)
+ out_size += s->exif_size;
+ if (s->add_xmp && s->last_xmp)
+ out_size += s->xmp_size;
+
+ ret = av_new_packet(out, out_size);
+ if (ret < 0)
+ goto fail;
+
+ // copy metadata
+ if (s->add_iccp && s->last_iccp) {
+ memcpy(out->data + out_off, s->last_iccp, s->iccp_size);
+ out_off += s->iccp_size;
+ }
+ if (s->add_exif && s->last_exif) {
+ memcpy(out->data + out_off, s->last_exif, s->exif_size);
+ out_off += s->exif_size;
+ }
+ if (s->add_xmp && s->last_xmp) {
+ memcpy(out->data + out_off, s->last_xmp, s->xmp_size);
+ out_off += s->xmp_size;
+ }
+
+ // copy frame data
+ memcpy(out->data + out_off, s->last_pkt->data + packet_start, packet_end - packet_start);
+
+ if (key_frame)
+ out->flags |= AV_PKT_FLAG_KEY;
+ else
+ out->flags &= ~AV_PKT_FLAG_KEY;
+
+ out->pts = s->last_pts;
+ out->dts = out->pts;
+ out->pos = packet_start;
+ out->duration = delay;
+ out->stream_index = s->last_pkt->stream_index;
+ out->time_base = s->last_pkt->time_base;
+
+ s->last_pts += (delay > 0) ? delay : 1;
+
+ key_frame = 0;
+
+ return 0;
+
+fail:
+ if (ret < 0) {
+ av_packet_unref(out);
+ return ret;
+ }
+ av_packet_free(&in);
+
+ return ret;
+}
+
+static void awebp2webp_close(AVBSFContext *ctx)
+{
+ WEBPBSFContext *s = ctx->priv_data;
+ av_freep(&s->last_iccp);
+ av_freep(&s->last_exif);
+ av_freep(&s->last_xmp);
+}
+
+static const enum AVCodecID codec_ids[] = {
+ AV_CODEC_ID_WEBP, AV_CODEC_ID_NONE,
+};
+
+const FFBitStreamFilter ff_awebp2webp_bsf = {
+ .p.name = "awebp2webp",
+ .p.codec_ids = codec_ids,
+ .priv_data_size = sizeof(WEBPBSFContext),
+ .filter = awebp2webp_filter,
+ .close = awebp2webp_close,
+};
--
2.43.0
* Re: [FFmpeg-devel] [PATCH v12 3/8] avcodec/bsf: Add awebp2webp bitstream filter
2024-04-17 19:19 ` [FFmpeg-devel] [PATCH v12 3/8] avcodec/bsf: Add awebp2webp bitstream filter Thilo Borgmann via ffmpeg-devel
@ 2024-04-17 19:37 ` Thilo Borgmann via ffmpeg-devel
0 siblings, 0 replies; 15+ messages in thread
From: Thilo Borgmann via ffmpeg-devel @ 2024-04-17 19:37 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Thilo Borgmann
On 17.04.24 21:19, Thilo Borgmann via ffmpeg-devel wrote:
> From: Thilo Borgmann via ffmpeg-devel <ffmpeg-devel@ffmpeg.org>
>
> Splits a packet containing a WebP animation into
> one non-compliant packet per frame of the animation.
> Skips the RIFF and WEBP chunks for all packets except
> the first. Copies the ICC, EXIF and XMP chunks first
> into each packet except the first.
> ---
> configure | 1 +
> libavcodec/bitstream_filters.c | 1 +
> libavcodec/bsf/Makefile | 1 +
> libavcodec/bsf/awebp2webp.c | 350 +++++++++++++++++++++++++++++++++
> 4 files changed, 353 insertions(+)
> create mode 100644 libavcodec/bsf/awebp2webp.c
Build failed here. Updated patch attached.
Sorry,
Thilo
[-- Attachment #2: v12-0003-avcodec-bsf-Add-awebp2webp-bitstream-filter.patch --]
[-- Type: text/plain, Size: 13121 bytes --]
From 536dfc772ac273b0d3607545b8aea7a26ba84ac1 Mon Sep 17 00:00:00 2001
From: Thilo Borgmann via ffmpeg-devel <ffmpeg-devel@ffmpeg.org>
Date: Thu, 28 Mar 2024 15:08:53 +0100
Subject: [PATCH v12 3/8] avcodec/bsf: Add awebp2webp bitstream filter
Splits a packet containing a WebP animation into
one non-compliant packet per frame of the animation.
Skips the RIFF and WEBP chunks for all packets except
the first. Copies the ICC, EXIF and XMP chunks first
into each packet except the first.
---
configure | 1 +
libavcodec/bitstream_filters.c | 1 +
libavcodec/bsf/Makefile | 1 +
libavcodec/bsf/awebp2webp.c | 351 +++++++++++++++++++++++++++++++++
4 files changed, 354 insertions(+)
create mode 100644 libavcodec/bsf/awebp2webp.c
diff --git a/configure b/configure
index 55f1fc354d..2d08bc1fd8 100755
--- a/configure
+++ b/configure
@@ -3425,6 +3425,7 @@ aac_adtstoasc_bsf_select="adts_header mpeg4audio"
av1_frame_merge_bsf_select="cbs_av1"
av1_frame_split_bsf_select="cbs_av1"
av1_metadata_bsf_select="cbs_av1"
+awebp2webp_bsf_select=""
dts2pts_bsf_select="cbs_h264 h264parse"
eac3_core_bsf_select="ac3_parser"
evc_frame_merge_bsf_select="evcparse"
diff --git a/libavcodec/bitstream_filters.c b/libavcodec/bitstream_filters.c
index 12860c332b..af88283a8c 100644
--- a/libavcodec/bitstream_filters.c
+++ b/libavcodec/bitstream_filters.c
@@ -28,6 +28,7 @@ extern const FFBitStreamFilter ff_aac_adtstoasc_bsf;
extern const FFBitStreamFilter ff_av1_frame_merge_bsf;
extern const FFBitStreamFilter ff_av1_frame_split_bsf;
extern const FFBitStreamFilter ff_av1_metadata_bsf;
+extern const FFBitStreamFilter ff_awebp2webp_bsf;
extern const FFBitStreamFilter ff_chomp_bsf;
extern const FFBitStreamFilter ff_dump_extradata_bsf;
extern const FFBitStreamFilter ff_dca_core_bsf;
diff --git a/libavcodec/bsf/Makefile b/libavcodec/bsf/Makefile
index fb70ad0c21..48c67dd210 100644
--- a/libavcodec/bsf/Makefile
+++ b/libavcodec/bsf/Makefile
@@ -5,6 +5,7 @@ OBJS-$(CONFIG_AAC_ADTSTOASC_BSF) += bsf/aac_adtstoasc.o
OBJS-$(CONFIG_AV1_FRAME_MERGE_BSF) += bsf/av1_frame_merge.o
OBJS-$(CONFIG_AV1_FRAME_SPLIT_BSF) += bsf/av1_frame_split.o
OBJS-$(CONFIG_AV1_METADATA_BSF) += bsf/av1_metadata.o
+OBJS-$(CONFIG_AWEBP2WEBP_BSF) += bsf/awebp2webp.o
OBJS-$(CONFIG_CHOMP_BSF) += bsf/chomp.o
OBJS-$(CONFIG_DCA_CORE_BSF) += bsf/dca_core.o
OBJS-$(CONFIG_DTS2PTS_BSF) += bsf/dts2pts.o
diff --git a/libavcodec/bsf/awebp2webp.c b/libavcodec/bsf/awebp2webp.c
new file mode 100644
index 0000000000..7edacee48f
--- /dev/null
+++ b/libavcodec/bsf/awebp2webp.c
@@ -0,0 +1,351 @@
+/*
+ * Animated WebP into non-compliant WebP bitstream filter
+ * Copyright (c) 2024 Thilo Borgmann <thilo.borgmann _at_ mail.de>
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+/**
+ * @file
+ * Animated WebP into non-compliant WebP bitstream filter
+ * Splits a packet containing a WebP animation into
+ * one non-compliant packet per frame of the animation.
+ * Skips the RIFF and WEBP chunks for all packets except
+ * the first. Copies the ICC, EXIF and XMP chunks first
+ * into each packet except the first.
+ * @author Thilo Borgmann <thilo.borgmann _at_ mail.de>
+ */
+
+#include <stdio.h>
+#include <string.h>
+
+#include "codec_id.h"
+#include "bytestream.h"
+#include "libavutil/error.h"
+#include "libavutil/mem.h"
+
+#include "bsf.h"
+#include "bsf_internal.h"
+#include "packet.h"
+
+#define VP8X_FLAG_ANIMATION 0x02
+#define VP8X_FLAG_XMP_METADATA 0x04
+#define VP8X_FLAG_EXIF_METADATA 0x08
+#define VP8X_FLAG_ALPHA 0x10
+#define VP8X_FLAG_ICC 0x20
+
+typedef struct WEBPBSFContext {
+ const AVClass *class;
+ GetByteContext gb;
+
+ AVPacket *last_pkt;
+ uint8_t *last_iccp;
+ uint8_t *last_exif;
+ uint8_t *last_xmp;
+
+ int iccp_size;
+ int exif_size;
+ int xmp_size;
+
+ int add_iccp;
+ int add_exif;
+ int add_xmp;
+
+ uint64_t last_pts;
+} WEBPBSFContext;
+
+static int save_chunk(WEBPBSFContext *ctx, uint8_t **buf, int *buf_size, uint32_t chunk_size)
+{
+ if (*buf || !buf_size || !chunk_size)
+ return 0;
+
+ *buf = av_malloc(chunk_size + 8);
+ if (!*buf)
+ return AVERROR(ENOMEM);
+
+ *buf_size = chunk_size + 8;
+
+ bytestream2_seek(&ctx->gb, -8, SEEK_CUR);
+ bytestream2_get_buffer(&ctx->gb, *buf, chunk_size + 8);
+
+ return 0;
+}
+
+static int awebp2webp_filter(AVBSFContext *ctx, AVPacket *out)
+{
+ WEBPBSFContext *s = ctx->priv_data;
+ AVPacket *in;
+ uint32_t chunk_type;
+ uint32_t chunk_size;
+ int64_t packet_start;
+ int64_t packet_end;
+ int64_t out_off;
+ int ret = 0;
+ int is_frame = 0;
+ int key_frame = 0;
+ int delay = 0;
+ int out_size = 0;
+ int has_anim = 0;
+
+ // initialize for new packet
+ if (!bytestream2_size(&s->gb)) {
+ if (s->last_pkt)
+ av_packet_unref(s->last_pkt);
+
+ ret = ff_bsf_get_packet(ctx, &s->last_pkt);
+ if (ret < 0)
+ goto fail;
+
+ bytestream2_init(&s->gb, s->last_pkt->data, s->last_pkt->size);
+
+ av_freep(&s->last_iccp);
+ av_freep(&s->last_exif);
+ av_freep(&s->last_xmp);
+
+ // read packet scanning for metadata && animation
+ while (bytestream2_get_bytes_left(&s->gb) > 0) {
+ chunk_type = bytestream2_get_le32(&s->gb);
+ chunk_size = bytestream2_get_le32(&s->gb);
+
+ if (chunk_size == UINT32_MAX)
+ return AVERROR_INVALIDDATA;
+ chunk_size += chunk_size & 1;
+
+ if (!bytestream2_get_bytes_left(&s->gb) ||
+ bytestream2_get_bytes_left(&s->gb) < chunk_size)
+ break;
+
+ if (chunk_type == MKTAG('R', 'I', 'F', 'F') && chunk_size > 4) {
+ chunk_size = 4;
+ }
+
+ switch (chunk_type) {
+ case MKTAG('I', 'C', 'C', 'P'):
+ if (!s->last_iccp) {
+ ret = save_chunk(s, &s->last_iccp, &s->iccp_size, chunk_size);
+ if (ret < 0)
+ goto fail;
+ } else {
+ bytestream2_skip(&s->gb, chunk_size);
+ }
+ break;
+
+ case MKTAG('E', 'X', 'I', 'F'):
+ if (!s->last_exif) {
+ ret = save_chunk(s, &s->last_exif, &s->exif_size, chunk_size);
+ if (ret < 0)
+ goto fail;
+ } else {
+ bytestream2_skip(&s->gb, chunk_size);
+ }
+ break;
+
+ case MKTAG('X', 'M', 'P', ' '):
+ if (!s->last_xmp) {
+ ret = save_chunk(s, &s->last_xmp, &s->xmp_size, chunk_size);
+ if (ret < 0)
+ goto fail;
+ } else {
+ bytestream2_skip(&s->gb, chunk_size);
+ }
+ break;
+
+ case MKTAG('A', 'N', 'M', 'F'):
+ has_anim = 1;
+ bytestream2_skip(&s->gb, chunk_size);
+ break;
+
+ default:
+ bytestream2_skip(&s->gb, chunk_size);
+ break;
+ }
+ }
+
+ // if no animation is found, pass-through the packet
+ if (!has_anim) {
+ av_packet_move_ref(out, s->last_pkt);
+ return 0;
+ }
+
+ // reset bytestream to beginning of packet
+ bytestream2_init(&s->gb, s->last_pkt->data, s->last_pkt->size);
+ }
+
+ // packet read completely, reset and ask for next packet
+ if (!bytestream2_get_bytes_left(&s->gb)) {
+ if (s->last_pkt)
+ av_packet_free(&s->last_pkt);
+ // reset to empty buffer for reinit with next real packet
+ bytestream2_init(&s->gb, NULL, 0);
+ return AVERROR(EAGAIN);
+ }
+
+ // start reading from packet until sub packet ready
+ packet_start = bytestream2_tell(&s->gb);
+ s->add_iccp = 1;
+ s->add_exif = 1;
+ s->add_xmp = 1;
+
+ while (bytestream2_get_bytes_left(&s->gb) > 0) {
+ chunk_type = bytestream2_get_le32(&s->gb);
+ chunk_size = bytestream2_get_le32(&s->gb);
+
+ if (chunk_size == UINT32_MAX)
+ return AVERROR_INVALIDDATA;
+ chunk_size += chunk_size & 1;
+
+ if (!bytestream2_get_bytes_left(&s->gb) ||
+ bytestream2_get_bytes_left(&s->gb) < chunk_size)
+ break;
+
+ if (chunk_type == MKTAG('R', 'I', 'F', 'F') && chunk_size > 4) {
+ chunk_size = 4;
+ key_frame = 1;
+ }
+
+ switch (chunk_type) {
+ case MKTAG('I', 'C', 'C', 'P'):
+ s->add_iccp = 0;
+ bytestream2_skip(&s->gb, chunk_size);
+ break;
+
+ case MKTAG('E', 'X', 'I', 'F'):
+ s->add_exif = 0;
+ bytestream2_skip(&s->gb, chunk_size);
+ break;
+
+ case MKTAG('X', 'M', 'P', ' '):
+ s->add_xmp = 0;
+ bytestream2_skip(&s->gb, chunk_size);
+ break;
+
+ case MKTAG('V', 'P', '8', ' '):
+ if (is_frame) {
+ bytestream2_seek(&s->gb, -8, SEEK_CUR);
+ goto flush;
+ }
+ bytestream2_skip(&s->gb, chunk_size);
+ is_frame = 1;
+ break;
+
+ case MKTAG('V', 'P', '8', 'L'):
+ if (is_frame) {
+ bytestream2_seek(&s->gb, -8, SEEK_CUR);
+ goto flush;
+ }
+ bytestream2_skip(&s->gb, chunk_size);
+ is_frame = 1;
+ break;
+
+ case MKTAG('A', 'N', 'M', 'F'):
+ if (is_frame) {
+ bytestream2_seek(&s->gb, -8, SEEK_CUR);
+ goto flush;
+ }
+ bytestream2_skip(&s->gb, 12);
+ delay = bytestream2_get_le24(&s->gb);
+ bytestream2_skip(&s->gb, 1);
+ break;
+
+ default:
+ bytestream2_skip(&s->gb, chunk_size);
+ break;
+ }
+
+ packet_end = bytestream2_tell(&s->gb);
+ }
+
+flush:
+ // generate packet from data read so far
+ out_size = packet_end - packet_start;
+ out_off = 0;
+
+ if (s->add_iccp && s->last_iccp)
+ out_size += s->iccp_size;
+ if (s->add_exif && s->last_exif)
+ out_size += s->exif_size;
+ if (s->add_xmp && s->last_xmp)
+ out_size += s->xmp_size;
+
+ ret = av_new_packet(out, out_size);
+ if (ret < 0)
+ goto fail;
+
+ // copy metadata
+ if (s->add_iccp && s->last_iccp) {
+ memcpy(out->data + out_off, s->last_iccp, s->iccp_size);
+ out_off += s->iccp_size;
+ }
+ if (s->add_exif && s->last_exif) {
+ memcpy(out->data + out_off, s->last_exif, s->exif_size);
+ out_off += s->exif_size;
+ }
+ if (s->add_xmp && s->last_xmp) {
+ memcpy(out->data + out_off, s->last_xmp, s->xmp_size);
+ out_off += s->xmp_size;
+ }
+
+ // copy frame data
+ memcpy(out->data + out_off, s->last_pkt->data + packet_start, packet_end - packet_start);
+
+ if (key_frame)
+ out->flags |= AV_PKT_FLAG_KEY;
+ else
+ out->flags &= ~AV_PKT_FLAG_KEY;
+
+ out->pts = s->last_pts;
+ out->dts = out->pts;
+ out->pos = packet_start;
+ out->duration = delay;
+ out->stream_index = s->last_pkt->stream_index;
+ out->time_base = s->last_pkt->time_base;
+
+ s->last_pts += (delay > 0) ? delay : 1;
+
+ key_frame = 0;
+
+ return 0;
+
+fail:
+ if (ret < 0) {
+ av_packet_unref(out);
+ return ret;
+ }
+ av_packet_free(&in);
+
+ return ret;
+}
+
+static void awebp2webp_close(AVBSFContext *ctx)
+{
+ WEBPBSFContext *s = ctx->priv_data;
+ av_freep(&s->last_iccp);
+ av_freep(&s->last_exif);
+ av_freep(&s->last_xmp);
+}
+
+static const enum AVCodecID codec_ids[] = {
+ AV_CODEC_ID_WEBP, AV_CODEC_ID_NONE,
+};
+
+const FFBitStreamFilter ff_awebp2webp_bsf = {
+ .p.name = "awebp2webp",
+ .p.codec_ids = codec_ids,
+ .priv_data_size = sizeof(WEBPBSFContext),
+ .filter = awebp2webp_filter,
+ .close = awebp2webp_close,
+};
--
2.43.0
* [FFmpeg-devel] [PATCH v12 4/8] libavcodec/webp: add support for animated WebP
2024-04-17 19:19 [FFmpeg-devel] [PATCH v12 0/8] [WIP] webp: add support for animated WebP decoding Thilo Borgmann via ffmpeg-devel
` (2 preceding siblings ...)
2024-04-17 19:19 ` [FFmpeg-devel] [PATCH v12 3/8] avcodec/bsf: Add awebp2webp bitstream filter Thilo Borgmann via ffmpeg-devel
@ 2024-04-17 19:20 ` Thilo Borgmann via ffmpeg-devel
2024-04-17 19:20 ` [FFmpeg-devel] [PATCH v12 5/8] avcodec/webp: make init_canvas_frame static Thilo Borgmann via ffmpeg-devel
` (4 subsequent siblings)
8 siblings, 0 replies; 15+ messages in thread
From: Thilo Borgmann via ffmpeg-devel @ 2024-04-17 19:20 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: thilo.borgmann
From: Josef Zlomek <josef@pex.com>
Fixes: 4907
Adds support for decoding of animated WebP.
The WebP decoder adds the animation-related features according to the spec:
https://developers.google.com/speed/webp/docs/riff_container#animation
The frames of the animation may be smaller than the image canvas.
Therefore, the frame is decoded to a temporary frame,
then it is blended into the canvas, the canvas is copied to the output frame,
and finally the frame is disposed from the canvas.
The output to AV_PIX_FMT_YUVA420P/AV_PIX_FMT_YUV420P is still supported.
The background color is specified only as BGRA in the WebP file,
so it is converted to YUVA when a YUV format is output.
Signed-off-by: Josef Zlomek <josef@pex.com>
---
Changelog | 1 +
libavcodec/codec_desc.c | 3 +-
libavcodec/version.h | 2 +-
libavcodec/webp.c | 710 ++++++++++++++++++++++++++++++++++++----
4 files changed, 653 insertions(+), 63 deletions(-)
diff --git a/Changelog b/Changelog
index 5c8f505211..03ac047cea 100644
--- a/Changelog
+++ b/Changelog
@@ -93,6 +93,7 @@ version 6.1:
- ffprobe XML output schema changed to account for multiple
variable-fields elements within the same parent element
- ffprobe -output_format option added as an alias of -of
+- animated WebP decoder
version 6.0:
diff --git a/libavcodec/codec_desc.c b/libavcodec/codec_desc.c
index 7dba61dc8b..2873f91479 100644
--- a/libavcodec/codec_desc.c
+++ b/libavcodec/codec_desc.c
@@ -1259,8 +1259,7 @@ static const AVCodecDescriptor codec_descriptors[] = {
.type = AVMEDIA_TYPE_VIDEO,
.name = "webp",
.long_name = NULL_IF_CONFIG_SMALL("WebP"),
- .props = AV_CODEC_PROP_INTRA_ONLY | AV_CODEC_PROP_LOSSY |
- AV_CODEC_PROP_LOSSLESS,
+ .props = AV_CODEC_PROP_LOSSY | AV_CODEC_PROP_LOSSLESS,
.mime_types= MT("image/webp"),
},
{
diff --git a/libavcodec/version.h b/libavcodec/version.h
index f0958eee14..3d2de546b3 100644
--- a/libavcodec/version.h
+++ b/libavcodec/version.h
@@ -30,7 +30,7 @@
#include "version_major.h"
#define LIBAVCODEC_VERSION_MINOR 5
-#define LIBAVCODEC_VERSION_MICRO 103
+#define LIBAVCODEC_VERSION_MICRO 104
#define LIBAVCODEC_VERSION_INT AV_VERSION_INT(LIBAVCODEC_VERSION_MAJOR, \
LIBAVCODEC_VERSION_MINOR, \
diff --git a/libavcodec/webp.c b/libavcodec/webp.c
index 3075321e86..f882c3e187 100644
--- a/libavcodec/webp.c
+++ b/libavcodec/webp.c
@@ -35,13 +35,16 @@
* Exif metadata
* ICC profile
*
+ * @author Josef Zlomek, Pexeso Inc. <josef@pex.com>
+ * Animation
+ *
* Unimplemented:
- * - Animation
* - XMP metadata
*/
#include "libavutil/imgutils.h"
#include "libavutil/mem.h"
+#include "libavutil/colorspace.h"
#define BITSTREAM_READER_LE
#include "avcodec.h"
@@ -68,6 +71,14 @@
#define NUM_SHORT_DISTANCES 120
#define MAX_HUFFMAN_CODE_LENGTH 15
+#define ANMF_DISPOSAL_METHOD 0x01
+#define ANMF_DISPOSAL_METHOD_UNCHANGED 0x00
+#define ANMF_DISPOSAL_METHOD_BACKGROUND 0x01
+
+#define ANMF_BLENDING_METHOD 0x02
+#define ANMF_BLENDING_METHOD_ALPHA 0x00
+#define ANMF_BLENDING_METHOD_OVERWRITE 0x02
+
static const uint16_t alphabet_sizes[HUFFMAN_CODES_PER_META_CODE] = {
NUM_LITERAL_CODES + NUM_LENGTH_CODES,
NUM_LITERAL_CODES, NUM_LITERAL_CODES, NUM_LITERAL_CODES,
@@ -192,6 +203,8 @@ typedef struct ImageContext {
typedef struct WebPContext {
VP8Context v; /* VP8 Context used for lossy decoding */
GetBitContext gb; /* bitstream reader for main image chunk */
+ ThreadFrame canvas_frame; /* ThreadFrame for canvas */
+ AVFrame *frame; /* AVFrame for decoded frame */
AVFrame *alpha_frame; /* AVFrame for alpha data decompressed from VP8L */
AVPacket *pkt; /* AVPacket to be passed to the underlying VP8 decoder */
AVCodecContext *avctx; /* parent AVCodecContext */
@@ -206,7 +219,22 @@ typedef struct WebPContext {
int has_iccp; /* set after an ICCP chunk has been processed */
int width; /* image width */
int height; /* image height */
- int lossless; /* indicates lossless or lossy */
+ int vp8x_flags; /* global flags from VP8X chunk */
+ int canvas_width; /* canvas width */
+ int canvas_height; /* canvas height */
+ int anmf_flags; /* frame flags from ANMF chunk */
+ int pos_x; /* frame position X */
+ int pos_y; /* frame position Y */
+ int prev_anmf_flags; /* previous frame flags from ANMF chunk */
+ int prev_width; /* previous frame width */
+ int prev_height; /* previous frame height */
+ int prev_pos_x; /* previous frame position X */
+ int prev_pos_y; /* previous frame position Y */
+ int await_progress; /* value of progress to wait for */
+ uint8_t background_argb[4]; /* background color in ARGB format */
+ uint8_t background_yuva[4]; /* background color in YUVA format */
+ const uint8_t *background_data[4]; /* "planes" for background color in YUVA format */
+ uint8_t transparent_yuva[4]; /* transparent black in YUVA format */
int nb_transforms; /* number of transforms */
enum TransformType transforms[4]; /* transformations used in the image, in order */
@@ -1091,7 +1119,6 @@ static int vp8_lossless_decode_frame(AVCodecContext *avctx, AVFrame *p,
int w, h, ret, i, used;
if (!is_alpha_chunk) {
- s->lossless = 1;
avctx->pix_fmt = AV_PIX_FMT_ARGB;
}
@@ -1305,9 +1332,10 @@ static int vp8_lossy_decode_frame(AVCodecContext *avctx, AVFrame *p,
s->initialized = 1;
}
avctx->pix_fmt = s->has_alpha ? AV_PIX_FMT_YUVA420P : AV_PIX_FMT_YUV420P;
- s->lossless = 0;
s->avctx_vp8->pix_fmt = avctx->pix_fmt;
+ ff_thread_finish_setup(s->avctx);
+
if (data_size > INT_MAX) {
av_log(avctx, AV_LOG_ERROR, "unsupported chunk size\n");
return AVERROR_PATCHWELCOME;
@@ -1342,7 +1370,6 @@ static int vp8_lossy_decode_frame(AVCodecContext *avctx, AVFrame *p,
}
*got_frame = 1;
- update_canvas_size(avctx, s->avctx_vp8->width, s->avctx_vp8->height);
if (s->has_alpha) {
ret = vp8_lossy_decode_alpha(avctx, p, s->alpha_data,
@@ -1353,40 +1380,21 @@ static int vp8_lossy_decode_frame(AVCodecContext *avctx, AVFrame *p,
return ret;
}
-static int webp_decode_frame(AVCodecContext *avctx, AVFrame *p,
- int *got_frame, AVPacket *avpkt)
+int init_canvas_frame(WebPContext *s, int format, int key_frame);
+
+static int webp_decode_frame_common(AVCodecContext *avctx, uint8_t *data, int size,
+ int *got_frame, int key_frame, AVFrame *p)
{
WebPContext *s = avctx->priv_data;
GetByteContext gb;
int ret;
uint32_t chunk_type, chunk_size;
- int vp8x_flags = 0;
-
- s->avctx = avctx;
- s->width = 0;
- s->height = 0;
- *got_frame = 0;
- s->has_alpha = 0;
- s->has_exif = 0;
- s->has_iccp = 0;
- bytestream2_init(&gb, avpkt->data, avpkt->size);
-
- if (bytestream2_get_bytes_left(&gb) < 12)
- return AVERROR_INVALIDDATA;
-
- if (bytestream2_get_le32(&gb) != MKTAG('R', 'I', 'F', 'F')) {
- av_log(avctx, AV_LOG_ERROR, "missing RIFF tag\n");
- return AVERROR_INVALIDDATA;
- }
- chunk_size = bytestream2_get_le32(&gb);
- if (bytestream2_get_bytes_left(&gb) < chunk_size)
- return AVERROR_INVALIDDATA;
+ bytestream2_init(&gb, data, size);
- if (bytestream2_get_le32(&gb) != MKTAG('W', 'E', 'B', 'P')) {
- av_log(avctx, AV_LOG_ERROR, "missing WEBP tag\n");
- return AVERROR_INVALIDDATA;
- }
+ // reset metadata flags for each packet
+ s->has_exif = 0;
+ s->has_iccp = 0;
while (bytestream2_get_bytes_left(&gb) > 8) {
char chunk_str[5] = { 0 };
@@ -1397,6 +1405,10 @@ static int webp_decode_frame(AVCodecContext *avctx, AVFrame *p,
return AVERROR_INVALIDDATA;
chunk_size += chunk_size & 1;
+ // we need to dive into RIFF chunk
+ if (chunk_type == MKTAG('R', 'I', 'F', 'F'))
+ chunk_size = 4;
+
if (bytestream2_get_bytes_left(&gb) < chunk_size) {
/* we seem to be running out of data, but it could also be that the
bitstream has trailing junk leading to bogus chunk_size. */
@@ -1404,10 +1416,26 @@ static int webp_decode_frame(AVCodecContext *avctx, AVFrame *p,
}
switch (chunk_type) {
+ case MKTAG('R', 'I', 'F', 'F'):
+ if (bytestream2_get_le32(&gb) != MKTAG('W', 'E', 'B', 'P')) {
+ av_log(avctx, AV_LOG_ERROR, "missing WEBP tag\n");
+ return AVERROR_INVALIDDATA;
+ }
+ s->vp8x_flags = 0;
+ s->canvas_width = 0;
+ s->canvas_height = 0;
+ s->has_exif = 0;
+ s->has_iccp = 0;
+ ff_thread_release_ext_buffer(&s->canvas_frame);
+ break;
case MKTAG('V', 'P', '8', ' '):
if (!*got_frame) {
- ret = vp8_lossy_decode_frame(avctx, p, got_frame,
- avpkt->data + bytestream2_tell(&gb),
+ ret = init_canvas_frame(s, AV_PIX_FMT_YUVA420P, key_frame);
+ if (ret < 0)
+ return ret;
+
+ ret = vp8_lossy_decode_frame(avctx, s->frame, got_frame,
+ data + bytestream2_tell(&gb),
chunk_size);
if (ret < 0)
return ret;
@@ -1416,24 +1444,35 @@ static int webp_decode_frame(AVCodecContext *avctx, AVFrame *p,
break;
case MKTAG('V', 'P', '8', 'L'):
if (!*got_frame) {
- ret = vp8_lossless_decode_frame(avctx, p, got_frame,
- avpkt->data + bytestream2_tell(&gb),
+ ret = init_canvas_frame(s, AV_PIX_FMT_ARGB, key_frame);
+ if (ret < 0)
+ return ret;
+
+ ret = vp8_lossless_decode_frame(avctx, s->frame, got_frame,
+ data + bytestream2_tell(&gb),
chunk_size, 0);
if (ret < 0)
return ret;
+
avctx->properties |= FF_CODEC_PROPERTY_LOSSLESS;
+ ret = av_frame_copy_props(s->canvas_frame.f, s->frame);
+ if (ret < 0)
+ return ret;
+ ff_thread_finish_setup(s->avctx);
}
bytestream2_skip(&gb, chunk_size);
break;
case MKTAG('V', 'P', '8', 'X'):
- if (s->width || s->height || *got_frame) {
+ if (s->canvas_width || s->canvas_height || *got_frame) {
av_log(avctx, AV_LOG_ERROR, "Canvas dimensions are already set\n");
return AVERROR_INVALIDDATA;
}
- vp8x_flags = bytestream2_get_byte(&gb);
+ s->vp8x_flags = bytestream2_get_byte(&gb);
bytestream2_skip(&gb, 3);
s->width = bytestream2_get_le24(&gb) + 1;
s->height = bytestream2_get_le24(&gb) + 1;
+ s->canvas_width = s->width;
+ s->canvas_height = s->height;
ret = av_image_check_size(s->width, s->height, 0, avctx);
if (ret < 0)
return ret;
@@ -1441,7 +1480,7 @@ static int webp_decode_frame(AVCodecContext *avctx, AVFrame *p,
case MKTAG('A', 'L', 'P', 'H'): {
int alpha_header, filter_m, compression;
- if (!(vp8x_flags & VP8X_FLAG_ALPHA)) {
+ if (!(s->vp8x_flags & VP8X_FLAG_ALPHA)) {
av_log(avctx, AV_LOG_WARNING,
"ALPHA chunk present, but alpha bit not set in the "
"VP8X header\n");
@@ -1450,8 +1489,9 @@ static int webp_decode_frame(AVCodecContext *avctx, AVFrame *p,
av_log(avctx, AV_LOG_ERROR, "invalid ALPHA chunk size\n");
return AVERROR_INVALIDDATA;
}
+
alpha_header = bytestream2_get_byte(&gb);
- s->alpha_data = avpkt->data + bytestream2_tell(&gb);
+ s->alpha_data = data + bytestream2_tell(&gb);
s->alpha_data_size = chunk_size - 1;
bytestream2_skip(&gb, s->alpha_data_size);
@@ -1471,37 +1511,33 @@ static int webp_decode_frame(AVCodecContext *avctx, AVFrame *p,
}
case MKTAG('E', 'X', 'I', 'F'): {
int le, ifd_offset, exif_offset = bytestream2_tell(&gb);
- AVDictionary *exif_metadata = NULL;
+ AVDictionary **exif_metadata = NULL;
GetByteContext exif_gb;
if (s->has_exif) {
av_log(avctx, AV_LOG_VERBOSE, "Ignoring extra EXIF chunk\n");
goto exif_end;
}
- if (!(vp8x_flags & VP8X_FLAG_EXIF_METADATA))
+ if (!(s->vp8x_flags & VP8X_FLAG_EXIF_METADATA))
av_log(avctx, AV_LOG_WARNING,
"EXIF chunk present, but Exif bit not set in the "
"VP8X header\n");
s->has_exif = 1;
- bytestream2_init(&exif_gb, avpkt->data + exif_offset,
- avpkt->size - exif_offset);
+ bytestream2_init(&exif_gb, data + exif_offset, size - exif_offset);
if (ff_tdecode_header(&exif_gb, &le, &ifd_offset) < 0) {
av_log(avctx, AV_LOG_ERROR, "invalid TIFF header "
"in Exif data\n");
goto exif_end;
}
+ exif_metadata = (s->vp8x_flags & VP8X_FLAG_ANIMATION) ? &p->metadata : &s->frame->metadata;
bytestream2_seek(&exif_gb, ifd_offset, SEEK_SET);
- if (ff_exif_decode_ifd(avctx, &exif_gb, le, 0, &exif_metadata) < 0) {
+ if (ff_exif_decode_ifd(avctx, &exif_gb, le, 0, exif_metadata) < 0) {
av_log(avctx, AV_LOG_ERROR, "error decoding Exif data\n");
goto exif_end;
}
-
- av_dict_copy(&p->metadata, exif_metadata, 0);
-
exif_end:
- av_dict_free(&exif_metadata);
bytestream2_skip(&gb, chunk_size);
break;
}
@@ -1513,13 +1549,12 @@ exif_end:
bytestream2_skip(&gb, chunk_size);
break;
}
- if (!(vp8x_flags & VP8X_FLAG_ICC))
+ if (!(s->vp8x_flags & VP8X_FLAG_ICC))
av_log(avctx, AV_LOG_WARNING,
"ICCP chunk present, but ICC Profile bit not set in the "
"VP8X header\n");
s->has_iccp = 1;
-
ret = ff_frame_new_side_data(avctx, p, AV_FRAME_DATA_ICC_PROFILE, chunk_size, &sd);
if (ret < 0)
return ret;
@@ -1531,8 +1566,51 @@ exif_end:
}
break;
}
- case MKTAG('A', 'N', 'I', 'M'):
+ case MKTAG('A', 'N', 'I', 'M'): {
+ const AVPixFmtDescriptor *desc;
+ int a, r, g, b;
+ if (!(s->vp8x_flags & VP8X_FLAG_ANIMATION)) {
+ av_log(avctx, AV_LOG_WARNING,
+ "ANIM chunk present, but animation bit not set in the "
+ "VP8X header\n");
+ }
+ // background is stored as BGRA, we need ARGB
+ s->background_argb[3] = b = bytestream2_get_byte(&gb);
+ s->background_argb[2] = g = bytestream2_get_byte(&gb);
+ s->background_argb[1] = r = bytestream2_get_byte(&gb);
+ s->background_argb[0] = a = bytestream2_get_byte(&gb);
+
+ // convert the background color to YUVA
+ desc = av_pix_fmt_desc_get(AV_PIX_FMT_YUVA420P);
+ s->background_yuva[desc->comp[0].plane] = RGB_TO_Y_CCIR(r, g, b);
+ s->background_yuva[desc->comp[1].plane] = RGB_TO_U_CCIR(r, g, b, 0);
+ s->background_yuva[desc->comp[2].plane] = RGB_TO_V_CCIR(r, g, b, 0);
+ s->background_yuva[desc->comp[3].plane] = a;
+
+ bytestream2_skip(&gb, 2); // loop count is ignored
+ break;
+ }
case MKTAG('A', 'N', 'M', 'F'):
+ if (!(s->vp8x_flags & VP8X_FLAG_ANIMATION)) {
+ av_log(avctx, AV_LOG_WARNING,
+ "ANMF chunk present, but animation bit not set in the "
+ "VP8X header\n");
+ }
+ s->pos_x = bytestream2_get_le24(&gb) * 2;
+ s->pos_y = bytestream2_get_le24(&gb) * 2;
+ s->width = bytestream2_get_le24(&gb) + 1;
+ s->height = bytestream2_get_le24(&gb) + 1;
+ bytestream2_skip(&gb, 3); // duration
+ s->anmf_flags = bytestream2_get_byte(&gb);
+
+ if (s->width + s->pos_x > s->canvas_width ||
+ s->height + s->pos_y > s->canvas_height) {
+ av_log(avctx, AV_LOG_ERROR,
+ "frame does not fit into canvas\n");
+ return AVERROR_INVALIDDATA;
+ }
+ s->vp8x_flags |= VP8X_FLAG_ANIMATION;
+ break;
case MKTAG('X', 'M', 'P', ' '):
AV_WL32(chunk_str, chunk_type);
av_log(avctx, AV_LOG_WARNING, "skipping unsupported chunk: %s\n",
@@ -1548,24 +1626,494 @@ exif_end:
}
}
- if (!*got_frame) {
- av_log(avctx, AV_LOG_ERROR, "image data not found\n");
- return AVERROR_INVALIDDATA;
+ return size;
+}
+
+int init_canvas_frame(WebPContext *s, int format, int key_frame)
+{
+ AVFrame *canvas = s->canvas_frame.f;
+ int height;
+ int ret;
+
+ // canvas is needed only for animation
+ if (!(s->vp8x_flags & VP8X_FLAG_ANIMATION))
+ return 0;
+
+ // avoid init for non-key frames whose format and size did not change
+ if (!key_frame &&
+ canvas->data[0] &&
+ canvas->format == format &&
+ canvas->width == s->canvas_width &&
+ canvas->height == s->canvas_height)
+ return 0;
+
+ // canvas changes within IPPP sequences will lose thread sync
+ // because of the ThreadFrame reallocation and will wait forever
+ // so if frame-threading is used, forbid canvas changes and unlock
+ // previous frames
+ if (!key_frame && canvas->data[0]) {
+ if (s->avctx->thread_count > 1) {
+ av_log(s->avctx, AV_LOG_WARNING, "Canvas change detected. The output will be damaged. Use -threads 1 to try decoding with best effort.\n");
+ // unlock previous frames that have sent an _await() call
+ ff_thread_report_progress(&s->canvas_frame, INT_MAX, 0);
+ return AVERROR_PATCHWELCOME;
+ } else {
+ // warn for damaged frames
+ av_log(s->avctx, AV_LOG_WARNING, "Canvas change detected. The output will be damaged.\n");
+ }
+ }
+
+ s->avctx->pix_fmt = format;
+ canvas->format = format;
+ canvas->width = s->canvas_width;
+ canvas->height = s->canvas_height;
+
+ // VP8 decoder changed the width and height in AVCodecContext.
+ // Change it back to the canvas size.
+ ret = ff_set_dimensions(s->avctx, s->canvas_width, s->canvas_height);
+ if (ret < 0)
+ return ret;
+
+ ff_thread_release_ext_buffer(&s->canvas_frame);
+ ret = ff_thread_get_ext_buffer(s->avctx, &s->canvas_frame, AV_GET_BUFFER_FLAG_REF);
+ if (ret < 0)
+ return ret;
+
+ if (canvas->format == AV_PIX_FMT_ARGB) {
+ height = canvas->height;
+ memset(canvas->data[0], 0, height * canvas->linesize[0]);
+ } else /* if (canvas->format == AV_PIX_FMT_YUVA420P) */ {
+ const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(canvas->format);
+ for (int comp = 0; comp < desc->nb_components; comp++) {
+ int plane = desc->comp[comp].plane;
+
+ if (comp == 1 || comp == 2)
+ height = AV_CEIL_RSHIFT(canvas->height, desc->log2_chroma_h);
+ else
+ height = FFALIGN(canvas->height, 1 << desc->log2_chroma_h);
+
+ memset(canvas->data[plane], s->transparent_yuva[plane],
+ height * canvas->linesize[plane]);
+ }
+ }
+
+ return 0;
+}
+
+/*
+ * Blend src1 (foreground) and src2 (background) into dest, in ARGB format.
+ * width, height are the dimensions of src1
+ * pos_x, pos_y is the position in src2 and in dest
+ */
+static void blend_alpha_argb(uint8_t *dest_data[4], int dest_linesize[4],
+ const uint8_t *src1_data[4], int src1_linesize[4],
+ const uint8_t *src2_data[4], int src2_linesize[4],
+ int src2_step[4],
+ int width, int height, int pos_x, int pos_y)
+{
+ for (int y = 0; y < height; y++) {
+ const uint8_t *src1 = src1_data[0] + y * src1_linesize[0];
+ const uint8_t *src2 = src2_data[0] + (y + pos_y) * src2_linesize[0] + pos_x * src2_step[0];
+ uint8_t *dest = dest_data[0] + (y + pos_y) * dest_linesize[0] + pos_x * sizeof(uint32_t);
+ for (int x = 0; x < width; x++) {
+ int src1_alpha = src1[0];
+ int src2_alpha = src2[0];
+
+ if (src1_alpha == 255) {
+ memcpy(dest, src1, sizeof(uint32_t));
+ } else if (src1_alpha + src2_alpha == 0) {
+ memset(dest, 0, sizeof(uint32_t));
+ } else {
+ int tmp_alpha = src2_alpha - ROUNDED_DIV(src1_alpha * src2_alpha, 255);
+ int blend_alpha = src1_alpha + tmp_alpha;
+
+ dest[0] = blend_alpha;
+ dest[1] = ROUNDED_DIV(src1[1] * src1_alpha + src2[1] * tmp_alpha, blend_alpha);
+ dest[2] = ROUNDED_DIV(src1[2] * src1_alpha + src2[2] * tmp_alpha, blend_alpha);
+ dest[3] = ROUNDED_DIV(src1[3] * src1_alpha + src2[3] * tmp_alpha, blend_alpha);
+ }
+ src1 += sizeof(uint32_t);
+ src2 += src2_step[0];
+ dest += sizeof(uint32_t);
+ }
+ }
+}
+
+/*
+ * Blend src1 (foreground) and src2 (background) into dest, in YUVA format.
+ * width, height are the dimensions of src1
+ * pos_x, pos_y is the position in src2 and in dest
+ */
+static void blend_alpha_yuva(WebPContext *s,
+ uint8_t *dest_data[4], int dest_linesize[4],
+ const uint8_t *src1_data[4], int src1_linesize[4],
+ int src1_format,
+ const uint8_t *src2_data[4], int src2_linesize[4],
+ int src2_step[4],
+ int width, int height, int pos_x, int pos_y)
+{
+ const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(src1_format);
+
+ int plane_y = desc->comp[0].plane;
+ int plane_u = desc->comp[1].plane;
+ int plane_v = desc->comp[2].plane;
+ int plane_a = desc->comp[3].plane;
+
+ // blend U & V planes first, because the later step may modify alpha plane
+ int w = AV_CEIL_RSHIFT(width, desc->log2_chroma_w);
+ int h = AV_CEIL_RSHIFT(height, desc->log2_chroma_h);
+ int px = AV_CEIL_RSHIFT(pos_x, desc->log2_chroma_w);
+ int py = AV_CEIL_RSHIFT(pos_y, desc->log2_chroma_h);
+ int tile_w = 1 << desc->log2_chroma_w;
+ int tile_h = 1 << desc->log2_chroma_h;
+
+ for (int y = 0; y < h; y++) {
+ const uint8_t *src1_u = src1_data[plane_u] + y * src1_linesize[plane_u];
+ const uint8_t *src1_v = src1_data[plane_v] + y * src1_linesize[plane_v];
+ const uint8_t *src2_u = src2_data[plane_u] + (y + py) * src2_linesize[plane_u] + px * src2_step[plane_u];
+ const uint8_t *src2_v = src2_data[plane_v] + (y + py) * src2_linesize[plane_v] + px * src2_step[plane_v];
+ uint8_t *dest_u = dest_data[plane_u] + (y + py) * dest_linesize[plane_u] + px;
+ uint8_t *dest_v = dest_data[plane_v] + (y + py) * dest_linesize[plane_v] + px;
+ for (int x = 0; x < w; x++) {
+ // calculate the average alpha of the tile
+ int src1_alpha = 0;
+ int src2_alpha = 0;
+ for (int yy = 0; yy < tile_h; yy++) {
+ for (int xx = 0; xx < tile_w; xx++) {
+ src1_alpha += src1_data[plane_a][(y * tile_h + yy) * src1_linesize[plane_a] +
+ (x * tile_w + xx)];
+ src2_alpha += src2_data[plane_a][((y + py) * tile_h + yy) * src2_linesize[plane_a] +
+ ((x + px) * tile_w + xx) * src2_step[plane_a]];
+ }
+ }
+ src1_alpha = AV_CEIL_RSHIFT(src1_alpha, desc->log2_chroma_w + desc->log2_chroma_h);
+ src2_alpha = AV_CEIL_RSHIFT(src2_alpha, desc->log2_chroma_w + desc->log2_chroma_h);
+
+ if (src1_alpha == 255) {
+ *dest_u = *src1_u;
+ *dest_v = *src1_v;
+ } else if (src1_alpha + src2_alpha == 0) {
+ *dest_u = s->transparent_yuva[plane_u];
+ *dest_v = s->transparent_yuva[plane_v];
+ } else {
+ int tmp_alpha = src2_alpha - ROUNDED_DIV(src1_alpha * src2_alpha, 255);
+ int blend_alpha = src1_alpha + tmp_alpha;
+ *dest_u = ROUNDED_DIV(*src1_u * src1_alpha + *src2_u * tmp_alpha, blend_alpha);
+ *dest_v = ROUNDED_DIV(*src1_v * src1_alpha + *src2_v * tmp_alpha, blend_alpha);
+ }
+ src1_u++;
+ src1_v++;
+ src2_u += src2_step[plane_u];
+ src2_v += src2_step[plane_v];
+ dest_u++;
+ dest_v++;
+ }
+ }
+
+ // blend Y & A planes
+ for (int y = 0; y < height; y++) {
+ const uint8_t *src1_y = src1_data[plane_y] + y * src1_linesize[plane_y];
+ const uint8_t *src1_a = src1_data[plane_a] + y * src1_linesize[plane_a];
+ const uint8_t *src2_y = src2_data[plane_y] + (y + pos_y) * src2_linesize[plane_y] + pos_x * src2_step[plane_y];
+ const uint8_t *src2_a = src2_data[plane_a] + (y + pos_y) * src2_linesize[plane_a] + pos_x * src2_step[plane_a];
+ uint8_t *dest_y = dest_data[plane_y] + (y + pos_y) * dest_linesize[plane_y] + pos_x;
+ uint8_t *dest_a = dest_data[plane_a] + (y + pos_y) * dest_linesize[plane_a] + pos_x;
+ for (int x = 0; x < width; x++) {
+ int src1_alpha = *src1_a;
+ int src2_alpha = *src2_a;
+
+ if (src1_alpha == 255) {
+ *dest_y = *src1_y;
+ *dest_a = 255;
+ } else if (src1_alpha + src2_alpha == 0) {
+ *dest_y = s->transparent_yuva[plane_y];
+ *dest_a = 0;
+ } else {
+ int tmp_alpha = src2_alpha - ROUNDED_DIV(src1_alpha * src2_alpha, 255);
+ int blend_alpha = src1_alpha + tmp_alpha;
+ *dest_y = ROUNDED_DIV(*src1_y * src1_alpha + *src2_y * tmp_alpha, blend_alpha);
+ *dest_a = blend_alpha;
+ }
+ src1_y++;
+ src1_a++;
+ src2_y += src2_step[plane_y];
+ src2_a += src2_step[plane_a];
+ dest_y++;
+ dest_a++;
+ }
}
+}
+
+static int blend_frame_into_canvas(WebPContext *s)
+{
+ AVFrame *canvas = s->canvas_frame.f;
+ AVFrame *frame = s->frame;
+ int width, height;
+ int pos_x, pos_y;
+
+ if ((s->anmf_flags & ANMF_BLENDING_METHOD) == ANMF_BLENDING_METHOD_OVERWRITE
+ || frame->format == AV_PIX_FMT_YUV420P) {
+ // do not blend, overwrite
+
+ if (canvas->format == AV_PIX_FMT_ARGB) {
+ width = s->width;
+ height = s->height;
+ pos_x = s->pos_x;
+ pos_y = s->pos_y;
+
+ for (int y = 0; y < height; y++) {
+ const uint32_t *src = (uint32_t *) (frame->data[0] + y * frame->linesize[0]);
+ uint32_t *dst = (uint32_t *) (canvas->data[0] + (y + pos_y) * canvas->linesize[0]) + pos_x;
+ memcpy(dst, src, width * sizeof(uint32_t));
+ }
+ } else /* if (canvas->format == AV_PIX_FMT_YUVA420P) */ {
+ const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(frame->format);
+ int plane;
+
+ for (int comp = 0; comp < desc->nb_components; comp++) {
+ plane = desc->comp[comp].plane;
+ width = s->width;
+ height = s->height;
+ pos_x = s->pos_x;
+ pos_y = s->pos_y;
+ if (comp == 1 || comp == 2) {
+ width = AV_CEIL_RSHIFT(width, desc->log2_chroma_w);
+ height = AV_CEIL_RSHIFT(height, desc->log2_chroma_h);
+ pos_x = AV_CEIL_RSHIFT(pos_x, desc->log2_chroma_w);
+ pos_y = AV_CEIL_RSHIFT(pos_y, desc->log2_chroma_h);
+ }
+
+ for (int y = 0; y < height; y++) {
+ const uint8_t *src = frame->data[plane] + y * frame->linesize[plane];
+ uint8_t *dst = canvas->data[plane] + (y + pos_y) * canvas->linesize[plane] + pos_x;
+ memcpy(dst, src, width);
+ }
+ }
+
+ if (desc->nb_components < 4) {
+ // frame does not have alpha, set alpha to 255
+ desc = av_pix_fmt_desc_get(canvas->format);
+ plane = desc->comp[3].plane;
+ width = s->width;
+ height = s->height;
+ pos_x = s->pos_x;
+ pos_y = s->pos_y;
+
+ for (int y = 0; y < height; y++) {
+ uint8_t *dst = canvas->data[plane] + (y + pos_y) * canvas->linesize[plane] + pos_x;
+ memset(dst, 255, width);
+ }
+ }
+ }
+ } else {
+ // alpha blending
+
+ if (canvas->format == AV_PIX_FMT_ARGB) {
+ int src2_step[4] = { sizeof(uint32_t) };
+ blend_alpha_argb(canvas->data, canvas->linesize,
+ (const uint8_t **) frame->data, frame->linesize,
+ (const uint8_t **) canvas->data, canvas->linesize,
+ src2_step, s->width, s->height, s->pos_x, s->pos_y);
+ } else /* if (canvas->format == AV_PIX_FMT_YUVA420P) */ {
+ int src2_step[4] = { 1, 1, 1, 1 };
+ blend_alpha_yuva(s, canvas->data, canvas->linesize,
+ (const uint8_t **) frame->data, frame->linesize,
+ frame->format,
+ (const uint8_t **) canvas->data, canvas->linesize,
+ src2_step, s->width, s->height, s->pos_x, s->pos_y);
+ }
+ }
+
+ return 0;
+}
+
+static int copy_canvas_to_frame(WebPContext *s, AVFrame *frame, int key_frame)
+{
+ AVFrame *canvas = s->canvas_frame.f;
+ int ret;
+
+ frame->format = canvas->format;
+ frame->width = canvas->width;
+ frame->height = canvas->height;
+
+ ret = av_frame_get_buffer(frame, 0);
+ if (ret < 0)
+ return ret;
+
+ ret = av_frame_copy_props(frame, canvas);
+ if (ret < 0)
+ return ret;
+
+ // blend the canvas with the background color into the output frame
+ if (canvas->format == AV_PIX_FMT_ARGB) {
+ int src2_step[4] = { 0 };
+ const uint8_t *src2_data[4] = { &s->background_argb[0] };
+ blend_alpha_argb(frame->data, frame->linesize,
+ (const uint8_t **) canvas->data, canvas->linesize,
+ (const uint8_t **) src2_data, src2_step, src2_step,
+ canvas->width, canvas->height, 0, 0);
+ } else /* if (canvas->format == AV_PIX_FMT_YUVA420P) */ {
+ int src2_step[4] = { 0, 0, 0, 0 };
+ blend_alpha_yuva(s, frame->data, frame->linesize,
+ (const uint8_t **) canvas->data, canvas->linesize,
+ canvas->format,
+ s->background_data, src2_step, src2_step,
+ canvas->width, canvas->height, 0, 0);
+ }
+
+ if (key_frame) {
+ frame->pict_type = AV_PICTURE_TYPE_I;
+ } else {
+ frame->pict_type = AV_PICTURE_TYPE_P;
+ }
+
+ return 0;
+}
- return avpkt->size;
+static int dispose_prev_frame_in_canvas(WebPContext *s)
+{
+ AVFrame *canvas = s->canvas_frame.f;
+ int width, height;
+ int pos_x, pos_y;
+
+ if ((s->prev_anmf_flags & ANMF_DISPOSAL_METHOD) == ANMF_DISPOSAL_METHOD_BACKGROUND) {
+ // dispose to background
+
+ if (canvas->format == AV_PIX_FMT_ARGB) {
+ width = s->prev_width;
+ height = s->prev_height;
+ pos_x = s->prev_pos_x;
+ pos_y = s->prev_pos_y;
+
+ for (int y = 0; y < height; y++) {
+ uint32_t *dst = (uint32_t *) (canvas->data[0] + (y + pos_y) * canvas->linesize[0]) + pos_x;
+ memset(dst, 0, width * sizeof(uint32_t));
+ }
+ } else /* if (canvas->format == AV_PIX_FMT_YUVA420P) */ {
+ const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(canvas->format);
+ int plane;
+
+ for (int comp = 0; comp < desc->nb_components; comp++) {
+ plane = desc->comp[comp].plane;
+ width = s->prev_width;
+ height = s->prev_height;
+ pos_x = s->prev_pos_x;
+ pos_y = s->prev_pos_y;
+ if (comp == 1 || comp == 2) {
+ width = AV_CEIL_RSHIFT(width, desc->log2_chroma_w);
+ height = AV_CEIL_RSHIFT(height, desc->log2_chroma_h);
+ pos_x = AV_CEIL_RSHIFT(pos_x, desc->log2_chroma_w);
+ pos_y = AV_CEIL_RSHIFT(pos_y, desc->log2_chroma_h);
+ }
+
+ for (int y = 0; y < height; y++) {
+ uint8_t *dst = canvas->data[plane] + (y + pos_y) * canvas->linesize[plane] + pos_x;
+ memset(dst, s->transparent_yuva[plane], width);
+ }
+ }
+ }
+ }
+
+ return 0;
+}
+
+static int webp_decode_frame(AVCodecContext *avctx, AVFrame *p,
+ int *got_frame, AVPacket *avpkt)
+{
+ WebPContext *s = avctx->priv_data;
+ int ret;
+ int key_frame = avpkt->flags & AV_PKT_FLAG_KEY;
+
+ *got_frame = 0;
+
+ if (key_frame) {
+ // The canvas is passed from one thread to another in a sequence
+ // starting with a key frame followed by non-key frames.
+ // The key frame reports progress 1,
+ // the N-th non-key frame awaits progress N = s->await_progress
+ // and reports progress N + 1.
+ s->await_progress = 0;
+ }
+
+ // reset the frame params
+ s->anmf_flags = 0;
+ s->width = 0;
+ s->height = 0;
+ s->pos_x = 0;
+ s->pos_y = 0;
+ s->has_alpha = 0;
+
+ ret = webp_decode_frame_common(avctx, avpkt->data, avpkt->size, got_frame, key_frame, p);
+ if (ret < 0)
+ goto end;
+
+ if (*got_frame) {
+ if (!(s->vp8x_flags & VP8X_FLAG_ANIMATION)) {
+ // no animation, output the decoded frame
+ av_frame_move_ref(p, s->frame);
+ } else {
+ if (!key_frame) {
+ ff_thread_await_progress(&s->canvas_frame, s->await_progress, 0);
+
+ ret = dispose_prev_frame_in_canvas(s);
+ if (ret < 0)
+ goto end;
+ }
+
+ ret = blend_frame_into_canvas(s);
+ if (ret < 0)
+ goto end;
+
+ ret = copy_canvas_to_frame(s, p, key_frame);
+ if (ret < 0)
+ goto end;
+
+ ff_thread_report_progress(&s->canvas_frame, s->await_progress + 1, 0);
+ }
+
+ p->pts = avpkt->pts;
+ }
+
+ ret = avpkt->size;
+
+end:
+ s->prev_anmf_flags = s->anmf_flags;
+ s->prev_width = s->width;
+ s->prev_height = s->height;
+ s->prev_pos_x = s->pos_x;
+ s->prev_pos_y = s->pos_y;
+
+ av_frame_unref(s->frame);
+ return ret;
}
static av_cold int webp_decode_init(AVCodecContext *avctx)
{
WebPContext *s = avctx->priv_data;
+ const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(AV_PIX_FMT_YUVA420P);
int ret;
const AVCodec *codec;
+ s->avctx = avctx;
s->pkt = av_packet_alloc();
- if (!s->pkt)
+ s->canvas_frame.f = av_frame_alloc();
+ s->frame = av_frame_alloc();
+ if (!s->pkt || !s->canvas_frame.f || !s->frame) {
+ av_packet_free(&s->pkt);
+ av_frame_free(&s->canvas_frame.f);
+ av_frame_free(&s->frame);
return AVERROR(ENOMEM);
+ }
+ // prepare data pointers for YUVA background
+ for (int i = 0; i < 4; i++)
+ s->background_data[i] = &s->background_yuva[i];
+
+ // convert transparent black from RGBA to YUVA
+ s->transparent_yuva[desc->comp[0].plane] = RGB_TO_Y_CCIR(0, 0, 0);
+ s->transparent_yuva[desc->comp[1].plane] = RGB_TO_U_CCIR(0, 0, 0, 0);
+ s->transparent_yuva[desc->comp[2].plane] = RGB_TO_V_CCIR(0, 0, 0, 0);
+ s->transparent_yuva[desc->comp[3].plane] = 0;
/* Prepare everything needed for VP8 decoding */
codec = avcodec_find_decoder(AV_CODEC_ID_VP8);
@@ -1589,13 +2137,52 @@ static av_cold int webp_decode_close(AVCodecContext *avctx)
WebPContext *s = avctx->priv_data;
av_packet_free(&s->pkt);
+ ff_thread_release_ext_buffer(&s->canvas_frame);
+ av_frame_free(&s->canvas_frame.f);
+ av_frame_free(&s->frame);
avcodec_free_context(&s->avctx_vp8);
- if (s->initialized)
- return ff_vp8_decode_free(avctx);
+ return 0;
+}
+
+static void webp_decode_flush(AVCodecContext *avctx)
+{
+ WebPContext *s = avctx->priv_data;
+
+ ff_thread_release_ext_buffer(&s->canvas_frame);
+}
+
+#if HAVE_THREADS
+static int webp_update_thread_context(AVCodecContext *dst, const AVCodecContext *src)
+{
+ WebPContext *wsrc = src->priv_data;
+ WebPContext *wdst = dst->priv_data;
+ int ret;
+
+ if (dst == src)
+ return 0;
+
+ ff_thread_release_ext_buffer(&wdst->canvas_frame);
+ if (wsrc->canvas_frame.f->data[0] &&
+ (ret = ff_thread_ref_frame(&wdst->canvas_frame, &wsrc->canvas_frame)) < 0)
+ return ret;
+
+ wdst->vp8x_flags = wsrc->vp8x_flags;
+ wdst->canvas_width = wsrc->canvas_width;
+ wdst->canvas_height = wsrc->canvas_height;
+ wdst->prev_anmf_flags = wsrc->anmf_flags;
+ wdst->prev_width = wsrc->width;
+ wdst->prev_height = wsrc->height;
+ wdst->prev_pos_x = wsrc->pos_x;
+ wdst->prev_pos_y = wsrc->pos_y;
+ wdst->await_progress = wsrc->await_progress + 1;
+
+ memcpy(wdst->background_argb, wsrc->background_argb, sizeof(wsrc->background_argb));
+ memcpy(wdst->background_yuva, wsrc->background_yuva, sizeof(wsrc->background_yuva));
return 0;
}
+#endif
const FFCodec ff_webp_decoder = {
.p.name = "webp",
@@ -1603,9 +2190,12 @@ const FFCodec ff_webp_decoder = {
.p.type = AVMEDIA_TYPE_VIDEO,
.p.id = AV_CODEC_ID_WEBP,
.priv_data_size = sizeof(WebPContext),
+ UPDATE_THREAD_CONTEXT(webp_update_thread_context),
.init = webp_decode_init,
FF_CODEC_DECODE_CB(webp_decode_frame),
.close = webp_decode_close,
+ .flush = webp_decode_flush,
+ .bsfs = "awebp2webp",
.p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_FRAME_THREADS,
- .caps_internal = FF_CODEC_CAP_ICC_PROFILES,
+ .caps_internal = FF_CODEC_CAP_ICC_PROFILES | FF_CODEC_CAP_ALLOCATE_PROGRESS,
};
--
2.43.0
* [FFmpeg-devel] [PATCH v12 5/8] avcodec/webp: make init_canvas_frame static
2024-04-17 19:19 [FFmpeg-devel] [PATCH v12 0/8] [WIP] webp: add support for animated WebP decoding Thilo Borgmann via ffmpeg-devel
` (3 preceding siblings ...)
2024-04-17 19:20 ` [FFmpeg-devel] [PATCH v12 4/8] libavcodec/webp: add support for animated WebP Thilo Borgmann via ffmpeg-devel
@ 2024-04-17 19:20 ` Thilo Borgmann via ffmpeg-devel
2024-04-17 19:20 ` [FFmpeg-devel] [PATCH v12 6/8] libavformat/webp: add WebP demuxer Thilo Borgmann via ffmpeg-devel
` (3 subsequent siblings)
8 siblings, 0 replies; 15+ messages in thread
From: Thilo Borgmann via ffmpeg-devel @ 2024-04-17 19:20 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: thilo.borgmann
From: Thilo Borgmann via ffmpeg-devel <ffmpeg-devel@ffmpeg.org>
---
libavcodec/webp.c | 142 +++++++++++++++++++++++-----------------------
1 file changed, 70 insertions(+), 72 deletions(-)
diff --git a/libavcodec/webp.c b/libavcodec/webp.c
index f882c3e187..4a244c1b67 100644
--- a/libavcodec/webp.c
+++ b/libavcodec/webp.c
@@ -1380,7 +1380,76 @@ static int vp8_lossy_decode_frame(AVCodecContext *avctx, AVFrame *p,
return ret;
}
-int init_canvas_frame(WebPContext *s, int format, int key_frame);
+static int init_canvas_frame(WebPContext *s, int format, int key_frame)
+{
+ AVFrame *canvas = s->canvas_frame.f;
+ int height;
+ int ret;
+
+ // canvas is needed only for animation
+ if (!(s->vp8x_flags & VP8X_FLAG_ANIMATION))
+ return 0;
+
+ // avoid init for non-key frames whose format and size did not change
+ if (!key_frame &&
+ canvas->data[0] &&
+ canvas->format == format &&
+ canvas->width == s->canvas_width &&
+ canvas->height == s->canvas_height)
+ return 0;
+
+ // canvas changes within IPPP sequences will lose thread sync
+ // because of the ThreadFrame reallocation and will wait forever
+ // so if frame-threading is used, forbid canvas changes and unlock
+ // previous frames
+ if (!key_frame && canvas->data[0]) {
+ if (s->avctx->thread_count > 1) {
+ av_log(s->avctx, AV_LOG_WARNING, "Canvas change detected. The output will be damaged. Use -threads 1 to try decoding with best effort.\n");
+ // unlock previous frames that have sent an _await() call
+ ff_thread_report_progress(&s->canvas_frame, INT_MAX, 0);
+ return AVERROR_PATCHWELCOME;
+ } else {
+ // warn for damaged frames
+ av_log(s->avctx, AV_LOG_WARNING, "Canvas change detected. The output will be damaged.\n");
+ }
+ }
+
+ s->avctx->pix_fmt = format;
+ canvas->format = format;
+ canvas->width = s->canvas_width;
+ canvas->height = s->canvas_height;
+
+ // VP8 decoder changed the width and height in AVCodecContext.
+ // Change it back to the canvas size.
+ ret = ff_set_dimensions(s->avctx, s->canvas_width, s->canvas_height);
+ if (ret < 0)
+ return ret;
+
+ ff_thread_release_ext_buffer(&s->canvas_frame);
+ ret = ff_thread_get_ext_buffer(s->avctx, &s->canvas_frame, AV_GET_BUFFER_FLAG_REF);
+ if (ret < 0)
+ return ret;
+
+ if (canvas->format == AV_PIX_FMT_ARGB) {
+ height = canvas->height;
+ memset(canvas->data[0], 0, height * canvas->linesize[0]);
+ } else /* if (canvas->format == AV_PIX_FMT_YUVA420P) */ {
+ const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(canvas->format);
+ for (int comp = 0; comp < desc->nb_components; comp++) {
+ int plane = desc->comp[comp].plane;
+
+ if (comp == 1 || comp == 2)
+ height = AV_CEIL_RSHIFT(canvas->height, desc->log2_chroma_h);
+ else
+ height = FFALIGN(canvas->height, 1 << desc->log2_chroma_h);
+
+ memset(canvas->data[plane], s->transparent_yuva[plane],
+ height * canvas->linesize[plane]);
+ }
+ }
+
+ return 0;
+}
static int webp_decode_frame_common(AVCodecContext *avctx, uint8_t *data, int size,
int *got_frame, int key_frame, AVFrame *p)
@@ -1629,77 +1698,6 @@ exif_end:
return size;
}
-int init_canvas_frame(WebPContext *s, int format, int key_frame)
-{
- AVFrame *canvas = s->canvas_frame.f;
- int height;
- int ret;
-
- // canvas is needed only for animation
- if (!(s->vp8x_flags & VP8X_FLAG_ANIMATION))
- return 0;
-
- // avoid init for non-key frames whose format and size did not change
- if (!key_frame &&
- canvas->data[0] &&
- canvas->format == format &&
- canvas->width == s->canvas_width &&
- canvas->height == s->canvas_height)
- return 0;
-
- // canvas changes within IPPP sequences will loose thread sync
- // because of the ThreadFrame reallocation and will wait forever
- // so if frame-threading is used, forbid canvas changes and unlock
- // previous frames
- if (!key_frame && canvas->data[0]) {
- if (s->avctx->thread_count > 1) {
- av_log(s->avctx, AV_LOG_WARNING, "Canvas change detected. The output will be damaged. Use -threads 1 to try decoding with best effort.\n");
- // unlock previous frames that have sent an _await() call
- ff_thread_report_progress(&s->canvas_frame, INT_MAX, 0);
- return AVERROR_PATCHWELCOME;
- } else {
- // warn for damaged frames
- av_log(s->avctx, AV_LOG_WARNING, "Canvas change detected. The output will be damaged.\n");
- }
- }
-
- s->avctx->pix_fmt = format;
- canvas->format = format;
- canvas->width = s->canvas_width;
- canvas->height = s->canvas_height;
-
- // VP8 decoder changed the width and height in AVCodecContext.
- // Change it back to the canvas size.
- ret = ff_set_dimensions(s->avctx, s->canvas_width, s->canvas_height);
- if (ret < 0)
- return ret;
-
- ff_thread_release_ext_buffer(s->avctx, &s->canvas_frame);
- ret = ff_thread_get_ext_buffer(s->avctx, &s->canvas_frame, AV_GET_BUFFER_FLAG_REF);
- if (ret < 0)
- return ret;
-
- if (canvas->format == AV_PIX_FMT_ARGB) {
- height = canvas->height;
- memset(canvas->data[0], 0, height * canvas->linesize[0]);
- } else /* if (canvas->format == AV_PIX_FMT_YUVA420P) */ {
- const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(canvas->format);
- for (int comp = 0; comp < desc->nb_components; comp++) {
- int plane = desc->comp[comp].plane;
-
- if (comp == 1 || comp == 2)
- height = AV_CEIL_RSHIFT(canvas->height, desc->log2_chroma_h);
- else
- height = FFALIGN(canvas->height, 1 << desc->log2_chroma_h);
-
- memset(canvas->data[plane], s->transparent_yuva[plane],
- height * canvas->linesize[plane]);
- }
- }
-
- return 0;
-}
-
/*
* Blend src1 (foreground) and src2 (background) into dest, in ARGB format.
* width, height are the dimensions of src1
--
2.43.0
* [FFmpeg-devel] [PATCH v12 6/8] libavformat/webp: add WebP demuxer
2024-04-17 19:19 [FFmpeg-devel] [PATCH v12 0/8] [WIP] webp: add support for animated WebP decoding Thilo Borgmann via ffmpeg-devel
` (4 preceding siblings ...)
2024-04-17 19:20 ` [FFmpeg-devel] [PATCH v12 5/8] avcodec/webp: make init_canvas_frame static Thilo Borgmann via ffmpeg-devel
@ 2024-04-17 19:20 ` Thilo Borgmann via ffmpeg-devel
2024-04-17 19:20 ` [FFmpeg-devel] [PATCH v12 7/8] fate: add test for animated WebP Thilo Borgmann via ffmpeg-devel
` (2 subsequent siblings)
8 siblings, 0 replies; 15+ messages in thread
From: Thilo Borgmann via ffmpeg-devel @ 2024-04-17 19:20 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: thilo.borgmann
From: Josef Zlomek <josef@pex.com>
Add a demuxer for animated WebP files.
It supports non-animated, animated, truncated, and concatenated files.
Reading from a pipe (and other non-seekable inputs) is also supported.
The WebP demuxer splits the input stream into packets containing one frame.
It also marks the key frames properly.
The loop count is ignored by default (the same behaviour as animated PNG and GIF);
it may be enabled with the option '-ignore_loop 0'.
The frame rate is set according to the frame delay in the ANMF chunk.
If the delay is too low, or the image is not animated, the default frame rate
is set to 10 fps, similar to other WebP libraries and browsers.
The fate suite was updated accordingly.
Signed-off-by: Josef Zlomek <josef@pex.com>
---
Changelog | 1 +
doc/demuxers.texi | 28 ++
libavformat/Makefile | 1 +
libavformat/allformats.c | 1 +
libavformat/version.h | 2 +-
libavformat/webpdec.c | 383 ++++++++++++++++++
tests/ref/fate/exif-image-webp | 4 +-
tests/ref/fate/webp-rgb-lena-lossless | 2 +-
tests/ref/fate/webp-rgb-lena-lossless-rgb24 | 2 +-
tests/ref/fate/webp-rgb-lossless | 2 +-
.../fate/webp-rgb-lossless-palette-predictor | 2 +-
tests/ref/fate/webp-rgb-lossy-q80 | 2 +-
tests/ref/fate/webp-rgba-lossless | 2 +-
tests/ref/fate/webp-rgba-lossy-q80 | 2 +-
14 files changed, 424 insertions(+), 10 deletions(-)
create mode 100644 libavformat/webpdec.c
diff --git a/Changelog b/Changelog
index 03ac047cea..698182aca3 100644
--- a/Changelog
+++ b/Changelog
@@ -94,6 +94,7 @@ version 6.1:
variable-fields elements within the same parent element
- ffprobe -output_format option added as an alias of -of
- animated WebP decoder
+- animated WebP demuxer
version 6.0:
diff --git a/doc/demuxers.texi b/doc/demuxers.texi
index 04293c4813..9c9d0fee17 100644
--- a/doc/demuxers.texi
+++ b/doc/demuxers.texi
@@ -1158,4 +1158,32 @@ this is set to 0, which means that a sensible value is chosen based on the
input format.
@end table
+@section webp
+
+Animated WebP demuxer.
+
+It accepts the following options:
+
+@table @option
+@item -min_delay @var{int}
+Set the minimum valid delay between frames in milliseconds.
+Range is 0 to 60000. Default value is 10.
+
+@item -max_webp_delay @var{int}
+Set the maximum valid delay between frames in milliseconds.
+Range is 0 to 16777215. Default value is 16777215 (over four hours),
+the maximum value allowed by the specification.
+
+@item -default_delay @var{int}
+Set the default delay between frames in milliseconds.
+Range is 0 to 60000. Default value is 100.
+
+@item -ignore_loop @var{bool}
+WebP files can contain information to loop a certain number of times
+(or infinitely). If @option{ignore_loop} is set to true, then the loop
+setting from the input will be ignored and looping will not occur.
+If set to false, the animation will loop the number of times specified
+in the WebP file. Default value is true.
+@end table
+
@c man end DEMUXERS
diff --git a/libavformat/Makefile b/libavformat/Makefile
index 8efe26b6df..6e7969846b 100644
--- a/libavformat/Makefile
+++ b/libavformat/Makefile
@@ -628,6 +628,7 @@ OBJS-$(CONFIG_WEBM_MUXER) += matroskaenc.o matroska.o \
av1.o avlanguage.o
OBJS-$(CONFIG_WEBM_DASH_MANIFEST_MUXER) += webmdashenc.o
OBJS-$(CONFIG_WEBM_CHUNK_MUXER) += webm_chunk.o
+OBJS-$(CONFIG_WEBP_DEMUXER) += webpdec.o
OBJS-$(CONFIG_WEBP_MUXER) += webpenc.o
OBJS-$(CONFIG_WEBVTT_DEMUXER) += webvttdec.o subtitles.o
OBJS-$(CONFIG_WEBVTT_MUXER) += webvttenc.o
diff --git a/libavformat/allformats.c b/libavformat/allformats.c
index 305fa46532..23f6ef7f7d 100644
--- a/libavformat/allformats.c
+++ b/libavformat/allformats.c
@@ -511,6 +511,7 @@ extern const FFInputFormat ff_webm_dash_manifest_demuxer;
extern const FFOutputFormat ff_webm_dash_manifest_muxer;
extern const FFOutputFormat ff_webm_chunk_muxer;
extern const FFOutputFormat ff_webp_muxer;
+extern const FFInputFormat ff_webp_demuxer;
extern const FFInputFormat ff_webvtt_demuxer;
extern const FFOutputFormat ff_webvtt_muxer;
extern const FFInputFormat ff_wsaud_demuxer;
diff --git a/libavformat/version.h b/libavformat/version.h
index 7ff1483912..ee91990360 100644
--- a/libavformat/version.h
+++ b/libavformat/version.h
@@ -32,7 +32,7 @@
#include "version_major.h"
#define LIBAVFORMAT_VERSION_MINOR 3
-#define LIBAVFORMAT_VERSION_MICRO 100
+#define LIBAVFORMAT_VERSION_MICRO 101
#define LIBAVFORMAT_VERSION_INT AV_VERSION_INT(LIBAVFORMAT_VERSION_MAJOR, \
LIBAVFORMAT_VERSION_MINOR, \
diff --git a/libavformat/webpdec.c b/libavformat/webpdec.c
new file mode 100644
index 0000000000..edd9c79ad4
--- /dev/null
+++ b/libavformat/webpdec.c
@@ -0,0 +1,383 @@
+/*
+ * WebP demuxer
+ * Copyright (c) 2020 Pexeso Inc.
+ *
+ * This file is part of FFmpeg.
+ *
+ * FFmpeg is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * FFmpeg is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with FFmpeg; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+/**
+ * @file
+ * WebP demuxer.
+ */
+
+#include "libavformat/demux.h"
+#include "avformat.h"
+#include "internal.h"
+#include "libavutil/intreadwrite.h"
+#include "libavutil/opt.h"
+
+typedef struct WebPDemuxContext {
+ const AVClass *class;
+ /**
+ * Time span in milliseconds before the next frame
+ * should be drawn on screen.
+ */
+ int delay;
+ /**
+ * Minimum allowed delay between frames in milliseconds.
+ * Values below this threshold are considered invalid
+ * and are replaced by the value of default_delay.
+ */
+ int min_delay;
+ int max_delay;
+ int default_delay;
+
+ /*
+ * loop options
+ */
+ int ignore_loop; ///< ignore loop setting
+ int num_loop; ///< number of times to loop the animation
+ int cur_loop; ///< current loop counter
+ int64_t file_start; ///< start position of the current animation file
+ uint32_t remaining_size; ///< remaining size of the current animation file
+
+ /*
+ * variables for the key frame detection
+ */
+ int nb_frames; ///< number of frames of the current animation file
+ int vp8x_flags;
+ int canvas_width; ///< width of the canvas
+ int canvas_height; ///< height of the canvas
+} WebPDemuxContext;
+
+#define VP8X_FLAG_ANIMATION 0x02
+/**
+ * Major web browsers display WebPs at ~10-15 fps when the rate is not
+ * explicitly set or is set too low. We assume a default rate of 10 fps.
+ * Default delay = 1000 milliseconds / 10 fps = 100 milliseconds per frame.
+ */
+#define WEBP_DEFAULT_DELAY 100
+/**
+ * By default, delay values less than this threshold are considered invalid.
+ */
+#define WEBP_MIN_DELAY 10
+
+static int webp_probe(const AVProbeData *p)
+{
+ const uint8_t *b = p->buf;
+
+ if (AV_RB32(b) == MKBETAG('R', 'I', 'F', 'F') &&
+ AV_RB32(b + 8) == MKBETAG('W', 'E', 'B', 'P'))
+ return AVPROBE_SCORE_MAX;
+
+ return 0;
+}
+
+static int webp_read_header(AVFormatContext *s)
+{
+ WebPDemuxContext *wdc = s->priv_data;
+ AVIOContext *pb = s->pb;
+ AVStream *st;
+ int ret, n;
+ uint32_t chunk_type, chunk_size;
+ int canvas_width = 0;
+ int canvas_height = 0;
+ int width = 0;
+ int height = 0;
+
+ wdc->delay = wdc->default_delay;
+ wdc->num_loop = 1;
+
+ st = avformat_new_stream(s, NULL);
+ if (!st)
+ return AVERROR(ENOMEM);
+
+ wdc->file_start = avio_tell(pb);
+ wdc->remaining_size = avio_size(pb) - wdc->file_start;
+
+ while (wdc->remaining_size > 8 && !avio_feof(pb)) {
+ chunk_type = avio_rl32(pb);
+ chunk_size = avio_rl32(pb);
+ if (chunk_size == UINT32_MAX)
+ return AVERROR_INVALIDDATA;
+ chunk_size += chunk_size & 1;
+ if (avio_feof(pb))
+ break;
+
+ if (wdc->remaining_size < 8 + chunk_size)
+ return AVERROR_INVALIDDATA;
+
+ if (chunk_type == MKTAG('R', 'I', 'F', 'F')) {
+ wdc->remaining_size = 8 + chunk_size;
+ chunk_size = 4;
+ }
+
+ wdc->remaining_size -= 8 + chunk_size;
+
+ switch (chunk_type) {
+ case MKTAG('V', 'P', '8', 'X'):
+ if (chunk_size >= 10) {
+ wdc->vp8x_flags = avio_r8(pb);
+ avio_skip(pb, 3);
+ canvas_width = avio_rl24(pb) + 1;
+ canvas_height = avio_rl24(pb) + 1;
+ ret = avio_skip(pb, chunk_size - 10);
+ } else
+ ret = avio_skip(pb, chunk_size);
+ break;
+ case MKTAG('V', 'P', '8', ' '):
+ if (chunk_size >= 10) {
+ avio_skip(pb, 6);
+ width = avio_rl16(pb) & 0x3fff;
+ height = avio_rl16(pb) & 0x3fff;
+ ret = avio_skip(pb, chunk_size - 10);
+ } else
+ ret = avio_skip(pb, chunk_size);
+ break;
+ case MKTAG('V', 'P', '8', 'L'):
+ if (chunk_size >= 5) {
+ avio_skip(pb, 1);
+ n = avio_rl32(pb);
+ width = (n & 0x3fff) + 1; // first 14 bits
+ height = ((n >> 14) & 0x3fff) + 1; // next 14 bits
+ ret = avio_skip(pb, chunk_size - 5);
+ } else
+ ret = avio_skip(pb, chunk_size);
+ break;
+ case MKTAG('A', 'N', 'M', 'F'):
+ if (chunk_size >= 12) {
+ avio_skip(pb, 6);
+ width = avio_rl24(pb) + 1;
+ height = avio_rl24(pb) + 1;
+ ret = avio_skip(pb, chunk_size - 12);
+ } else
+ ret = avio_skip(pb, chunk_size);
+ break;
+ default:
+ ret = avio_skip(pb, chunk_size);
+ break;
+ }
+
+ if (ret < 0)
+ return ret;
+
+ // set canvas size if no VP8X chunk was present
+ if (!canvas_width && width > 0)
+ canvas_width = width;
+ if (!canvas_height && height > 0)
+ canvas_height = height;
+ }
+
+ // the WebP format measures time in milliseconds, therefore the timebase is 1/1000
+ avpriv_set_pts_info(st, 64, 1, 1000);
+ st->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
+ st->codecpar->codec_id = AV_CODEC_ID_WEBP;
+ st->codecpar->codec_tag = MKTAG('W', 'E', 'B', 'P');
+ st->codecpar->width = canvas_width;
+ st->codecpar->height = canvas_height;
+ st->start_time = 0;
+
+ // jump to start
+ if ((ret = avio_seek(pb, wdc->file_start, SEEK_SET)) < 0)
+ return ret;
+ wdc->remaining_size = 0;
+
+ return 0;
+}
+
+static int webp_read_packet(AVFormatContext *s, AVPacket *pkt)
+{
+ WebPDemuxContext *wdc = s->priv_data;
+ AVIOContext *pb = s->pb;
+ int ret, n;
+ int64_t packet_start = avio_tell(pb);
+ int64_t packet_end;
+ uint32_t chunk_type;
+ uint32_t chunk_size;
+ int width = 0;
+ int height = 0;
+ int is_frame = 0;
+ int is_animation = wdc->vp8x_flags & VP8X_FLAG_ANIMATION;
+ int key_frame = 0;
+
+ if (wdc->remaining_size == 0) {
+ wdc->remaining_size = avio_size(pb) - avio_tell(pb);
+ if (wdc->remaining_size == 0) { // EOF
+ int ret;
+ wdc->delay = wdc->default_delay;
+ if (wdc->ignore_loop ||
+ (wdc->num_loop && wdc->cur_loop == wdc->num_loop - 1))
+ return AVERROR_EOF;
+
+ av_log(s, AV_LOG_DEBUG, "loop: %d\n", wdc->cur_loop);
+
+ wdc->cur_loop++;
+ ret = avio_seek(pb, wdc->file_start, SEEK_SET);
+ if (ret < 0)
+ return AVERROR_INVALIDDATA;
+ wdc->remaining_size = avio_size(pb) - avio_tell(pb);
+ packet_start = avio_tell(pb);
+ }
+ }
+
+ while (wdc->remaining_size > 0 && !avio_feof(pb)) {
+ chunk_type = avio_rl32(pb);
+ chunk_size = avio_rl32(pb);
+
+ if (chunk_size == UINT32_MAX)
+ return AVERROR_INVALIDDATA;
+ chunk_size += chunk_size & 1;
+
+ if (avio_feof(pb))
+ break;
+
+ // dive into RIFF chunk
+ if (chunk_type == MKTAG('R', 'I', 'F', 'F') && chunk_size > 4) {
+ wdc->file_start = avio_tell(pb) - 8;
+ wdc->remaining_size = 8 + chunk_size;
+ chunk_size = 4;
+ }
+
+ switch (chunk_type) {
+ case MKTAG('A', 'N', 'I', 'M'):
+ if (chunk_size >= 6) {
+ avio_seek(pb, 4, SEEK_CUR);
+ wdc->num_loop = avio_rb16(pb);
+ avio_seek(pb, chunk_size - 6, SEEK_CUR);
+ }
+ break;
+ case MKTAG('V', 'P', '8', ' '):
+ if (is_frame && !is_animation)
+ goto flush;
+ is_frame = 1;
+
+ if (chunk_size >= 10) {
+ avio_skip(pb, 6);
+ width = avio_rl16(pb) & 0x3fff;
+ height = avio_rl16(pb) & 0x3fff;
+ wdc->nb_frames++;
+ ret = avio_skip(pb, chunk_size - 10);
+ } else
+ ret = avio_skip(pb, chunk_size);
+ break;
+ case MKTAG('V', 'P', '8', 'L'):
+ if (is_frame && !is_animation)
+ goto flush;
+ is_frame = 1;
+
+ if (chunk_size >= 5) {
+ avio_skip(pb, 1);
+ n = avio_rl32(pb);
+ width = (n & 0x3fff) + 1; // first 14 bits
+ height = ((n >> 14) & 0x3fff) + 1; // next 14 bits
+ wdc->nb_frames++;
+ ret = avio_skip(pb, chunk_size - 5);
+ } else
+ ret = avio_skip(pb, chunk_size);
+ break;
+ case MKTAG('A', 'N', 'M', 'F'):
+ if (chunk_size >= 16) {
+ avio_skip(pb, 6);
+ width = avio_rl24(pb) + 1;
+ height = avio_rl24(pb) + 1;
+ wdc->delay = avio_rl24(pb);
+ avio_skip(pb, 1); // anmf_flags
+ if (wdc->delay < wdc->min_delay)
+ wdc->delay = wdc->default_delay;
+ wdc->delay = FFMIN(wdc->delay, wdc->max_delay);
+ chunk_size = 16;
+ ret = 0;
+ } else
+ ret = avio_skip(pb, chunk_size);
+ break;
+ default:
+ ret = avio_skip(pb, chunk_size);
+ break;
+ }
+ if (ret == AVERROR_EOF) {
+ // EOF was reached but the position may still be in the middle
+ // of the buffer. Seek to the end of the buffer so that EOF is
+ // handled properly in the next invocation of webp_read_packet.
+ if ((ret = avio_seek(pb, pb->buf_end - pb->buf_ptr, SEEK_CUR)) < 0)
+ return ret;
+ wdc->remaining_size = 0;
+ return AVERROR_EOF;
+ }
+ if (ret < 0)
+ return ret;
+
+ if (!wdc->canvas_width && width > 0)
+ wdc->canvas_width = width;
+ if (!wdc->canvas_height && height > 0)
+ wdc->canvas_height = height;
+
+ if (wdc->remaining_size < 8 + chunk_size)
+ return AVERROR_INVALIDDATA;
+ wdc->remaining_size -= 8 + chunk_size;
+
+ packet_end = avio_tell(pb);
+ }
+
+flush:
+ if ((ret = avio_seek(pb, packet_start, SEEK_SET)) < 0)
+ return ret;
+
+ if ((ret = av_get_packet(pb, pkt, packet_end - packet_start)) < 0)
+ return ret;
+
+ key_frame = is_frame && wdc->nb_frames == 1;
+ if (key_frame)
+ pkt->flags |= AV_PKT_FLAG_KEY;
+ else
+ pkt->flags &= ~AV_PKT_FLAG_KEY;
+
+ pkt->stream_index = 0;
+ pkt->duration = is_frame ? wdc->delay : 0;
+ pkt->pts = pkt->dts = AV_NOPTS_VALUE;
+
+ if (is_frame && wdc->nb_frames == 1)
+ s->streams[0]->r_frame_rate = (AVRational) {1000, pkt->duration};
+
+ return ret;
+}
+
+static const AVOption options[] = {
+ { "min_delay" , "minimum valid delay between frames (in milliseconds)", offsetof(WebPDemuxContext, min_delay) , AV_OPT_TYPE_INT, {.i64 = WEBP_MIN_DELAY} , 0, 1000 * 60, AV_OPT_FLAG_DECODING_PARAM },
+ { "max_webp_delay", "maximum valid delay between frames (in milliseconds)", offsetof(WebPDemuxContext, max_delay) , AV_OPT_TYPE_INT, {.i64 = 0xffffff} , 0, 0xffffff , AV_OPT_FLAG_DECODING_PARAM },
+ { "default_delay" , "default delay between frames (in milliseconds)" , offsetof(WebPDemuxContext, default_delay), AV_OPT_TYPE_INT, {.i64 = WEBP_DEFAULT_DELAY}, 0, 1000 * 60, AV_OPT_FLAG_DECODING_PARAM },
+ { "ignore_loop" , "ignore loop setting" , offsetof(WebPDemuxContext, ignore_loop) , AV_OPT_TYPE_BOOL,{.i64 = 1} , 0, 1 , AV_OPT_FLAG_DECODING_PARAM },
+ { NULL },
+};
+
+static const AVClass demuxer_class = {
+ .class_name = "WebP demuxer",
+ .item_name = av_default_item_name,
+ .option = options,
+ .version = LIBAVUTIL_VERSION_INT,
+ .category = AV_CLASS_CATEGORY_DEMUXER,
+};
+
+const FFInputFormat ff_webp_demuxer = {
+ .p.name = "webp",
+ .p.long_name = NULL_IF_CONFIG_SMALL("WebP image"),
+ .p.flags = AVFMT_GENERIC_INDEX,
+ .p.priv_class = &demuxer_class,
+ .priv_data_size = sizeof(WebPDemuxContext),
+ .read_probe = webp_probe,
+ .read_header = webp_read_header,
+ .read_packet = webp_read_packet,
+};
diff --git a/tests/ref/fate/exif-image-webp b/tests/ref/fate/exif-image-webp
index 73560e8ba0..d0de863c3c 100644
--- a/tests/ref/fate/exif-image-webp
+++ b/tests/ref/fate/exif-image-webp
@@ -8,8 +8,8 @@ pkt_dts=0
pkt_dts_time=0.000000
best_effort_timestamp=0
best_effort_timestamp_time=0.000000
-duration=1
-duration_time=0.040000
+duration=100
+duration_time=0.100000
pkt_pos=0
pkt_size=39276
width=400
diff --git a/tests/ref/fate/webp-rgb-lena-lossless b/tests/ref/fate/webp-rgb-lena-lossless
index c00715a5e7..e784c501eb 100644
--- a/tests/ref/fate/webp-rgb-lena-lossless
+++ b/tests/ref/fate/webp-rgb-lena-lossless
@@ -1,4 +1,4 @@
-#tb 0: 1/25
+#tb 0: 1/10
#media_type 0: video
#codec_id 0: rawvideo
#dimensions 0: 128x128
diff --git a/tests/ref/fate/webp-rgb-lena-lossless-rgb24 b/tests/ref/fate/webp-rgb-lena-lossless-rgb24
index 7f8f550afe..395a01fa1b 100644
--- a/tests/ref/fate/webp-rgb-lena-lossless-rgb24
+++ b/tests/ref/fate/webp-rgb-lena-lossless-rgb24
@@ -1,4 +1,4 @@
-#tb 0: 1/25
+#tb 0: 1/10
#media_type 0: video
#codec_id 0: rawvideo
#dimensions 0: 128x128
diff --git a/tests/ref/fate/webp-rgb-lossless b/tests/ref/fate/webp-rgb-lossless
index 8dbdfd6887..1db3ce70f7 100644
--- a/tests/ref/fate/webp-rgb-lossless
+++ b/tests/ref/fate/webp-rgb-lossless
@@ -1,4 +1,4 @@
-#tb 0: 1/25
+#tb 0: 1/10
#media_type 0: video
#codec_id 0: rawvideo
#dimensions 0: 12x8
diff --git a/tests/ref/fate/webp-rgb-lossless-palette-predictor b/tests/ref/fate/webp-rgb-lossless-palette-predictor
index 92a4ad9810..65537f7ed1 100644
--- a/tests/ref/fate/webp-rgb-lossless-palette-predictor
+++ b/tests/ref/fate/webp-rgb-lossless-palette-predictor
@@ -1,4 +1,4 @@
-#tb 0: 1/25
+#tb 0: 1/10
#media_type 0: video
#codec_id 0: rawvideo
#dimensions 0: 100x30
diff --git a/tests/ref/fate/webp-rgb-lossy-q80 b/tests/ref/fate/webp-rgb-lossy-q80
index f61d75ac13..cd43415b95 100644
--- a/tests/ref/fate/webp-rgb-lossy-q80
+++ b/tests/ref/fate/webp-rgb-lossy-q80
@@ -1,4 +1,4 @@
-#tb 0: 1/25
+#tb 0: 1/10
#media_type 0: video
#codec_id 0: rawvideo
#dimensions 0: 12x8
diff --git a/tests/ref/fate/webp-rgba-lossless b/tests/ref/fate/webp-rgba-lossless
index bb654ae442..2f763c6c46 100644
--- a/tests/ref/fate/webp-rgba-lossless
+++ b/tests/ref/fate/webp-rgba-lossless
@@ -1,4 +1,4 @@
-#tb 0: 1/25
+#tb 0: 1/10
#media_type 0: video
#codec_id 0: rawvideo
#dimensions 0: 12x8
diff --git a/tests/ref/fate/webp-rgba-lossy-q80 b/tests/ref/fate/webp-rgba-lossy-q80
index d2c2aa3fce..6b114f772e 100644
--- a/tests/ref/fate/webp-rgba-lossy-q80
+++ b/tests/ref/fate/webp-rgba-lossy-q80
@@ -1,4 +1,4 @@
-#tb 0: 1/25
+#tb 0: 1/10
#media_type 0: video
#codec_id 0: rawvideo
#dimensions 0: 12x8
--
2.43.0
* [FFmpeg-devel] [PATCH v12 7/8] fate: add test for animated WebP
2024-04-17 19:19 [FFmpeg-devel] [PATCH v12 0/8] [WIP] webp: add support for animated WebP decoding Thilo Borgmann via ffmpeg-devel
` (5 preceding siblings ...)
2024-04-17 19:20 ` [FFmpeg-devel] [PATCH v12 6/8] libavformat/webp: add WebP demuxer Thilo Borgmann via ffmpeg-devel
@ 2024-04-17 19:20 ` Thilo Borgmann via ffmpeg-devel
2024-04-17 19:20 ` [FFmpeg-devel] [PATCH v12 8/8] avcodec/webp: export XMP metadata Thilo Borgmann via ffmpeg-devel
2024-04-17 22:52 ` [FFmpeg-devel] [PATCH v12 0/8] [WIP] webp: add support for animated WebP decoding James Zern via ffmpeg-devel
8 siblings, 0 replies; 15+ messages in thread
From: Thilo Borgmann via ffmpeg-devel @ 2024-04-17 19:20 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: thilo.borgmann
From: Thilo Borgmann via ffmpeg-devel <ffmpeg-devel@ffmpeg.org>
---
tests/fate/image.mak | 3 +++
tests/ref/fate/webp-anim | 22 ++++++++++++++++++++++
2 files changed, 25 insertions(+)
create mode 100644 tests/ref/fate/webp-anim
diff --git a/tests/fate/image.mak b/tests/fate/image.mak
index 753936ec20..b6a4cd2ba3 100644
--- a/tests/fate/image.mak
+++ b/tests/fate/image.mak
@@ -566,6 +566,9 @@ fate-webp-rgb-lossy-q80: CMD = framecrc -i $(TARGET_SAMPLES)/webp/rgb_q80.webp
FATE_WEBP += fate-webp-rgba-lossy-q80
fate-webp-rgba-lossy-q80: CMD = framecrc -i $(TARGET_SAMPLES)/webp/rgba_q80.webp
+FATE_WEBP += fate-webp-anim
+fate-webp-anim: CMD = framecrc -i $(TARGET_SAMPLES)/webp/130227-100431-6817p.webp
+
FATE_WEBP-$(call DEMDEC, IMAGE2, WEBP) += $(FATE_WEBP)
FATE_IMAGE_FRAMECRC += $(FATE_WEBP-yes)
fate-webp: $(FATE_WEBP-yes)
diff --git a/tests/ref/fate/webp-anim b/tests/ref/fate/webp-anim
new file mode 100644
index 0000000000..f0d3f1a88f
--- /dev/null
+++ b/tests/ref/fate/webp-anim
@@ -0,0 +1,22 @@
+#tb 0: 1/1000
+#media_type 0: video
+#codec_id 0: rawvideo
+#dimensions 0: 100x70
+#sar 0: 0/1
+0, 0, 0, 80, 28000, 0x2023ba6e
+0, 80, 80, 80, 28000, 0x4292b778
+0, 160, 160, 80, 28000, 0x1c972ef1
+0, 240, 240, 80, 28000, 0xa98d8d04
+0, 320, 320, 80, 28000, 0xd323b6af
+0, 400, 400, 80, 28000, 0x508aba99
+0, 480, 480, 80, 28000, 0x5c672dda
+0, 560, 560, 80, 28000, 0xc8961ebb
+0, 640, 640, 1000, 28000, 0x82460e1b
+0, 1640, 1640, 80, 28000, 0x3debbfc9
+0, 1720, 1720, 80, 28000, 0x427ab31f
+0, 1800, 1800, 80, 28000, 0x6bbdec2e
+0, 1880, 1880, 80, 28000, 0x5690b56b
+0, 1960, 1960, 80, 28000, 0xb62963f3
+0, 2040, 2040, 80, 28000, 0x68dd37b2
+0, 2120, 2120, 80, 28000, 0x465c47d2
+0, 2200, 2200, 10000, 28000, 0xa92033df
--
2.43.0
* [FFmpeg-devel] [PATCH v12 8/8] avcodec/webp: export XMP metadata
2024-04-17 19:19 [FFmpeg-devel] [PATCH v12 0/8] [WIP] webp: add support for animated WebP decoding Thilo Borgmann via ffmpeg-devel
` (6 preceding siblings ...)
2024-04-17 19:20 ` [FFmpeg-devel] [PATCH v12 7/8] fate: add test for animated WebP Thilo Borgmann via ffmpeg-devel
@ 2024-04-17 19:20 ` Thilo Borgmann via ffmpeg-devel
2024-04-17 22:52 ` [FFmpeg-devel] [PATCH v12 0/8] [WIP] webp: add support for animated WebP decoding James Zern via ffmpeg-devel
8 siblings, 0 replies; 15+ messages in thread
From: Thilo Borgmann via ffmpeg-devel @ 2024-04-17 19:20 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: thilo.borgmann
From: Thilo Borgmann via ffmpeg-devel <ffmpeg-devel@ffmpeg.org>
---
libavcodec/webp.c | 42 ++++++++++++++++++++++++++++++++++++------
1 file changed, 36 insertions(+), 6 deletions(-)
diff --git a/libavcodec/webp.c b/libavcodec/webp.c
index 4a244c1b67..35851ef3da 100644
--- a/libavcodec/webp.c
+++ b/libavcodec/webp.c
@@ -38,8 +38,8 @@
* @author Josef Zlomek, Pexeso Inc. <josef@pex.com>
* Animation
*
- * Unimplemented:
- * - XMP metadata
+ * @author Thilo Borgmann <thilo.borgmann _at_ mail.de>
+ * XMP metadata
*/
#include "libavutil/imgutils.h"
@@ -217,6 +217,7 @@ typedef struct WebPContext {
int alpha_data_size; /* alpha chunk data size */
int has_exif; /* set after an EXIF chunk has been processed */
int has_iccp; /* set after an ICCP chunk has been processed */
+ int has_xmp; /* set after an XMP chunk has been processed */
int width; /* image width */
int height; /* image height */
int vp8x_flags; /* global flags from VP8X chunk */
@@ -1464,6 +1465,7 @@ static int webp_decode_frame_common(AVCodecContext *avctx, uint8_t *data, int si
// reset metadata bit for each packet
s->has_exif = 0;
s->has_iccp = 0;
+ s->has_xmp = 0;
while (bytestream2_get_bytes_left(&gb) > 8) {
char chunk_str[5] = { 0 };
@@ -1495,6 +1497,7 @@ static int webp_decode_frame_common(AVCodecContext *avctx, uint8_t *data, int si
s->canvas_height = 0;
s->has_exif = 0;
s->has_iccp = 0;
+ s->has_xmp = 0;
ff_thread_release_ext_buffer(&s->canvas_frame);
break;
case MKTAG('V', 'P', '8', ' '):
@@ -1680,12 +1683,39 @@ exif_end:
}
s->vp8x_flags |= VP8X_FLAG_ANIMATION;
break;
- case MKTAG('X', 'M', 'P', ' '):
- AV_WL32(chunk_str, chunk_type);
- av_log(avctx, AV_LOG_WARNING, "skipping unsupported chunk: %s\n",
- chunk_str);
+ case MKTAG('X', 'M', 'P', ' '): {
+ GetByteContext xmp_gb;
+ AVDictionary **xmp_metadata = NULL;
+ uint8_t *buffer;
+ int xmp_offset = bytestream2_tell(&gb);
+
+ if (s->has_xmp) {
+ av_log(avctx, AV_LOG_VERBOSE, "Ignoring extra XMP chunk\n");
+ goto xmp_end;
+ }
+ if (!(s->vp8x_flags & VP8X_FLAG_XMP_METADATA))
+ av_log(avctx, AV_LOG_WARNING,
+ "XMP chunk present, but XMP bit not set in the "
+ "VP8X header\n");
+
+ // there are at least chunk_size bytes left to read
+ buffer = av_malloc(chunk_size + 1);
+ if (!buffer) {
+ return AVERROR(ENOMEM);
+ }
+
+ s->has_xmp = 1;
+ bytestream2_init(&xmp_gb, data + xmp_offset, size - xmp_offset);
+ bytestream2_get_buffer(&xmp_gb, buffer, chunk_size);
+ buffer[chunk_size] = '\0';
+
+ xmp_metadata = (s->vp8x_flags & VP8X_FLAG_ANIMATION) ? &p->metadata : &s->frame->metadata;
+ av_dict_set(xmp_metadata, "xmp", buffer, AV_DICT_DONT_STRDUP_VAL);
+
+xmp_end:
bytestream2_skip(&gb, chunk_size);
break;
+ }
default:
AV_WL32(chunk_str, chunk_type);
av_log(avctx, AV_LOG_VERBOSE, "skipping unknown chunk: %s\n",
--
2.43.0
* Re: [FFmpeg-devel] [PATCH v12 0/8] [WIP] webp: add support for animated WebP decoding
2024-04-17 19:19 [FFmpeg-devel] [PATCH v12 0/8] [WIP] webp: add support for animated WebP decoding Thilo Borgmann via ffmpeg-devel
` (7 preceding siblings ...)
2024-04-17 19:20 ` [FFmpeg-devel] [PATCH v12 8/8] avcodec/webp: export XMP metadata Thilo Borgmann via ffmpeg-devel
@ 2024-04-17 22:52 ` James Zern via ffmpeg-devel
2024-04-18 18:21 ` Thilo Borgmann via ffmpeg-devel
8 siblings, 1 reply; 15+ messages in thread
From: James Zern via ffmpeg-devel @ 2024-04-17 22:52 UTC (permalink / raw)
To: FFmpeg development discussions and patches; +Cc: James Zern, thilo.borgmann
On Wed, Apr 17, 2024 at 12:20 PM Thilo Borgmann via ffmpeg-devel
<ffmpeg-devel@ffmpeg.org> wrote:
>
> From: Thilo Borgmann <thilo.borgmann@mail.de>
>
> Marked WIP because we'd want to introduce private bsf's first; review
> welcome before that though
> VP8 decoder decoupled again
> The whole animated sequence goes into one packet
> The (currently public) bitstream filter splits animations up into non-conformant packets
> Now with XMP metadata support (as string, like MOV)
>
Tests mostly work for me. There are a few images (that I reported
earlier) that give:
Canvas change detected. The output will be damaged. Use -threads 1
to try decoding with best effort.
They don't animate without that option and with it render incorrectly.
A few other notes:
- should ffprobe report anything with files containing xmp?
- 0 duration behaves differently than web browsers, which use the gif
behavior and set it to 10; as long as it's consistent in ffmpeg
between the two either is fine to me.
- ffplay doesn't exit cleanly with the files in
https://crbug.com/690848, though it does with other corrupt files;
ffmpeg exits, so maybe it's a non-issue.
>
> Patch 5/8 is still there for making changes in lavc/webp reviewable but shall be stashed when pushing.
>
> -Thilo
>
> Josef Zlomek (2):
> libavcodec/webp: add support for animated WebP
> libavformat/webp: add WebP demuxer
>
> Thilo Borgmann via ffmpeg-devel (6):
> avcodec/webp: remove unused definitions
> avcodec/webp: separate VP8 decoding
> avcodec/bsf: Add awebp2webp bitstream filter
> avcodec/webp: make init_canvas_frame static
> fate: add test for animated WebP
> avcodec/webp: export XMP metadata
>
> Changelog | 2 +
> configure | 1 +
> doc/demuxers.texi | 28 +
> libavcodec/bitstream_filters.c | 1 +
> libavcodec/bsf/Makefile | 1 +
> libavcodec/bsf/awebp2webp.c | 350 ++++++++
> libavcodec/codec_desc.c | 3 +-
> libavcodec/version.h | 2 +-
> libavcodec/webp.c | 796 ++++++++++++++++--
> libavformat/Makefile | 1 +
> libavformat/allformats.c | 1 +
> libavformat/version.h | 2 +-
> libavformat/webpdec.c | 383 +++++++++
> tests/fate/image.mak | 3 +
> tests/ref/fate/exif-image-webp | 4 +-
> tests/ref/fate/webp-anim | 22 +
> tests/ref/fate/webp-rgb-lena-lossless | 2 +-
> tests/ref/fate/webp-rgb-lena-lossless-rgb24 | 2 +-
> tests/ref/fate/webp-rgb-lossless | 2 +-
> .../fate/webp-rgb-lossless-palette-predictor | 2 +-
> tests/ref/fate/webp-rgb-lossy-q80 | 2 +-
> tests/ref/fate/webp-rgba-lossless | 2 +-
> tests/ref/fate/webp-rgba-lossy-q80 | 2 +-
> 23 files changed, 1530 insertions(+), 84 deletions(-)
> create mode 100644 libavcodec/bsf/awebp2webp.c
> create mode 100644 libavformat/webpdec.c
> create mode 100644 tests/ref/fate/webp-anim
>
> --
> 2.43.0
>
* Re: [FFmpeg-devel] [PATCH v12 0/8] [WIP] webp: add support for animated WebP decoding
2024-04-17 22:52 ` [FFmpeg-devel] [PATCH v12 0/8] [WIP] webp: add support for animated WebP decoding James Zern via ffmpeg-devel
@ 2024-04-18 18:21 ` Thilo Borgmann via ffmpeg-devel
2024-04-18 19:38 ` James Zern via ffmpeg-devel
0 siblings, 1 reply; 15+ messages in thread
From: Thilo Borgmann via ffmpeg-devel @ 2024-04-18 18:21 UTC (permalink / raw)
To: ffmpeg-devel; +Cc: Thilo Borgmann
Hi,
On 17.04.24 00:52, James Zern via ffmpeg-devel wrote:
> On Wed, Apr 17, 2024 at 12:20 PM Thilo Borgmann via ffmpeg-devel
> <ffmpeg-devel@ffmpeg.org> wrote:
>>
>> From: Thilo Borgmann <thilo.borgmann@mail.de>
>>
>> Marked WIP because we'd want to introduce private bsf's first; review
>> welcome before that though
>> VP8 decoder decoupled again
>> The whole animated sequence goes into one packet
>> The (currently public) bitstream filter splits animations up into non-conformant packets
>> Now with XMP metadata support (as string, like MOV)
>>
>
> Tests mostly work for me. There are a few images (that I reported
> earlier) that give:
thanks for testing!
> Canvas change detected. The output will be damaged. Use -threads 1
> to try decoding with best effort.
> They don't animate without that option and with it render incorrectly.
That issue stems from the canvas frame being the synchronization object
(ThreadFrame) - doing so prevents the canvas size from changing mid-stream.
_Maybe_ this can be fixed by switching the whole frame multithreading away
from ThreadFrame to something else; I'm not sure, though, and have no
experience with the alternatives (AVExecutor?). Maybe Andreas can predict
whether it's worth/valid to change that whole part of it? I'm not against
putting more effort into it to get it right.
> A few other notes:
> - should ffprobe report anything with files containing xmp?
It does, it is put into the frame metadata as a blob.
./ffprobe -show_frames <file>
will reveal it.
> - 0 duration behaves differently than web browsers, which use the gif
> behavior and set it to 10; as long as it's consistent in ffmpeg
> between the two either is fine to me.
We are consistent with GIF in ffmpeg. Both assume a 100 ms default delay.
Notice the defaults in their defines (ms for webp, fps for gif) in the
demuxers:
#define WEBP_DEFAULT_DELAY 100
#define GIF_DEFAULT_DELAY 10
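To illustrate the point about the shared default: a frame that stores a zero delay falls back to the demuxer's default, and both defaults work out to 100 ms per frame. A tiny sketch (the helper name is ours, not an FFmpeg API):

```c
/* Both demuxers fall back to the same effective delay when a frame
 * stores a zero duration. Helper name is illustrative only. */
#define WEBP_DEFAULT_DELAY 100 /* milliseconds */
#define GIF_DEFAULT_DELAY   10 /* in GIF's own units; also 100 ms per frame */

static int effective_delay_ms(int stored_delay_ms)
{
    return stored_delay_ms > 0 ? stored_delay_ms : WEBP_DEFAULT_DELAY;
}
```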
> - ffplay doesn't exit cleanly with the files in
> https://crbug.com/690848, though it does with other corrupt files;
> ffmpeg exits, so maybe it's a non-issue.
ffplay always crashes after any file on OSX for me. If ffmpeg terminates
fine, it's a non-issue for this patchset. I'll look into it once I can,
though; I hear from others that their ffplay doesn't always crash...
Thanks!
-Thilo
* Re: [FFmpeg-devel] [PATCH v12 0/8] [WIP] webp: add support for animated WebP decoding
2024-04-18 18:21 ` Thilo Borgmann via ffmpeg-devel
@ 2024-04-18 19:38 ` James Zern via ffmpeg-devel
2024-05-21 16:50 ` Thilo Borgmann via ffmpeg-devel
0 siblings, 1 reply; 15+ messages in thread
From: James Zern via ffmpeg-devel @ 2024-04-18 19:38 UTC (permalink / raw)
To: FFmpeg development discussions and patches; +Cc: James Zern, Thilo Borgmann
On Thu, Apr 18, 2024 at 11:21 AM Thilo Borgmann via ffmpeg-devel
<ffmpeg-devel@ffmpeg.org> wrote:
>
> Hi,
>
> On 17.04.24 00:52, James Zern via ffmpeg-devel wrote:
> > On Wed, Apr 17, 2024 at 12:20 PM Thilo Borgmann via ffmpeg-devel
> > <ffmpeg-devel@ffmpeg.org> wrote:
> >>
> >> From: Thilo Borgmann <thilo.borgmann@mail.de>
> >>
> >> Marked WIP because we'd want to introduce private bsf's first; review
> >> welcome before that though
> >> VP8 decoder decoupled again
> >> The whole animated sequence goes into one packet
> >> The (currently public) bitstream filter splits animations up into non-conformant packets
> >> Now with XMP metadata support (as string, like MOV)
> >>
> >
> > Tests mostly work for me. There are a few images (that I reported
> > earlier) that give:
>
> thanks for testing!
>
>
> > Canvas change detected. The output will be damaged. Use -threads 1
> > to try decoding with best effort.
> > They don't animate without that option and with it render incorrectly.
>
> That issue stems from the canvas frame being the synchronization object
> (ThreadFrame) - doing so prevents the canvas size from changing mid-stream.
> _Maybe_ this can be fixed by switching the whole frame multithreading away
> from ThreadFrame to something else; I'm not sure, though, and have no
> experience with the alternatives (AVExecutor?). Maybe Andreas can predict
> whether it's worth/valid to change that whole part of it? I'm not against
> putting more effort into it to get it right.
>
>
> > A few other notes:
> > - should ffprobe report anything with files containing xmp?
>
> It does, it is put into the frame metadata as a blob.
> ./ffprobe -show_frames <file>
> will reveal it.
>
Thanks. I didn't try that option.
>
> > - 0 duration behaves differently than web browsers, which use the gif
> > behavior and set it to 10; as long as it's consistent in ffmpeg
> > between the two either is fine to me.
>
> We are consistent with GIF in ffmpeg. Both assume a 100 ms default delay.
> Notice the defaults in their defines (ms for webp, fps for gif) in the
> demuxers:
>
> #define WEBP_DEFAULT_DELAY 100
> #define GIF_DEFAULT_DELAY 10
>
It doesn't seem the default delay is getting applied to this file:
http://littlesvr.ca/apng/images/SteamEngine.webp
Or at least the rendering is off in ffplay. The durations of all frames
are 0 in that file.
>
>
> > - ffplay doesn't exit cleanly with the files in
> > https://crbug.com/690848, though it does with other corrupt files;
> > ffmpeg exits, so maybe it's a non-issue.
>
> ffplay always crashes after any file on OSX for me. If ffmpeg terminates
> fine, it's a non-issue for this patchset. I'll look into it once I can,
> though; I hear from others that their ffplay doesn't always crash...
>
> Thanks!
> -Thilo
* Re: [FFmpeg-devel] [PATCH v12 0/8] [WIP] webp: add support for animated WebP decoding
2024-04-18 19:38 ` James Zern via ffmpeg-devel
@ 2024-05-21 16:50 ` Thilo Borgmann via ffmpeg-devel
2024-05-23 2:31 ` James Zern via ffmpeg-devel
0 siblings, 1 reply; 15+ messages in thread
From: Thilo Borgmann via ffmpeg-devel @ 2024-05-21 16:50 UTC (permalink / raw)
To: James Zern via ffmpeg-devel; +Cc: Thilo Borgmann
Hi,
[...]
>>> Tests mostly work for me. There are a few images (that I reported
>>> earlier) that give:
>>
>> thanks for testing!
>>
>>
>>> Canvas change detected. The output will be damaged. Use -threads 1
>>> to try decoding with best effort.
>>> They don't animate without that option and with it render incorrectly.
>>
>> That issue stems from the canvas frame being the synchronization object
>> (ThreadFrame) - doing so prevents the canvas size from changing mid-stream.
>> _Maybe_ this can be fixed by switching the whole frame multithreading away
>> from ThreadFrame to something else; I'm not sure, though, and have no
>> experience with the alternatives (AVExecutor?). Maybe Andreas can predict
>> whether it's worth/valid to change that whole part of it? I'm not against
>> putting more effort into it to get it right.
I could fix 488x488.webp and get almost identical output to libwebp.
488x488.webp features an ARGB canvas and has both ARGB & YUVA420P
p-frames.
Do you have more files with other variations of canvas & p-frames? If
they exist at all... e.g. canvas YUV and p-frames RGB?
Pinged Meta as well for real-world samples. Will take some more days
until I get feedback. Will then post the next iteration...
Thanks,
Thilo
* Re: [FFmpeg-devel] [PATCH v12 0/8] [WIP] webp: add support for animated WebP decoding
2024-05-21 16:50 ` Thilo Borgmann via ffmpeg-devel
@ 2024-05-23 2:31 ` James Zern via ffmpeg-devel
0 siblings, 0 replies; 15+ messages in thread
From: James Zern via ffmpeg-devel @ 2024-05-23 2:31 UTC (permalink / raw)
To: FFmpeg development discussions and patches; +Cc: James Zern
On Tue, May 21, 2024 at 9:50 AM Thilo Borgmann via ffmpeg-devel
<ffmpeg-devel@ffmpeg.org> wrote:
>
> Hi,
>
> [...]
> >>> Tests mostly work for me. There are a few images (that I reported
> >>> earlier) that give:
> >>
> >> thanks for testing!
> >>
> >>
> >>> Canvas change detected. The output will be damaged. Use -threads 1
> >>> to try decoding with best effort.
> >>> They don't animate without that option and with it render incorrectly.
> >>
> >> That issue stems from the canvas frame being the synchronization object
> >> (ThreadFrame) - doing so prevents the canvas size from changing mid-stream.
> >> _Maybe_ this can be fixed by switching the whole frame multithreading away
> >> from ThreadFrame to something else; I'm not sure, though, and have no
> >> experience with the alternatives (AVExecutor?). Maybe Andreas can predict
> >> whether it's worth/valid to change that whole part of it? I'm not against
> >> putting more effort into it to get it right.
>
> I could fix 488x488.webp and get almost identical output to libwebp.
>
> 488x488.webp features an ARGB canvas and has both ARGB & YUVA420P
> p-frames.
>
> Do you have more files with other variations of canvas & p-frames? If
> they exist at all... e.g. canvas YUV and p-frames RGB?
>
Sent a few created with `gif2webp -mixed` off list. A more exhaustive
set can be created using cwebp and webpmux to assemble them.
> Pinged Meta as well for real-world samples. Will take some more days
> until I get feedback. Will then post the next iteration...
>
> Thanks,
> Thilo