Date: Sat, 8 Mar 2025 15:59:25 +0100
Message-ID: <007701db903a$af836b50$0e8a41f0$@gmail.com>
Subject: [FFmpeg-devel] [PATCH FFmpeg 5/15] libavfilter: introduce filter-common interfaces for CLIP/CLAP classification and model loading with tokenizer

Extends the DNN filter common code to support CLIP/CLAP classification and
model loading with tokenizers. Adds new execution functions for both image
(CLIP) and audio (CLAP) classification.

Try the new filters using my GitHub repo:
https://github.com/MaximilianKaindl/DeepFFMPEGVideoClassification
Any feedback is appreciated!

Signed-off-by: MaximilianKaindl
---
 libavfilter/dnn_filter_common.c | 78 ++++++++++++++++++++++++++++++---
 libavfilter/dnn_filter_common.h |  4 ++
 2 files changed, 77 insertions(+), 5 deletions(-)
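For reviewers, a minimal sketch of how a classification filter might drive
the new interfaces. The sketch is illustrative only: the
HypotheticalClipContext struct, its fields and the NULL/0 arguments are
invented here; only the ff_dnn_* functions, their signatures and
DFT_ANALYTICS_CLIP come from this patch series.

#include "avfilter.h"
#include "dnn_filter_common.h"

typedef struct HypotheticalClipContext {
    const AVClass *class;
    DnnContext dnnctx;
    char **labels;          /* candidate text labels, e.g. parsed from a filter option */
    int label_count;
    char *tokenizer_path;   /* path to a tokenizers-cpp tokenizer file */
} HypotheticalClipContext;

static int clip_init(AVFilterContext *filter_ctx)
{
    HypotheticalClipContext *s = filter_ctx->priv;

    /* Loads the model together with its tokenizer instead of plain ff_dnn_init(). */
    return ff_dnn_init_with_tokenizer(&s->dnnctx, DFT_ANALYTICS_CLIP,
                                      s->labels, s->label_count,
                                      NULL, 0, /* no softmax unit grouping */
                                      s->tokenizer_path, filter_ctx);
}

static int clip_filter_frame(AVFilterLink *inlink, AVFrame *in)
{
    AVFilterContext *filter_ctx = inlink->dst;
    HypotheticalClipContext *s = filter_ctx->priv;

    /* Queues one frame for zero-shot classification against the text labels;
     * results are collected asynchronously via ff_dnn_get_result(). */
    return ff_dnn_execute_model_clip(&s->dnnctx, in, NULL,
                                     (const char **)s->labels, s->label_count,
                                     s->tokenizer_path, NULL /* no detect target */);
}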
diff --git a/libavfilter/dnn_filter_common.c b/libavfilter/dnn_filter_common.c
index 6a1e9ace2e..d9b2cc9bcd 100644
--- a/libavfilter/dnn_filter_common.c
+++ b/libavfilter/dnn_filter_common.c
@@ -75,7 +75,7 @@ void *ff_dnn_filter_child_next(void *obj, void *prev)
     return ff_dnn_child_next(&base->dnnctx, prev);
 }
 
-int ff_dnn_init(DnnContext *ctx, DNNFunctionType func_type, AVFilterContext *filter_ctx)
+int ff_dnn_init_priv(DnnContext *ctx, DNNFunctionType func_type, AVFilterContext *filter_ctx)
 {
     DNNBackendType backend = ctx->backend_type;
 
@@ -87,10 +87,19 @@ int ff_dnn_init(DnnContext *ctx, DNNFunctionType func_type, AVFilterContext *fil
     if (backend == DNN_TH) {
         if (ctx->model_inputname)
             av_log(filter_ctx, AV_LOG_WARNING, "LibTorch backend do not require inputname, "\
-                                               "inputname will be ignored.\n");
+                   "inputname will be ignored.\n");
         if (ctx->model_outputnames)
             av_log(filter_ctx, AV_LOG_WARNING, "LibTorch backend do not require outputname(s), "\
                                                "all outputname(s) will be ignored.\n");
+
+#if (CONFIG_LIBTOKENIZERS == 0)
+        if ((func_type == DFT_ANALYTICS_CLIP || func_type == DFT_ANALYTICS_CLAP)) {
+            av_log(ctx, AV_LOG_ERROR,
+                   "tokenizers-cpp is not included. CLIP/CLAP Classification requires tokenizers-cpp library. Include "
+                   "it with configure.\n");
+            return AVERROR(EINVAL);
+        }
+#endif
         ctx->nb_outputs = 1;
     } else if (backend == DNN_TF) {
         if (!ctx->model_inputname) {
@@ -118,26 +127,50 @@ int ff_dnn_init(DnnContext *ctx, DNNFunctionType func_type, AVFilterContext *fil
         void *child = NULL;
         av_log(filter_ctx, AV_LOG_WARNING,
-               "backend_configs is deprecated, please set backend options directly\n");
+            "backend_configs is deprecated, please set backend options directly\n");
         while (child = ff_dnn_child_next(ctx, child)) {
             if (*(const AVClass **)child == &ctx->dnn_module->clazz) {
                 int ret = av_opt_set_from_string(child, ctx->backend_options,
-                                                 NULL, "=", "&");
+                    NULL, "=", "&");
                 if (ret < 0) {
                     av_log(filter_ctx, AV_LOG_ERROR, "failed to parse options \"%s\"\n",
-                           ctx->backend_options);
+                        ctx->backend_options);
                     return ret;
                 }
             }
         }
     }
+
+    return 0;
+}
+
+int ff_dnn_init(DnnContext *ctx, DNNFunctionType func_type, AVFilterContext *filter_ctx)
+{
+    int ret = ff_dnn_init_priv(ctx, func_type, filter_ctx);
+    if (ret < 0) {
+        return ret;
+    }
     ctx->model = (ctx->dnn_module->load_model)(ctx, func_type, filter_ctx);
     if (!ctx->model) {
         av_log(filter_ctx, AV_LOG_ERROR, "could not load DNN model\n");
         return AVERROR(EINVAL);
     }
+    return 0;
+}
+
+int ff_dnn_init_with_tokenizer(DnnContext *ctx, DNNFunctionType func_type, char **labels, int label_count,
+                               int *softmax_units, int softmax_units_count, char *tokenizer_path,
+                               AVFilterContext *filter_ctx)
+{
+    int ret = ff_dnn_init_priv(ctx, func_type, filter_ctx);
+    if (ret < 0) {
+        return ret;
+    }
+    ctx->model = (ctx->dnn_module->load_model_with_tokenizer)(ctx, func_type, labels, label_count, softmax_units,
+                                                              softmax_units_count, tokenizer_path, filter_ctx);
+    if (!ctx->model) {
+        av_log(filter_ctx, AV_LOG_ERROR, "could not load DNN model\n");
+        return AVERROR(EINVAL);
+    }
     return 0;
 }
@@ -200,6 +233,41 @@ int ff_dnn_execute_model_classification(DnnContext *ctx, AVFrame *in_frame, AVFr
     return (ctx->dnn_module->execute_model)(ctx->model, &class_params.base);
 }
 
+int ff_dnn_execute_model_clip(DnnContext *ctx, AVFrame *in_frame, AVFrame *out_frame, const char **labels, int label_count, const char* tokenizer_path, char *target)
+{
+    DNNExecZeroShotClassificationParams class_params = {
+        {
+            .input_name = ctx->model_inputname,
+            .output_names = (const char **)ctx->model_outputnames,
+            .nb_output = ctx->nb_outputs,
+            .in_frame = in_frame,
+            .out_frame = out_frame,
+        },
+        .labels = labels,
+        .label_count = label_count,
+        .tokenizer_path = tokenizer_path,
+        .target = target,
+    };
+    return (ctx->dnn_module->execute_model)(ctx->model, &class_params.base);
+}
+
+int ff_dnn_execute_model_clap(DnnContext *ctx, AVFrame *in_frame, AVFrame *out_frame, const char **labels, int label_count, const char* tokenizer_path)
+{
+    DNNExecZeroShotClassificationParams class_params = {
+        {
+            .input_name = ctx->model_inputname,
+            .output_names = (const char **)ctx->model_outputnames,
+            .nb_output = ctx->nb_outputs,
+            .in_frame = in_frame,
+            .out_frame = out_frame,
+        },
+        .labels = labels,
+        .label_count = label_count,
+        .tokenizer_path = tokenizer_path,
+    };
+    return (ctx->dnn_module->execute_model)(ctx->model, &class_params.base);
+}
+
 DNNAsyncStatusType ff_dnn_get_result(DnnContext *ctx, AVFrame **in_frame, AVFrame **out_frame)
 {
     return (ctx->dnn_module->get_result)(ctx->model, in_frame, out_frame);
diff --git a/libavfilter/dnn_filter_common.h b/libavfilter/dnn_filter_common.h
index fffa676a9e..b05acf5d55 100644
--- a/libavfilter/dnn_filter_common.h
+++ b/libavfilter/dnn_filter_common.h
@@ -54,7 +54,9 @@ void *ff_dnn_filter_child_next(void *obj, void *prev);
 int ff_dnn_filter_init_child_class(AVFilterContext *filter);
 
+int ff_dnn_init_priv(DnnContext *ctx, DNNFunctionType func_type, AVFilterContext *filter_ctx);
 int ff_dnn_init(DnnContext *ctx, DNNFunctionType func_type, AVFilterContext *filter_ctx);
+int ff_dnn_init_with_tokenizer(DnnContext *ctx, DNNFunctionType func_type, char** labels, int label_count, int* softmax_units, int softmax_units_count, char* tokenizer_path, AVFilterContext *filter_ctx);
 int ff_dnn_set_frame_proc(DnnContext *ctx, FramePrePostProc pre_proc, FramePrePostProc post_proc);
 int ff_dnn_set_detect_post_proc(DnnContext *ctx, DetectPostProc post_proc);
 int ff_dnn_set_classify_post_proc(DnnContext *ctx, ClassifyPostProc post_proc);
@@ -62,6 +64,8 @@ int ff_dnn_get_input(DnnContext *ctx, DNNData *input);
 int ff_dnn_get_output(DnnContext *ctx, int input_width, int input_height, int *output_width, int *output_height);
 int ff_dnn_execute_model(DnnContext *ctx, AVFrame *in_frame, AVFrame *out_frame);
 int ff_dnn_execute_model_classification(DnnContext *ctx, AVFrame *in_frame, AVFrame *out_frame, const char *target);
+int ff_dnn_execute_model_clip(DnnContext *ctx, AVFrame *in_frame, AVFrame *out_frame, const char **labels, int label_count, const char* tokenizer_path, char *target);
+int ff_dnn_execute_model_clap(DnnContext *ctx, AVFrame *in_frame, AVFrame *out_frame, const char **labels, int label_count, const char* tokenizer_path);
 DNNAsyncStatusType ff_dnn_get_result(DnnContext *ctx, AVFrame **in_frame, AVFrame **out_frame);
 int ff_dnn_flush(DnnContext *ctx);
 void ff_dnn_uninit(DnnContext *ctx);
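For reference, the designated initializers in ff_dnn_execute_model_clip() and
ff_dnn_execute_model_clap() imply roughly the following parameter layout. The
actual DNNExecZeroShotClassificationParams definition is introduced elsewhere
in this series, so this is only an inference from this patch, not the
authoritative declaration:

typedef struct DNNExecZeroShotClassificationParams {
    DNNExecBaseParams base;      /* input/output names, nb_output, in/out frames */
    const char **labels;         /* candidate text labels */
    int label_count;
    const char *tokenizer_path;  /* tokenizers-cpp tokenizer file */
    char *target;                /* optional detection target, used by the CLIP path */
} DNNExecZeroShotClassificationParams;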
-- 
2.34.1