From: 
To: 
Date: Mon, 10 Mar 2025 20:55:39 +0100
Message-ID: <004601db91f6$665ec1f0$331c45d0$@gmail.com>
Subject: [FFmpeg-devel] [PATCH v2 FFmpeg 18/20] doc/filters.texi: add classify documentation

Signed-off-by: Maximilian Kaindl
---
 doc/filters.texi | 124
 ++++++++++++++++++++++++++++++++---------------
 1 file changed, 85 insertions(+), 39 deletions(-)

diff --git a/doc/filters.texi b/doc/filters.texi
index 0ba7d3035f..a7046e0f4e 100644
--- a/doc/filters.texi
+++ b/doc/filters.texi
@@ -11970,45 +11970,6 @@ ffmpeg -i INPUT -f lavfi -i nullsrc=hd720,geq='r=128+80*(sin(sqrt((X-W/2)*(X-W/2
 @end example
 @end itemize
 
-@section dnn_classify
-
-Do classification with deep neural networks based on bounding boxes.
-
-The filter accepts the following options:
-
-@table @option
-@item dnn_backend
-Specify which DNN backend to use for model loading and execution. This option accepts
-only openvino now, tensorflow backends will be added.
-
-@item model
-Set path to model file specifying network architecture and its parameters.
-Note that different backends use different file formats.
-
-@item input
-Set the input name of the dnn network.
-
-@item output
-Set the output name of the dnn network.
-
-@item confidence
-Set the confidence threshold (default: 0.5).
-
-@item labels
-Set path to label file specifying the mapping between label id and name.
-Each label name is written in one line, tailing spaces and empty lines are skipped.
-The first line is the name of label id 0,
-and the second line is the name of label id 1, etc.
-The label id is considered as name if the label file is not provided.
-
-@item backend_configs
-Set the configs to be passed into backend
-
-For tensorflow backend, you can set its configs with @option{sess_config} options,
-please use tools/python/tf_sess_config.py to get the configs for your system.
-
-@end table
-
 @section dnn_detect
 
 Do object detection with deep neural networks.
 
@@ -31982,6 +31943,91 @@ settb=AVTB
 @end example
 @end itemize
 
+@section dnn_classify
+
+Analyze media (video frames or audio) using deep neural networks to apply
+classifications based on the content.
+This filter supports three classification modes:
+
+@itemize @bullet
+@item Standard image classification (OpenVINO backend)
+@item CLIP (Contrastive Language-Image Pre-training) classification (Torch backend)
+@item CLAP (Contrastive Language-Audio Pre-training) classification (Torch backend)
+@end itemize
+
+The filter accepts the following options:
+
+@table @option
+@item dnn_backend
+Specify which DNN backend to use for model loading and execution. Currently
+supported:
+@table @samp
+@item openvino
+Use the OpenVINO backend (standard image classification only).
+@item torch
+Use the LibTorch backend (supports CLIP for images and CLAP for audio).
+@end table
+
+@item model
+Set the path to the model file specifying the network architecture and its
+parameters. Note that different backends use different file formats.
+
+@item confidence
+Set the confidence threshold (default: 0.5). Classifications with a
+confidence below this value are filtered out.
+
+@item labels
+Set the path to a label file specifying the classification labels. This is
+required for standard classification and optional for CLIP/CLAP
+classification. Each label is written on a separate line; trailing spaces
+and empty lines are skipped.
+
+@item categories
+Set the path to a categories file for hierarchical classification (CLIP/CLAP
+only). It organizes classification into category units, each containing
+categories of related labels.
+
+@item tokenizer
+Set the path to the text tokenizer.json file (CLIP/CLAP only). Required for
+text embedding generation.
+
+@item target
+Specify which objects to classify. When omitted, the entire frame is
+classified. When specified, only bounding boxes whose detection label
+matches this value are classified.
+
+@item is_audio
+Enable audio processing mode for CLAP models (default: 0). Set to 1 to
+process audio input instead of video frames.
+
+@item logit_scale
+Set the logit scale for the similarity calculation in CLIP/CLAP (default:
+4.6052 for CLIP, 33.37 for CLAP). Values below 0 select the default.
+
+@item temperature
+Set the softmax temperature for CLIP/CLAP models (default: 1.0).
+Lower values make the output distribution more peaked, higher values make it
+smoother.
+
+@item forward_order
+Set the order of the forward output for CLIP/CLAP: 0 for media-text order, 1
+for text-media order (the default depends on the model type).
+
+@item normalize
+Whether to normalize the input tensor for CLIP/CLAP (the default depends on
+the model type). Some scripted models already normalize in their forward
+pass, in which case this is not needed.
+
+@item input_res
+Set the expected input resolution for video processing models (default:
+automatically detected).
+
+@item sample_rate
+Set the expected sample rate for audio processing models (default: 44100).
+
+@item sample_duration
+Set the expected sample duration in seconds for audio processing models
+(default: 7).
+
+@item token_dimension
+Set the dimension of the token vector for text embeddings (default: 77).
+
+@item optimize
+Enable graph executor optimization (0: disabled, 1: enabled).
+@end table
+
+@subsection Category Files Format
+
+For CLIP/CLAP models, a hierarchical categories file can be provided in the
+following format:
+@example
+[RecordingSystem]
+(Professional)
+a photo with high level of detail
+a professionally recorded sound
+(HomeRecording)
+a photo with low level of detail
+an amateur recording
+[ContentType]
+(Nature)
+trees
+mountains
+birds singing
+(Urban)
+buildings
+street noise
+traffic sounds
+@end example
+
+Each unit enclosed in square brackets @code{[]} creates a classification
+group. Within each group, categories are defined with parentheses @code{()},
+and the labels listed under each category are used to classify the input.
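+By contrast, a plain labels file for the @option{labels} option has no
+grouping; it simply lists one label per line. As an illustrative sketch
+(these prompts are hypothetical examples, not files shipped with FFmpeg), a
+labels file for CLIP classification could look like:
+@example
+a photo of a cat
+a photo of a dog
+a photo of a landscape
+@end example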
+@subsection Examples
+
+@itemize
+@item
+Classify video using OpenVINO:
+@example
+ffmpeg -i input.mp4 -vf "dnn_classify=dnn_backend=openvino:model=model.xml:labels=labels.txt" output.mp4
+@end example
+
+@item
+Classify video using CLIP:
+@example
+ffmpeg -i input.mp4 -vf "dnn_classify=dnn_backend=torch:model=clip_model.pt:categories=categories.txt:tokenizer=tokenizer.json" output.mp4
+@end example
+
+@item
+Classify only person objects in a video:
+@example
+ffmpeg -i input.mp4 -vf "dnn_detect=model=detection.xml:input=data:output=detection_out:confidence=0.5,dnn_classify=model=clip_model.pt:dnn_backend=torch:tokenizer=tokenizer.json:labels=labels.txt:target=person" output.mp4
+@end example
+
+@item
+Classify audio using CLAP:
+@example
+ffmpeg -i input.mp3 -af "dnn_classify=dnn_backend=torch:model=clap_model.pt:categories=audio_categories.txt:tokenizer=tokenizer.json:is_audio=1:sample_rate=44100:sample_duration=7" output.mp3
+@end example
+@end itemize
+
 @section showcqt
 
 Convert input audio to a video output representing frequency spectrum
 logarithmically using Brown-Puckette constant Q transform algorithm with
-- 
2.34.1

_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
https://ffmpeg.org/mailman/listinfo/ffmpeg-devel

To unsubscribe, visit link above, or email
ffmpeg-devel-request@ffmpeg.org with subject "unsubscribe".