On Thu, Apr 27, 2023 at 04:25:56PM +0200, Anton Khirnov wrote:
> Stop using InputStream.dts for generating missing timestamps for decoded
> frames, because it contains pre-decoding timestamps and there may be
> arbitrary amount of delay between input packets and output frames (e.g.
> dependent on the thread count when frame threading is used). It is also
> in AV_TIME_BASE (i.e. microseconds), which may introduce unnecessary
> rounding issues.
>
> New code maintains a timebase that is the inverse of the LCM of all the
> samplerates seen so far, and thus can accurately represent every audio

if the LCM fits in int32.

This can hit some pathological cases, though. Consider a 192 kHz stream that
starts with a damaged packet that is read as 11.197 kHz: the LCM of 192000 and
11197 is > 2^31, so the whole stream will then be stuck with 11.197 kHz. That
seems like a bad choice; if the LCM is not representable, the code should
favor standard sample rates, as well as the higher sample rate.

Also, if there are, let's say, 48 kHz and 48.001 kHz streams, where again the
LCM does not fit, then a multiple of 48 kHz may be a better choice than either
rate by itself.

Also, what happens if the audio timestamps are known more precisely than the
audio sample rate?

thx

[...]

-- 
Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Opposition brings concord. Out of discord comes the fairest harmony.
-- Heraclitus
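
For illustration, the overflow in the 192 kHz / 11.197 kHz example above can be
checked with a small standalone program (plain C with a local gcd helper
standing in for FFmpeg's av_gcd(); this is only a sketch of the arithmetic, not
code from the patch):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Plain Euclidean GCD, standing in for av_gcd() from libavutil. */
static int64_t gcd64(int64_t a, int64_t b)
{
    while (b) {
        int64_t t = a % b;
        a = b;
        b = t;
    }
    return a;
}

int main(void)
{
    /* 192 kHz stream whose first, damaged packet is parsed as 11197 Hz */
    int64_t a = 192000, b = 11197;
    int64_t lcm = a / gcd64(a, b) * b;   /* 11197 is prime, so lcm = a * b */

    printf("lcm(%"PRId64", %"PRId64") = %"PRId64"\n", a, b, lcm);
    printf("fits in int32: %s\n", lcm <= INT32_MAX ? "yes" : "no");
    /* prints 2149824000, which exceeds INT32_MAX (2147483647), so a 1/LCM
     * timebase cannot be stored with a 32-bit denominator */
    return 0;
}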