Hi

On Sun, Jun 15, 2025 at 11:53:28AM -0300, James Almer wrote:
> On 6/15/2025 10:35 AM, Michael Niedermayer wrote:
> > Hi all
> >
> > As it seems someone figured out how to make AI solve anubis, which made trac
> > rather slow due to the DDOS from 100 different IPs, which eventually
> > we had to block.
> > (maybe timo has time to write an incident report?)
> >
> > Some questions
> > * does someone know how to make trac use/set cache-control headers?
> >   (this would simply and plainly reduce load on trac for pages that don't
> >   change, but it has to play along correctly with user sessions and all that)
> >
> > * should we make a static copy of the whole trac so the
> >   AI users, vibe coders, AI data analysts, and AI bot trainers can actually
> >   use trac while everyone else also can use it?
> >   That static copy would then get updated ... I don't know, maybe once a week?
> >   Side effect: even humans would have an "instant response but older trac" too
>
> How would this work? We then just expect LLMs to crawl it while leaving the
> live one alone?

Well, there's an incentive for LLMs and their human operators to crawl the
static copy: they would succeed there with significantly less latency and
fewer computational resources, and thus lower financial cost.

thx

[...]

--
Michael     GnuPG fingerprint: 9FF2128B147EF6730BADF133611EC787040B0FAB

Old school: Use the lowest level language in which you can solve the problem
            conveniently.
New school: Use the highest level language in which the latest supercomputer
            can solve the problem without the user falling asleep waiting.
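
PS: for the cache-control question, a rough sketch of the kind of thing I
mean, assuming Trac is deployed as a WSGI app (the trac.web.main
dispatch_request entry point that the usual mod_wsgi setups use); the
wrapper name, the 10-minute max-age and the trac_auth cookie check are
only placeholders, not tested:

    # cache_middleware.py -- wrap Trac's WSGI app and mark anonymous GET
    # responses as cacheable, so a front-end proxy or the browser can
    # serve repeat hits without touching Trac itself.

    def add_cache_control(app, max_age=600):
        def wrapped(environ, start_response):
            # only GETs without an auth cookie; logged-in sessions are left alone
            cacheable = (
                environ.get('REQUEST_METHOD') == 'GET'
                and 'trac_auth' not in environ.get('HTTP_COOKIE', '')
            )

            def patched_start_response(status, headers, exc_info=None):
                if cacheable:
                    # drop any existing Cache-Control header and set our own
                    headers = [(k, v) for k, v in headers
                               if k.lower() != 'cache-control']
                    headers.append(('Cache-Control',
                                    'public, max-age=%d' % max_age))
                if exc_info is not None:
                    return start_response(status, headers, exc_info)
                return start_response(status, headers)

            return app(environ, patched_start_response)
        return wrapped

    # in the generated trac.wsgi (names depend on the local setup):
    # from trac.web.main import dispatch_request
    # application = add_cache_control(dispatch_request, max_age=600)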