00:04:23dabs quits [Quit: Leaving]
00:05:38etnguyen03 (etnguyen03) joins
01:24:48<h2ibot>TriangleDemon edited Main Page/Current Projects (-138): https://wiki.archiveteam.org/?diff=56613&oldid=56605
01:24:49<h2ibot>TriangleDemon edited Main Page/Current Projects (-1, /* Recently finished projects */): https://wiki.archiveteam.org/?diff=56614&oldid=56613
01:27:14cascode quits [Ping timeout: 240 seconds]
01:29:24cascode joins
02:14:50charmander joins
02:26:04Wohlstand quits [Quit: Wohlstand]
02:28:09etnguyen03 quits [Client Quit]
02:28:30etnguyen03 (etnguyen03) joins
02:37:44charmander quits [Client Quit]
02:38:19riggi joins
02:49:15etnguyen03 quits [Remote host closed the connection]
03:22:05<h2ibot>OrIdow6 edited Itch.io (+130): https://wiki.archiveteam.org/?diff=56615&oldid=56600
03:24:54<@OrIdow6>arkiver: It's not completely ready yet but advanced notice of a request for setting up a project, itchio-minimal, no backfeed or anything, no size estimate
03:25:37<@OrIdow6>"-minimal" because it's a crude thing mostly just to get the downloads and the barest HTML pages
03:25:45<@OrIdow6>With AB filling in the rest, hopefully
03:28:40<@OrIdow6>... huh, itch takedown pages contain a hidden link to the devlog, if any
03:28:45<@OrIdow6>E.g. https://dakodanova.itch.io/faefire-fantasy
03:32:18lemuria quits [Read error: Connection reset by peer]
03:33:06lemuria (lemuria) joins
03:42:33Webuser456094 joins
03:42:47Webuser456094 quits [Client Quit]
03:46:13<@OrIdow6>Should be able to have this ready in ~12 hours unless something catastrophic happens, apologies for the delay
03:56:30<pokechu22>OrIdow6: for what it's worth that devlog also appears in https://itch.io/sitemaps/blog_posts_4.xml
04:31:14<h2ibot>PaulWise edited ArchiveBot/Ignore (+285, add flickr ignores from pokechu22): https://wiki.archiveteam.org/?diff=56616&oldid=55898
04:37:54cascode quits [Ping timeout: 240 seconds]
04:55:21GradientCat quits [Quit: Connection closed for inactivity]
04:58:31Yakov quits [Quit: The Lounge - https://thelounge.chat]
04:58:44Yakov joins
05:01:24LddPotato quits [Ping timeout: 260 seconds]
05:01:59LddPotato (LddPotato) joins
05:02:50Yakov quits [Changing host]
05:02:50Yakov (Yakov) joins
05:09:36cascode joins
05:10:53Yakov quits [Client Quit]
05:11:05Yakov joins
05:11:27Yakov quits [Changing host]
05:11:27Yakov (Yakov) joins
05:50:29LddPotato quits [Remote host closed the connection]
05:52:07LddPotato (LddPotato) joins
05:54:04Guest58 quits [Quit: My Mac has gone to sleep. ZZZzzz…]
05:56:49nulldata-alt quits [Ping timeout: 260 seconds]
06:28:33ummmSokar joins
06:31:14notSokar quits [Ping timeout: 260 seconds]
06:33:58ummmSokar quits [Client Quit]
06:34:52Sokar joins
06:42:59<ineffyble>RE: Itch, I don't know if it's useful at all, but the Itch.io desktop app keeps an SQLite database of the games in the user's library, which is quite easy to pull info out of. I haven't checked whether opening the app since the takedowns removes entries that were taken down, but at least any installs that haven't been opened since would definitely still include them
06:43:49<ineffyble>TABLE games (id INTEGER NOT NULL, url TEXT, title TEXT, short_text TEXT, type TEXT, classification TEXT, cover_url TEXT, still_cover_url TEXT, created_at DATETIME, published_at DATETIME, min_price INTEGER, can_be_bought BOOLEAN, has_demo BOOLEAN, in_press_system BOOLEAN, windows TEXT, linux TEXT, osx TEXT, user_id INTEGER, PRIMARY KEY (id))
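(A minimal sketch for pulling rows out of that database, assuming the schema pasted above; the butler.db path is a guess at the itch app's Linux default and will differ per platform.)

    # Minimal sketch: list game URLs from the itch desktop app's library DB.
    # The path is an assumed Linux default; adjust for your platform/install.
    import sqlite3
    from pathlib import Path

    db_path = Path.home() / ".config" / "itch" / "db" / "butler.db"  # assumed location
    con = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)  # open read-only
    try:
        for game_id, url, title in con.execute(
            "SELECT id, url, title FROM games WHERE url IS NOT NULL"
        ):
            print(game_id, url, title)
    finally:
        con.close()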
06:44:37Webuser631421 joins
06:44:57Webuser631421 quits [Client Quit]
06:47:05awauwa (awauwa) joins
07:02:20riggi quits [Quit: Ooops, wrong browser tab.]
07:05:02nulldata-alt (nulldata) joins
07:15:51Island quits [Read error: Connection reset by peer]
07:36:34pokechu22 quits [Read error: Connection reset by peer]
08:01:39Czechball0 joins
08:03:59Czechball quits [Ping timeout: 260 seconds]
08:03:59Czechball0 is now known as Czechball
08:38:08<hexagonwin_>Not sure if I can ask this here, but is there some easy-to-understand, standalone-testable (without syncing with a tracker etc.) way of writing wget-lua scripts? I'm trying to do an archive project and want to move away from my current method of using vanilla wget to save plain files with shell scripts, and onto properly making WARCs.
08:39:44<hexagonwin_>i've installed universal-tracker on my machine to test and it seems to work, but documentation is scarce and it's hard to configure..
08:47:10<that_lurker>grab-site could be what you are looking for https://github.com/ArchiveTeam/grab-site
08:47:30<that_lurker>basically a local archivebot
08:48:00<hexagonwin_>that_lurker: thanks, but i'm already aware of it and have used it. the site i'm trying to archive (it's a blog) requires some custom logic, like fetching the comments API, converting some links to get high-quality images, etc.
08:52:29Dada joins
09:15:32<hexagonwin_>just realized i can run wget-lua like normal wget to download a single url into a WARC, and just combine them (cat 1.warc.gz 2.warc.gz > combined.warc.gz), and the result opens at least in replayweb.page. Is this fine for "archival purposes" compared to using lua scripts?
09:17:31<hexagonwin_>it at least seems to leave wget's log/args/manifest for each one, which might be a bit messy...
09:19:14Juest quits [Ping timeout: 240 seconds]
09:23:03Juest (Juest) joins
09:39:29Guest58 joins
09:40:36Guest58 quits [Read error: Connection reset by peer]
09:40:45Guest58_ joins
09:50:06BearFortress_ quits []
10:37:48<@arkiver>OrIdow6: itchio-minimal is up, do we need a target?
10:37:56<@arkiver>you have admin access to that project
10:38:00<@arkiver>do you think we need a channel?
10:45:15<@arkiver>i am a bit on and off available today, starting from tomorrow fully back
10:54:48BearFortress joins
10:57:54<masterx244|m>idea if we need a channel name: scratch.io
11:00:05Bleo182600722719623455222 quits [Quit: The Lounge - https://thelounge.chat]
11:02:49Bleo182600722719623455222 joins
11:08:17<h2ibot>PaulWise edited ArchiveBot/Ignore (+59, Flickr giftPro ignore): https://wiki.archiveteam.org/?diff=56617&oldid=56616
11:18:09<@OrIdow6>arkiver: Target will be needed but I'm going to try to do a small size estimate now
11:18:10<@OrIdow6>Thanks
11:18:22<TheTechRobo>hexagonwin_: Yeah, that's fine. Lua scripting just lets you control the behaviour of a crawl.
11:18:35<@OrIdow6>I don't know if we'll need a channel, I suspect this might end up as something small
11:38:45<masterx244|m>hexagonwin_: that's an intentional effect of how WARC files work. the AT pipeline behind the warrior targets does exactly the same thing, gluing the WARCs together into larger chunks. WARC files compress each record by itself
11:39:14pabs quits [Ping timeout: 260 seconds]
11:40:28<masterx244|m>(that's needed to avoid decompressing everything before the wanted data; if you look at WARC items on archive.org you always see a CDX next to them, those files are sort of TOCs so tools know where each record starts
11:40:29<masterx244|m>)
11:42:04pabs (pabs) joins
11:44:15<egallager>https://www.theregister.com/2025/07/28/infosec_in_brief/
11:45:59<@JAA>masterx244|m, hexagonwin_: It'd fail for .warc.zst with a custom dictionary, but it's fine for .warc.gz, yes.
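(A minimal sketch of checking such a concatenated .warc.gz record by record, using the third-party warcio library, assumed installed; the filename matches the cat example above.)

    # Iterate over a cat-concatenated .warc.gz; each gzip member is one record,
    # so a plain sequential read works without any index.
    from warcio.archiveiterator import ArchiveIterator

    with open("combined.warc.gz", "rb") as stream:
        for record in ArchiveIterator(stream):
            if record.rec_type == "response":
                print(record.rec_headers.get_header("WARC-Target-URI"))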
11:47:23<h2ibot>Cooljeanius edited Software (+98, /* General Tools */ add SiteSucker): https://wiki.archiveteam.org/?diff=56618&oldid=55294
11:48:23<h2ibot>Cooljeanius edited Itch.io (+8, use URL template): https://wiki.archiveteam.org/?diff=56619&oldid=56615
11:52:26<masterx244|m>JAA: luckily that combination doesn't come out of the easy-access tools and mainly appears in the more professional pipelines
11:52:55<@JAA>True
11:57:22<@arkiver>OrIdow6: if you think it's tiny, i would not put much effort into size estimates
11:57:41<@arkiver>let's see about a channel later then, if all runs smooth we may not need it
11:57:49<@arkiver>(though we can totally make one if you'd like)
11:59:45<@arkiver>imer: when you are around, could we have a target for project itchio-minimal ?
12:00:14<@arkiver>even though the tracker has "-minimal", i think the metadata would point to the site in general, with
12:00:17<@arkiver>archiveteam_itchio_
12:00:19<@arkiver>itchio_
12:00:24<@arkiver>Archive Team itch.io:
12:00:30<@arkiver>OrIdow6: let me know if there is a repo
12:07:28<@imer>arkiver: OrIdow6: target added
12:07:56<@arkiver>awesome, thanks for that :)
12:15:12sepro0 (sepro) joins
12:17:44sepro quits [Ping timeout: 260 seconds]
12:17:44sepro0 is now known as sepro
12:53:15<hexagonwin_>masterx244|m: thanks for the response, I'm glad that's fine. May I ask if you know of some quick tool that can open a file inside each individual warc to parse against in the script?
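(One option, not necessarily what masterx244|m had in mind, is again warcio; a sketch that pulls the payload of a single URL out of a WARC for further parsing, with the filename and target URL as placeholders.)

    # Extract the decoded HTTP payload of one URL from a WARC.
    from warcio.archiveiterator import ArchiveIterator

    target = "https://example.com/index.html"  # hypothetical URL to look up

    with open("1.warc.gz", "rb") as stream:
        for record in ArchiveIterator(stream):
            if (record.rec_type == "response"
                    and record.rec_headers.get_header("WARC-Target-URI") == target):
                body = record.content_stream().read()  # decoded HTTP payload
                print(body[:200])
                break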
13:06:41GradientCat (GradientCat) joins
13:16:09cuphead2527480 (Cuphead2527480) joins
13:20:37UwU quits [Quit: bye]
13:35:22<@OrIdow6>arkiver: Thanks
13:35:54<@OrIdow6>The range is really wide since there's a long tail but this might end up being a 10s-of-TBs deal
13:36:36<@OrIdow6>I'll see when it runs; if it turns out to be too huge (effect of that long tail) I might change things around (probably eliminate huge submissions)
13:37:02<@OrIdow6>And yeah metadata is good, I just don't want to step on the toes of a more fine-toothed project here
13:37:05<@OrIdow6>thanks imer
13:42:29arch quits [Remote host closed the connection]
13:45:29<@OrIdow6>I'm gonna make an extremely rough guess and say 45 TB for the whole thing
13:45:31<@OrIdow6>But we'll see
13:45:41<@OrIdow6>Well, the whole potential-nsfw list
13:48:25<masterx244|m>too bad that we can't sneak information other than new URLs out via the tracker backfeed mechanism, otherwise we could have run a first sweep that only catches metadata and logs the sizes straight away
13:48:48<h2ibot>OrIdow6 uploaded File:Itchio-logo.png (Converted from SVG): https://wiki.archiveteam.org/?title=File%3AItchio-logo.png
13:49:55<masterx244|m>(maybe backfeeding a pseudo-URL that never goes into the real queue, of the sort AT-Logging://itch/gameid?size=12345678, and then processing those in some data crunching could work as an estimator tool, depending on how the metadata items are crunched)
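(The data-crunching half of that idea could be as simple as the sketch below; the AT-Logging:// scheme and size parameter are hypothetical, exactly as in the message above.)

    # Parse pseudo-URLs of the form AT-Logging://itch/gameid?size=12345678
    # and sum the reported sizes into a rough total.
    from urllib.parse import urlsplit, parse_qs

    pseudo_urls = [
        "AT-Logging://itch/some-game?size=12345678",   # hypothetical examples
        "AT-Logging://itch/another-game?size=98765",
    ]

    total = 0
    for u in pseudo_urls:
        parts = urlsplit(u)
        total += int(parse_qs(parts.query).get("size", ["0"])[0])
    print(f"estimated total: {total} bytes")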
13:53:16nine quits [Quit: See ya!]
13:53:29nine joins
13:53:29nine quits [Changing host]
13:53:29nine (nine) joins
13:54:27<hexagonwin_>While finding subdomains I found some very weird (most likely invalid) URLs saved to the WBM by archiveteam_urls, like this one; is this common? The URL can't be valid, but it's kinda weird that it includes some pretty related keywords.
13:54:29<hexagonwin_>e.g. https://web.archive.org/web/20240901000000*/http://xn--o7irjj-bo70a.egloos.xn--como8yagoosarang-et62e.tistory.xn--como9yagoora-si29c.textcube.xn--como10sports24-ex61d.tistory.com/
14:00:39<gamer191-1|m><hexagonwin_> "e.g. https://web.archive.org/web..." <- I don't know if that's valid, but I should note that it's punycode, which is denoted by putting xn-- at the start
14:00:39<gamer191-1|m>So it makes sense that `.xn--` appears many times, and that the text appears to be nonsensical
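(For the curious, Python's built-in idna codec can try to decode such labels; whether this particular one decodes at all is exactly the open question, hence the try/except.)

    # Try to decode the first xn-- label from the URL above; gibberish or
    # malformed punycode raises UnicodeError rather than decoding cleanly.
    label = "xn--o7irjj-bo70a"

    try:
        print(label.encode("ascii").decode("idna"))
    except UnicodeError as err:
        print(f"not a valid IDNA label: {err}")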
14:06:02<gamer191-1|m><TheTechRobo> "I think the policy is more of a..." <- Sorry, just saw this
14:06:02<gamer191-1|m>I don't think insecure DNS is an issue, because SSL certificates include a list of domains that they're valid for
14:06:02<gamer191-1|m>(it is an issue if you inherit SSL certificates from the OS, but I assume that's not the case?)
14:07:42<gamer191-1|m><katia> "also - things can still be..." <- Yeah, so an "unclean connections" mode would have to abort if a web request failed
14:07:57<gamer191-1|m>I guess it's probably not worth the effort to implement that
14:11:40<hexagonwin_>gamer191-1|m yeah it's punycode and (supposed to be) korean, but it's not a proper word, just random gibberish
14:11:59<hexagonwin_>maybe it's broken regex in some archiveteam project?
14:12:27<gamer191-1|m><murb> "katia: i make my evil censored..." <- Most hostnames resolve to Cloudflare DDOS protection, so they definitely don't block by IP address. They presumably use deep packet inspection (unfortunately, SSL encrypts everything except the hostname, so you can see where a packet is going and block it)
14:12:47<gamer191-1|m>hexagonwin_: Oh
14:12:54<katia>yeah also DPI gamer191-1|m
14:12:58<katia>but yeah
14:13:15<katia>i've lived in italy for a bit so i know how broken the internet can get when they censor it :/
14:14:23<gamer191-1|m>katia you can probably bypass it using https://github.com/ValdikSS/GoodbyeDPI (note that I think this uses Yandex DNS by default, which you should probably change or disable)
14:14:37<gamer191-1|m>Although that tends to break random websites
14:14:39<katia>gamer191-1|m, i don't have problems bypassing it
14:14:47<katia>and i also don't use windows
14:15:02<hexagonwin_>zapret kinda works but breaks quite a few sites
14:15:27<katia>i would use shadowsocks on a box hosted in somewhere with free-r internet
14:15:41<katia>nowadays i live in .nl and the internet is not (as?) censored
14:19:14<murb>gamer191-1|m: really? DPI is the most expensive option; mostly you push only special traffic to your censorware infra.
14:19:24arch (arch) joins
14:20:47<murb>so yeah, when say a high-traffic site ends up going that way, at best you get performance degradation.
14:20:57<gamer191-1|m>!tell Orldow6 https://itch.io/sitemap.xml has a list of itch.io URLs (including a games sitemap listing all games). I don't know how up-to-date it is, or whether it includes the deindexed NSFW games
14:20:57<eggdrop>[tell] ok, I'll tell Orldow6 when they join next
14:21:40<murb>worse, why can't i edit / login etc, it thinks my ip is suspect
14:22:47<gamer191-1|m>murb: Most schools use dpi to block game sites, so it can't be that expensive
14:22:47<gamer191-1|m>Given the popularity of CDNs, afaik there's no other real way to block sites
14:23:10<murb>katia: so I need to pick up some abandoned domains of recently censored content and have a play. it helps to have vantage points on the censored networks.
14:24:13<murb>gamer191-1|m: schools may pay for expensive kit, on the order of 100s or 1000 euro per month.
14:24:15<@OrIdow6>arkiver: https://github.com/OrIdow6/itchio-minimal-items https://github.com/OrIdow6/itchio-minimal-grab
14:24:41<@OrIdow6>As with leaving for the day IRL I feel as if I'm forgetting something, but I don't know what
14:24:57<murb>rather than retail, where you have customers paying 50p & their last fluffy rollo.
14:27:02<murb>gamer191-1|m: just drop the IP and neighbouring ones, works in .pt
14:27:47<murb>and yeah they've been dropping lots of cloudflare
14:29:16<@OrIdow6>gamer191-1|m: That's what we're using, and it does
14:30:22<@OrIdow6>!tell pokechu22 thx for your work, it's in -items now :)
14:30:23<eggdrop>[tell] ok, I'll tell pokechu22 when they join next
14:30:30<@OrIdow6>!seen pokechu22
14:30:31<eggdrop>[seen] pokechu22 (~pokechu22@hackint/user/pokechu22) was last seen quitting from #unclesamsarchive 6 hours 53 minutes 56 seconds ago (2025-07-28T07:36:34Z), stating "Read error: Connection reset by peer"
14:32:25<@OrIdow6>hexagonwin_: Almost certainly over-aggressive rather than broken
14:32:40FiTheArchiver joins
14:32:40<@OrIdow6>Well, not even over-aggressive, just aggressive
14:32:43FiTheArchiver quits [Remote host closed the connection]
14:33:58<@OrIdow6>In a world of JS and site-specific tricks and people copying and pasting URLs as plain text, extracting <a href>'s is not nearly enough; you need heuristics, and usually bad extractions aren't a big enough resource drain to matter
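(A toy illustration of how that kind of heuristic extraction can produce hostnames like the one above; the text and regex are purely illustrative and not what any ArchiveTeam project actually uses.)

    # A permissive "hostname-ish" pattern run over plain text with no spaces
    # between adjacent domains will happily glue them into one bogus hostname,
    # which could then be punycode-encoded label by label when queued.
    import re

    text = "자료foo.egloos.com사랑bar.tistory.com스포츠"  # made-up text, domains run together
    host_re = re.compile(r"[\w.-]{4,}")  # \w matches Hangul too in Python 3
    print(host_re.findall(text))  # -> one blob containing both domains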
14:36:26<@OrIdow6>masterx244|m: The main issue is that it's several times more complicated to write and deploy a distributed grab script than to run a little Python script locally
14:38:25<@OrIdow6>Like shoot, even with this I *thought* I was gonna write some little local estimator script but got sidetracked by embeds and just wrote the project out
14:38:41<gamer191-1|m>OrIdow6: Am I supposed to be able to see the itch.io project in `Available projects` on the warrior?
14:38:42<@OrIdow6>And my estimate is from running the grab script proper, locally, on a random sample from the items we have
14:39:04<@OrIdow6>gamer191-1|m: No, it'll be a bit before it shows up
14:39:44<@OrIdow6>There are more steps to the deployment process
14:41:28<gamer191-1|m>Btw, if necessary, I can probably recruit a tonne of users on Reddit and Discord, and then I guess you won't need to worry about how large the project is because it won't be taking resources from other projects (unless the backend has a storage limitation)
14:43:16<gamer191-1|m>Cause people are very passionately opposed to itch.io removing nsfw games
14:43:22<@arkiver>OrIdow6: understood, no problem
14:44:14<@arkiver>and, 45 TB is fine
14:48:10<@OrIdow6>arkiver: Ah, nice, perhaps my standards for what is "big" are stuck in about 2019 :)
14:50:12<@arkiver>well, it would certainly cost quite some dollars on the long term to store
14:50:20<@arkiver>repos cloned, you should have write access
14:50:26<@arkiver>project is on the front page
14:50:34<@arkiver>you have admin access to the tracker page
14:50:37<@arkiver>of the project
14:51:20<@arkiver>OrIdow6: i set the limit to 0, do you want to increase it?
14:51:31<@arkiver>i will be unavailable the next ~7 hours
14:53:36<@arkiver>i've set it to 100 items/min
14:53:46<@arkiver>feel free to adjust
14:54:43<@OrIdow6>Thanks arkiver! Will
14:54:54<@OrIdow6>s/Will//
14:55:14<@OrIdow6>... Will do so, we'll see what this can take
14:56:45<gamer191-1|m><gamer191-1|m> "Btw, if necessary, I can..." <- I'll get a Reddit post ready
14:59:01<@imer>OrIdow6: activated drone, do you want a build kicked off as well?
14:59:22<@arkiver>i'll be off now, let me know if anything else is needed and i'll get to it in 7 hours (or someone else)
15:00:07<gamer191-1|m>OrIdow6 the project isn't working for me on the Archiveteam warrior: https://pastebin.com/raw/kmErbDjj
15:03:09<@OrIdow6>gamer191-1|m: Ah, I know what that is
15:03:18<@OrIdow6>imer: In a few minutes, need to change something
15:03:20<@OrIdow6>arkiver: Alright
15:04:00Mateon1 quits [Quit: Mateon1]
15:05:43Mateon1 joins
15:05:44<@imer>if we do want a channel for the project, #scratchtheitch popped into my head
15:06:15<@OrIdow6>Yeah that may be a good idea
15:06:27<@OrIdow6>I was thinking #burntheitch TBH, thought it'd be funny
15:06:37<@imer>nice
15:07:16<@OrIdow6>More suggestions are welcome
15:07:45<@OrIdow6>New version out, please update
15:08:53<nstrom|m>docker image doesn't seem to exist yet
15:09:01<@OrIdow6>Ah yeah
15:09:10<@OrIdow6>imer: You can build now, thanks
15:09:12<@OrIdow6>I need to eat
15:09:30<@imer>looks like it picked that commit up and did it itself, so should be good :)
15:09:33<@imer>enjoy your food!
15:10:56<gamer191-1|m>Should I do this Reddit post:
15:10:56<gamer191-1|m>> Archive Team are currently running an operation to archive Itch.io's free (and "pay what you want") NSFW games to https://web.archive.org/. I'm not quite sure the details, but I believe they're currently archiving metadata, and next they're gonna archive the actual games. If you would like to use your bandwidth to help with the archival method, read on!
15:10:56<gamer191-1|m>and then I'll post a blurb about using clean connections, followed by instructions to install the warrior
15:10:59arch quits [Ping timeout: 260 seconds]
15:11:51<gamer191-1|m>If yes, what should I put instead of "I'm not quite sure the details, but I believe they're currently archiving metadata, and next they're gonna archive the actual games" since that seems kinda cringe lol
15:12:11<nstrom|m>probably best to wait until it gets going and we see how much more help we need if any
15:12:18<@JAA>I'll add #itchy and #itchy.io as ideas.
15:12:55<@JAA>#burntheitch is very similar to #burnthetwitch and could cause confusion.
15:13:47<@imer>think that was the joke
15:14:40arch (arch) joins
15:26:05GradientCat quits [Client Quit]
15:26:22lemuria quits [Read error: Connection reset by peer]
15:27:04lemuria (lemuria) joins
15:28:29<gamer191-1|m><OrIdow6> "New version out, please update" <- I can't test the new version cause of rate-limiting, so hopefully it works
15:29:30<gamer191-1|m>Oh there we go
15:29:33<gamer191-1|m>Yeah, it's working
15:33:44<gamer191-1|m>Is it supposed to be rate-limited with only 60 items out? Is the rate-limit exponential?
15:34:56<gamer191-1|m>Yeah, I guess the rate-limit increases over time
15:35:56cuphead2527480 quits [Client Quit]
15:40:31AlsoHP_Archivist joins
15:42:20<nstrom|m>Lua runtime error: itchio-minimal.lua:306: assertion failed!, 404 for game:aminaleeds/sinners-sons
15:42:34HP_Archivist quits [Ping timeout: 240 seconds]
15:45:32HP_Archivist (HP_Archivist) joins
15:46:34AlsoHP_Archivist quits [Ping timeout: 240 seconds]
15:49:32AlsoHP_Archivist joins
15:51:14HP_Archivist quits [Ping timeout: 260 seconds]
15:52:44<@OrIdow6>nstrom|m: Thx
15:55:03<@OrIdow6>New version out, please update
15:58:08<gamer191-1|m>What is the network usage of an idle warrior? I'm averaging around 60kbps, and I can't tell if the job (which has been the same for 20 minutes) is frozen or if itch's servers are just extremely slow
15:58:37<@OrIdow6>Any more name ideas?
16:00:18<gamer191-1|m>IMO it's best to avoid ones that imply unrelated domain URLs (eg scratch.io)
16:00:41<ineffyble>#ditchtheitch ryhmes
16:01:41<ineffyble>#itchsnitch
16:04:06<gamer191-1|m>#collectivesave (parodying "collective shout", the group which pressured Itch to remove nsfw games)
16:05:51<@OrIdow6>I'm going to do this democratically
16:05:54<@OrIdow6>Please vote here https://rankedchoice.net/poll/5881950845849740120
16:10:39<@OrIdow6>Will decide in 30 minutes
16:10:42<gamer191-1|m><gamer191-1|m> "What is the network usage of..." <- https://pastebin.com/raw/Pb3CCYZH
16:10:43<gamer191-1|m>This has been running for 30 minutes and my Warrior's network usage is maxing 130kb/s and averaging 60kb/s. I don't mind it running for as long as it needs in the background, but I figured I should report that in case it's completely frozen and this is a bug
16:11:40<gamer191-1|m>gamer191-1|m: (this is for itch.io btw)
16:13:49<gamer191-1|m>gamer191-1|m: Nevermind, it was just being extremely slow
16:15:28<@OrIdow6>gamer191-1|m: It's an S3 server behind Cloudflare; if it's slow that might be on your end
16:20:25<awauwa>there's also the tracker rate-limit
16:21:23<gamer191-1|m>awauwa: that wouldn't affect download speeds, would it?
16:22:27<@OrIdow6>20
16:22:31<awauwa>well, no, but if you have 6 jobs just waiting for items and checking every now and then, you will see slow average speeds
16:23:19<gamer191-1|m>OrIdow6: Nah, it was just that one download that was slow, maybe it was a Cloudflare bug (I don't care enough to check tbh)
16:24:24<gamer191-1|m>Also this game is like 600mb lol
16:25:31<gamer191-1|m>Out of interest, what is the tracker rate-limit?
16:33:59<@OrIdow6>kiska: Can we get a grafana for itch?
16:35:39pokechu22 (pokechu22) joins
16:35:41<eggdrop>[tell] pokechu22: [2025-07-28T14:30:22Z] <OrIdow6> thx for your work, it's in -items now :)
16:37:30GradientCat (GradientCat) joins
16:38:35<@OrIdow6>30
16:38:37<@OrIdow6>gamer191-1|m: 30
16:39:21Mateon1 quits [Client Quit]
16:40:05Mateon1 joins
16:42:55<gamer191-1|m>OrIdow6: Thanks
16:42:55<gamer191-1|m>Wait, why does it say “328 out”? Am I misunderstanding that page?
16:43:54Juest quits [Ping timeout: 240 seconds]
16:44:06<@imer>gamer191-1|m: out is the number of items handed out to workers that haven't completed (so that includes both items being worked on and items that are abandoned)
16:45:10<gamer191-1|m>imer: Oh
16:45:10<gamer191-1|m>How does the tracker know when an item is abandoned, and how does it handle it?
16:45:40<@imer>it doesnt
16:46:08<@imer>we generally enable reclaims with a timeout, so items can be reclaimed after x hours
16:47:28Mateon1 quits [Remote host closed the connection]
16:47:49<gamer191-1|m>Oh
16:47:49<gamer191-1|m>How does the rate limit work? Is it items per minute or something?
16:48:13Mateon1 joins
16:48:19Juest (Juest) joins
16:49:51<@imer>gamer191-1|m: exactly, yep
16:53:16grill (grill) joins
16:53:32<gamer191-1|m>imer: thanks
17:01:11AlsoHP_Archivist quits [Client Quit]
17:01:27HP_Archivist (HP_Archivist) joins
17:18:20<h2ibot>Imer edited Itch.io (+23, add #scratchtheitch irc channel): https://wiki.archiveteam.org/?diff=56621&oldid=56619
17:28:34grill quits [Ping timeout: 240 seconds]
17:31:23cuphead2527480 (Cuphead2527480) joins
17:34:41grill (grill) joins
18:03:39grill quits [Ping timeout: 260 seconds]
18:14:36awauwa quits [Quit: awauwa]
18:41:26APOLLO03a joins
18:42:54APOLLO03 quits [Ping timeout: 240 seconds]
19:00:35<h2ibot>Nicolas17v2 edited Itch.io (+221, Add tracker link): https://wiki.archiveteam.org/?diff=56622&oldid=56621
19:09:54GradientCat quits [Client Quit]
19:32:09its_notjack (its_notjack) joins
19:33:55Guest58_ quits [Quit: My Mac has gone to sleep. ZZZzzz…]
19:51:34threedeeitguy69 quits [Ping timeout: 260 seconds]
19:56:31<gamer191-1|m>Random thought: it would be great if the tracker statistics included the rate limit for the project as well as the average number of claims per minute in a specific amount of time (probably last hour). That way people could determine which projects need more help and which ones have more than enough workers already
19:59:09pabs quits [Ping timeout: 260 seconds]
20:02:15<its_notjack>(out of the loop) what's going on with TCRF as mentioned in the main channel's topic?
20:03:19<pokechu22>They were getting ddosed. Probably doesn't need to be there anymore
20:06:08<its_notjack>ah, valid. i was worried there for a bit!
20:18:01threedeeitguy69 (threedeeitguy) joins
20:41:55<h2ibot>OrIdow6 edited Itch.io (+292): https://wiki.archiveteam.org/?diff=56623&oldid=56622
20:58:55<katia>gamer191-1|m, there is
20:58:56<katia>the item serve rate
20:59:00<katia>is between 0 and 1
20:59:16<katia>the closer it is to 1, the more often a request for an item is being met
20:59:23<katia>0.5 means half the workers are sleeping
21:02:09<@OrIdow6>And the irsr on my projects is usually around 0.1% :)
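(In other words, the item serve rate is just items served divided by item requests over the window; a trivial worked example with made-up numbers.)

    # Hypothetical numbers: 200 item requests in the window, 100 items handed out.
    item_requests = 200
    items_served = 100
    print(items_served / item_requests)  # 0.5 -> about half the requesters are told to sleep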
21:04:51APOLLO03a quits [Remote host closed the connection]
21:05:19APOLLO03a joins
21:10:22dabs joins
21:48:52egallager quits [Quit: This computer has gone to sleep]
21:52:33egallager joins
22:03:22linuxgemini (linuxgemini) joins
22:36:05APOLLO03a quits [Client Quit]
22:37:37gatagoto (gatagoto) joins
22:39:30cuphead2527480 quits [Quit: Connection closed for inactivity]
22:48:05pie_ quits []
22:51:00<Vokun>kiska can we get a graph for itchio-minimal?
22:52:04<nicolas17>ETA 11.7 hours (Jul 29 10:35) fwiw
23:02:44<@JAA>→ #scratchtheitch
23:06:44Dada quits [Remote host closed the connection]
23:07:58Wohlstand (Wohlstand) joins
23:12:10Webuser868678 joins
23:12:36Webuser868678 quits [Client Quit]
23:19:14Wohlstand quits [Ping timeout: 260 seconds]
23:24:33etnguyen03 (etnguyen03) joins
23:29:46fionera quits [Remote host closed the connection]
23:29:48fionera joins
23:29:48fionera quits [Changing host]
23:29:48fionera (Fionera) joins
23:41:10etnguyen03 quits [Client Quit]
23:41:30etnguyen03 (etnguyen03) joins
23:51:17etnguyen03 quits [Client Quit]
23:51:37etnguyen03 (etnguyen03) joins