02:05:46tzt quits [Ping timeout: 265 seconds]
02:08:35tzt (tzt) joins
02:22:11nicolas17 joins
02:24:33BearFortress_ joins
02:28:17<nicolas17>is the docker image for urlteam in atdr.meo.ws/archiveteam? the github readme gives instructions for building my own image
02:41:19<datechnoman>Try this nicolas17 - docker run -d --network host --restart always --log-opt max-size=10m --log-opt max-file=2 --name urlteam atdr.meo.ws/archiveteam/terroroftinytown-client-grab:latest --concurrent 6 YOURUSERNAME
02:42:27<nicolas17>that docker URL should be documented in the wiki and/or the readme... thanks
02:47:42<TheTechRobo>nicolas17: As a general rule, docker image addresses are `atdr.meo.ws/archiveteam/${NAME_OF_GITHUB_REPO}`
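[Editor's note: the naming rule above can be checked mechanically. A minimal sketch, using the repo name already seen in this channel's `docker run` command; everything here is plain string assembly, and docker itself is not invoked:]

```shell
# Build the Docker image address from the GitHub repo name, per
# TheTechRobo's rule: atdr.meo.ws/archiveteam/${NAME_OF_GITHUB_REPO}.
REPO=terroroftinytown-client-grab
IMAGE="atdr.meo.ws/archiveteam/${REPO}:latest"
echo "$IMAGE"
# To actually fetch it:  docker pull "$IMAGE"   (not run here)
```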
02:48:46<nicolas17>does the tracker have some sense of "number of URLs in queue, waiting to be fetched"?
02:49:10<nicolas17>or is this project just bruteforcing? :P
02:51:52<datechnoman>We are bruteforcing every URL lol
02:52:05<datechnoman>Mind you slowly bruteforcing
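[Editor's note: "bruteforcing every URL" means enumerating a shortener's keyspace, i.e. every possible short code, and asking the shortener where each one redirects. A toy sketch of the enumeration step alone, with no network calls; the host `shorturl.example` in the comment is a placeholder, and real scraping is done by the tracker-coordinated client, not ad-hoc scripts like this:]

```shell
#!/bin/sh
# Enumerate a tiny base-36 keyspace: every two-character code drawn
# from a-z and 0-9, giving 36*36 = 1296 candidate codes. Each code
# maps to one candidate URL to resolve, e.g. https://shorturl.example/ab
# (placeholder host, never fetched here). The real client does this at
# scale, rate-limited per shortener by the tracker.
chars="a b c d e f g h i j k l m n o p q r s t u v w x y z 0 1 2 3 4 5 6 7 8 9"
for c1 in $chars; do
  for c2 in $chars; do
    printf '%s%s\n' "$c1" "$c2"
  done
done | wc -l   # 1296 codes total
```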
02:53:32<myself>it's weird to me how slow it is, like are the shorteners really that easily overloaded? I could imagine my client running that number of queries against _every shortener simultaneously_, not against one shortener at a time. I'm not doubting that there's a reason behind it, I'm just baffled that they must suck so bad.
02:53:47<nicolas17>fetching shortened URLs crawled from websites is just handled by archivebot etc while crawling the website?
03:04:46<TheTechRobo>myself: I think the project is intended to have a ton of URL shorteners at a time. But nobody has the time to add them.
03:05:44<myself>yeah, I mean, it's the project I run on stuff where I don't want to chew up my bandwidth cap but still contribute something useful, cuz it's barely a trickle, so that's nice
03:05:51<myself>I just wouldn't mind a "brrrt" mode
05:24:35qwertyasdfuiopghjkl (qwertyasdfuiopghjkl) joins
06:55:27s-crypt|m joins
07:08:24pabs quits [Ping timeout: 252 seconds]
07:09:04pabs (pabs) joins
07:19:13pabs quits [Ping timeout: 252 seconds]
07:38:55pabs (pabs) joins
12:11:28Chris5010 (Chris5010) joins
16:41:46<@JAA>Yeah, someone needs to add more shorteners to the tracker, then it can go brrr.
17:31:41Nickwasused joins
17:50:40<Ryz>JAA, would like to add more, mmm :c
17:50:51<Ryz>Need more guidance on the more complicated ones...
18:07:53BearFortress_ quits [Client Quit]
18:13:56Nickwasused quits [Read error: Connection reset by peer]
18:17:50Jake quits [Quit: Leaving for a bit!]
18:18:44BearFortress joins
18:19:18Jake (Jake) joins
20:51:25<vokunal|m>Only about 2 concurrent items run at a time and the rest always get stuck retrying. Is that a limit from the number of different shorteners?
21:19:50<@JAA>The tracker limits everyone to one batch per shortener per IP at a time. But there are usually way more workers than batches available, so you'll often see even fewer.
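[Editor's note: JAA's explanation implies a hard ceiling on useful concurrency per IP. A back-of-the-envelope sketch; the shortener count below is a made-up example, not the tracker's real number:]

```shell
#!/bin/sh
# One batch per shortener per IP: with N active shorteners, an IP can
# hold at most N batches at once, no matter how many workers it runs.
shorteners=3              # hypothetical count of active shorteners
workers=6                 # e.g. the "--concurrent 6" from the run command
max_batches=$shorteners   # tracker cap: 1 batch per shortener per IP
idle=$((workers - max_batches))
echo "$max_batches busy, $idle stuck retrying"
```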