00:04:40ericgallager joins
00:09:36kansei- (kansei) joins
00:09:56kansei quits [Ping timeout: 256 seconds]
00:15:05pabs quits [Read error: Connection reset by peer]
00:15:45pabs (pabs) joins
00:20:16<Guest>is it normal for the universal-tracker to take several seconds to load each page?
00:20:48<nicolas17>your own install of it?
00:25:51<Guest>yes
00:26:22<Guest>it's using the development redis database, but i don't believe that should be any slower
00:26:33cmlow quits [Quit: Ping timeout (120 seconds)]
00:26:37<Guest>it's the ruby http server that's the problem
00:26:44cmlow joins
00:26:44kansei- quits [Client Quit]
00:26:56kansei (kansei) joins
00:32:36kansei quits [Ping timeout: 256 seconds]
00:34:51kansei (kansei) joins
00:39:07arch quits [Ping timeout: 272 seconds]
00:41:12etnguyen03 quits [Client Quit]
00:42:17kansei quits [Ping timeout: 272 seconds]
00:42:58kansei (kansei) joins
00:44:30datechnoman (datechnoman) joins
00:47:25arch (arch) joins
00:48:28kansei quits [Ping timeout: 256 seconds]
00:48:57kansei (kansei) joins
00:59:30kansei- (kansei) joins
01:00:22kansei quits [Ping timeout: 256 seconds]
01:15:11DogsRNice joins
01:20:58etnguyen03 (etnguyen03) joins
01:25:47hackbug (hackbug) joins
01:38:22<Guest>i can also queue items, but the grab container isn't picking them up. the host is set correctly (and reachable)
01:40:03Island joins
02:19:57nine quits [Quit: See ya!]
02:20:09nine joins
02:20:09nine quits [Changing host]
02:20:09nine (nine) joins
02:28:11<TheTechRobo>dumb question, but are you using the correct ID for the project in the tracker?
02:29:26<nicolas17>is seesaw still compatible with universal-tracker? wouldn't surprise me if it accidentally (or otherwise) depends on the new tracker now
02:38:33<Guest>nicolas17, TheTechRobo : i just fixed this, the problem is i used the adobeaero-grab code (although i modified it to fit the project), but from what i can tell, it was pulling multiple items at once (with ?multi=MULTI_ITEM_SIZE). the open source universal-tracker doesn't support it and it gives 404 errors (which seesaw interpreted as no items in the queue)
02:46:03<nicolas17>ha, so it was protocol incompatibility kinda
02:47:03midou quits [Ping timeout: 272 seconds]
02:47:48<@JAA>That's in the pipeline code though, not in seesaw.
02:49:34<Guest>yes, the pipeline code was sending an incorrect string to seesaw which caused it to 404
02:49:38<nicolas17>oh so multiitems are an even dirtier hack than I thought
02:49:43<nicolas17>:p
02:50:54<@JAA>Guest: Technically to seesaw, but really to the tracker as seesaw just passes it on. Just wanting to keep the terminology straight there.
02:51:00<@JAA>nicolas17: Absolutely right.
02:51:24<nicolas17>the null-separated item names already looked hackish
02:51:56<nicolas17>but if seesaw itself is completely unaware of all this... lol
02:52:22<@JAA>Yep, seesaw has zero awareness of multi-items.
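[Editor's note: the mismatch above can be sketched roughly as follows. The route path, the `?multi=` handling, and the return values are illustrative assumptions, not the actual universal-tracker API; the point is only that the pipeline builds the request, seesaw passes it through opaquely, and a 404 looks to seesaw like an empty queue.]

```python
# Illustrative sketch only -- route and parameter names are invented.
MULTI_ITEM_SIZE = 20  # hypothetical pipeline setting

def build_item_request(multi=None):
    """The pipeline builds the request URL; seesaw passes it on
    opaquely (it has zero awareness of multi-items)."""
    url = "/example-project/request"
    if multi:
        url += f"?multi={multi}"
    return url

def open_source_tracker(url):
    """The open-source universal-tracker only serves the plain route
    and answers the multi-item variant with a 404."""
    return 200 if url == "/example-project/request" else 404

def seesaw_view(status):
    """seesaw interprets a failed item request as an empty queue."""
    return "got item" if status == 200 else "no items in the queue"

print(seesaw_view(open_source_tracker(build_item_request(MULTI_ITEM_SIZE))))
print(seesaw_view(open_source_tracker(build_item_request())))
```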
02:52:28<Guest>the grab script is now giving "Curl expects to upload a single file.", even when rsync concurrent uploads is set to 1. is there any base -grab image that works with universal-tracker out of the box?
02:52:52<nicolas17>curl?
02:54:37<Guest>yes it says "Uploading with Curl to http://target/warrior/{project}/{downloader}/"
02:55:42midou joins
02:57:00nine quits [Client Quit]
02:57:13nine joins
02:57:13nine quits [Changing host]
02:57:13nine (nine) joins
02:58:30HP_Archivist (HP_Archivist) joins
03:04:19<Guest>i used http instead of rsync.[1] after changing that i got "Unknown module 'warrior'", but had to replace "warrior" with "ateam-airsync" which fixed it. i got the "warrior" from the default uploads config example.
03:04:23<Guest>[1] https://github.com/ArchiveTeam/seesaw-kit/blob/699b0d215768c2208b5b48844c9f0f75bd6a1cbc/seesaw/tracker.py#L235
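[Editor's note: the tracker.py line linked above is in the code path where seesaw chooses an uploader. Roughly, the choice follows the upload target's URL scheme; the following is a simplified sketch of that dispatch, not the exact seesaw-kit code.]

```python
from urllib.parse import urlsplit

def pick_uploader(target_url):
    """Simplified sketch: an http(s):// upload target gets the curl
    uploader (which can only upload a single file, hence the error
    quoted above), while an rsync:// target gets the rsync uploader."""
    scheme = urlsplit(target_url).scheme
    if scheme in ("http", "https"):
        return "curl"
    if scheme == "rsync":
        return "rsync"
    raise ValueError(f"unsupported upload scheme: {scheme}")

print(pick_uploader("http://target/warrior/project/downloader/"))   # curl
print(pick_uploader("rsync://target/ateam-airsync/downloader/"))    # rsync
```

Note that in an rsync:// URL the first path component is the rsync module name, which is why the "warrior" placeholder from the example config had to become "ateam-airsync" to match the module the target actually exports.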
03:10:21<Guest>"@ERROR: max connections (-1) reached -- try again later", increasing the connections env var didn't work. this is using atdr.meo.ws/fusl/ateam-airsync
03:14:00dendory quits [Quit: The Lounge - https://thelounge.chat]
03:15:05dendory (dendory) joins
03:15:31dendory quits [Client Quit]
03:16:19dendory (dendory) joins
03:42:53<TheTechRobo>that's not the connections var, that's the disk limit
03:43:35<TheTechRobo>-1 means the target has reached the soft disk limit and is not accepting connections. Assuming you used the 50% example in the README, you probably have >50% disk usage
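[Editor's note: the behaviour TheTechRobo describes can be sketched like this. The function name, the default threshold, and the "normal" connection count are illustrative; the real target presumably derives the rsyncd "max connections" value it advertises from its disk usage, and a value of -1 makes rsyncd refuse every client with the error quoted above.]

```python
import shutil

def advertised_max_connections(used_fraction, soft_limit=0.5, normal=20):
    """Illustrative: once disk usage crosses the soft limit, the
    target advertises 'max connections = -1', so rsyncd rejects all
    incoming connections until space is freed."""
    return -1 if used_fraction > soft_limit else normal

# e.g. feed it the target's actual disk usage:
usage = shutil.disk_usage("/")
print(advertised_max_connections((usage.total - usage.free) / usage.total))
```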
03:56:59etnguyen03 quits [Client Quit]
04:00:36Wohlstand (Wohlstand) joins
04:02:18etnguyen03 (etnguyen03) joins
04:03:05linuxgemini5 (linuxgemini) joins
04:03:24linuxgemini quits [Read error: Connection reset by peer]
04:03:24linuxgemini5 is now known as linuxgemini
04:05:35stepney141 quits [Ping timeout: 272 seconds]
04:07:18stepney141 (stepney141) joins
04:14:37<nicolas17>https://opensource.samsung.com/uploadList?menuItem=mobile is this down or did they ban me?
04:14:57<nicolas17>ok I'm getting error 522 rather than timeout now
04:15:16<@Fusl>nicolas17: works from here
04:15:35<nicolas17>and now works again
04:15:40<@Fusl>yw
04:15:43<@Fusl>(:
04:17:40Webuser444726 joins
04:18:18Webuser444726 quits [Client Quit]
04:18:32Webuser473229 joins
04:20:18etnguyen03 quits [Client Quit]
04:27:12whimsysciences quits [Ping timeout: 256 seconds]
04:27:48etnguyen03 (etnguyen03) joins
04:28:13etnguyen03 quits [Remote host closed the connection]
04:37:20Island quits [Read error: Connection reset by peer]
04:41:12DogsRNice quits [Read error: Connection reset by peer]
04:41:56<Webuser473229>l;k;
04:42:57datechnoman quits [Ping timeout: 272 seconds]
04:50:28datechnoman (datechnoman) joins
05:00:59nexussfan quits [Quit: Konversation terminated!]
05:06:44nexussfan (nexussfan) joins
05:21:47despot joins
05:21:47<eggdrop>[tell] despot: [2025-12-08T13:04:15Z] <cruller> see https://irclogs.archivete.am/archiveteam-bs/2025-12-08#l50ec5a4b
05:21:56<despot>hi there
05:25:17<despot>i noticed that https://prnt.sc/ doesn't have randomized urls. that means https://prnt.sc/111111, https://prnt.sc/111112 and so on are all valid urls. so that's potentially 2176782336 images. if it seems like it might ever go down, would you archive it?
05:31:07Webuser473229 quits [Client Quit]
05:40:11despot quits [Client Quit]
05:40:57despot joins
05:51:27nukke quits [Quit: nukke]
05:55:28nukke (nukke) joins
05:55:47despot_ joins
06:00:13despot quits [Ping timeout: 272 seconds]
06:09:48klea wonders if there could be some kind of deduplication https://prnt.sc/111110
06:10:16<klea>since it also returns a 200 for firefox
06:17:12nexussfan quits [Client Quit]
06:31:41<despot_>idk honestly, i just noticed that earlier images are 6 letters/digits long. 0 can't be the first character though, i've just noticed
06:36:57<@JAA>Pretty sure those are ancient. Recent URLs have [0-9a-zA-Z_-]{12} codes.
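[Editor's note: checking the arithmetic in this thread, with the code lengths and alphabets taken from the messages above (the old codes are assumed to draw from a 36-symbol case-insensitive alphanumeric alphabet, which is what reproduces the quoted figure):]

```python
# Old-style prnt.sc codes: 6 characters from a 36-symbol alphabet.
old_total = 36 ** 6                 # the 2176782336 figure quoted earlier
old_no_leading_zero = 35 * 36 ** 5  # if '0' can't lead, per despot_

# Recent codes per JAA: [0-9a-zA-Z_-]{12}, a 64-symbol alphabet.
new_total = 64 ** 12                # equals 2**72; not enumerable

print(old_total, old_no_leading_zero, new_total)
```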
06:38:13midou quits [Ping timeout: 272 seconds]
06:45:13<despot_>it seems so