00:14:54 | <nicolas17> | AB would split it into multiple warcs right? |
00:17:01 | <pokechu22> | Yeah |
00:18:50 | <Flashfire42> | Do all projects use a bloom filter or is it just #// ? |
00:19:17 | <nicolas17> | either way, this 72TB figure puts my 624GiB of Apple simulator runtimes into a good perspective :P |
00:20:01 | <@JAA> | Flashfire42: Virtually all projects do. |
00:20:50 | <Flashfire42> | I'm just wondering whether queueing stuff to youtube or telegram creates the potential for things to be skipped by the bloom filter. |
00:22:39 | <@JAA> | Yes |
00:23:30 | <Flashfire42> | is there any way to further minimise that? |
00:24:30 | <@JAA> | Not really unless you find a pile of money somewhere. :-) |
00:25:49 | <nicolas17> | I assume the bloom filter can be made larger to have fewer false positives, but you can never bring it to 0 |
00:25:50 | <pokechu22> | It's only a risk of the URL you queue colliding with another URL in the same project, so I don't think that's much of a problem, I guess unless the URL itself is invalid for the project? |
00:39:42 | <@JAA> | nicolas17: Correct. A pile of money would allow for more memory = larger filter = lower false positive rate. A much bigger pile of money would be needed to avoid false positives entirely (by not using a bloom filter at all). |
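A minimal sketch of the memory trade-off JAA describes, using the standard bloom filter estimate p ≈ (1 - e^(-k*n/m))^k; the filter sizes, item count, and hash count below are illustrative assumptions, not the tracker's actual configuration:

```python
import math

def bloom_false_positive_rate(memory_bytes: int, items: int, hashes: int) -> float:
    """Classic bloom filter estimate: p = (1 - e^(-k*n/m))^k."""
    m = memory_bytes * 8   # filter size in bits
    n = items              # items already inserted
    k = hashes             # number of hash functions
    return (1 - math.exp(-k * n / m)) ** k

# A billion seen items in a 2 GiB filter vs. an 8 GiB filter (hypothetical sizes):
for gib in (2, 8):
    p = bloom_false_positive_rate(gib * 2**30, 10**9, hashes=7)
    print(f"{gib} GiB filter, 1e9 items: ~{p:.1e} false positive rate")
```

More memory drives the rate down sharply, but it only reaches zero by abandoning the bloom filter for exact tracking.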
00:41:32 | <nicolas17> | I assume inactive projects have their bloom filter offloaded from memory already? |
00:42:43 | <Vokun> | Is it physically possible to dedup with no false positives on something that big? |
00:43:00 | <Vokun> | I guess somewhere someone's doing it, but it sounds impossible |
00:43:25 | <@JAA> | pokechu22: Every item being added has a tiny chance of colliding with some previously seen item in the same project. URL validity doesn't really play a role in this, and at least for things queued with qubert, there's a sniff test anyway. |
00:43:49 | <Flashfire42> | on something in the thousands or millions maybe but not likely billions |
00:43:51 | <@JAA> | nicolas17: Yes, and there are also other things to minimise memory usage. |
00:44:10 | <nicolas17> | Vokun: it's definitely possible, but it may not be fast enough |
00:44:12 | <@JAA> | Vokun: I'm sure AWS has a product to sell you for this exact thing. ;-) |
00:44:39 | <@JAA> | You can shard it efficiently via hashes. |
00:44:40 | <Vokun> | 7 TB of ram |
00:45:01 | <@JAA> | Yeah, it's not going to be cheap. |
00:45:09 | <nicolas17> | like, you can have a SQL database with a unique index |
00:45:50 | <nicolas17> | but if you can't fit large parts of the database in memory, how long will it take to insert a million item IDs when there's already a billion? |
00:46:46 | <@JAA> | Yeah, performance is the other big knob. |
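As a point of reference, a minimal sketch of the exact-dedup approach being discussed: a unique index per shard, with items routed to shards by hash as JAA suggests. The shard count, file names, and schema here are hypothetical:

```python
import hashlib
import sqlite3

NUM_SHARDS = 16  # illustrative; a real deployment would size this to the data

def shard_for(item_id: str) -> int:
    """Route an item to a shard by hashing it, so no single index holds everything."""
    return hashlib.sha256(item_id.encode()).digest()[0] % NUM_SHARDS

def open_shard(shard: int) -> sqlite3.Connection:
    conn = sqlite3.connect(f"seen-{shard:02d}.sqlite3")  # hypothetical file name
    conn.execute("CREATE TABLE IF NOT EXISTS seen (item TEXT PRIMARY KEY)")
    return conn

def mark_seen(conn: sqlite3.Connection, item_id: str) -> bool:
    """Insert an item; True if it was new, False if it was already seen."""
    cur = conn.execute("INSERT OR IGNORE INTO seen (item) VALUES (?)", (item_id,))
    conn.commit()
    return cur.rowcount == 1

conn = open_shard(shard_for("item:12345"))
print(mark_seen(conn, "item:12345"))  # True the first time, False on repeats
```

No false positives, but every insert touches an on-disk index, which is exactly where the performance cost nicolas17 mentions comes from.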
00:46:48 | | etnguyen03 (etnguyen03) joins |
00:48:44 | <Vokun> | I got a new ssd, and I'm thinking of just deduping my url stash not in memory. I've deduped chunks of it, but never the chunks against each other; I think my new ssd is probably fast enough to do it |
00:50:04 | <Vokun> | I think it'd take me longer to figure out how to do it the right way than to just do this |
00:52:00 | <nicolas17> | sort -u |
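A sketch of that suggestion, assuming GNU sort is available: sort -u already performs an external merge sort, so pointing its buffer and temp directory at the new SSD is usually all that's needed even when the list doesn't fit in RAM. The paths and buffer size below are hypothetical:

```python
import subprocess

# GNU sort spills sorted runs to its temp directory and merges them, so the
# working set never has to fit in memory.
subprocess.run(
    [
        "sort", "-u",
        "-S", "4G",                  # in-memory buffer before spilling runs
        "-T", "/mnt/new-ssd/tmp",    # hypothetical temp dir on the fast SSD
        "-o", "urls.deduped.txt",
        "urls-chunk-01.txt", "urls-chunk-02.txt",  # hypothetical chunk files
    ],
    check=True,
)
```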
01:00:54 | | wickedplayer494 quits [Ping timeout: 260 seconds] |
01:01:31 | | wickedplayer494 joins |
01:01:36 | | wickedplayer494 is now authenticated as wickedplayer494 |
01:02:28 | | lennier2 joins |
01:05:34 | | lennier2_ quits [Ping timeout: 260 seconds] |
01:12:25 | | PredatorIWD253 joins |
01:14:54 | | PredatorIWD25 quits [Ping timeout: 260 seconds] |
01:14:54 | | PredatorIWD253 is now known as PredatorIWD25 |
01:18:01 | <h2ibot> | PaulWise edited Anubis (+49, LWN article): https://wiki.archiveteam.org/?diff=56412&oldid=56099 |
01:29:08 | <h2ibot> | PaulWise edited Anubis/uncategorized (-69, use <code> to avoid having to add *): https://wiki.archiveteam.org/?diff=56413&oldid=56276 |
01:40:09 | <h2ibot> | PaulWise edited Anubis/uncategorized (+329, more domains/urls): https://wiki.archiveteam.org/?diff=56414&oldid=56413 |
02:12:11 | | dabs quits [Read error: Connection reset by peer] |
02:16:15 | <h2ibot> | PaulWise edited Anubis (+72, adjust for new metarefresh challenge): https://wiki.archiveteam.org/?diff=56415&oldid=56412 |
02:31:42 | | etnguyen03 quits [Client Quit] |
02:39:11 | | etnguyen03 (etnguyen03) joins |
02:51:57 | | etnguyen03 quits [Remote host closed the connection] |
02:57:37 | | lennier2_ joins |
03:00:47 | | lennier2 quits [Ping timeout: 276 seconds] |
03:31:59 | | magmaus3 quits [Ping timeout: 276 seconds] |
03:32:09 | | magmaus3 (magmaus3) joins |
03:36:29 | <Dango360> | pabs: im guessing it was intended that Anubis/uncategorized is now super condensed? https://transfer.archivete.am/13ORzn/Screenshot%202025-07-11%20at%2004.35.54.png |
03:36:29 | <eggdrop> | inline (for browser viewing): https://transfer.archivete.am/inline/13ORzn/Screenshot%202025-07-11%20at%2004.35.54.png |
03:37:01 | <pabs> | oh :/ |
03:37:59 | <pabs> | hmm, it isn't condensed here |
03:38:46 | <Dango360> | i'm on chrome 138 |
03:41:21 | <pokechu22> | pabs: did you want <pre> instead of <code> ? |
03:43:07 | <pabs> | kinda, but the links aren't HTMLified :/ |
03:43:30 | <h2ibot> | PaulWise edited Anubis/uncategorized (-2, use <pre>): https://wiki.archiveteam.org/?diff=56416&oldid=56414 |
03:43:59 | <pabs> | just wanted \n to be turned into <br/> automatically. apparently that requires the <poem> extension, which isn't enabled... |
03:44:25 | <BlankEclair> | enable scribunto, and i can make lua mess real quick :thumbs_up: |
03:47:14 | | Guest58 joins |
03:47:17 | <BlankEclair> | wait, y'all have string functions enabled in parserfunctions |
03:47:45 | <BlankEclair> | > Error: String exceeds 1,000 character limit. |
03:47:47 | <BlankEclair> | oh |
03:52:58 | <BlankEclair> | there we go |
03:53:09 | <BlankEclair> | now everyone can be unhappy |
03:53:32 | <h2ibot> | BlankEclair edited Anubis/uncategorized (-7, Make links work (Parsoid users hate this trick!)): https://wiki.archiveteam.org/?diff=56417&oldid=56416 |
03:55:38 | <BlankEclair> | oh wait, this hack also works w/ parsoid: https://meta.miraheze.org/wiki/user:blankEclair/Quips?action=parsermigration-edit |
03:55:43 | | magmaus3 quits [Read error: Connection reset by peer] |
03:55:52 | | magmaus3 (magmaus3) joins |
04:01:00 | | Radzig quits [Quit: ZNC 1.10.1 - https://znc.in] |
04:01:38 | | Radzig joins |
04:11:05 | | Snivy quits [Quit: The Lounge - https://thelounge.chat] |
04:15:53 | | Snivy (Snivy) joins |
04:21:35 | | Guest58 quits [Client Quit] |
04:22:09 | | Guest58 joins |
04:25:33 | | ThetaDev quits [Quit: https://quassel-irc.org - Chat comfortably. Anywhere.] |
04:26:02 | | ThetaDev joins |
04:26:49 | | Guest58 quits [Ping timeout: 260 seconds] |
04:36:38 | | Shjosan quits [Quit: Am sleepy (-, – )…zzzZZZ] |
04:37:04 | | Point joins |
04:38:39 | | Shjosan (Shjosan) joins |
04:42:33 | | Guest58 joins |
04:42:38 | <Point> | wanting to run archive team via docker on my home server; reading the guide, it said to ask here for the link to the docker image for a specific project. no real preference on project, ideally whatever is needed most / most under-represented |
04:44:36 | <BlankEclair> | Point: you can also run the warrior as a docker image |
04:44:50 | | benjins3_ quits [Read error: Connection reset by peer] |
04:45:53 | <Point> | yes, im wanting to run it as a docker image, though the guide on the wiki page i saw for that said to ask here for image link |
04:48:43 | <Point> | ah, i had found a list of the docker images, ill pick one from there, thank you |
04:52:53 | <pabs> | BlankEclair++ |
04:52:53 | <eggdrop> | [karma] 'BlankEclair' now has 14 karma! |
04:53:14 | | skyrocket quits [Ping timeout: 276 seconds] |
04:57:28 | | skyrocket joins |
04:59:39 | <Point> | oh i misread what was said by BlankEclair, or rather missed that the Warrior decides the project it is working on automatically. thank you for the suggestion, ill go with that |
04:59:50 | <BlankEclair> | mhm ^_^ |
05:00:02 | <BlankEclair> | it automatically handles doing the project-specific grab stuff |
05:00:11 | <BlankEclair> | and provides a web ui |
05:00:32 | <BlankEclair> | (you can also pick a specific project within the web ui, but obv you don't mind going with AT choice) |
05:00:53 | | Snivy quits [Client Quit] |
05:02:20 | <Point> | well thank you so much, i have that deployed now <3 |
05:02:23 | <Point> | BlankEclair++ |
05:02:23 | <eggdrop> | [karma] 'BlankEclair' now has 15 karma! |
05:02:28 | <BlankEclair> | :3 |
05:03:50 | <steering> | BlankEclair: LOL @ remove the closing tag |
05:03:55 | <BlankEclair> | IKR |
05:04:01 | <BlankEclair> | i don't know why it works |
05:04:04 | <BlankEclair> | but it does. flawlessly. |
05:04:44 | | TheEnbyperor quits [Ping timeout: 260 seconds] |
05:06:10 | <steering> | mediawiki-- |
05:06:10 | <eggdrop> | [karma] 'mediawiki' now has -1 karma! |
05:06:11 | <steering> | BlankEclair++ |
05:06:13 | <eggdrop> | [karma] 'BlankEclair' now has 16 karma! |
05:06:14 | | TheEnbyperor_ quits [Ping timeout: 276 seconds] |
05:06:19 | <BlankEclair> | y'know what |
05:06:21 | <BlankEclair> | CirrusSearch-- |
05:06:22 | <eggdrop> | [karma] 'CirrusSearch' now has -1 karma! |
05:09:19 | | Snivy (Snivy) joins |
05:10:35 | | nicolas17 quits [Quit: Konversation terminated!] |
05:20:21 | | TheEnbyperor (TheEnbyperor) joins |
05:20:27 | | TheEnbyperor_ joins |
05:24:09 | | nicolas17 joins |
05:28:04 | | pabs quits [Ping timeout: 260 seconds] |
05:30:28 | | Guest58 quits [Client Quit] |
05:41:26 | | nicolas17_ joins |
05:41:28 | | nicolas17 quits [Read error: Connection reset by peer] |
05:42:02 | | DartRetaliator joins |
05:47:41 | | Guest58 joins |
06:00:51 | | pabs (pabs) joins |
06:02:55 | | nicolas17 joins |
06:02:55 | | nicolas17_ quits [Read error: Connection reset by peer] |
06:11:38 | | Point quits [Client Quit] |
06:16:18 | | nicolas17_ joins |
06:17:04 | | nicolas17 quits [Read error: Connection reset by peer] |
06:30:15 | | nicolas17 joins |
06:34:34 | | nicolas17_ quits [Ping timeout: 260 seconds] |
06:36:19 | | skyrocket quits [Ping timeout: 260 seconds] |
06:47:19 | | skyrocket joins |
06:50:09 | | yourfate1 quits [Quit: WeeChat 4.5.1] |
06:50:22 | | PredatorIWD25 quits [Read error: Connection reset by peer] |
06:55:34 | | PredatorIWD25 joins |
06:58:17 | | Guest58 quits [Client Quit] |
08:44:37 | <@arkiver> | nulldata: i'm definitely in for that!! |
08:45:37 | <@arkiver> | the 72 TB would be nice to have a separate project |
08:47:18 | | Dada joins |
08:49:28 | | Island quits [Read error: Connection reset by peer] |
08:55:13 | | lemuria_ is now known as lemuria |
09:00:53 | | agtsmith quits [Ping timeout: 276 seconds] |
09:04:24 | | simon816 quits [Quit: ZNC 1.9.1 - https://znc.in] |
09:09:28 | | nicolas17_ joins |
09:10:51 | | simon816 (simon816) joins |
09:12:39 | | nicolas17 quits [Ping timeout: 260 seconds] |
09:29:54 | | DartRetaliator_ joins |
09:33:39 | | DartRetaliator quits [Ping timeout: 260 seconds] |
09:45:44 | | BennyOtt_ joins |
09:45:44 | | BennyOtt quits [Ping timeout: 276 seconds] |
09:46:50 | | BennyOtt_ is now known as BennyOtt |
09:46:56 | | BennyOtt is now authenticated as BennyOtt |
09:46:57 | | awauwa (awauwa) joins |
09:51:10 | | Guest58 joins |
10:01:57 | | Lunarian1 (LunarianBunny1147) joins |
10:05:44 | | LunarianBunny1147 quits [Ping timeout: 260 seconds] |
10:11:34 | | nicolas17_ quits [Ping timeout: 260 seconds] |
10:11:44 | | unlobito quits [Ping timeout: 276 seconds] |
10:14:21 | | nicolas17_ joins |
10:19:31 | | unlobito (unlobito) joins |
10:31:06 | | pixel (pixel) joins |
10:33:38 | | agtsmith joins |
10:47:09 | <@arkiver> | hexagonwin: yes, upload it to IA - others can download it then, but it's unlikely to go into the Wayback Machine |
10:47:32 | <@arkiver> | tzt: please add it to deathwatch! "ROCKET3 .NET" |
10:49:30 | | Wohlstand (Wohlstand) joins |
10:54:52 | | cuphead2527480 (Cuphead2527480) joins |
11:00:45 | | benjins3 joins |
11:35:14 | <@arkiver> | pabs: it may be nice to do a tripod project, yes |
11:35:52 | <@arkiver> | the wiki page at https://wiki.archiveteam.org/index.php/Tripod mentions 17500 sites that return a 200, do we have a list of those? |
11:37:09 | <@arkiver> | pabs: on tuxfamily, is there some page with more information on it? i see no mention of 'tux' on deathwatch for example |
11:37:59 | <@arkiver> | nicolas17: in my opinion, 624 GiB for the Apple Xcode data is totally worth it... |
11:38:06 | <@arkiver> | have we archived it already? |
11:41:11 | <justauser|m> | There was a job for encode.su in 2024 but it was aborted rather quickly. https://archive.fart.website/archivebot/viewer/job/202404221145045frly No mention of reason in #archiveteam-bs logs; can anybody with password to #archivebot check there? Unless it was something transient, restarting is probably a bad idea. |
11:41:25 | <justauser|m> | The main page about tuxfamily is https://pad.notkiska.pw/p/archivebot-tuxfamily |
11:48:19 | | PotatoProton01 joins |
11:48:52 | <@arkiver> | nulldata: do you have an idea how complete your list of drivers is? it's pretty nice stuff |
11:49:20 | <@arkiver> | you said you only looked for drivers - but what if you look for "everything", what would that give us? |
12:04:47 | | Onyx joins |
12:09:31 | | pixel leaves |
12:20:28 | | PotatoProton01 quits [Client Quit] |
12:24:01 | | pixel (pixel) joins |
13:00:28 | | unlobito quits [Remote host closed the connection] |
13:01:24 | | unlobito (unlobito) joins |
13:03:00 | | pixel leaves |
13:03:05 | | pixel (pixel) joins |
13:04:35 | | cuphead2527480 quits [Client Quit] |
13:23:46 | | sec^nd quits [Remote host closed the connection] |
13:24:00 | | sec^nd (second) joins |
13:26:47 | | egallager quits [Quit: This computer has gone to sleep] |
13:31:04 | | DartRetaliator_ quits [Ping timeout: 260 seconds] |
13:33:48 | | pixel leaves |
13:33:53 | | pixel (pixel) joins |
13:45:39 | | BennyOtt quits [Ping timeout: 260 seconds] |
13:46:43 | | BennyOtt (BennyOtt) joins |
13:49:35 | | grill (grill) joins |
14:01:59 | | grill quits [Ping timeout: 260 seconds] |
14:03:39 | | pixel leaves |
14:03:43 | | grill (grill) joins |
14:09:17 | | egallager joins |
14:31:42 | | PredatorIWD25 quits [Read error: Connection reset by peer] |
14:34:39 | | PredatorIWD25 joins |
15:08:08 | | BennyOtt quits [Ping timeout: 276 seconds] |
15:08:11 | | BennyOtt_ joins |
15:09:10 | | BennyOtt_ is now known as BennyOtt |
15:09:15 | | BennyOtt is now authenticated as BennyOtt |
15:12:41 | | grill quits [Ping timeout: 276 seconds] |
15:12:42 | | BennyOtt quits [Client Quit] |
15:13:43 | | BennyOtt (BennyOtt) joins |
15:29:47 | | pixel (pixel) joins |
15:33:44 | | egallager quits [Read error: Connection reset by peer] |
15:34:01 | | egallager joins |
15:37:02 | | egallager quits [Read error: Connection reset by peer] |
15:37:20 | | egallager joins |
15:53:03 | | dabs joins |
15:54:40 | | nicolas17_ is now known as nicolas17 |
16:00:21 | | BornOn420 quits [Remote host closed the connection] |
16:00:54 | | BornOn420 (BornOn420) joins |
16:04:12 | | cuphead2527480 (Cuphead2527480) joins |
16:16:43 | <nulldata> | I think it's all of them, but Windows Server Update Services (WSUS) is notoriously buggy and the database isn't super well documented. |
16:16:43 | <nulldata> | If I synced all products, in theory I'd have a database of pretty much every AV definition update, patch, service pack, and driver for most Microsoft software products released since 2000 that you might use in a commercial setting. That's as long as WSUS doesn't croak and I don't run out of space to house the DB. Drivers for each version of |
16:16:43 | <nulldata> | Windows are in their own "products", as admins usually stay away from syncing drivers since it tends to make WSUS very unstable. Syncing the drivers took over a day and uses around 108GB for its MSSQL database. |
16:17:43 | <nulldata> | To note, I've never actually tried to sync really old products like say Office XP 2002. It's in the list, but I dunno if any data is still there for it or if the files are still live. I've got a snapshot before I synced drivers - I could roll back and give it a shot... |
16:17:58 | <nulldata> | arkiver ^ |
16:18:32 | <@arkiver> | nulldata: it would be pretty nice to have a complete list of all of their binaries, next to drivers |
16:18:41 | <@arkiver> | though i see that may require significant resources |
16:18:48 | <@arkiver> | nulldata: ^ |
16:18:54 | <@arkiver> | i'll set up a project for the 72 TB |
16:19:04 | | Onyx quits [Remote host closed the connection] |
16:19:15 | <@arkiver> | will start in the weekend and we'll get them as fast as possible |
16:22:12 | <@arkiver> | nulldata: am i correct in thinking that "the other stuff" next to the drivers might run into the many 100s of TBs? (or likely 1+ PB) |
16:22:27 | <@arkiver> | will name the current project "Windows Update Drivers" |
16:22:37 | <@arkiver> | a more general project later could be "Windows Update" |
16:22:43 | <@arkiver> | if we decide to do that |
16:22:58 | | grill (grill) joins |
16:24:27 | | nicolas17_ joins |
16:28:59 | | nicolas17 quits [Ping timeout: 260 seconds] |
16:29:26 | | jinn6 quits [Quit: WeeChat 4.6.3] |
16:31:20 | <@arkiver> | nulldata: i see several lines like "2AD00540-6449-4DDF-A603-72935120AC6E,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL" is that correct? |
16:31:27 | | jinn6 joins |
16:31:38 | <nicolas17_> | arkiver: can I put the 624GiB URL list into archivebot or should I split it up? |
16:31:46 | | nicolas17_ is now known as nicolas17 |
16:32:35 | <@arkiver> | nicolas17: where is your list? |
16:32:46 | <@arkiver> | i'm not an expert in ArchiveBot, so it's probably a question for JAA |
16:33:41 | <nicolas17> | nowhere but lemme upload :P |
16:34:04 | <nicolas17> | https://transfer.archivete.am/inline/x9qyW/updates.cdn-apple.com-xcode-simulators.txt |
16:34:50 | <@arkiver> | nulldata: distribution of years https://transfer.archivete.am/M7Wv1/WUDrivers_years.txt |
16:34:51 | <eggdrop> | inline (for browser viewing): https://transfer.archivete.am/inline/M7Wv1/WUDrivers_years.txt |
16:35:02 | <nicolas17> | I checked the cdx API, none of them are in WBM, there's only a few failed captures that returned an html error |
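A rough sketch of the kind of CDX lookup described there; the example URL is a hypothetical placeholder, not a real entry from the simulator list:

```python
import json
import urllib.parse
import urllib.request

def wbm_captures(url: str) -> list:
    """Ask the Wayback Machine CDX API which captures exist for a URL."""
    query = urllib.parse.urlencode({"url": url, "output": "json", "limit": "10"})
    with urllib.request.urlopen(f"https://web.archive.org/cdx/search/cdx?{query}") as resp:
        rows = json.loads(resp.read() or b"[]")
    return rows[1:] if rows else []  # first row is the column header

# Hypothetical file URL for illustration only:
captures = wbm_captures("https://updates.cdn-apple.com/hypothetical/simruntime.dmg")
print("already in WBM" if captures else "not captured yet")
```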
16:35:08 | <@arkiver> | looks like either the number of drivers really took off with Windows 10, or many earlier ones were already deleted |
16:35:20 | <@arkiver> | nicolas17: how large are the files? |
16:35:49 | <@arkiver> | pretty big i guess judging by number of URLs |
16:35:52 | <nicolas17> | 10,737,418,240 bytes is the largest single file |
16:36:05 | <@arkiver> | JAA: pabs: can we use AB for this? ^ |
16:36:18 | <@arkiver> | 624 GB total, up to 10 GB each URL |
16:36:33 | <nicolas17> | I know AB works with 15GiB files, I just wasn't sure about the total size, though I guess it splits into multiple warcs |
16:36:40 | <@arkiver> | it does yes |
16:36:58 | <@arkiver> | i think we can put it in, it may just need a certain pipeline which i don't know much about |
16:37:39 | <nulldata> | Hmm there might be something wrong with my query - the files table only has 78373 rows - which is significantly less than what my query has. I'll look into it later |
16:37:57 | | nicolas17 is now authenticated as nicolas17 |
16:38:27 | <@arkiver> | really good these are being collected. they're very important and often overlooked |
16:38:33 | <@arkiver> | i'll go get some sleep |
16:38:55 | <nicolas17> | your sleep schedule is more mysterious to me than JAA's |
16:39:33 | <@arkiver> | yeah it's not exactly stable |
16:39:43 | <@arkiver> | good day to you :P |
16:39:52 | <nicolas17> | gn :P |
16:44:18 | <pokechu22> | nicolas17: 624 GB total is probably fine |
16:44:59 | | grill quits [Ping timeout: 276 seconds] |
16:45:04 | <pokechu22> | the main danger is that archivebot pauses jobs on a pipeline when there's less than 5GB free space, but if files are larger than that then the pausing won't stop it from running out of disk space (it only prevents starting a new download) |
16:46:15 | | grill (grill) joins |
16:48:47 | <pokechu22> | based on http://archivebot.com/pipelines -p poke or -p dag is probably best |
16:49:55 | <pokechu22> | oh, with regards to total size, AB will start creating a new WARC when the existing one is over 5 GiB, though it does not split WARCs mid-file, so individual warcs may end up being bigger. But if the largest file is 10 GB, then that's 15GB max for a WARC (if it's at like 4.9 GB and then the 10GB file is downloaded) which should be fine |
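A minimal sketch of that rollover behaviour (not ArchiveBot's actual code); the threshold and file sizes mirror the numbers in the discussion:

```python
GiB = 2**30

def needs_new_warc(current_warc_bytes: int, threshold: int = 5 * GiB) -> bool:
    """Roll over to a new WARC once the current one passes the threshold,
    but never split a single response across WARCs."""
    return current_warc_bytes >= threshold

# Worst case from the discussion: a WARC just under 5 GiB receives a 10 GB
# response, so one WARC can end up around 15 GB before rolling over.
print(needs_new_warc(int(4.9 * GiB)))               # False -> the 10 GB file still goes in
print(needs_new_warc(int(4.9 * GiB) + 10 * 10**9))  # True  -> next response starts a new WARC
```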
17:01:53 | | Wohlstand quits [Quit: Wohlstand] |
17:02:08 | | Wohlstand (Wohlstand) joins |
17:04:42 | <pokechu22> | nulldata: does your list include 32-bit drivers (and I guess arm drivers and any other platforms like that)? |
17:33:09 | | grill quits [Ping timeout: 260 seconds] |
17:34:56 | | grill (grill) joins |
17:52:35 | | grill quits [Ping timeout: 276 seconds] |
17:54:08 | | grill (grill) joins |
17:57:13 | <justauser|m> | Quoting from the "Valhalla" page: |
17:57:15 | <justauser|m> | > until the Internet Archive (or another entity) grows its coffers/storage enough that 80-100tb is "no big deal" |
17:57:38 | <justauser|m> | Apparently this project is now officially obsolete. |
17:57:42 | <nicolas17> | what is Valhalla? |
17:57:59 | <justauser|m> | https://wiki.archiveteam.org/index.php/Valhalla |
18:01:41 | | grill quits [Ping timeout: 276 seconds] |
18:08:02 | <that_lurker> | No I would say it's still a big deal, though not as big as it was before |
18:10:06 | <h2ibot> | HadeanEon edited Deaths in 2007 (+509, BOT - Updating page: {{saved}} (5),…): https://wiki.archiveteam.org/?diff=56418&oldid=55439 |
18:10:07 | <h2ibot> | HadeanEon edited Deaths in 2007/list (+34, BOT - Updating list): https://wiki.archiveteam.org/?diff=56419&oldid=55330 |
18:40:50 | | awauwa quits [Quit: awauwa] |
19:11:15 | <h2ibot> | HadeanEon edited Deaths in 2011 (+474, BOT - Updating page: {{saved}} (204),…): https://wiki.archiveteam.org/?diff=56420&oldid=55740 |
19:11:16 | <h2ibot> | HadeanEon edited Deaths in 2011/list (+27, BOT - Updating list): https://wiki.archiveteam.org/?diff=56421&oldid=55741 |
19:14:07 | | FiTheArchiver joins |
19:14:23 | | FiTheArchiver quits [Remote host closed the connection] |
19:24:35 | | cuphead2527480 quits [Quit: Connection closed for inactivity] |
19:30:23 | <h2ibot> | Pokechu22 created Sympa (+36, Redirected page to [[Mailing Lists#Software]]): https://wiki.archiveteam.org/?title=Sympa |
19:30:24 | <h2ibot> | Pokechu22 edited Mailing Lists (+32, /* Software */ https://lists.compasspoint.org/…): https://wiki.archiveteam.org/?diff=56423&oldid=56363 |
20:02:41 | | Dada quits [Remote host closed the connection] |
20:07:31 | | Dada joins |
20:18:14 | | TastyWiener95 quits [Ping timeout: 260 seconds] |
20:21:24 | | TastyWiener95 (TastyWiener95) joins |
20:50:35 | <h2ibot> | HadeanEon edited Deaths in 2015 (+374, BOT - Updating page: {{saved}} (316),…): https://wiki.archiveteam.org/?diff=56424&oldid=56190 |
20:50:36 | <h2ibot> | HadeanEon edited Deaths in 2015/list (+36, BOT - Updating list): https://wiki.archiveteam.org/?diff=56425&oldid=56191 |
21:18:05 | | Teabag (Teabag) joins |
21:22:27 | | Webuser440238 joins |
21:24:10 | <Webuser440238> | hello, can someone please send the site with WARCs that are about to be sent to webarchive? It was here but I lost it |
21:24:41 | <pokechu22> | I assume you're thinking of https://archive.fart.website/archivebot/viewer/ ? |
21:25:14 | <pokechu22> | note that WARCs are still listed there even after they end up on web.archive.org (it's more of an index of https://archive.org/details/archivebot) |
21:29:51 | <Webuser440238> | yep, thank you. And thanks for the information too |
21:30:19 | | Island joins |
21:31:44 | | Island quits [Client Quit] |
21:32:54 | | Webuser440238 quits [Client Quit] |
21:37:54 | | etnguyen03 (etnguyen03) joins |
21:52:39 | <Teabag> | Hi, does AT save marketing sites? A certain reward scheme is moving their offers from website to app only "soon". Would this be worth archiving? |
21:55:45 | <h2ibot> | HadeanEon edited Deaths in 2017 (+426, BOT - Updating page: {{saved}} (373),…): https://wiki.archiveteam.org/?diff=56426&oldid=56192 |
21:55:46 | <h2ibot> | HadeanEon edited Deaths in 2017/list (+31, BOT - Updating list): https://wiki.archiveteam.org/?diff=56427&oldid=55960 |
21:57:41 | <pokechu22> | Teabag: might be worth saving, yeah |
22:02:25 | <Teabag> | Cool, the site is https://priority.o2.co.uk/ (rather limited without JS and links to the main company site, hope those aren't problems) |
22:02:39 | <Teabag> | Thank you! :) |
22:08:46 | <pokechu22> | Hmm, archivebot mainly requires JS, but it looks like it should at least navigate the site, even if it doesn't discover images |
22:08:54 | <pokechu22> | err sorry |
22:09:31 | <pokechu22> | archivebot doesn't run JS, but it looks like that page uses <a href=...> for all of the links; it's just the images that are loaded via JS. So archivebot won't discover images but it will at least find info about all of the deals |
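A small sketch of that distinction: a crawler that doesn't run JS only sees the static HTML, so collecting <a href> targets roughly approximates what it can discover on a page like this (stdlib only; the real site may serve different markup to a plain client or require extra headers):

```python
from html.parser import HTMLParser
import urllib.request

class HrefCollector(HTMLParser):
    """Collect <a href=...> targets, i.e. the links visible without running JS."""
    def __init__(self) -> None:
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

html = urllib.request.urlopen("https://priority.o2.co.uk/").read().decode("utf-8", "replace")
collector = HrefCollector()
collector.feed(html)
print(f"{len(collector.links)} static links found without executing any JS")
```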
22:09:46 | | pixel leaves [Error from remote client] |
22:25:15 | | Dada quits [Remote host closed the connection] |
22:26:27 | | APOLLO03 quits [Quit: Leaving] |
22:44:26 | | nexusxe9 joins |
22:44:47 | | nexusxe2 joins |
22:47:38 | | nexusxe9 quits [Client Quit] |
22:47:38 | | nexusxe2 quits [Client Quit] |
22:59:12 | | Webuser021220 joins |
22:59:17 | | Webuser021220 quits [Client Quit] |
23:19:02 | | Wohlstand quits [Quit: Wohlstand] |
23:35:39 | | etnguyen03 quits [Client Quit] |
23:48:44 | | nexusxe quits [Quit: Leaving] |