01:07:05DigitalDragons quits [Quit: Ping timeout (120 seconds)]
01:09:56DigitalDragons (DigitalDragons) joins
04:39:47fluke quits [Ping timeout: 268 seconds]
05:16:57atphoenix__ (atphoenix) joins
05:19:20atphoenix_ quits [Ping timeout: 268 seconds]
05:37:09Starchives_ (Starchives) joins
05:40:50Starchives__ quits [Ping timeout: 268 seconds]
05:43:18nothere quits [Ping timeout: 268 seconds]
06:23:35nothere joins
07:17:54pabs quits [Read error: Connection reset by peer]
07:18:29pabs (pabs) joins
13:07:37pabs quits [Remote host closed the connection]
13:08:04pabs (pabs) joins
16:45:35atphoenix_ (atphoenix) joins
16:48:46atphoenix__ quits [Ping timeout: 268 seconds]
16:52:29Maturion joins
17:10:21Webuser964183 joins
17:10:23<Webuser964183>Hi!
17:10:29<Webuser964183>Let's continue here :3
17:15:41<justauser>So, we know about it, but we're not working on it yet. Lists of affected URLs are welcome.
18:26:51<Webuser964183>Great news!
18:27:59<Webuser964183>I made a list with Google, but there are only 80 indexed domains.
18:30:00<justauser>Great.
18:30:12<justauser>You can upload the results to https://transfer.archivete.am/ and paste the link here.
18:32:35<Webuser964183>https://transfer.archivete.am/FJTuD/domaines_xs4all.txt
18:32:36<eggdrop>inline (for browser viewing): https://transfer.archivete.am/inline/FJTuD/domaines_xs4all.txt
18:33:10<justauser>Did you copy links within domains?
18:33:44<justauser>No need to scrape from them - we'll do it either way - but if Google had them, they would be nice.
18:34:49<Webuser964183>I did it by searching site:xs4all.nl on Google
18:34:56<Webuser964183>Then I grabbed only the domains
18:35:01<Webuser964183>There are no duplicates
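[The "grab only the domains, no duplicates" step above can be sketched as follows — a minimal example, assuming a hypothetical input file urls.txt with one URL per line; this is not the script Webuser964183 actually used.]

```python
# Sketch: reduce a list of URLs to a sorted, deduplicated list of hostnames.
# Assumes a hypothetical file "urls.txt" with one URL per line.
from urllib.parse import urlparse

def unique_domains(urls):
    """Return the sorted set of hostnames found in an iterable of URLs."""
    domains = set()
    for url in urls:
        # urlparse().hostname is already lowercased; it is None for
        # malformed or scheme-less lines, which we skip.
        host = urlparse(url.strip()).hostname
        if host:
            domains.add(host)
    return sorted(domains)

if __name__ == "__main__":
    with open("urls.txt") as f:
        for domain in unique_domains(f):
            print(domain)
```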
18:35:09<justauser>Ack.
18:35:43<Webuser964183>Sorry, maybe I didn't understand something? ;(
18:36:39<justauser>Ack = Acknowledgement. That is, I got your message.
18:39:17<Webuser964183>Okay, sorry, I just didn't mean to frustrate you; I'm trying to help as best I can. By the way, is there anything else I can do to help you?
18:42:21<justauser>There are more things to scrape apart from Google - there are two lists on the wiki.
18:42:35<justauser>https://wiki.archiveteam.org/index.php/Site_exploration and https://wiki.archiveteam.org/index.php/Finding_subdomains
18:42:50<justauser>Setting up a Warrior is always a good idea.
18:43:05<justauser>https://wiki.archiveteam.org/index.php/ArchiveTeam_Warrior
18:43:26<Webuser964183>I have had an active Warrior for a long time, but I want to help more :D
18:47:12<Webuser964183>Bing is giving 5 results.
18:47:58<Webuser964183>But I will try to expand my list of 80 domains a lot