| 02:34:05 | | graham quits [Client Quit] |
| 03:59:26 | | jlwoodwa joins |
| 04:02:28 | | atphoenix quits [Remote host closed the connection] |
| 04:03:11 | | atphoenix (atphoenix) joins |
| 06:26:41 | | @rewby quits [Ping timeout: 265 seconds] |
| 08:31:33 | | rewby (rewby) joins |
| 08:31:33 | | @ChanServ sets mode: +o rewby |
| 09:25:00 | | qw3rty quits [Read error: Connection reset by peer] |
| 09:25:12 | | qw3rty joins |
| 10:13:59 | | AmAnd0A quits [Ping timeout: 252 seconds] |
| 10:16:32 | | AmAnd0A joins |
| 12:58:11 | | AmAnd0A quits [Ping timeout: 265 seconds] |
| 12:58:21 | | AmAnd0A joins |
| 13:06:39 | | AmAnd0A quits [Read error: Connection reset by peer] |
| 13:06:58 | | AmAnd0A joins |
| 15:10:16 | | IDK quits [Client Quit] |
| 15:38:08 | | jlwoodwa quits [Ping timeout: 252 seconds] |
| 16:05:30 | | fionera (Fionera) joins |
| 16:06:44 | <fionera> | Is it possible to get a full dump of all currently known urls? xD I want to ingest them into one clickhouse table |
| 16:46:37 | | Cydog|m joins |
| 17:10:33 | <@kaz> | I mean in theory you could grab the dumps |
| 17:36:15 | | graham joins |
| 17:45:13 | | graham quits [Client Quit] |
| 17:46:14 | | graham joins |
| 17:56:47 | | graham quits [Client Quit] |
| 18:00:13 | <@JAA> | There isn't a single combined dump, but yeah, the dumps are all publicly accessible and only about 650 GiB in total. |
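[Editor's note: a minimal sketch of the kind of ingestion fionera describes, assuming the dumps decompress to one URL per line; the table name, column name, batch size, and server details are illustrative, not from the log.]

```python
from itertools import islice
from typing import Iterable, Iterator

def batched_urls(lines: Iterable[str], batch_size: int = 100_000) -> Iterator[list[list[str]]]:
    """Group non-empty URL lines into insert-sized batches of one-column rows."""
    stripped = (line.strip() for line in lines)
    rows = ([url] for url in stripped if url)  # one-column rows for a bulk insert
    while True:
        batch = list(islice(rows, batch_size))
        if not batch:
            return
        yield batch

# Hypothetical usage with the clickhouse-connect client (host and schema assumed):
# import clickhouse_connect
# client = clickhouse_connect.get_client(host='localhost')
# client.command(
#     'CREATE TABLE IF NOT EXISTS urls (url String) ENGINE = MergeTree ORDER BY url'
# )
# with open('dump.txt') as f:
#     for batch in batched_urls(f):
#         client.insert('urls', batch, column_names=['url'])
```

Batching keeps memory bounded and amortizes insert overhead, which matters at the ~650 GiB scale JAA mentions.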
| 18:10:06 | | graham joins |
| 18:19:07 | | graham quits [Client Quit] |
| 19:15:08 | | jlwoodwa joins |
| 19:20:01 | | jlwoodwa quits [Ping timeout: 265 seconds] |
| 20:16:14 | | GiorgioBrux quits [Ping timeout: 252 seconds] |
| 23:45:26 | | AmAnd0A quits [Ping timeout: 252 seconds] |
| 23:58:33 | | Chris50106 (Chris5010) joins |