00:19:43shgaqnyrjp_ quits [Remote host closed the connection]
00:20:06shgaqnyrjp_ (shgaqnyrjp) joins
00:27:42<fireonlive>https://github.com/Pennyw0rth/NetExec/pull/335
00:27:58<fireonlive>>Add Recall module for dumping all users Microsoft Recall DBs & screenshots
00:32:03Mateon2 joins
00:33:58Mateon1 quits [Ping timeout: 255 seconds]
00:33:58Mateon2 is now known as Mateon1
00:37:02etnguyen03 quits [Client Quit]
00:54:44<fireonlive>https://dl.fireon.live/irc/4b852d2afceea6f0/ethicalai.png https://www.change.org/p/our-data-our-choice-stop-unconsented-ai-training-now there's something about this ad that's strange but I can't quite put a finger on it ;)
01:15:14etnguyen03 (etnguyen03) joins
01:35:23tmob_ joins
01:38:46tmob quits [Ping timeout: 255 seconds]
01:53:24<fireonlive>https://dl.fireon.live/irc/0c9b8384362ebbb8/tweet-thread-baked-burger.html
01:53:25<fireonlive>enjoy
01:57:59<nicolas17>what in the goddamn fuck
02:02:44<fireonlive>😂
02:03:13<fireonlive>👨‍🍳
02:04:31fireonlive grabs the hotdog thread...
02:06:04<fireonlive>https://dl.fireon.live/irc/3b074b5813d1e982/tweet-thread-air-fried-hotdog.html
02:12:08<@JAA>The water at the end... lol
02:13:57<fireonlive>xP
02:14:15<@JAA>The rest I can sort of understand, but that...
02:14:49<fireonlive>rehydrate!
02:21:47<nicolas17>pabs: https://www.debian.org/News/2024/20240606 yikes
02:24:57<nicolas17>may want to AB that + https://www.debian.org/News/2024/judgement.pdf + https://www.wipo.int/amc/en/domains/decisions/pdf/2024/d2024-0770.pdf
02:26:25<fireonlive>nicolas17: done
02:42:45<fireonlive>https://twitter.com/TheInsiderPaper/status/1799033540425966075
02:42:49<fireonlive>>WATCH: Boeing 777 trails flames on take-off before engine issue forces emergency landing in Canada
02:43:24@JAA resets the counter.
02:43:32<fireonlive>it's been 0 days...
02:48:20etnguyen03 quits [Client Quit]
03:06:40etnguyen03 (etnguyen03) joins
03:10:42<nicolas17>https://www.wipo.int/amc/en/domains/decisions/pdf/2024/d2024-0770.pdf#page=4 ok the guy's reply makes me think he may need psychiatric assistance
03:11:53<fireonlive>lmao
03:11:54<fireonlive>wow
03:17:27etnguyen03 quits [Client Quit]
03:21:48etnguyen03 (etnguyen03) joins
03:21:52etnguyen03 quits [Remote host closed the connection]
03:22:37BlueMaxima joins
03:24:31MetaNova quits [Ping timeout: 255 seconds]
03:29:55MetaNova (MetaNova) joins
04:20:20tmob_ quits [Read error: Connection reset by peer]
04:21:34tmob joins
04:22:54SootBector quits [Remote host closed the connection]
04:23:17SootBector (SootBector) joins
04:24:22flotwig quits [Ping timeout: 255 seconds]
04:47:49jamesp quits [Client Quit]
05:40:25nicolas17 quits [Ping timeout: 255 seconds]
05:45:12<fireonlive>https://x.com/vmfunc/status/1799291381988655391 dont use github logged in for a while
06:51:04^ quits [Ping timeout: 255 seconds]
07:11:49<fireonlive>lol not patched
07:25:07BearFortress quits [Client Quit]
07:33:06<fireonlive>they patched it
07:45:37BlueMaxima quits [Read error: Connection reset by peer]
08:17:56^ (^) joins
08:24:49HackMii quits [Ping timeout: 250 seconds]
08:26:24BearFortress joins
08:27:25HackMii (hacktheplanet) joins
09:00:02Bleo1826007227196 quits [Quit: The Lounge - https://thelounge.chat]
09:01:35Bleo1826007227196 joins
09:05:10eroc1990 quits [Ping timeout: 255 seconds]
09:07:45eroc1990 (eroc1990) joins
09:14:57tmob quits [Read error: Connection reset by peer]
09:39:23<Barto>Debian judgement from Swiss authorities, funny stuff
09:40:00<Barto>and the dude is a candidate for EU elections lol
09:41:38<Barto>the dude is living a step away from the uni i studied at lol
09:43:32<Barto>personal website hereby thrown into ab
10:11:46Gereon0 quits [Ping timeout: 255 seconds]
10:33:49yarrow joins
11:14:28yarrow quits [Client Quit]
11:14:49yarrow joins
11:16:34<that_lurker>https://edition.cnn.com/2024/06/07/science/apollo-8-astronaut-william-anders-reportedly-killed-in-plane-crash/index.html
11:18:14<that_lurker>He took this photo https://www.nasa.gov/wp-content/uploads/2023/03/apollo08_earthrise.jpg
11:18:46<that_lurker>https://en.wikipedia.org/wiki/Earthrise
11:20:38etnguyen03 (etnguyen03) joins
11:24:16yarrow quits [Client Quit]
11:24:34yarrow joins
11:25:18yarrow quits [Client Quit]
11:25:37yarrow joins
11:25:38etnguyen03 quits [Remote host closed the connection]
11:26:45benjins2__ joins
11:28:43benjins2_ quits [Ping timeout: 255 seconds]
11:30:37eroc1990 quits [Ping timeout: 272 seconds]
11:30:51etnguyen03 (etnguyen03) joins
12:24:26c3manu quits [Read error: Connection reset by peer]
12:24:33c3manu (c3manu) joins
12:24:33etnguyen03 quits [Client Quit]
12:30:11HackMii quits [Remote host closed the connection]
12:31:21eroc1990 (eroc1990) joins
12:31:49HackMii (hacktheplanet) joins
12:39:58etnguyen03 (etnguyen03) joins
12:46:57muklumsum joins
12:49:47nepeat quits [Quit: ZNC - https://znc.in]
12:51:20nepeat (nepeat) joins
12:54:51muklumsum quits [Ping timeout: 272 seconds]
12:56:34muklumsum joins
13:00:36eroc1990 quits [Read error: Connection reset by peer]
13:02:34eroc1990 (eroc1990) joins
13:10:55<nulldata>https://metro.co.uk/2024/06/07/boeing-737-flight-uk-seconds-disaster-glitch-20990507/
13:12:38<nulldata>fireonlive - how about adding a days since last Boeing incident counter to eggdrop? Should be easy - just always report 0
13:16:11T31M quits [Quit: ZNC - https://znc.in]
13:16:34T31M joins
13:17:33etnguyen03 quits [Client Quit]
13:30:40midou quits [Ping timeout: 255 seconds]
14:02:16midou joins
14:22:23etnguyen03 (etnguyen03) joins
14:23:19Arcorann quits [Ping timeout: 255 seconds]
14:27:17xkey quits [Quit: WeeChat 4.1.1]
14:27:26xkey (xkey) joins
15:00:00driib quits [Client Quit]
15:03:50driib (driib) joins
16:03:44<that_lurker>-rss- The Backrooms of the Internet Archive: https://blog.archive.org/2024/06/01/the-backrooms-of-the-internet-archive/ https://news.ycombinator.com/item?id=40618079
16:05:28etnguyen03 quits [Client Quit]
16:19:20shgaqnyrjp_ is now known as shgaqnyrjp
17:20:07etnguyen03 (etnguyen03) joins
17:45:53eightthree quits [Remote host closed the connection]
18:04:35eightthree joins
18:06:58pabs quits [Ping timeout: 255 seconds]
18:08:12pabs (pabs) joins
18:21:03eightthree quits [Remote host closed the connection]
18:30:52etnguyen03 quits [Client Quit]
18:31:43systwi quits [Ping timeout: 255 seconds]
18:35:19etnguyen03 (etnguyen03) joins
18:43:20systwi (systwi) joins
19:01:31eightthree joins
19:04:08etnguyen03 quits [Client Quit]
19:07:34nicolas17 joins
19:11:31<fireonlive>https://news.ycombinator.com/item?id=40618742#40618832
19:11:33<fireonlive>o_O
19:12:28<fireonlive>>I quite like the fragility of it, it makes it more apparent that everything is transient. In a way I wish the IA had a half life on content, that it would decay over time, pages and images would be randomly deleted. Little by little it would rot and become nothing, a reflection of humanity.
19:12:54@JAA randomly deletes samwillis.
19:13:13<fireonlive>>:)
19:23:28tzt quits [Ping timeout: 255 seconds]
19:23:32etnguyen03 (etnguyen03) joins
19:25:01tzt (tzt) joins
19:25:42tzt quits [Client Quit]
19:27:56tzt (tzt) joins
19:35:38etnguyen03 quits [Client Quit]
19:44:37tzt quits [Ping timeout: 255 seconds]
19:48:29etnguyen03 (etnguyen03) joins
20:02:40eightthree quits [Remote host closed the connection]
20:35:08etnguyen03 quits [Remote host closed the connection]
20:48:06<fireonlive>>The 12-month Amazon Web Services Free Tier period associated with your Amazon Web Services account 190724040252 will expire on June 30, 2024.
20:48:06<fireonlive>hmmm, wonder how much that execute-api will cost me :p
20:49:21<kiska>Lots
20:49:42<nicolas17>doesn't your bill say you spent $2 and the free tier covers $2 for a total of $0?
20:50:19<kiska>Yay tracker websockets are coming alive!
20:51:14<kiska>I should learn influxdb retention policies and continuous queries... https://server8.kiska.pw/uploads/ed0ca84c5c805dbc/image.png
20:52:54<fireonlive>hmm *check*
20:52:56<@JAA>92.2% 92.2% 92.2% 92.2% 92.2%
20:53:06<@JAA>Very useful axis, thanks Grafana!
20:53:15<fireonlive>i like how the emails they send link to 'awstrack.me'
20:53:21<fireonlive>at least it's https
20:53:26<nicolas17>kiska: don't do that at the last minute
20:54:03<fireonlive>i get emails that contain... business content...
20:54:11<fireonlive>and the tracking url to follow through is http only
20:54:13<fireonlive>lmao
20:54:29<kiska>Is this axis more useful? https://server8.kiska.pw/uploads/b2f05bd5b583d212/image.png
20:55:43<nicolas17>kiska: https://github.com/influxdata/influxdb/issues/8088#issuecomment-426143558
20:56:15<Barto>that_lurker: rip bill anders, i did archive the museum he worked with. Any other urls are welcome.
20:56:41<kiska>nicolas17: then I am sol cause I don't have sufficient disk to perform that operation :D
20:56:44lukash98 quits [Quit: The Lounge - https://thelounge.chat]
20:57:09<nicolas17>depends if you care about storing lower-res older data
20:59:17<kiska>I think what I'll do is use autogen to store single websocket data, then use retention policy "whatever" and downsample it to 1s or something...
20:59:21<that_lurker>Barto: Don't think there are others. NASA of course has a lot, but that should be archived already
20:59:24<fireonlive>apparently i had 42 requests last month
20:59:34<kiska>with autogen I think I'll set it for 30 days
20:59:41<nicolas17>yeah but what do you do with your current data?
20:59:49<kiska>Display it to grafana :D
20:59:56<kiska>Basically nothing
21:00:19<nicolas17>do you throw away data older than 30d?
21:00:20<kiska>So I guess I need a bigger disk first
21:00:29<kiska>Nope I don't throw away data, yet
21:00:33<nicolas17>to downsample it to 1s resolution before deleting it, you need more disk space :P
21:00:40<fireonlive>aw kiska it's fine the way it is
21:00:47<nicolas17>hence why it's best to plan the retention and downsampling since day 1
21:00:51<kiska>Tell me that when I run out of disk space
21:01:01<fireonlive>:p
21:01:15<Barto>that_lurker: yeah, nasa is out of the question
21:01:22<kiska>My previous solutions have been asking advin servers to upgrade my VM
21:01:34<kiska>Obv that is going to be untenable :D
21:02:27<fireonlive>yeet all raw websocket data to IA
21:02:30<kiska>nicolas17: I could perhaps downsample the inactive projects first, then delete them
21:02:40<kiska>fireonlive: I am sure the IA will love that
21:02:47<fireonlive>very much :D
21:03:34<kiska>I think the rate of ingest would be about 5Mb/s and perhaps... 70 PUT req/s
21:03:36<nicolas17>delete the data from inactive projects in autogen, but not in the downsampled retention policy? see the ticket :P
21:04:08lukash98 joins
21:04:30<kiska>What I'll do is downsample the data and put into RP "5sRP" and then set autogen for that project to 30d retention
21:04:38<kiska>That should work?
21:05:05<nicolas17>do you use different influxdb databases for each project?
21:05:19<kiska>Different "measurements"
21:05:25<kiska>Not databases
21:05:41<nicolas17>database.rp.measurement
21:05:55<nicolas17>if you alter autogen to 30d, it will erase old data for all measurements
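For reference, a minimal sketch of the retention-policy and continuous-query setup being discussed, using the InfluxDB 1.x Python client. The database name "tracker" and the CQ name are assumptions (the real names aren't shown in the log); the 30d autogen retention and the "5sRP" policy come from the conversation above.

    from influxdb import InfluxDBClient  # InfluxDB 1.x client

    # Database name is an assumption; the real one isn't visible in the log.
    client = InfluxDBClient(host="localhost", port=8086, database="tracker")

    # Keep only 30 days of raw data in the default policy. As nicolas17 notes,
    # shortening autogen drops data older than 30d for *all* measurements in it.
    client.alter_retention_policy("autogen", database="tracker", duration="30d")

    # A second policy that keeps the downsampled series indefinitely.
    client.create_retention_policy("5sRP", duration="INF", replication=1,
                                   database="tracker")

    # Continuous query: average every field of every measurement into 5-second
    # buckets and write the result to "tracker"."5sRP".<same measurement name>.
    # CQs only act on newly arriving data; backfilling the existing raw points
    # needs a one-off SELECT ... INTO, which is the step that wants extra disk.
    client.query('''
        CREATE CONTINUOUS QUERY "cq_5s" ON "tracker"
        BEGIN
          SELECT mean(*) INTO "tracker"."5sRP".:MEASUREMENT
          FROM /.*/
          GROUP BY time(5s), *
        END
    ''')

Note that mean(*) with the :MEASUREMENT backreference writes the downsampled fields back as mean_<field>, so any Grafana queries pointed at "5sRP" would need adjusting.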
21:06:58<kiska>RIP I seem to have forgotten my password to influx :D
21:07:20<kiska>Ok nevermind was using wrong username
21:08:28<fireonlive>hunter2
21:11:27@JAA hands kiska one (1) password manager.
21:12:24<kiska>nicolas17: My schema: https://paste.kiska.pw/PeptizingGymnasiums
21:12:36<kiska>Obv repeat for all projects
21:13:03<kiska>It's a very shit schema, isn't it :D
21:13:45<kiska>I might just move this into OVH just to not deal with it :D
21:19:39<katia>https://reclaimthenet.org/eu-plans-mass-surveillance-data-collection-device-monitoring-encryption-backdoors
21:25:06<fireonlive>>i post ai stuff to inspire actual artists, go support them
21:25:11<fireonlive>.....ok
21:25:40<fireonlive>sound logic checks out
21:41:36<kiska>nicolas17: Have I stunned you on how bad my schema is? :D
21:43:31<steering>kiska: what is that a schema for? influxdb?
21:43:38<kiska>yeah
21:43:54<kiska>It's very shit
21:44:05<kiska>Schema is for https://grafana3.kiska.pw/d/000000/archiveteam-tracker-stats
21:47:24<steering>(i know nothing about influxdb) what would be the analogy to an old school (sql) db here? the "series" are tables/keys in a table and the fields are columns?
21:49:10<steering>>Each point consists of several key-value pairs called the fieldset and a timestamp. When grouped together by a set of key-value pairs called the tagset, these define a series. Finally, series are grouped together by a string identifier to form a measurement.
21:49:25<steering>(from the wikipedia article) interesting hierarchy.
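A toy illustration of that hierarchy with the InfluxDB 1.x Python client; the measurement, tag, and field names below are invented for the example and are not kiska's actual schema.

    from influxdb import InfluxDBClient

    client = InfluxDBClient(host="localhost", port=8086, database="tracker")

    point = {
        "measurement": "tracker_stats",    # string identifier grouping series
        "tags": {                          # tagset: indexed key/value pairs;
            "project": "example_project",  # measurement + tagset = one series
            "worker": "example_worker",
        },
        "fields": {                        # fieldset: the values actually stored
            "items_done": 1234,
            "bytes_downloaded": 5678901,
        },
        "time": "2024-06-07T20:50:00Z",    # per-point timestamp
    }

    client.write_points([point])

In rough SQL terms: the measurement plays the role of the table, tags are indexed columns, fields are unindexed value columns, and each distinct tag combination is its own series.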
21:51:39<kiska>Here is a sample of the data https://paste.kiska.pw/ProcrastinatedFriend
21:54:51eightthree joins
21:56:44<kiska>steering: I hope you can visualise the database now
21:56:46<kiska>:D
22:00:00<steering>ah, yeah
22:00:46<steering>that's a pretty nice way to be able to define it. better than *glares* rrdtool :P
22:01:55<kiska>lmao
22:03:25<kiska>however I am running out of space
22:03:28<kiska>steering: /dev/sda2 237G 209G 18G 93% /
22:03:58<kiska>So I am now thinking of downsampling the data or something
22:05:05<@JAA>Hey look, kiska is rediscovering why the official Grafana instance no longer exists. :-D
22:05:28<kiska>:D
22:05:46<kiska>Well that was one of the reasons
22:12:07<kiska>steering: here it is with a little more readable timestamps https://paste.kiska.pw/ForestallsMoros
22:13:10<kiska>JAA: The one thing that can help with the longevity of the grafana instance is not having thousands of samples per second stored :D
22:13:26<@JAA>:o
22:13:39<kiska>See the paste :D
22:13:58<@JAA>I don't actually know the details about the official instance. Wasn't me who ran it.
22:14:21<kiska>Yeah, one of the issues was people kept hammering days' worth of data at a time
22:14:31<kiska>And at very low intervals ie 10s or 30s
22:14:50<@JAA>Yeah, and Grafana or whatever layer underneath it doesn't cache that very well.
22:14:56<kiska>So influx got very overloaded
22:15:06<kiska>And influx doesn't cache when using the now() feature
22:15:18<kiska>Or at least that is how I understand it
22:15:29<kiska>At least the data is in memory :D
22:19:08<@JAA>It could cache time slices so it would only have to fetch the latest block of data.
22:24:22<kiska>Yeah that would be smart :D
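JAA's time-slice idea, as a rough generic sketch (query_fn, the bucket size, and the cache layout are all invented here, not anything Grafana or InfluxDB actually do): closed slices are served from a cache, and only the slice that still overlaps now() gets re-queried.

    from time import time

    BUCKET = 300              # cache in aligned 5-minute slices (arbitrary)
    _cache = {}               # slice start -> list of points

    def fetch_range(query_fn, start, end, now=None):
        """Fetch [start, end), assumed slice-aligned for simplicity. Completed
        slices come from the cache; the still-open slice is always re-queried."""
        now = time() if now is None else now
        out = []
        b = start - (start % BUCKET)
        while b < end:
            if b not in _cache or b + BUCKET > now:   # open slice: refresh
                _cache[b] = query_fn(b, b + BUCKET)
            out.extend(_cache[b])
            b += BUCKET
        return out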
22:25:11<kiska>[1717514.306290] Out of memory: Killed process 760 (influxd) total-vm:240768848kB, anon-rss:31309036kB, file-rss:0kB, shmem-rss:0kB, UID:111 pgtables:168664kB oom_score_adj:0
22:25:13<kiska>Oh...
22:29:04<@JAA>RIP
22:31:00<that_lurker>kiska you need https://github.com/facebookincubator/oomd
22:32:23<@JAA>I like that it's coming from Facebook. lol
22:33:37<kiska>that_lurker: well no... there is only 1 process on this machine that uses a lot of memory and that is influxdb
22:33:48<kiska>So this wouldn't help
22:34:06<kiska>oomd would still have killed influxd so...
22:34:10<@JAA>Well, it would kill the process before the penguin could get angry.
22:34:20<@JAA>But yeah, same outcome
22:34:22<kiska>Well that is true :D
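For completeness, one alternative to a separate userspace OOM daemon is capping the influxd unit itself with a systemd resource-control drop-in. A minimal sketch, assuming a systemd-managed influxdb.service on cgroup v2; the path and the 24G figure are invented:

    # Hypothetical drop-in, e.g. /etc/systemd/system/influxdb.service.d/memory.conf
    # MemoryMax is a hard cgroup cap, so influxd gets reined in (or killed) on its
    # own before the kernel-wide OOM killer has to pick a victim.
    [Service]
    MemoryMax=24G
    MemorySwapMax=0

Same outcome for influxd if the limit is hit, it just keeps the rest of the box out of it.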
23:13:13BlueMaxima joins
23:42:18tzt (tzt) joins