00:00:26<jasonswohl>BPCZ is ZFS not already open source?!
00:01:20<fireonlive>it’s not as permissive as some would like it to be i believe
00:02:00<jasonswohl>ah, sounds like if you use it in a business, or are thinking about remotely profiting, PAY US OUR $$$$$$$$$$$$$$$$
00:03:32<BPCZ>fireonlive: split brain too. Oracle zfs is different from openzfs.
00:04:11<BPCZ>jasonswohl: most companies are fine with using zfs under cddl, linux just won’t let it into the kernel
00:05:21<fireonlive>ahh
00:05:35<nulldata>jasonswohl The standard Oracle license - if you even think of the name in your sleep, you now owe them licensing fees for your dreams.
00:06:10<nulldata>Unsure if you owe them? Their answer is yes.
00:07:36<nicolas17>there's this famous post https://web.archive.org/web/20150811052336/https://blogs.oracle.com/maryanndavidson/entry/no_you_really_can_t
00:08:30<nulldata>Larry needs a couple of new mega-yachts - his current ones are a few years old at this point. Basically unusable.
00:09:47<fireonlive>few years old? how embarrassing
00:10:28<nicolas17>there was so much backlash about that blog post that it was removed within 24 hours
00:10:44<jasonswohl>o gowd ^
00:17:53Lord_Nightmare quits [Client Quit]
00:18:14Lord_Nightmare (Lord_Nightmare) joins
00:34:40AmAnd0A quits [Read error: Connection reset by peer]
00:36:24AmAnd0A joins
00:41:21<nulldata>nicolas17 - how dare you reverse-obtain their deleted blog post! A team of Oracle lawyers have been dispatched to your location.
00:52:01<jasonswohl>nicolas17 rut row............ * looks for a new apt before they show up :)
01:13:50HP_Archivist quits [Ping timeout: 252 seconds]
01:17:00HP_Archivist (HP_Archivist) joins
01:57:03AmAnd0A quits [Read error: Connection reset by peer]
01:57:19AmAnd0A joins
03:04:24TheTechRobo quits [Quit: bye]
03:05:49TheTechRobo (TheTechRobo) joins
03:56:24<nulldata>Update on InfluxDB Cloud - it wasn't just Belgium, Sydney was shut down and deleted too. Sounds like they have some backup log files from Belgium for the past 100 days and they are attempting to extract bits of data from them for customers. Sydney customers are shit out of luck.
03:57:15<fireonlive>two regions? wow
03:57:49<fireonlive>total geniuses running that company
03:58:55<nicolas17>often when I'm about to delete a file, I instead run: "at 'next week' <<< 'rm file.iso'"
03:59:37<nicolas17>(especially when it's a large download that is excluded from backups)
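nicolas17's deferred-delete trick, sketched out (assumes a running atd; the herestring feeds the command to at's stdin):

```shell
# Instead of deleting now, queue the rm to run next week via at(1).
at 'now + 1 week' <<< 'rm ~/Downloads/file.iso'
# atq           # list pending jobs (job number, scheduled time)
# atrm <job>    # cancel it if you change your mind before the week is up
```

If the file turned out to be needed after all, `atrm` the job and nothing was lost.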
04:01:58<BPCZ>Why do people even use influx cloud offerings? Setting up your own metrics system isn’t even all that hard. We have a 50PiB cluster backing our Cassandra
04:06:30<fireonlive>ye, kiska's influxdb/etc is all self-hosted https://grafana3.kiska.pw/d/000000/archiveteam-tracker-stats?orgId=1&refresh=1m
04:06:59<fireonlive>it's not like huge but not like had to go and cry for their cloud offering lol
04:12:22<BPCZ>lol I piss off my metrics team by asking if we can do 0.1 second granularity for some metrics I care about because microbursts suck
04:14:45<nicolas17>BPCZ: https://blog.nicolas17.xyz/posts/load-average-spikes.html
04:16:17<nicolas17>tl;dr telegraf uses a dozen threads to check a dozen metrics at the same time, it's super brief CPU activity, but when the kernel checks how many threads are runnable to calculate the load average, at the *exact* millisecond those threads are running, we get a big spike
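The sampling race described there is visible in the same place the kernel exposes it: on Linux, /proc/loadavg carries the damped averages plus an instantaneous runnable/total task count (uninterruptible D-state tasks count toward the average too). A minimal reader, as a sketch:

```python
# /proc/loadavg (Linux): three exponentially-damped averages over 1/5/15
# minutes of the runnable + uninterruptible task count, then an
# instantaneous "runnable/total" pair and the most recent PID.
def read_loadavg(path="/proc/loadavg"):
    with open(path) as f:
        one, five, fifteen, sched, _last_pid = f.read().split()
    runnable, total = (int(x) for x in sched.split("/"))
    return float(one), float(five), float(fifteen), runnable, total

one, five, fifteen, runnable, total = read_loadavg()
print(f"1m={one} 5m={five} 15m={fifteen} runnable={runnable}/{total}")
```

Poll the fourth field at a high rate and you can catch exactly the kind of brief all-threads-runnable burst telegraf causes.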
04:21:06<BPCZ>Stealing that to see what we do at work tomorrow. Funny enough we don’t really monitor load averages. Our metrics tend to be very specific to what’s deployed. The one that I want more data on is network traffic, and I need to extend our eBPF tooling to allow for source - destination tcp socket monitoring for dropped packets to specific servers
04:22:42<BPCZ>Absolutely cursed that I can’t trust a modern network stack to deliver packets. But this is also the same network stack the engineers had me tune their packet retry wait from 2ns to 2000000ns and that actually fixed the problem
04:23:49<Jake>what
04:24:16<BPCZ>It’s very novel hardware with amazing failure cases
04:25:06<nicolas17>BPCZ: these tend to be boring webservers
04:25:35<BPCZ>Like sometimes if you unplug the wrong cable among hundreds of thousands the entire network panics for multiple hours as it reroutes and resends all packets in flight at that moment
04:25:52<nicolas17>no fancy HPC workloads, more like https://discuss.kde.org/
04:26:21<BPCZ>nicolas17: I’m still gonna look at it. We like load smoothing to make everyone’s lives better
04:26:38<nicolas17>yeah I meant re: "we don’t really monitor load averages"
04:27:03<nicolas17>it's not a very reliable or deterministic metric
04:27:08<BPCZ>Oh I’d still argue load average is kind of not the best metric to be collecting
04:27:13<BPCZ>lol cool yeah
04:27:20<nicolas17>but when it goes to 100 you *know* something is wrong
04:27:26<BPCZ>T.T
04:27:34<nicolas17>(when the average is <4 on that machine)
04:28:21<BPCZ>Me when my users attempt to access the same 4k file from every node at once and think it’s ok because each node is accessing a hard link (making the issue 4x worse)
04:28:38<nicolas17>highest I have seen was on a donated VM that used a fancy block storage system rather than a disk, and the storage went down
04:28:53<nicolas17>so every single process that was stuck for hours trying to access the disk, increased the load average
04:28:55<BPCZ>You ever see a file with ~90,000 hard links to it?
04:29:31flashfire42|m joins
04:30:25<fireonlive>😏
04:30:36<nicolas17>I haven't done KDE stuff in a while >.>
04:30:47<BPCZ>I still find it absolutely hilarious I can track exactly when google realized they needed to take computing seriously by who quit my last employer in the 2004-2006 timeframe to work on compute at google
04:31:00<flashfire42|m>What did I miss XD I join and all i see is a smirk from fireonlive
04:31:12<nicolas17><BPCZ> You ever see a file with ~90,000 hard links to it?
04:32:29<BPCZ>Just mass exodus of NASA engineers & PhDs that built novel HPC stuff asked by google to redo that work but without the hard bits and the first iteration was a pile of Perl that drove Borg
04:33:34<nicolas17>oh man where's that "I want to serve 5TB of data" video leaked from Google?
04:34:12<nicolas17>seems it got deleted
04:34:19<nicolas17>how do I check if a youtube video is archived on IA?
04:34:37<BPCZ>Na but similar timeline and having worked with the kind of person that actually wants to fork the linux kernel and maintain their own distro with a team of 1 it’s a special kind of hell
04:34:37<nicolas17>do youtube links Just Work on WBM if they're archived?
04:34:39<nicolas17>seems unlikely
04:36:07<nicolas17>BPCZ: https://web.archive.org/web/20220608190933/https://www.youtube.com/watch?v=3t6L-FlfeaI
04:36:13<BPCZ>You’ve not experienced pain until you see a pure Perl data management framework that consists of 1 server file that’s 12,000 lines and a single client file that’s 8,000 lines and it supports TiB/s transit, multiple backends and write targets, and inflight tar and untar operations
04:36:18<fireonlive>aw that got deleted?
04:37:00<fireonlive>best google video
04:37:06<fireonlive>*saves a copy*
04:37:56<BPCZ>Like yes hello I’d like to create a tar as fast as the backend can accept the bits. It even had full tape drive support and could set flags for how many drives it claimed out of libraries
04:38:43<fireonlive>(https://findyoutubevideo.thetechrobo.ca/ is also handy)
04:38:43<BPCZ>Oh and the insane dude that wrote it maintained a fork of coreutils to parallelize cp and md5sum
04:40:26<BPCZ>Oh now I need to go find the NSF project that asks for full BMC access to your hardware so they can add it to their k8s research cluster
04:40:36<nicolas17>BPCZ: one of the big things I did in KDE was in the SVN to Git migration
04:40:38masterX244 quits [Ping timeout: 252 seconds]
04:40:53<nicolas17>SVN stores the repository with two files per commit
04:40:58<nicolas17>there were 1.5 million commits
04:41:34<BPCZ>nicolas17: god damn, that is a project for sure
04:41:52<nicolas17>the conversion tool goes through every commit and checks what changed in it, which almost always requires getting data from older commits too since it stores deltas
04:41:56<BPCZ>Surprised you didn’t squash them and link to the commit archive but I guess the history is important
04:42:14<nicolas17>Linux's readahead is useless since it's separate files
04:42:20<BPCZ>How long did that take?
04:42:35<nicolas17>disk cache in RAM? didn't fit the whole repo in there
04:43:24<fireonlive>all in perl :D
04:44:05<nicolas17>and it was 1 monolithic SVN repository with 1.5M commits, to be converted into one git repo per app
04:44:17<nicolas17>so, you write conversion rules saying what subdirectories to grab: https://invent.kde.org/sdk/kde-ruleset/-/raw/master/icecream/icecream-rules
04:44:53<BPCZ>Oh old job was really funny, the seniors wouldn’t tell anyone including managers where their repos were stored. They just had a random location in some servers (at least 4) where they worked out of. And it came to light when the competent manager they hired was like “what happens if we need to blackstart this site” and the seniors were like “we will just pull our secret backups that get pushed to another site”
04:45:02<nicolas17>commit 1, get list of all paths modified, does any of them match any of the regexes in the conversion rules? nope, move on to commit 2
04:45:15<nicolas17>so about 3 hours later you'd get the git repo
04:45:38<BPCZ>Found it
04:45:40<BPCZ>https://www.sdsc.edu/support/user_guides/nrp.html
04:45:59<nicolas17>and look at the log, and see that there's a branch missing which seems historically important and should be converted too
04:46:17<nicolas17>so you tweak the rules, run it again, wait 3 hours, by the time you get the repo you forget what you were even doing
04:46:29<BPCZ>If you have hardware and waaaay too much trust in some random PhD candidates you can give them full BMC control of your server and get access to the worlds shittiest distributed research computer
04:46:32<nicolas17>turns out the previous tweak had a typo
04:46:58<nicolas17>so that sucked
04:47:42<BPCZ>nicolas17 how big was this repo?
04:47:50<nicolas17>so I made a thing that imported the whole history (which paths were changed in each commit) into an SQL database
04:48:11<BPCZ>You can always fit things in ram. Maybe just with some insane tomfoolery
04:48:43<nicolas17>and then I could do "select distinct commit_number from modified_paths where path like '/trunk/icecream/%'" super fast since it's indexed
04:48:56<nicolas17>then I told the conversion tool to *only* look at those commits
04:49:19<nicolas17>for some subprojects it made the conversion take 3 seconds instead of 3 hours
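The indexing trick nicolas17 describes can be miniaturized like this (table and column names taken from the quoted query; the commit data is made up):

```python
import sqlite3

# Index (commit, path) pairs once, then answer "which commits touched
# /trunk/icecream/?" from the index instead of rescanning every commit.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE modified_paths (commit_number INTEGER, path TEXT)")
db.execute("CREATE INDEX idx_path ON modified_paths (path)")
history = [
    (1, "/trunk/kdelibs/kdecore/kurl.cpp"),
    (2, "/trunk/icecream/client/main.cpp"),
    (3, "/trunk/icecream/README"),
    (3, "/trunk/kdelibs/CMakeLists.txt"),
]
db.executemany("INSERT INTO modified_paths VALUES (?, ?)", history)
rows = db.execute(
    "SELECT DISTINCT commit_number FROM modified_paths "
    "WHERE path LIKE '/trunk/icecream/%' ORDER BY commit_number"
).fetchall()
print([r[0] for r in rows])  # → [2, 3]: the only commits the tool must visit
```

The conversion tool then skips every commit not in that list, which is where the 3-hours-to-3-seconds speedup comes from.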
04:50:31<nicolas17>I got some fun reactions like "what, that was fast, what is this black magic"
04:50:44<nicolas17>and "in my days the commits had to walk uphill through the snow both ways *waves cane*"
04:52:24<nicolas17>then when writing the conversion rules and reviewing the resulting history, I literally wore out the scroll wheel in my cheap mouse from scrolling in gitk and I had to buy a new one
04:53:00<BPCZ>Ugh I have a meeting with two storage vendors tomorrow. It’s literally my entire day
04:54:02<BPCZ>Maybe someone will solve the PLC (penta-level cell) data structure issue so you can actually use PLC as a write-through cache to HDD
04:54:51<BPCZ>(Do not believe a vendors lies)
04:55:43<nicolas17>my eternal problem is I see "must have X years of experience in Y" in job descriptions and I'm like "how the fuck do I measure a year of experience in an on-and-off volunteer thing"
04:56:45<BPCZ>Oh oh oh I know this one
04:57:25<BPCZ>https://files.catbox.moe/afccyw.jpeg
04:57:57<fireonlive>i should try that
04:58:09<fireonlive>i mainly just pray i don't wake up in the morning
04:58:35<BPCZ>Why yes I do have 2 years of experience doing DevOps. Please ignore that I was promoted to this role 2 months ago and just changed my job title because I felt like I didn’t materially change what I was doing the entire time
04:59:40<nicolas17>how do I express contributions to BOINC and KDE and archiveteam and theapplewiki and buildbot and cppcheck and wireshark and a friend's random toy project, in resume form?
05:00:06masterX244 (masterX244) joins
05:00:12<nicolas17>what even is the start-end date for those
05:00:53<fireonlive>no end date if you're still workin' at it
05:01:07<fireonlive>start date... idk when you first contributed something
05:01:26<nicolas17>I never unsubscribed from the BOINC mailing lists but I haven't actually done anything in years, it's all fuzzy and vague
05:01:48<flashfire42|m>Maybe look at what projects you were running.
05:02:04<nicolas17>flashfire42|m: in BOINC?
05:02:17<flashfire42|m>Yeah
05:02:23<BPCZ>The trick to being employable is already having a job and just mogging the fuck out of the interview process for the next job
05:02:28<nicolas17>I mean like code contributions, running my own server, arguing in boinc-dev, etc
05:03:13<fireonlive>'i went full ryan sleevi for 3.5 years in boinc-dev'
05:03:19<fireonlive>'ohhhh, ok'
05:03:46<flashfire42|m>Ah ok. Idk then
05:03:52<BPCZ>My current role releveled the position to bring me in as a senior for a role that I applied to with jr qualifications
05:06:06<BPCZ>Everyday is a battle to hide just how mentally ill I am from coworkers
05:06:56<nicolas17>"throwing together imgur-bruteforce in an hour or two for archiveteam" is not worth mentioning as experience, "spending a few years administering KDE server infra" is worth mentioning even if those weren't full-time-equivalent years; but I don't know where to draw the line for the big gray area in the middle
05:08:37<fireonlive>i'm still shooting my payloads to you :D
05:09:01<nicolas17>lewd
05:09:08<fireonlive>:3
05:12:17<nicolas17>and then there is "If you see a job you want but you think you don't meet all the requirements, apply anyway!"
05:12:24<nicolas17>me: what the fuck is 'a job you want'
05:12:49<nicolas17>recruiters *suck* at making job descriptions sound interesting
05:13:17<nicolas17>I haven't *ever* seen a job ad that made me think "oh man I want to work there"
05:14:07<nicolas17>I *have* seen people asking for help in technical IRC channels that made me think "oh man I wish I was working on a problem like that, sounds fun"
05:19:02<fireonlive>BPCZ and the massive e-penis
05:19:07<fireonlive>ikr
05:20:05<fireonlive>'i eat 800QiB clusters for breakfast'
05:20:07<fireonlive>:P
05:20:40<fireonlive>is very cool tho
05:36:35<fireonlive>anyone have a preferred telegram downloader? preferably something that spits out all media and also the chat in json or something
05:36:56<BPCZ>fireonlive: I had 400PiB of storage delivered yesterday
05:37:02<fireonlive>ideally i'd feed it my login so it can access login-walled stuff
05:37:06<fireonlive>damn :D
05:37:14<BPCZ>Also we are but a small bean in the big world
05:37:38<fireonlive>i need to back up a u-haul to your dumpster
05:37:41<fireonlive>:p
05:37:44<BPCZ>Hyperscalers literally buy 70-80% of all storage produced and nearly as much compute
05:38:43<BPCZ>I will laugh very deeply the day aws has to admit how much spare capacity they float if growth ever slows or stops in cloud
05:40:24<fireonlive>lols
05:45:02<fireonlive>everyone is moving to local hardware again right :3
05:45:59<BPCZ>no :( that’s why I’m going to transition to cloud dev and scream about how inefficient aws makes their VMs to squeeze more money out of you
05:46:30<fireonlive>UwU
05:46:42<fireonlive>muh tax dollary-doos
05:46:55<fireonlive>💸
05:49:22<BPCZ>Na man it’s totally awesome using lambdas to perfectly parallelize access to your site. I mean what if your burning-VC-money-as-a-service company takes off on YC. You need to be able to completely spend your 18 month runway in 6 hours so you can get a series B round. Investors only want startups that think planet scale first
05:51:15<fireonlive>😂
05:57:16<nicolas17>BPCZ: lambda is awesome for some use cases
05:57:20<nicolas17>I want to make a build system on it
05:57:44<nicolas17>launch 1000 parallel lambdas which each compile one .c file
05:58:53<nicolas17>it's certainly over-used
06:00:29<nicolas17>like some people would think I could have the imgur-bruteforce site submit to a lambda, which saves the data in a serverless database, or throws it into an S3 file with a periodic lambda concatenating the tiny files into larger ones, and then I would only pay cents for the negligible API calls I'm doing
06:01:23<nicolas17>or... I could throw the script receiving requests into the $6 VPS I *already* have and append to a text file
06:02:34<nicolas17>and pay nothing because that $6 VPS is gonna be there anyway whether the imgur-bruteforce script is running on it or not
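The $6-VPS version really is about this much code; a hypothetical sketch (handler name and output path invented, not the actual imgur-bruteforce script):

```python
# Tiny HTTP endpoint that appends each POST body as one line to a text
# file, i.e. the whole "serverless database" replaced by open(..., "a").
from http.server import BaseHTTPRequestHandler, HTTPServer

LOG_PATH = "submissions.txt"  # hypothetical output file

class AppendHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        line = self.rfile.read(length).decode("utf-8", "replace").strip()
        with open(LOG_PATH, "a") as f:
            f.write(line + "\n")
        self.send_response(204)  # accepted, nothing to say back
        self.end_headers()

# On the VPS you'd run something like:
#   HTTPServer(("0.0.0.0", 8080), AppendHandler).serve_forever()
```

A periodic cron job (or nothing at all) replaces the "lambda concatenating tiny S3 files" step.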
06:05:34IDK_ quits [Client Quit]
06:05:34TheTechRobo quits [Remote host closed the connection]
06:05:34qwertyasdfuiopghjkl quits [Remote host closed the connection]
06:05:34jasonswohl quits [Remote host closed the connection]
06:05:43IDK_ joins
06:05:51AnotherTechRobo (TheTechRobo) joins
06:07:42BlueMaxima quits [Read error: Connection reset by peer]
06:07:48qwertyasdfuiopghjkl (qwertyasdfuiopghjkl) joins
06:10:02<fireonlive>but, it's not webscale?
06:10:32qwertyasdfuiopghjkl quits [Excess Flood]
06:10:32ave quits [Client Quit]
06:10:32icedice quits [Remote host closed the connection]
06:10:42icedice (icedice) joins
06:10:45ave (ave) joins
06:10:49qwertyasdfuiopghjkl (qwertyasdfuiopghjkl) joins
06:12:48qwertyasdfuiopghjkl quits [Excess Flood]
06:12:48icedice quits [Remote host closed the connection]
06:12:51icedice2 (icedice) joins
06:13:12qwertyasdfuiopghjkl (qwertyasdfuiopghjkl) joins
06:25:47Arcorann (Arcorann) joins
06:35:19datechnoman quits [Quit: The Lounge - https://thelounge.chat]
06:36:00datechnoman (datechnoman) joins
06:38:19hitgrr8 joins
07:27:27AmAnd0A quits [Remote host closed the connection]
07:27:40AmAnd0A joins
07:30:30AmAnd0A quits [Read error: Connection reset by peer]
07:30:42AmAnd0A joins
07:34:20nicolas17 quits [Client Quit]
07:38:37LeGoupil joins
07:43:25LeGoupil quits [Client Quit]
07:48:10spirit quits [Client Quit]
08:23:21BigBrain_ quits [Ping timeout: 245 seconds]
08:25:33BigBrain_ (bigbrain) joins
09:08:19Webuser353 joins
09:10:24<Webuser353>Hi, what software do you recommend to archive a vBulletin forum? There are a few Arabic forums that are almost dead (barely 1-2 posts a week) and pretty big (the smallest has 5m posts); I'd rather not bother ArchiveBot since I have my own VPS that I'll be using.
09:10:44<Webuser353>any tips or pointers would be appreciated, will be using linux of course.
09:14:09<@Sanqui>"bothering ArchiveBot" has the benefit of automatic ingestion into the Wayback Machine
09:15:47<Webuser353>I mean yeah, I'll upload them to archive.org eventually but didn't want to take away resources
09:18:13<Webuser353>I wanted to apply to run my own ArchiveBot but the rules don't fit me (unrestricted internet access required, and I'm from the MENA region where some things are blocked) - the forums in question are accessible from my region just fine.
09:18:14Iki1 joins
09:19:20<masterX244>grab-site is the tool to use then. It's the crawl part of archivebot without the irc control part
09:19:58<masterX244>important thing with forums: monitor the early parts of the crawl to ignore any rabbitholes/useless links
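A minimal grab-site run along those lines might look like this (flag and file names as I recall them from the grab-site README; verify against the current docs):

```
pip install grab-site
gs-server &                       # dashboard at http://127.0.0.1:29000/
grab-site 'https://forum.example.net/' --igsets=forums
# while it runs, edit <crawl dir>/ignores to add patterns for any
# rabbitholes you spot (calendars, sort-order permutations, print views)
```

The stock "forums" ignore set already covers common vBulletin traps, but watching the dashboard for the first hour is what catches site-specific ones.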
09:20:48<thuban>uploading them to archive.org won't get the warcs into the wayback machine (they'll just be available for download). using archivebot will, and there's plenty of capacity
09:20:59<@JAA>This does not belong in the off-topic channel.
09:21:02<thuban>also--yes
09:21:41AnotherIki quits [Ping timeout: 252 seconds]
09:22:09<Webuser353>Sorry JAA, wasn't sure where to post since I thought it was not related to ArchiveBot.
09:22:28<Webuser353>Thank you masterX244! will see how grab-site works.
09:23:37<Webuser353>ah didn't know that thuban.
09:27:38<Webuser353>Another question, will ArchiveBot also crawl topics and all of their posts? on Wayback right now the archives are mostly of the sub-forum pages and first/second pages of posts and nothing more is archived
09:28:02<Webuser353>which is the reason why I wanted to crawl it by myself since I thought that was the limit of the public bot
09:28:39<thuban>yes it will, but discussion should go in #archiveteam-bs
09:51:26Webuser353 quits [Client Quit]
10:12:14jasonswohl joins
10:13:19VerifiedJ quits [Quit: The Lounge - https://thelounge.chat]
10:22:19VerifiedJ (VerifiedJ) joins
12:22:24icedice2 quits [Remote host closed the connection]
12:22:43icedice2 (icedice) joins
12:29:06ave quits [Client Quit]
12:29:23ave (ave) joins
12:31:46jasonswohl quits [Remote host closed the connection]
13:32:25cm joins
13:32:56<cm>is there an irc channel for archive.org?
13:35:27<thuban>cm: it's not official iirc, but #internetarchive
13:37:46<cm>ah thanks
13:37:59Arcorann quits [Ping timeout: 252 seconds]
14:11:07<nulldata>https://www.suse.com/news/SUSE-Preserves-Choice-in-Enterprise-Linux/
14:11:15<nulldata>SUSE is forking RHEL
14:14:08sec^nd (second) joins
14:27:20<@JAA>*plot twist*
14:27:27<@JAA>I did not expect that.
14:28:35<nstrom|m>wow yeah
14:32:47pabs quits [Ping timeout: 258 seconds]
14:49:48pabs (pabs) joins
14:53:21<that_lurker>https://lounge.kuhaon.fun/folder/2164abdcdcb97b9b/7s7x4r.jpg
14:56:42icedice2 quits [Client Quit]
15:12:25<sknebel>SUSE has been selling RHEL support for a while, so they kind of have to now
15:13:10<@JAA>Ah, didn't know that.
15:14:45<benjins2>Someone looking for an archive of the @GammaGroupPr twitter account https://infosec.exchange/@lorenzofb/110696011178909288
15:38:59AmAnd0A quits [Ping timeout: 252 seconds]
15:39:50AmAnd0A joins
15:43:29<that_lurker>There are a lot of interesting conversations about SUSE forking RHEL on hacker news https://news.ycombinator.com/item?id=36678079
15:59:50AnotherTechRobo is now known as TheTechRobo
16:11:40lk quits [Ping timeout: 265 seconds]
16:11:59lk (lk) joins
16:36:42icedice (icedice) joins
16:45:19<fireonlive>on RHEL/SUSE/Twitter/Threads/Meta/etc: https://transfer.archivete.am/inline/1HetN/IMG_3664.jpeg
16:46:47<fireonlive>(also lorenzo works at TechCrunch now, before that motherboard etc)
17:08:29icedice quits [Client Quit]
17:13:37<Barto>suse? Their best release is this: https://www.youtube.com/watch?v=SYRlTISvjww :-)
17:20:26<fireonlive>https://techcommunity.microsoft.com/t5/microsoft-entra-azure-ad-blog/azure-ad-is-becoming-microsoft-entra-id/ba-p/2520436
17:20:36<fireonlive>active directory no longer so active :P
17:25:01icedice (icedice) joins
17:40:25<fireonlive>10:37:33 AM <+rss> Intel is quitting on NUC computers: https://www.theverge.com/2023/7/11/23790956/intel-nuc-compact-pc-discontinued → https://news.ycombinator.com/item?id=36683756
17:40:33<fireonlive>breaking: Intel NUCs are done
17:44:58<@JAA>... I was just looking at them the other day. lol
17:46:41<fireonlive>:x
17:47:10<fireonlive>ye my current little home server box is an older nuc
17:49:57<@JAA>TIL DIY/modular laptops: https://frame.work/ (via one of the comments there)
17:49:57<@JAA>I think I've seen it before, actually, but it was just a concept at the time.
17:53:15<Barto>shameless plug about hardkernel and their awesome H series SoC: https://www.hardkernel.com/shop/odroid-h3-plus/
17:53:21<Barto>I'm using a H2+ as my nas
17:53:43<@JAA>Yeah, also one that landed on my list to look at more closely.
17:54:12<@JAA>The N100's TDP of 6W is neat though.
17:55:35<@JAA>Plus it's a newer generation including AV1 hardware decoding.
17:57:26<fireonlive>ah yeah, tip tech man 'infamously' invested a couple hundred thousand into framework
17:58:25nicolas17 joins
17:59:06<@JAA>And now I found that the next generation of Intel chips (14th, Meteor Lake) will support AV1 encoding.
17:59:10<@JAA>> to be released to the market at an undisclosed date in the future
17:59:12<@JAA>:-)
18:01:10<FireFly>oh the intel NUC news is relevant for $work, that'll be.. interesting
18:09:08<fireonlive>i know of a few small-medium MSPs that are 100% standardized on NUCs for their rollouts to clients
18:09:14<fireonlive>it's going to be fun times i guess for them
18:10:23<fireonlive>NUC clients being like, 'this office needs X computers to access microsoft word' or 'this hair salon needs a PC to access hairsalonbookingSaaS.example' ; they probably also do other stuff. but yeah, the standard route is hit there lol
18:11:42<FireFly>I mean the other hardware we rely on is raspberry pis
18:12:00<FireFly>so we're in a fun position I guess :p
18:12:29<fireonlive>i used to like them but those SD cards man.....
18:12:39<fireonlive>then again i've since sworn off buying sd cards from amazon lmao
18:13:20<fireonlive>even shipped and sold by amazon name brand a++++++++++++++ sd cards can have fakes shipped right to your pi's bussy thanks to fulfilled by amazon :/
18:13:38<fireonlive>something with a little bit of onboard storage would be nice though
18:19:33<myself>For a lot of NUC use-cases, you can just get a framework mobo and slap it in the little case that makes it a standalone machine.
18:19:54<fireonlive>i guess they would sell the mobo separately eh
18:20:25<fireonlive>i heard of taking the mobo post upgrade of their existing laptop and reusing the old one as like a server/desktop/whatever but never thought of 'just getting one for that express purpose'
18:22:59<FireFly>the problem for us is we need something with a bunch of certifications (hence also using pis in fancy industrial cases for €€)
18:27:08<fireonlive>ahhhhh
18:27:14<fireonlive>damn
18:31:37<sknebel>plenty of other places making small PCs nowadays though. Asus, MSI, Asrock, Zotac, even HP and Lenovo have small series. so it's not like the market segment is suddenly gone
18:35:11<FireFly>yeah I mean I'm sure we can figure something out
18:35:18<FireFly>it's more having to test things, adapt things, etc
18:35:35<fireonlive>what do you use for storage on PIs? is it those compute unit things?
18:37:12<FireFly>us? just the internal flash, which isn't big but Big Enough for our purposes (the ones we have are built around the compute modules, and the pi4's have 6GiB of flash I think)
18:37:25<FireFly>well, built around the pi cm4 I think
18:38:40<fireonlive>ahh ok
18:38:51<icedice><JAA> TIL DIY/modular laptops: https://frame.work/ (via one of the comments there)
18:39:17<icedice>Yeah, Linus is invested into Framework
18:39:31<fireonlive>i just had a couple basic-bitch ones that needed an SD card doing light tasks
18:40:59<FireFly>yeah, I have a pi4 I've been meaning to set up for home-automation thingies but haven't gotten around to yet, I'm sure it's fine to just use a random SD card though..
18:42:40<fireonlive>i've been quite unlucky with death of them, but I guess if I get it from a better supplier...
18:42:49<fireonlive>and better tune things: tmpfs etc
18:43:09<fireonlive>(or otherwise ship off/turn down logging)
18:43:16<fireonlive>i hear the newer ones can even USB or PXE boot
18:43:23<FireFly>yeah that's something we ended up doing to keep writes down
18:43:35<FireFly>to preserve the flash longer
18:43:38<fireonlive>i thought I did with at least one of them but it's been a long time :D
18:43:55<fireonlive>for the lazy i think there's even new 'forks' of raspbian that have a lot of that configured by default
18:44:00<FireFly>reminds me idly of when I decided to format a microSD card with btrfs a long time ago, it didn't last long :p
18:44:09<fireonlive>x3
18:44:38<fireonlive>i think there's even one that goes so far as to make the entire filesystem readonly? or something? it's been a while since i casually glanced
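The tmpfs/read-only tuning mentioned above mostly amounts to a few fstab lines; a sketch, not a drop-in config (sizes and partition name are arbitrary):

```
# Cut SD-card writes on a Pi: logs and tmp in RAM, no atime updates.
/dev/mmcblk0p2  /         ext4   defaults,noatime           0 1
tmpfs           /tmp      tmpfs  nosuid,nodev,size=64m      0 0
tmpfs           /var/log  tmpfs  nosuid,nodev,size=32m      0 0
```

The read-only-root distributions go one step further and mount `/` with `ro`, overlaying a tmpfs for anything that insists on writing.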
18:47:23<@JAA>icedice: That would explain why I hadn't really heard of it, can't stand Linus.
18:47:36<fireonlive>he's..............
18:47:43<fireonlive>special :p
18:47:43<FireFly>fireonlive: netboot might be an option?
18:47:54<FireFly>I mean if one doesn't need state :p
18:48:01<FireFly>or well persistency
18:48:10<fireonlive>ye, can offload state for a lot of things
18:48:20<fireonlive>the old ones i have (idk even what gen) can't do netboot I think?
18:48:25<FireFly>or if there's somewhere else providing that, like a NAS or so :p
18:48:26<fireonlive>but i heard you can shim that with a SD card
18:48:46<fireonlive>that like sole purpose is to run something to netboot and otherwise be read only
18:50:23<icedice>Linus is all right
18:50:37<icedice>He has a few smooth brain takes
18:50:48<icedice>And there's some clickbait on the channel
18:50:56<icedice>But overall, not that bad
18:51:30<fireonlive>wow that's old, i wrote on it in electrical tape "Model B" and the board says "Raspberry Pi (c)2011,12" "FCC ID: 2ABCB-RPI21"
18:51:31<fireonlive>lol
18:51:36<fireonlive>been a while i guess :D
18:52:46<fireonlive>to be fair that's just the one i found in the mess immediately
18:52:54<fireonlive>but yeah
18:52:58<@JAA>icedice: I'm not talking about the content. I mean, yeah, that's kind of meh, too. I can't stand his way of talking about just about anything.
18:53:26<icedice>They have some pretty fun videos sometimes though
18:54:54<icedice>https://www.youtube.com/watch?v=7eQg2N1uoaY
18:55:02<fireonlive>whole room watercooling!
18:55:12<fireonlive>JAA: oh the bouncy presenter thing?
18:55:23<nicolas17>fireonlive: https://transfer.archivete.am/inline/dy4Uy/video0-7-1-1.mp4
18:55:42<fireonlive>nicolas17: haha yes
18:55:55<fireonlive>i linked his... power bank review recently
18:56:05<icedice>https://www.youtube.com/watch?v=JI2vcvhhVb4
18:56:13<fireonlive>05:44:09 PM <fireonlive> linus ??? tips: https://imgur.com/a/q5H8AmS
18:56:13<fireonlive>from july 9
18:56:29<fireonlive>it was 55MB so i decided to save transfer some space
18:56:38<fireonlive>though i didn't supply any compression to it
18:56:41<icedice>https://www.youtube.com/watch?v=dJwjqZZgcWk&pp=ygUYTGludXMgVGVjaCBUaXBzIE9ubHlGYW5z
18:59:37<@JAA>fireonlive: His hyping 'I'M SO EXCITED ABOUT THIS TRIVIAL BIT OF INFORMATION' attitude. I'll admit that last time I watched any of his content was many years ago, apart from the one video (series) on the Apollo flight computer he did with Smarter Every Day.
19:01:36<@JAA>He was actually somewhat reasonable and watchable there, probably because for once, he was legitimately impressed, e.g. by the memory module's construction.
19:05:42<fireonlive>ahhhh yeah i know exactly what you mean there
19:06:00<fireonlive>on the factory tours you can see it toned down a little bit but mm
19:06:11<fireonlive>proabbly for the same reason you mentioned
19:07:18<icedice>If you look at the thumbnails you can assume that they're targeting a slightly younger demographic
19:07:24<nicolas17>there was a video touring ASML factory, by someone else in the LTT staff
19:07:26<icedice>Probably teenagers
19:07:40<nicolas17>and many comments said it was a good idea to not send Linus himself there...
19:07:45<icedice>And I guess being hyper helps with viewer retention there
19:08:03<nicolas17>icedice: did you see youtube has thumbnail a/b testing now
19:08:12<icedice>Nope
19:08:38<nicolas17>https://cdn.discordapp.com/attachments/302360773311725569/1121819368951644201/image1.jpg
19:08:49<@JAA>I wonder whether ASML was like 'lolno Linus isn't getting into this building'.
19:08:50<nicolas17>"which clickbait is more effective"
19:08:53<icedice>Well, I've seen different thumbnails on the same video
19:09:02<fireonlive>TIL they have that
19:09:20<icedice>But I assumed that was the YouTuber changing it and NewPipe caching the old thumbnail URL
19:09:48<nicolas17>icedice: that's likely
19:10:00<nicolas17>I assume this testing thing would always show the same one to the same person?
19:10:23<@JAA>I hate when they change titles or thumbnails. Getting excited for a new video on one of the channels I follow, start watching, wait a minute... Ugh.
19:10:34<icedice>NewPipe isn't exactly algorithm-influenced though
19:11:27<fireonlive>JAA: yeah.. some of my watch history started acting wonky, like the red thing under the thumbnails and it was super annoying
19:11:33<fireonlive>like i thought i already saw this etc
19:11:41<icedice>I kind of lol'd when they made an announcement video that they were switching VPN sponsor because Tunnelbear got bought up by a US company
19:11:52<icedice>And they switched to Private Internet Access
19:12:01<icedice>Which is also a US company
19:12:38<fireonlive>i think it was because tunnelbear was bought by McAfee?
19:12:52<icedice>Might be
19:12:58<nicolas17>>having a VPN sponsor
19:13:09<icedice>And then Private Internet Access got bought up by Kape Technologies
19:13:37<fireonlive>VPN sponsor spots are so cringe
19:13:39<icedice>Which is pretty cozy with Unit 8200 (Israeli signals intelligence)
19:13:43<fireonlive>the ad read for them is just....... ugh
19:14:08<fireonlive>'if you don't use us hackers will literally steal your SIN/SSN and open credit cards under your name and and'
19:14:47<icedice>If you want to drop in IQ points, just watch Tom Spark Reviews
19:15:12<icedice>He's a smooth brained TorGuard shill
19:16:57<icedice>Watch any of his videos about well-respected VPN providers and how he then proceeds to go "you know TorGuard does that too and you get a nice discount if you use my discount code"
19:17:26<thuban>sponsorblock is your friend
19:17:28<icedice>After that he goes onto Reddit and shills TorGuard in random VPN threads where he always gets downvoted. Then he goes onto his YouTube channel and complains that some VPN subreddit on Reddit is being unfair to him :'D
19:17:41<icedice>Yeah, I know
19:18:05<icedice>His entire videos are basically sponsorships
19:18:51<icedice><nicolas17> >having a VPN sponsor
19:19:08<icedice>There's Proton VPN which has an affiliate program
19:19:25<fireonlive>goes to reddit too? lmao
19:20:00<icedice>But iirc Proton VPN requires that the people applying for their affiliate program are actually technologically literate
19:20:03<fireonlive>i hear mullvad is the good one
19:20:09<fireonlive>but that's about it
19:20:24<icedice>Yeah, he runs /r/NetflixViaVPN and /r/VPNComparison
19:20:34<fireonlive>lol
19:20:47<icedice>He started /r/VPNComparison since nobody in /r/VPN and /r/VPNTorrents wanted anything to do with him
19:20:52<icedice><fireonlive> i hear mullvad is the good one
19:21:09<icedice>Unless you need port forwarding or want to unblock streaming sites, yeah
19:21:25<icedice>For privacy it's the best one
19:21:33<icedice>A shame that they got rid of port forwarding
19:22:21<fireonlive>yeah :(
19:22:31<fireonlive>i have usenet/torrents for media at least
19:23:25<icedice>Proton VPN, AirVPN, AzireVPN, and Integrity VPN are also good
19:23:50<icedice>Integrity VPN is only available as a bundle from participating ISPs mainly in Sweden, but also in Finland and Denmark since the non-profit operating it doesn't want to handle any customer payment info. So due to that geographical centralization of users, users outside of Sweden might not have as good of a haystack to blend into as with other VPNs. No idea if they do port forwarding, but I wouldn't recommend using it outside of Sweden
19:23:50<icedice>unless they start selling it worldwide.
19:25:03<icedice><fireonlive> i have usenet/torrents for media at least
19:25:17<icedice>Private tracker usage is not that great without port forwarding
19:26:01<icedice>Seedbox is another option
19:26:10@JAA wonders whether NAT punching works.
19:26:27<icedice>I have no idea what that is
19:26:40<@JAA>https://en.wikipedia.org/wiki/Hole_punching_(networking)?useskin=vector
19:27:09<@JAA>Just spam UDP traffic to all IPv4s, easy. :-P
19:32:23<fireonlive>hmmm not sure... i do think if the downloader has an open port it'll work that way
19:32:36<fireonlive>but i've been a public tracker leecher (minus a couple speciality ones) for a bit now :p
19:32:44<fireonlive>mainly just to cover usenet holes now and then
19:36:49<@JAA>Well, can also work with both sides behind NAT, but you'll need a separate non-NAT host to facilitate the punching.
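The punching step JAA describes can be shown as a toy sketch on localhost (real NAT traversal needs the public rendezvous host to exchange each peer's NAT-mapped address first, and can still fail behind symmetric NATs — the function name and messages here are made up for illustration):

```python
import socket

def punch_pair():
    """Toy demo of the simultaneous-send step of UDP hole punching.

    Two UDP sockets learn each other's address (here directly; in
    reality each would get the other's NAT mapping from a rendezvous
    server) and both send *before* either receives. With real NATs,
    each outbound packet opens a pinhole that lets the peer's packet in.
    """
    a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    a.bind(("127.0.0.1", 0))
    b.bind(("127.0.0.1", 0))
    a_addr, b_addr = a.getsockname(), b.getsockname()
    # The "punch": both sides fire first, before either listens for a reply.
    a.sendto(b"hello from a", b_addr)
    b.sendto(b"hello from b", a_addr)
    msg_at_b, _ = b.recvfrom(1024)
    msg_at_a, _ = a.recvfrom(1024)
    a.close()
    b.close()
    return msg_at_a, msg_at_b
```

On loopback both packets always arrive; across NATs this only works when both sides' outbound sends cross in flight, which is why a coordinating third host matters.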
19:39:37<fireonlive>ah ye, STUN/TURN
19:39:41<fireonlive>iirc
19:39:50<fireonlive>like what tailscale does
19:39:54<@JAA>Yeah
19:40:18<fireonlive>they have the fallback 'you go through us' but they try their hardest for you not to use it because it costs them money
19:40:50<@JAA>That, plus it might be very slow compared to a direct route.
19:41:16<@JAA>And that's the whole point of using a mesh-like network like Tailscale.
19:41:18<fireonlive>ah yes
19:41:37<fireonlive>i do like how my ssh sessions just 'stay open' lol
19:41:49<fireonlive>now if only i could be connected to multiple networks at once...
19:41:58<fireonlive>(yeah mosh is a thing... i should look into it again)
19:42:20<fireonlive>i think it's still a thing anyways
19:42:26<fireonlive>i remember it being the bee's knees
19:45:24<@JAA>Yeah, I've been meaning to try out mosh for a few years now. Maybe next week..... :-)
19:47:00<fireonlive>=]
19:47:32<@JAA>My `while :; do ssh -t $host tmux attach -t $session; sleep 1; done` works just fine. :-P
19:48:02<fireonlive>xD
19:48:04<fireonlive>love while loops
19:48:36<fireonlive>if you get stuck rescue it just a <enter>~. away
19:48:41<fireonlive>s/it/is/ :D
19:50:35<@JAA>It does get fun when the terminal gets fucked up because you accidentally dumped a binary file into it.
19:50:47<@JAA>But otherwise, it does the job. :-P
19:51:00<fireonlive>:3
19:51:48<nicolas17>mosh is great
19:52:01<nicolas17>stays alive across disconnections and even IP changes
19:52:10<nicolas17>and the latency hiding is nice too
19:53:38<nicolas17>you type a letter on the prompt, the server echoes it back, the mosh client notices and starts doing local echo
19:53:48<nicolas17>so now what you type appears on screen instantly
19:54:04<nicolas17>arrow keys also move the cursor instantly
19:55:09<nicolas17>as soon as the server does something "unexpected" in response to a keypress, or you press something like enter or tab or Ctrl-A, the local echo is disabled again
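The speculative local echo nicolas17 describes can be sketched roughly like this (this is not mosh's actual code — mosh's real predictor is more careful, tracks per-cell predictions, and underlines unconfirmed text — the class and method names here are invented):

```python
class LocalEcho:
    """Rough model of mosh-style predictive echo.

    Printable keys are drawn immediately as predictions; enter, tab,
    arrows and control keys turn prediction off until a server
    round-trip shows the remote end behaving "as expected" (a plain
    echo of what was typed), at which point speculation resumes.
    """

    def __init__(self):
        self.predicting = True
        self.typed = []  # keys sent since the last server echo

    def keypress(self, key):
        """Return the character to draw locally right now, or None."""
        printable = len(key) == 1 and key.isprintable()
        if not printable:
            self.predicting = False  # enter/tab/ctrl: let the server drive
        self.typed.append(key)
        return key if (self.predicting and printable) else None

    def server_echo(self, text):
        """Server round-trip arrived; reconcile and maybe re-enable."""
        # Expected behavior (the server echoed exactly what was typed)
        # turns speculation back on; anything surprising keeps it off.
        self.predicting = text == "".join(self.typed)
        self.typed.clear()
        return text
```

So ordinary typing appears with zero round-trips, while anything the server might interpret specially falls back to normal remote echo.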
19:55:10<@JAA>Might be nice on mobile network connections, yeah.
19:55:31<@JAA>When I'm wired up at home, I barely notice any latency anyway.
19:55:35<fireonlive>icedice: speaking of newpipe: 12:51:46 PM <+rss> Newpipe.net removed from Google search results due to DMCA take down request: https://newpipe.net/blog/pinned/announcement/newpipe-net-dmca-google-search/ → https://news.ycombinator.com/item?id=36682509
19:55:38<nicolas17>I have 150ms to my VPS :P
19:56:01<@JAA>And I guess readline history with arrow-up doesn't work either.
19:56:17<nicolas17>indeed, arrow up also disables local echo
19:56:41<fireonlive>just temporarily ig
19:57:31<nicolas17>yes, as soon as you type normal text and the remote end has expected behavior in response, it's turned back on
19:58:45<@JAA>How does it interact with tmux et al.?
19:58:56<@JAA>As in, tmux on the server side.
19:59:24<nicolas17>afaik it doesn't know or care what's running on the remote side
20:01:36<nicolas17>same as ssh
20:03:21<@JAA>I mean the local echo thing. Does that still work if the remote end runs something based on curses or similar?
20:13:14<nicolas17>vim gets local echo just fine, though *sometimes* editing commands might get briefly displayed as typed characters, until they get fixed in the next network roundtrip
20:13:42<@JAA>Ok yeah, that's what I expected.
20:25:18<icedice>fireonlive: oof, I guess "Because Music" is pro-malware distribution
21:37:55<fireonlive>Kelly Rowland couldn't have used the =HYPERLINK() function to message Nelly: https://blog.jgc.org/2023/07/unfortunately-kelly-rowland-couldnt.html
21:40:59jasonswohl joins
21:48:19<Barto>someone had too much time lol
21:53:53<fireonlive>cloudflare CTO got bored x3
22:01:25<nicolas17>https://pbs.twimg.com/media/F0w2qOFaMAIC6cK.jpg shots fired
22:03:00<fireonlive>lol
22:03:03<fireonlive><_<
22:08:56Mateon2 joins
22:09:43Mateon1 quits [Ping timeout: 258 seconds]
22:09:43Mateon2 is now known as Mateon1
22:12:08<nicolas17>fireonlive: https://cdn.discordapp.com/attachments/286612533757083648/1128446843844579409/Clipboard01.png so this is who the Twitter rate limits were targeting
22:14:26cdub quits [Ping timeout: 252 seconds]
22:15:38cdub joins
22:16:39<fireonlive>nicolas17: lmao
22:16:55<fireonlive>coomers, assemble!
22:30:05<Doranwen>Welp, AO3's behind Cloudflare now.
22:30:14<Doranwen>There go all the scripts for d/l-ing from it.
22:30:40<nicolas17>Doranwen: I heard they were DDoS'd *by the Russian government*?
22:30:54<fireonlive>Doranwen: god damn it
22:31:00<Doranwen>Eh, they were DDoSed by a group of hackers that are known to be associated with Russia somehow.
22:31:03<fireonlive>hopefully they loosen the restrictions later
22:31:15<Doranwen>Exact details we do not know, just that they're liars, lol.
22:31:26<fireonlive>i saw some telegram post vaguely saying gays are bad so the ddos will continue
22:31:30<fireonlive>...somewhere
22:31:52<Doranwen>Yeah, AO3 says the experts say "don't believe what they say about their motivations".
22:32:12<Doranwen>They also wanted $30k in Bitcoin to stop the attacks, so. Who knows.
22:32:29<nicolas17>gotta fund the war somehow (?)
22:32:38<Doranwen> /\o/\
22:35:19<Doranwen>The ao3downloader scripter says they'll wait to see how things shake out, confer with people or something, but they're not going to try to make it work right now. Which makes sense.
22:38:59<fireonlive>ye
22:39:18<fireonlive>03:35:35 PM <+rss> The Free Movie: https://thefreemovie.buzz/ → https://news.ycombinator.com/item?id=36687399
22:39:23<fireonlive>something new from MSCHF :3
22:39:38<fireonlive>a frame-by-frame crowd-sourced line drawing of the entire bee movie
22:40:29<fireonlive>i quite like the (outer) interface
22:41:36<fireonlive>"ALL FRAMES have been drawn!!! We did it. All 65244 frames of the BEE Movie have been hand drawn." ohh
22:43:11<fireonlive>HN says some of them are just bad frames, which of course they are. but interesting to see where it'll go lol. also, if you dismiss that you can hit play and see what people did though
22:46:19<@JAA>Now do it with a Disney movie. I want to see how this plays out legally. :-P
22:46:26<fireonlive>looks like movie so far is here: https://thefreemovie-frames.s3.amazonaws.com/movie/in-progress-movie.mp4
22:46:41<fireonlive>which i guess should be static now? and credits for every frame available as json https://a3dc8x1bk0.execute-api.us-east-1.amazonaws.com/dev/credits
22:47:03<fireonlive>haha yes
22:47:54<fireonlive>ah! and each finished frame is https://a3dc8x1bk0.execute-api.us-east-1.amazonaws.com/dev/finishedFrames
22:48:19<fireonlive>ok that's all :D
22:48:33<@JAA>The number of 'penis' frames is surprisingly low.
22:49:21<fireonlive>indeed
22:49:32<fireonlive>i think i saw one fly by that said 'i'm not drawing all that'
22:50:30<Doranwen>LOL
22:52:44<nicolas17>that seems low framerate
22:53:48<@JAA>12 fps it seems from a quick calculation.
22:53:53<@JAA>Not great, not terrible.
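JAA's quick calculation presumably looks like this, assuming the Bee Movie's roughly 91-minute runtime (the frame count is from the site; the runtime is an assumption):

```python
frames = 65244        # total hand-drawn frames, per the site's banner
runtime_s = 91 * 60   # approximate theatrical runtime in seconds
fps = frames / runtime_s
print(round(fps))     # about 12 fps, vs the film's native ~24 fps
```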
22:54:15<Doranwen>Makes me dizzy trying to watch it.
22:54:27<nicolas17>I would have done it binary-search-like
22:55:08<nicolas17>as the project continues, more in-between frames are added and framerate improves
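nicolas17's ordering idea can be sketched as a generator that always fills the largest remaining gap, so at any point the finished frames are spread evenly across the film and the effective framerate rises as work continues (function name and shape are my own, just illustrating the idea):

```python
def bisection_order(n):
    """Yield frame indices 0..n-1 in a binary-search-like order:
    endpoints first, then the midpoint of every remaining span."""
    yield 0
    if n > 1:
        yield n - 1
    spans = [(0, n - 1)]
    while spans:
        next_spans = []
        for lo, hi in spans:
            if hi - lo < 2:
                continue  # no frame left between these two
            mid = (lo + hi) // 2
            yield mid
            next_spans += [(lo, mid), (mid, hi)]
        spans = next_spans
```

For a 9-frame film this yields 0, 8, 4, 2, 6, 1, 3, 5, 7 — after five frames you already have a watchable (if choppy) pass over the whole runtime.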
22:57:42AmAnd0A quits [Read error: Connection reset by peer]
22:57:59AmAnd0A joins
23:08:14<fireonlive>what's interesting is as the movie plays back they use the /credits endpoint to sync up who drew what
23:10:16<fireonlive>https://transfer.archivete.am/inline/Eu30T/1689117000.png
23:10:21<fireonlive>fortunately for you all i missed the penises
23:26:40AmAnd0A quits [Ping timeout: 265 seconds]
23:27:03AmAnd0A joins
23:47:01BlueMaxima joins
23:56:44hitgrr8 quits [Client Quit]