• 4 Posts
  • 78 Comments
Joined 2 years ago
Cake day: July 28th, 2023

  • Home burning can be surprisingly robust as a backup method and as a physical-media option, but I’d still keep backups on an actual NAS as well. There’s also a ton of variables that affect the lifetime of a burned CD, like the dyes used (cyanine, phthalocyanine, azo), lamination quality, storage conditions and the burner used. The quality of the drive and of the burn itself has a surprisingly strong effect in particular, even though the format is standardized – you can get a lot more storage life out of a CD burned with a quality 5.25" burner than with a budget slim drive.

    Also, early cyanine-based discs had a notoriously short shelf life compared to the later archival-quality discs – around 30 years or so in optimal conditions (and typically a lot less) – so much of the stuff burned in the ’90s and ’00s has already begun deteriorating. More recent quality discs can last over a century if stored properly, but the older ones can’t.

    DVDs can also often have issues with delamination, meaning that especially the outer rim of the disc can start exhibiting bit rot quite early if you’re using low-quality media. I’ve seen even new discs showing signs of early delamination between the two disc halves (DVDs have the data layer sandwiched between two acrylic discs, unlike CDs, which have it on the back side directly under the reflective coating). I’ve also had a lot of issues when burning multilayer DVDs that might affect how long they last in storage, so for actual backups I’d prefer a single-layer disc instead.

    But as for reasons to still use discs – they’re an unparalleled cold-storage solution. With proper care you can actually leave them be for decades and be sure the data is still readable, unlike with SSDs, which will lose their data when left unpowered for a long period. Tape is a good option, but not really viable for consumers – and tape needs more active upkeep, since you typically have to copy the old data over to new media every 20-30 years or so (the promised archival life is 30 years, after which it might no longer be possible to get new drives for reading the tapes). Optical is also king when you need to transfer data into air-gapped environments, since with optical media it’s relatively easy to audit that what’s burned to the disc is unalterable. There’s a reason why I still keep a full install set of Debian handy.
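
    On the auditability point, a minimal sketch of the kind of check I mean – nothing more than comparing the burned disc byte-for-byte against the original image, and only up to the image’s length since discs get padded with extra sectors. The debian.iso path and the /dev/sr0 device are just assumptions for the example:

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* Hypothetical paths: the original image and the burned disc. */
        const char *image_path = "debian.iso";
        const char *disc_path = "/dev/sr0";

        FILE *image = fopen(image_path, "rb");
        FILE *disc = fopen(disc_path, "rb");
        if (!image || !disc) {
            perror("fopen");
            return 1;
        }

        char ibuf[65536], dbuf[65536];
        long long offset = 0;
        size_t n;
        /* Compare only as far as the image goes; the disc may contain
         * padding past the end of the image. */
        while ((n = fread(ibuf, 1, sizeof ibuf, image)) > 0) {
            if (fread(dbuf, 1, n, disc) != n || memcmp(ibuf, dbuf, n) != 0) {
                printf("Mismatch within bytes %lld-%lld\n", offset, offset + (long long)n);
                return 1;
            }
            offset += n;
        }

        fclose(image);
        fclose(disc);
        printf("Disc matches the image for all %lld bytes.\n", offset);
        return 0;
    }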






  • Not sure if these are a thing in France, but as alternatives to plant milk for the lactose intolerant:

    • Lactose-free milk (there are versions where the lactose is removed rather than broken down, which aren’t sweet and taste basically the same as regular milk)
    • Lactase enzyme taken together with the coffee, to break the lactose down

    I don’t really see plant milk as the lactose-intolerant option so much as a vegan one, but that might just be because in Finland lactose-free milk is available basically everywhere, since milk is such an important part of the coffee culture.


  • Couldn’t you, on exactly the same grounds, demand money from shops for displaying magazine covers in the store, or for showing the back-cover blurbs of books in bookstores? A link is not the news, and with link aggregators the primary destination you end up at is the outlet that published the article – because yes, if the content interests someone, they’ll click the link. It’s then the paper’s problem if there’s nothing behind the link but a notice that the content is paywalled. Nobody is going to subscribe to some backwater gazette because of a single article, but if you could read the article by, say, paying a one-off 20-cent charge, quite a few people would probably be willing to pay that. A digital subscription to the whole paper is simply too much.

    More generally, it’s baffling how reluctant papers are to sell reading rights to individual articles, since I, at least, would be willing to pay for single reads several times a day. It’s entirely the news sites’ own problem if they can’t take advantage of the customers arriving via link aggregators. US outlets, for example, have experimented with selling individual articles, and as far as I understand it’s a perfectly workable approach. Now if they could only do it anonymously, so that buying the right to read a single article didn’t require an account and a login…



  • Yep, and truth be told, if I had the option of paying 90 € for an actual physical copy with no microtransactions, no DLC in place of having all the content in the game from launch, no online access required and no copy protection on the disc, I’d gladly pay that. 100 € even, if it’s a particularly good game.

    But I have zero trust in that being the case with the increased prices – it’s just going to be the same thing we have now, only more expensive.






  • There don’t seem to be any disk reads per request at a glance, though that might just be due to read caching at the OS level. There’s a spike on the first page refresh/load after dropping the read cache, so that could indicate the file being read in every time there’s a fresh page load. I’d have to run the browser under call tracing to be sure, which I’ll probably try out later today.

    For my other devices I use unbound hosted on the router, so this is the first time I’ve run into this issue as well.
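
    For repeating that cache test, here’s a minimal sketch (my own addition, assuming /etc/hosts is the file in question) of dropping just that one file from the page cache with posix_fadvise, instead of flushing the whole read cache, so the next access to it has to hit the disk again:

    #define _POSIX_C_SOURCE 200112L
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        const char *path = "/etc/hosts";  /* assumed target file */

        int fd = open(path, O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* Ask the kernel to drop the cached pages for this file only; the
         * next read of it should then show up as an actual disk read. */
        int err = posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
        if (err != 0)
            fprintf(stderr, "posix_fadvise: %s\n", strerror(err));

        close(fd);
        return err ? 1 : 0;
    }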


  • You’re using software to do something it wasn’t designed to do

    As such, Chrome isn’t exactly following best practices either – if you want to reinvent the wheel, at least improve upon the original instead of making it run worse. True, it’s not the intended way to use it, but resource-wise it shouldn’t cause issues – it feels like it would have taken active effort to make it run this poorly.

    Why would you even think to do something like this?

    As I said, because the company VPN enforces its own DNS for intranet resources etc. Technically I could override it with a single configuration rule, but that would also technically be a breach of guidelines, as opposed to the more moderate, rules-lawyery approach I’m attempting here.

    If it were up to me, the employer would just add a blocklist to their own forwarder for the benefit of everyone working there…

    But I guess I’ll settle for a local dnsmasq on the laptop for now. Thanks for the discussion 👌🏼


  • TL;DR: looks like you’re right, although Chrome shouldn’t be struggling with that number of hosts to chug through. This turned out to be an interesting rabbit hole.

    My home network already uses unbound with a proper blocklist configured, but I can’t use the same setup directly on my work computer since the VPN sets its own DNS. I can only override that with a local resolver on the work laptop, and I’d really like to get by with just systemd-resolved instead of having to add dnsmasq or similar for this. None of the other tools I use struggle with this setup, as they use the system IP stack.

    It may well be that Chromium has a somewhat more sophisticated network stack (rather than just using the system-provided libraries), and I remember the docs indicating something to that effect. In any case, it’s not like the code is (or should be) paging through the whole file every time there’s a query – either it forwards the query to another resolver or resolves it locally, but either way there will be a cache. That cache then ends up holding the queried domains in order of access, after which having a long /etc/hosts won’t matter. The worst case after initially paging in the hosts file is 3-5 ms (per query) for comparing through the 100k-700k lines before hitting a miss, and that only needs to happen once regardless of where the actual resolving takes place. At a glance the Chrome net stack should cache hosts-file lookups as well. So at the very least it doesn’t really make sense for it to struggle for 5-10 seconds on every consecutive refresh of the page with a warm DNS cache in memory…

    …or that’s how it should work. Your comment inspired me to test it a bit more, and lo: after trying a hosts file with 10 000 000 bogus entries, Chrome was brought completely to its knees. However, that number of string comparisons is absolutely nothing in practice – Python with its slow interpreter manages to compare against every row in 300 ms, and a crude C implementation manages it in 23 ms (approx. 2 ms with 1 million rows; both far more rows than what I have appended to my hosts file). So the file being long should have nothing to do with it unless there’s something very wrong with the implementation. Comparing against /etc/hosts should be cheap, since it doesn’t support wildcard entries – the comparisons are just simple 1:1 checks against the first matching row. I’ll keep investigating and see if there’s a quick change to be made in how the hosts are read in; fixing this shouldn’t cause any issues for other use cases from what I can see.

    For reference, if you want to check the performance for 10 million comparisons on your own hardware:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/time.h>


    int main(void) {
        struct timeval start_t;
        struct timeval end_t;

        /* Build 10M bogus hostname strings to compare against. */
        char **strs = malloc(sizeof(char *) * 10000000);
        for (int i = 0; i < 10000000; i++) {
            char *urlbuf = malloc(sizeof(char) * 50);
            sprintf(urlbuf, "%d.bogus.local", i);
            strs[i] = urlbuf;
        }

        printf("Checking comparisons through array of 10M strings.\n");
        gettimeofday(&start_t, NULL);

        /* Count matches so the compiler can't optimize the comparisons away. */
        int matches = 0;
        for (int i = 0; i < 10000000; i++) {
            if (strcmp(strs[i], "test.url.local") == 0)
                matches++;
        }

        gettimeofday(&end_t, NULL);

        /* Account for whole seconds too, so the result stays correct if the
         * loop crosses a second boundary. */
        long duration = (end_t.tv_sec - start_t.tv_sec) * 1000
                      + (end_t.tv_usec - start_t.tv_usec) / 1000;
        printf("Spent %ld ms on the operation (%d matches).\n", duration, matches);

        for (int i = 0; i < 10000000; i++) {
            free(strs[i]);
        }
        free(strs);
    }
    
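
    As a follow-up sketch (my own illustration, not Chromium’s actual code), this is the kind of one-time indexing that makes the file length irrelevant per query: sort the entries once when the file is read in, after which each lookup is a binary search instead of a linear scan. It reuses the same bogus host names as above.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/time.h>

    #define N 10000000

    /* qsort/bsearch comparator for an array of string pointers. */
    static int cmp_str(const void *a, const void *b) {
        return strcmp(*(const char *const *)a, *(const char *const *)b);
    }

    int main(void) {
        struct timeval start_t, end_t;

        /* Same bogus host list as in the benchmark above. */
        char **strs = malloc(sizeof(char *) * N);
        for (int i = 0; i < N; i++) {
            char *urlbuf = malloc(50);
            sprintf(urlbuf, "%d.bogus.local", i);
            strs[i] = urlbuf;
        }

        /* One-time cost when the file is read in: sort the entries. */
        gettimeofday(&start_t, NULL);
        qsort(strs, N, sizeof(char *), cmp_str);
        gettimeofday(&end_t, NULL);
        long sort_ms = (end_t.tv_sec - start_t.tv_sec) * 1000
                     + (end_t.tv_usec - start_t.tv_usec) / 1000;
        printf("One-time sort of 10M entries: %ld ms\n", sort_ms);

        /* Per-query cost: a binary search, repeated 1000 times here. */
        const char *key = "test.url.local";
        int hits = 0;
        gettimeofday(&start_t, NULL);
        for (int i = 0; i < 1000; i++) {
            if (bsearch(&key, strs, N, sizeof(char *), cmp_str) != NULL)
                hits++;
        }
        gettimeofday(&end_t, NULL);
        long lookup_us = (end_t.tv_sec - start_t.tv_sec) * 1000000
                       + (end_t.tv_usec - start_t.tv_usec);
        printf("1000 lookups: %ld us total (%d hits)\n", lookup_us, hits);

        for (int i = 0; i < N; i++)
            free(strs[i]);
        free(strs);
    }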



  • Yep, I don’t think the A55 is the culprit either – I just outlined the reasoning behind that. Sometimes pairing also gets things wrong, which leads to the headphones using an older protocol version.

    But that doesn’t seem to be the case here since it’s using SSC, so at this point I’d also just guess it’s a bad battery. You can try pairing them again, but I wouldn’t be surprised if it doesn’t help. Still, it couldn’t really hurt to try.


  • In a way email suffers from the same problems of open federation, such as occasionally overly trigger-happy defederation. You get a good picture of this when you try setting up your own mail server and notice how hard it actually is to get messages out to other servers in a way that actually goes through. For the same reason, many services still ask you to check your spam folder for activation messages and the like, since self-hosted mail servers are often flagged as spam – if the mail even reaches the recipient at all.

    Toki mitä nyt on noita estolistoja Lemmyssä katsonut, niin suurin osa estetyistä instansseista on sitä hyvästä syystä. Änkyrävasemmiston (tankies) ja muiden instanssien välinen estojen taistelu on sitten asia erikseen, ja vaikuttaa herättävän aika vahvoja tunteita säännöllisin väliajoin, suuntaan ja toiseen 😅