There are a couple I have in mind. Like many techies, I am a huge fan of RSS for content distribution and XMPP for federated communication.

The really niche one I like is S-expressions as a data and configuration format in place of JSON, YAML, TOML, etc.
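
For anyone who hasn’t seen it used that way, a minimal sketch of what an S-expression config could look like (the keys and values here are made up for illustration):

    (server
      (host "0.0.0.0")
      (port 8080)
      (log-level "info")
      (allowed-origins ("https://example.com" "https://example.org")))

One grammar covers both data and configuration, and most S-expression dialects get comments for free.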

I am a big fan of plaintext formats, although I wish Markdown had a few more features, like tables.

  • @[email protected]
    link
    fedilink
    English
    7410 months ago

    It’s completely bonkers that JPEG-XL is as good as it is and no one wants to actually implement it into web browsers

      • @[email protected]
        link
        fedilink
        English
        1810 months ago

        Basically smaller file sizes than JPEG at the same quality, and it also loads a lower-quality version of the image first and then refines it (progressive decoding), instead of loading it line by line the way an image normally loads. Google refuses to implement this tech in Chrome because they have their own AVIF format, which isn’t bad but is significantly outclassed by JPEG-XL in nearly every conceivable metric. Mozilla also isn’t putting JPEG-XL into Firefox, for whatever reason. If you want more detail, here’s an eight-minute video about it.

        • @[email protected]
          link
          fedilink
          1310 months ago

          I’m under the impression that there are two reasons we don’t have it in Chromium yet:

          1. Google initially ignored JPEG-XL, but then everyone jumped on it, and now they feel they have to create a post-hoc justification for not supporting it earlier. That’s tricky, so they’re stuck in a sunk-cost situation where they keep ignoring it.
          2. More recently, Google was burnt by the WebP vulnerability, which happened because there was only one decoder library. Now they’re waiting for more JPEG-XL libraries that have optimizations (which rules out reference implementations), good support (which rules out libraries by single authors), proven battle-hardening (which will only happen over time), and are written safely enough to avoid another WebP-style vulnerability.

          Google already wrote the Wuffs language, which is specifically designed to handle formats in a fast and safe way, but it looks like it only has one dedicated maintainer, which means it’s still stuck at a bus factor of 1.

          Honestly, Google or Microsoft should just put a team on a JPEG-XL library in Wuffs, while Adobe should put a team on a JPEG-XL library in Rust or Zig.

          That way everyone will be happy: we’ll have two solid implementations, each focusing on its own features/extensions first, so we’ll all have a choice of libraries for different needs (e.g. a browser lib optimised for fast decode, a creative-suite lib optimised for encode).

          • @[email protected]
            link
            fedilink
            English
            410 months ago

            Didn’t Google already include JPEG-XL support in developer versions of Chromium, just to remove it later?

            • @[email protected]
              link
              fedilink
              210 months ago

              Chromium had it behind a flag for a while, but if there were security or serious enough performance concerns then it would make sense to remove it and wait for the jpeg-xl encoder/decoder situation to change.

              It baffles me that someone large enough hasn’t gone out of their way to make a decoder for chromium.

              The video streaming services have done a lot of work to switch users to better formats to reduce their own costs.

              If a CDN doesn’t add it to chromium within the next 3 years, I’ll be seriously questioning their judgement.

              • The_Decryptor · 1 point · 10 months ago

                > Chromium had it behind a flag for a while, but if there were security or serious enough performance concerns then it would make sense to remove it and wait for the jpeg-xl encoder/decoder situation to change.

                Adobe announced they were supporting it (in Camera Raw); that’s when the Chrome team announced they were removing it (due to a “lack of industry interest”).

      • @[email protected]
        link
        fedilink
        5710 months ago
        • Existing JPEG files (which are the vast, vast majority of images currently on the web and in people’s own libraries/catalogs) can be losslessly compressed even further with zero loss of quality. This alone means there are benefits to adoption, if nothing else for archival and serving old stuff (see the sketch at the end of this comment).
        • JPEG XL encoding and decoding is much, much faster than pretty much any other format.
        • The format works for both lossy and lossless compression, depending on the use case and need. Photographs can be encoded in a lossy way much more efficiently than JPEG and things like screenshots can be losslessly encoded more efficiently than PNG.
        • The format anticipates being useful for both screens and prints. WebP, HEIF, and AVIF are all optimized for screen resolutions and fall short at the truly high-resolution uses appropriate for print. The JPEG XL format isn’t ready to replace camera RAW files, but there’s room in the spec to accommodate that use case, too.

        It’s great and should be adopted everywhere, to replace every raster format from JPEG photographs to animated GIFs (or the more modern live photos format with full color depth in moving pictures) to PNGs to scanned TIFFs with zero compression/loss.
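
        As a quick sketch of the lossless-recompression point, assuming the libjxl reference tools (cjxl/djxl) are installed and using placeholder file names:

            import subprocess

            # Losslessly transcode an existing JPEG into JPEG XL
            # (typically around 20% smaller, no change to the pixels).
            subprocess.run(["cjxl", "photo.jpg", "photo.jxl"], check=True)

            # The original JPEG bytes can be reconstructed from the .jxl file.
            subprocess.run(["djxl", "photo.jxl", "photo_restored.jpg"], check=True)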

        • Angry_Autist (he/him) · 21 points · 10 months ago

          This is why I fucking love the internet.

          I mean, I’ll never take the time to get this knowledgeable about image formats, but I am ABSOLUTELY fuckdamn thrilled that at least SOMEONE out there takes it seriously.

          Good on you, pixel king

        • The_Decryptor · 8 points · 10 months ago · edited

          > Existing JPEG files (which are the vast, vast majority of images currently on the web and in people’s own libraries/catalogs) can be losslessly compressed even further with zero loss of quality. This alone means that there’s benefits to adoption, if nothing else for archival and serving old stuff.

          Funny thing is, there was talk on the Chrome bug tracker of using just this ability transparently at the HTTP layer (like gzip/brotli compression), but they’re so set on pushing their AVIF format that they backed away from it.

        • @[email protected]
          link
          fedilink
          710 months ago
          > The format works for both lossy and lossless compression, depending on the use case and need. Photographs can be encoded in a lossy way much more efficiently than JPEG and things like screenshots can be losslessly encoded more efficiently than PNG.

          Someone made a fair point that having a format be both lossy and lossless is not necessarily a great idea. If you download a JPEG file you know it will be lossy; if you download a PNG it will be lossless. Sifting through JXL files to check whether each one is lossy or not doesn’t sound very fun.

          All in all I’m a big supporter of jxl though, it’s one of the only github repos I actively follow.

          • @[email protected]
            link
            fedilink
            410 months ago

            Functionally speaking, I don’t see this as a significant issue.

            JPEG quality settings can run a pretty wide gamut, and obviously wouldn’t be immediately apparent without viewing the file and analyzing the metadata. But if we’re looking at metadata, JPEG XL reports that stuff, too.

            Of course, the metadata might only report the most recent conversion, but that’s still a problem with all image formats, where conversion between GIF/PNG/JPG, or even edits to JPGs, would likely create lots of artifacts even if the last step happens to be lossless.

            You’re right that we should ensure that the metadata does accurately describe whether an image has ever been encoded in a lossy manner, though. It’s especially important for things like medical scans where every pixel matters, and needs to be trusted as coming from the sensor rather than an artifact of the encoding process, to eliminate some types of error. That’s why I’m hopeful that a full JXL based workflow for those images will preserve the details when necessary, and give fewer opportunities for that type of silent/unknown loss of data to occur.

          • @[email protected]
            link
            fedilink
            1010 months ago

            While I agree that it’s somewhat bad that there is no distinction between lossless and lossy jxl in the file extension, I think it’s really not a big deal compared to the present situation with jpg/png.

            The reason being that if you download a PNG file you have no idea whether it’s been converted from a JPG, whether it’s a screenshot of a JPG, or whether it’s been subjected to lossy re-encoding by a tool or a website’s upload process.

            The only thing you can really do to try and see if the file you’ve downloaded has suffered encoding loss is to do an image search on it and see if there are any better quality versions out there. You’d do the exact same thing with a jxl file.

    • mox · 5 points · 10 months ago · edited

      I think I would feel better using JPEG-XL where I currently use WebP. Here’s hoping for wider support.

    • @[email protected]
      link
      fedilink
      3210 months ago

      Adobe is backing the format, Apple support is coming along, and there are rumors that Apple is switching from HEIC to JPEG XL as a capture format as early as the iPhone 16 coming out in a few weeks. As soon as we have a full blown workflow that can take images from camera to post processing to publishing in JXL, we might see a pretty strong push for adoption at the user side (browsers, websites, chat programs, social media apps and sites, etc.).

        • @[email protected]
          link
          fedilink
          310 months ago

          QOI is just a format that’s easy for a programmer to get their head around.

          It’s not designed for everyday use and hardware optimization like jpeg-xl is.

          You’re most likely to see QOI in homebrewed game engines.
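
          To give a feel for the “easy to get your head around” part: the entire QOI header is 14 bytes, so even the container is trivial to parse (a quick sketch, not a full decoder):

              import struct

              def read_qoi_header(data: bytes):
                  # QOI header: magic "qoif", big-endian u32 width and height,
                  # then a 1-byte channel count and a 1-byte colorspace flag.
                  magic, width, height, channels, colorspace = struct.unpack(">4sIIBB", data[:14])
                  assert magic == b"qoif", "not a QOI file"
                  return width, height, channels, colorspace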

        • @[email protected]
          link
          fedilink
          310 months ago

          To be honest, no. I mainly know about JPEG XL because I’m acutely aware of the limitations of standard JPEG for both photography and high-resolution scanned documents, where noise and real-world messiness cause all sorts of problems. Something like QOI seems ideal for synthetic images, which I don’t work with a lot, so I don’t know the limitations of PNG as well.

    • comma · 1 point · 10 months ago

      Good news! I believe the Ladybird Browser intends to include support for JPEG XL.

  • @[email protected]
    link
    fedilink
    710 months ago

    The semantic web and social linked data. We could have applications share data without depending on big tech, but rather based on application standards.

    It can be used today and is gaining traction, but I wouldn’t mind it going faster. Especially the interoperable personal app space could use some love and attention.

      • @[email protected]
        link
        fedilink
        210 months ago

        Exactly. The Semantic Web is broader than Solid but Solid is great for personal apps.

        Say you buy a smartphone. The specifications of the smartphone likely belong elsewhere than in a Solid Personal Online Datastore, but they can be pulled in from semantic data on the product website. Your own proof of purchase is a great candidate for a Solid POD, as is the trace of any repairs made to it.

        These technologies are great to cross the barriers between applications. If we’d embrace this, it would be trivial to find the screen protector matching your exact smartphone because we’d have an identifier to discover its type and specifications. Heck, any product search would be easier if you could combine sources and compare with what you already have.
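
        For anyone unfamiliar, that product-page data is typically published as schema.org JSON-LD, roughly like this (all values made up):

            {
              "@context": "https://schema.org",
              "@type": "Product",
              "name": "ExamplePhone 12",
              "gtin13": "0123456789012",
              "brand": { "@type": "Brand", "name": "Example" },
              "width": { "@type": "QuantitativeValue", "value": 71.5, "unitCode": "MMT" }
            }

        An accessory listing that declares which product identifiers it fits could then be matched against this automatically.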

        The sharing tech exists. Building apps works too. What still seems lacking is a way for laypeople to interpret the information without a dedicated interface being built for it.

    • @[email protected]
      link
      fedilink
      810 months ago

      The biggest problems with gRPC are:

      1. Very complicated. Way more complexity than you want in most cases.
      2. Depends on HTTP/2. I’ve seen people who weren’t even doing web stuff reach for gRPC, and now boom, you have a web server in your stack for no reason. Compare to Thrift, which properly separates out encodings, transports, etc.
      3. Doesn’t work from the web. There are actually two modifications to gRPC to make it work on the web, which means you have three different incompatible versions of gRPC with different feature sets. IIRC some of them require setting up complex proxies, some don’t support streaming calls… ugh. Total mess.

      Plain HTTP can be type-safe. Just publish JSON Schema or TypeSpec files, or even use Protobuf.
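
      To illustrate that last point, a minimal sketch of a type-safe plain-HTTP endpoint (using FastAPI and Pydantic as one example stack; the JSON Schema / OpenAPI description is generated for you):

          from fastapi import FastAPI
          from pydantic import BaseModel

          app = FastAPI()

          class Greeting(BaseModel):
              name: str
              excited: bool = False

          @app.post("/greet")
          def greet(body: Greeting) -> dict[str, str]:
              # The body is validated against the model, and the generated
              # OpenAPI document (served at /openapi.json) describes it for clients.
              suffix = "!" if body.excited else "."
              return {"message": f"Hello, {body.name}{suffix}"}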

      • @[email protected]
        link
        fedilink
        English
        3
        edit-2
        10 months ago

        Your concerns are all valid, but for 1 and 3 there are possible solutions. I’m using Rust + Tonic to build an API, and that eliminates the need for proxies, and it’s very simple to use.

        I know that doesn’t solve every problem, but IMHO it’s a question of adoption: the easier it gets, the more tooling will be developed for it.

      • @[email protected]
        link
        fedilink
        English
        3
        edit-2
        10 months ago

        > Depends on HTTP 2.

        > Doesn’t work from the web.

        Am I the only one who is weirded out? Requiring a web server for something and then requiring another server if you want it to actually work on the web?
        How expensive do people want to make their deployments?

    • @[email protected]
      link
      fedilink
      7
      edit-2
      10 months ago

      It’s the recommended approach for replacing WCF, which was deprecated after .NET Framework 4.8. My company is just now getting around to ripping out all their WCF stuff and putting in gRPC. REST interfaces were always a non-starter because of how “heavyweight” they were for our use case (data collection from industrial devices which are themselves data collectors).

    • Caveman · 2 points · 10 months ago

      I like the concept, and I think the use case is almost covered by generating an API client from a generated OpenAPI spec.

      It needs a bit of setup, but a client library can be built whenever a backend server is built.

    • @[email protected]
      link
      fedilink
      English
      8
      edit-2
      10 months ago

      I mean, REST-ful JSON APIs can be perfectly type-safe, if their developers actually take care to make them that way. And the self-descriptive nature of JSON is arguably a benefit in really large public-facing APIs. But yeah, gRPC forces a certain amount of type-safety and version control, and gRPC with protobuf is SUCH a pleasure to work with.

      Give it time, though, it’s definitely gaining traction.

      • Magiilaro · 3 points · 10 months ago

        Oh, many years ago in school I created something like that for an arts/creative writing project: a calendar with twelve 30-day months based on Sailor Moon. Having it based on a magical girl manga gave me the freedom to declare the leftover days “days of evil”. It was a fun project because I created a whole religion around it. 😁

    • Magiilaro · 4 points · 10 months ago

      That sounds interesting. It would most likely not be very popular with lots of people, and a pain in the butt to implement, but interesting.

  • @[email protected]
    link
    fedilink
    6
    edit-2
    10 months ago

    Regex. Oddly, it has several variants rather than one standard, despite “regular” being in the name. Everyone I work with eschews it, but after finally taking the time to learn more than just the basics a few years ago, I find it incredibly useful almost daily.

      • @[email protected]
        link
        fedilink
        110 months ago

        regex101.com has a convenient searchable cheat sheet for all the somewhat odd but powerful features like negative lookbehind/lookahead, with a brief explanation of each. It has a regex pattern input with checkboxes that help you nail down single vs. global replacement, a large input where you can dump text to test against the pattern, and an explanation on the right of what each symbol matches, and the left side lets you switch between the different flavors to see some of the variants between languages/standards. I still have a lot to learn before I’ll consider it mastered, but I have enough common stuff memorized now that it works great for me!
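
        A tiny example of the lookbehind feature mentioned above, in Python flavor:

            import re

            # Negative lookbehind: match "bar" only when it is NOT preceded by "foo".
            pattern = re.compile(r"(?<!foo)bar")

            print(pattern.findall("foobar bar rebar"))  # ['bar', 'bar'] - skips the one in "foobar"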

  • Bora M. Alper · 40 points · 10 months ago

    ActivityPub :) People spend an incredible amount of time on social media, whether it be Facebook, Instagram, Twitter/X, TikTok, or YouTube, so it’d be nice to liberate that.

    • jelloeater · 2 points · 10 months ago · edited

      I mean, you’re in the right place to advocate for that 😜

    • @[email protected]
      link
      fedilink
      210 months ago

      Oh, this looks great!
      I’ve been struggling between Kustomize and Helm. Neither seems to make k8s easier to work with.

      I have to try CUE (cuelang) now: something sensible without significant whitespace that confuses editors, and variables without templating.
      I’ll have to see how it holds up with my projects.
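
      For anyone else curious, a small sketch of what CUE looks like (a hypothetical service schema, not tied to any real project):

          // Constraints live next to the data; no templating required.
          #Service: {
              name:     string
              port:     int & >0 & <65536
              replicas: *1 | int   // defaults to 1
          }

          // Concrete values are checked against the schema when unified with it.
          web: #Service & {
              name: "web"
              port: 8080
          }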

    • Eager Eagle · 6 points · 10 months ago · edited

      Oh, this! YAML was a terrible choice. And that’s coming from someone who likes Python and prefers whitespace over brackets. YAML never clicked for me.

      https://noyaml.com/

  • UFO · 9 points · 10 months ago

    Is IPFS usage growing? Stagnant? No idea… Distributed serving of content seems great.

    • CyclohexaneOP · 6 points · 10 months ago

      I never really quite understood IPFS and why it gets used where I see it today. What problem is it solving?

      • @[email protected]
        link
        fedilink
        410 months ago

        IPFS would replace Content Delivery Networks in present day.

        It would also allow you to host software and other content from your own network again without the constraints modern Internet Service Providers pose on you to limit your self-hosting capabilities.

        If applications are built for it, it could serve as live storage for your applications too.
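
        The core idea is content addressing: you ask the network for the hash of the data rather than for a location. A toy sketch of just that concept (not the real CID format or DHT):

            import hashlib

            def address(data: bytes) -> str:
                # Toy content address: just the SHA-256 of the bytes.
                return hashlib.sha256(data).hexdigest()

            store = {}                      # stands in for "whichever peers hold this block"
            blob = b"hello, distributed web"
            cid = address(blob)
            store[cid] = blob

            # Retrieval is by hash, so any peer holding the bytes can serve them,
            # and the receiver can verify integrity by re-hashing.
            assert address(store[cid]) == cid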

        We ran ipfs-search. In one of the experiments we could show that a distributed search index on ipfs-search, accessible through JavaScript, is likely feasible with the necessary research. Parts of the index would automatically be hosted by clients who used the index, thus creating a fairly resilient system.

        Too bad IPFS couldn’t get over the technical hurdles of limiting connection setup time. We could get a fast (ElasticSearch based) index running and hosted over common web technologies, but fetching content from IPFS directly was generally rather slow.

        • @[email protected]
          link
          fedilink
          410 months ago

          Would you be interested in a similar protocol that supports more things (and is IMO easier to set up)?

          • @[email protected]
            link
            fedilink
            210 months ago

            I’m not actively looking but please do share references! Other people may read this and they may want to know too. Perhaps I’ll jump back in the rabbit hole at some point too 😁

            • @[email protected]
              link
              fedilink
              210 months ago

              Okay here it goes!

              Tenfingers sharing protocol & python implementation (your python needs cryptodomex, or use the frozen executables).

              http://tenfingers.org

              You share theirs, they share yours (all encrypted)! So no benevolent nodes or crypto and it’s 100% decentralised.

              I’m working on better documentation for how to set it up (basically just forward a port and run the setup).

              • @[email protected]
                link
                fedilink
                310 months ago

                I had to read the overview and it looks nice. It reads like IPFS without some of the challenging cruft. Well written!

                IPFS seemingly works at small scale but not at large scale. What makes tenfingers handle millions of files and petabytes of data better than IPFS? Perhaps that is not the goal. In what way do you think the tech scales? Why will discovery of the node that has the data be quick?

                I want to ask for benchmarks but you can’t do a full benchmark without loads of resources.

                • @[email protected]
                  link
                  fedilink
                  2
                  edit-2
                  10 months ago

                  Thanks!

                  IPFS is static, whereas tenfingers is dynamic when it comes to the links, so you can update the shared data without needing to redistribute the link.

                  That said, it’s also very different tech-wise; there is no need for benevolent nodes (or any crypto or payment).

                  Nodes do not need to be trustworthy either, so node discovery is very simple (basically just ask other nodes for known nodes).

                  The distribution part, where nodes share your data, is based on reciprocal sharing: you share theirs and they share yours. If they stop sharing (there are checks), you just ditch the deal and ask for a new deal with another node.

                  With over-sharing (by default you share your data with 10 other nodes, and share theirs in return), bad nodes should be a non-issue, and it also makes for good uptime and takedown resistance.
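
                  Roughly, the upkeep of those deals boils down to something like this (a simplified sketch with made-up helper names, not the actual code):

                      OVERSHARE = 10   # default: your data is shared with 10 other nodes

                      def maintain_deals(my_data, deals, known_nodes):
                          # Drop peers that stopped holding our data (the "checks").
                          for peer in list(deals):
                              if not peer.still_serves(my_data):
                                  deals.remove(peer)
                          # Ask other known nodes for new reciprocal deals until
                          # we are back at the target redundancy.
                          for peer in known_nodes:
                              if len(deals) >= OVERSHARE:
                                  break
                              if peer not in deals and peer.accepts_deal(my_data):
                                  deals.add(peer)
                          return deals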

                  This also makes the system scalable node-wise more or less without limit, as a node does not need to know all other nodes, just enough for its needs (for example, thousands out of millions of existing nodes).

                  To share lots of data, you need to bring enough storage and bandwidth to the table, because it’s reciprocal; so basically it’s up to your node how much it can share.

                  Big data sets are always complicated because of errors and long download times. I have done 300 MB files without problems, but the download process can certainly be made better (with parallel downloading, for example, and better error handling).

                  I haven’t worked on sharing much bigger datasets; even a simple terabyte is a pita to download on the regular internet :-) and the use case is more about sharing lots of smaller data, like a website for example, or a chat.

                  What do you think, am I missing something important? Or of course if you have other questions please do ask!

                  Also, sorry I’m writing this on my mobile so it’s not very well written.

                  Edit: missed one question. Getting the data is straightforward (how it’s handled underneath is a bit complicated because of the changing nature of things), but when you download, you have the addresses of the nodes sharing your data, so you just connect to one of them and download it (or try the next one if the first isn’t up, and so on). So that should not be any kind of bottleneck.

      • @[email protected]
        link
        fedilink
        110 months ago

        Yeah, it’s basically benevolent nodes storing static data, where static means you cannot change it (or you have to upload new data and make a new link to it).

        Cool name though.

  • @[email protected]
    link
    fedilink
    17
    edit-2
    10 months ago

    The term “open standard” does not cut it. People should start using “publicly available and sharable” instead (maybe there is a better name for it).

    ISO standards, for example, are technically “open”. But how relevant is that to a curious individual developer, when anything you need to implement would require access to multiple “open” standards, each coming with a (monetary) price, with some extra shenanigans [archived] on top?

    IETF standards, however, are actually truly open, as in publicly available and sharable.

    • @[email protected]
      link
      fedilink
      English
      210 months ago

      Why do we call standards open when they require people to pay for access to the documents? To me that does not sound open at all.

      • @[email protected]
        link
        fedilink
        310 months ago

        Because the non-open ones are not available at all, even for a price, unless you buy something bigger than the “standard” itself, like the company responsible for it or access to it.

        There is also the process of standardization itself, with committees, working groups, public proposals, etc. involved.

        Anyway, we can’t backtrack on calling ISO standards and the like “open” at the global level, hence my suggestion to use more precise language (“publicly available and sharable”) when talking about truly open standards.

      • @[email protected]
        link
        fedilink
        210 months ago

        It’s a historical quirk of the industry. This stuff came around before open-source software and the OSI definition were ever a thing.

        10BASE5 Ethernet was an open standard from the IEEE. If you were implementing it, you were almost certainly an engineer at a hardware manufacturer that made NICs or hubs or something. If it cost $1,000 to purchase the standard, that was OK; your company bought it as the cost of entering the market. This stuff was well out of reach of amateurs at the time, anyway.

        It wasn’t like, say, DECnet, which began as a DEC project for use only in their own systems (but later did open up).

        And then you have things like “The Open Group”, which controls X11 and the Unix trademark. They are not particularly open by today’s standards, but they were at the time.

  • lime! · 31 points · 10 months ago

    I’m a Plan 9 from Bell Labs fan. Imagine how excited I was when WSL used 9P for its plumbing. Then they scrapped it all for WSL2.

    Just the power they managed to get out of those union mounts… Your application wants access to the mouse? Sure, here’s a file named “mouse”; it’s got the coordinates in it. You want to draw to the screen? Here’s a file called “bitmap” or whatever; just write to it. You want to start a process on another machine? Just cd to it and start the process there. Want the UI to show up on your machine? Symlink your bitmap file to that directory.
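
    In Python-ish pseudocode, the idea is roughly this (a loose conceptual sketch; Plan 9’s real device files and record formats differ in the details):

        # Devices are just files in the namespace, so reading input is reading a file.
        with open("/dev/mouse") as mouse:
            event = mouse.read(49)   # one small fixed-size text record: x, y, buttons, timestamp
            print(event)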

    I also wish early web composability could have stayed and expanded. Like the old VLC embed player, which would just show up in your browser and could play any file inline? Great stuff. Imagine if every application composed with everything else, like the Android Activity and Intent concepts but for anything, just by virtue of living in the same OS. Need an image? Just ask the OS and it will present the user with many ways to procure one, let the selected one run, and hand you back an image; you don’t even have to care where it came from. In a way, it’s what the Arcan guy is doing with his experiments, although that’s more about stitching together graphical pipelines.

        • The_Decryptor · 6 points · 10 months ago

          They’re “file-like” in the sense that they’re exposed as an fd, but they’re not exposed via the filesystem at all (unlike, e.g., Unix sockets), and the existing API is just mapped over the sockets one (i.e. write() instead of send(), read() instead of recv()). There’s also a difference in how you create them: you open() a file, but connect() a socket, etc.
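
          In Python terms the asymmetry looks something like this (illustrative only; assumes a typical Linux box, network access, and example.com as a placeholder host):

              import socket

              # A file comes out of the filesystem namespace via open():
              f = open("/etc/hostname")

              # A socket is created and connected through its own API, and only then
              # behaves "file-ish" because there is a descriptor underneath:
              s = socket.create_connection(("example.com", 80))
              rw = s.makefile("rwb")   # wrap the descriptor in a file object
              rw.write(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
              rw.flush()
              print(rw.readline())     # status line, e.g. b'HTTP/1.0 200 OK\r\n'
              s.close()
              f.close()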

          (As an aside, it turns out Bash has its own virtual file-based wrapper around sockets, so you can do things like cat a remote port with Bash, something you can do natively in Plan 9)

          Really it just shows that “everything is a file” didn’t stand up in practice; there’s more stuff that needs special treatment than doesn’t (e.g. interacting with TTYs also has special APIs). It makes more sense to have a well-designed dedicated API than a generic catch-all one.

  • @[email protected]
    link
    fedilink
    24
    edit-2
    10 months ago

    PGP or GPG, however you spell it. You can encrypt stuff, protect your email from prying eyes!

    Also FOSS in general.

    • @[email protected]
      link
      fedilink
      210 months ago

      The tooling around it needs to be brought up to snuff. It seems like it hasn’t evolved much in the last 20+ years.

      I had a small team make an attempt to use it at work. Our conclusion was that it was too clunky. Email plugins would fool you into thinking it was encrypted when it wasn’t. When it did encrypt, the result wasn’t consistently readable by plugins on the receiving end. The most consistent method was to write a plaintext doc, encrypt it, and attach the encrypted version to the email. Also, key servers are set up by amateurs who maintain them in their spare time, and aren’t very reliable.

      One of the more useful things we could do is have developers sign their git commits. GitHub can verify the signature using a similar setup to SSH keys.
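
      For reference, the git side of that is just a couple of config keys (the key ID below is a placeholder):

          # ~/.gitconfig
          [user]
              signingkey = ABCDEF1234567890   # your GPG key ID
          [commit]
              gpgsign = true                  # sign every commit by default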

      It’s also possible to use TLS in a web of trust way, but the tooling around it doesn’t make it easy.

    • jelloeater · 1 point · 10 months ago

      Huge fan of PHP…I mean PGP, oh god auto correct, you scary 😳

  • @[email protected]
    link
    fedilink
    English
    4910 months ago

    JSON5. It’s basically just JSON with several QoL improvements, like comments, that make it usable as a format for human consumption (as opposed to a pure serialization format).
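
    A small taste of what it allows on top of plain JSON:

        {
          // comments, finally
          name: 'app-config',      // unquoted keys, single-quoted strings
          retries: 3,
          timeoutSeconds: 1.5,
          tags: [
            'alpha',
            'beta',                // trailing commas are fine
          ],
        }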