There are a couple I have in mind. Like many techies, I am a huge fan of RSS for content distribution and XMPP for federated communication.
The really niche one I like is S-expressions as a data and configuration format in place of JSON, YAML, TOML, etc.
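For illustration, here's roughly what a small service config might look like as S-expressions (the fields are invented for the example):

```lisp
;; hypothetical service config; one uniform syntax for data and config
(service
  (name "web")
  (port 8080)
  (tls (cert "/etc/certs/web.pem")
       (key  "/etc/certs/web.key"))
  (replicas 3))
```

The appeal is that the same trivial reader parses data, configuration, and even code.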
I am a big fan of plaintext formats, although I wish Markdown had a few more features, like tables.
I’ll give my usual contribution to RSS feed discourse, which is that, news flash! RSS feeds support video!
It drives me crazy when podcasters are like, “thanks for listening to our audio podcasts. We also have a video feed for our YouTube subscribers.” Just let me have the video in PocketCasts please!
I feel you, but I don't think podcasters point to YouTube for video feeds because of a supposed limitation of RSS. They do it because of the storage and bandwidth costs of hosting video.
I’d think they’d get it back by not having to share their ad rev with Google. There’s something to be said for the economies of scale Google benefits from but with cloud services that’s not as relevant as it was.
I just wrote a YouTube scraper and exported to RSS and into my podcast client. Using YouTube any other way is masochism in comparison.
The semantic web and social linked data. We could have applications share data without depending on big tech, but rather based on application standards.
It can be used today and is gaining traction, but I wouldn't mind it going faster. Especially the interoperable personal app space could use some love and attention.
Like with the Solid Project?
Exactly. The Semantic Web is broader than Solid but Solid is great for personal apps.
Say you buy a smartphone. The specifications of the smartphone likely belong elsewhere than in a Solid Personal Online Datastore, but they can be pulled in from semantic data on the product website. Your own proof of purchase is a great candidate for a Solid POD, as is the trace of any repairs made to it.
These technologies are great to cross the barriers between applications. If we’d embrace this, it would be trivial to find the screen protector matching your exact smartphone because we’d have an identifier to discover its type and specifications. Heck, any product search would be easier if you could combine sources and compare with what you already have.
The sharing tech exists, and building apps works too. What seems lacking is a way for laypeople to interpret the information without building a dedicated interface.
ISO 216 paper sizes work like this: https://www.printed.com/blog/paper-size-guide/
It’s so fucking neat and intuitive! How is it not used more???
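For the curious, the whole system falls out of two rules: A0 has an area of 1 m² and a √2:1 aspect ratio, and each following size is the previous one halved along its long edge (rounded down). A quick sketch:

```python
import math

# A0: area 1 m^2, aspect ratio sqrt(2):1
w_mm = round(math.sqrt(1 / math.sqrt(2)) * 1000)  # short side: 841
h_mm = round(math.sqrt(math.sqrt(2)) * 1000)      # long side: 1189

for n in range(7):
    print(f"A{n}: {w_mm} x {h_mm} mm")
    w_mm, h_mm = h_mm // 2, w_mm  # halve along the long edge, rounding down
```

Because the ratio is √2:1, halving a sheet preserves its proportions, which is why a single layout scales across the whole series (A4 comes out as 210 x 297 mm, matching the standard).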
Most preschool kids know what an A4 sheet is. Not sure how it can be used more.
sorry to tell you this bud…
Clearly the rest of the world are communists! It’s not us, it’s you! I’m not crying you’re crying! 😭😭😭
It's also worth noting that switching from ANSI to ISO 216 paper would not be a substantial physical undertaking, as the short side of even-numbered ISO 216 paper (e.g. A2, A4, A6, etc.) is narrower than for ANSI equivalents. And for the odd-numbered sizes, I've seen Tabloid-size printers in America which generously accommodate A3.
For comparison, the standard “Letter” paper size (aka ANSI A) is 8.5 inches by 11 inches. (note: I’m sticking with American units because I hope Americans read this). Whereas the similar A4 paper size is 8.3 inches by 11.7 inches. Unless you have the rare, oddball printer which takes paper long-edge first, this means all domestic and small-business printers could start printing A4 today.
In fact, for businesses with an excess stock of company-labeled #10 envelopes – a common size of envelope, measuring 4.125 inches by 9.5 inches – a sheet of A4 folded into thirds will still (just barely) fit. Although this would require precision folding, that’s no problem for automated letter mailing systems. Note that the common #9 envelope (3.875 inches by 8.875 inches) used for return envelopes will not fit an A4 sheet folded in thirds. It would be advisable to switch entirely to A series paper and C series envelopes at the same time.
Confusingly, North America has an A-series of envelopes, which bear no relation to the ISO 216 paper series. Fortunately, the overlap is only for the less-common A2, A6, and A7.
TL;DR: bring reams of A4 to the USA and we can use it. And Tabloid-size printers often accept A3.
My printer will print and scan any A-size paper. But I can't even buy A paper! Fucking America
Also, A4 simply has a better ratio than Letter. Letter is too wide, so A4 is better to hold and fits more lines per page.
Presumably you could just buy that paper size? They’re pretty similar sizes; printers all support both sizes. I’ve never had an issue printing a US Letter sized PDF (which I assume I have done).
Kind of weird that you guys stick to US Letter when switching would be zero effort. I guess to be fair there aren’t really any practical benefits either.
I’ve literally never even seen A paper in America. Probably would have to special order it from another country
Ah fair enough.
I mean I’d love to use it. Of course America is behind the times of civilized nations.
Not sure if it counts, but the terminal world is amazing: a place where many applications do so many different things, yet remain interoperable. I guess that would be the POSIX standard?
Problem Details for HTTP APIs (RFC 9457, formerly RFC 7807). I have to work and integrate with a lot of different APIs and different kinds of implementations of error handling. Everyone seems to be inventing their own flavor of returning errors.
My life would be so much easier if everyone just used some "global unified" way of returning errors, all in the same way.
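For reference, a Problem Details response looks roughly like this; the type URI and values are illustrative, adapted from the RFC's out-of-credit example:

```
HTTP/1.1 403 Forbidden
Content-Type: application/problem+json

{
  "type": "https://example.com/probs/out-of-credit",
  "title": "You do not have enough credit.",
  "status": 403,
  "detail": "Your current balance is 30, but that costs 50.",
  "instance": "/account/12345/msgs/abc"
}
```

Clients can then branch on the `type` URI instead of scraping error strings.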
Best is when the API doesn't match its PDF documentation and just says "500: Internal Error".
I made my first API at work last year (still making it) and kept looking for input on a consistent way to return errors, with no useful input from the senior programmers or the API users. This is my second biggest problem, the first being variable and function names, of course.
If I were to do anything related to HTTP, I now have something to look at.
That would be nice. I have implemented this in the past but never once encountered an API that used it.
Please guys, stop using line breaks mid-sentence. It's not the '90s anymore; viewers can generally wrap text.
Maybe a bad markdown viewer?
Viewer? Generator? No: in general, the Markdown format suggests using line breaks in the middle of paragraphs to make the source just as readable as the output. That's why two line breaks are what create a new paragraph. So it's the viewer showing it incorrectly here.
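A quick sketch of the rule:

```markdown
These two source lines
render as one paragraph.

A blank line starts a new paragraph.
```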
The screenshot is of the website ietf.org, which doesn't seem to be markdown.
I wish there was a good open standard for task management or todo lists.
I know there’s todo.txt, but it lacks features like dependent tasks, and overall the plain text format limits features and implementations.
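For context, todo.txt packs everything into a single line per task: an optional priority, dates, free text, and +project / @context / key:value tags. The tasks below are invented examples:

```
(A) 2024-05-01 Call plumber about the sink @phone +house due:2024-05-03
x 2024-05-02 2024-05-01 Buy a replacement washer @hardware +house
```

There's no standard field for "blocked by that other task", which is exactly the dependency gap above.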
Do you know if it allows dependent tasks?
Yes, but not all clients expose dependent tasks (which is sadly a common issue with open standards: they aren't always properly implemented). I'm using Tasks.org on my phone (which supports dependent tasks), synchronizing to a Nextcloud server with the Tasks app (which supports dependent tasks now, but didn't for a long time), which also syncs to Thunderbird (which does not appear to show dependent tasks as dependents).

Edit: remembered that the Nextcloud Tasks app has long supported dependent tasks. I was thinking of recurring tasks, which it does not support. Again, open standards aren't always fully implemented.
Well that’s still good news that I didn’t expect! I suppose I will look into that then. Thank you!
XMPP, RSS, …
XMPP is not a good protocol though. There’s a reason nobody uses it anymore.
I think it’s going to be interesting when the EU tries to enforce interoperability between the major messaging platforms. What are they going to do? They have some ridiculous targets like interoperable end-to-end encrypted group video calls in 5 years!
There’s a reason nobody uses it anymore.
I and many others use it! And Google, Meta, etc. have used it but decided to lock it down.
Yes you’re right, there’s a reason people don’t use it as much, which is because these corporations embraced it, dominated it, then extinguished it.
But XMPP is honestly my favorite comm protocol and the most impressive imo.
There’s a reason nobody uses it anymore.
Yeah, Google and Facebook EEE'd it.
XMPP is not a good protocol though.
Do elaborate.
XMPP is very old and was created when nobody knew mobile phones were coming. It works more like a true messaging app than a message store (unlike Matrix).
The requirement of a permanent TCP/IP connection doesn't work well on mobile, and pretty much every useful feature in XMPP (like message history) is optional. If something doesn't work in XMPP, most people blame XMPP/Jabber rather than the lack of feature support on their server.
It works more like a true messaging app than a message store (unlike Matrix).
Can you please elaborate this point? I don’t understand what you mean by “true messaging app” and why that would be a bad thing?
The requirement of a permanent TCP/IP connection
Are you sure this is the case? Maybe back in the day, but my understanding is this isn’t true anymore
pretty much every useful feature in XMPP (like message history) is optional
Why is user choice a bad thing? There’s a wealth of clients that implement the features you want
If something doesn't work in XMPP, most people blame XMPP
This may not be an important point, but from my experience, people always blame the client and not the underlying protocol. If I face an issue with my browser, I’d likely blame the browser before I blame http.
XMPP is very old
Seriously? That’s your argument? So is the wheel.
The requirement of a permanent TCP/IP connection doesn't work well on mobile
I was under the impression PubSub was created for that.
Still, it’s an open extensible protocol.
XMPP is very old
Seriously? That’s your argument? So is the wheel.
They elaborated how that relates; usage scenario changed with mobile phones. XMPP is a bad match.
XMPP is a bad match.
The X is for extensible, so are a whole bunch of other protocols and people haven’t stopped using them, they get improved upon (for the most part).
The mentioned permanent TCP/IP connection (which you don't necessarily have on mobile) too?
Seriously, if you take just one line out of the whole response, you end up fighting straw men.
I just told you that Jabber/XMPP was created at a time when almost nobody knew or believed mobile phones could be a thing. Thus it was created that way: XMPP has many similarities to e-mail, IRC, or ICQ, which didn't stand the passage of time.
Of course, you're right that XMPP evolved to get the PubSub extension as an "optional feature", but because of its availability (or rather the lack of it), with most servers not supporting it even when the client did, XMPP didn't win the acceptance of end users. It got some attention in the business world (Cisco Jabber) but not in retail.
A business cannot run forever without clients willing to pay, or at least to use the product, so it died off even in the business world.
End of story; try not to fight the straw men you created.
Of course, you're right that XMPP evolved to get the PubSub extension as an "optional feature", but because of its availability (or rather the lack of it), with most servers not supporting it even when the client did, XMPP didn't win the acceptance of end users. It got some attention in the business world (Cisco Jabber) but not in retail.
That XMPP’s extensibility is in itself a strength and a weakness is indeed a valid argument, as you’ve exemplified. I was expecting you’d criticize OMEMO though…
A business cannot run forever without clients willing to pay, or at least to use the product, so it died off even in the business world.
No, it didn’t die off, it’s still used. IRC is still used as well, probably more or less at the same level. But if you define usage as “used in business” well then probably just a few cases, yes.
I hadn't heard of Cisco Jabber, but I've heard of Google and Facebook: both companies' messengers were initially based on XMPP, but they EEE'd it once they got enough users and walled their gardens, dealing a major blow to the protocol.
End of story; try not to fight the straw men you created.
Can i fight my inner daemons at least? Please?
I use xmpp. It happens to be a great fit for a private family messaging service. Good interoperability between modern clients. I get that “nobody uses it” is hyperbole, but the internet is a big place and there is room for services without mass market appeal to thrive.
For RSS I honestly don't see the point, at least for me. What's the use of having update feeds in a unified format when I still have to go to each fucking site to view the full text? I completely see the point of RSS when all I need is in the feed. But I hate going from different UI to different UI to get the full content. I want something like inoreader.com for self-hosting.
RSS works great for me though.
I have an app on my not-so-smart phone to read news when commuting. It is not a long journey, so I just want to have a quick glance at the headlines and read the actual articles that I want to. There are only 6 sites that I am interested in, but it would still take quite some work to crawl the proper websites. RSS in turn is unified, so I don't need to worry about their website layouts, formats, etc. It also gives me a URL to the actual content, which I can parse with a readability/reader-mode library to further strip unnecessary content.
Quite the opposite, I hope more informational sites offer/keep RSS! (Some have removed RSS, typically after a revamp or design change.)
Mastodon offers RSS for both keywords and users.
What's the use of having update feeds in a unified format when I still have to go to each fucking site to view the full text
This has nothing to do with RSS, it is the author’s choice. It’s like someone who posts links to their articles on Twitter / Facebook / Reddit, same thing. The platform doesn’t prevent you from putting the entire content there, and in fact, many do, especially with RSS.
One benefit of RSS though is that because it is an open protocol, the problem you mention already has solutions, which auto fetch the articles for you. That wouldn’t be possible without an open protocol like RSS
Moreover, I’d argue even with that, RSS is still a huge plus. To have all your content’s headlines in one UI, and potentially you can filter or sort them however you want, that’s pretty awesome.
The content of the feed depends on the content creator, not on RSS.
I know that. But RSS is like 95% used for news feeds, and that's what I'm talking about. The way RSS is overwhelmingly used makes the whole thing useless (to me).
Well then, just treat those giving shitty support for it as if they weren't supporting it at all.
Miniflux is likely to tick most of your boxes. It’s self hostable and can download the full article without extra clicks / having to visit the source.
Thanks, I’ll take a look. These days Inoreader also shows only the summary, making it useless for me.
It’s completely bonkers that JPEG-XL is as good as it is and no one wants to actually implement it into web browsers
Adobe is backing the format, Apple support is coming along, and there are rumors that Apple is switching from HEIC to JPEG XL as a capture format as early as the iPhone 16 coming out in a few weeks. As soon as we have a full blown workflow that can take images from camera to post processing to publishing in JXL, we might see a pretty strong push for adoption at the user side (browsers, websites, chat programs, social media apps and sites, etc.).
Do you know the QOI format? I would appreciate your opinion on it.
To be honest, no. I mainly know about JPEG XL because I'm acutely aware of the limitations of standard JPEG for both photography and high-resolution scanned documents, where noise and real-world messiness cause all sorts of problems. Something like QOI seems ideal for synthetic images, which I don't work with a lot, so I don't know the limitations of PNG as well.
QOI is just a format that’s easy for a programmer to get their head around.
It’s not designed for everyday use and hardware optimization like jpeg-xl is.
You’re most likely to see QOI in homebrewed game engines.
I think I would feel better using JPEG-XL where I currently use WebP. Here’s hoping for wider support.
What’s so good about it?
Basically smaller file sizes than JPEG at the same quality, and it also automatically loads a lower-quality version of the image before it loads the higher-quality version, instead of loading pixel by pixel like an image normally would. Google refuses to implement this tech in Chrome because they have their own AVIF format, which isn't bad but is significantly outclassed by JPEG-XL in nearly every conceivable metric. Mozilla also isn't putting JPEG-XL into Firefox, for whatever reason. If you want more detail, here's an eight-minute video about it.
I'm under the impression that there are two reasons we don't have it in Chromium yet:

- Google initially ignored JPEG-XL, but then everyone jumped on it, and now they feel they have to create a post-hoc justification for not supporting it earlier, which is tricky; that leaves them with a sunk-cost incentive to keep ignoring it.
- Google was burnt by the WebP vulnerability, which happened because there was only one decoder library, so now they're waiting for more JPEG-XL libraries which have optimizations (which rules out reference implementations), good support (which rules out libraries by single authors), proven battle-hardening (which will only happen over time), and which are written safely to avoid another WebP-style vulnerability.

Google already wrote the Wuffs language, which is specifically designed to handle formats in a fast and safe way, but it looks like it only has one dedicated maintainer, which means it's still stuck on a bus factor of 1.
Honestly, Google or Microsoft should just put a team on a JPEG-XL library in Wuffs, while Adobe should put a team on a JPEG-XL library in Rust/Zig.
That way everyone will be happy: we will have two solid implementations, and they'll both be made focusing on their own features/extensions first, so we'll all have a choice among libraries for different needs (e.g. a browser lib focusing on fast decode, a creative-suite lib for optimised encode).
Didn't Google include JPEG-XL support already in developer versions of Chromium, just to remove it later?
Chromium had it behind a flag for a while, but if there were security or serious enough performance concerns then it would make sense to remove it and wait for the jpeg-xl encoder/decoder situation to change.
It baffles me that someone large enough hasn’t gone out of their way to make a decoder for chromium.
The video streaming services have done a lot of work to switch users to better formats to reduce their own costs.
If a CDN doesn’t add it to chromium within the next 3 years, I’ll be seriously questioning their judgement.
Chromium had it behind a flag for a while, but if there were security or serious enough performance concerns then it would make sense to remove it and wait for the jpeg-xl encoder/decoder situation to change.
Adobe announced they were supporting it (in Camera Raw); that's when the Chrome team announced they were removing it (due to a "lack of industry interest").
- Existing JPEG files (which are the vast, vast majority of images currently on the web and in people’s own libraries/catalogs) can be losslessly compressed even further with zero loss of quality. This alone means that there’s benefits to adoption, if nothing else for archival and serving old stuff.
- JPEG XL encoding and decoding is much, much faster than pretty much any other format.
- The format works for both lossy and lossless compression, depending on the use case and need. Photographs can be encoded in a lossy way much more efficiently than JPEG and things like screenshots can be losslessly encoded more efficiently than PNG.
- The format anticipates being useful for both screen and prints. Webp, HEIF, and AVIF are all optimized for screen resolutions, and fail at truly high resolution uses appropriate for prints. The JPEG XL format isn’t ready to replace camera RAW files, but there’s room in the spec to accommodate that use case, too.
It’s great and should be adopted everywhere, to replace every raster format from JPEG photographs to animated GIFs (or the more modern live photos format with full color depth in moving pictures) to PNGs to scanned TIFFs with zero compression/loss.
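On the lossless-recompression point, a minimal sketch using libjxl's reference CLI tools, assuming cjxl/djxl are installed (lossless JPEG transcoding is the default for JPEG input, as far as I know):

```sh
# Losslessly squeeze an existing JPEG into JPEG XL
cjxl input.jpg output.jxl

# Reconstruct the original JPEG from the .jxl, bit for bit
djxl output.jxl restored.jpg
```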
Existing JPEG files (which are the vast, vast majority of images currently on the web and in people’s own libraries/catalogs) can be losslessly compressed even further with zero loss of quality. This alone means that there’s benefits to adoption, if nothing else for archival and serving old stuff.
Funny thing is, there was talk on the Chrome bug tracker of using just this ability transparently at the HTTP layer (like gzip/brotli compression), but they’re so set on pushing their AVIF format that they backed away from it.
- The format works for both lossy and lossless compression, depending on the use case and need. Photographs can be encoded in a lossy way much more efficiently than JPEG and things like screenshots can be losslessly encoded more efficiently than PNG.
Someone made a fair point that having a format be both lossy and lossless is not necessarily a great idea. If you download a JPEG file you know it will be compressed; if you download a PNG it will be lossless. Sifting through JXL files to check whether each is lossy or not doesn't sound very fun.
All in all I'm a big supporter of JXL though; it's one of the only GitHub repos I actively follow.
Functionally speaking, I don’t see this as a significant issue.
JPEG quality settings can run a pretty wide gamut, and obviously wouldn’t be immediately apparent without viewing the file and analyzing the metadata. But if we’re looking at metadata, JPEG XL reports that stuff, too.
Of course, the metadata might only report the most recent conversion, but that’s still a problem with all image formats, where conversion between GIF/PNG/JPG, or even edits to JPGs, would likely create lots of artifacts even if the last step happens to be lossless.
You’re right that we should ensure that the metadata does accurately describe whether an image has ever been encoded in a lossy manner, though. It’s especially important for things like medical scans where every pixel matters, and needs to be trusted as coming from the sensor rather than an artifact of the encoding process, to eliminate some types of error. That’s why I’m hopeful that a full JXL based workflow for those images will preserve the details when necessary, and give fewer opportunities for that type of silent/unknown loss of data to occur.
While I agree that it’s somewhat bad that there is no distinction between lossless and lossy jxl in the file extension, I think it’s really not a big deal compared to the present situation with jpg/png.
The reason being that if you download a PNG file, you have no idea if it's been converted from JPEG, if it's a screenshot of a JPEG, or if it's been subjected to lossy re-encoding by a tool or a website upload process.
The only thing you can really do to try and see if the file you’ve downloaded has suffered encoding loss is to do an image search on it and see if there are any better quality versions out there. You’d do the exact same thing with a jxl file.
This is why I fucking love the internet.
I mean, I'll never take the time to get this knowledgeable about image formats, but I am ABSOLUTELY fuckdamn thrilled that at least SOMEONE out there takes it seriously.
Good on you, pixel king
Good news! I believe the Ladybird Browser intends to include support for JPEG XL.
JSON5. It's basically just JSON with several QoL improvements, like comments, that make it usable as a format for human consumption (as opposed to a serialization format).
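A small sample of things JSON5 accepts that plain JSON rejects (values invented for illustration):

```json5
{
  // comments are allowed
  name: 'example',     // unquoted keys, single-quoted strings
  retries: 3,
  timeout: .5,         // leading decimal point
  tags: ['a', 'b',],   // trailing commas
}
```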
TIL this exists
Objects may have a single trailing comma.
I just came.
TMI
I love that there’s someone out there who’s that passionate about JSON.
I hate grammars in anything that don't support trailing commas. It's even worse when it's supported in some contexts and not others, like lists being OK but not function parameters.
I'm a Plan 9 from Bell Labs fan. Imagine how excited I was when WSL used 9P for its plumbing. Then they scrapped it all for WSL2.
Just, the power they managed to get out of those union mounts… Your application wants access to the mouse? Sure, here's a file named "mouse"; it's got the coordinates in it. You want to draw to the screen? Here's a file called "bitmap" or whatever, just write to it. You want to start a process on another machine? Just cd to it and start the process there. Want the UI to show up on your machine? Symlink your bitmap file into that directory.
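Roughly what that looks like on a stock Plan 9 box; the /dev paths are the standard ones, though the exact import invocation is from memory and may need adjusting:

```sh
cat /dev/mouse                         # each read yields an event: m <x> <y> <buttons> <msec>
cp /dev/screen screen.bit              # the live screen is readable as an ordinary image file
import othermachine /dev /n/other/dev  # mount another machine's devices over 9P
```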
I also wish early web composability could have stayed and expanded. Like the old VLC embed player, which would just show up in your browser and could play any file inline? Great stuff. Imagine if every application composed with everything else, like the Android Activity and Intent concepts but for anything, just by virtue of living in the same OS. Need an image? Just ask the OS and it will present the user with many ways to procure one, let the selected one run, and hand you back an image; you don't even have to care where it came from. In a way, it's what the Arcan guy is doing with his experiments, although that's more for stitching together graphical pipelines.
Plan 9 even extended the “everything is a file” philosophy to networking, unlike everybody else that used sockets instead.
Are sockets not files?
They're "file like" in the sense that they're exposed as an `fd`, but they're not exposed via the filesystem at all (unlike e.g. Unix sockets), and the existing API is just mapped over the sockets one (i.e. `write()` instead of `send()`, `read()` instead of `recv()`). There's also a difference in how you create them: you `open()` a file, but `connect()` a socket, etc.

(As an aside, it turns out Bash has its own virtual file-based wrapper around sockets, so you can do things like `cat` a remote port with Bash, something you can do natively in Plan 9.)

Really it just shows that "everything is a file" didn't stand up in practice; there's more stuff that needs special treatment than doesn't (e.g. interacting with TTYs also has special APIs). It makes more sense to have a better dedicated API than a generic catch-all one.
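For the curious, the Bash trick mentioned above: the /dev/tcp/<host>/<port> path is interpreted by Bash itself, not the kernel. A minimal sketch against a placeholder host:

```bash
exec 3<>/dev/tcp/example.com/80                            # open a TCP connection on fd 3
printf 'HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n' >&3  # send a request
cat <&3                                                    # read the response
exec 3<&-                                                  # close the descriptor
```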
IRC.
Jabber.
IPFS.
Yes and RSS feeds.
I also pick this guy’s IRC
The term "open standard" does not cut it. People should start using "publicly available and sharable" instead (maybe there is a better name for it).
ISO standards for example are technically “open”. But how relevant is that to a curious individual developer when anything you need to implement would require access to multiple “open” standards, each coming with a (monetary) price, with some extra shenanigans [archived] on top.
IETF standards however are actually truly open, as in publicly available and sharable.
Why do we call standards open when they require people to pay for access to the documents? To me that does not sound open at all.
It’s a historical quirk of the industry. This stuff came around before Open Source Software and the OSI definition was ever a thing.
10BASE5 Ethernet was an open standard from the IEEE. If you were implementing it, you were almost certainly an engineer at a hardware manufacturer that made NICs or hubs or something. If the standard cost $1,000 to purchase, that was OK; your company bought it as the cost of entering the market. This stuff was well out of reach of amateurs at the time, anyway.
It wasn’t like, say, DECnet, which began as a DEC project for use only in their own systems (but later did open up).
And then you have things like “The Open Group”, which controls X11 and the Unix trademark. They are not particularly open by today’s standards, but they were at the time.
Because non-open ones are not available, even for a price. Unless, of course, you buy something bigger than the "standard" itself, like a company that is responsible for it or has access to it.
There is also the process of standardization itself, with committees, working groups, public proposals, etc. involved.
Anyway, we can’t backtrack on calling ISO standards and their likes “open” on the global level, hence my suggestion to use more precise language (“publicly available and sharable”) when talking about truly open standards.
how about FOSS, free and open-source standards /s
https://cuelang.org/. I deal with a lot of k8s at work, and I’ve grown to hate YAML for complex configuration. The extra guardrails that Cue provides are hugely helpful for large projects.
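To give a flavor of those guardrails, a small sketch; the field names are made up, but the constraint syntax is standard CUE:

```cue
#Service: {
	name:     string
	replicas: int & >=1 & <=10  // must be an integer in this range
	port:     int & >0 & <65536
}

web: #Service & {
	name:     "web"
	replicas: 3
	port:     8080
}
```

If a value drifts out of range anywhere, `cue vet` fails instead of the bad config reaching the cluster.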
Hmm, what's the alternative? XML :-)? People hate the Gradle DSL just for not being XML.
Oh, this looks great!
I've been struggling between Kustomize and Helm; neither seems to make k8s easier to work with. I have to try CUE now: something sensible without significant whitespace that confuses editors, and variables without templating. I'll have to see how it holds up with my projects.

Oh, this! YAML was a terrible choice. And that's coming from someone who likes Python and prefers whitespace over brackets. YAML never clicked for me.
What do you mean you can't easily tell what this is?
```yaml
- foo:
  - -
  : -
  bar:
  baz: [
  - -
  ]
```
Regex. Odd that it has several variants rather than a standard, despite "regular" being in the name. Everyone I work with eschews regex, but after finally taking the time to learn more than just the basics of it a few years ago, I find it incredibly useful almost daily.
Very much the same. I was terrified of regex, now I love it
What resource did you use to master it? As every time I have to use regex I want to cry.
regex101.com has a convenient searchable cheat sheet for all the somewhat odd but powerful functions, like negative lookbehind/lookahead, with a brief explanation of each. There's a regex pattern input with checkable boxes that helps you nail down single vs. global replacements, a large input that lets you dump text to test against the pattern, and an explanation on the right of what each symbol is trying to match, while the left side lets you switch between the different flavors to see the variants between languages/standards. I still have a lot to learn before I'll consider it mastered, but I have enough common stuff memorized now that it works great for me!
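As a taste of the lookarounds mentioned above, a tiny sketch (patterns and sample text invented):

```python
import re

# negative lookahead: match "foo" only when NOT followed by "bar"
print(re.findall(r"foo(?!bar)", "foobar foobaz foo"))  # ['foo', 'foo']

# positive lookbehind: match digits only when preceded by "$"
print(re.findall(r"(?<=\$)\d+", "costs $42, not 17"))  # ['42']
```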
(Holocene or) Human Era calendar
That would represent all human history as one era (just add 10,000 to the Gregorian year, so 2024 CE becomes 12024 HE).
And also, the Dekatrian calendar
Where we would have a less broken, more regular year calendar that almost aligns with the moon cycle.
Oh, many years ago in school I created something like that for an arts/creative-writing project: a calendar with 12 30-day months, based on Sailor Moon. Having it based on a magical-girl manga gave me the freedom to declare the rest of the days "days of evil". It was a fun project because I created a whole religion around it. 😁
That sounds interesting. It would most likely not be very popular with lots of people, and a pain in the butt to implement, but interesting.
There’s a cool video from In a Nutshell about it some years ago.