if you could pick a standard format for a purpose what would it be and why?
e.g. flac for lossless audio because…
(yes you can add new categories)
summary:
- photos .jxl
- open domain image data .exr
- videos .av1
- lossless audio .flac
- lossy audio .opus
- subtitles srt/ass
- fonts .otf
- container mkv (doesn't contain .jxl)
- plain text utf-8 (many also say markup but disagree on the implementation)
- documents .odt
- archive files (this one is causing a bloodbath so i picked randomly) .tar.zst
- configuration files toml
- typesetting typst
- interchange format .ora
- models .gltf / .glb
- daw session files .dawproject
- otdr measurement results .xml
Matroska for media: we already have MKA for audio and MKV for video. An image container would be good too.
MP4 is more prone to data loss and slower to parse, while also being less flexible; despite this it seems to be a sort of pseudo-standard.
(MP4, M4A, HEIF formats like heic, avif)
wait why not av1 or jpegxl
An MP4 file contains media in, for example, the H.264 and AAC codecs, which are combined for playback. It is a container, not a codec itself.
those are media formats, not containers.
i'm compiling a summarised list in the body. what do i put this under, and what file extensions?
Data output from manufacturing equipment. Just pick a standard. JSON works. TOML / YAML if you need to write as you go. Stop creating your own format that’s 80% JSON anyways.
JSON is nicer for some things, and YAML is nicer for others. It’d be nice if more apps would let you use whichever you prefer. The data can be represented in either, so let me choose.
TOML for configuration files
100% this. Much more readable than JSON, YAML or other custom formats.
I am surprised no one mentioned HCL yet. It’s just as sane as toml but it is also properly nestable, like yaml, while being easily parsable and formattable. I wish it was used more as a config language.
Some sort of machine-readable format for invoices and documents with related purposes (offers, bills of delivery, receipts,…) would be useful to get rid of some more of the informal paper or PDF human-readable documents we still use a lot. Ideally something JSON-based, perhaps with a cryptographic signature and encryption layer around it.
This one exists. SEPA or ISO20022. Encryption/signing isn’t included in the format, it’s managed on transfer layer, but that’s pretty much the standard every business around here works and many don’t even accept PDFs or other human-readable documents anymore if you want to get your money.
Well, okay, let me rephrase that. It would be nice if the B2C communication used something like that too.
In Finland it kinda-sorta does, for some companies (mostly for things where you pay monthly). You can get your invoices delivered directly to your bank account and even accept them automatically if you wish. But that doesn't cover anything other than invoices, so not exactly what you're after. And I agree, that would be very nice.
Some companies, like one of our major grocery chain, offer to store your receipts on their online service, but I think that you can only get a copy of the receipt there and it’s not machine readable.
what's the file extension and what's the category name? compiling the list in the body
Woah neat
Definitely FLAC for audio because it’s lossless, if you record from a high fidelity source…
exFAT for external hard drives and SD cards because both Windows and Mac can read and write to it as well as Linux. And you don’t have the permission pain…
What permission pain?
If you were to format the drive with ext4 and then copy something to it from Linux, then try to open it on another Linux machine (e.g. you distro-hop after this event), it won't open the file because you aren't the owner.
Then you have to jump through hoops trying to make yourself the owner just so you can open your own file.
I learnt this the hard way, so I just use exFAT and it all works.
Then you have to jump through hoops trying to make yourself the owner just so you can open your own file.
I mean, if you want to set permissions on a drive to a userid and groupid in /etc/passwd and /etc/group on the current machine:
$ sudo chown -R username /mnt/olddrive
$ sudo chgrp -R groupname /mnt/olddrive
That’s not that painful, though I guess it could take a while to run on a drive with a lot of stuff.
SQLite for all “I’m going to write my own binary format because I is haxor” jobs.
There are some specific cases where SQLite isn’t appropriate (streaming). But broadly it fits in 99% of cases.
Yeah, what was that idea? Office formats using SQLite instead of ZIP?
To chase this: converting to JSON or another standardized format in every single case where someone is tempted to write their own custom parser. Never write custom parsers, kids; they're an absolutely horrible time-suck and you'll be fixing them quite literally forever as you discover new and interesting ways for your input data to break them.
Edit: it doesn't have to be JSON, I really don't care what format you use, just pick an existing data format that uses a robust, thoroughly tested parser.
To add to that. Configuration file formats…just pick a standard one, do not write your own.
And while we are at it, if there is even a remote chance that you have a “we will do everything declaratively” idea, just use an existing programming language for your file format instead of painfully growing some home-grown add-ons to your declarative format over the next decade or two because you were wrong about only needing a declarative format.
give me a category please
I’ll take “what’s that file format for $300 please”
Also parquet if the data aren’t mutated much.
XML for machine-readable data because I live to cause chaos.
Either markdown or Org for human-readable text-only documents. MS Office formats and the way they are handled have been a mess since the 2007 -x versions were introduced, and those and Open Document formats are way too bloated for when you only want to share a presentable text file.
While we’re at it, standardize the fucking markdown syntax! I still have nightmares about Reddit’s degenerate four-space-indent code blocks.
Man, I’d love if markdown was more widely used, it’s pretty much the perfect format for everything I do
You can convert Markdown to a number of formats with pandoc, if you want to author in Markdown and just distribute in some other format.
Not going to work if you need to collaborate with other people, though.
Markdown, CommonMark, and .rst are good formats for producing basic rich text for technical documentation and so on, when text styling is handled by an external application and you don't care about reproducible layout.
But when you also want custom styles (font size, text alignment, colours), page layout (paper format, margin size, etc.), a guarantee that your document is reproducible across multiple processing applications and that the layout doesn't break, authoring tools, maybe even some version control, etc., that's when it bites you.
Markdown misses checkboxes anywhere, especially in tables.
But markdown is just good. It’s just writing text as normal basically
Some new format for DAW session files that is compatible with all DAWs. I believe Ardour can import Pro Tools files, but I bet a lot of work went into that.
Nice, hadn't seen this before. From the looks of the Ardour forum there is nobody currently looking at implementing the format, but they seem open to it. I would contribute but I only know Python, so probably not much use. I could write an Ardour-DAWproject translator in Python, but it seems a bit pointless if someone creates a proper implementation at some point anyway.
.opus for lossy music, .flac for lossless music, .png for image files, .mkv for video
.jpeg for photos
.jxl
All of them are OK, except mkv is less a file type and more a container. What should be specified is the codec for video, which for most things I'd say should be AV1, though it might not be the most suitable for high-res movies. Throw in Opus for the audio track, and you can use mkv, but might as well use webm anyway since it's clearer what's behind it (though it can still be other things).
I’d also add that jxl should be the standard for lossy images. Better than jpg. And you want something other than png for massive images because that quickly gets costly in terms of size due to png being lossless.
Png is not always lossless. It also supports compression. But your point stands, it’s not the best compression
PNG support lossless compression through deflation, but there are encoders that can apply a lossy filter to the image to make the compression more effective.
PNG doesn’t support lossy compression natively, to be clear.
That’s interesting. Learn something new every day. Thanks
Unpopular opinion but webp isn’t bad it just needs wider support, but maybe I’m unaware of its actual shortcomings in which case please educate.
Also I wonder if it’s possible to have a single image format for all those uses but also RAW?
Here’s a little article which highlights jxl well. https://chipsandcheese.com/2021/02/28/modern-data-compression-in-2021-part-2-the-battle-to-dethrone-jpeg-with-jpeg-xl-avif-and-webp/
I do not think it's mentioned there, but I think webp, and afaik also its indirect successor avif, both lack progressive loading, which is not optimal for website loading. They have incremental loading, which I think is akin to the old dial-up way of loading top to bottom, row by row. They proclaim progressive decoding is costly on memory and CPU, but progressive gives the best user experience imo.
Lastly, a fringe issue: re-encoding multiple times. The good old reason why JPEGs turn into trash over time, because people re-encode instead of saving images, or because sites re-encode on upload. JXL wins here. It's also very easy to see why JPEG turns into what it does rather quickly.
https://www.reddit.com/r/AV1/comments/ju18pz/generation_loss_comparing_jpeg_webp_jxl_and_avif/
I also like the idea of incremental loading, even if it's not that relevant anymore. Also, I don't think your example of generation loss is enough, as it's just one image; I would like to see multiple randomly picked images passed through the same process. But if the results are the same as here, or barely any more loss than the best option for any given image, I do think that's a good result to aim for.
what's the difference between opus and 320 mp3?
opus is higher quality at a much lower bitrate, meaning you can definitely store more songs in the opus format than in 320 mp3. opus can be constant bit rate or variable bit rate, whichever you prefer at encode time
@neo I thought opus was voice optimized?
It is, but it’s not the only thing opus is optimized for
The point is that 140 kbps Opus is almost indistinguishable from 320 kbps MP3 to human ears, so it saves over 50% in size for the same quality. Also, it's a royalty-free format.
neat. Though now i wish my ipod supported opus (besides rockbox), but that's much nicer to know!
jxl for images, vp9 for video, ogg vorbis for lossy audio and flac for loseless
Opus is the successor to Vorbis. It’s superior in terms of quality to bitrate for all bitrates, and it’s made by the same organization.
.nix for software packaging.
what's that and why not flatpak
i hate to be that guy, but pick the right tool for the right job. use markdown for a readme and latex for a research paper. you don't need to create 'the ultimate file format' that can do both, but worse and less compatible
I agree with your assertion that there isn't a perfect format. But the example you gave (markdown vs latex) has a counterexample: org mode. It can be used for both purposes and a load of others. The Matroska container is similarly versatile. They are examples that carefully designed formats can reach a high level of versatility, even if they may never become the perfect solution.
org mode? what's the file extension
Open Document Standard (.odt) for all documents. In all public institutions (it’s already a NATO standard for documents).
Because the Microsoft Word ones (.doc, .docx) are unusable outside the Microsoft Office ecosystem. I feel outraged every time I need to edit a .docx file because it breaks the layout easily. And some older .doc files don't even open in modern Microsoft Word.
Actually, IMHO, there should be some better alternative to .odt as well. Something more declarative/scripted, like LaTeX, but still WYSIWYG. LaTeX (and XeTeX, for my use cases) is too messy for me to work with, especially when a package is Byzantine. And it can be non-reproducible if I share/reuse the same document somewhere else.
Something has to be made with document files.
Markdown, asciidoc, restructuredtext are kinda like simple alternatives to LaTeX
There is also https://github.com/typst/typst/
Bro, trying to give padding in Ms word, when you know… YOU KNOOOOW… they can convert to html. It drives me up the wall.
And don’t get me started on excel.
Kill em all, I say.
It is unbelievable we do not have standard document format.
What's messed up is that, technically, we do. Originally, OpenDocument was the ISO standard document format. But then, baffling everyone, Microsoft got the ISO to also accept .docx as an ISO standard. So now we have 2 competing document standards, the second of which is simply worse.
That's awful, we should design something that covers both use cases!
I was too young to use it in any serious context, but I kinda dig how WordPerfect does formatting. It is hidden by default, but you can show them and manipulate them as needed.
It might already be a thing, but I am imagining a LaTeX-based standard for document formatting would do well with a WYSIWYG editor that would hide the complexity by default, but is available for those who need to manipulate it.
There are programs (LyX, TexMacs) that implement WYSIWYG for LaTeX, TexMacs is exceptionally good. I don’t know about the standards, though.
Another problem with LaTeX and most of the other document formats is that they are so bloated and depend on so many other pieces that it is hardly possible to embed the renderer into a larger application. That's a bit of criticism of the UNIX design philosophy as well. And LaTeX code is especially hard to make portable.
There used to be a similar situation with PDFs, it was really hard to display a PDF embedded in application. Finally, Firefox pdf.js came in and solved that issue.
The only embedded and easy-to-implement standard that describes a ‘document’ is HTML, for now (with Javascript for scripting). Only that it’s not aware of page layout. If only there’s an extension standard that could make a HTML page into a document…
I was actually thinking of something like markdown or HTML forming the base of that standard. But it’s almost impossible (is it?) to do page layout with either of them.
But yeah! What I was thinking when I mentioned a LaTeX-based standard is to have a base set of “modules” (for a lack of a better term) that everyone should have and that would guarantee interoperability. That it’s possible to create a document with the exact layout one wants with just the base standard functionality. That things won’t be broken when opening up a document in a different editor.
There could be additional modules to facilitate things, but nothing like the 90’s proprietary IE tags. The way I’m imagining this is that the additional modules would work on the base modules, making things slightly easier but that they ultimately depend on the base functionality.
IDK, it’s really an idea that probably won’t work upon further investigation, but I just really like the idea of an open standard for documents based on LaTeX (kinda like how HTML has been for web pages), where you could work on it as a text file (with all the tags) if needed.
Finally, Firefox pdf.js came in and solved that issue.
Which uses a bloated and convoluted scripting format specialized in manipulating HTML.
True, but it offered a much more secure alternative to opening up PDFs locally.
I don't think so. pdf.js gets a new XSS CVE every few months, which is a web-only problem. And if you use anything other than Adobe Reader/Acrobat…
zip or 7z for compressed archives. I hate that for some reason rar has become the defacto standard for piracy. It’s just so bad.
The other day I saw a tar.gz containing a multipart-rar which contained an iso which contained a compressed bin file with an exe to decompress it. Soooo unnecessary.
Edit: And the decompressed game of course has all of its compressed assets in renamed zip files.
.tar.zstd all the way IMO. I've almost entirely switched to archiving with zstd; it's a fantastic format. The only annoying thing is that the extension for zstd compression is zst (no d). Tar does not recognize a .zstd extension; only .zst is automatically recognized and decompressed. Come on!
-I option?
Not sure what that does.
Yes, you can use options to specify exactly what you want. But it should recognize .zstd as Zstandard compression instead of going "I don't know what this compression is". I don't want to have to specify the obvious extension just because I typed zstd instead of zst when creating the file.
If we’re being entirely honest just about everything in the zstd ecosystem needs some basic UX love. Working with .tar.zst files in any GUI is an exercise in frustration as well.
I think they recently implemented support for chunked decoding so reading files inside a zstd archive (like, say, seeking to read inside tar files) should start to improve sooner or later but some of the niceties we expect from compressed archives aren’t entirely there yet.
Fantastic compression though!
why not gzip?
Gzip is slower and produces a worse compression ratio. Zstandard, on the other hand, is dramatically faster than the existing standards in terms of compression speed; that is its killer feature. It also provides a somewhat better compression ratio than gzip [citation needed].
Yes, all compression levels of gzip have some zstd compression level that is both faster and better in compression ratio.
Additionally, the highest compression levels of zstd are comparable in compression level to LZMA while also being slightly faster in compression and many many times faster in decompression
gzip is very slow compared to zstd for similar levels of compression.
The zstd algorithm is a project by the same author as lz4. lz4 was designed for decompression speed, zstd was designed to balance resource utilization, speed and compression ratio and it does a fantastic job of it.
A .tarducken, if you will.
Ziptarar?
.tar.xz masterrace
This comment didn’t age well.
It was originally rar because it’s so easy to separate into multiple files. Now you can do that in other formats, but the legacy has stuck.
Not just that. RAR also has recovery records.
This is the kind of thing I think about all the time, so I have a few.
- Archive files: .tar.zst
  - Produces better compression ratios than the DEFLATE compression algorithm (used by .zip and gzip/.gz) and does so faster.
  - By separating the jobs of archiving (.tar), compressing (.zst), and (if you so choose) encrypting (.gpg), .tar.zst follows the Unix philosophy of "Make each program do one thing well."
  - .tar.xz is also very good and seems more popular (probably since it was released 6 years earlier, in 2009), but, when tuned to its maximum compression level, .tar.zst can achieve a compression ratio pretty close to LZMA (used by .tar.xz and .7z) and do it faster[1] (see the sketch after this list):
    "zstd and xz trade blows in their compression ratio. Recompressing all packages to zstd with our options yields a total ~0.8% increase in package size on all of our packages combined, but the decompression time for all packages saw a ~1300% speedup."
- Image files: JPEG XL / .jxl
  - "Why JPEG XL"
  - Free and open format.
  - Can handle lossy images, lossless images, images with transparency, images with layers, and animated images, giving it the potential of being a universal image format.
  - Much better quality and compression efficiency than current lossy and lossless image formats (.jpeg, .png, .gif).
  - Produces much smaller files for lossless images than AVIF[2].
  - Supports much larger resolutions than AVIF's 9-megapixel limit (important for lossless images).
  - Supports up to 24-bit color depth, much more than AVIF's 12-bit color depth limit (which, to be fair, is probably good enough).
- Videos (codec): AV1
  - Free and open format.
  - Much more efficient than x264 (used by .mp4) and VP9[3].
- Documents: OpenDocument / ODF / .odt
  - @[email protected] says it best here: .odt is simply a better standard than .docx.
    "it's already a NATO standard for documents ... Because the Microsoft Word ones (.doc, .docx) are unusable outside the Microsoft Office ecosystem. I feel outraged every time I need to edit a .docx file because it breaks the layout easily. And some older .doc files cannot even work with Microsoft Word."
Damn didn’t realize that JXL was such a big deal. That whole JPEG recompression actually seems pretty damn cool as well. There was some noise about GNOME starting to make use of JXL in their ecosystem too…
By separating the jobs of archiving (.tar), compressing (.zst), and (if you so choose) encrypting (.gpg), .tar.zst follows the Unix philosophy of “Make each program do one thing well.”.
The problem here being that GnuPG does nothing really well.
Videos (Codec): AV1
- Much more efficient than x264 (used by .mp4) and VP9[3].
AV1 is also much younger than H.264 (AV1 is a specification, x264 is an implementation), and only recently have software encoders become somewhat viable; a more apt comparison would have been AV1 to HEVC, though the latter is also somewhat old nowadays yet still a competitive codec. Unfortunately there currently aren't many options to use AV1 in a very meaningful way; you can encode your own media with it, but that's about it. You can stream to YouTube, but YouTube will recode it to another codec.
The problem here being that GnuPG does nothing really well.
Could you elaborate? I’ve never had any issues with gpg before and curious what people are having issues with.
Unfortunately currently there aren’t many options to use AV1 in a very meaningful way; you can encode your own media with it, but that’s about it; you can stream to YouTube, but YouTube will recode to another codec.
AV1 has almost full browser support (iirc) and companies like YouTube, Netflix, and Meta have started moving over to AV1 from VP9 (since AV1 is the successor to VP9). But you’re right, it’s still working on adoption, but this is moreso just my dreamworld than it is a prediction for future standardization.
Could you elaborate? I’ve never had any issues with gpg before and curious what people are having issues with.
This article and the blog post linked within it summarize it very well.
Encrypting Email
Don’t. Email is insecure. Even with PGP, it’s default-plaintext, which means that even if you do everything right, some totally reasonable person you mail, doing totally reasonable things, will invariably CC the quoted plaintext of your encrypted message to someone else
Okay, provide me with an open standard that is widely-used that provides similar functionality.
It isn’t there. There are parties who would like to move email users into their own little proprietary walled gardens, but not a replacement for email.
The guy is literally saying that encrypting email is unacceptable because it hasn’t been built from the ground up to support encryption.
I mean, the PGP guys added PGP to an existing system because otherwise nobody would use their nifty new system. Hell, it’s hard enough to get people to use PGP as it is. Saying “well, if everyone in the world just adopted a similar-but-new system that is more-amenable to encryption, that would be helpful”, sure, but people aren’t going to do that.
The message to be taken from here is rather "don't bother": if you need secure communication, use something else. If you're just using it so that Google can't read your mail, it might be OK, but don't expect this solution to be secure or anything. It's security theater for the reasons listed.
But the threat model for some people is a powerful adversary who can spend millions on software to find something against you in your communication and who controls at least a significant portion of the infrastructure your data travels through. Think about whistleblowers in oppressive regimes; it's absolutely crucial there that no information at all leaks. There's just no way to safely rely on mail + PGP for secure communication there, and if you're fine with your secrets leaking at one point or another, you didn't really need that felt security in the first place. But then again, you're just doing what the blog calls LARPing in the first place.
.tar is pretty bad as it lacks an index, making it impossible to quickly seek around in the file. The compression on top adds another layer of complication. It might still work great as a tape archiver, but for sending files around the Internet it is quite horrible. It's really just getting dragged around for cargo-cult reasons, not because it's good at the job it is doing.
In general I find the archive situation a little annoying, as archives are largely completely unnecessary; that's what we have directories for. But directories don't exist as far as HTML is concerned and only single files can be downloaded easily. So everything has to get packed and unpacked again, for absolutely no reason. It's a job computers should handle transparently in the background, not an explicit user action.
Many file managers try to add support for .zip and allow you to go into one like it is a folder, but that abstraction is always quite leaky and never as smooth as it should be.
.tar is pretty bad as it lacks an index, making it impossible to quickly seek around in the file.
.tar.pixz/.tpxz has an index, uses LZMA, and permits parallel compression/decompression (increasingly important on modern processors).
It’s packaged in Debian, and I assume other Linux distros.
Only downside is that GNU tar doesn’t have a single-letter shortcut to use pixz as a compressor, the way it does “z” for gzip, “j” for bzip2, or “J” for xz (LZMA); gotta use the more-verbose “-Ipixz”.
Also, while I don’t recommend it, IIRC gzip has a limited range that the effects of compression can propagate, and so even if you aren’t intentionally trying to provide random access, there is software that leverages this to hack in random access as well. I don’t recall whether someone has rigged it up with tar and indexing, but I suppose if someone were specifically determined to use gzip, one could go that route.
I get a better compression ratio with xz than with zstd, both at their highest settings, when building an Ubuntu squashfs.
Zstd is way faster though
.odt is simply a better standard than .docx.
No surprise, since OOXML is barely even a standard.
wait i'm confused, what's the difference between .tar.zst and .tar.xz
Different ways of compressing the initial .tar archive.
Having "double" extensions is a terrible convention for operating systems where extensions actually matter and users are used to them, like Windows.
“.tar.xz” should be something like “.tarxz” or “.txz”
I would argue that what Windows does with extensions is a bad idea. Why should engineers cater to the horrible design decisions of the most insecure OS?
I get your point. Since a .tar.zst file can be handled natively by tar, using .tzst instead does make sense.
But it's not a tarxz, it's an xz containing a tar, and you perform operations from right to left until you arrive back at the original files with whatever extensions they use.
If I compress an exe into a zip, would you expect that to be an exezip? No, you expect it to be file.exe.zip, informing you (and your system) that this file should first be unzipped and then executed.
it’s an xz containing a tar
So what? When you zip 5 documents together do you name it .zip or .config.lib.sh.deb.zip?
No, you expect it to be a file.exe.zip
Double extensions are not conventional on Windows, so no, I do not.
Dots in filenames are commonly used in any operating system like name_version.2.4.5.exe or similar… So I don’t see a problem.
Dots yes, nested extensions no.
The expected behavior is: you have a .exe binary called Example.exe. This is an executable.
Now you zip it. It’s no longer an executable binary, it’s a zip archive. Yes, the data can be reconstructed into the original file - but it is not the original file. It should now be called Example.zip, as it is a zip file.
This is important both for user mental models, but also because operating systems that use extensions as the primary indicator of file type often will hide known extensions by default, and the nested extensions in the name can create trouble.
There already are conventional abbreviations: see Section 2.1. I doubt they will be better supported by tools though.
That’s much better. Thanks for actually answering the comment, rather than the usual “Windows bad, Linux good, upvotes please”
In this case it really seems this windows convention is bad though. It is uninformative. And abbreviations mandate understanding more file extensions for no good reason. And I say this as primarily a windows user. Hiding file extensions was always a bad idea. It tries to make a simple reduced UI in a place where simple UI is not desirable. If you want a lean UI you should not be handling files directly in the first place.
Example.zip from the other comment is not a compressed .exe file, it’s a compressed archive containing the exe file and some metadata. Windows standard tools would be in real trouble trying to understand unarchived compressed files many programs might want to use for logging or other data dumps. And that means a lot of software use their own custom extensions that neither the system nor the user knows what to do with without the original software. Using standard system tools and conventions is generally preferable.
use a real operative system then
Sounds like a Windows problem
Cool. So it means it’s a problem for over 70% of all active desktop and laptop computers.
I get the frustration, but Windows is the one that strayed from convention/standard.
Also, i should’ve asked this earlier, but doesn’t Windows also only look at the characters following the last dot in the filename when determining the file type? If so, then this should be fine for Windows, since there’s only one canonical file extension at a time, right?
By separating the jobs of archiving (.tar), compressing (.zst), and (if you so choose) encrypting (.gpg), .tar.zst follows the Unix philosophy of "Make each program do one thing well."
wait so does it do all of those things?
So there's a tool called tar that creates an archive (a .tar file). Then there's a tool called zstd that can be used to compress files, including .tar files, which then becomes a .tar.zst file. And then you can encrypt your .tar.zst file using a tool called gpg, which would leave you with an encrypted, compressed .tar.zst.gpg archive.
Now, most people aren't doing everything in the terminal, so the process for most people would be pretty much the same as creating a ZIP archive.
is av1 lossy
AV1 can do lossy video as well as lossless video.
Something for I/Q recordings. But I don’t know what would do it. Currently the most supported format seems to be s16be WAV, but there’s different formats, bit depths and encodings. I’ve seen .iq, .sdriq, .sdr, .raw, .wav. Then there’s different bit depths and encodings: u8, s8, s16be, s16le, f32,… Also there’s different ways metadata like center frequency is stored.
what is this
God damnit. I wrote an answer and it disappeared a while after I pressed reply. I'm too lazy to rewrite it and my eyes are sore.
Anyway, I am too dumb to actually understand I/Q samples. It stands for In-Phase and Quadrature; the two components are 90° out of phase from each other. That's somehow used to reconstruct a signal. It's used in different areas. For me it's useful for recording raw RF signals from a software-defined radio (SDR).
For example, with older, less secure systems, you could record signal from someone’s car keyfob, then use a Tx-capable SDR to replay it later. Ta-da! Replay attack. You unlocked someone’s car.
In a better way, you could record the raw signal from a satellite to demodulate and decode later, if your computer isn't powerful enough to do it in real time.
If you want an example, you can download a DAB+ radio signal recording here: https://www.sigidwiki.com/wiki/DAB%2B and then replay it in Welle.io (available as an AppImage) if it's in a compatible format. I haven't tested it.