I’ve never used it. It’s like all the others, though, and I have been forced to use snaps. Those I slowly replace every time I decide to start fresh.
Try linux mint, it’s basically ubuntu but without snap (you can install snap if you want to, but it’s not forced on you)
Oh I have. I have it running on some older hardware.
I’m a Debian fan, and even I think it’s absolutely preferable that app developers publish a Flatpak over the mildly janky mess of adding a new APT source. (It used to be simple and beautiful, just stick a new file in APT sources. Now Debian insists we add the GPG keys manually. Like cavemen.)
Someone’s got to say it…
There would be no Debian if everything were a pile of Snaps/Flatpaks/Docker/etc. Debian is the packaging and the process that packaging is put through. Plus their FOSS guidelines.
So sure, if it’s something new and dev’y, it should isolate the dependencies mess. But when it’s mature, sort out the dependencies and get it into Debian, and thus all downstream of it.
I don’t want to go back to app folders. They end up with a mishmash of duplicated, old, or wacky libs. It’s bloated, insecure, and messy. Gift-wrapping the mess in containers and VMs mitigates some of the security issues, but brings more bloat and other issues.
I love FOSS package management. All the dependencies, in a database, with source and build dependencies. All building so there is one copy of a lib. All updating together. It’s like an OS ecosystem utopia. It doesn’t get the appreciation it should.
Now Debian insists we add the GPG keys manually. Like cavemen.
Erm. Would you rather have Debian auto-trust a bunch of third-party people? It’s up to the user to decide whose keys they want on their system and which packages they’ll accept as signed by which key.
Not “auto-trust”, of course, but rather making adding keys a bit smoother. As in: “OK, there’s this key on the web site with this weird short hex cookie. Enter this simple command to add the key. Make sure the signature it spits out is the same as the one on the web page. If it matches, hit Yes.”
And maybe this could be baked somehow into the whole APT source adding process. “To add the source to APT, use

apt-source-addinate https://deb.example.com/thingamabob.apt

Make sure the key displayed is 0x123456789ABC by Thingamabob Team, with received key signature 0xCBA9876654321.”

For the keys - do you mean something like

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 00000000

where 00000000 is replaced with the fingerprint of the key you want to fetch?
I do agree - the apt-key command is kinda dangerous because it imports keys that are trusted globally, IIRC. So a similar command that fetches a key by fingerprint and makes it available as the signing key for just the single repository (suite) we’re configuring would be nice.
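Something close to this already exists with per-repository signed-by keyrings; a typical setup looks roughly like the following (the URL and filenames are made up):

# Fetch the publisher's key and store it outside the global trust store
curl -fsSL https://deb.example.com/key.asc | sudo gpg --dearmor -o /usr/share/keyrings/thingamabob.gpg

# Trust that key for this one repository only
echo "deb [signed-by=/usr/share/keyrings/thingamabob.gpg] https://deb.example.com stable main" | sudo tee /etc/apt/sources.list.d/thingamabob.list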
I always disliked that signing keys are available for download from the same websites that have the repository. What’s the point in that? If someone can inject malicious code in the repository, they sure as hell can generate a matching signing key & sign the code with that.
Hence I always verify signing keys / fingerprints against somewhat trustworthy third parties.
What we really need though is a crowdsourced, reputation-based code review system. Where open source code is stored in git-like versioning history, and has clear documentation for each function of what it should and should not do. And a reviewer can pick as little as an individual function and review the code to confirm (or refute) that the function
- does exactly what the interface documentation claims it does
- does nothing else
- performs input validation (range checks etc)
- is well-written (in terms of performance)
Then, your reputation score would increase according to other users concurring with your assessment (or decrease if people disagree), and your reputation can be used as a weighting factor contributing to the “review thoroughness” of a code module that you reviewed. E.g., a user with a reputation of 0.5 confirms that a module does exactly what it claims to do: the module’s review count goes +1, its total score goes +0.5, its total weight becomes (combined previous weights + 0.5), and the average review score is “total score” / “total weight”.
Something like that. And if you have a reputation of “0.9”, the review count goes +1, total score +0.9, total weight +0.9 (so the average score stays between 0 and 1).
Independent of the user reputation, the user’s review conclusion is stored as “1” (= performs as claimed) or “0” (= does not perform as claimed) for this module.
Reputation of reviewers could be calculated as the average of all their individual review scores (at the time the reputation is needed), where the score they get is 1 minus the absolute difference between the average review score of a reviewed module and their own review conclusion.
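In my ad-hoc notation, that boils down to:

score = 1 - abs( own_conclusion - module_average_score )
reputation = ( sum of all own scores ) / ( number of own reviews )
module_average_score = ( sum of reputation * conclusion over all reviewers ) / ( sum of reputation over all reviewers )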
E.g.:
User A concludes the module does what it claims to do: User A’s assessment is 1 (score for the module).
User B concludes the module does NOT do what it claims: User B’s assessment is 0 (score).
Module score is 0.8 (most reviewers agreed that it does what it claims to do)
User A reputation gained from their review of this module is 1 - abs( 1 - 0.8 ) = 0.8.
User B reputation gained from their review of this module is 1 - abs( 0 - 0.8 ) = 0.2.
If both users have previously gained a reputation of 1.0 from 10 reviews (where everyone agreed on the same assessment, thus full scores):
User A new reputation: ( 1 * 10 + 0.8 ) / 11 = 0.982.
User B new reputation: ( 1 * 10 + 0.2 ) / 11 = 0.927.
The basic idea being that all modules in the decentralized review database would have a review count which everyone could filter by, and find the least-reviewed modules (presumably weakest links) to focus their attention on.
If technically feasible, a decentralized database should prevent any given entity (secret services, botfarms) from falsifying the overall review picture too much. I am not sure this can be accomplished - especially with the sophistication of the climate-destroying large language model technology. :/
And then change where we put them.
People always forget about appimages.
As they should /s
Honestly it’s neat, but I don’t see why I would ever want it over Flatpak.
Your security people have not forgotten about appimages. It fills their nightmares.
Same app in native format: 2MB. As a flatpak: 15MB. As an appimage: 350MB.
AppImages are awesome, rock solid, and I have a few on my system, but Flatpak never gave me any problem, integrates better with my KDE, and is smaller. Both have their advantages, though. I’m fine with using both. If you are a developer, make a Flatpak or an AppImage, I don’t really care, just make your software available for Linux. Both are fine; choose the one that fits your specific app the most.
But I also think AppImages deserve the same attention and great OS integration as Flatpaks. Stuff like AppImageLauncher’s functionality should just be integrated into the DE itself.
But we need a universal package format for Linux ASAP. Flatpak is in front in this race, and I’m fine with that. AppImages second, for sure.
If you don’t run your install off a 12-zettabyte NAS, are you even a real Linux user?
I know nothing about how flatpak works other than that it’s containerized. But this meme tells me it’s the OS’s responsibility to create the flatpak, and not the developer’s? Is that right?
No, the most common way is for devs to package their own software as a flatpak, since you can typically choose your preferred packaging tool to use inside of the flatpak.
Traditional package management typically is done by the distro maintainers.
Oh I see, I’ve got it backwards.
False: if it exists in the Linux ecosystem, it also exists in the AUR.
The broader meta point is that X thing you want isn’t the dev’s job, btw.
X thing you want isn’t the dev’s job
Well, it is if they decide it is, and it isn’t if they decide it isn’t.
That said, I do appreciate devs who put up native deb or rpm repos for the most common distros.
If you’re separating your application from the core system package manager and shared libraries, there had better be a good and specific reason for it (e.g. the app needs to be containerized for stability/security/weird dependency). If an app can’t be centrally managed I don’t want it on my system, with grudging exceptions.
Chocolatey has even made this possible in Windows, and lately for my Windows environments if I can’t install an application through chocolatey then I’ll try to find an alternative that I can. Package managers are absolutely superior to independent application installs.
I think containerization for security is a damn good reason for virtually all software.
Definitely. I’d rather have a “good and specific reason” why your application needs to use my shared libraries or have access to my entire filesystem by default.
Using your shared libraries is always a good thing, no? Like your distro’s packages should always have the latest security fixes and such, while flatpaks require a separate upgrade path.
Access to your entire filesystem, however, I agree with you on.
I only use rolling releases on my desktop and have run into enough issues with apps not working because of changes made in library updates that I’d rather they just include whatever version they’re targeting at this point. Sure, that might mean they’re using a less secure version, and they’re less incentivized to stay on the latest version and fix those issues as they arise, but I’m also not as concerned about the security implications of that because everything is running as my unprivileged user and confined to the flatpak.
I’d rather have a less secure flatpak than need to downgrade a library to make one app I need work, and then have a less secure system overall.
emerge sec-policy/selinux-*
Flatpak can be centrally managed; it’s just a parallel distribution scheme, where apps have dependencies and are centrally updated. If a Flatpak is made reasonably, then it gets library updates independently of the app developer doing anything.
“AppImage” and “install from tarball” violate those principles, but Snap and Flatpak don’t.
Um, if it’s “parallel” (e.g. separate from the OS package manager) then it’s not centrally managed. The OS package manager is the central management.
There might be specific use cases where this makes sense, but frankly if segregating an app from the OS is a requirement then it should be fully containerized with something like Docker, or run in an independent VM.
If a Flatpak is made reasonably, then it gets library updates independently of the app developer doing anything.
That feels like a load-bearing “if”. I never have to worry about this with the package manager.
Define “the OS package manager”. If the distro comes with flatpak and dnf equally, and both are invoked by the generic “get updates” tooling, then both could count as “the” update manager. They both check all apps for updates.
Odd to advocate for Docker containers: they put the app provider on the hook for all dependencies, because everything is inherently bundled. If a library gets a critical bug fix, your Docker-like containers are stuck without the fix until the app provider gets around to rebuilding, and app providers are highly unreliable on Docker Hub. Besides, update discipline among Docker/Podman users is generally atrocious, and given the relatively tedious nature of following updates in that ecosystem, I’m not surprised. Even in the best case, Docker-style packaging uses more disk space and more memory than any other option, apart from a VM.
With respect to never having to worry about bundled dependencies with rpm/deb: third-party packages bundle or statically link all the time. If they don’t, they sometimes overwrite the OS-provided dependency with an incompatible one that breaks OS packages, if the dependency is obscure enough for them not to notice other usage.
Typically Windows applications bundle all their dependencies, so Chocolatey, WinGet and Scoop are all more like installing a Flatpak or AppImage than a package from a distro’s system package manager. They’re all listed in one place, yes, but so’s everything on FlatHub.
This is true, the only shared libraries are usually the .NET versions, but so many apps depend on specific .NET versions that frequently the modularity doesn’t matter.
I’m not sure where you’re getting the idea that Flatpak aren’t centrally managed…
Can I sudo apt upgrade my installed flatpak apps?

No, because they’re not apt packages. You can, however, flatpak update them, and you don’t even need sudo, since they’re installed in the user context rather than system-wide.
I think stability is a pretty good reason
If an app can’t be centrally managed
Open Discover, GNOME Software, etc. -> click update?
Oh no, no GUI nonsense. A single, simple shell command that updates the whole system, please, so that it can be properly remotely managed. Something equivalent to sudo apt upgrade.
I’ve written a small script that does all the updates (repo, flatpak, docker), verifies the packages, does cleanup, and shows if stuff needs a reboot. Handy. That way I can do everything from one short command.
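A minimal sketch of what such a wrapper can look like, assuming an apt-based distro with Flatpak and a Docker Compose project (the compose path is illustrative):

#!/bin/sh
set -e

# Distro packages
sudo apt update && sudo apt full-upgrade -y
sudo apt autoremove -y

# Flatpak apps and runtimes, plus cleanup of unused runtimes
flatpak update -y
flatpak uninstall --unused -y

# Illustrative: refresh images for one compose project
docker compose -f ~/stacks/app/compose.yml pull
docker compose -f ~/stacks/app/compose.yml up -d

# Debian-style reboot hint
if [ -f /run/reboot-required ]; then
  echo "Reboot required"
fi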
flatpak update
I’m now confused whether they’re saying that flatpak is centrally managed or not. To me it seems centrally managed - both the flatpak ecosystem and your whole machine (repo packages, firmware, flatpak) if you use those app stores. I might’ve misunderstood what they said.
We’re both saying that it’s centrally managed
Fuck, I took both the wrong way. Sorry about that
And with topgrade you can even upgrade flatpaks and your distro’s repos in one go
I like Flatpak just because it isn’t Snap
Fair. Also, flatpak does not try to break everything by default, which is a plus.
The enemy of my enemy, eh?
…is my enemy’s enemy, no more, no less. (Maxims of Maximally Effective Mercenaries #29)
I just distribute it as a self-contained executable/archive. ¯\_(ツ)_/¯
Valid solution, but I miss unified updates with appimages and such
Yeah, that’s the fun part. Hooking into some auto-update mechanism would be useful to me.
But my stuff is mostly in the scratching-my-own-itch stage, so setting up a FlatHub account, Flatpak metadata, sandbox rules, probably an icon and screenshots and whatnot, and automating the build+releases, just to get auto-updates, yeah… no.
I could code a whole nother project in the time that would take.
Well, if you have any form of build script, makefile, or CI, then you can easily shove that into a flatpak-builder manifest and push the build repo anywhere you want. The default OSTree repository format can be served from any old webserver or S3 bucket after all.
I’ve done this for personal projects many times, since it’s a ridiculously easy way to get scalable distribution and automatic updates in place.
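Roughly like this (the app ID, manifest name, and URL are all made up):

# Build from a manifest and export the result into a local OSTree repo
flatpak-builder --force-clean --repo=repo build-dir com.example.MyApp.yml

# Serve ./repo from any static webserver, then on the user's machine:
flatpak --user remote-add --no-gpg-verify myrepo https://example.com/repo
flatpak --user install myrepo com.example.MyApp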
Hmm, okay, that doesn’t sound too bad.
Does the sandboxing get in the way much? Can a user tell it to poke a hole into the sandbox, to use some specific folder, for example?

I think my real problem is that I don’t actually use Flatpak for any software I have installed. 😅 I’m not opposed to using Flatpak, but I disabled Flathub pretty quickly in my distro’s software store thingamabob when I accidentally installed some proprietary software from it. Fuck that shit, no matter how much sandboxing I get.

In regards to sandboxing, it only gets as far in the way as you ask it to. For applications that you’re not planning on putting on FlatHub anyway, you can be just as open as you want to be, i.e. just adding / - or host, as it’s called - as read-write to the app. (OpenMW still does that, as we had some issues with the data extraction from original Morrowind install media.)

If you do want to sandbox, though, users are able to poke just as many holes as they want - or add their own restrictions atop whatever sandboxing you set up for the application. Flatpak itself has the flatpak override tool for this, or there are graphical UIs like Flatseal and the KDE control center module…
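For example (the app ID is made up):

# Grant one extra folder read-write to a single app, per user
flatpak override --user --filesystem=~/Music com.example.MyApp

# Or tighten things instead: drop network access for that app
flatpak override --user --unshare=network com.example.MyApp

# Show what has been overridden
flatpak override --user --show com.example.MyApp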
As long as your application is statically linked, I don’t see any issue with that.
So, like, dumb question. People here assumed that I meant AppImages, whereas I actually meant just a statically linked binary. Is that really the only reason why AppImage exists? So that dynamically linked applications can be distributed like statically linked ones?
You cannot statically link everything. Take graphics libraries and APIs for example, do you statically link against nvidia’s or mesa’s opengl?
Sure, but presumably AppImage/Flatpak/Docker cannot help with that either…?
Flatpak solves the problem with targetable platform versions, you just update the manifest for your app every like 6-12 months to target the new one
Ah, interesting. So, it’s different from just statically linking against the latest driver lib every 6-12 months, because the Flatpak runtime gives you a bit of a guarantee that there won’t be breaking changes in the meantime.
Bingo, and if the latest mesa breaks your app for example, you can target an older one until it’s fixed instead of end users having to fuck around downgrading system packages
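The relevant bit of a flatpak-builder manifest looks something like this (the runtime version is just an example):

# Pin the platform the app builds and runs against
runtime: org.freedesktop.Platform
runtime-version: '23.08'
sdk: org.freedesktop.Sdk

Bumping runtime-version is that “every 6-12 months” update.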
This is the problem those tools try to solve. They package everything else upon which software might depend that can’t simply be linked into a single binary.
You cannot statically link everything. Take graphics libraries and APIs for example, do you statically link against nvidia’s or mesa’s opengl?
The majority of AppImages I’ve seen have been dynamically linked, yes. But it’s also used for packaging assets.
Yeah, alright, packaging assets makes sense. I’ve always been fine with just a .tar.gz, but having it be a singular file without compression is cool.
I guess, since AppImage emulates a filesystem, you can also have your application logic load the assets from the same path as if the assets were installed on the OS, so that’s also cool.
AppImage for the win!
And this, this is why I love the AUR
“Oh, this is a flatpak, or hell, even a Windows exe…” proceeds to search for it on the AUR “Ah, there it is, wonderful!”
Hell, I’ve found a goddamn Windows game cheat trainer on the AUR, and it worked.
An AUR package is basically just a script that describes the best-case scenario for building something under Arch. They don’t have any specific quality rules to meet.
It’s super easy to make and publish an AUR script compared to a regular distro package (including Arch packages).
Usually they work well enough, especially things that just involve repacking binaries (e.g. printer drivers)
I think no one said it needs to be IN a distro’s repos. That’s a straw man.
A package should be available in a native package format in a way that doesn’t cause conflict with what’s in the official repo. The reasons for a single source of truth on installed status should be obvious; and given the format of some packaging and the signed assurance of provenance, the advantages of a native format can be leagues ahead of even that.
Wow, this meme is a really naive take that is contradicted by - oh god, everything. Can someone know about enterprise Linux and still be this naive?
The responsibility to figure out the dependencies and packaging for distros, and then maintain those going forwards, should not be placed on the developer. If a developer wants to do that, then that’s fine - but if a developer just wants to provide source with solid build instructions, and then provide a flatpak, maybe an appimage, then that’s also perfectly fine.
In a sense, developers shouldn’t even be trusted to manage packaging for distributions - it’s usually not their area of expertise, maintainers of specific distributions will usually know better.
While I agree that developers (like myself) are not necessarily experts at packaging stuff, to conclude that it’s fine for a developer to provide only a flatpak is promoting shitty software. Whether software should run in a jail or within user space is a decision that - for most use cases - should be made by the user.
There is absolutely no reason not to provide software as a tar.gz source code archive with a proper makefile & documentation of dependencies - or automake configuration if that’s preferred.
From that kind of delivery, any package maintainer can easily build a distro-package.
I think you’re actually agreeing with me here. I was disputing the claim that software should be made available in “a native package format”, and my counterpoint is that devs shouldn’t be packaging things for distros, and instead providing source code with build instructions, alongside whatever builds they can comfortably provide - primarily flatpak and appimage, in my example.
I don’t use flatpak, and I prefer to use packages with my distro’s package manager, but I definitely can’t expect every package to be available in that format. Flatpak and appimage, to my knowledge, are designed to be distro-agnostic and easily distributed by the software developer, so they’re probably the best options - flatpak better for long-term use, appimage usable for quickly trying out software or one-off utilities.
As for tar.gz, these days software tends to be made available on GitHub and similar platforms, where you can fetch the source from git by commit, and releases also have autogenerated source downloads. Makefiles/automake isn’t a reasonable expectation these days, with a plethora of languages and build toolchains, but good, clear instructions are definitely something to include.
Makefiles/automake isn’t a reasonable expectation these days, with a plethora of languages and build toolchains, but good, clear instructions are definitely something to include.
As for the Makefiles, I meant that for whatever build toolchain the project uses - because the rules to build a project are an essential part of the project, linking the source code into a working library or executable. Whether it is CMake, or GNU Make, or whatever else there is - that’s not so important, as long as those build toolchains are available cross-platform.
I think what is really missing in the open source world is a distribution-agnostic standard for describing application dependencies, so that package maintainers can auto-generate distro packages with the distribution-specific dependencies based on that “dependencies” file.
Similar to Debian dependencies
Depends: libstdc++6 (>= 10.2.1)
but in a way that identifies code modules, not packages, so that distributions that bundle software together differently will still be able to identify it - findPackageFor( dependency ), so to speak.
I would really like to add this kind of info to my projects and have a tool that can auto-build a repo-package from those.
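Something like this, maybe (an entirely hypothetical format - no such standard exists, and the module names are made up):

# dependencies.yml - hypothetical distro-agnostic dependency manifest
dependencies:
  - module: cpp-stdlib      # abstract module identity, not a package name
    min-version: "10.2.1"
  - module: zlib
    min-version: "1.2"

A packaging tool on each distro would then map each module to its local package name (libstdc++6 on Debian, for instance) and emit the distro-specific Depends line.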
Nix: you package it yourself and do a pull request
Sadly, many flatpaks don’t even work on NixOS properly because of assumptions about the file structure or similar
Exactly. And even if one doesn’t know how to package it, they can just open a request issue.
laughs in appimage.
Bottles’ developers disagree with this meme
I haven’t been able to use Bottles for months due to their Flatpak monogamy policy :/
…explain? It literally has Flatpak as first-class support, i.e. it’s guaranteed and only guaranteed to work on Flatpak
Because I use it from the AUR…
Try using the Flatpak
If you really hate Flatpak, just make an Arch distrobox and download off the AUR. Or install Nix or something.
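Something like this works (the image tag and the choice of AUR helper are just examples):

# Create an Arch container and hop in
distrobox create --name arch --image docker.io/library/archlinux:latest
distrobox enter arch

# Inside the box: bootstrap an AUR helper
sudo pacman -S --needed base-devel git
git clone https://aur.archlinux.org/yay-bin.git
cd yay-bin && makepkg -si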
I do sort of wish Nix was a more popular distro agnostic solution
Install Gentoo and put the package on GURU, it’s really easy (and .ebuild > PKGBUILD)
Or just use Arch… only for half of your AUR packages to be broken and end up still using flatpaks anyways.
That’s what I’ve done with my deck. Some things just aren’t available through discover, and the Firefox build on there has behavior that I don’t like or know how to correct. Distrobox gives me access to the Arch repos + AUR with persistence that you can’t get on SteamOS without it.
SteamOS is an arch derivative, so you could also just install arch, add the SteamOS repos, and set the steam UI in gamescope to launch on login
If I can choose between flatpak and distro package, distro wins hands down.
If the choice then is flatpak vs compile your own, I think I’ll generally compile it, but it depends on the circumstances.
Why?
deleted by creator
Because it’s easier to use the version that’s in the distro, and why would I need an extra set of libraries filling up my disk?
I see flatpak as a last resort, where I trade disk space for convenience, because you end up with a whole OS’s worth of flatpak dependencies (10+ GB) on your disk after a few upgrade cycles.
I mean it’s 2024. I regularly download archives that are several tens or even over 100 GB and then completely forget they’re sitting on my drive, because I don’t notice it when the drive is 4TB. Last time I cared about 10GB here and there was in the late-2000s.
Great that you have 4 TB on your root partition - then by all means use flatpak.
I have 256 GB on my laptop; as I recall, I provisioned about 40-50 GB for root.
I’m sorry. I didn’t realize people were still regularly using such constrained systems. Honest. I’ve homebuilt my PCs for the last 15 years.
deleted by creator
🤣
Why not upgrade your hdd?
flatpak has dedup, so no
Yep, that’s all well and good, but what flatpak doesn’t do automatically is clean up unused libs/dependencies; over time you end up with several versions of the same libs. When apps are upgraded they get the latest version of their dependencies and leave the old ones behind.
Is compiling it yourself with the time and effort that it costs worth more than a few GB of disk space?
Then your disk is very expensive and your labor very cheap.
For a lot of projects, “compiling yourself”, while obviously more involved than running some magic install command, is really not that tedious. Good projects have decent documentation in that regard and usually streamline everything down to a few things to configure and be done with it.
What’s aggravating is projects that explicitly go out of their way to make building them difficult, removing existing documentation and helper tools and replacing them with “use whatever we decided to use”. I hate these.
I should have noted that I’ll compile myself when we are talking about something that should run as a service on a server.
99% of the time it’s just “make && sudo make install” or something like that. Anything bigger or more complicated typically has a native package anyway.
They didn’t say anything about compiling it themselves, just that they prefer native packages to flatpak
edit: I can’t read
2 comments up they said
If the choice then is flatpak vs compile your own, I think I’ll generally compile it, but it depends on the circumstances.
TEN WHOLE GIGABYTES!! OMG WHAT ARE WE TO DO??
10 out of 40 is 25%
10 out of 4000 is 0.25%
I don’t know what dependencies he has, but my 3-year-old system that is constantly being updated is full of flatpaks, and all of the dependencies combined are only around 3 GB. People see 1 GB of dependencies and lose their minds.
Stubbornness
Based
I change my opinion depending on which app it is. I use KDE, so any KDE app will be installed natively for sure, for perfect integration. Stuff like Grub Customizer etc., all native. Steam, Lutris, GIMP, Discord, Chrome, Firefox, Telegram? Flatpak, all of those. They don’t need perfect integration, and I prefer the stability, easy upgrades, and ease of uninstall of Flatpak. Native is used when OS integration is a must, Flatpak for everything else. Especially since sometimes the distro’s package is months or years old… preferring distro packages for everything should be a thing of the past.
I’m 100% in this camp.
I don’t wanna be that guy, but someone has to say it: Nix Flakes
I have both Nix and Flatpak lol. Different use cases: Flatpak for stuff that I would rather have sandboxed (browsers, games), Nix for stuff that I would rather have integrated into the system (command line tools, etc). Tho I still have to learn about flakes; right now I’m just using nix-env for everything like a caveman lol
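For reference, the two styles side by side (ripgrep is just an example package, and the flakes variant needs the experimental nix-command and flakes features enabled):

# Old-style imperative install from a channel
nix-env -iA nixpkgs.ripgrep

# Flakes-era equivalent
nix profile install nixpkgs#ripgrep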