I’ll start: the most important thing is not the desktop, it’s the package manager.
how cool and sexy and irresistible I became
to the right people ^^
Distrobox exists, so one is not bound to use a specific distro just because it packages some of the apps/binaries they require.
So I’m enjoying immutable Fedora with AUR support. Cannot be overstated…
Yeah, Arch Linux is beautiful as a container OS. I use it all the time.
Am I reading the readme correctly in that I can run apt-get within distrobox on Fedora, and not be limited to dnf packages?
You can install Distrobox on Fedora (or any of the distros that support it), create a Debian distrobox on your Fedora install, and within the Debian distrobox you can use apt-get to install whichever Debian package you like. Or…, you could make an Arch distrobox and even install stuff from the AUR. Or really any package from any of your favorite distros, as long as it’s supported.
And it’ll be segregated from the base system and from other containers, like toolbox installs are?
Exactly. It’s even possible to segregate it beyond what Toolbx has been able to do (at least since the last time I checked) in that you can define another folder/directory as your HOME directory within the distrobox.
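In practice it’s only a couple of commands; roughly something like this (the container name, image tag, and home path here are just placeholders):
# create a Debian box on top of Fedora, optionally with its own HOME directory
distrobox create --name deb-box --image debian:12 --home ~/boxes/deb-box
# jump inside and use apt-get like on any Debian system
distrobox enter deb-box
sudo apt-get update && sudo apt-get install htop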
Amazing!! Yup. Looks like this is getting installed on my Fedora tonight. Thanks!!
Glad to be of help 💙 ! Feel free to inquire if you so desire 😉 .
I appreciate that!
Yes!
Installed distrobox on NixOS because I was worried about being limited to only nixpkgs, and I have not touched it once lol
Same goes for the windows VM except for the time I needed to run excel macros for work
Worried about being limited to only the biggest selection of packages available. Does not compute.
I’d never heard of nixpkgs before so thought it was some small niche thing
I did on my NixOS; there was a package in nixpkgs that was outdated, so I had the opportunity to use distrobox for that, at least temporarily until they update the package.
That’s been a fear of mine moving to NixOS. Glad to know it’ll cover most of my software needs.
Here’s a graph, it should be fine for your package needs: [graph comparing package counts across distro repositories]
This is not totally accurate, because nixpkgs also packages some things that wouldn’t be in a system package manager, like Python and Haskell packages. Excluding those, it’s pretty much the same as the AUR.
Proper drive mounting process. When I finally learned, it was a life changer.
Please explain. You make me wonder if I’m doing it wrong.
That even though you are running an LTS version of Ubuntu (e.g. Ubuntu 22.04), some packages that have arrived over a year ago on e.g. 23.10 will never arrive on 22.04.
Example: i3-wm 4.22 or up (https://packages.ubuntu.com/search?keywords=i3&searchon=names&suite=jammy&section=all).
This is mine. This is fine for my server, where I want it to be mega stable and always up. I can always add other repos for the few packages that I need to be up to date for whatever reason (podman for me recently). But my daily driver needs quicker updates than that.
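For what it’s worth, adding one of those extra repos on an apt-based LTS usually looks roughly like this (the URL, keyring path, and suite below are placeholders, not a real repo; follow the project’s own install docs):
# fetch the repo's signing key into a dedicated keyring (placeholder URL)
curl -fsSL https://example.com/repo/Release.key | sudo gpg --dearmor -o /usr/share/keyrings/example-archive-keyring.gpg
# register the repo, pinned to that keyring
echo 'deb [signed-by=/usr/share/keyrings/example-archive-keyring.gpg] https://example.com/repo jammy main' | sudo tee /etc/apt/sources.list.d/example.list
sudo apt-get update && sudo apt-get install podman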
That’s the whole point of an LTS distro. And it’s why non-rolling distros for desktop OSes make no sense
That after getting used to Linux I will hate to be forced to use less free operating systems.
Once you go FOSS, you never go back.
This so much. I absolutely cannot stand Windows anymore.
I could but I always get a feeling like I’m being monitored constantly. Like imagine being at work and if you don’t move your mouse for a few minutes you’d get a warning or something. Or remember using a computer at school where the teacher could literally see the screen of every student, yeah like that.
Windows is so bad
Don’t get an Nvidia gpu
Can confirm. Don’t do it guys. Hardware acceleration for video decoding just doesn’t work for me.
This is such an underrated comment. Linux hates, hates, hates NVidia. I’ve spent ~24 hours trying to get two applications running, both of which consistently complain about my GPU and Hardware Acceleration.
Linux hates, hates, hates NVidia.
It’s the other way around, actually.
It was ~20 years ago so my advice to myself then would be pretty irrelevant now. I messed up my laptop, and my advice then would have been don’t start with a laptop (because laptop compatibility was lacking back then compared to desktop, different times).
Laptop compatibility still sucks at times, especially with weird configurations like AMD APU plus Nvidia GPU laptops… or maybe it’s just my skill issue.
NVIDIA’s contempt for the Linux community is legendary. Definitely not a skill issue.
Skill issue
Nah, but seriously, Nvidia loves to make it difficult and Linux doesn’t make it any easier. It’s like an unstoppable force meeting an immovable object.
There isn’t a hardware panel, nor a proper task manager, nor a GUI registry editor.
Well Linux doesn’t have a registry, so an editor would also not exist, to be fair.
dconf editor is kinda like regedit for GNOME apps ig?
In a hand wavy way, yes. You are just editing the settings of one suite of software, not really an OS “registry”. Closest to that in Linux is editing /etc, but even then, not all software is configured there.
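If you’d rather poke at that same database from a terminal, gsettings and dconf hit the same keys as the GUI editor; the schema/key below is just one example:
gsettings get org.gnome.desktop.interface color-scheme        # read a single key
gsettings set org.gnome.desktop.interface color-scheme 'prefer-dark'
dconf dump /org/gnome/desktop/interface/                      # dump a whole subtree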
True
Not for long if Lennart has anything to say about it, I’m sure.
I disagree on the task manager. I like the KDE Plasma System Monitor application, for instance. Very convenient way to send SIGTERM or SIGKILL.
Agreed, and if you’re not on KDE then htop will do just fine.
htop
Or even better, btop
There is no registry in Linux so there can’t be a registry editor.
Hardware panels and task managers do exist (and they come in more windows-like distros), they’re just different to Windows ones. I do concede that hardware management in Windows is much easier.
Task manager for Windows absolutely blows though. It doesn’t show real data, just estimates that sometimes are wildly wrong.
GNOME System Monitor and Btop++ are excellent task managers.
Unmounting removable drives after writing to them is much more important than on Windows.
How so?
On Windows, I often simply took out the USB drive without “safely removing” it. The data was there 99% of the time. On Linux, if I’m not mistaken, unmounting the drive before disconnecting is what actually writes data to it.
That can be configured with the sync option of mount. Or just run sync before pulling it.
I don’t think Linux literally waits for you to unmount the drive before it decides to write to it. It looks like that because the buffering is completely hidden from the user.
For example say you want to transfer a few GB from your SSD to a slow USB drive. Let’s say:
- it takes about half a minute to read the data from the SSD
- it takes ten minutes to write it to the USB
- the data fits in the spare room you have in RAM at the moment
In this scenario, the kernel will take half a minute to read the data into the RAM and then report that the file transfer is complete. Whatever program is being used will also report to the user that the transfer is complete. The kernel should have already started writing to the drive as soon as the data started being read into the RAM, so it should take another nine and a half minutes to complete the transfer in the background.
So if you unmount at that point, you will have to wait nine and a half minutes. But if you leave it running and try to unmount ten minutes later it should be close to instant. That’s because the kernel kept on writing in the background and was not waiting for you to unmount the drive in order to commit the writes.
I’m not sure but I think on Windows the file manager is aware of the buffering so this doesn’t happen, at least not for so long. But I think you can still end up with corrupted files if you don’t safely remove it.
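You can actually watch this happen: the Dirty/Writeback counters in /proc/meminfo show how much data is still waiting to be flushed, and a plain sync blocks until they drain. A rough sketch (the mount path and device name are just examples of where udisks puts removable drives):
cp big.iso /run/media/$USER/SLOWUSB/          # returns quickly, data is only buffered in RAM
grep -E 'Dirty|Writeback' /proc/meminfo       # how much is still waiting to hit the drive
sync                                          # blocks until all of it is actually written
udisksctl unmount -b /dev/sdb1                # after that, unmounting is near-instant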
Really? I’ve literally never done this but I suppose I really only use my USB for dd’ing a distro.
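For that use case the flush can be part of the command itself; a typical invocation looks something like this (replace /dev/sdX with the actual stick, and note this overwrites it):
sudo dd if=distro.iso of=/dev/sdX bs=4M status=progress conv=fsync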
I do not think this is the case. In GNOME Disks you can disable write caching for removable storage, exactly the same way as on Windows.
Also, Thunar File Manager has an option to write files partially when copying/moving and, when moving, to only remove the file from the source directory once the copy has succeeded. I find it a remarkable safeguard against data corruption for large file transfers.
Yeah, but you just described two features in specific apps that are not enabled by default.
I mean, even SATA over UAS is a pain with Linux, since the new implementation sacrifices SMART data for faster read/write speeds, and you have to fall back manually to the old driver to read SMART data on external HDDs. On Windows, you just use CrystalDiskMark and it works.
Linux needs you to do a little work here and there for such things. I do not really eject everything safely on Linux. The feature on Thunar is handy.
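For anyone hitting the same thing, the manual fallback being described is a usb-storage quirk that tells the kernel to skip UAS for one specific bridge; the vendor:product ID below is a placeholder you would read off lsusb:
lsusb                                                   # note the bridge's ID, e.g. 1234:5678
echo 'options usb-storage quirks=1234:5678:u' | sudo tee /etc/modprobe.d/ignore-uas.conf
# 'u' = ignore UAS for that device; rebuild the initramfs and reboot for it to take effect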
It’s pretty important on Windows too, though. Always “eject” or “safely remove hardware” before unplugging!
Not in Windows 10/11. You can still “eject” if it makes you feel better, but it’s basically redundant. They reworked the support for removable media so they are always ready to remove except during active read/write operations.
Read/write operations can happen in the background at any moment as long as the drive is mounted, so that’s not terribly comforting.
Anyway, Windows has always avoided deferring writes on removable media, for as long as it’s been capable of deferring writes at all. That’s not new in Windows 10.
Linux has a mount option, sync, to do the same thing. Dunno if any desktop environments actually use it, but they could. Besides being slower, though, it has the downside of causing more write operations (since they can’t be batched together into fewer, larger writes), so flash drives will wear out faster. I imagine Windows’ behavior has the same problem, although with Windows users accustomed to pulling out their drives without unmounting, I suppose that’s the lesser of two evils.
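For reference, using the option looks like this (the device and mount point are just examples):
sudo mount -o sync /dev/sdb1 /mnt/usb       # every write goes straight to the stick
sudo mount -o remount,sync /mnt/usb         # or add it to an already-mounted one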
No
A Raspberry Pi or other NUC is a great way to begin.
By the time you’ve dressed out an RPi to be halfway usable, you’ve spent about as much as a decent NUC. And all you have to show for it is a slow-as-mud SD card, hardly any video acceleration, a USB stack that only crashes sometimes, a busy OOM killer, and no software.
Get an N95-based NUC. A Beelink with 8/256 runs about $150, and it just works. (Well, you might need pcie_aspm=off.)
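If anyone does need that parameter, the usual route is to append it to the kernel command line in GRUB and regenerate the config (file paths and the regen command vary a bit by distro):
sudoedit /etc/default/grub
# set: GRUB_CMDLINE_LINUX_DEFAULT="quiet splash pcie_aspm=off"
sudo update-grub        # Debian/Ubuntu; on Fedora-likes: sudo grub2-mkconfig -o /boot/grub2/grub.cfg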
Yeah, the RPi is just “cookbooked” due to its fixed hardware.
When you’re just trying to get work done: pick a solid, well-tested high-profile distribution like Fedora, Pop!_OS, or Debian (or Ubuntu). Don’t look for the most beautiful, or most up-to-date, or most light-weight (e.g. low CPU usage, RAM, etc.). Don’t distro hop just to see what you’re missing.
Of course, do those things if you want to mess around, have fun, or learn! But not when you’re trying to get work done.
When you’re just trying to get work done: pick Windows.
I’ve gone Arch for this year’s linux adventure. It has been the most stable I’ve ever tried.
Is Pop!_OS really that popular? I started using Linux about 10 years ago and it wasn’t around then, so I never tried it in my distro hopping days. I see it’s developed by System76 so I can see why you’d choose it on their hardware, but is there any point doing that on other hardware?
The System76 engineers are culturally very aligned with the core values of freedom of choice, customization, etc. They build software with the larger ecosystem in mind, and in fact, I’ve never seen them build something only for their own hardware (even things that could have been just for their own hardware, like the System76 power management system, have extensibility built in).
That said, they also balance this freedom with a set of “opinionated” good choices that they test and support. If you care a lot about stability, it’s easy to go along with the “happy path” and get a solid, up-to-date system delivered frequently. Every time they upgrade new features or kernel, they go through a systematic quality assurance process on multiple machines–including machines not of their own brand. (I’ve contributed software/PRs to their codebase, and they’ve always sent it through a code review and QA process).
Idk, it seems to be picking up steam. It’s what I use unless I’m trying to use something super lightweight.
For me it has the stability of Ubuntu without having to use Ubuntu.
Haven’t tried Debian yet though.
the stability of Ubuntu
That’s not really a selling point.
I’m curious, what do you dislike about Ubuntu?
Snaps are basically Ubuntu’s private app store, and flatpaks (the method of app distribution supported by almost every other distro) are not supported; there’s no tiling WM built in for large monitors; the kernel is not kept up to date (i.e. improved hardware coverage and support); and some things like streaming with OBS Studio and Steam don’t work out of the box (this may have changed, but it was the case for me about a year ago).
Interesting, thanks. I had a feeling snaps would be in the list!
There’s a small amount of telemetry going on.
Also, Pop!_OS makes running an Nvidia GPU less painful.
- tab completion in bash
- vim
- zfs
- git (though it didn’t exist then)
I wouldn’t use ZFS. Too risky. If a new kernel comes along and ZFS fails to build or something, my system will be unbootable.
Btrfs scratches my copy-on-write/checksum/integrated RAID itch well enough anyway.
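For anyone who does run ZFS through DKMS, a quick sanity check before rebooting into a new kernel looks something like this (assuming the dkms and zfs packages are installed):
dkms status zfs                 # was the module rebuilt for the new kernel?
modinfo zfs | grep -i version   # is a loadable module visible to the running system?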
Nix and Ubuntu have in-kernel support. Void’s module build system also prevents this situation. I use Nix and Void, so I have never faced this problem.
A real OG.
I’ve been fuckin with btrfs so far, haven’t tried zfs yet. Anything cool compared to btrfs?
I gave up on btrfs when I couldn’t recover from a full-disk situation (years ago, may be better now). But zfs tooling is so good, reliable, and intuitive that I wouldn’t want to switch anyway.
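The snapshot workflow alone gives a taste of that; pool and dataset names below are just examples:
zfs snapshot tank/home@before-upgrade     # instant, nearly free snapshot
zfs list -t snapshot                      # see what you have
zfs rollback tank/home@before-upgrade     # undo everything since the snapshot
zfs send tank/home@before-upgrade | ssh backuphost zfs recv backuppool/home   # replicate elsewhere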
In contrast to btrfs it doesn’t break your data. Everyone learns the hard way not to use btrfs…
Btrfs was the best filesystem I had used up until it corrupted my data.
Not breaking your data, that is a pretty cool feature
It was so long ago there was nothing to know, really. Most pages looked fine in links, you had irssi for your social networks, mplayer for your movies (still great), mutt for email, vim for programming… It kind of just worked.
That’s pretty much where I’ve landed. Except I use firefox.
I still use mplayer, but now it’s neovim with lots of plugins. Modern IDEs are much different today. Mutt is hard to use in the time of HTML emails. I also use lots of graphical apps like Signal, Spotify, Steam, or LibreOffice that didn’t exist 20 years ago. I think getting it all to work is a bit more complicated now. Maybe I just use the computer for a lot more things.
That, just like Windows and Mac, if something doesn’t support the platform, prepare for headaches. Unlike Windows and Mac, you can get things that aren’t supposed to run on Linux to run, thanks to great tools like Wine, Proton, and even Waydroid. But if you wanna avoid headaches, just stick with what’s supported for the most part.