I see Docker mentioned in every other thread and was wondering how useful it is for non-development things, and if so, what those are.
Two things, one you care about and one you might not.

The one you care about: you can set up a service in isolation, test it, make sure it works, and switch over to it once you're sure, with almost no downtime. This is important for things you actually need to use; once you break your primary email server, you will understand. Also, less important: you can set up a service on, say, a VM at home and move it to a VPS without having to transfer an entire disk image, and it will work the same.

The one you don't care about: that last bit about moving servers around matters mostly to cloud providers, who spin these things up and down all the time.
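To make the cut-over concrete, here's a rough sketch (the container names, ports, and image are made up for illustration):

```bash
# Run the new version next to the old one and smoke-test it on a spare port.
docker run -d --name app-new -p 8081:80 example/app:2.0
curl -f http://localhost:8081/   # the old container still serves :8080 meanwhile
docker rm -f app-old             # happy with the new one? retire the old container
```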
I don't get the question… Docker is awesome for development, but for production too. It saves you the hassle of configuring a virtual machine or server from scratch, since you can use prebuilt minimal images of the software you need. If you get into trouble, restoring things is easier than on a whole compromised system. In the vast majority of cases, an update is just changing a tag in a docker-compose.yaml file (see the sketch below). You get better resource usage than virtual machines, and so on. I don't use Docker for development at all; I use it for production. And when you no longer need a service, you can just delete it and the system stays clean, without orphaned files.
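A minimal sketch of that workflow (the image and paths are just examples):

```yaml
services:
  app:
    image: nextcloud:28        # an update is bumping this tag...
    volumes:
      - ./data:/var/www/html   # ...while your state lives safely outside the container
```

Then `docker compose pull && docker compose up -d` and you're on the new version.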
Wondering too: since Docker has a rootless mode, is there a reason to use Podman?
They have different architectures, so it comes down to preference.
Docker runs a daemon that you talk to to deploy your services. Podman has no daemon; you either use the podman command directly to deploy services, or use systemd to integrate them into your system.
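A hedged example of the daemonless-plus-systemd flow (the image is just a placeholder):

```bash
# Podman launches the container itself; no daemon in between (rootless works too).
podman run -d --name uptime -p 3001:3001 docker.io/louislam/uptime-kuma
# Hand it over to systemd so it survives reboots:
podman generate systemd --new --name uptime > ~/.config/systemd/user/uptime.service
podman rm -f uptime              # --new units recreate the container themselves
systemctl --user daemon-reload
systemctl --user enable --now uptime.service
```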
For me the advantage of Docker is that a random update to my system is unlikely to break my self-hosted services. It simplifies setting up the services as well, but the biggest advantage is that it's generally more stable.
Aside from the technical explanation that others have given, here’s how I use Docker:
MeTube to rip videos and stuff easily. Just plug in a link and most times it'll work; it's a web UI for yt-dlp, which publishes a list of all the supported sites.
I use Sonarr/Radarr and qBittorrent behind gluetun to search for and download TV and movies that I watch on Plex (there's a rough sketch of that wiring after this list).
I host my own Immich server that will automatically back up my photos from my phone just like Google Photos, except I own it all and it’s all kept private. It has its own machine learning and facial recognition, so I can search for “dog” and get all the pictures of my dogs, or I can search by person.
I use Docker for all this because each app ships as a prepackaged image that runs in its own little container. It's super easy to get into once you figure out some of the basics.
Another great benefit of these containers is that you can move them to another system if needed. Just copy the config and data over to the new system, point the container at them, and it'll pick up where it left off.
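The transfer really is that small, assuming all state sits in bind mounts next to the compose file (the host name and paths are examples):

```bash
rsync -a ~/services/myapp/ newhost:~/services/myapp/    # compose file + data dir
ssh newhost 'cd ~/services/myapp && docker compose up -d'
```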
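And for the curious, the VPN part of the stack above is usually wired up by giving the downloader the VPN container's network. A rough, incomplete sketch (provider settings are yours to fill in):

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=...      # provider + credentials go here
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"   # all torrent traffic rides the VPN container
```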
My company deploys a lot of cell modems. Some of them support containers. It’s really nice to deploy everything we need in one piece of equipment, as opposed to 2 or more, for a very simple application.
Several other pieces of network equipment support it now as well. A SIEM can run a remote node directly on a switch.
Containers, the concept that Docker implements, let app developers ship a self-contained environment for distribution. For devs that means consistent deployments across environments, which in turn means sysadmins can deploy each of these apps as a fully isolated unit.
With that, you get really clean installs, updates, and uninstalls, and your deployments are done with a well-defined, declarative definition file that can also handle multi-service dependencies (a la Docker Compose/K8s).
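A minimal sketch of such a definition file (the names and images are placeholders):

```yaml
services:
  app:
    image: ghcr.io/example/app:1.0
    ports:
      - "8080:8080"
    depends_on:
      - db                       # compose brings the database up first
  db:
    image: postgres:16
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:                       # named volume: survives container recreation
```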
It’s useful for every service you want to host (on a server).
It’s so useful you see it mentioned on every other thread
- When you're prohibited from using NixOS
- When there's no package for it in nixpkgs and you're too lazy to package it yourself
https://lemmy.world/post/12995686 was a recent question and most of the answers will basically be duplicates of that.
One slight addition: "Docker" is just one implementation of OCI containers. It's the one that initially broke through in the hype, but you can just as easily use any other (Podman being a popular one), and basically all the benefits people ascribe to "Docker" apply to those as well.
So you might (as I do) have some dislike for docker (the product) and still enjoy running containers.
In simple terms, it’s like a VM for an application. You set it up with the right dependencies and your application will “just work” on it, without having to deal with other applications existing alongside it.
What makes it better than a VM is that it's much lighter and faster: it's not virtualization, rather it interfaces with kernel features (namespaces and cgroups) that isolate the processes and files from the rest of the system.
Docker also provides a bunch of tools that create this environment automatically and allow some controlled escaping into the host, such as binding ports and sharing data with the host's file system.
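For instance, the two most common escapes in one `docker run` (the image and paths are arbitrary examples):

```bash
# -p publishes container port 80 on host port 8080; -v mounts a host directory
# into the container so the data outlives it.
docker run -d -p 8080:80 -v /srv/www:/usr/share/nginx/html:ro nginx:alpine
```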
Once this environment is created, it can be shared with users as a single downloadable bundle, called an image. This makes it really easy to download and run an application without having to prepare your system with the right dependencies and files.
Nothing is free though, and the cost here is more disk space and some performance overhead, although it is close to native speed.
The thing with self-hosting is that in most cases you want to set and forget, and that means you want as little going wrong as possible. To ensure that, you need a way to make sure other things can't fuck with what you're hosting: that's what a container is for. The trade-off is disk space, but that's fine on a server, unlike on a desktop (but let me not start my rant about the stupidity of Snap and Flatpak). Anyway: thanks to containers there are no external factors and basically everything runs in its own world, which means you can always delete, restore, and edit without anything else being affected.
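In practice, that delete/restore cycle is tiny, assuming a compose-managed service with its data in a bind mount:

```bash
docker compose down    # remove the container; the host and your data dir are untouched
docker compose up -d   # recreate it fresh from the image
```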
In the context of self-hosting, it means easier, cleaner installs, and it keeps different poorly packaged projects from interfering with each other.